
And the lanes are cheap.

https://www.avadirect.com/Tomcat-HX-S8030-S8030GM2NE-AMD-SoC... This is a $400 standard ATX board with 80 PCIe 4.0 lanes (+16 more via risers). That's the equivalent of 160 PCIe 3.0 lanes.
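
Back-of-envelope on that equivalence (a rough sketch using the nominal per-direction lane rates):

    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~0.985 GB/s per direction
    # PCIe 4.0: 16 GT/s per lane, same encoding     -> ~1.969 GB/s per direction
    pcie3_lane_gbs = 8 * (128 / 130) / 8
    pcie4_lane_gbs = 16 * (128 / 130) / 8
    print(80 * pcie4_lane_gbs / pcie3_lane_gbs)  # 160.0 -> 80 gen4 lanes move as much data as 160 gen3 lanes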



The lanes come from the CPU you put into the board, not the board itself. Although yes, they are still cheap, at least if you go with something like the EPYC 7252 at ~$500 (which still has the full 128 PCIe 4.0 lanes).

That said, I have no idea how you would actually feed that many PCIe lanes with an EPYC 7252, but if you can pull it off, it's an insane $/lane value.


I know the lanes come from the CPU, but the board I linked is a very rare standard ATX board; almost all EPYC boards use proprietary form factors.

I presume you could build an insanely fast fileserver with a whole lot of M.2 disks and multiple 100GbE ports?
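
For a sense of scale, here's one hypothetical way the 128 CPU lanes could be split between drives and NICs (purely illustrative numbers):

    nvme_drives   = 24    # M.2/U.2 SSDs at x4 each (hypothetical build)
    lanes_per_ssd = 4
    nics          = 2     # dual-port 100GbE adapters at x16 each
    lanes_per_nic = 16
    print(nvme_drives * lanes_per_ssd + nics * lanes_per_nic)  # 128 -> the whole lane budget, nothing to spare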


That's right. You can also use a PCIe expansion chassis (there are already ones supporting PCIe 4.0), giving you plenty of space for dual-slot-width cards.


Sure, but I don't think the 8-core EPYC would actually keep up with that many NVMe drives, at least not if you tried to hit 24+ of them at once.

Linus Tech Tips tried this and had to upgrade the CPU from the 24-core EPYC to the 32-core to get performance up to what they wanted: https://youtu.be/xWjOh0Ph8uM

Maybe it was just a bad deployment, but there is real overhead in filesystems, especially with checksums, compression, redundancy, etc.
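
Rough math on why the CPU can become the bottleneck (every figure below is an assumption, not a benchmark):

    drives            = 24
    gbs_per_drive     = 3.5   # assumed sequential read per NVMe SSD, GB/s
    aggregate_gbs     = drives * gbs_per_drive   # ~84 GB/s of raw flash bandwidth
    gbs_per_core_cksm = 5.0   # assumed checksum/hash throughput per core, GB/s
    print(aggregate_gbs / gbs_per_core_cksm)     # ~17 cores busy just checksumming, before compression, parity, or the network stack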


It's possible to bypass the CPU in some cases using NVMe over an RDMA layer with InfiniBand. PCIe 4.0 dual-port 200Gbps InfiniBand/Ethernet adapters exist[1] which are compatible with this approach: https://store.mellanox.com/products/mellanox-mcx653106a-hdat...

[1] Although you can't saturate both ports through even a 16-lane PCIe 4.0 slot, which has only ~250 Gbps of throughput each way... which to me means that PCIe 4.0 is not at all too soon.
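
The ~250 Gbps figure falls straight out of the PCIe 4.0 signalling rate:

    lanes     = 16
    gt_per_s  = 16            # PCIe 4.0 raw rate per lane
    encoding  = 128 / 130     # 128b/130b line coding
    slot_gbps = lanes * gt_per_s * encoding
    print(round(slot_gbps))   # ~252 Gbps each way, so a dual-port 2x200Gbps card can't run both ports flat out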


Also, if you calculate the USD per 3.0-lane value, you will find you can go much, much higher in CPU prices. If you look at various combos, you will find it very rare for the server CPU+board price divided by the number of 3.0 lanes (or equivalent) to come out below $10.
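
Using the numbers from this thread as a sanity check (board and CPU prices approximate):

    board_usd  = 400   # Tomcat HX S8030
    cpu_usd    = 500   # EPYC 7252, approximate street price
    gen4_lanes = 128
    gen3_equiv = gen4_lanes * 2   # each 4.0 lane carries roughly two 3.0 lanes' worth of bandwidth
    print((board_usd + cpu_usd) / gen3_equiv)  # ~3.5 USD per 3.0-equivalent lane, well under the usual ~$10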


> That's the equivalent of 160 3.0 PCIe lanes

PCIe lanes don't work like that; lanes are the unit of allocation, and a lane is a lane regardless of the speed it runs at.

But yes, you can put more bandwidth down a 4.0 lane... if your device supports it. Most of the devices you will be putting on a budget home system don't support it.

It would, hypothetically, be more desirable to have 160 PCIe 3.0 lanes than 80 4.0 lanes. Of course there is no system with that many, but I'd take 128 3.0 lanes over 80 4.0 lanes for sure.


> PCIe lanes don't work like that, lanes are the unit of allocation, a lane is a lane regardless of the speed it runs at.

There's no need to be pedantic here. Just about nothing uses a single 3.0 lane, especially not in a system where you care about having a big lane count. For anything that was using 2-16 lanes, doubling the speed is basically the same as doubling the number of lanes, except with the extra benefit that the maximum allocation per device goes up.

> I'd take 128 3.0 lanes over 80 4.0 lanes for sure

Maybe you'd take that today. In a few years, when more devices support 4.0, that's not a great tradeoff, especially when you can put switch chips in front of your 3.0 devices to keep all your lanes saturated.


Dual socket Epyc 2 systems provide up to 160 PCIe lanes (and they’re PCIe gen4, too).


It’s just some copper and fiberglass. It’s the sheer number of transistors necessary for that many SerDes that costs $$$.



