Hacker News

Even 100GbE is hardly seen in company datacenters. Yes, it's cheaper than before, but still more expensive than 10G, and that extra cost is multiplied across every device that needs upgraded hardware to take advantage of it. Plus, most servers won't saturate a 10G link without tweaks on the setup. For 100G it's even worse; I think it will be a long time before we see it in datacenters outside of core links, or at companies with heavy bandwidth needs (storage, video).


I think the common knowledge that most servers can't saturate a 10Gb Ethernet link is no longer true. In my experience even saturating 25Gb links is rather easy to do when using 9000 byte MTU on mid-tier server hardware.

100Gb links do take some thought and work to saturate, but that's improving at a good rate lately so I expect it'll become more common rather soon.

The main downside to 25Gb and 100Gb links still seems to be hardware pricing. At these speeds, PCIe network adapters and switches get expensive rather quickly, which will make you seriously evaluate whether your situation really demands those speeds. 10Gb SFP+ and copper network adapters and switches are quite inexpensive now in 2022.
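To put rough numbers on why jumbo frames make high link speeds easier to saturate, here's a back-of-the-envelope sketch (not a benchmark). It assumes IPv4 + TCP with no options (40 bytes of headers); real connections often carry TCP timestamps and would lose a little more per frame.

```python
# Back-of-the-envelope: frames/sec and TCP goodput at 25 Gb/s line rate.
# Standard Ethernet adds 14 B header + 4 B FCS + 8 B preamble + 12 B
# inter-frame gap per frame, on top of the MTU-sized payload.

ETH_OVERHEAD = 14 + 4 + 8 + 12   # wire bytes per frame beyond the MTU
IP_TCP_HEADERS = 40              # assumed: IPv4 + TCP, no options

def frame_stats(link_gbps, mtu):
    """Return (frames per second, TCP goodput in Gb/s) at line rate."""
    wire_bytes = mtu + ETH_OVERHEAD
    frames_per_sec = link_gbps * 1e9 / 8 / wire_bytes
    goodput_gbps = frames_per_sec * (mtu - IP_TCP_HEADERS) * 8 / 1e9
    return frames_per_sec, goodput_gbps

for mtu in (1500, 9000):
    fps, goodput = frame_stats(25, mtu)
    print(f"MTU {mtu}: {fps / 1e6:.2f} Mpps, {goodput:.1f} Gb/s goodput")
```

The interesting part isn't the goodput (both MTUs get close to line rate) but the packet rate: 1500-byte frames mean the host has to handle roughly six times as many packets per second as with a 9000-byte MTU, and per-packet work is usually what limits the host.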


> In my experience even saturating 25Gb links is rather easy to do when using 9000 byte MTU on mid-tier server hardware.

But that's already tweaking the setup: it requires changes, testing, and verification, and can cause problems in downstream equipment. And for a lot of applications a 9K MTU still won't be enough to saturate the link, because they'll need NUMA awareness, or the NIC queues will need tuning to avoid imbalances, or the application simply isn't ready to send at that speed...

I'm not saying it can't be done, of course it can. But it isn't "plug a bigger card and it'll go faster".


> Plus, most servers won't saturate a 10G link without tweaks on the setup.

That doesn't seem right. When I got my first 10G server, it was running dual Xeon E5-2690 (either v1 or v2), and I don't recall needing to tweak much of anything. That was mostly a single large file downloaded over http, so not super hard to tweak anyway, but server chips are a lot better now than Sandy/Ivy Bridge. It could only get 9 Gbps out with https, but the 2690v4 could do 2x10G with https because of AES acceleration.
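Some rough math on the https point. The per-core AES-GCM throughput figures below are ballpark assumptions for illustration (hardware-accelerated vs. pure-software AES), not measurements from the setup described above:

```python
# Why AES acceleration decides whether https can run at line rate:
# compare how many cores the cipher alone would consume.
# Assumed ballpark figures: ~2.5 GB/s/core with AES-NI, ~0.1 GB/s/core without.

def cores_needed(link_gbps, cipher_gbytes_per_core):
    """Cores spent purely on bulk encryption to fill the link."""
    bytes_per_sec = link_gbps * 1e9 / 8
    return bytes_per_sec / (cipher_gbytes_per_core * 1e9)

print(f"2x10G with AES-NI:  ~{cores_needed(20, 2.5):.1f} cores on crypto")
print(f"2x10G software AES: ~{cores_needed(20, 0.1):.1f} cores on crypto")
```

With acceleration the cipher fits in about one core; without it, encryption alone would eat more cores than the dual-socket box has, which matches the observed jump from ~9 Gbps to 2x10G.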


> That was mostly a single large file downloaded over http, so not super hard to tweak anyway

Well, my point is that most servers don't just download single large files over HTTP. Even if you only look at storage servers, once you move to multiple files and connections you can easily run into issues and degraded performance if you don't prepare the system for the workload.


I can saturate a 10G link on a $600 desktop PC with a consumer-grade NVMe SSD... serious servers are capable of far more than that.
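A quick sanity check on that claim. The ~3.5 GB/s sequential-read figure below is an assumed typical number for a PCIe 3.0 x4 consumer NVMe drive, not a measurement of any particular disk:

```python
# A 10 Gb/s link drains payload at 10/8 = 1.25 GB/s.
# A consumer NVMe SSD (assumed ~3.5 GB/s sequential read) can feed it
# with room to spare, so storage isn't the bottleneck for large reads.

LINK_GBPS = 10
NVME_READ_GBPS = 3.5 * 8          # assumed sequential read, in gigabits/s

link_gbytes = LINK_GBPS / 8
print(f"10G link drains at {link_gbytes:.2f} GB/s")
print(f"SSD can feed {NVME_READ_GBPS / LINK_GBPS:.1f}x that")
```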



