More lanes, more wires, higher complexity, higher cost.
Not surprising at all given that the same vendor thought it was a fine strategy to market a midrange dedicated GPU with only 4x PCIe 4.0 lanes. They are banking heavily on the trend of turning PCIe into a serial bus of sorts.
> on the trend of turning PCIe into a serial bus of sorts.
PCIe switches used to be far more common before certain things happened in the market and they became uber-expensive. I have a few NICs where, instead of using multi-port controllers, they just used PCIe switches... arguably, I/O hubs have been the cheapest PCIe switches around for some time now...
> They are banking heavily on the trend of turning PCIe into a serial bus of sorts.
It was always a serial bus? PCIe 1.0 was just too slow per lane to handle everything without x16 links. In fact, x32 slots/cards existed in the server market to make up for the missing bandwidth.
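To put rough numbers on that (a back-of-the-envelope sketch using commonly quoted per-lane rates; real throughput also depends on protocol overhead):

```python
# Rough per-lane, per-direction PCIe throughput after encoding overhead
# (PCIe 1.0 uses 8b/10b, PCIe 5.0 uses 128b/130b); figures are approximate.
PCIE1_LANE_GBPS = 0.25   # PCIe 1.0: 2.5 GT/s -> ~250 MB/s per lane
PCIE5_LANE_GBPS = 3.94   # PCIe 5.0: 32 GT/s -> ~3.94 GB/s per lane

for width in (1, 16, 32):
    print(f"PCIe 1.0 x{width}: ~{PCIE1_LANE_GBPS * width:.2f} GB/s")

# A single PCIe 5.0 lane comfortably exceeds an entire PCIe 1.0 x16 link:
print(f"PCIe 5.0 x1:  ~{PCIE5_LANE_GBPS:.2f} GB/s")
```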
This only works if all devices are PCIe 5, though. There are still a lot of PCIe 3 or even PCIe 2 devices whose total bandwidth would fit in very few PCIe 5 lanes, but which still need a lot of physical lanes because they only run at their older per-lane speed.
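A small sketch of that mismatch, using approximate per-lane rates and a hypothetical PCIe 3.0 x4 device (say, an older NVMe SSD or NIC) as the example:

```python
# Approximate per-lane, per-direction throughput in GB/s after encoding
# overhead; the exact values are illustrative.
LANE_GBPS = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.97, "5.0": 3.94}

# Hypothetical older device: a PCIe 3.0 x4 SSD or NIC.
device_gen, device_width = "3.0", 4

needed = LANE_GBPS[device_gen] * device_width      # ~3.9 GB/s peak
equivalent_gen5_lanes = needed / LANE_GBPS["5.0"]  # ~1.0 lane

print(f"PCIe {device_gen} x{device_width} device peaks at ~{needed:.1f} GB/s,")
print(f"about {equivalent_gen5_lanes:.1f} PCIe 5.0 lane(s) worth of bandwidth,")
print(f"yet it still links at {device_gen} speed and ties up "
      f"{device_width} physical lanes.")
```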
Does an older device still occupy a whole lane right to the processor though? Or does the motherboard/chipset multiplex that onto a shared 5.0 lane into the CPU?
My read of it is the second, but I'd be happy to be shown wrong on this.
Only the CPU-direct lanes are PCIe 5.0 and, being direct, none of them get multiplexed. Either a device in a direct slot uses its assigned lanes at 5.0 speed, or that bandwidth simply goes unused.
Everything else that isn't CPU-direct (wired and wireless networking, SATA storage, the non-primary USB ports, and the 3 PCIe 4.0 add-on cards) hangs off the chipset(s), which connect back to the CPU over a single shared PCIe 4.0 x4 link. Those devices are multiplexed, but nothing there gets PCIe 5.0 bandwidth, and even a single busy slot in that collection can consume the entire uplink on its own.
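For a sense of scale (a sketch assuming a PCIe 4.0 x4 chipset uplink and one busy PCIe 4.0 x4 NVMe drive downstream; figures are approximate):

```python
# Approximate PCIe 4.0 per-lane, per-direction throughput after encoding.
LANE_GBPS_GEN4 = 1.97

uplink = LANE_GBPS_GEN4 * 4    # chipset-to-CPU link: ~7.9 GB/s, shared
one_ssd = LANE_GBPS_GEN4 * 4   # a single busy PCIe 4.0 x4 NVMe drive

print(f"Chipset uplink (PCIe 4.0 x4):    ~{uplink:.1f} GB/s")
print(f"One busy PCIe 4.0 x4 NVMe drive: ~{one_ssd:.1f} GB/s")
print(f"Left over for NICs, SATA, USB, other slots: "
      f"~{max(uplink - one_ssd, 0):.1f} GB/s")
```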