I was under the impression that AM5 would support PCIe 5.0 instead of 4.0. I’m becoming less excited about this next generation from AMD than I was a few months ago.
It will support PCIe 5.0 on the slots directly connected to the CPU, but probably only on X670E motherboards. Note that there will be very few 5.0 devices on the market and you won't be able to notice the performance difference.
My interpretation is that the chipsets will have PCIe 5 connections as follows:
X670E: "Everywhere" according to the talk, which I interpret as:
- 16 lanes of PCIe 5 to the main PCIe slots
- 4 lanes of PCIe 5 to the primary NVMe slot (though I would not be surprised if some boards expose this as a physical x16 slot wired with 4 lanes)
- 4 lanes of PCIe 5 to the chipset (speculative)
For the regular X670:
- 16+4 PCIe 5 lanes to the "GPU" and "NVMe" slots, as for the X670E; perhaps only 1 NVMe slot, perhaps with an additional NVMe x8 slot when running 8/8/4 instead of 16/0/4.
- "only" PCIe 4 to the chipset
B650:
- Only the "NVMe" slot will have version 5
If I'm right, the Extreme chipset link will offer twice the bandwidth compared to AM4, which could allow it to fill some of the use cases currently covered by the non-Pro Threadrippers. If I'm wrong, then even the extreme X670E chipset will not be a good choice for those who depend on a lot of IO, compared to Threadripper or Intel.
Hopefully I'm right, and in that case I'm not sure we need an intermediate step between the 7950X and lower-end Threadripper Pro setups, at least not for connectivity and I/O. And those who actually need more compute than the 7950X will offer will probably not care that much about the price premium of Threadripper Pro vs regular Threadripper.
Saying there are few devices on the market yet is not helpful. You don't buy a PC for the next 6 months. In 1-3 years, when your PC will still be fine, it is very likely that more PCIe 5.0 devices will be out, and they probably won't be compatible with your MB.
I upgrade every couple of years to the best hardware on the market, but even for me PCIe 5 is just “nice to have”. No GPUs support it yet and rumors suggest that RTX40 will also be PCIe 4.
This has been clarified by the actual presentation: X670E gets PCIe 5 on all CPU links, X670 gets PCIe 5 on the first slot and storage, B650 gets PCIe 5 only on storage.
Of course, the chipset itself has nothing to do with what connectivity is provided to the PEG slots since those run straight to the CPU - this is market segmentation in action, the CPU simply won't turn on those feature-bits if you don't pay for the premium chipset.
Also, as noted, on AMD the chipset has nothing to do with anything anyway. It is just an IO expander; it doesn't boot the CPU or anything like that, and it is entirely out of the loop when the CPU is talking to devices attached to the direct PEG lanes.
I'd like room for addon cards with more NVME drives or existing RAID / JBOD bulk IO cards.
I'd be fine with PCI-E 4 across: x16 (gpu) + x16 (gpu/accessory) + x4 (nvme) + x4 (nvme) + x4 (their slow IO chipset).
PCI-E 5 is 2x the bandwidth, so maybe some higher end boards might offer more choices for splitting those lanes up at slower speeds that work well with older addin cards.
A 10Gbit Ethernet card (1.25GB/sec) is on the radar for the lifetime of these systems. A 4.0 x1 link is sufficient at ~1.97GB/sec.
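For reference, per-lane throughput roughly doubles each generation. A quick back-of-envelope sketch of the numbers above (assuming only the 128b/130b line encoding used since PCIe 3.0, and ignoring packet-level protocol overhead):

```python
# Approximate usable bandwidth per PCIe lane, ignoring packet/protocol
# overhead beyond the 128b/130b line encoding used since PCIe 3.0.
RAW_GT_S = {3: 8, 4: 16, 5: 32}  # transfer rate in GT/s per generation

def link_gb_s(gen: int, lanes: int) -> float:
    """Usable GB/s for a link: GT/s * (128/130 encoding) / 8 bits-per-byte."""
    return RAW_GT_S[gen] * (128 / 130) / 8 * lanes

ten_gbe_gb_s = 10 / 8  # a 10Gbit NIC moves at most 1.25 GB/s of payload
print(f"PCIe 4.0 x1: {link_gb_s(4, 1):.2f} GB/s")  # ~1.97 GB/s
print(f"10GbE fits in a 4.0 x1 link: {link_gb_s(4, 1) >= ten_gbe_gb_s}")
```

So even a single 4.0 lane has headroom over a saturated 10GbE port, and a 5.0 lane doubles that again.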
I'm worried too many boards will ship with two GPU slots (5.0? / 4.0 x8) and at most 2x NVME slots, and load _everything_ else on those x4 slots for the board chipsets, including accessory slots.
The PCIe root ports in CPUs usually don't support dividing up x16 links into anything smaller than x4 links. Even for dedicated PCIe switch chips, bifurcation down to x2 or x1 links is usually only found on the smaller switches that don't have multiple x16 links to begin with—though PCIe gen4 and gen5 have made x2 link width support show up for some of the big PCIe switches.
If you want to load up a bunch of low-speed devices (and 10Gbit/s is low-speed now), the chipset is the right place to do it. Consumer systems really are not going to be bottlenecked by lots of devices sharing one x4 uplink to the CPU.
#1 expected ASM4242 mux with 2x DP for APUs -- It'd be nice if these pins could export additional PCIe lanes for non-APU systems.
That 4.0 x4 set of lanes is expected to service up to...
* 4.0 x4 (bridged daisy)
* 4.0 x4 NVMe
* 4x SATA 6Gbit/sec
* USB 3.2 20Gbit
* 4x USB 3.2 10Gbit
* 4x USB 2.0
+
* 4.0 x4 (daisy to cards)
* 4.0 x4 NVMe
* 3.0 x1 2.5Gbit Ether
* 3.0 x1 + 2.0 USB WiFi
* 2x SATA 6Gbit
* USB 3.2 20Gbit
* 4x USB 3.2 10Gbit
* 4x USB 2.0
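To put a rough number on the contention: summing the nominal peaks of the ports listed for one of the two chips against its single 4.0 x4 uplink (figures are nominal device maximums, ignoring USB/SATA protocol overhead, and leaving out the x4 daisy-chain link to the second chip since that is a pass-through, not an endpoint):

```python
# Rough worst-case sum of one chipset chip's downstream ports (from the
# list above) vs. its single 4.0 x4 uplink. All figures in GB/s, nominal
# peaks, ignoring USB/SATA protocol overhead.
LANE_4_0 = 16 * (128 / 130) / 8  # one PCIe 4.0 lane, ~1.97 GB/s

downstream_gb_s = {
    "4.0 x4 NVMe":       LANE_4_0 * 4,   # ~7.88
    "4x SATA 6Gbit":     4 * 6 / 8,      #  3.00
    "USB 3.2 20Gbit":    20 / 8,         #  2.50
    "4x USB 3.2 10Gbit": 4 * 10 / 8,     #  5.00
    "4x USB 2.0":        4 * 0.48 / 8,   #  0.24 (480 Mbit/s each)
}
uplink_gb_s = LANE_4_0 * 4  # 4.0 x4 back toward the CPU, ~7.88
total = sum(downstream_gb_s.values())
print(f"downstream peak {total:.1f} GB/s vs uplink {uplink_gb_s:.1f} GB/s "
      f"({total / uplink_gb_s:.1f}x oversubscribed)")
```

That's roughly 2.4x oversubscribed on paper, which is the crux of the disagreement below: whether real workloads ever light up enough of those ports at once for the oversubscription to matter.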
__ideally__ they'd have done something more like x4 PCIe 5.0 for the secondary NVMe drives and downstream system ports to share, along with an x2 PCIe 5.0 link utilized for the other existing ports on the two chips.
The sort of people that buy the _high end_ prosumer boards DO use things in bursts often enough for it to be a consideration.
In the end, the chipset devices AMD has sourced from 3rd parties are good as a general tool for manufacturers... My issue is that I'd _really_ like their high end chips to support more PCIe 5.0 x4 lanes, possibly used in aggregations.
Imagine if they instead supported 5.0 x16 (or x8+x8) AND 5.0 x16 (2x8 || x8+x4+x4 || 4x4). That'd allow for either a second full x16 slot for future mass IO devices (be it a GPU or NVMe riser card) or the full sized ATX boards with a good number of x4 slots.
Maybe that is what a lower core count, higher MHz, Threadripper socket was really for.
> Maybe that is what a lower core count, higher MHz, Threadripper socket was really for.
It is. Historically the HEDT sockets have often overlapped with the consumer socket in terms of core count - this was true of X58, X79, and sTR4, and X99 was so cheap that de facto it did overlap anyway (5820K basically cost the same as a 4790K, and motherboard costs were in-line with what we saw from X570 boards until the B550 line settled prices down a bit).
That’s fine because HEDT is not about core count, it’s about memory and PCIe lanes. The current offerings leave a void for "I want a big platform but I don't need >24 cores and I'm still somewhat price-sensitive", the classic "workstation/prosumer" tier that used to be serviced by things like the 5820K/3930K/1900X/1920X.
Potentially you could get to a similar place with a bunch of PCIe 5.0 slots attached to PCIe switch chips - this style of board used to be called “supercarrier” by one brand. Unfortunately it pretty much died out in the wake of SLI and crossfire becoming niche and then extinct. And the current crop of Intel and AMD boards only offer PCIe 5 on the first slot anyway so that isn’t quite as possible as you’d think at first glance.
It’s really a shame the way AMD hollowed out the HEDT segment and cranked prices. A 3960X is four 3600s on a HEDT package with a single bigger IO die instead of four little ones; it’s a very cheap chip to produce, and it should really be going for something in the $700-800 range rather than $1600+.
(And the HEDT boards are also quite expensive for what they are - the ROMED8-2T gets you 7x slots of full-capacity PCIe 4.0 x16, with power delivery for 280W TDP CPUs, dual 10GbE, and a BMC, for $600. Look at what a $1000 sTRX40 board buys you and just laugh; "gamer" boards are ripping you off.)
Again, the precedent is the 5820K and the TR1900 series where these savings were passed on to the consumer - it is historically abnormal for HEDT to be such a huge reach compared to desktop chips, but AMD isn’t interested in pursuing low-end (actually they aren’t even interested in releasing Zen3 HEDT chips at all outside WRX80) and Intel has abandoned the segment entirely for now. Maybe Alder Lake-X will change the situation and force AMD to pay a little more attention, just as it has forced some of the ridiculous 5000 series price increases to be backed down.
Right now it is actually worth a strong look at Epyc server boards like ROMED8-2T and chips like the 7402P because if you don’t need the absolute clock rate of Threadripper the Epyc chips are often cheaper per-core while offering a better PCIe and memory capability. That’s completely opposite from how HEDT has always worked but AMD is pushing hard in the server segment and sandbagging in the HEDT segment and that flips the math in a lot of homelab or workstation situations.
(note: the "pcie5 only on the first slot for X670E, no PCIe 5 for X670" appears to have been a false rumor, per the computex presentation it's "X670E is PCIe 5 on everything, X670 is PCIe5 on the first slot, B650 is pcie5 only on storage".)
To justify worrying about available bandwidth, you shouldn't be listing available ports but instead listing specific devices along with a use case that would actually have them actively transferring data simultaneously in the same direction at speeds that would make the x4 uplink problematic.
> #1 expected ASM4242 mux with 2x DP for APUs -- It'd be nice if these pins could export additional PCIe lanes for non-APU systems.
Keynote slides that just came out show RDNA2 in the Zen4 I/O die, so it looks like there won't be non-APU systems. I think using the pins for PCIe sometimes and DisplayPort other times, depending on the CPU you install, would make things more confusing, IMHO.
Yes, seeing those this morning: if even the higher-end CPUs all come with at least an anemic framebuffer GPU, a server could use the x8 + x8 links intended for a desktop's GPU as IO expansion slots. That makes the platform more palatable as a 'could be pressed into service as a server' segment for both new and hand-me-down builds.
I have an old Zen 1 desktop that has served me well, but it is becoming outdated. If I am going to upgrade to DDR5 and AM5, I’d prefer to be PCIe 5 compatible so that my upgrade investment is maximized.