
> And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.

If that were the case the above system wouldn't have needed 8 sockets. With NUMA systems the app needs to be scheduling group aware anyways. The difference here really appears when you have a single socket with more than 64 hardware threads, which took until ~2019 for x86.



Why would an application need to be NUMA aware on Linux? Most software I've ever written or looked at has no concept of NUMA. It works just fine.


The same reasons it would on macOS or Windows; most people just aren't writing software that needs a single process running many hundreds of threads efficiently across 8 sockets, so it's fine not to be NUMA aware. It's not that it won't run at all, a multi-socket system is still a superset of a single-socket system, it just runs much more poorly than it could in those scenarios.

The only difference with Windows is that a single processor group cannot contain more than 64 logical processors. This is why 7-Zip needed to add processor group support: even though a 96-core Threadripper presents as a single NUMA node, the software has to request assignment across 2x48 processor groups, the same as if it were 2 NUMA nodes with 48 cores each, because of the KAFFINITY limitation.
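
Roughly what that support looks like in a Win32 program; this is a minimal sketch of the processor group APIs (Windows 7+), not 7-Zip's actual code:

    // Sketch: spread one worker thread per processor group so a single
    // process can use more than 64 logical processors.
    #include <windows.h>
    #include <stdio.h>

    static DWORD WINAPI worker(LPVOID arg) {
        (void)arg;            // ... do work on whichever group we were pinned to ...
        return 0;
    }

    int main(void) {
        WORD groups = GetActiveProcessorGroupCount();
        printf("processor groups: %u\n", (unsigned)groups);

        for (WORD g = 0; g < groups; g++) {
            DWORD cpus = GetActiveProcessorCount(g);
            HANDLE h = CreateThread(NULL, 0, worker, NULL, CREATE_SUSPENDED, NULL);
            if (!h) continue;

            // Without this, threads stay in the process's starting group and the
            // whole process is capped at 64 logical processors by KAFFINITY.
            GROUP_AFFINITY ga = {0};
            ga.Group = g;
            ga.Mask  = (cpus >= 64) ? ~(KAFFINITY)0 : (((KAFFINITY)1 << cpus) - 1);
            SetThreadGroupAffinity(h, &ga, NULL);

            ResumeThread(h);
            CloseHandle(h);   // fire-and-forget for the sketch
        }
        return 0;
    }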

Examples of common NUMA aware Linux applications are SAP HANA and Oracle RDBMS. On multi-socket systems it can often be helpful to run postgres and such via https://linux.die.net/man/8/numactl too, even if you're not quite at the scale where you need full NUMA awareness in the DB. You generally also want hypervisors to pass the correct NUMA topologies to guests. E.g. if you have a KVM guest with 80 cores assigned on a 2x64 Epyc host, then you want to set the guest topology to something like 2x40 cores or it'll run like crap, because the guest schedules against one topology while reality is another.
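
If you want the awareness inside the process rather than via a numactl wrapper, libnuma is the usual route. A minimal sketch (my own illustration, not from HANA or Oracle; assumes libnuma is installed, build with -lnuma):

    // Keep a worker's execution and its allocations on the same NUMA node.
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this kernel\n");
            return 1;
        }

        int node = 0;                    // real code would spread workers across nodes
        size_t len = 64 * 1024 * 1024;

        numa_run_on_node(node);          // restrict this thread to the node's CPUs
        void *buf = numa_alloc_onnode(len, node);   // memory backed by the same node
        if (!buf) return 1;

        memset(buf, 0, len);             // touch pages so they're actually placed locally
        // ... work on buf without paying cross-socket memory latency ...

        numa_free(buf, len);
        return 0;
    }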


There were single image systems with hundreds of cores in the late 90s and thousands of cores in the early 2000s.

I absolutely stand by the fact that Intel and AMD didn't pursue high core count systems until that point because they were so focused on single core perf, in part because Windows didn't support high core counts. The end of Dennard scaling forced their hand, and forced Microsoft's processor group hack.


AMD and Intel were focused on single core performance because personal desktop computing was the bigger business until around the mid-to-late 2000s.

Single core performance is really important for client computing.


They were absolutely interested in the server market as well.


Do you have anything to say regarding NUMA for the 90s core counts though? As I said, it's not enough that there were a lot of cores - they have to be monolithically scheduled to matter. The largest UMA design I can recall was the CS6400 in 1993; to go past that they started to introduce NUMA designs.


Windows didn't handle NUMA either until they created processor groups, and there are all sorts of reasons why you'd want to run a process (particularly on Windows, which encourages single process, high thread count software architectures) that spans NUMA nodes. It's really not that big of a deal for a lot of workloads where your working set fits just fine in cache, or where you take the high hardware thread count approach of just having enough contexts in flight that you can absorb the extra memory latency in exchange for higher throughput.
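
As a rough illustration of when any of this matters at all, here's a small sketch (my own, using Windows 7+ topology APIs) that checks whether a single-process design even needs to think about groups on a given machine:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        DWORD total  = GetActiveProcessorCount(ALL_PROCESSOR_GROUPS);
        WORD  groups = GetActiveProcessorGroupCount();
        ULONG numaMax = 0;
        GetNumaHighestNodeNumber(&numaMax);

        printf("logical processors: %lu, groups: %u, NUMA nodes: %lu\n",
               (unsigned long)total, (unsigned)groups, (unsigned long)numaMax + 1);

        if (groups > 1) {
            // Threads inherit the process's starting group; without explicit
            // SetThreadGroupAffinity calls the process tops out at 64 CPUs.
            puts("need group-aware scheduling to use the whole machine");
        }
        return 0;
    }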


3.1 (1993) - KAFFINITY bitmask

5.0 (1999) - NUMA scheduling

6.1 (2009) - Processor Groups to have the KAFFINITY limit be per NUMA node

Xeon E7-8800 (2011) - An x86 system exceeding 64 total cores is possible (10x8 -> requires Processor Groups)

Epyc 9004 (2022) - KAFFINITY has created an artificial limit for x86 where groups have to be split more finely than NUMA nodes

If x86 had actually hit a KAFFINITY wall then the E7-8800 moment would have occurred years before processor groups, because >8 core CPUs are desirable regardless of whether you can stick 8 of them in a single box.

The story is really a bit the reverse of the claim: NT in the 90s supported architectures which could scale past the KAFFINITY limit. NT in the late 2000s supported scaling x86, but it wouldn't have mattered until the 2010s. Ultimately KAFFINITY wasn't an annoyance until the 2020s.



