If you're interested in more details on the cost of producing chips on the latest nodes, this is a good back-of-the-envelope breakdown for the Ryzen 7950X. [1] They use a visual die-yield calculator to walk through how chiplet designs improve the economics.

Actual full-wafer costs for TSMC 5nm are not public, but analysts consider a ballpark of around $17K reasonable. Using all the figures they estimate, the manufacturing cost of the 7950X's die collection, fully packaged, averages around $70. That's high as far as large-scale chip manufacturing goes, but this is also a part that retails for $600; there's certainly plenty of room to remain profitable at the top.
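
For a sense of where that ~$70 comes from, here's a minimal sketch of the die-cost arithmetic, using the standard die-per-wafer formula and a simple Poisson yield model. The 0.07/cm² defect density and the CCD area are my assumptions for illustration, not figures from the video:

  import math

  def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
      """Gross die per wafer: wafer area over die area, minus edge losses."""
      r = wafer_diameter_mm / 2
      return int(math.pi * r ** 2 / die_area_mm2
                 - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

  def poisson_yield(die_area_mm2, defects_per_cm2):
      """Poisson yield model: probability a given die has zero defects."""
      return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

  wafer_cost = 17_000  # ballpark TSMC 5nm wafer cost, USD
  ccd_area = 70.0      # roughly the Zen 4 CCD size, mm^2
  gross = dies_per_wafer(300, ccd_area)         # 300 mm wafer
  good = gross * poisson_yield(ccd_area, 0.07)  # assumed defect density
  print(f"good CCDs per wafer: {good:.0f}")              # ~890
  print(f"cost per good CCD: ${wafer_cost / good:.2f}")  # ~$19

Two CCDs at roughly $19 each, plus an IO die on the cheaper 6nm node, plus packaging and test, lands right in the neighborhood of that ~$70.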

Realistically, R&D costs are still the biggest consumer of those margins.

  1: https://www.youtube.com/watch?v=oMcsW-myRCU


Yeah, that video is fantastic. I wonder if the higher costs will place more emphasis on running chips at power levels that give better longevity. We can roughly halve operating cost, and plausibly double lifetime, by backing chips off their peak clocks by about 20%. Seems plausible if we don't have technological obsolescence to force depreciation.
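
As a rough illustration of why a modest clock reduction buys so much: dynamic power scales as roughly C·V²·f, and voltage tends to track frequency near the top of the V/f curve, so power goes as roughly f³ up there. The cubic exponent is a rule of thumb, not a measurement of any particular chip:

  def relative_power(clock_scale):
      """Relative dynamic power if voltage scales linearly with clock."""
      return clock_scale ** 3

  for clock in (1.0, 0.9, 0.8):
      print(f"{clock:.0%} clock -> {relative_power(clock):.0%} power")
  # 100% clock -> 100% power
  # 90% clock -> 73% power
  # 80% clock -> 51% power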


Don't chips at stock power levels with reasonable cooling already show extremely high resilience for at least 5 years? Most datacenters run no more than a 4-year refresh cycle for servers, driven by power- and space-efficiency optimizations.

For example, der8auer ran a test [1] on chips like the 5800X, heavily overclocked at high voltage, under stress loads for over 4,000 hours, and essentially concluded that the chips are likely to endure for at least 5 years even under rather extreme conditions, and likely much longer under normal circumstances. The handful of systems he tested isn't statistically significant the way datacenter-scale data would be, but it is fairly illustrative.

  1: https://www.youtube.com/watch?v=ZAww0c2m-ks
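
For context on how 4,000 hours of stress testing can translate to years of normal use, here's the standard Arrhenius temperature-acceleration calculation. The activation energy and junction temperatures are assumptions for illustration; der8auer doesn't present this math:

  import math

  K_B = 8.617e-5  # Boltzmann constant, eV/K
  E_A = 0.7       # assumed activation energy, eV

  def acceleration_factor(t_use_c, t_stress_c, ea=E_A):
      """Arrhenius acceleration factor between two junction temperatures."""
      return math.exp(ea / K_B * (1 / (t_use_c + 273.15)
                                  - 1 / (t_stress_c + 273.15)))

  # Assumed junction temps: 95 C under stress, 60 C in normal use.
  af = acceleration_factor(60.0, 95.0)
  print(f"acceleration factor: {af:.1f}x")                    # ~10x
  print(f"4,000 stress hours ~ {4000 * af / 8760:.1f} years") # ~4.6 years

Voltage acceleration, which this ignores, would push the equivalent lifetime higher still, consistent with "likely much longer under normal circumstances."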


5 years is probably a fair current estimate, but that also doesn't come for free. The fans in DC servers move a ton of air, which itself further increases power use. I was thinking more on the order of 10-20 years.


Under most circumstances I can't imagine a reason to bother powering on a 10-year-old system, short of nostalgia. The costs of running it will quickly eclipse the cost of buying something newer and more efficient.
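
To put rough numbers on "quickly eclipse" (electricity price, PUE, and power draws all assumed for illustration):

  kwh_price = 0.15  # USD per kWh
  pue = 1.5         # cooling/distribution overhead multiplier
  old_watts, new_watts = 400, 80  # same workload, ~10 years of efficiency gains
  hours_per_year = 8760

  savings = (old_watts - new_watts) / 1000 * hours_per_year * kwh_price * pue
  print(f"yearly power savings: ${savings:.0f}")  # ~$630

At rates like that, the running costs of the old box overtake the price of modern replacement hardware within a few years.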

Of course, there are plenty of edge cases, like needing bare metal with a particular sort of software compatibility or IO requirement. Some industrial computers still run 486 chips with ISA buses for this reason. Those systems, though, were engineered with longevity in mind from the outset.

Another edge case, just for fun: embedded-style systems like the Raspberry Pi. These are tiny, low power, and can serve specialty purposes for ages. They are also engineered on nodes and configured in a manner that will likely leave plenty of them running successfully in 10-20 years' time as it is.

It is really only since we've entered the era below TSMC's 7nm node that longevity has become much of a concern at all. It would take a whole essay to even TL;DR why it only becomes relevant once those nodes start to be known as "mature", and this is already enough of a tangent, so I'll just leave this breadcrumb of a presentation on the lifecycle of silicon process nodes:

  https://www.youtube.com/watch?v=YJrOuBkYCMQ


He leaves out R&D and mask costs.

That last one is very significant. The hyper-reflective EUV masks, essentially precision multilayer mirrors, are incredibly hard to make and cost hundreds of millions per set. The mask set alone no doubt raises per-chip cost by $10 or so.

Likewise, hundreds of engineers for 3-5 years can run up a couple billion dollars that must be recouped.
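
Putting rough numbers on both, purely as assumed illustrations (none of these figures are disclosed):

  mask_set_cost = 100e6  # hypothetical EUV mask-set cost, USD
  mask_volume = 10e6     # chips produced from that mask set
  print(f"mask cost per chip: ${mask_set_cost / mask_volume:.0f}")  # $10

  rd_cost = 2e9          # "hundreds of engineers for 3-5 years"
  family_volume = 50e6   # assumed volume across the whole product family
  print(f"R&D cost per chip: ${rd_cost / family_volume:.0f}")       # $40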



