
What's interesting about modern GAA nanosheet transistors at the "1-2nm" scale (really ~6-12nm gate length) is that the channel contains only about 50-100k silicon atoms.

That means the channels are inherently intrinsic. You can't really dope the channel when there's statistically fewer than one dopant atom in it. There's a nice review from 2023.

https://semiengineering.com/what-designers-need-to-know-abou...

https://www.semiconductor-digest.com/the-shape-of-tomorrows-...

This is more recent with pretty pictures.
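Back-of-envelope check of those numbers (the channel dimensions here are my assumptions, not figures from the linked articles):

```python
# Silicon has ~5e22 atoms/cm^3, i.e. ~50 atoms/nm^3.
SI_ATOMS_PER_NM3 = 50.0

# Hypothetical nanosheet channel: width x thickness x gate length, in nm.
w, t, lg = 15.0, 5.0, 12.0
volume_nm3 = w * t * lg                          # 900 nm^3

atoms = SI_ATOMS_PER_NM3 * volume_nm3
print(f"Si atoms per sheet: ~{atoms:,.0f}")      # ~45,000 per sheet

# Even heavy channel doping (1e18 cm^-3 = 1e-3 per nm^3) gives
# less than one dopant atom in the whole channel.
dopants = 1e-3 * volume_nm3
print(f"Expected dopant atoms: {dopants:.2f}")   # ~0.9
```

Multiply by 2-4 stacked sheets per transistor and you land in the 50-100k range, and the expected dopant count stays below one, hence intrinsic channels.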

The key is that all increases in transistor count are now based on either stacking layers in the silicon (like flash scaling), stacking die with chiplets (we're already at 12-16 die for HBM) or scaling beyond the reticle limit (there's a lot of investment in larger substrates right now). None of these help cost scaling and all have huge yield challenges.

Moore's law really ends when the investors and CFOs decide it does. The generative AI boom has extended that for a while.



from a strict Moore's law (ML) perspective, stacking, chiplets, and multi-reticle are not going to help.

that is, ML is about periodic shrinks producing squared improvements to device density at iso-area, and thus to cost. if you have to change equipment, ML (in this narrow sense) is out the window. if you have to spend linear resources to stack die or chiplets, that's also not ML-compatible (linear growth, not 1/shrink^2).
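the arithmetic behind that distinction, as a toy comparison (my framing and numbers, just to illustrate the 1/shrink^2 vs linear point):

```python
base_density = 1.0

# Classic Moore's-law node shrink: a 0.7x linear shrink roughly
# doubles density at iso-area, at roughly iso-cost per wafer.
shrink = 0.7
shrink_density = base_density / shrink**2   # ~2.04x density

# Stacking: N die gives ~N x density but also ~N x silicon cost,
# so density-per-dollar stays flat.
n_die = 2
stack_density = base_density * n_die        # 2x density...
stack_cost = n_die                          # ...at 2x cost

print(f"shrink: {shrink_density:.2f}x density at ~1x cost")
print(f"stack:  {stack_density:.2f}x density at {stack_cost}x cost")
```

the shrink compounds for free each node; the stack is pay-as-you-go, which is why it doesn't bend the cost curve.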

vertical flash is interesting because it appears to scale surprisingly well - that is, extra layers don't seem to exponentially degrade yield. I'm not sure any of that applies to logic, though.


Yep, only yield improvements and equipment depreciation are really in the cards. Cost per transistor increased from 7nm to 5nm, but we're about at breakeven with 3nm again. If there's no real fab competition at 2nm, then price per transistor likely won't fall. I suspect the same is true at 1nm. It doesn't feel like any of the alternatives (CNT etc) are really able to step up right now either.

https://www.chipstrat.com/p/what-happens-if-tsmc-controls-it

My feeling is that algorithmic improvements to LLMs may continue for another order of magnitude or two though. That could continue the scaling for a while.

Of course fixed development costs are now over $1B at 2nm, so if you're not making a large number of die (>1M at $1k/ea), it makes no sense to scale.
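A quick amortization sketch using those rough numbers ($1B fixed cost, $1k/die, both from the comment above, not audited figures):

```python
nre = 1e9          # fixed (non-recurring) development cost at 2nm
die_price = 1e3    # rough budget per die

# How much the fixed cost adds per die at different volumes.
for volume in (10_000, 100_000, 1_000_000, 10_000_000):
    nre_per_die = nre / volume
    share = nre_per_die / die_price
    print(f"{volume:>10,} die: NRE adds ${nre_per_die:>10,.0f}/die "
          f"({share:.0%} of a $1k die)")
```

At 1M die the NRE alone eats the full $1k/die budget; you need well past that volume before the fixed cost amortizes into noise, which is the ">1M at $1k/ea" threshold above.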



