Absolutely ridiculous. No, Apple did not juice the M1 by giving it better cooling than x86. Quite the opposite, they took a big chunk of their design+process wins to the bank and exchanged them for even cooler and quieter operation because that's what they wanted all along. Cool and quiet was a deliverable.
It's absurd to point out that Apple could have gotten higher perf from x86 at higher power as if it's some kind of win for Intel. Yes, obviously they could have; that's true of every chip ever. They could take it even further and cool every MacBook with liquid nitrogen, requiring you to lug around a huge tank of the stuff just to read your email! They don't, because that's dumb. Apple goes even further than most laptop manufacturers and thinks that hissing fans and low battery life are dumb too. This leads them to low power as a design constraint, and what matters is what performance a chip can deliver within that constraint. That's the perspective from which the M1 was designed, and it's the perspective from which the M1 delivered in spades.
Maybe it's not clear to everyone: hotter transistors leak more current, so heat wastes power, which produces more heat. It's a feedback loop. And it doesn't require liquid nitrogen to nip in the bud. Running the chip hot benefits no one. Kicking on a fan and racing to idle without throttling would use less battery power.
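To make the feedback loop concrete, here's a toy fixed-point model: leakage grows exponentially with temperature, temperature rises with total power through the cooler's thermal resistance, and the two settle at a steady state. All of the constants (baseline leakage, doubling interval, thermal resistances) are made-up illustrative numbers, not measurements of any real chip.

```python
# Toy model of the leakage/temperature feedback loop.
# All constants are illustrative assumptions, not real chip data.

def steady_state_power(dynamic_w, theta_cw, ambient_c=25.0,
                       leak0_w=2.0, leak_doubling_c=25.0):
    """Iterate temperature and leakage to a steady state.

    dynamic_w: switching power for the actual work (watts)
    theta_cw: thermal resistance, deg C per watt (worse cooling = higher)
    Leakage is assumed to double every `leak_doubling_c` degrees above ambient.
    """
    temp = ambient_c
    total = dynamic_w
    for _ in range(200):
        leakage = leak0_w * 2 ** ((temp - ambient_c) / leak_doubling_c)
        total = dynamic_w + leakage
        temp = ambient_c + theta_cw * total
    return total, temp

# Same 10 W of useful work, two hypothetical coolers:
cool_w, cool_t = steady_state_power(dynamic_w=10.0, theta_cw=1.0)
hot_w, hot_t = steady_state_power(dynamic_w=10.0, theta_cw=2.0)
print(f"good cooling: {cool_w:.1f} W total at {cool_t:.0f} C")
print(f"poor cooling: {hot_w:.1f} W total at {hot_t:.0f} C")
```

With these made-up numbers, the worse cooler ends up burning a couple of extra watts purely on leakage for the exact same work, which is the "running hot benefits no one" point. (With a high enough thermal resistance the loop doesn't converge at all, i.e. thermal runaway.)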
I'm not sure what's so upsetting about the assertion that chips composed of a similar number of transistors, on a similar process node, cooled similarly, might function similarly. Because when all variables are controlled for, that's what I see.
Even the latest Intel has to offer, a Core Ultra 9 185H in an actively cooled system, only matches the fanless M1 MacBook Air in performance while drawing more than triple the power, three years after the fact. It's got nothing to do with a feedback loop: the Apple model has less cooling capacity and not even a fan, so a fan clearly wasn't needed to reach that performance level. And the idea of a conspiracy where Apple ran all their MacBooks hot for years before the switch just to make Intel look bad is a bit ludicrous. If the problem had ever been that the fans weren't on enough, they would have just turned them up. Turns out, they could remove them instead!
The one place I agree with you is the >~16-core (server-style) space with TBs of RAM, where total performance density matters more than power; Apple doesn't bother to really compete there. Where I differ slightly is that I don't think anything about the technical design prevents it. Just look at how Epyc trounced Intel in that space by using a bunch of 8-core chiplets instead of building a monolith. Apple simply doesn't have interest in serving that market. If Apple was able to turn a phone chip into something with the multicore performance of a 24-core 13900K, that doesn't exactly scream "architectural limitation" to me.
Parent didn't deserve any charity--Apple didn't sandbag anyone. They made the laptop they felt was the future based on a processor roadmap Intel failed to deliver on.
The fact that the exact same laptop designs absolutely soared when an M1 was put in them with no changes tells you everything you need to know about how Intel dropped the ball.
> The fact that the exact same laptop designs absolutely soared when an M1 was put in them with no changes tells you everything you need to know about how Intel dropped the ball.
Intel did screw up and stayed stuck on 14nm for far too long. But then, does Apple deserve credit for TSMC's process advantage? AMD was never stuck the way Intel was, and they have held the performance crown since then for most kinds of workloads. I suppose Apple figured that if they were going to switch chip vendors again, it might as well be to themselves.
True, it's not just a story of Apple besting Intel. AMD has been beating them too.
Rough recent history for Intel.
I agree that Apple figured if they were going to switch, they should just go ahead and switch to themselves. But the choice was really to switch to either themselves or AMD. Sticking with Intel at the time was untenable. 14nm is certainly a big part of that story, and I'm glad you at least finally recognize there was a serious problem.
If Intel had been able to deliver on their node shrink roadmap, perhaps Apple never would have felt the need to switch--or may have at least delayed those plans. Who knows, that's alternate history speculation at this point.
The article in question is about Intel potentially getting back to some level of process parity, perhaps even leadership. I'm looking forward to that because I think a competitive market is important.
But pretending Intel's laptop processors weren't garbage for most of the last 8 or so years is kind of living in an alternate reality.
I think a lot has happened in Intel land since Apple folk stopped paying attention, as well. Intel still has a lot of work to do to catch up to AMD, but they have been fairly consistently posting gains in all areas. Apple really doesn't have a power advantage other than that granted by their process node at this point, against either AMD or Intel. AMD has seemingly delayed the Strix Halo launch because it wasn't necessary to compete at the moment. And Qualcomm is taking the same path Apple has, but is willing to sell to anyone, and as a result has chips in all standalone VR headsets other than Apple's.
It remains to be seen if Apple is willing or able to scale their architecture to something workstation class (the last Intel Mac Pro supported 1.5TB of RAM; it's easy to build a 4TB Epyc workstation these days).