And really, there was a version of what I'm talking about in the shorter timespan with LLMs - OpenAI's GPT models existed for several years before someone got the idea to put them behind a chat interface, and the popularity / apparent capability exploded a few years ago.
> OpenAI's GPT models existed for several years before someone got the idea to put them behind a chat interface, and the popularity / apparent capability exploded a few years ago.
That's exactly what I said in the post you responded to: there weren't erratic jumps, there was steady progress over decades.
You keep switching back and forth between short and long time periods, as if the rapid, steady growth of the past couple of years is how it's gone for decades. It isn't - we're currently in a short* period of rapid growth after a decade or so of stagnation. That's what "erratic" means: it has not been steady. Over the past several decades there have been several times when we've seen rapid growth for a short period, then it hits a wall and there's little or no growth until the next breakthrough.
* Granted, we don't know for sure it'll be short this time, but hints are that we're starting to hit that wall, with improvements slowing down.
https://hai.stanford.edu/news/ais-ostensible-emergent-abilit...