There are plenty of people - technical and non-technical - who seem to be acting like AGI is right around the corner thanks to LLMs, and who are, more broadly, vastly overstating the current capabilities of LLMs. I'm observing this in real life as much as on the internet. Two very distinct groups of people stand out to me: (1) high-level execs with vested interests in AI, and (2) managers who haven't even bothered to create an OpenAI account and are asking their subordinates to use ChatGPT for them - an unforeseen usage of LLMs: by human proxy.
I think you are missing a step. A lot of people believe AI will advance so much that it will be indistinguishable from the best possible human reasoning. The evolution of LLMs just gives us a clue about the speed of AI's improvement. That does not mean that LLMs, which are one form of AI, will become AGI. They are just one path that AI is following, and will probably become a subset of something more advanced.