For me, the entire AGI conversation is hyperbolic / hype. How can we ascribe intelligence to something when we ourselves have such a poor (read: no) grasp of what makes us conscious? I'm associating intelligence with consciousness because the two seem correlated. Are we really ready to equate "AGI" with solving math problems ("new Q algo.")? That seems incredibly naive and reinforces my opinion that LLMs are much more like crypto than actual progress.
It's not hype. It's a language problem that makes people like you think this way.
The problem is that "consciousness" is a vocabulary word that establishes a hard boundary where no such boundary exists. The language makes you think something either is conscious or is not, when in reality those two states are extreme endpoints on a gradient.
The vocabulary makes the concept seem binary, and makes it seem more profound than it actually is.
Thus we have no problem identifying things at the extremes. A rock is not conscious; that's obvious. A human IS conscious; that's also obvious. But only because these two objects sit at the extreme ends of the gradient.
For something fuzzy like ChatGPT, we get confused. We think the problem is profound, but in actuality it's just poorly defined vocabulary. The word "consciousness" assumes the world is binary, that something is either/or, when the reality is a gradient.
When we debate whether something is "conscious" or not, we are just arguing about where the line of demarcation is drawn along the gradient. Does it need a body to be conscious? Does it need to be able to do math? Where you draw that line is just a matter of vocabulary, so arguments about whether LLMs are conscious are arguments about definitions.
We as humans are biased, and we blindly allow the vocabulary to mold our thinking. Is ChatGPT conscious? It's a loaded question based on a worldview shaped by the vocabulary. It doesn't even matter: the boundary is fuzzy, and any vocabulary attempting to describe this gradient is arbitrary.
But hear me out: ChatGPT and DALL-E are NOT hype. Why? Because along that gradient they are leaps and bounds further than anything we had even a decade ago. It's the closest we've ever been to the far endpoint. Whichever side of the great debate you're on, you can agree with that much.
But it's not obvious at all. It may possess consciousness in a way we can't relate to or communicate with.
This is the whole problem with consciousness and has been discussed by philosophers for centuries. We each appear to be conscious but can't be certain anything else is or isn't.
Yeah, I stopped reading since it's based on a false premise. I've since read the rest, and it's still undermined by the same unprovable assumption.
Perhaps consciousness is binary and the rock is just as conscious as a human. We don't know. But consciousness != intelligence; it's reasonable to assume humans are more intelligent than rocks, of course, but we can't say anything about each one's level of consciousness.
"consciousness" is a word made up by humans. There's no way someone can make up a word without "knowing" what it means.
If we made up a word and we don't know what it means, that means we "chose" not to know what it means. We made up the fact that we don't know. Ultimately, all words in the English language are made up by humans.
The concept of consciousness doesn't exist on its own. It exists because we made up a word for it. And the concept seems fuzzy because the word was made up and a fuzzy definition was chosen for it.
When you debate what "consciousness" is, you are debating the definition of the word. This is not profound. The word is made up; the definition is arbitrarily chosen. You are debating a vocabulary problem: what arbitrary definition should be attached to what arbitrary word. You are attempting to refine the fuzzy boundaries of a definition that we as humans made fuzzy by our own choice.
Take a car and a boat. If I made some vehicle that can both drive on the road and sail on the water, is it a car or a boat? What you're not seeing is that it doesn't matter. It's just a vehicle, but the words "car" and "boat" lock you into this delusional debate that's attempting to classify the car-boat as one or the other. Do you understand? The concept of a car and a boat is poorly defined language influencing the way you think. Whether the vehicle is a car or a boat is meaningless. Same with consciousness.
If you still don't get it, how about this: I'll make up a new concept called Flurmo. A Flurmo is something that is 30-40% a car and 60-70% a boat. Now when you debate whether the vehicle is a car or a boat, you have to consider whether it's a Flurmo as well. Car, boat, or Flurmo? It's easy to see that Flurmo is made up; it's harder to see that "car" and "boat" are the SAME kind of thing, words that are also made up. And so is consciousness. Consciousness is Flurmo.
You stopped reading because you made a false assumption. You then continued reading at my prompting and came away with a conclusion based on misunderstanding the point. Hopefully you get what I'm saying now.
Sorry, just wrong on so many levels. Consciousness is not a concept, it's a self-evident experience. A rose by any other name would be just as beautiful, etc.
Also, you're not being rational and consistent. First you say nobody truly knows what consciousness is, that philosophers debate it... then you define it as an "experience." Those are two inconsistent statements. And as for your second statement: isn't that just a definition you chose, one that many "philosophers" agree or disagree with?
Don't answer that question; it's rhetorical. A rational discussion can't be had with someone who is inconsistent with the statements he makes, or with someone who is deliberately trolling. I suspect you are the latter.
Completely agree, and while we're at it... look, I'm just a guy, not an expert, but I can't understand why there's so much focus on AGI. It feels like there are so many niche areas where we could apply some kind of analytical augmentation, and by solving problems in the small we might learn something that would help figure out the larger question of intelligence. I don't need the AI to replace everything I do, I need it to solve 10,000 micro problems I solve every day - each of which is a business opportunity for someone.
AGI, as in computer intelligence that outdoes humans, would be a huge deal in practical terms. ChatGPT and similar are kind of like handy toys. With proper AGI you could link it to a robot body and tell it to go off, design a better version of itself, make a billion more robots and take over the world. It's a different category of thing. And if you think that's just sci-fi I think you'll get a surprise at some point during your life.
> With proper AGI you could link it to a robot body and tell it to go off, design a better version of itself, make a billion more robots and take over the world.
^^^^^ literally a plot ripped from the pages of science fiction used to reason about the real world.
> And if you think that's just sci-fi I think you'll get a surprise at some point during your life.
Such faith that fantasy can be made real. Wake me up when you have my hyperdrive ready.
I find that hard to believe. Ever watch Terminator?
But even if true, that science-fiction plot is so pervasive it would be easy to pick up from the millions of people who share the software engineer's blurry line between fantasy and reality.
> It's just logical really.
OK, then. You're a GI, go off and build an army of better yous and take over the world.
The idea is indeed logical and stupidly obvious, once you learn the basics of what "optimization" means, or what "recursion" is.
> I find that hard to believe. Ever watched Terminator?
Terminator has fuck all to do with recursive self-improvement. Don't confuse people who grew up on sci-fi with people who casually went to see Terminator or some other pop-culture artifact featuring some kind of "AI".
> OK, then. You're a GI, go off and build an army of better yous and take over the world.
What do you think the drama around eugenics, genetic engineering and designer babies is about? It's literally humans trying to make better humans in the only way that is available - reproduction.
AI made in silico would be more malleable, easier and cheaper to replicate. Self-improving software isn't even a fantasy; it exists in many forms - though it's far from open-ended like a self-improving GI would be.
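For a trivial taste of what I mean (a toy sketch I'm making up here, nowhere near open-ended): a (1+1) evolution strategy that mutates its own mutation step size is software tuning its own search procedure.

    # Toy self-adapting optimizer: the search mutates its own step size
    # (sigma) along with the solution - a tiny, closed feedback loop.
    import random

    def f(x):  # objective to minimize
        return (x - 3.0) ** 2

    x, sigma = 0.0, 1.0
    for _ in range(200):
        # Candidate inherits a perturbed copy of both the solution
        # and the step size that produced it.
        new_sigma = sigma * (1.3 if random.random() < 0.5 else 1 / 1.3)
        candidate = x + random.gauss(0, new_sigma)
        if f(candidate) < f(x):  # keep improvements to both
            x, sigma = candidate, new_sigma
    print(x, sigma)  # x approaches 3.0; sigma tends to shrink as it converges

Toy-level, obviously - it improves its own search parameter, not its own code - but the feedback-loop shape is the same.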
It is not technically logical to think one can see the future, but it is colloquially logical.
Judging reality by how it appears is a bad strategy; this should be common knowledge by now.
What's concerning to me is that I suspect LLMs will be able to learn and remember thousands of basic facts like this, and ~reason on top of them. Perhaps they won't figure this out on their own, but what if all it takes is one individual to point them in this direction? I bet there are numerous people who know much more about this than me working for our various three-letter agencies.
>> I find that hard to believe. Ever watched Terminator?
> Terminator has fuck all to do with recursive self-improvement. Don't confuse people who grew up on sci-fi with people who casually went to see Terminator or some other pop-culture artifact featuring some kind of "AI".
You're not following the thread. The future timeline in Terminator does involve something like an AI making "a billion more robots [to] take over the world." The popularity of that and similar sci-fi makes the claim that someone has never encountered it hard to believe.
> What do you think the drama around eugenics, genetic engineering and designer babies is about?
So how has that been going? Those things should also probably be labeled "science fiction."
> AI made in silico would be more malleable, easier and cheaper to replicate. Self-improving software isn't even a fantasy; it exists in many forms - though it's far from open-ended like a self-improving GI would be.
Fantasies based on squishy assumptions. How do you know it would have an easier job optimizing itself than humans have? How do you know there isn't some fundamental contradiction in the concept of "superintelligence" that these fantasies are based on? Or even just some practical resource limits that makes the fantasy impossible?
> The future timeline in Terminator does involve something like an AI making "a billion more robots [to] take over the world."
Yes. That's distinctly different from Skynet iterating on itself a billion times to make itself smarter, which AFAIK (I'm not up to date with the full Terminatorverse, but then, most people aren't either) isn't something that happened in that story.
> The popularity of that and similar sci-fi makes the claim that someone has never encountered it hard to believe.
Again, there's very little of what we're discussing here in mass-market sci-fi. And most people, including many in tech, have a hard time wrapping their heads around the idea of a feedback loop, so no, I don't think it's something readily available from mass-market sci-fi.
(But the more niche, better-thought-out works will teach you feedback loops, and that is just one of the ways recursive self-improvement becomes an obvious idea.)
> So how has that been going? Those things should probably be labeled "science fiction."
Eugenics? We had to ban it and create such a strong cultural (and legal) repulsive field around it that it impedes biotech and medical research.
Designer babies? Weren't attempts made in China recently? And in the West, we're already correcting congenital defects, so all in all, it's less "science fiction", and more "science someone is going to apply soon, if they haven't already".
> How do you know it would have an easier job optimizing itself than humans have?
Because it was created by us, using processes and media that are strongly optimized for malleability: software, algorithms, digital data, optimization models. All well-defined (and comprehensible to an AGI, by definition) - unlike our own minds, which were made not by us but by a dumb, random process. And the fact that brains are made of stupidly complex nanotech instead of simple transistors is not helping.
Also because the kind of models we're now worried about gain capability through an optimization process that's open-ended, and limited only by availability of training data and compute. So if e.g. a successor of GPT-4 were to become AGI, it would be set up for recursive self-improvement from day one.
> How do you know there isn't some fundamental contradiction in the concept of "superintelligence" that these fantasies are based on? Or even just some practical resource limits that makes the fantasy impossible?
Maybe, but what makes you think this is the case? We know of some fundamental limits to compute, but we're very, very far from hitting them. Otherwise, I don't know of anything that would put a cap on intelligence at around human level. Remember: by the very nature of evolution, we're the dumbest possible beings capable of learning and building a technological civilization. There may be better brain designs than ours, but ours "took off", and we took over the world.
Sadly I can't build a better me, as I'm not of robotic construction. And I was being a bit flippant about the world takeover. But as soon as AI reached human level it would quickly go beyond it, given the rate these things improve, allowing it to get to work on improved models. For something along those lines in the real world, think the Tesla robots but with far better AI.
Actually thinking about it I wouldn't rule out Musk/Tesla going for the world takeover thing;)
So I link this to my Roomba, and in a few days time it'll design and build a much better Roomba, while confined to my apartment with only wheels and a vacuum to actuate?
I think a much more realistic outcome is that you put it into a fancier robot body that can go outside, and by the end of the week it's scrap in a homeless camp chop shop
> It's interesting to see that in people who often view themselves as hyper-rational.
It's perhaps because they are rational enough to realize, thanks to the same knowledge and skill that put them in their software engineering careers, that AGI isn't a fantasy but a possibility, and a potentially very big deal.
>I need it to solve 10,000 micro problems I solve every day - each of which is a business opportunity for someone.
Because you have to solve 10,000 different problems. And a huge number of those problems are going to have significant overlap, but sharing lessons between them is going to be difficult unless you have a generalized algorithm.
Many of the seemingly small problems do require a good model of the world for context and edge-case solving, so they still get very close to general intelligence.
Yep, at least in my eyes you'll never be able to "solve" self-driving without solving the G in AGI. You require a world model for predictions in order to have enough time to avoid many bad outcomes. Avoiding an empty soda can and avoiding a brick are similar problems, but one of them can easily lead to critical failures if you miss it.
I've thought about this a bit as well, and I think it's almost like a toxic concoction of incentive (how can "we" hype this up and make boatloads of money off of it?), coupled with a genuine (if subconscious) desire to be seen as a visionary/great engineer who "created artificial life." I mean, at least on HN, I see lots of this aspirational attitude toward living the sci-fi future of Star Trek, Ex Machina, etc., while couching the language in professions of expertise now that the firehose of cash has turned on.
Also there is the general hubris in all this of only looking at the new and shiny. I remember that pizza robot (some multi-axis hand thing) that cost whatever in building and research, when the Costco pizza "robot" is pretty darn good but doesn't sell as "futuristic/cool" because it's a spigot on a servo.
A(G)I models don't need higher order thinking or somesuch to be impactful. For that they just need to increase productivity with or without job loss (be Good Enough), which they are on a good track for.
The real impacts will come when they are properly integrated into the current computational fabric, which everyone is racing to do as we write this.
A particular subset of connectionists hold the philosophical belief that the mind IS a neural net, not merely that the neural net is a reductive practical model of the mind.
Hinton is one of these individuals, and with no definition of what intelligence is, it is an understandable if dogmatic position.
This whole problem of not being able to define intelligence pretty much allows us all to pick and choose.
In my mind, BPP is the complexity class solvable by ANNs, and it is a safe and educated guess that most likely BPP = P.
BPP being one of the largest practical complexity classes makes work in this area valuable.
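(For reference, the standard definition, in my own paraphrase:)

    % BPP: bounded-error probabilistic polynomial time.
    \[
      L \in \mathrm{BPP} \iff \exists\ \text{poly-time randomized TM } M
      \ \text{s.t.}\ \forall x:\ \Pr\big[M(x) = \mathbf{1}[x \in L]\big] \ge \tfrac{2}{3}.
    \]

If the widely believed derandomization conjecture BPP = P holds, randomness buys no asymptotic power, which is what makes that guess a safe one.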
But for many reasons that I won't enumerate again, AGI simply isn't possible, and believing in it requires a dogmatic position from anyone who has even a basic understanding of how these systems work and of the limits from the work of Gödel etc...
But many of the top scientists in history have been believers in numerology etc...
Associating math with LLMs is a useful tool to avoid wasted effort by those who don't believe AGI is close, but it won't convince those who are true believers.
LLMs are very useful for searching very high-dimensional spaces, and for those problems that are ergodic with the Markov property they can find real answers.
But most of what is popular in the press will almost certainly be a dead end for generalized use, as the systems are not extremely error tolerant.
Unfortunately it may take another AI winter to break the hype train but I hope not.
IMHO it will have a huge impact but overconfident claims will cause real pain and misapplication for the foreseeable future.
Couldn't agree more. How about this -- I think we've already reached AGI. Let me know if this tracks: pick a set of tasks that can be considered AGI tasks. Provided the task sequences can be compared as closer to or further from AGI, we can create a reward model using the same techniques ChatGPT used via RLHF. Thus, for any definition of AGI that is meaningful and selectable - even if subjectively selectable or arbitrarily preferential - we can create a reward model for it (toy sketch below).
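Concretely, the reward model is nothing exotic - here's a minimal sketch (PyTorch-style; the encoder, sizes and names are made-up stand-ins, not anyone's production code) of the Bradley-Terry pairwise recipe used for RLHF reward models:

    # Toy pairwise-preference reward model: learn a scalar score such
    # that human-preferred task attempts outscore rejected ones.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RewardModel(nn.Module):
        def __init__(self, dim=128):
            super().__init__()
            # A real reward model would encode text or trajectories with
            # a pretrained LM; a small MLP stands in here.
            self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

        def forward(self, x):
            return self.net(x).squeeze(-1)  # scalar score per example

    model = RewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(preferred, rejected):
        # Bradley-Terry: maximize P(preferred beats rejected) = sigmoid(r_p - r_r).
        loss = -F.logsigmoid(model(preferred) - model(rejected)).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    # Each batch: encodings of two attempts a human ranked against each other.
    train_step(torch.randn(8, 128), torch.randn(8, 128))

Once the scorer exists, any RL algorithm can optimize a policy against it; everything else is preference data.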
You might say: well, that's not AGI, AGI must also do such and such. Well, we can get arbitrarily close to that definition as well via RLHF.
Another objection might be: well, if that's the definition of AGI, it seems really underwhelming compared to the hype train. It says nothing about autonomy, sentience, free will - exactly. Those concepts can or should be orthogonal to doing productive work, IMHO.
So, there it is. We can now make a reward model for folding socks, and use gradient descent with RL to do the motion planning.
Maybe that's AGI and maybe it's not, but I'd really love it if we had a golden period between now and total enshittification that involved laundry-folding robots.
> Exactly. 5 years ago, we would have said what GPT-4 is doing now would be AGI.
Okay, and 500 years ago we would have said it was magic; that doesn't make it magic. People who don't understand a thing are often confused about the thing. As soon as you explain how the whole mechanism works, it's obvious that it's not that thing.
> People are forgetting how stupid people are. GPT is already better than average human. Most people can't do what we claim GPT must be capable of to qualify as AGI. The only logical conclusion is that many people are also not conscious and don't qualify as being able to reason.
Citation massively needed; this sounds like it was written by someone who thinks Idiocracy was a documentary and not a comedy.
Idiocracy wasn't a documentary?
Have you worked with people?
This sounds like someone sitting in their study, by the fire, smoking a pipe, adjusting their monocle and saying "hmm, Indubitably".
I think you are taking the extreme opposite view - that humans have some holy essence that can't be understood or duplicated - which is also not correct. Of course technology that is not understood can appear as magic, but nobody said anything of the sort.
The discussion was around AGI: will AI be as 'intelligent' (loaded term) as humans? And I'm saying a lot of people aren't doing anything that takes much 'intelligence', and current AI is already good enough to replace a large chunk of humans. Right now it is fragmented: there is AI that can beat Go, there is AI that can outfly an F-16 and beat human pilots, there are GPT/LLMs. It won't take much to start tying these technologies together and have something that 'appears' human. And if it is indistinguishable, then how can we prove it isn't conscious - or, conversely, prove that we are?
> The only logical conclusion is that many people are also not conscious and don't qualify as being able to reason.
Most humans aren't "able to" juggle 3 balls, but most humans are physically and mentally capable of learning to juggle 3 balls; it's just not a common thing for most people to learn. The same used to be true of reading and basic math, but look where we are now, with some good planning and hard work!
Well... humans have different mental "spaces" (not intended as a technical term).
Let's say I'm deep in a coding problem. A co-worker comes by and says "How did your team do in the game yesterday?". I say, "Um, uh... sorry, my head's not there right now." It takes us time to swap between mental "spaces".
So, if I have an AGI (defined as having a trained model for almost everything, even if that turns out to be a large number of different models) and it has to load the right model before it can reason on a topic, then that's pretty human-like. (As long as it can figure out which model to load... a toy sketch of that routing step follows.)
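Something like this (a deliberately dumb sketch with made-up names; a real system would use embeddings or a learned gating network):

    # Toy "which mental space?" router: pick a specialist by keyword
    # overlap, falling back to a general model when nothing matches.
    SPECIALISTS = {
        "coding": {"bug", "compile", "function", "stack"},
        "sports": {"game", "team", "score", "season"},
    }

    def route(query: str) -> str:
        words = set(query.lower().split())
        best = max(SPECIALISTS, key=lambda name: len(SPECIALISTS[name] & words))
        return best if SPECIALISTS[best] & words else "general"

    print(route("how did your team do in the game yesterday"))  # -> sports
    print(route("why won't this function compile"))             # -> coding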
The one thing missing is that (at least some) humans can figure out linkages between different mental "spaces", to turn them into a more coherent holistic mental space, even if they don't have (all of) each space at front-of-mind at any moment. I'm not sure if this flavor of an AGI could do that - could see the connections between different models.
The power of analogy is one of the most important things that humans seem to have.
Humans typically use the toolset they've picked up along the way to solve problems (hence the "if all you have is a hammer, every problem looks like a nail" saying). When you get people who are multi-disciplinary, they can commonly solve a complex problem in one field by bringing in parts of solutions from other fields.
Hence, if you have more life experiences (especially positive/learning ones), you are typically better off than a person who does not.
I also think this is where a lot of the interest in Q* came from after the OpenAI thing occurred, as it would be a means of allowing an AI to explore problem spaces and enlist specialist AIs and tools to do so.
Solid points. As a D&D nerd, might I offer that this is more along the lines of AGW (Artificial General Wisdom) than Artificial General Intelligence? Intelligence seems more closely related to "IQ", as in the (mechanical) "ability to solve a problem". But wisdom is knowing when to solve the problem, when not to solve it, or which problem to solve. And of course, there are those times when, instead of solving it, it's way better to talk about it with your fellow nerds on HN!!
It tends to boil down to semantics, but as I interpret it, a threshold for AGI has to involve some breakthrough in generalizable, adaptable intelligence. So, yeah, I would invoke the "that's not AGI" move.
An AGI should be able to solve any creative problem a human could, with drive and knowing purpose and coherent vision. The LLMs are still narrowly focused and require human supervision.
We might well get there with chained AIs automatically training new reward models for each new problem, or by some other paradigm, but I don't feel like we're past the threshold yet.