I _hope_ AGI is not right around the corner; for sociopolitical reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.
But even just taking what we have now, with some major power-usage reductions and minor improvements here and there, already seems like something that could be very usable/useful in a lot of areas (and to some degree we aren't really ready for that either, but I guess that's normal with major technological change).
It's just that for the companies creating foundation models it's quite unclear how they can recoup their already-spent costs without either a major breakthrough or forcefully (or deceptively) pushing the technology into a lot more places than it fits.
When it comes to recouping costs, a lot of people don't consider the enormous depreciation expense brought on by the up to $1 trillion (depending on which estimate) that has been invested in AI buildouts. That depreciation expense could easily exceed the combined revenue of all AI companies.
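A rough back-of-the-envelope sketch of that depreciation point. The figures here are illustrative assumptions, not reported numbers: the $1T is the upper-end estimate mentioned above, and the 5-year straight-line schedule is a common (assumed) useful life for GPU and datacenter hardware.

```python
# Illustrative straight-line depreciation on assumed AI capex.
capex_usd = 1_000_000_000_000   # upper-end buildout estimate (assumption)
useful_life_years = 5           # assumed useful life of the hardware

annual_depreciation = capex_usd / useful_life_years
print(f"Annual depreciation: ${annual_depreciation / 1e9:,.0f}B")  # $200B per year
```

Even at these rough numbers, $200B/year of depreciation is a useful yardstick against current AI revenue.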
If birth rates are as much a cause for concern as many people seem to think, and we absolutely need to solve it (instead of solving, for instance, the fact that the economy purportedly requires exponential population growth forever), perhaps we should hope that AGI comes soon.
I am worried about what will happen to various nations' economies relatively soon, long before the population actually halves, but I'm not worried that the fertility rate would continue on its trend as demographics change. Ignoring the potential second-order effects of economic collapse, wars over resources, etc., I think fertility rate would stabilize given that culture and genetics would by definition quickly become dominated by the people who do reproduce.
I think it's rather easy for them to recoup those costs: if you can disrupt some industry with a full AI company with almost no employees and outcompete everyone else, that's free money for you.
I think they are trying to do something like this(1) by providing, long term, a "business suite", i.e. something comparable to G Suite or Microsoft 365.
For a lot of the things that work well with current AI technology, it's super convenient to have access to all of your customers' private data (even if you don't train on it). For example, RAG systems for information retrieval are one of the things that already work quite well with the current state of LLMs. That access also lets you compensate for hallucinations and the LLM's lack of understanding: you can provide (working) links to, or include snippets of, the sources the information came from, and by putting all the relevant information in the LLM's context window instead of relying on its "learned" training data, you generally get better results. RAG systems worked well in some information-retrieval products even before LLMs.
And the thing is, if your users have to manually upload all potentially relevant business documents, you can't really make it work well. But what if they upload all of them to your company anyway, because they use your company's file-sharing/drive solution?
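The RAG pattern described above can be sketched in a few lines: retrieve the most relevant documents, then hand the model a prompt built from that context, with source markers it can cite (which is what makes the "working links to sources" possible). This is a toy sketch with assumed document names; the word-overlap scoring stands in for the embedding search a real system would use.

```python
# Minimal sketch of the retrieve-then-answer (RAG) pattern.
# Scoring is a toy word overlap; real systems use embeddings.

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k docs sharing the most words with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda name: len(q_words & set(docs[name].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, docs: dict[str, str]) -> str:
    """Build an LLM prompt containing only retrieved context, with citable markers."""
    names = retrieve(query, docs)
    context = "\n".join(f"[{name}] {docs[name]}" for name in names)
    # The model answers from this context and cites [name] markers,
    # which the product can turn into links back to the source files.
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer with citations."

# Hypothetical documents a business customer might already store in your drive product:
docs = {
    "refund_policy.txt": "Refunds are issued within 30 days of purchase.",
    "shipping.txt": "Orders ship within 2 business days.",
}
print(build_prompt("How long do refunds take?", docs))
```

The point of the sketch: the hard part isn't the prompt assembly, it's having the documents in the first place, which is exactly the advantage of also being the customer's file-storage provider.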
And let's not even consider the benefits you could get from a cheaper plan where you are allowed to train on the company's data after anonymizing it (aimed at micro companies; too many people think "they have nothing to hide", and it's anonymized, so it's okay, right? (no)). Or you could go rogue and just steal trade secrets to break into other markets; it's not like some bigger SF companies have been found doing exactly that (I think it was Amazon/Amazon Basics).
(1:) Though in that case you still have employees until your AI becomes good enough to write all your code, instead of "just" being a tool for developers to work faster ;)
For me, "AGI" would come in with being able to reliably perform simple open-ended tasks successfully without needing any specialized aid or tooling. Not necessarily very well, just being capable of it in the first place.
For a specific example of what I mean, there's Vending-Bench - even very 'dumb' humans could reliably succeed on that test indefinitely, at least until they got terminally bored of it. Current LLMs, by contrast, are just fundamentally incapable of that, despite seeming very 'smart' if all you pay attention to is their eloquence.
If someone handed you an envelope containing a hidden question, and your life depended on a correct answer, would you rather pick a random person out of the phone book or an LLM to answer it?
On one hand, LLMs are often idiots. On the other hand, so are people.
That's not at all analogous to what I'm talking about. The comparison would be picking an LLM or a random person out of the phone book to, say, operate a vending machine... and we already know LLMs are unable to do that, given the results of Vending-Bench.
More than 10% of the global population is illiterate. Even in first world countries, numeracy rates are 75-80%. I think you overestimate how many people could pass the benchmark.
Edit - rereading, my comment sounds far too combative. I mean it only as an observation that AI is catching up quickly vs what we manage to teach humans generally. Soon, if not already, LLMs will be “better educated” than the average global citizen.
And yet, I would be completely confident that an average illiterate person could pass the Vending-Bench test indefinitely if you gave them interfaces that don't depend on the written word (phone calls, abacuses, piles of blocks, whatever), and that the "smartest" LLM in the world couldn't. It's not about level of education, beyond the bare minimum needed to have any kind of mental model of the world.
I'd learn as much as I could about the nature of the question beforehand and pay a human with a great track record of handling such questions.
At the very least, it needs to be able to collate training data, design, code, train, fine tune and "RLHF" a foundational model from scratch, on its own, and have it show improvements over the current SOTA models before we can even begin to have the conversation about whether we're approaching what could be AGI at some point in the future.
Great, then we established that I (generally intelligent human), and LLMs, both can't perform the task in question without specific training. We still don't know if given that specific training we (a specific human or a specific LLM) would be able to perform it.
As in, it can learn by itself to solve any kind of generic task it can practically interface with (at least anything that isn't way too complicated).
To some degree LLMs can theoretically do this, but:
- learning (i.e. training them) is way too slow and costly
- domain adaptation (learning later on) often has a ton of unintended side effects (like forgetting a bunch of important previously learned things)
- they can't really learn by themselves in an interactive manner
- "learning" by e.g. retrieving data from a knowledge base and including it in answers (e.g. RAG) isn't really learning, just information retrieval, and it also has issues with context windows and planning
I could imagine OpenAI, in the not too distant future, putting together multiple LLMs + RAG + planning systems etc. to create something that could technically be called AGI, but which isn't really the breakthrough people associate with AGI.
I'd suggest anything able to match a professional doing knowledge work. Original research from recognisably equivalent cognition, or equal abilities with a skilled practitioner of (eg) medicine.
This sets the bar high, though. Still, I think there's something to the idea of being able to pass for human in the workplace. That's the real, consequential outcome here: AGI genuinely replacing humans, without need for supervision. That's what will have consequences. At the moment we aren't there (pre-first-line support doesn't count).
This is a question of how we quantify intelligence, and there aren’t many great answers. Still, basic arithmetic is probably not the right guideline for intelligence. My guess has always been that it’ll lie somewhere in ability to think critically, which they still have not even attempted yet, because it doesn’t really work with LLMs as they’re structured today.
That would be human; I've always understood the General to mean 'as if it's any human', i.e. perhaps not absolute mastery, but trained expertise in any domain.
He created a company that tracks and profiles people, psychologically manipulates them, and sells ads. And has zero ethical qualms about the massive social harm they have left in their wake.
That doesn't tell me anything about his ability to build "augmented reality" or otherwise use artificial intelligence in any way that people will want to pay for. We'll see.
Ford and GM have a century of experience building cars but they can't seem to figure out EVs despite trying for nearly two decades now.
Tesla hit the ball out of the park with EVs but can't figure out self-driving.
Being good at one thing does not mean you will be good at everything you try.
I was rather sad when everyone moved away from MySpace to Facebook. While its interface would likely seem poor today, I felt it was much better than what Facebook was offering. Still, it's hard to believe it was around 19 years ago that many started to move over.
While I cannot remember the names of these sites, there were various attempts to create shared-platform websites where you could create a profile and communicate with others. I remember joining a few, at least as far back as 2002, before MySpace and Yahoo 360. There was also Bebo, which, I think, was for the younger kids of the day.
Let's not forget about Friends Reunited.
Many companies become successful by being in the right place at the right time. Facebook is one of those companies.
Had Facebook been created a year or so earlier (or a year or two later), we would likely be using some other "social media" today. It would be interesting to see how that would have compared to Facebook. Would it be "more evil"?
Regardless, whether it's Facebook/Mark Zuckerberg or [insert_social_media]/[owner]... we would still end up with a new celebrity millionaire/billionaire, who would still be considered "a fool" one way or another.
I’m always fascinated when someone equates profit with intelligence. There are many very wealthy fools and there always have been. Plenty of ingredients to substitute for intelligence.
It certainly doesn't hurt when the government profiles and grooms intelligent people out of Stanford, SRI, and Harvard to the dark side, hands them an unlimited credit card, says "Make this thing that can do x, y, z," and then helps them network with like-minded creators and removes any obstacles in their path. One has to at least admit that was a contributing factor to their success, as the vast majority of people do not get these perks.
Intelligence probably IS positively correlated with success, but the formula is complex and involves many other factors, so I have to believe it's a relatively weak correlation. Anecdotally, I know about as many smart failures as smart successes.
You can be a wealthy fool who inherited money, or married into it. It is also possible to be a wealthy fool who was just in the right place at the right time. But I would guess that people who appear to have "earned" their money are much less likely to be wealthy fools than those who appear to have inherited/married into it.
We see this all of the time. Business makes successful bets in one area and tries to make bets in new area and fails.
Once you achieve wealth, it gives you the opportunity to make more bets, many of which will fail.
The greater and younger the success the more hubris. You are more likely to see fools or people taking bad risks when they earned it themselves. They have a history of betting on themselves and past success that creates an ego that overrides common sense.
When you inherit money you protect it (or spend it on material things) because you have no history of ever being able to generate money.
Aren't there enough examples of successful people who are complete buffoons to nuke this silly trope from orbit? Success is no proof of wisdom or intelligence or whatever.
Fortunes are just bigger now in both notional and absolute terms, which is inevitable with Gini going parabolic; it says nothing about the guy on top this week.
Around the turn of the century, a company called Enron collapsed in an accounting scandal so meteoric it also took down Arthur Andersen (there used to be a Big Five). Bad, bad fraud: a bunch of made-up figures, a bunch of shady ties to the White House, the whole show.
Enron was helmed by Jeff Skilling, a man described as "incandescently brilliant" by his professors at Wharton. But it was a devious brilliance: an S-tier aptitude for deception, grandiosity, and artful rationalization. This is chronicled in the book The Smartest Guys in the Room if you want to read about it.
Right before that was the collapse of Long-Term Capital Management: a firm so intellectually star-studded that the book about it is called When Genius Failed. They almost took the banking system down with them.
The difference between then and now is that it took a smarter class of criminal to pull off a smaller heist with a much less patient public and much less erosion of institutions and norms. What would have been a front page scandal with prison time in 1995 is a Tuesday in 2025.
The new guys are dumber, not smarter: there aren't any cops chasing them.
I think you'll find a consensus among clinical psychiatrists that the closest technical term for the colloquial notion of someone who puts all of their INT into LIE is Cluster B.
I see no evidence that great mathematicians or scientists or genre-defining artists or other admired and beloved intellectual luminaries with enduring legacies, or the recipients of the highest honors for any of those things, skew narcissistic or have severe empathy deficits or any of that.
Brilliant people seem to be drawn from roughly the same ethical and moral distribution as the general public.
To be clear, I didn't mean to imply that all intelligent people are s-tier deceivers, but rather only that all s-tier deceivers are intelligent. Going with your metaphor, in order to put all of your INT into LIE, you need to have something in your INT pool.
The lesson here, and from pretty much any page of any history book you care to flip to, is that sooner or later there's a bill that comes due for advancing the worst people to the highest posts.
If you're not important to someone powerful, lying, cheating, stealing, and generally doing harm for personal profit will bring you to an unpleasant end right quick.
But the longer you can keep the con going, the bigger the bill: it's an unserviceable debt. So Skilling and Meriwether were able to bring down whole companies and close offices across entire cities.
This is by no means the worst case, though, because if your institutions fail to kick in? There's no ceiling; it's like being short a stock in a squeeze.
Keep it going long enough, and it's your country, or your entire civilization.
Are we going to overlook the big orange elephant in the White House? Listen to him talk. It's hard for me to believe he wouldn't be labeled a moron by most if he weren't the President.
SBF wasn't as successful though. His success wasn't even in the same stratosphere as Zuckerberg. His company was around for 3 years. Facebook has been around for over two decades. In terms of net worth, SBF was somewhere around 60th, I think? Zuckerberg was no. 2. Same thing with their respective companies.
> a special order, 350 gallons, and had it shipped from Los Angeles. A few days after the order arrived, Hughes announced he was tired of banana nut and wanted only French vanilla ice cream
yes, there are plenty
more recent example, every single person who touched epstein
Nobody involved with Epstein was as successful as Zuckerberg. Howard Hughes's net worth is 55B adjusted for inflation. And I don't think, "he became known for his eccentric behavior and reclusive lifestyle—oddities that were caused in part by his worsening obsessive-compulsive disorder (OCD), chronic pain from a near-fatal plane crash, and increasing deafness." fits my "total moron" criteria.
There's a big discussion in there about the inherent requirement of labor, the definition of leadership, collective vs hierarchical decision-making, hegemonic inertia and market capture and more. This is probably not the best place to have it.
Not to say that Zuckerberg is dumb but there's plenty of ways he could have managed to get where he is now without having the acumen to get to other places he wants to be.
No one is rejecting the role of the leader; it's just extremely exaggerated nowadays, like everyone thinks Facebook == Zuckerberg. And leaders aren't worth 1000x (or even 1,000,000x for some) unless they are doing the job of 1000 people. In most cases they are not even capable of doing 99% of the work people in their companies can do. Egomaniac Musk has already published his thoughts on programming problems, only confirming how dumb he is in this field.
There were dozens of social networking companies at the time that FB was founded. If Zuck didn't exist those same or similar workers would have been building a similar product for whichever other company won the monopoly-ish social media market.
$1.8 trillion in investor hopes and dreams, but of course they make zero dollars in profit, don’t know how to turn a profit, don’t have a product anyone would pay a profitable amount for, and have yet to show any real-world use that isn’t kinda dumb because you can’t trust anything it says anyways.
Meta makes >$160 billion in revenue and is profitable itself; of course they're going to invest in future, longer-term revenue streams! Apple is the counterexample in a way, having maintained a lot of cash reserves (which, by the way, seem to have dwindled a LOT, as I just checked..?).
All that money was outside the US. The theory was for some time that they were waiting for the right moment (change of administration/legislation) so that they could officially recognize the profit in the US cheaply
You're asking the wrong question and, predictably, some significant portion of people are going to answer "yes".
Better to ask the question "Are you ready to starve to death already?", which is a more accurate version of "Are you ready to lose your income, permanently?"