
No offence, but I think you are extremely wrong.

Creating an AGI is the endgame for everything. Who cares about ads when you have an AI that can learn to do anything and improve upon itself continuously?



Really? I created two GIs and it wasn't very hard and was actually quite fun. Training them is a bit of a pain though. I'm willing to bet that based on total calories consumed they are amazingly efficient compared to their hypothetical AGI counterparts.


Yes but can they:

- live forever

- grow their own mental capabilities exponentially over that unlimited lifespan

- turn themselves into universe-eating von Neumann probes


Nothing can

- live forever

- grow exponentially forever

- "eat the universe" (I know, the last point was sci-fi gibberish)

In fact, humans are already pretty good at reproducing themselves, have managed to travel to space, and have exhibited finite periods of exponential knowledge growth interspersed with periods of collapse, since nothing grows exponentially forever.


My three-year-old said yes to all questions


You don't know if an AGI will agree with your profit motives.


There is a huge assumption baked into your comment and I do not agree with it.

AGI does not necessarily require consciousness, nor does it have to throw tantrums about its creators' purpose. AGI just means an intelligence that can be thrown at any problem, not just a particular game or task, similar to how humans can specialize in CS or in playing the violin.


Sure, it was somewhat tongue in cheek, but not entirely.

There is a semi-established definition that does include what I referred to:

> AGI can also be referred to as strong AI,[2][3][4] full AI,[5] or general intelligent action.[6] Some academic sources reserve the term "strong AI" for computer programs that can experience sentience, self-awareness and consciousness.[7]


That’s not going to happen until they have bodies.


“Who cares?” Well, the people who need to pay the people developing the endgame for everything you speak of.


I'm sorry, but this makes no sense to me.

"The people paying for the development of the AGI" could mean many things: Google's customers and users, Alphabet as a company, or the executives throwing money at the problem?

Either way, I don't really get your point. Your initial post was about how it is counterintuitive for Google to allocate funds to an AGI, since it makes its money from ads. These are not mutually exclusive; you can have both. But my point is that if you develop an AGI, then you can pretty much "conquer" the world, and revenue from ads becomes irrelevant.


How do you think they can conquer the world? How do you foresee governments not restricting a private company’s new powerful tool?



