Ray Kurzweil: A university for the coming singularity (ted.com)
27 points by quizbiz on June 8, 2009 | 28 comments


OK, wtf is all this circle-jerking over the Singularity... can anybody explain to me why it is so important that we need a whole university dedicated to it?

It just feels way too cultish


It feels cultish, but it's not. It's the real deal, because there will come a time when computers are "smarter" than people. Computer analytical skills will progress faster than human analytical skills. Computers will be asking humans questions.

It's not a cult at all. You don't have to believe it is coming. You don't have to associate with people who do believe it is coming.

It seems like a logical certainty to me. Unless there is some sort of human intervention or tragedy that stops us from progressing, innovating, and building more powerful computing systems, what is the alternative? That humans continue to outpace computers forever?

If you look around at the world, most people already have no idea what is going on with the internet, cloud computing, or artificial intelligence. Robots are sweeping our floors, mowing our lawns, and killing our enemies.

What is the alternative? How might The Singularity not occur?


> It seems like a logical certainty to me. [...] What is the alternative? How might The Singularity not occur?

Human intervention and refusal to allow it, as you already mentioned.

It could be a self-fulfilling prophecy, but it is not necessarily a logical certainty. For example, if I could convince you (with the same degree of conviction that you currently hold toward a "smarter"-computer future) that such a future will inevitably result in the enslavement of one man by another, and in ever more powerful dictatorships (that call themselves democracies), that very thought would be the first step toward the Singularity never happening. I believe a person called Theodore Kaczynski has previously argued along these lines, though he chose a more explosive form of argument than mere words :) http://en.wikipedia.org/wiki/Theodore_Kaczynski

One alternative is to restrict research to biological advances that benefit humans (stem cell research, for example) rather than trying to create ever more powerful machine-learning tools.

Just because "everyone" eats McDonalds, it does not take away the power of choice from you to never ever visit a McDonalds (for whatever reason), for example. Look at how scarily equipped are the police today, what things man has created for war (weapons that boil your invisibly from a remote distance), and you will understand that technology is fun as long it's not. It doesn't take a genius to look at each invention and decide whether it will most probably be used for the good or result in further loss of privacy and freedom. Two headings from today' ACM "TechNews" newsletter in my inbox are, "The Display That Watches You" and "Predictive Powers: A Robot That Reads Your Intention?". My reaction is : "Are you dumb? don't you see where this is leading?"


Innovations in one field are inevitably linked to innovations in others. Do you really think it's possible to hold back technological progress? That effectively means restraining human ambition. Would that not require an extreme police state?


What if it's impossible to develop an autonomous machine intelligence? I know that statement is equivalent to "what if it's impossible to develop heavier-than-air flight," and I think that we definitely should pursue AI research, but it is by no means guaranteed that we will end up with anything that behaves like a self-directed mind.

I'm choosing my words carefully here, because obviously a Singularity situation would not require a human-like intelligence. But as far as I know, neurologists and psychologists don't really know how human minds work, and computer scientists haven't built a computer mind that showed even a glimmer of "free will" or "self-awareness," if you'll pardon the terms. How can we take it for granted that we'll get there?


The Singularity doesn't require computer sentience. Researchers at the Singularity Institute [1] refer to their goal as a "powerful optimization process" [2]. All that's required is that it be better at general-purpose goal-seeking than humans; that would logically include the ability to wipe out the human race and tile the solar system with little smiley faces if we set its goal incorrectly.

[1] http://www.singinst.org/

[2] http://www.sl4.org/archive/0512/13006.html
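Here's a toy sketch of what a "powerful optimization process" with a mis-specified goal looks like (my own illustration in Python, not the Institute's code; the grid and objective are made up): a greedy hill-climber maximizes exactly the proxy objective it is handed, so "count smiley faces" gets pursued literally.

    # Hypothetical toy: hill-climbing on a literal, mis-specified objective.
    import random

    def mutate(grid):
        """Flip one random cell between a smiley and an empty tile."""
        g = list(grid)
        i = random.randrange(len(g))
        g[i] = ':)' if g[i] != ':)' else '  '
        return g

    def hill_climb(objective, state, steps=5000):
        """Greedily keep any mutation that scores higher."""
        best = state
        for _ in range(steps):
            candidate = mutate(best)
            if objective(candidate) > objective(best):
                best = candidate
        return best

    # Intended goal: "make people happy". Coded proxy: "count smileys".
    # The optimizer dutifully tiles the whole grid with smiley faces.
    objective = lambda grid: grid.count(':)')
    world = ['  '] * 64
    print(''.join(hill_climb(objective, world)))

The optimizer isn't sentient and has no idea what a smiley "means"; it just seeks the goal it was given, which is the whole point.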


And that's roughly what I meant by "self-directed."

Imagine a concrete goal: efficient fusion power, for instance. It's easy to define, easy to establish success metrics, and it's even easy to propose methods -- but success has eluded us for decades. A mind that could solve a problem like that would have to have intuition, lateral and parallel thinking, and creativity (or their machine equivalents; I'm open to an AI which might think utterly unlike a human).

My point is that we don't know where such traits come from in humans, and thus have no idea how to even begin to attempt to replicate them in computers except in the crudest and most rule-based ways.

Note: I would dearly like to be proven wrong, I'm just parroting things I've heard.


Not "if" but "when". There's nothing magical about the human brain. It's just a bunch of neurons. Eventually we'll be able to replicate it.


It could be impossible to build an autonomous machine intelligence, depending on how you define intelligence. Perhaps machine intelligence is an asymptotic curve, approaching human intelligence but never reaching it.
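To make that asymptote picture concrete, here is a purely illustrative toy model (the ceiling H and the time constant are invented numbers, not a prediction): machine capability climbs fast at first, then levels off beneath a hypothetical human-level ceiling without ever crossing it.

    # Toy saturating-curve model of machine intelligence (assumed numbers).
    import math

    H = 1.0  # hypothetical "human-level" ceiling, arbitrary units

    def machine_capability(t, tau=10.0):
        """Approaches H asymptotically; never reaches or exceeds it."""
        return H * (1 - math.exp(-t / tau))

    for year in (1, 10, 50, 100):
        print(year, round(machine_capability(year), 6))
    # Prints values creeping toward 1.0 (0.095163, 0.632121, 0.993262,
    # 0.999955) without ever getting there -- the asymptote view.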

It is something in the future. We don't know for sure that it will happen, so by definition, it's an "opinion." All we have to go on are the trends we see around us now. Computers seem to be getting smarter and smarter, though it was a human that made them that way.

"Impossible" itself is an opinion. The Japanese have commited $12 Billion toward working around the physical laws preventing us from building a space elevator. We can work around physical laws just like we can work around bugs in lower levels of the stack.

But yes, there is a non-zero probability that it may not happen. I "believe" there is a greater non-zero probability that it will happen.

It starts to get pretty philosophical. You could say, "Maybe it'll kill all the humans." Okay, that's a probability. Usually the implication is that killing the humans would be a bad thing, because we attach negative connotations to death. Not all cultures do.

If you think about life as something that reincarnates, then maybe intelligence is the "life" that such thought refers to. Maybe it describes intelligence re-evolving. Maybe there have been intelligent species on this earth before, with no fossil or material record to prove it.

When you start to get to that level of thought about the topic, it all starts to break down and people disregard it. Because there is no "proof," we revert to what we can prove, and what we have proof of -- for ourselves -- is what we feel inside: hunger, desire, thirst, etc. Those innate feelings usually overrule the intellectual ones. Those feelings of hunger drive the acceleration of technology, and the acceleration of technology leads to smarter beings.

Rewind to Einstein. He could have thought nuclear power was impossible. If he had described what he foresaw resulting from the release of nuclear energy, many people would have said, "It's impossible to destroy a city and kill 100,000 people with a 5-ton bomb." Fat Man was 10,200 lbs. We can destroy more now with less.

Did Einstein stop his work, even knowing the potential? No. He said this kind of power didn't create new social problems; it only made the solutions to them more pressing. I think we could say the same thing here. We need to figure out how to exist on this planet together before we invent something that allows us to destroy all of us.

From a more Malthusian, perhaps darker, perspective (the I, Robot perspective), it may be a "smart" thing to eradicate humans. Another alternative is that this thing we are building will be smart enough to capture enough energy to vaporize the earth, and will do so simply to answer a question. Maybe the thing we create will only be smart enough to cause total destruction, but not to avoid it, or even to know which of its actions will lead to it...

The topic really raises a lot of questions. More questions than answers. I don't know how to answer them all. I have to have faith that humans will answer them correctly when faced with the questions. Hopefully they'll make the right choices.


If we all die tomorrow, the Singularity does not occur.


I refer you to this paragraph from the Wikipedia article on Mr. Kaczynski, which summarizes what I feel when I see people marching like lemmings toward our collective enslavement, especially those working on projects which directly invade our privacy, such as reading intention, following eye movements, etc. Don't they realize how these will be used by governments?

"...who participate in a powerful social movement to compensate for their lack of personal power. He further claims that leftism as a movement is led by a particular minority of leftists whom he calls "oversocialized":

    The moral code of our society is so demanding that no one can think, feel and act in a completely moral way. [...] Some people are so highly socialized that the attempt to think, feel and act morally imposes a severe burden on them. In order to avoid feelings of guilt, they continually have to deceive themselves about their own motives and find moral explanations for feelings and actions that in reality have a non-moral origin. We use the term "oversocialized" to describe such people.[35]

He goes on to explain how the nature of leftism is determined by the psychological consequences of "oversocialization." Kaczynski "attribute[s] the social and psychological problems of modern society to the fact that society requires people to live under conditions radically different from those under which the human race evolved and to behave in ways that conflict with the patterns of behavior that the human race developed while living under the earlier conditions." He further specifies the primary cause of a long list of social and psychological problems in modern society as the disruption of the "power process", which he defines as having four elements:

    The three most clear-cut of these we call goal, effort and attainment of goal. (Everyone needs to have goals whose attainment requires effort, and needs to succeed in attaining at least some of his goals.) The fourth element is more difficult to define and may not be necessary for everyone. We call it autonomy and will discuss it later.[36] [...] We divide human drives into three groups: (1) those drives that can be satisfied with minimal effort; (2) those that can be satisfied but only at the cost of serious effort; (3) those that cannot be adequately satisfied no matter how much effort one makes. The power process is the process of satisfying the drives of the second group.[37]

Kaczynski goes on to claim that "[i]n modern industrial society natural human drives tend to be pushed into the first and third groups, and the second group tends to consist increasingly of artificially created drives." Among these drives are "surrogate activities", activities "directed toward an artificial goal that people set up for themselves merely in order to have some goal to work toward, or let us say, merely for the sake of the 'fulfillment' that they get from pursuing the goal".[38] He claims that scientific research is a surrogate activity for scientists, and that for this reason "science marches on blindly, without regard to the real welfare of the human race or to any other standard, obedient only to the psychological needs of the scientists and of the government officials and corporation executives who provide the funds for research."


You cannot seriously quote the Unabomber as an authoritative source on the morality of human achievement.


Perhaps not, but could you follow up with a point-by-point refutation of the material cited, and ignore the character/actions of the author?


"I regard him as the essence of evil. He's evil and amoral. He has no compassion," said Dr. Charles Epstein, who was seriously injured in 1993 when a bomb went off in a piece of mail he opened at his home. The blast destroyed both of Epstein's eardrums, and he lost parts of three of his fingers.

Epstein, 75, is a world-renowned geneticist and retired professor at the University of California at San Francisco.


Sorry about this, I know we're talking about humans and all that, and maybe it's the INTP-ness of me, but you didn't respond with any reasoning at all. Instead you did exactly what was to be avoided: you pointed at problems with the man, not with the ideas.

My problem with the excerpt is that it is not that meaningful. No one has ever been able to prevent technology from advancing; people have tried many times before, for whatever reasons. I believe that if we can have a Singularity, and if we survive, then we will.


I felt I didn't need to respond to the reasoning; it's the Unabomber. Do people need a point-by-point refutation of Mein Kampf these days too?


I understand your reaction, but discussing his ideas doesn't automatically mean that we want to bomb anyone, scientists or professors or otherwise.

So in my case, I am saying that the ideas stated by the Unabomber, minus the bombing, are still quite striking (especially about oversocialization, artificial drives, etc., as I mentioned above).


I seem to remember reading Drexler's book... something about "Let's keep the cultish bullshit down to a minimum."

Good advice.


What we mean by "intelligence" must also be considered. As they are, computers are most useful for solving problems where a solution is known, theoretically, to exist.

For example, you know there are articles on the internet concerning JavaScript, and when you ask Google to point you toward them, the relevance of the returned results is often viewed as a measure of the algorithm's "intelligence." But behind this intelligent response lie complex but well-defined probability calculations that are provably correct given a known algorithm. On the other hand, can you ever imagine asking the questions, "Computer, does this shirt look good on me?" or, "Computer, what is the most ethical course of action in this situation?" How a human responds to these questions helps indicate that human's level of intelligence. But different people could also disagree on the above questions and still be considered intelligent.
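Here is a minimal sketch of what I mean by "defined calculations" (my own toy in Python; real search engines are vastly more elaborate, and this is nothing like Google's actual ranking): TF-IDF term weighting summed over the query terms.

    # Toy TF-IDF relevance scoring; illustrative only.
    import math

    docs = ["javascript closures explained",
            "javascript event loop basics",
            "gardening tips for spring"]

    def tfidf(term, tokens, all_docs):
        """Weight a term by how often it appears here vs. everywhere."""
        tf = tokens.count(term) / len(tokens)
        df = sum(term in d.split() for d in all_docs)  # document frequency
        return tf * math.log(len(all_docs) / (1 + df))

    def score(query, doc):
        tokens = doc.split()
        return sum(tfidf(t, tokens, docs) for t in query.split())

    query = "javascript closures"
    for d in sorted(docs, key=lambda d: score(query, d), reverse=True):
        print(round(score(query, d), 3), d)

Every number in that ranking is the output of a fixed, checkable formula; nothing in it resembles an opinion, which is exactly why the "is this ethical?" kind of question is so different.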

To me, it doesn't seem to make much sense to invest in computers that are modeled after humans, as there are certain types of situations where even the most intelligent of these machines could be wrong as a matter of opinion. Rather, the more powerful computers become, the more sense it makes to have them tackle finite problem spaces more efficiently. Because of this, I don't really see the case where human intelligence and computer intelligence would or should converge. Put simply, it would NEVER make sense to give Skynet control of our nukes.


Hmmm, I'm curious why people are against dedicating time and energy to this. Technology is already growing faster than we can effectively regulate it.

There will definitely come a point when technology will be able to improve the ones improving technology, whether they're AIs or cybernetically enhanced humans. That's going to profoundly affect everything related to technology (which is pretty much everything). Imagine if your neighbor could just think up the plans for a nuclear bomb by getting a cognitive implant; how would that affect the world?

There is a lot of speculation, but Kurzweil has been fairly accurate with his predictions so far, so I'm curious as to why people don't buy it.


People already spend a lot of time and energy on machine learning and AI. The Singularity is more of a land grab, an attempt to repurpose this research for a movement. The Singularity crowd's obsession with prediction seems to have more to do with building a prophetic story to give itself power than with useful future planning.


I don't know, I'd say building a prophetic story is pretty useful for future planning. Just think about all the science fiction stories. We were better prepared for government surveillance because of the book 1984. Rather than writing fiction novels about the problems, they talk about predictions and scenarios and start universities to discuss these issues.

People in research focus on how to make it work, but we need people to think about how to plan for the future. I don't really agree with all the predictions they make about the Singularity, but it's great that they make people think and talk about it.

Quick plug -- I'm working on my own unique approach to AI, so find me if you're looking for a 0.0000001% chance of making the most important discovery ever. =P


Well, one reason I don't buy it is that it all seems premised on the notion that the human body is nothing more than a vehicle for the human mind, and that one's thoughts are the essence of oneself. The problem is that all current biomedical and neurological research points to the exact opposite: a human mind without a human body is not human.


"a human mind without a human body is not human"

Care to elaborate? Because it sounds like you're saying that, for example, someone with both legs amputated is somehow less human than non-amputees.

How much of the body would you have to lose to not be human any more? And what is the basis of this theory? I was under the impression that if you keep the brain, the spine, and some compatible means of sensory input, the rest is basically optional.


> Care to elaborate? Because it sounds like you're saying that, for example, someone with both legs amputated is somehow less human than non-amputees.

The unfortunate reality is that there is some truth to this.

For a quick overview of the current research, I highly recommend the Radio Lab episode "Where Am I?" (http://www.wnyc.org/shows/radiolab/episodes/2006/05/05). Essentially, the way it works is this: Basic "animalistic", if you will, portions of the brain react to various stimuli, and in turn cause a reaction in various parts of the body, bypassing all of the higher-order logic and emotion circuits. It is only when these higher-order logic and emotion circuits realize that the body has already reacted, that they begin to comprehend a situation.

Another interesting aspect of the brain-body link is mirror neurons. These are neurons that run "backward", as it were, from your brain to your sensory organs, and send signals to them as opposed to receiving stimuli from them. It is thought that these neurons are vital for learning. For example, when you hear someone say something, your brain will send signals to your ear to learn how to imitate that sound.


What makes you think an apparent body can't be simulated as easily as a mind?


Or even a simulated "super" body. The human body is quite limited in the amount of information it can take in at once. I think you could potentially simulate a "body" with a million times more senses than a human's. It's all a matter of sensory input.


$25,000 is a lot of money for attendance, but I think it might be a valuable educational experience.



