It feels cultish, but it's not. It's the real deal. It's the real deal because there will come a time in the future when computers are "smarter" than people. Computer analytical skills will progress faster than human analytical skills. Computers will be asking humans questions.
It's not a cult at all. You don't have to believe it is coming. You don't have to associate with people who do believe it is coming.
It seems like a logical certainty to me. Unless there is some sort of a human intervention or tragedy that stops us from progressing and innovating and building more powerful computing systems, then what is the alternative? That humans continue to outpace computers forever?
If you look around at the world already, most people have no idea what is going on with the internet, cloud computing, artificial intelligence. Robots are sweeping our floors and mowing our lawns and killing our enemies.
What is the alternative? How might The Singularity not occur?
> It seems like a logical certainty to me.
> What is the alternative? How might The Singularity not occur?
Human intervention and refusal to allow it, as you already mentioned.
It could be a self-fulfilling prophecy, but not necessarily a logical certainty. For example, if I could convince you (with the same amplitude of conviction that you have right now towards a "smarter"-computer future) that such a future will inevitably result in the slavery of one man by another, and in ever-more-powerful dictatorships (that call themselves democracies), this very thought would be the first step towards the non-conception of the singularity. I believe a person called Theodore Kaczynski has previously argued along these lines, though he chose a more explosive form of argument than mere words :) http://en.wikipedia.org/wiki/Theodore_Kaczynski
One alternative is to restrict research to biological advances that benefit humans (stem cell research, for example) rather than trying to create even-more powerful machine learning tools all the time.
Just because "everyone" eats McDonald's, that doesn't take away your power to choose never to visit a McDonald's (for whatever reason), for example. Look at how scarily equipped the police are today, at the things man has created for war (weapons that boil you invisibly from a remote distance), and you will understand that technology is fun only as long as it isn't. It doesn't take a genius to look at each invention and decide whether it will most probably be used for good or result in further loss of privacy and freedom. Two headlines from today's ACM "TechNews" newsletter in my inbox are "The Display That Watches You" and "Predictive Powers: A Robot That Reads Your Intention?". My reaction is: "Are you dumb? Don't you see where this is leading?"
Innovations in one field are inevitably linked to innovations in others. Do you really think it's possible to hold back technological progress? That effectively means restraining human ambition. Would that not require an extreme police state?
What if it's impossible to develop an autonomous machine intelligence? I know that statement is equivalent to "what if it's impossible to develop heavier-than-air flight," and I think that we definitely should pursue AI research, but it is by no means guaranteed that we will end up with anything that behaves like a self-directed mind.
I'm choosing my words here, because obviously a Singularity situation would not require a human-like intelligence. But as far as I know, neurologists and psychologists don't really know how human minds work, and computer scientists haven't built a computer mind that showed even a glimmer of "free will" or "self-awareness," if you'll pardon the terms. How can we take it for granted that we'll get there?
The Singularity doesn't require computer sentience. Researchers at the Singularity Institute refer to their goal as a "powerful optimization process." All that's required is that it's better at general-purpose goal-seeking than humans; that would logically include the ability to wipe out the human race and tile the solar system with little smiley faces if we set its goal incorrectly.
And that's roughly what I meant by "self-directed."
Imagine a concrete goal: efficient fusion power, for instance. It's easy to define, easy to establish success metrics, and it's even easy to propose methods -- but success has eluded us for decades. A mind that could solve a problem like that would have to have intuition, lateral and parallel thinking, and creativity (or their machine equivalents; I'm open to an AI which might think utterly unlike a human).
My point is that we don't know where such traits come from in humans, and thus have no idea how to even begin to attempt to replicate them in computers except in the crudest and most rule-based ways.
Note: I would dearly like to be proven wrong, I'm just parroting things I've heard.
It could be impossible to build autonomous machine intelligence, depending on how you define intelligence. Perhaps the intelligence of machines is an asymptotic curve, approaching human intelligence but never reaching it.
It is something in the future. We don't know for sure that it will happen, so by definition, it's an "opinion." All we have to go on are the trends we see around us now. Computers seem to be getting smarter and smarter, though it was a human that made them that way.
"Impossible" is itself an opinion. The Japanese have committed $12 billion toward working around the physical laws that prevent us from building a space elevator. We can work around physical laws just like we can work around bugs in lower levels of the stack.
But yes, there is a non-zero probability that it may not happen. I "believe" there is a greater non-zero probability that it will happen.
It starts to get pretty philosophical. You could say, "Maybe it'll kill all the humans." Okay, that's a probability. Usually the implication is that killing the humans would be a bad thing, because we attach negative connotations to death. Not all cultures do.
If you think about life that reincarnates, then maybe intelligence is the life that thought refers to. Maybe it is a description of intelligences re-evolving. Maybe there have been intelligent species on this earth before with no fossil or material record to prove it.
When you start to get to that level of thought about the topic, it all starts to break down and people disregard it. Because there is no "proof" we revert to what we can prove and what we have proof for -- for ourselves -- is what we feel inside: hunger, desire, thirst, etc. Those innate feelings usually overrule the intellectual ones. Those feelings of hunger drive the acceleration of technology and the acceleration of technology leads to smarter beings.
Rewind to Einstein. He could have thought nuclear power was impossible. If he had described what he foresaw occurring as a result of the release of nuclear power, many people would have said, "It's impossible to destroy a city and kill 100,000 people with a 5-ton bomb." Fat Man was 10,200 lbs. We can destroy more now with less.
Did Einstein stop his work, even knowing the potential? No. He said this kind of power didn't create new social problems, it only made the solutions to them more pressing. I think we could say the same thing here. We need to figure out how to exist on this planet together before we invent something that allows us to destroy all of us.
From a more Malthusian, perhaps darker, perspective (the I, Robot perspective), it may be a "smart" thing to eradicate humans. Another alternative: maybe this thing we are building will be smart enough to capture enough energy to vaporize the earth, and it'll do that simply to answer a question. Maybe this thing we create will only be smart enough to cause total destruction, but not to avoid it, or even to know which of its actions will lead to it...
The topic really raises a lot of questions. More questions than answers. I don't know how to answer them all. I have to have faith that humans will answer them correctly when faced with the questions. Hopefully they'll make the right choices.
I refer you to this paragraph from the Wikipedia article on Mr. Kaczynski, which summarizes what I feel when I see people marching like lemmings towards our collective enslavement, especially those working on projects which directly invade our privacy, such as reading intention, following eye movements, etc. Don't they realize how these will be used by governments?
"...who participate in a powerful social movement to compensate for their lack of personal power. He further claims that leftism as a movement is led by a particular minority of leftists whom he calls "oversocialized":
The moral code of our society is so demanding that no one can think, feel and act in a completely moral way. [...] Some people are so highly socialized that the attempt to think, feel and act morally imposes a severe burden on them. In order to avoid feelings of guilt, they continually have to deceive themselves about their own motives and find moral explanations for feelings and actions that in reality have a non-moral origin. We use the term "oversocialized" to describe such people.[35]
He goes on to explain how the nature of leftism is determined by the psychological consequences of "oversocialization." Kaczynski "attribute[s] the social and psychological problems of modern society to the fact that society requires people to live under conditions radically different from those under which the human race evolved and to behave in ways that conflict with the patterns of behavior that the human race developed while living under the earlier conditions." He further specifies the primary cause of a long list of social and psychological problems in modern society as the disruption of the "power process", which he defines as having four elements:
The three most clear-cut of these we call goal, effort and attainment of goal. (Everyone needs to have goals whose attainment requires effort, and needs to succeed in attaining at least some of his goals.) The fourth element is more difficult to define and may not be necessary for everyone. We call it autonomy and will discuss it later.[36] [...] We divide human drives into three groups: (1) those drives that can be satisfied with minimal effort; (2) those that can be satisfied but only at the cost of serious effort; (3) those that cannot be adequately satisfied no matter how much effort one makes. The power process is the process of satisfying the drives of the second group.[37]
Kaczynski goes on to claim that "[i]n modern industrial society natural human drives tend to be pushed into the first and third groups, and the second group tends to consist increasingly of artificially created drives." Among these drives are "surrogate activities", activities "directed toward an artificial goal that people set up for themselves merely in order to have some goal to work toward, or let us say, merely for the sake of the 'fulfillment' that they get from pursuing the goal".[38] He claims that scientific research is a surrogate activity for scientists, and that for this reason "science marches on blindly, without regard to the real welfare of the human race or to any other standard, obedient only to the psychological needs of the scientists and of the government officials and corporation executives who provide the funds for research."
"I regard him as the essence of evil. He's evil and amoral. He has no compassion," said Dr. Charles Epstein, who was seriously injured in 1993 when a bomb went off in a piece of mail he opened at his home. The blast destroyed both of Epstein's eardrums, and he lost parts of three of his fingers.
Epstein, 75, is a world-renowned geneticist and retired professor at the University of California at San Francisco.
Sorry about this, I know we're talking about humans and all that, but maybe it's the INTP-ness of me: you didn't respond with any reasoning at all, but instead did exactly what was to be avoided, finding problems with the man and not the ideas.
My problem with the excerpt is that it is not that meaningful. No one has ever been able to prevent technology from advancing; they've tried many times before, for whatever reasons. I believe that if we can have a singularity, and if we survive it, then we will have it.
I understand your reaction, but it doesn't automatically mean that we want to bomb anyone, scientists or professors or others, just because we are discussing his ideas.
So in my case, I am saying that the ideas stated by the Unabomber, minus the bombing, are still quite striking (especially about oversocialization, artificial drives, etc., as I mentioned above).
It just feels way too cultish