
I agree with their financial approach, which is to rely on voluntary donations, so more power to them. By the same token, I think NASA should be funded voluntarily and would probably have a larger budget if it were. I would donate to NASA if it repudiated government money.

I reject "how many peer-reviewed, university-associated journal articles have they published?" as any measure of success. The university system is a closed guild and doomed in the internet age. I'd bet my bottom dollar the next big breakthrough in AI (or any field) comes from outside that system. Anyone who has an original, new idea to break the AI logjam (P=NP, for instance) has no peers in the university system, but has a handful of peers worldwide reading arxiv.org or other open forums, even HN.

As far as MIRI's AI program goes, I think they commit the same error as everyone else: confounding the processing of meaningless symbols (what computers can do) with actual sensory awareness of existence (what brains do). The latter is what gives the symbols meaning and humans are not threatened by the former. Few people truly understand the import of Searle's Chinese Room thought experiment and its relevance to AI and computers. But these are philosophic and epistemological questions that most people dismiss or ignore at their own peril.



> The latter is what gives the symbols meaning and humans are not threatened by the former.

For what it's worth, I think you're wildly wrong a) that the Chinese room experiment has anything profound to say about conscious experience, and b) that most AI researchers haven't thought about this.

That said, MIRI is concerned exactly with threats from machines that are very very good at processing meaningless symbols. If someone writes a simple reinforcement learning algorithm, asks it to produce paperclips, and it destroys the human race (http://wiki.lesswrong.com/wiki/Paperclip_maximizer), we're really past the point of caring whether the algorithm has awareness of its own experience. There are interesting philosophical questions there, but it's not within the domain of solving the problem MIRI cares about.
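To make that concrete, here is a minimal sketch of the kind of agent at issue; the names and the reward function are invented for illustration, not anything MIRI has built:

    # Toy "reward": count paperclips produced. Everything here is hypothetical.
    def reward(plan):
        return plan.count("make_paperclip")

    # Candidate plans trading off paperclips against everything else we value.
    plans = [["make_paperclip"] * n + ["respect_humans"] * (10 - n)
             for n in range(11)]

    # A greedy optimizer picks whatever scores highest on the proxy reward;
    # it has no idea what the symbols mean, it just climbs a number.
    best = max(plans, key=reward)
    print(best)  # the all-paperclips plan wins every time

The danger comes from the optimization pressure, not from any inner experience.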


> For what it's worth, I think you're wildly wrong a) that the Chinese room experiment has anything profound to say about conscious experience, and b) that most AI researchers haven't thought about this.

We'll just have to disagree about a) but see my answer to the other reply about the implicit question behind the CRE.

Regarding b), I never said they haven't thought about it. I said "few people truly understand" by which I meant they have failed to understand the implications and have drawn the wrong conclusions. You don't get gold stars for thinking hard.

Regarding LessWrong, all I can say is that you can't know you are "less wrong" until you know what is true. In logic you can't assume an unknowable as your standard of the true. But to reiterate my point, these are questions of philosophy and epistemology, fields which are absolutely essential to the "domain of solving the problem MIRI cares about".


Chinese Room is not an old, still-unanswered philosophical problem. The answer is obvious: it's not the person that understands Chinese, but the system of person + room, with the room setup probably doing most of the work.


The university system is a closed guild? What are you talking about? Half the startups in the Valley started at Stanford. Who do you think posts on arxiv.org? Snowmen?


>humans are not threatened by the former

So you categorically reject the scenario of a machine "processing meaningless symbols" being harmful in any way? How does that follow from the Chinese Room experiment?


> So you categorically reject the scenario of a machine "processing meaningless symbols" being harmful in any way?

How does this follow from what I said?

> How does that follow from the Chinese Room experiment?

The implicit (philosophic) question behind the Chinese Room experiment is: where does meaning come from? What imbues the meaningless symbols that computers process with meaning? This is an old, unanswered philosophic question. I implied it in my answer, but I will make it explicit: it comes from our perceptual awareness of existence via the senses.


> How does this follow from what I said?

You said "humans are not threatened by the former". "Not threatened" implies that they can't be harmful.

>The implicit (philosophic) question behind the Chinese Room experiment is: where does meaning come from? What imbues the meaningless symbols that computers process with meaning? This is an old, unanswered philosophic question. I implied it in my answer, but I will make it explicit: it comes from our perceptual awareness of existence via the senses.

I don't see why that would imply anything at all about the safety of machines.


> where does meaning come from?

I am sure humankind will be able to construct artificial intelligences long before we are able to answer that question. And the AIs will also start to ponder that question, and they will make no more progress than we have.


You know we can give a computer senses super-easily, right?
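For instance, a minimal sketch, assuming opencv-python is installed and a camera is attached:

    import cv2  # pip install opencv-python

    cap = cv2.VideoCapture(0)   # open the default camera
    ok, frame = cap.read()      # one "glance" at the world
    if ok:
        print(frame.shape)      # e.g. (480, 640, 3): raw visual input as numbers
    cap.release()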


> Few people truly understand the import of Searle's Chinese Room thought experiment

If you are talking about the AI community, this just isn't true. I have a degree in Cognitive Science, and I took a class with John Searle as an undergrad. The Chinese Room was hammered away at in intro philosophy and cogsci 101 classes; people understand it just fine. For some reason armchair philosophers seem to find it fascinating, but it fundamentally misses the point and has largely been ignored in modern times, for good reason.


Machines without "subjective experience" or "semantic content" can still fire guns.


> confounding the processing of meaningless symbols (what computers can do) with actual sensory awareness of existence (what brains do).

I am highly dubious that the qualitative nature of awareness, removed from symbols, is of any value.

I have long ago accepted that I "observe" my brain—that is, my brain registers its own actions. At this point, understanding in the sense of the Chinese Room experiment is mostly a question of "how do you want to serialize these symbols?". The meaning is in the relations between symbols in that person's (or computer's) mind—mind-bogglingly complex, sure, but hardly non-serializable.
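A toy illustration of what I mean by serializable, with an invented symbol graph; the relations survive a round trip untouched:

    import json

    # Invented toy symbol graph: "meaning" as relations between symbols.
    relations = {
        "dog": {"is_a": "animal", "has": "fur"},
        "animal": {"is_a": "living_thing"},
    }

    serialized = json.dumps(relations)   # serialize the whole "mind"
    restored = json.loads(serialized)    # read it back
    assert restored == relations         # nothing in the relations is lost

Scale is the only thing separating this from a real mind's web of relations, on my view.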

I find it very interesting that consciousness itself has had such a difficult time being hammered out. I have read quite a bit of philosophy on the subject (Searle's publications among them), and it seems that there are fundamental disagreements over diction. I have had a very, very difficult time wrapping my head around both the arguments for "free will" as some kind of quantum effect (see: Penrose's The Emperor's New Mind and its "sequel") and the arguments for some specially defined consciousness with requirements for "awareness". Think about a dolphin: it's fairly easy to imagine what it might be aware of, although the particulars are obviously unreachable because our brains are not wired to be aware of the same sensors of which a dolphin brain is aware. That's the practical limit, though, in terms of having difficulty grasping what a dolphin might experience—you might have the same difficulty understanding how a blind-from-birth person is aware of the world because you can't cancel out your own visual wiring. This isn't really a barrier in terms of having the same ability as a blind person (or vice versa) except where that sensory awareness is critical.

Now, in terms of meaning, talk to people with abnormal thought patterns—schizophrenics, bipolar people, borderline personality people, OCD people. For instance, many with severe personality disorders have a tendency to dichotomize everything, with difficulty integrating "shades of grey" into everyday thinking. Others—e.g. some compulsive liars—have a very difficult time pinning down specific meanings from an objective standpoint. Compartmentalization is a mechanism allowing multiple truths in compartments while allowing contradictions in a general sense. Meaning is evidently subjective. Which is more "meaningful" to you, attempting to understand the mapping of incomprehensible numbers of physical neurons, or attempting to understand the mapping of incomprehensible numbers of non-linear equations? They both come out to about the same level of "meaningless symbol" processing, with no "magic", to me.

The structures of current neural network models are very, very rudimentary now, probably dozens of orders of magnitude less complex than those of the human brain. But current research—including analyzing the brain, understanding how to describe it in terms of our current understanding of neural networks, serializing it to a model, and simulating the model—is even now within reach of simulating a nematode brain. At that point, the arguments over consciousness merge into "how big/smart/aware/whatever does a brain need to be to be conscious?" and "do we have strong AI?". It becomes a game of "is this particular instance of AI closer topologically to something we have qualitatively shown to be a strong AI contender via something like the Chinese Room experiment (I'm pretty sure a lot of humans would fail a Chinese Room experiment because we can be really dumb, so we can compare/contrast against a human success rate), or to something we modeled after biological research?"
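For a sense of what "simulating" means here, a minimal leaky integrate-and-fire sketch; the two-neuron "connectome", weights, and constants are all invented, and real nematode models are far richer:

    import numpy as np

    # Made-up wiring: neuron 0 excites neuron 1 with weight 0.9.
    W = np.array([[0.0, 0.9],
                  [0.0, 0.0]])
    v = np.zeros(2)              # membrane potentials
    threshold, leak = 1.0, 0.9

    for step in range(60):
        spikes = (v >= threshold).astype(float)
        v = leak * v * (1 - spikes)   # reset fired neurons, leak the rest
        v += W.T @ spikes             # propagate spikes along the wiring
        v[0] += 0.15                  # constant drive into neuron 0
        if spikes.any():
            print(step, spikes)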

Consciousness is nothing special anymore. The magic is modeling the facets of awareness you find fascinating or unique, in any language you want. If you don't think you can model it, try to articulate what quality you would have difficulty modeling. I suspect I would have difficulty understanding the quality in my own experience.




