"Human or superhuman performance in one task is not necessarily a stepping-stone towards near-human performance across most tasks.
By the same token, the ability of neural networks to learn interpretable word embeddings, say, does not remotely suggest that they are the right kind of tool for a human-level understanding of the world. It is impressive and surprising that these general-purpose, statistical models can learn meaningful relations from text alone, without any richer perception of the world, but this may speak much more about the unexpected ease of the task itself than it does about the capacity of the models. Just as checkers can be won through tree-search, so too can many semantic relations be learned from text statistics. Both produce impressive intelligent-seeming behaviour, but neither necessarily pave the way towards true machine intelligence."
So true, and this is why I don't listen when Elon Musk or Stephen Hawkings spread fear about the impending AI disaster; they think that because a neural network can recognize an image like a human can, it's not a huge leap to say it will soon be able to think and act like a human, but in reality this is just not the case.
I think you've misrepresented their views. They doubtless know roughly what level the technology's at, even if it's not their speciality. They're worried about what level AI might go to in the next few decades/centuries. And you say they worry about an "impending AI disaster", but I don't think either of them has said they believe such a catastrophe to be imminent.
There are good reasons to be at least a little worried about AI in the long-term future. (Not to mention the imminent social effects of replacing various categories of worker with AI -- many examples of this have already happened, or are happening right now.)
Also, a minor nitpick, but you've misspelled Stephen Hawking's name. There's no "s" on the end.
People were worried about losing their jobs back when computers first became widespread too, but actual statistical evidence suggests that the number of jobs available actually rose.
People did lose their jobs, and the new jobs available went to a new set of differently skilled people; that doesn't help those who lost their jobs. Technological change brings with it social upheaval that must be dealt with; AI brings potentially mass unemployment because any "new" jobs will be handled by the AI, not the displaced people.
That would be a threat solely if AI were very expensive and only realizable by a very small group of people. Computers have caused significant damage to our society because of mostly cultural reasons that are going to work themselves out over the next decades. Once people realize that the exact computing technology which made their jobs redundant is equally effective at competing against their prior employers, and that they have the stupendous advantage of having near nil overhead while their ex-employer is a monolith unwilling to adapt properly, that harm will be ameliorated, IMO.
With AI, though, I think we have good reason to believe that the change would be much more sudden, and much more widespread. You can freeze wages and scalp every bit of productivity increase for yourself and shareholders as a corporate owner and get away with it for 30+ years (as has been done). But if even, say, 15% of the total workforce were laid off in the first year... things would change rapidly. It would be quickly realized that the displaced employees could easily simply get an AI for themselves, and price the ex-employer out of business rapidly. Nothing motivates like fear of starvation. Computers were only able to cause the problems we're facing today because it was a slow, suffocating constriction, the cost of living gradually chipping away at the frozen median wage, rather than a brick to the back of the head, as the introduction of generalized AI would be.
Honestly, another 99% protest is more likely than a massive showing of competition in hypothetical markets. The average person isn't that technologically inclined. On top of that, technology costs money. Raspberry Pis won't be cheap forever, and even then, the fraction of people who understand the tests they want to run, have massive use-case data, and have the time to supervise will probably be small even among those interested in it.
I'm all for decentralization of manufacturing and development in software. But realistically, money isn't going anywhere, and far scarier things may occur than an AI takeover.
> It would be quickly realized that the displaced employees could easily simply get an AI for themselves, and price the ex-employer out of business rapidly.
I think you're missing the point. It doesn't matter who winds up running the AI: the AI is still doing all the work many people used to do, and all of those people are still out of work. Even if some of them displace the old boss by running with lower overhead, most will still be unemployed due to the collapse of demand for human labor. Increased productivity in the AI age means human jobs lost with no replacements coming.
>> Once people realize that the exact computing technology which made their jobs redundant is equally effective at competing against their prior employers, and that they have the stupendous advantage of having near nil overhead while their ex-employer is a monolith unwilling to adapt properly, that harm will be ameliorated, IMO.
This hasn't worked anywhere near that well in the past, so why should it work any better in the future?
The reason it can't is that we can't all use new technologies _and_ expect to make a comfortable living out of it. The reason is good old supply and demand. If everyone can use a technology, then the ability to use it is not something you can sell for very much.
Take writing for example. Back in the time of the pharaohs, priests were the only ones who knew writing and reading and they had immense power as a result. Today, knowing how to read and write is more of a prerequisite to be able to find work, not a skill you can sell in and of itself.
Also, it happens that of all the people I know who work with computers, I can't think of anyone, off the top of my head, who switched from a different kind of job. I'm sure there are some, but they'll be few and far between. For most people, losing their job to a computer means just that: that they lose their job. To a computer.
And think of all the McJobs that keep unemployment low in most western economies: flipping burgers, stacking shelves, cold-calling, part-time stuff. _That_ is the kind of job most people do in the information age.
Aye, a monolith with the capital to buy them out of the market, if they look even remotely threatening, just to be sure. Then that same boss can take the tech they bought for a tiny cost (to them), encase it in patents and copyrights and leave it to rot where nobody can get big ideas about using it to give power to the people and all that.
Snark about 'actual statistical evidence' notwithstanding, people losing their jobs is entirely compatible with an increase in the number of available jobs because, unsurprisingly, not all jobs are fungible.
It doesn't even have to be identical, it just has to be acceptably humanoid. For example, I imagine that in 5-star restaurants human waiters will still be the norm for some time, but enterprises like McDonald's or Starbucks would probably not blink an eye at replacing people with machines if possible, as there is no human element in a McDrive. It would go hand in hand with the edible substances they sell as food.
They could easily do that right now; the technology exists. It just wouldn't be cost-effective. They can get humans to work for so little that the government will pick up part of taking care of them in the form of food stamps. Pair that with a business climate that would punish the short-term expenditure needed to retrofit locations (even staggered and done slowly) so severely that it would mean insolvency in very short order, and they'll keep humans around for a while longer.
What mystifies me is why Netflix DVD processing centers have human beings in them. From their initial conception they could have been almost totally unmanned. No idea why they weren't. Maybe it would have actually had significant negative social response? I know that in my state, an automobile manufacturer was going to open a plant, but once the politicians found out that the plant was going to be heavily automated and provide 300 rather than 4000 jobs, all tax breaks were rescinded so they went elsewhere.
> What mystifies me is why Netflix DVD processing centers have human beings in them.
Watch an episode of "How It's Made" or similar, and note the places where an assembly line includes a human step. Those steps often involve fine adaptable manipulation (e.g. of inconsistently placed items), pattern recognition, or similar.
In general, a machine usually could handle that manufacturing step just as well if not better, but doing so would incur the development and construction costs of a highly specialized machine, plus maintenance. The hourly cost of a human employee may well work out cheaper, either in the short term or potentially even in the long run.
And that leaves aside other potential value of employing humans in those roles, including goodwill/PR/perception. An apparent cost savings from automation will quickly evaporate if the company finds itself on the evening news. That holds doubly true if replacing an existing human role with automation.
Rather than be afraid of losing their jobs, they should have been afraid of something much worse - keeping their jobs, creating 10x as much value per year for their employer as before, but seeing their pensions disappear (with no gigantic raise to compensate), their working hours expand, raises shrink to less than the rate at which cost of living increases, and society turning their back on them and calling them entitled brats for even wanting better pay. Oh, and effectively being on-call 24/7 too.
Every economist and their dog agrees that Finland should lower the cost of employing people so we could get out of a seven-year-long downturn. But unions have a chokehold on this country. My opinion is that unions are OK, but you have to regulate against monopolistic developments, just like with private companies. Currently there is a single union guy who can shut down all Finnish exports for a week.
You could say much the same thing about horses before the invention of the internal combustion engine. Technology (as plows did for horses) creates more wealth for the worker until it doesn't.
The potential consequences here are rather more dire than slight shifts in the relative size of employment sectors. A hard takeoff of superintelligent AI could be an extinction-level event, or even the extinction-level eventuality that solves the Drake Equation and makes the universe devoid of life.
I certainly can't say that the pessimists are correct, but I also can't say with certainty that they're incorrect. I do think it's silly to imagine that the only configuration for intelligence requires human-grade empathy, or that all sorts of differently-intelligent systems well below the complexity of the human brain couldn't be lethal.
There's no reaction mechanism to explain how technology magically creates jobs. Jobs ARE created by energy use, via the fairly obvious mechanism of processes using energy.
One of the more useful heuristics I've heard is that when humans make split second decisions (like recognizing a face in an image or choosing the next word to say) there's only time for a few - like five to seven - layers of neurons to fire between impulse and action. These are the sort of tasks that neural networks are currently making great progress on.
But things like long-term planning and goal formation aren't in this quick-reflex domain. (Though humans are arguably pretty bad at these sorts of activity too, on average.)
We're also generally training neural nets to succeed at well-defined, bounded tasks, which don't really require long term planning so much. The video-game playing work at Deep Mind is actually a pretty important step forward, though; they're doing impressive stuff with domain-transfer (learning multiple games with a single network), and as one goes further with games, we might start seeing longer-term planning and concept formation. (eg, if we start seeing awesome StarCraft players...)
There's a case to be made, though, that we humans just trained on a big, broad objective function - survive, thrive, and multiply - with training applied across millions of years. And a good deal of intense local training over the first ten years of life, using a massively parallel computer that far outstrips any data center.
The question for strong AI seems to rest heavily on how much of the work done in biology is extraneous, and whether we end up with a system so complex that it requires eighteen years of training time. If creating a single strong AI actually requires that much effort, we don't have to worry too much about skynet, methinks.
But we certainly don't have to worry about them spontaneously arising when we train with simple objective functions.
Note: even if training a single AI would require 18+ years, copying it will still be split-second. By this very token artificial minds have an advantage over meat ones.
And they learn in parallel: Google's self-driving cars and a demo from Nvidia both report new experiences to the cloud, which are then 'understood' or parsed, I guess, and the lessons learned are passed down to the other cars.
Lessons could also be passed from AI to AI (e.g. from Google to Ford) by sending the input [training data] and an output [reasonable response to that data] for the other AI to ponder.
Humanity is also immortal and self-modifying, it uses human bodies as the unit of program transfer and modification. Humanity's version is extremely slow, of course, and limited in quantity and quality of copies of a good program.
To be fair, we do have quite a handicap in that respect. A machine intelligence could, at least theoretically, pass around perfect copies of its learned knowledge near-instantly (maybe not though... integrating two neural networks that developed separately might be terribly difficult or even impossible... I wonder if anyone has attempted or studied this? If NNA learns to recognize human faces, and NNB learns to recognize text, is there a way to merge them? It's not like you can just average their weights together, especially if using a method that connects/prunes neurons as necessary). Humans, on the other hand, have to translate bioelectrochemical patterns created in very differently structured networks into rhythmically flapping a meat strip to cause air vibrations that will rumble a thin membrane of meat, feeding electrical impulses into the differing network, and its success depends upon an expansive, and almost completely unknown (to us), body of similar, but not identical, experiences, from hearing the same patterns of sounds while seeing objects or pictures of objects, possibly decades earlier, to accepting unspoken social cues early in development. What is astonishing is that language works at all. It really, really sounds like it shouldn't have a snowball's chance in Hell.
And the weird thing is, I've got this niggling suspicion that this handicap is actually a profound strength. That evolution didn't produce, by accident, so many species of organisms separated by walls of flesh and bone, detached from one another and capable of wandering far apart, then incapable of even expressing fundamentally novel concepts (I mean novel novel, such that your language has no word for it and such that any metaphor would be more misleading than enlightening). We could have evolved such that when holding hands, a flow of specially produced chemicals seeps from one's hand to the other, travelling to the brain and making the changes necessary to convey concepts exactly. Bacteria communicate in this way. They change each other. Directly. We could too. But... we don't. And there is maybe a reason for that.
> But... we don't. And there is maybe a reason for that.
There's a rather good argument that as a species, we're the stupidest thing nature could get away with that could still develop writing and science. So the reason may very well be that nature didn't bother.
In the same way a neuron is not something that possesses a mind, yet a suitably arranged collection of them suddenly does, I think one could argue that humanity is an actor with a will.
By the Bekenstein bound, they would eventually run out of states to be in, so the total length of their experiences (if they have any) would be finite.
Humanity cannot conquer death.
Of course, the number of possible states is very very large, so, extremely long lived, sure.
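For reference, the bound being invoked (this is just the standard statement and the usual back-of-envelope step from entropy to state count, not anything specific to this thread):

    S \le \frac{2\pi k R E}{\hbar c},
    \qquad
    N_{\text{states}} \le e^{S/k} = \exp\!\left(\frac{2\pi R E}{\hbar c}\right)

Finite radius R and energy E give a finite (if astronomically large) number of distinguishable states, which is the sense in which the experiences would be finite.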
Well, physically, yes. But as far as subjective experience goes, repeating states would not contribute to the total length of the subjective experiences. There is no way to distinguish whether the period of time between 5 and 10 seconds ago repeated exactly the same way a billion times before continuing or not. One assumes it didn't, but if it did, it wouldn't really matter that it did.
So for other things, it might be relevant that it continues to exist and repeats states, and it might even be worthwhile for it to make decisions while keeping in mind the fact that its decision might be at many different times (due to repeated states). But the length of its life internally viewed would still be finite, just with parts of it possibly having expression in infinitely many times of the external world.
As an absurd thought experiment, suppose that an entity arrives from the future, over the course of a year builds a time machine, and then uses it to go back in time to become the entity that originally arrived (as a bootstrap paradox sort of thing). Would you say that the extent of the entity's life is ~1 year, or infinite? There isn't any point in the cycle where it ends, but one wouldn't say that circles are infinitely long line segments just because they don't have an end.
It wasn't specified that anything was required of its subjective experience; the only requirement was for it to be immortal. There's a jellyfish on Earth that is also immortal[1]; when it's hurt or sick, it can revert to the polyp stage - its cells transform, and it grows back into a jellyfish with a new body. This process can repeat indefinitely. Each time it does this, it likely has a new set of subjective experiences, but at the same time, the distinct sets of subjective experiences are experienced by the same biological body of cells. If a biological being can be immortal this way, why can't AIs?
Humanity cannot conquer death, but this does not take away the immortality of living beings that already exist in our world.
Take a look at this 80,000-year-old tree[3]. It's one living being in multiple trees, with a single root system connecting all the trees into one being.
I'm not very concerned about any hypothetical future Skynet scenarios at the moment...
I am far more fearful of how Neural Networks can be used today to justify the decisions made by those with the resources to wield them.
I agree that the rate of progress in AI is unpredictable, which means we are probably not right on the cusp of superhuman AI. But what if we actually are on the precipice? How could you tell? You seem to be taking a bet on the following statement:
"Before superhuman AI is developed, the techniques that make it possible will look dangerous."
That's a risky proposition. There's so much to lose in this situation. Elon Musk, Stephen Hawking, and many other people are taking a different, more risk-averse bet:
"Superhuman AI is extremely dangerous, so we need to be pessimistic about how good we are at predicting when it will happen."
By that logic, general AI is to be feared and worried about right now, because our predictive abilities are imperfect. The AI trend is towards more danger, not less: there's a slight chance of a cataclysmic event happening today, and as time goes on, the likelihood of it happening will increase (due to ongoing R&D in AI).
(As an aside, if you want to learn how hard the "AI Control Problem" is, I recommend the book "Superintelligence" by Nick Bostrom.)
The real cost of fearing an AGI now is an opportunity cost. The technology is a long ways away. Other events -- such as ecological collapse or nuclear disaster to name two -- are much more likely to irremediably affect our lives. So by hopping on Bostrom's fear wagon, you are making a bet on several levels: a) this matters now; b) this matters more than other dangers we could devote ourselves to minimizing; and c) something can actually be done about it. All three bets seem like bad ones to make to me.
Fair comparison, but I didn't intend any monopoly on fear. All I'm saying is that the risk of GAI should be taken seriously, calibrated along with the other problems you mention. There's a lot of room between 'irrelevant' and 'the most important issue' :-)
OpenAI has nothing to do with AI risk. They want to accelerate research in AI and make it open to the public. Which is the exact opposite you would want to do to protect against superintelligent AIs. They are not focusing on solving the control problem, or anything meaningful.
Actually, @Houshalter, OpenAI has several purposes, and one of them is to address AI risk. You may not think that is an effective approach, but that is their stated intent. They believe that the best hedge against the dangers of superintelligence is to conduct open research that encourages a multipolar world of several AGIs, rather than a unipolar world where one company, such as Google, reaps the benefits and determines the course of such an advance. It's true that OpenAI accelerates AI research, which makes it paradoxical. But that's the nature of a hedge. You bet on both sides at once. I do believe that they will focus on solving meaningful problems (they are only a few weeks old...).
>rather than a unipolar world where one company, such as Google, reaps the benefits and determines the course of such an advance
Yes that is their model of AI risk. However the people worried about AI risk are not worried about any single organization controlling AI. They are worried about a world where no one controls AI.
Controlling AI is actually a very difficult and unsolved problem. OpenAI is not intended to solve that problem, and if anything they are making it far worse by accelerating AI research before the problem has been solved.
I would make those bets. I'm not afraid of those other things at all. Nukes are really unlikely to be used, and even if they were it wouldn't destroy the whole world, let alone end humanity. Climate change is going to cause lots of economic damage at worst, but it isn't going to destroy the world, let alone end humanity. Same with many other proposed risks.
AI is the only thing I believe has a very high probability of destroying all humanity. Probably within our lifetimes. And there is probably something we can do about it right now - working on the control problem.
It seems to me like you're discounting real and present dangers in favor of one that's more speculative. The reason why climate change and nuclear disasters (I did not limit it to nuclear weapons, but am referring also to events like Chernobyl and Fukushima, which have poisoned significant regions of the planet for millennia to come) are real and present is because a) climate change is already under way and causing massive demographic shifts; and b) nuclear technology is already in a state to destroy large parts of the species and render much of the planet uninhabitable. We are one button click away from a bomb going off at any time, and there are lots of crazy fingers in the world itching for a button.
Even nuclear meltdowns aren't enough to literally destroy the planet. In the worst case they might kill a few thousand people and cause billions in economic damage. Which sounds like a lot, but on the scale of the whole planet is almost nothing.
All of them are real dangers, and present dangers at that. But they still are not existential dangers. Even in the worst case humanity wouldn't go extinct. It's unlikely even modern civilization would collapse, just a few countries.
AI is one of the few things that can literally wipe out humanity, or even the entire universe. So I think it's worth special consideration. Personally I also believe it's far more likely as well. And people are already doing things about climate change and nuclear proliferation. Until very recently, there were only a few people working on AI risk, and it's still nowhere near enough.
The problem is you could say the exact same thing about an alien invasion (you could literally substitute appropriate analogues in your statement), but you'd be laughed out of the room.
We've searched the skies exhaustively and found no evidence whatsoever of alien civilizations. And why is the Earth still around after 4 billion years if aliens want it? If they didn't show up in the last few million years, they probably won't in the next few million. The same logic applies to most natural threats like meteors and volcanoes.
AI is a threat that is immediate in time. There is plenty of reason to believe we will build it, and probably within our lifetimes. And unlike aliens, there might be something we can do about it. Whereas aliens would be so far beyond us technologically it's very unlikely we can do anything to stop them.
I know you are just pattern matching "both of those things are from science fiction, therefore shouldn't be taken seriously". But there are actual reasons why aliens aren't taken seriously and AI is.
I agree with this completely, and I think a simple board or coalition of companies and governments will be the answer. Regulation, policies, and awareness are the solutions to this problem, but thinking that because it isn't a problem now it won't be a potential problem at all is a scary thought.
I've read many arguments concerned with machine intelligence and the risks it poses, but I haven't read that book. Does it tackle explaining why we should conclude that a machine intelligence would even communicate with us, much less seize resources that we need? I rarely see it addressed, and have never seen it addressed to my own satisfaction; what it is about humans that would be worth the trouble of eliminating. I'm also really interested in the communication question. I do not see any clear reason why a machine intelligence would develop "individuals" or would develop the ability to communicate with us.
Yes, that's one of the important topics covered in the book. The AI would probably value resources and self-preservation.
Resources help it fulfil whatever its goal is, and the more the better. That includes energy and computing power, which it could gain a lot of by converting the mass of the solar system into a Dyson swarm, or covering the Earth with solar panels or fusion reactors and computers. Etc.
Self preservation would be optimized by destroying anything that has even a tiny chance of being a threat to it. Even if humans are harmless, we could build other AIs like it which might compete with it. It's best to just destroy us.
The problem is the AI doesn't need to want to eliminate us. It will likely be entirely indifferent to us. We just need to be in its way.
I don't think they are afraid of current technology and neither am I. However, I am afraid of a breakthrough or new model of AI that does approach human intelligence. I don't know how, when, or why but I think it's reasonable in the next 400 years that we approach something like that. 400 years is not very long in terms of humanity but this technology could do things to humanity that we can't grasp right now.
Without getting into our personal, specific definitions of intelligence, I do agree the next 400 years could bring about something horribly disruptive that is not intelligent. Dumb and dangerous, discovered by fools, is the more likely threat to peace in the world.
Maybe, but I'm already afraid of dumb and dangerous tools. Nuclear weapons. Maybe we get stronger dumb and dangerous tools, but what scares me much more is a smart drone that has AI and the ability to learn things about me at a rate I can't comprehend.
I think that fear is a very healthy one. I have had an idea for a website for a long time that I haven't gotten around to building yet, but may before too long. It would basically give people such a fear, in hopes that they become better informed and act not to stop the development of such knowledge (it has neutral and positive uses, just like any tool which has negative ones) but to restrict its misuse. The site would be a simple one, and decidedly sensationalistic (I am not looking forward to the media attention I would hope it would garner, but such is necessary to widely spread an idea these days). It would be a machine learning system trained on news articles about politicians (or, if I can find such a data set, information about their travel would actually be much better) and on lists of politicians who were later shown to be secretly cheating on their spouses. And it would enable you to select your own politician of choice and see a prediction of how likely that person is to be secretly cheating on their spouse currently.
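For what it's worth, the core of such a site would just be a bog-standard text classifier. A minimal sketch (Python/scikit-learn is my assumption, and the corpus and labels here are placeholders, not real data):

    # Minimal sketch of the classifier described above. The articles and labels
    # are hypothetical stand-ins: one blob of news coverage per politician, and
    # a binary label for "later shown to have been cheating".
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    articles = ["...news coverage of politician A...",
                "...news coverage of politician B..."]   # placeholder corpus
    was_cheating = [1, 0]                                 # placeholder labels

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(articles, was_cheating)

    # Model's probability that a chosen politician is currently cheating
    print(model.predict_proba(["...news coverage of politician C..."])[0, 1])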
All of the public discussion around the NSA, and every single last one of the statements by the government about the NSA's practices, centered EXCLUSIVELY on the idea of human analysts at the NSA directly reading the fulltext of people's communications. Not one single time did they address the (as far as I am concerned) actual danger: the NSA's automated systems having access to people's communications, providing "summaries" and "ratings" and "predictions." Being worried about some random stranger reading your personal, private communications should maybe be a little bit squicky. Being worried about some random machine learning system teasing out predictions of how likely you are to harbor thoughts that threaten the status quo should be pants-shittingly terrifying.
No one knows yet how big the leap needs to be, but the most amazing thing about ANNs is that the feature layers a CNN arrives at may functionally resemble some of those found in the visual cortex:
"The oft-cited resemblance of the imagery to LSD- and psilocybin-induced hallucinations is suggestive of a functional resemblance between artificial neural networks and particular layers of the visual cortex, a matter which merits further study." [0]
Also see "How brains learn to see" from Pawan Sinha [1].
Yeah, I'm not so sure. All it takes is an intelligent agent that has trained in a VR environment, accidentally released into an RL environment, doing something crazy - like killing someone. That will be a taste of things to come.
Deep Mind is training computers to WIN in virtual environments right now. What if they train them against simulation of real world environments instead of video games? What if a computer discovers that blowing something up is a way to get a 'higher score'? What if they accidentally get pushed to production?
It's WarGames / Bostrom's paperclip monstrosity. It can happen.
Arguably, it may have already happened in a flash crash. Create a simulation of a stock exchange, train a computer to win the market, it discovers that crashing the market makes lots of money, and it gets pushed to production.
Instant global chaos (though maybe the AI owner gets richer). The future is now, folks..
1. In a virtual environment the computer knows about all the objects in the scene. It's a whole other level to have the computer interpret the world through light alone.
2. If you meant that it learnt to play 2d games through watching pixels, it still didn't interpret any meaning, and it's still a huge leap to go to interpreting the real world.
3. The real world is noisy. You don't see it because the brain handles all that part for you, but even something like different lighting or a smoke cloud can throw a computer off completely.
The training is most certainly supervised. The fact that people commonly draw a distinction between standard supervised learning and reinforcement learning isn't really relevant here (the distinction between a single datum and a sequence of observations), because the fact is the label is the score. If you read their DQN algorithm (or try to code an implementation, or look at any reinforcement learning algorithm, really) you would see that.
Addendum: My broader implication is that if you misunderstand such a basic aspect of machine learning (and reinforcement learning), it makes me question whether we should lend credibility to your predictions about the field.
Reinforcement learning is not strictly what is considered supervised learning in ML, but it's very much in the same vein. And a supervised learning algorithm doesn't have any "knowledge" of the domain it's learning about either - it just adjusts its parameters based on training example and class/output pairs. RL attempts to find the best actions to take to ultimately maximize a measure of cumulative reward, i.e. a signal which provides an objective measure of performance (much like the class or target output of a training example used in supervised learning).
RL is definitely not unsupervised learning, which in contrast, attempts to find some structure in unlabeled data.
Knowledge that the score is the score is knowledge of the game. Deep reinforcement learning uses the score as an objective measure of how well it's doing. That is the definition of supervised learning.
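To make the point concrete, here's a toy sketch of the learning loop being discussed (tabular Q-learning, not DeepMind's actual DQN, and the environment is a made-up stand-in). The reward derived from the score is the only training signal, playing the role the label plays in supervised learning:

    # Toy tabular Q-learning sketch; illustrative only.
    import random
    from collections import defaultdict

    Q = defaultdict(float)          # Q[(state, action)] -> estimated value
    alpha, gamma, eps = 0.1, 0.99, 0.1
    actions = ["left", "right", "fire"]

    def step(state, action):
        """Hypothetical environment: returns (next_state, reward). The reward
        is just the change in score -- that's the 'label'."""
        return state, random.random()

    state = "start"
    for _ in range(1000):
        action = (random.choice(actions) if random.random() < eps
                  else max(actions, key=lambda a: Q[(state, a)]))
        next_state, reward = step(state, action)
        # Temporal-difference update: the score-derived reward supervises the
        # value estimate, much as a class label supervises a classifier.
        target = reward + gamma * max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state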
It will be interesting to see how a robot walking around really works out. I mean, we know that when it comes to predicting reality, we're just terrible at it. 3 bodies interacting under gravitation outstrips our ability to predict. You want to account for the wind resistance and nonlinear behavior of everything around? Good luck! Chaos theory will eat you for lunch!
If an AI had decided that flash crashes were a way to get rich, you can guarantee that Goldman Sachs would be running one 24/7 right now. High frequency trading systems are actually quite constrained. Even if you hooked together all the world's supercomputers and somehow got them into the datacenter of the stock exchange, you wouldn't be able to maintain the trading volume HFT systems do with a deep learning system.
Also, as far as I know, the successful videogame playing systems that accomplish some degree of 'long term planning' (meaning more than a few frames) all cheat. They actually "learn" by trying and then rolling the system back. In reality, you can't roll the system back. So that would be a fatal handicap to their methods.
Elon Musk and Stephen Hawking are not talking about current AI at all. They are talking about the future, which could be decades away. But it is very likely we will have superintelligent AI in our lifetimes. Here's a survey of AI experts: http://www.nickbostrom.com/papers/survey.pdf
>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
72 out of the 170 survey respondents were attendees of the conferences “Artificial General Intelligence” (AGI 12) and “Impacts and Risks of Artificial General Intelligence” (AGI Impacts 2012). It's not surprising that AGI researchers believe that AGI will be developed soon.
I don't understand why participants of a conference on artificial general intelligence should be excluded from predictions about artificial general intelligence.
But whatever, even the "weak AI" researchers had similar opinions: The median predicted date for a 50% probability of AGI was 2050 by the top 100 AI researchers they surveyed. Only 10 years ahead of the more liberal predictions of the AGI group.
Well, a survey of AI researchers in the 60's would have said we'd have AGI by now. Surveying researchers in a field is not a reliable method for determining truth.
I'm not certain that's correct, only some people predicted AI within 50 years in the 60's.
Presumably modern researchers have a lot more information by now so their predictions are likely to be more accurate. As we get closer in time to it, it will become more predictable and obvious. You can't really have expected people at the time of the very first computers to have predicted the date of AI accurately.
Note the prediction has large error bars, with researchers predicting a 90% chance by the end of this century, but "only" a 50% chance within 25 years.
Anyway what's the alternative prediction? If you don't take predictions from the experts, who do you ask for predictions? Lay people? Just make stuff up? It's the best we can do.
> Anyway what's the alternative prediction? If you don't take predictions from the experts, who do you ask for predictions? Lay people? Just make stuff up? It's the best we can do.
One way could be to come up with one or more plausible, step-by-step stories for how AGI will be developed, then model how long each step might take to happen. This would open things up to discussion and criticism instead of having to take the word of experts as gospel. It still wouldn't be scientific, but at least it'd be closer.
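As a sketch of what I mean, even something this crude would make the assumptions arguable (the step names and duration ranges below are invented for illustration, not actual estimates):

    # Rough Monte Carlo sketch: break a hypothetical path to AGI into steps,
    # put a made-up uncertainty on each step's duration, and sample totals.
    import random

    steps = {                      # (low, high) years, sampled log-uniformly
        "human-level perception": (1, 10),
        "robust long-term planning": (5, 50),
        "transfer / concept learning": (5, 50),
        "integration and scaling": (2, 30),
    }

    def sample_total():
        total = 0.0
        for low, high in steps.values():
            # log-uniform sample between low and high
            total += low * (high / low) ** random.random()
        return total

    samples = sorted(sample_total() for _ in range(10000))
    print("median years:", samples[len(samples) // 2])
    print("90th percentile:", samples[int(0.9 * len(samples))])

At least then the disagreement is about which steps and which ranges, rather than about whose gut feeling to trust.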
There are far too many unknowns to even attempt that. No one even agrees what methods will lead to AGI, let alone how long it will take to make the necessary breakthroughs in those fields. This subject is entirely within the realm of opinion.
I'm not arguing you should take the predictions as gospel. It's just an interesting and relevant datapoint. Maybe the only objective datapoint we have.
Because there is little overlap between AGI researchers and machine learning researchers (or even regular AI researchers, but there's a bit more overlap here). AGI is a mix of philosophy and traditional AI (logic, agents, planning, value functions, etc.). The survey would have much more credibility if it were conducted of researchers at a machine learning conference like NIPS or ICML. Or even a top AI conference like AAAI.
Let's just say that we would not be worrying about AGI if it weren't for the advances in machine learning, which has little to do with AGI research.
Well the thing with the field of "AI" is that it's a moving target. People used to call compilers artificial intelligence. As "AI" research is done, it ends up spawning new fields of study and then the new definition of AI = [previous definition of AI] - [new field of study].
At least Elon Musk is not really claiming which technology will get us there. He is assuming that once we do get there, it will have serious implications. It's hard to disagree with that. We will be sharing the planet with another intelligent creature for the first time ever.
We've been sharing the planet with intelligent creatures since before evolution made us human.
That's not the issue. The issue with AI is that we may be sharing the planet with an intelligence that's both nastier and more capable than we are.
I still tend to see this as projection. There's a huge gap between wiring together neural networks and developing something with self-awareness, strategic agency, and some potential to expand both at a rate we can't imagine.
Bostrom's paperclip monster is facile, and no worse than the kind of resource plunder we do already in the name of profit.
Whatever challenges AI brings are going to be quite a few levels beyond that.
To me, the most remarkable thing about recent neural network research is that a very substantial proportion of the human brain is dedicated to vision, and neural networks can now achieve human level performance at object recognition, and with similar, robust representations (see e.g. http://journals.plos.org/ploscompbiol/article?id=10.1371/jou...).
I see two possible conclusions that can be drawn here. The first possibility is that neural networks really are close enough to the way the brain works, that they learn the way the brain does, and that with some algorithmic tweaks and the right training data they can learn to do everything the brain does. This is the view that Luke is railing against, and I tend to agree that it's too soon to make this claim. On one hand, the frontal lobe is not all that different from the temporal lobe. If we can achieve the same things with a machine as with one giant chunk of 6-layer neocortex, it's not too hard to imagine that we'll be able to do the same with other pieces. The problems may be different, but the biological implementation is strikingly similar. On the other hand, as sdenton4 notes elsewhere in this thread, the big difference between sensation and cognition is that the former can (apparently!) be accomplished in an entirely feedforward manner while the latter requires (or is at least thought to require for efficient implementation) recurrent processing. And just because we have efficient ways to train one kind of neural network doesn't necessarily mean they'll generalize easily to others.
But there is a second possibility, which is that neural networks don't learn or function quite like brains, but somehow both can find similar, nearly optimal strategies for encoding information in sensory domains. In this case, we're probably more than a few years away from machines that can think, but to us neuroscientists this is even more exciting. Vision scientists have been looking for decades to understand how the visual system works, and in this scenario, there's some underlying theory to vision and we have an easily observable system that implements it. Even if it turns out that vision is too different from cognition for these insights to generalize, the success of deep learning for object recognition still has vast implications for future understanding of the brain, and thus probably for future artificial intelligence. If we know the representations the brain uses, that makes figuring out the mechanism by which it computes those representations far easier than if we had to figure out both the mechanism and the representation from scratch.
So my perspective here is that we don't know exactly how far recent neural network research is going to take us, but it's hard to deny its importance. Before I started graduate school, I didn't think that machines would achieve human-level performance at object recognition before we figured out how the brain does it, but here we are. But I suppose that historically, huge advances in AI research have been followed by stagnation and disappointment, so who knows how far we'll get this time.
I don't see much evidence in that paper that neural networks have similar representations to monkey visual cortex activations (the paper studied monkeys). They merely showed that the neural network representations were predictive of visual cortex responses in monkeys (e.g. linear regression was a good fit for using the NN representations to predict visual cortex responses in monkeys).
If you can do a good job predicting monkey visual cortex responses with a linear combination of units from a convolutional neural network, that implies that the network and the monkey compute similar nonlinear functions of visual inputs. If that's not evidence for similar representations, I'm curious what you think would be.
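For anyone curious what that analysis looks like in practice, here's a rough sketch (the random arrays stand in for real recordings and CNN features, which I obviously don't have here; the paper's approach was regularized linear regression along roughly these lines):

    # Sketch: fit a (regularized) linear map from CNN features to measured
    # neural responses, then check predictions on held-out images.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    n_images, n_features, n_neurons = 500, 4096, 100
    cnn_features = np.random.randn(n_images, n_features)     # e.g. one CNN layer's activations
    neural_responses = np.random.randn(n_images, n_neurons)   # e.g. IT cortex responses

    X_train, X_test, y_train, y_test = train_test_split(
        cnn_features, neural_responses, test_size=0.2, random_state=0)
    model = Ridge(alpha=1.0).fit(X_train, y_train)

    # High held-out R^2 (on real data) is the evidence that the network and the
    # cortex compute similar nonlinear functions of the image.
    print("held-out R^2:", model.score(X_test, y_test))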
I think this is a good analysis of what Deep Learning is particularly good for and its limitations, but was somewhat annoyed at the lack of any citations of people actually overhyping it. The most there was is this:
"This is all well justified, and I have no intention to belittle the current and future impact of deep learning; however, the optimism about the just what these models can achieve in terms of intelligence has been worryingly reminiscent of the 1960s."
From what I've read and seen, the leading people in the field (Yann LeCun, Hinton, etc.) seem to be very aware that the current methods are particularly good for problems dealing with perception but not necessarily reasoning. Likewise, I have not seen many popular news sources such as NYT make any crazy claims about the potential of the technology. I hope, at least, that the people who work in AI are too aware of the hype cycles of the past to get caught up in one again, and so there will not be a repeat of the 60's.
> I hope, at least, that the people who work in AI are too aware of the hype cycles of the past to get caught up in one again, and so there will not be a repeat of the 60's.
Given that AI Winter is a thing, and that it was a reaction to the across-the-board failure of AI to do anything that people expected of it (expectations driven by hype), then I think you'd be right.
Current top post quotes the most negative observation of the paper. Here's the most positive, and perhaps the most useful to HN readers or investors who are exploring the space:
"Deep learning has produced amazing discriminative models, generative models and feature extractors, but common to all of these is the use of a very large training dataset. Its place in the world is as a powerful tool for general-purpose pattern recognition... Very possibly it is the best tool for working in this paradigm. This is a very good fit for one particular class of problems that the brain solves: finding good representations to describe the constant and enormous flood of sensory data it receives."
the last sentence is also positive as well as insightful
Gradient descent in neural networks may well play a big part in helping to build the components of thinking machines, but it is not, itself, the stuff of thought.
statistical techniques like these are often used to interpret experimental data. "helping to build the components of thinking machines," (and i think this is the author's intent?) may not mean _being_ the components of thinking machines, just as the 4004 is not a component of the computer that i'm currently using, but it did help build it.
idk, though. as cool as the engineering (making) is, i do worry sometimes that the science (understanding) could get overlooked in the process.
Someone once gave the analogy of climbing to the moon. You can report steady progress until you get to the top of the tree/mountain. I think this is applicable here. We'll need a new paradigm, beyond statistical learning, to create AGI
David Deutsch has an article online somewhere, and his conclusion is that the leap has to be made by philosophy first. We need to first figure out the theory of intelligence and potentially consciousness, and then the information theory will follow.
Same could've been said about every aspect of human progress, if we had asked philosophers to opine (they opine anyway, but the civilization moves forward regardless).
The many facets of human thought include planning towards novel goals, inferring others' goals from their actions, learning structured theories to describe the rules of the world, inventing experiments to test those theories, and learning to recognise new object kinds from just one example.
Not being able to imagine how method A may produce result B is no evidence against method A. It may very well be evidence towards lack of imagination :)
Another article basically saying something along the lines of "there is no current technology that comes close to producing AGI, therefore let's dismiss all these technologies". Of course we don't know what we don't know, until we do, and then it's not as mysterious.
It's not hard to see that the reason NNs are becoming the prime candidate for AGI is their architecture, inspired by biological neurons. We are the only known AGI, therefore something similar to the brain will be what produces an AGI. NNs at least mimic the massively parallel property of biological neurons. And if we're optimistic, the fact that NNs are mimicking how vision works in our brain might mean that we are at some point in the continuum of the evolution of brains, and it's a matter of time until we discover the other ways brains evolved intelligence.
What keeps me optimistic is evolution. At some point brains were stupid, and then they definitely evolved AGI. The question is how did this happen and whether or not there is a shortcut, like inventing the wheel for transportation instead of arms and legs.
Artificial neural networks have basically nothing to do with actual biological neurons. First, neuroscientists do not have even a decent understanding of biological neurons: you cannot say that neural networks mimic neurons when we don't understand biological neurons. Of the bajillion things we do know neurons do, neural networks do on the order of 1% of those things. On the flip side, neural networks do a ton of things that biological neurons do not (like backpropagation(!)* ).
* In fairness, Hinton hypothesizes the brain has a way of doing backprop, but what he talks about only barely resembles actual backprop.
I've heard people suggest a comparison between birds wings and planes. We don't understand everything about bird wings. They're incredibly complex and messy biological structures. However, we do know enough about the general principles of aerodynamics to build aircraft. Perhaps a similar dynamic is at work here. We don't understand brains, but perhaps neural networks are able to capture the same principles of information processing.
Edsger Dijkstra famously made such a comparison [1]:
"The Fathers of the field had been pretty confusing: John von Neumann speculated about computers and the human brain in analogies sufficiently wild to be worthy of a medieval thinker and Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim."
[1] E. W. Dijkstra, EWD898: The threats to computing science
Yes, plane wings are built on fundamental physical principles. We do not know the fundamental physical principles of intelligence.
Building an NN and hoping its brain-like structure will start to think is like building a flapping machine and hoping its bird-like structure will start to fly.
> Building an NN and hoping its brain-like structure will start to think is like building a flapping machine and hoping its bird-like structure will start to fly.
But... you're replying in a comment thread that has explicitly stated that "NNs" don't at all qualify as "brain-like".
Right, no debate there. They're inspired by biological neurons. But it's still a step in the right direction. The facts are we have billions of neurons and there's some form of information processing going on. Perhaps we could continue the trend and model NNs to include other brain cell features, like whatever the glial cells do, for example.
You can have a look at the second half of this video https://www.youtube.com/watch?v=IcOMKXAw5VA for a recent account. He has been suggesting this for almost a decade, but I don't know that there is any article (yet).
It would be great to be able to replicate neurons and neural signals exactly. Direct interfacing between artificial neural networks and biological ones would be very interesting
Nice article - it's good to be realistic about what we can do with current tools.
I feel like the gist of what current neural nets can do is "pattern recognition". If that's fair, I also suspect that most people underestimate how many problems can be solved by them (e.g. planning and experiment design can be posed as pattern recognition - the difficulty is obtaining enough training data).
It's true that we're most likely a very long way away from general AI - but I'm willing to bet most of us will still be surprised within the next 2 years by just how well some deep-learning based solutions work.
>Human or superhuman performance in one task is not necessarily a stepping-stone towards near-human performance across most tasks.
Here's the important difference about NNs. They are incredibly general. The same algorithms that can do object recognition can also do language tasks, learn to play chess or Go, control a robot, etc., with only slight modifications to the architecture and otherwise no domain information.
That's a hugely different thing from brute-force game-playing programs. Not only could they not learn the rules of the game from no prior knowledge, they couldn't even play games with large search spaces like Go. They couldn't do anything other than play games with well-defined rules. They are not general at all.
Current neural networks have limits. But there is no reason to believe that those limits can't be broken as more progress is made.
For example, the author references that neural networks overfit. They can't make predictions when they have little data. They need huge amounts of data to do well.
But this is a problem that has already been solved to some extent. There has been a great deal of work on Bayesian neural networks that avoid overfitting entirely, including some recent papers on new methods to do them efficiently. There's the invention of dropout, which is believed to approximate Bayesian methods and is very good at avoiding overfitting.
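For concreteness, dropout is about this much code (PyTorch is my choice here, not something from the thread): hidden units are randomly zeroed during training, which regularizes the network and can be read as a crude approximation to Bayesian model averaging.

    # Minimal dropout sketch; illustrative only.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(784, 256),
        nn.ReLU(),
        nn.Dropout(p=0.5),      # half the hidden units dropped on each forward pass
        nn.Linear(256, 10),
    )

    x = torch.randn(32, 784)
    model.train()               # dropout active: outputs are stochastic
    noisy_out = model(x)
    model.eval()                # dropout disabled: outputs are deterministic
    deterministic_out = model(x)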
There are some tasks that neural networks can't do, like episodic memory and reasoning. And there has been recent work exploring these tasks. We are starting to see neural networks with external memory systems attached to them, or ways of learning to store memories. Neuroscientists have claimed to have made accurate models of the hippocampus. And DeepMind said that was their next step.
Reasoning is more complicated and no one knows exactly what is meant by it. But we are starting to see RNNs that can learn to do more complicated "thinking" tasks, like attention models, and neural Turing machines, and RNNs that are taught to model programming languages and code.
I expect that as we improve machine intelligence more and more, aside from the fact that we will simply keep moving the goalposts of what we consider "intelligent" like the irascible scamps we are, we're going to discover that embodiment is absolutely necessary. Not just any embodiment either, but we will need to place the neural networks in bodies very much like our own. Neuroscience continues to find surprising things that link our "general human intelligence" to our bodies. Paralyze a face and a person becomes less able to feel happiness or anger, eventually forgetting what feeling those things even meant, as one example.
We shouldn't forget that the mind/body split is a wholly artificial construct that has no basis in reality. The brain is not contained in the head. The nerves running down your spine and out to your toes and all over your body are neurons. Exactly the same neurons, and directly connected to the neurons, that make up what we think of as the separate organ 'the brain'. They're stretched out very long, from head to toe, sure, but they are single cells, with the exact same behavior and DNA, and there is no reason to presume that they must have some especially insignificant role in our overall intelligence.
Then there is the fact that it is probably reasonable to presume that a machine which has human-level intelligence will not appear overnight. It would almost necessarily go through long periods of development. During that development, when the machine begins to behave in ways the designers are not able to understand, what will be their reaction? Will they suppose that maybe the machine had intentions they were unaware of, and that it is acting of its own volition? Or will they think the system must be flawed, and seek to eliminate the behavior they didn't expect or understand?
I have a hard time imagining that an AI system will be trained on image classification and one day suddenly say "I am alive" to its authors or users. If it instead performs poorly on the image classification because it is pondering the beauty of a flower in one of the images, what are the chances that nascent quasi-consciousness would be protected and developed? I think none. We only have vague ideas about intelligence and consciousness and our ideas about partial intelligence are utterly theoretical. Has there ever been a person who was 1% intelligent? Is mastering checkers, or learning NLP to the exclusion of even proprioception, 1% of human intelligence? You optimize for what you measure... and we don't know how to measure the things we're looking for.
>Extrapolating from the last few years’ progress, it is enticing to believe that Deep Artificial General Intelligence is just around the corner and just a few more architectural tricks, bigger data sets and faster computing power are required to take us there. I feel that there are a couple of solid reasons to be much more skeptical.
The next step may be to wire the things together in a structure similar to the human brain, which is kind of what DeepMind are working on - they are trying to do the hippocampus at the moment. (https://www.youtube.com/watch?v=0X-NdPtFKq0&feature=youtu.be...)
I think this book is really interesting: "Surfaces and Essences: Analogy as the Fuel and Fire of Thinking" by Hofstadter and Sander.
Many people got disillusioned with classical AI because mathematical logic (inference engines) would not scale to 'strong' AI.
Hofstadter says that most concepts handled by humans do not fit one-to-one into clear-cut ontologies. Instead, each higher-order concept is created by finding analogies between objects or simpler concepts, and by grouping these similar concepts into more complex entities.
My personal theory is: the semantics of language are neatly/minimally represented by dependency graphs. Maybe analogies can be found by matching colored dependency graphs (where the nodes and links determine the coloring).
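To make that concrete, here is a toy sketch of the matching idea (entirely illustrative - the hand-made "parses" and the use of networkx isomorphism matching are my own assumptions, not an established method):

    import networkx as nx
    from networkx.algorithms import isomorphism

    def dep_graph(edges):
        """Build a coloured dependency graph: nodes carry a part-of-speech
        'color', edges carry the dependency relation as their 'color'."""
        g = nx.DiGraph()
        for head, head_pos, rel, dep, dep_pos in edges:
            g.add_node(head, color=head_pos)
            g.add_node(dep, color=dep_pos)
            g.add_edge(head, dep, color=rel)
        return g

    # "the dog chased the cat" vs. "a child kicked a ball" (hand-made parses)
    g1 = dep_graph([("chased", "VERB", "nsubj", "dog", "NOUN"),
                    ("chased", "VERB", "obj", "cat", "NOUN")])
    g2 = dep_graph([("kicked", "VERB", "nsubj", "child", "NOUN"),
                    ("kicked", "VERB", "obj", "ball", "NOUN")])

    # Two fragments count as 'analogous' if their coloured structures line
    # up, regardless of the actual words at the nodes.
    matcher = isomorphism.DiGraphMatcher(
        g1, g2,
        node_match=isomorphism.categorical_node_match("color", None),
        edge_match=isomorphism.categorical_edge_match("color", None))
    print(matcher.is_isomorphic())   # True: same coloured structure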
We will never have human level AI until we can properly understand, define, and model human intelligence. While we are advancing at a very rapid pace on that front, we are still years away from the field being considered mature.
Why? All sorts of behavior emerges out of seemingly simple rules and systems. Could the convoluted and beautiful patterns that come from Conway's Game of Life have been predicted in advance? What about the sort of meta-"intelligence" on display in ant colonies taken as a whole? Or how connecting disparate groups of people across the globe changes the way humanity as a whole reacts to situations?
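For what it's worth, the rules in question really are tiny. A throwaway sketch of Conway's Game of Life, just to underline how little machinery those patterns emerge from:

    import numpy as np

    def step(grid):
        """One Game of Life step on a toroidal grid: count each cell's 8
        neighbours, then apply the birth/survival rules."""
        neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0))
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

    # A glider on a 10x10 torus: five live cells that crawl across the grid.
    grid = np.zeros((10, 10), dtype=int)
    for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[y, x] = 1
    for _ in range(4):   # after 4 steps the glider has shifted by one cell
        grid = step(grid)

Everything interesting about gliders, guns and the rest comes out of that handful of lines of update rule - nothing in the code "knows" about any of it.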
I'm no expert and these examples probably betray the shallowness of my understanding as much as they make a point. But if they do make a point it's that we don't need to (and often can't) understand what will result from systems that have a built in feedback mechanism.
>>But if they do make a point it's that we don't need to (and often can't) understand what will result from systems that have a built in feedback mechanism.
That may be true, but my point was more that there are things we fail to understand about human nature that drive decision making, actions, and goal setting within our own society, let alone other cultures. Consider how factors such as your own personal energy levels affect your decision making throughout the day (decision fatigue) - how are we going to model something like that with AI?

Or consider that many cultures around the world have very different societal goals than Americans do in pursuing personal freedom and material well-being. How do we account for those different, abstract goals, which most citizens pursue without realizing or understanding why, simply because that's how they were raised? How do environmental factors drive decision making in different climates? Those who live in a very cold climate have to take different precautions and deal with different risks than those in warm climates do. What about locally available resources? Many business (and design) decisions are made as much on supply chain availability and risk management as on customer demand.

How does an AI consider these factors when it doesn't even know they exist? These are the things I was talking about that we still have difficulty understanding about each other, let alone being able to program a robot to imitate.
Maybe the difference between a human intelligence and a superintelligence comes, in part, from not being affected by the day-to-day mood/chemical processes that are probably there to keep the body as a whole functioning. The superintelligence would be isolated from those needs.
I don't really have an answer to all your points, just a general feeling that AI needn't be just like our "I" to be effectively superior.
You state this as fact. Link to credible science saying that it is true.
This is like saying we have to completely understand how birds fly before we can make planes, or how fish swim before making submarines. Or how muscles cause movement before inventing the wheel. Or how chemical reactions work before we can make fire.
You don't have to fully understand the original system or be able to recreate it to make something that exhibits some or all of the desirable properties. This has been shown time and time again.
There was a recent paper [1] about learning visual concepts from few examples. I don't know if it generalizes or not, but it seems too early to assume that researchers will hit a dead end.
It is true that most recent successes of deep neural networks are in the regime where n and d are large, and we surely shouldn't fantasize that general AI will be solved this way. However, the very appealing aspect of deep neural networks is end-to-end training: for image recognition, we can map from raw pixels to output. This is very different from other ML techniques. In some sense, deep neural networks learn "algorithms", not just "models". This formulation can be richer, especially when given lots of data.
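A minimal sketch of what "end-to-end" means in practice (my own toy example in PyTorch; the layer sizes and fake data are arbitrary): the single classification loss at the output drives gradient updates in every stage, from the raw pixels up, with no hand-engineered feature extractor in between.

    import torch
    import torch.nn as nn

    # Raw pixels in, class scores out; every layer is trained jointly by
    # backpropagating one classification loss end to end.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 10),
    )

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(4, 3, 32, 32)     # fake batch of 32x32 RGB images
    labels = torch.randint(0, 10, (4,))    # fake class labels

    logits = model(images)                 # pixels -> class scores directly
    loss = loss_fn(logits, labels)
    loss.backward()                        # gradients flow through every layer
    optimizer.step()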
At last, the thing that's unreasonable isn't effectiveness. I've been hoping for a while that someone close to the field would cut through the hype and put ANNs in context.
To my knowledge, intuitionistic type theory (ITT) considers types of higher order (so-called higher-kinded types), so homotopy type theory (HoTT) isn't exactly relevant here - especially since it has, as far as I know, not cross-pollinated with ML or ANN research. Models of recursion can be found in systems as simple as lambda calculi or cellular spaces, and this was the big idea behind Jeff Hawkins's HTM and other classical models (please correct me if I am wrong).
What HoTT brings to the table is the notion that a path between two types can be used to transport a proof from one type to another - currently immensely handy for theorem proving, but applications in ML and beyond remain to be made.
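For readers unfamiliar with the idea, transport looks roughly like this in Lean-style notation (this uses ordinary intensional equality rather than full HoTT, so treat it as a loose analogy of my own, not an excerpt from any HoTT library):

    -- A path (equality proof) h : x = y lets us carry a proof/inhabitant of
    -- P x over to P y. In HoTT the path can carry more structure (e.g. an
    -- equivalence via univalence), but the shape of the operation is the same.
    def transport {A : Type} (P : A → Type) {x y : A} (h : x = y) (px : P x) : P y :=
      h ▸ px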
I think many are missing the point here: an AI can be very stupid and still wipe everything out. It only takes some sort of irreversible minimisation function to let machines destroy everything in sight. Drones are the first step, then comes IoT - what else? We already depend heavily on machine learning. So it's no wonder many are scared even before machines become human-intelligent.