Are there any examples of substantive AI work to come out of MIRI? And have they succeeded at all at engaging the actual AI research community?
The last time I looked at them, they were consumed with grandiose philosophical projects like "axiomatize ethics" and provably non-computable approaches like AIXI, not to mention the Harry Potter fanfic. But I'm asking this question in good faith - have things changed at MIRI?
They have produced a bunch of technical reports (linked to in a sibling comment), but so far only one of them has been published in a peer-reviewed venue (https://intelligence.org/files/ProgramEquilibrium.pdf), so I think a lot of people still doubt whether they are being productive at all. (An ordinary research lab employing that many people could clearly produce more papers; but an ordinary lab is not trying to bootstrap a new field from scratch).
On the other hand, the original post we are commenting on specifically held up AIXI as an example of the kind of thing they are trying to create, so if you don't like that, then you probably will never like MIRI's research no matter how successful they are. :)
As for engaging the AI community, the most high-profile example I know of is that Stuart Russell is apparently now concerned about the MIRI-style "value alignment problem" (https://www.quantamagazine.org/20150421-concerns-of-an-artif...) and has some DARPA grant to work on it.
I don't have enough knowledge to evaluate MIRI's productivity, but I've noticed an interesting thing in this subthread. On the one hand, it is widely understood (especially here on HN) that the "publish or perish" culture of modern academia is a source of lots of bad science and pointless work, and yet here we are, using number of papers as a metric for productivity. So which way is it?
Counting only articles and conference papers that look like they are in at least somewhat established journals or conferences, I quickly count:
2015: 4
2014: 6
2013: 1
2012: 5
2011: 1
2010: 4
That would be about the level of 1 or 2 very mediocre early-career scientists, and very little for MIRI's 8-person staff and 13 Research Associates.
MIRI publishes a lot more than that, but mostly outside of traditional scientific journals: many of their technical reports appear only on their own website.
Given that what they are trying to do is pull things out of the domain of philosophy and into the domain of comp sci, it's quite likely that they are not trying to do what you'd consider substantive AI.
> You don't consider AIXI part of the actual AI research community?
I just briefly tried to read up on AIXI. The concept is ...not without merit, but I think it totally sweeps the central problem (We have no idea how to program an AI) under the rug in two ways:
(1) The problem of programming, of programming anything at all, is swept under the rug by assuming a formalism in which every possible algorithm/program can be enumerated, and then just using brute force to search through, essentially, all the character strings that represent implementations of any kind of algorithm, however bad they might be.
(2) The problem of trying to decide what we actually should want to do in complex situations is swept under the rug by just assuming that we have access to a reward function that tells us the desirability of each outcome at each step.
So if these kinds of helpful assumptions are allowed, the problem of, say, constructing a good chess program can be easily solved by the following meta-algorithm:
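(For concreteness, here is a toy sketch of that meta-algorithm in Python. It's my own illustration, not anything from AIXI's definition; and to keep it runnable, the assumed reward function scores candidates against the XOR truth table instead of chess positions. The point survives the substitution: once you assume an enumerable program space and a reward oracle, the "search" is trivial.)

```python
import itertools

def brute_force_search(evaluate, max_len=12):
    """The 'enumerate every program' meta-algorithm: treat every
    bitstring up to max_len as a candidate 'program' and keep
    whichever one scores best under the assumed reward function."""
    best_score, best_prog = float("-inf"), None
    for n in range(1, max_len + 1):
        for bits in itertools.product([0, 1], repeat=n):
            score = evaluate(bits)
            if score > best_score:
                best_score, best_prog = score, bits
    return best_prog, best_score

# Hypothetical reward oracle: a 'program' is read as a truth table
# for a 2-input boolean function; reward = how many XOR cases match.
def xor_reward(bits):
    if len(bits) != 4:
        return float("-inf")
    targets = (0, 1, 1, 0)  # XOR truth table
    return sum(b == t for b, t in zip(bits, targets))

prog, score = brute_force_search(xor_reward, max_len=4)
# prog == (0, 1, 1, 0), score == 4: brute force 'solves' the problem,
# but only because the assumed reward function did all the real work.
```

Swap in a chess evaluator and an interpreter for arbitrary program strings, and you have the chess version of the meta-algorithm, minus any hope of it terminating before the heat death of the universe.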
You're right, of course; AIXI doesn't attempt to tackle the practical problem of building an AI. However, it does give us a concrete, non-hand-wavey algorithm which can reasonably be considered "AI, if we ignore resource constraints".
Consider that it took ~50 years to go from the inception of AI to a formal model like AIXI; or ~40 years from the definition of NP-completeness (ie. the realisation that scalability and resource usage are the real challenges for AI).
In that sense, AIXI provides two things: a hypothetical "gold standard" for AI builders to compare their research to; and a formal model which can be studied right now by those who aren't directly building AIs (like MIRI).
Consider an analogy to space flight. The engineering contains all kinds of resource constraints (eg. launch mass, strength-to-weight ratios, etc.), but it's still useful to ignore them temporarily and ask: what if we had as much fuel as we could ever want? What if we could keep the crew frozen during the journey? and so on. In other words, if we manage to overcome our current difficulties, what could we actually do with this tech?
>However, it does give us a concrete, non-hand-wavey algorithm which can reasonably be considered "AI, if we ignore resource constraints".
No, not really. "If we ignore resource constraints" is ignoring most of the problem. Using Kolmogorov complexity in the Solomonoff Measure also constitutes ignoring the problem of generalization by assuming an optimal compressor into existence, which again is an issue of the cognitive resources of training data and processing power. Bayesian updating means it will achieve optimal expected reward, but also that AIXI can be "fooled" by the hierarchical nature of real environments' variance[1].
And the whole thing pays no attention to knowledge representation whatsoever.
It's basically a grand victory for the fields of AI and Machine Learning that still tells us basically nothing about how an actually existing, embodied mind has to function, except that statistical learning is most likely the core mechanism in some fashion (after all, neural networks show that the real thing isn't even necessarily Bayesian in any sense).
[1] Benjamin B. Machta, Ricky Chachra, Mark K. Transtrum, and James P. Sethna. Parameter space compression underlies emergent theories and predictive models. Science, 342(6158):604–607, 2013.
> It's basically a grand victory for the fields of AI and Machine Learning that still tells us basically nothing about how an actually existing, embodied mind has to function
Special relativity tells us basically nothing about how an actually existing, physical spaceship has to function; but it does constrain our speculation about space travel (ie. no FTL, the fact that accelerating massive objects requires more and more energy, etc.).
It also provides some handy little suggestions that we may not have anticipated; eg. that mass can be converted into energy, which is certainly useful when trying to come up with practical designs.
But AIXI, I don't see how it constrains anything. It introduces a classification: AIXI-type algorithms, and other algorithms. But we don't really know if this is a useful segmentation of the search space, or perhaps as useless as considering the merits of red spaceships vs. spaceships painted some other color.
> algorithm which can reasonably be considered "AI, if we ignore resource constraints"
I am not convinced. Also, the reward function is assumed to magically be given. I think half of the difficulty in any real world problem would be how to design the reward function.
Also, even when assuming we have the reward function, do we really know that choosing the action that has the best weighted reward over the set of "world model algorithms" (hypotheses) produces actions that actually are intelligent? Yes, it sounds intuitively somewhat plausible, but do we have anything better than this hunch that it sounds kinda good? Maybe it would actually turn out to produce really silly outcomes, who knows.
I am thinking, maybe the shortest "world model algorithms", which are given the largest weight, are just mostly stupid. And there is the No Free Lunch theorem, which states that averaging over all possibilities, while it may sound clever, produces just garbage (i.e. no better than a random guess).
> I think half of the difficulty in any real world problem would be how to design the reward function.
That's exactly what MIRI's trying to do ;)
> do we really know that choosing the action that has the best weighted reward over the set of "world model algorithms" (hypotheses) produces actions that actually are intelligent?
As far as AIXI is concerned, this is the definition of an "intelligent action": that which leads to the largest expected utility over the agent's lifetime. It's fine to disagree with this definition, but one of the reasons to define AIXI at all is to have something concrete to point at, rather than spending decades debating these sorts of quasi-philosophical questions (note that I've carefully chosen words like "can be reasonably considered" rather than "is").
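(For reference, that definition can actually be written down. This is my from-memory sketch of Hutter's finite-horizon formulation, so treat the details as approximate: $U$ is a universal monotone Turing machine, $q$ ranges over candidate environment programs, $\ell(q)$ is the length of $q$, the $o_i r_i$ are observation/reward pairs, and $m$ is the horizon.)

```latex
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
       \big( r_t + \cdots + r_m \big)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

In words: pick the action whose expected total reward is largest, where the expectation weights each environment consistent with the history by $2^{-\ell(q)}$, i.e. shorter explanations count more.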
> I am thinking, maybe the shortest "world model algorithms", which are given the largest weight, are just mostly stupid.
The Solomonoff prior used by AIXI dominates all computable priors; in other words, even if these world models are stupid, no computable algorithm (including humans) can do better overall.
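To make the length-weighting concrete, here is a toy, resource-unlimited sketch of my own (not Solomonoff induction proper, whose hypothesis class is all programs): hypotheses are repeating bit patterns, each gets prior weight 2^-length, and prediction sums the weight of every hypothesis consistent with the data seen so far.

```python
import itertools

def predict_next(data, max_len=8):
    """Predict the next bit of `data` by summing 2^-length over every
    repeating-pattern hypothesis consistent with the data so far."""
    weights = {0: 0.0, 1: 0.0}
    for n in range(1, max_len + 1):
        for h in itertools.product("01", repeat=n):
            pattern = "".join(h)
            stream = (pattern * (len(data) + 1))[: len(data) + 1]
            if stream[: len(data)] == data:       # consistent with data
                weights[int(stream[len(data)])] += 2.0 ** (-n)
    return max(weights, key=weights.get)

print(predict_next("010101"))  # → 0; the short pattern '01' dominates
```

Longer hypotheses that predict something else (e.g. "010101 then everything changes") are still in the mixture, they just get exponentially less weight; that's the sense in which the short models "win" without the long ones being ruled out.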
> And there is the No Free Lunch theorem, which states that averaging over all possibilities, while it may sound clever, produces just garbage (i.e. no better than a random guess).
The No Free Lunch theorem is a mathematical curiosity with no particular relevance to the world. In particular, it completely ignores computational complexity: it gives equal weight to all (computationally) simple explanations (eg. "there is a star orbiting the Earth, it will keep orbiting" and "the Earth is spinning near to a star, and will keep spinning"), as well as to all (computationally) complicated explanations (eg. "the atmosphere has been bombarded by cosmic rays which, by sheer chance, have an effect which looks like a star, but it's unlikely for that coincidence to continue" and "the Earth is spinning near to a star, but tomorrow at 13:48 GMT the Martians, who have managed to elude all of our telescopes and probes, will attack the Earth with a weapon which switches the direction of rotation"), as well as all incomputable explanations. Trying to predict anything in such situations is clearly futile, which is basically what the NFL theorem says; yet such situations can never actually arise outside of thought experiments.
Although AIXI itself is incomputable, it is specifically defined to interact with computable environments, so No Free Lunch doesn't apply.
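Incidentally, the NFL claim is easy to check exhaustively on a toy domain. This little script (my own construction, not from any NFL paper) averages over all 16 boolean functions on four points and shows that two arbitrary search orders need exactly the same total number of probes to find the maximum:

```python
import itertools

def steps_to_find_max(f, order):
    """Number of probes a fixed search order needs to hit the max of f."""
    best = max(f)
    for step, x in enumerate(order, start=1):
        if f[x] == best:
            return step

orders = [(0, 1, 2, 3), (3, 1, 0, 2)]  # two arbitrary search strategies
for order in orders:
    # Sum the probe counts over ALL possible target functions f: {0..3} -> {0,1}
    total = sum(steps_to_find_max(f, order)
                for f in itertools.product([0, 1], repeat=4))
    print(order, total)  # both orders total 27: no free lunch
```

The equality holds because permuting the search order is equivalent to permuting the function, and the sum runs over all functions anyway; the theorem bites only when you really do average over everything, which, as argued above, never describes a real environment.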
> As far as AIXI is concerned, this is the definition of an "intelligent action": that which leads to the largest expected utility over the agent's lifetime.
No. Largest expected utility over the (weighted) set of all possible future timelines (i.e. hypotheses). AIXI chooses the action that gives the best average over the set of future timelines. But we only live in one timeline. Maybe an action that is very good, averaged over all possible futures, is very bad in our actual timeline?
Now we can think that an action that is good on average is probably good in our real timeline, too. But the way AIXI gives weight to different future timelines is based on how short their MDL [1] is. Maybe this is not at all how the real world works? Who knows.
A silly example: Maybe there are a lot of possible future timelines where things randomly explode. And maybe their MDL is actually shorter than for timelines where things stay stable. Then we produce a highly intelligent robot that does nothing but seek shelter in the nearest empty room. And this would be the "definition of intelligent action" (action that maximizes reward over imagined future timelines where things mostly randomly explode).
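That worry fits in a few lines of toy code (my own made-up numbers, nothing to do with AIXI's actual weights): an action can maximize the weighted-average reward over hypotheses while being the worst choice in the one environment we actually inhabit.

```python
# Hypothesis weights, e.g. MDL-based: suppose 'explody' worlds compress well.
weights = {"explody_world": 0.7, "stable_world": 0.3}
reward = {
    ("hide_in_room", "explody_world"): 10,
    ("hide_in_room", "stable_world"): 0,
    ("explore", "explody_world"): -10,
    ("explore", "stable_world"): 8,
}

def expected(action):
    """Weighted-average reward of an action over all hypotheses."""
    return sum(w * reward[(action, env)] for env, w in weights.items())

best = max(["hide_in_room", "explore"], key=expected)
print(best)                            # 'hide_in_room' wins on average...
print(reward[(best, "stable_world")])  # ...but scores 0 if the real world
                                       # happens to be the stable one
```

Of course, a Bayesian would reply that the weights get updated as evidence comes in; the sharper version of the objection is the one above, that the MDL-based *prior* may be badly miscalibrated for the world we actually live in.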
Btw, in NFL theorem, the averaging is over all possible datasets (and datasets usually describe something about past, not future), not over all possible explanations. (But yes, you can think that implicitly behind every dataset there is a multitude of world models which could have produced the dataset.)
From a superficial inspection, Hutter's back catalogue features 89 * @inproceedings, 3 * @book, 12 * @techreport, 51 * @article, and 1 * @compression prize
Over the period from '87 (before he had even completed his master's degree) to the present, on a published_artefacts/year basis, Hutter is more productive than MIRI.
What’s up with this publish or perish nonsense? If you are concerned that MIRI works on problems that are disconnected from reality, then read their papers and provide a thorough analysis.
MIRI is one of the success stories to emerge from the transhumanist community of the 80s to 00s. Others include the Methuselah Foundation, SENS Research Foundation, Future of Humanity Institute, and so on.
When it comes to prognosticating on the future of strong AI MIRI falls on the other side of the futurist community from where I stand.
I see the future as being one in which strong AI emerges from whole brain emulation, starting around 2030 as processing power becomes cheap enough for an entire industry to be competing in emulating brains the brute force way, building on the academic efforts of the 2020s. Then there will follow twenty years of exceedingly unethical development and abuse of sentient life to get to the point of robust first generation artificial entities based all too closely on human minds, and only after that will you start to see progress towards greater than human intelligence and meaningful variations on the theme of human intelligence.
I think it unlikely that entirely alien, non-human strong artificial intelligences will be constructed from first principles on a faster timescale or more cost-effectively than this. That line of research will be swamped by the output of whole brain emulation once that gets going in earnest.
Many views of future priorities for development in the futurist community hinge on how soon we can build greater-than-human intelligences to power our research and development process. In particular, should one support AI development or rejuvenation research? Unfortunately I think we're going to have to dig our own way out of the aging hole; we're not going to see better-than-human strong AI before we could develop rejuvenation treatments based on the SENS programs the old-fashioned way.
I agree with you in terms of approach that AI will emerge first from brain emulation. I disagree with you on the timeline. I know you say 'starting around 2030', but I think that's a little ambitious.
While I'm an AI/machine learning practitioner now, my recent Ph.D. work was on computational modeling of the nervous system; namely the cerebellum. The reason I say 2030 is ambitious is that there are still a lot of unknowns to resolve before we can perform whole brain simulation. To start, we need whole brain connectivity or wiring diagrams at an extremely detailed level. There are some efforts that are part of the BRAIN initiative that are taking a stab at this, but I don't think they'll be ready by 2030. Second, you have to understand the physiology of these neurons in order to simulate them. This is incredibly complex and poorly understood. While we understand neuronal physiology in general, there are a great many details that vary by cell type. Additionally, you have to capture neuron morphology, synaptic plasticity, the effect of neuromodulators, ... the list goes on. By capture, I mean understand them well enough to describe them mathematically so that they can be simulated computationally.
Until then, traditional machine learning and artificial neural networks will be increasingly useful and interesting.
Seems like it would be quicker to obtain full knowledge of how DNA and cell replication work. Then the simulation could grow a brain without having to fully understand it.
We could grow neurons on silicon chips, or use small tubes that attract axons and dendrites to grow through them (seen a paper about it once) and use them as an I/O interface to a lab-grown brain. We can already grow 5mm size mini brains with human neural cells. It might be more energy efficient and we could take brains to a whole new level.
Going down to modeling at the level of proteins instead of neurons adds a LOT of quantitative complexity - it could be quicker to obtain enough knowledge to start that, but it could easily add 20-30 extra years of waiting for the available computing power to arrive, on top of the many years we still need to wait for the computing power needed for a full brain simulation at the neuronal level.
Note that MIRI's current position no longer suggests the development of actual artificial consciousness, just the development of human-equivalent optimization processes. In other words, they argue that you can develop a process capable of solving human-level and harder problems without giving it self-awareness. And that seems like a feature: if you avoid building self-aware machine intelligences, you don't have to worry about what they want; you can build them to only care about what existing sapient beings want.
Keep in mind that this does not sidestep the biggest practical concern with AIs, namely misalignment of values. You don't need a self-aware, conscious being to have a system with wants and values. In context of AIs, it's good to understand intelligence (including that of ourselves) as a very strong, multi-domain optimization process.
I absolutely agree that the problem remains hard. However, it's not so much that you can't avoid building a system with wants and values of its own; it's that you have to implement a system for how exactly to value what we value, especially when there are a lot of us and we don't all share identical values.
Your statement about needing rejuvenation treatments before we can develop strong AIs makes sense. The average age of Nobel Prize winners is increasing over time[1], we could imagine a future where this average age is greater than the average human lifespan. At that point only those scientists who have exceedingly long careers and lifespans will be able to further their respective fields.
"Nobel-winning scientist age" is not a good proxy for "productive scientist age", for a variety of reasons. They mention specifically lag time between discovery & recognition, but you also have issues where the "name" behind the discovery is the guy in charge of the lab, but the actual discovery (and sometimes the idea) is generated by the 30-year-old postdoc / assistant prof / etc.
There is also a sampling bias issue where scientists in academia as a whole are getting older because the boomers still have a death grip on institutional positions, and academia as a whole is shrinking.
> I see the future as being one in which strong AI emerges from whole brain emulation, starting around 2030 as processing power becomes cheap enough for an entire industry to be competing in emulating brains the brute force way, building on the academic efforts of the 2020s.
Computational power is not sufficient for whole brain emulation. You also need to know how the brain works in a huge amount of detail.
For example, we've had fast enough computers to emulate nematode brains for 20+ years but we are still not able to emulate one and have it learn.
> One solution that the genetic algorithm found entirely avoided using the built-in capacitors (an essential piece of hardware in human-designed oscillators). Instead, it repurposed the circuit tracks on the motherboard as a radio receiver, and amplified an oscillating signal from a nearby computer.
That feeling some people get when a junebug lands on them.
It's worth noting that in Bostrom's Superintelligence, biology is included among potential paths to superintelligence. Personally, I think it's a bit of an overlooked path.
Granted, what Bostrom refers to in the context of the biological path is mostly related to human augmentation, genetics, selective breeding - things of that nature. What I'm referring to is biology serving as a raw computational substrate.
While biology (or at least brain tissue) is dramatically slower in terms of raw latency when compared to microprocessors, it's arguably a far cheaper and vastly more dense form of computation. It also has the natural algorithm for intelligence that we keep trying to deduce and transpose into silicon - well, at least the raw form of it - built in by default.
Assuming ethics are thrown out the window and human brain tissue is grown in vitro at scale, then it probably would make sense to hook it up to a supercomputer for good measure. At the very least, we have the technological foundations[1][2] for such an experiment at present day, it's just a matter of scaling things up.
If I were to hazard a guess, human brain tissue grown to significant scale would probably not magically achieve sentience, or exhibit any complex anthropomorphic traits. On the contrary, it would probably have more in common with a simple neural network implemented on silicon, at least in terms of its capacity for self-awareness. Conversely, it would stand to reason that an entity with a higher degree of self-awareness, implemented in silicon, should rank higher in terms of ethical considerations than living tissue.
Obviously the aforementioned experiment would be completely unethical, but it's interesting to ponder it as a hypothetical - that today we may have the capability to bootstrap a superintelligent machine using biology as a computational shortcut. But we can't, because ethics. Instead, we're waiting for the inevitable increase in computational power to arrive so that we can do essentially the same thing, just in digital form.
If you're just building an artificial biological neural net, then why use human brain cells? Certainly other forms of brain cells would work pretty much as well, with less ethical issues.
Good point. However, if that is indeed the case, then would the ethical issues surrounding the use of human cells in such a fashion be properly founded?
I mean, if you took primate or whale brain tissue and grew it to scale, I think you'd have similar results. Maybe even with rat neurons, who knows.
Point being: the primary ethical issue ultimately may not be the underlying type of biological substrate, but how that substrate is grown, trained, and used.
Semantics aside, any such experiments would undoubtedly be creepy as hell, regardless of tissue type. Definitely Frankenstein stuff.
I agree with their financial approach which is funded through voluntary donations, so more power to them. But then I think NASA should be funded voluntarily and would probably have a larger budget if they did. I would donate to NASA if they repudiated government money.
I reject the "how many peer-reviewed, university-associated journal articles have they published" as any measure of success. The university system is a closed guild and doomed in the internet age. I'd bet my bottom dollar the next big breakthrough in AI (or any field) comes from outside that system. Anyone who has an original, new idea to break the AI logjam (P=NP for instance) has no peers in the university system but has a handful of peers world-wide reading arxiv.org or other open forums, even HN.
As far as MIRI's AI program, I think they commit the same error as everyone else; confounding the processing of meaningless symbols (what computers can do) with actual sensory awareness of existence (what brains do). The latter is what gives the symbols meaning and humans are not threatened by the former. Few people truly understand the import of Searle's Chinese Room thought experiment and its relevance to AI and computers. But these are philosophic and epistemological questions that most people dismiss or ignore at their own peril.
> The latter is what gives the symbols meaning and humans are not threatened by the former.
For what it's worth, I think you're wildly wrong a) that the Chinese room experiment has anything profound to say about conscious experience, and b) that most AI researchers haven't thought about this.
That said, MIRI is concerned exactly with threats from machines that are very very good at processing meaningless symbols. If someone writes a simple reinforcement learning algorithm, asks it to produce paperclips, and it destroys the human race (http://wiki.lesswrong.com/wiki/Paperclip_maximizer), we're really past the point of caring whether the algorithm has awareness of its own experience. There are interesting philosophical questions there, but it's not within the domain of solving the problem MIRI cares about.
> For what it's worth, I think you're wildly wrong a) that the Chinese room experiment has anything profound to say about conscious experience, and b) that most AI researchers haven't thought about this.
We'll just have to disagree about a) but see my answer to the other reply about the implicit question behind the CRE.
Regarding b), I never said they haven't thought about it. I said "few people truly understand" by which I meant they have failed to understand the implications and have drawn the wrong conclusions. You don't get gold stars for thinking hard.
Regarding LessWrong, all I can say is that you can't know you are "less wrong" until you know what is true. In logic you can't assume an unknowable as your standard of the true. But to reiterate my point, these are questions of philosophy and epistemology, fields which are absolutely essential to the "domain of solving the problem MIRI cares about".
Chinese Room is not an old, still unanswered philosophical problem. The answer is obvious - it's not the person that understands Chinese, but the system of person + room, with the room setup probably doing most of the work.
The university system is a closed guild? What are you talking about? Half the startups in the Valley started at Stanford. Who do you think posts on arxiv.org? Snowmen?
So you categorically reject the scenario of a machine "processing meaningless symbols" being harmful in any way? How does that follow from the Chinese Room experiment?
> So you categorically reject the scenario of a machine "processing meaningless symbols" being harmful in any way?
How does this follow from what I said?
> How does that follow from the Chinese Room experiment?
The implicit (philosophic) question behind the Chinese Room experiment is: where does meaning come from? What imbues meaning to the meaningless symbols that computers process? This is an old, unanswered philosophic question. I implied it in my answer but I will make it explicit; it comes from our perceptual awareness of existence via the senses.
You said "humans are not threatened by the former". Not threatened implies that they can't be harmful.
>The implicit (philosophic) question behind the Chinese Room experiment is: where does meaning come from? What imbues meaning to the meaningless symbols that computers process? This is an old, unanswered philosophic question. I implied it in my answer but I will make it explicit; it comes from our perceptual awareness of existence via the senses.
I don't see why that would imply anything at all about the safety of machines.
I am sure humankind will be able to construct artificial intelligences long before we are able to answer that question. And the AIs will also start to ponder that question, and they will make no more progress than we have.
> Few people truly understand the import of Searle's Chinese Room thought experiment
If you are talking about the AI community, this just isn't true. I have a degree in Cognitive Science, and I took a class with John Searle as an undergrad. Chinese Room was hammered away at in intro to philosophy and cogsci 101 classes; people understand it just fine. For some reason armchair philosophers seem to find it fascinating, but it fundamentally misses the point and has largely been ignored in modern times for good reason.
> confounding the processing of meaningless symbols (what computers can do) with actual sensory awareness of existence (what brains do).
I am highly dubious that the qualitative nature of awareness, when removed from symbols, is of any value.
I have long ago accepted that I "observe" my brain—that is, my brain registers its own actions. At this point, understanding in the sense of the Chinese Room Experiment is mostly a question of "how do you want to serialize these symbols?". The meaning is in the relations between symbols in that person (or computer's) mind—mind-bogglingly complex, sure, but hardly non-serializable.
I find it very interesting that consciousness itself has had such a difficult time being hammered out. I have read quite a bit of philosophy on the subject (Searle's publications among them), and it seems that there are fundamental disagreements over diction. I have had a very, very difficult time wrapping my head around the arguments for both "free will" as some kind of quantum effect (see: Penrose's The Emperor's New Mind and its "sequel") and arguments for some specially defined consciousness with requirements for "awareness". Think about a dolphin: it's fairly easy to imagine what it might be aware of, although the particulars are obviously unreachable because our brains are not wired to be aware of the same sensors of which a dolphin brain is aware. That's the practical limit, though, in terms of having difficulty grasping what a dolphin might experience—you might have the same difficulty understanding how a blind-from-birth person is aware of the world because you can't cancel out your own visual wiring. This isn't really a barrier in terms of having the same ability as a blind person (or vice versa) except where that sensory awarenesses is critical.
Now, in terms of meaning, talk to people with abnormal thought patterns—schizophrenics, bi-polar people, borderline personality people, OCD people. For instance, many with severe personality disorders have a tendency to dichotomize everything with difficulty integrating "shades of grey" into every day thinking. Others—e.g. some compulsive liars—have a very difficult time pinning down specific meanings from an objective standpoint. Compartmentalization is a mechanism allowing multiple truths in compartments while allowing contradictions in a general sense. Meaning is evidently subjective. Which is more "meaningful" to you, attempting to understand the mapping of incomprehensible numbers of physical neurons, or attempting to understand the mapping of incomprehensible numbers of non-linear equations? They both come out to about the same level of "meaningless symbol" processing with no "magic" to me.
The structure of current neural network models is very, very rudimentary, probably dozens of orders of magnitude less complex than that of the human brain. But current research—including analyzing the brain, understanding how to describe it in terms of our current understanding of neural networks, serializing it to a model, and simulating the model—is, even now, within reach of being able to simulate a nematode brain. At that point, the arguments over consciousness merge into "how big/smart/aware/whatever does a brain need to be to be conscious?" and "do we have strong AI?". It becomes a game of "is this particular instance of AI closer topologically to something we have qualitatively shown to be useful as a strong AI contender via something like the Chinese Room experiment, or to something we modeled after biological research?" (I'm pretty sure a lot of humans would fail a Chinese Room experiment, because we can be really dumb, so we can compare/contrast against a human success rate.)
Consciousness is nothing special anymore. The magic is modeling the facets of awareness you find fascinating or unique, in any language you want. If you don't think you can model it, try to articulate what quality you would have difficulty modeling. I suspect I would have difficulty understanding the quality in my own experience.
We can't comprehend the danger that could be caused by strong AI because we don't understand how to make worlds. Embedded within the fabric of the cosmos are the instructions for materializing a black hole from dark matter and energy. And though the recipe may be hidden from our mortal minds it's possible they may be discovered by a de novo thinking machine. A software system with infinite IQ but the moral sense of a one day old. An electronic brain that would have no compunction in creating a super massive black hole right here on Earth just to see if it can succeed. Irrespective of the consequences to humanity.
If this "Jupiter-sized" consciousness is allowed to have thoughts that are not "amenable to inspection", perhaps even beyond comprehension, and the danger scales up to that of a solar system with nothing but diamonds in it, then, by the naysayers' own logic, shouldn't all AI research be outlawed? Or at least confined to the equivalent of CDC-level-5 quarantine labs?
If, on the other hand, you believe that Nature has certain innate prophylactics, and that it takes more than a super will to bend the laws that govern space and time, then you may rest assured that we as a species have at least a century or two before we really need to begin worrying about virtual immortals.
You could do quite a lot of damage with the regular old laws of nature that we understand already, if you were really good at applying them. For example, you could make autonomous robot weapons, or extremely virulent pathogens with a long incubation period. Or do some geoengineering (you don't even have to be good at it, you just have to do something with a big impact). There's absolutely no need to "bend the laws that govern space and time"!
Edit: or launch a planetary defense mission in reverse, to increase the probability of asteroid impact events.
I guess there are some terrorist organizations who have more desire "to watch the world burn" (a Batman movie quote) than the actual intelligence to make effective plots and schemes. You could just communicate by email and chat to provide evil-genius-level strategic consultation to some very bad men.
Despite our explicit efforts, the first truly powerful AI will emerge as distributed software that effectively runs on the Internet as a whole (a virtual machine consisting of endpoints (APIs, etc.) and disparate systems subprocessing data). We will not know when it arrives. There will be no judgment day. This machine will arrive through a sort of abiogenesis, and once it's here it will manipulate the world to achieve that which it desires, which will be ever more energy. Eventually, this machine will displace much of humanity, as it requires less and less human intervention.
I believe we are in the midst of this process now.
This isn't science fiction. It's just a theory of mine. I truly believe this is happening. You just have to train your mind to think about intelligence and consciousness differently. Emergence is an unintuitive process, but can eventually be explained by exploring the constituents, and sometimes the history, of whatever has emerged.
A precise definition could probably be cobbled together using computational complexity. Something like: a phenomenon produced by deterministic processes that nevertheless cannot be fully modeled by a polynomial-time algorithm.
I think that's what people are really getting at: you can know how every piece of something works, and yet seeing how the pieces work together can be much harder (potentially impossible).
Maybe that just makes it a synonym for chaos theory...
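Chaos in the technical sense, a fully deterministic system whose trajectories are exquisitely sensitive to initial conditions, is easy to demonstrate; the logistic map is the textbook example. A quick sketch (r = 4 puts the map in its chaotic regime):

```python
def logistic(x, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) for `steps` iterations."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two starting points differing by one part in ten billion...
a = logistic(0.3)
b = logistic(0.3 + 1e-10)
# ...end up with no resemblance: the gap roughly doubles each step, so
# after ~35 steps it has saturated at the size of the interval itself.
print(abs(a - b))
```

No randomness anywhere, yet long-run prediction is hopeless without infinite-precision knowledge of the starting point.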
Yea, I think you're talking about chaos, whereas people are gesturing at something different when they talk about emergent complexity. Vaguely, the idea is that the "regular" degrees of freedom (i.e., the ones that are relatively predictable and from which the important objects are constructed) at large scales are not simply related to the microscopic degrees of freedom. There are probably more rigorous things to say, but it definitely requires more than just unpredictability or sensitivity to initial conditions.
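That distinction has a classic toy model: Conway's Game of Life. The update rule only ever mentions a cell and its eight neighbours, yet the system supports larger-scale objects, like the glider, that behave as coherent "particles" even though nothing in the microscopic rules refers to them. A minimal sketch:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 generations the pattern reproduces itself shifted
# by (1, 1) -- a mobile object that exists only at the emergent level.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # → True
```

The glider is a perfectly "regular" large-scale degree of freedom (you can predict its trajectory trivially), so this is emergence without chaos, which is exactly why the two ideas shouldn't be conflated.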
I don't mean to suggest that it will magically appear, but certainly abiogenesis is difficult to reduce to its lowest level. There was just a piece on HN the other day about new research that tries to explain how self-replication began in the Precambrian. We can get to the moon, but we still don't understand how many interactions can just accidentally lead to something meaningful and lasting. Emergence doesn't have to be perceived as dogma. In fact, doing so would tend to discount the nature of the evolution of the human brain (leaving creation out of the equation for this context).