The term “Singularity” had a much narrower meaning back when the Machine Intelligence Research Institute was founded. Since then the term has acquired all sorts of unsavory connotations. The kind of Singularity I work on has little to do with Moore’s Law. So forget the word; here’s the issue:
Since the rise of Homo sapiens, human beings have been the smartest minds around. But very shortly – on a historical scale, that is – we can expect technology to break the upper bound on intelligence that has held for the last few tens of thousands of years. Artificial Intelligence is one of the technologies that potentially breaks this upper bound.
The famous statistician I. J. Good coined the term “intelligence explosion” to refer to the idea that a sufficiently smart AI would be able to rewrite itself, improve itself, and so increase its own intelligence even further – a positive feedback cycle that would shoot upward and arrive at superintelligence, something far more capable than a human.
If you offered Gandhi a pill that made him want to kill people, he would refuse to take it, because he knows that then he would kill people, and the current Gandhi doesn’t want to kill people. This, roughly speaking, is an argument that minds sufficiently advanced to precisely modify and improve themselves, will tend to preserve the motivational framework they started in. The future of Earth-originating intelligence may be determined by the goals of the first mind smart enough to self-improve.
My long-term research program is to work out a formal theory of such matters: describing how a mind could modify itself deterministically and precisely, including modifications to the part of itself that does the modifying.
A good deal of the material I have produced – specifically, everything dated 2002 or earlier – I now consider completely obsolete.
Singularity discussions seem to be splitting up into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion.
How advanced artificial intelligence relates to global risk, as both a potential catastrophe and a potential solution. Contains considerable background material in the cognitive sciences, and conveys many of my most recent views on intelligence, AI, and Friendly AI.
In our skulls we carry around 3 pounds of slimy, wet, greyish tissue, corrugated like crumpled toilet paper. You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe.
If you believe professional bioethicists (people who get paid to explain ethical judgments) then the rule “Life is good, death is bad; health is good, sickness is bad” holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good?
This is a 5-minute spoken introduction to the Singularity I wrote for a small conference. I had to talk fast, though, so this is probably more like a 6.5-minute intro.
When we build AI, why not just keep it in sealed hardware that can’t affect the outside world in any way except through one communications channel with the original programmers? That way it couldn’t get out until we were convinced it was safe. Right?
How much fun is there in the universe? What is the relation of available fun to intelligence? What kind of emotional architecture is necessary to have fun? Will eternal life be boring? Will we ever run out of fun?
To answer questions like these… requires Singularity Fun Theory.
Book chapter I wrote in 2002 for an edited volume, Artificial General Intelligence, which is now supposed to come out in late 2006. I no longer consider LOGI’s theory useful for building de novo AI. However, it still stands as a decent hypothesis about the evolutionary psychology of human general intelligence.
Takes a stab at saying what we might wish to do with a Friendly AI if we had the technical knowledge to build one.