Obsolescence Notice
This document has been marked as wrong, obsolete, deprecated by an improved version, or just plain old.

The Plan to Singularity

Version 1.0.10.

© 1999 and © 2000 by Eliezer S. Yudkowsky.
All rights reserved.

Table of Contents

"The Plan to Singularity" is a concrete visualization of the technologies, efforts, resources, and actions required to reach the Singularity.  Its purpose is to assist in the navigation of the possible futures, to solidify our mental speculations into positive goals, to explain how the Singularity can be reached, and to propose the creation of an institution for doing so.

NOTE: Since the creation of the Singularity Institute in July 2000, much of this document has become obsolete.  In particular, the whole concept of starting an AI industry may turn out to be unnecessary; the Singularity Institute does not currently plan to develop via an open-source method; the entire technological timeline has been compressed into three stages of a seed AI designed by a private research team; and, of course, most of the sections dealing with how to establish a Singularity Institute are obsolete.

These strategic changes are largely due to improvements in the understanding of seed AI - see Coding a Transhuman AI 2 - which may make it possible to develop a seed AI using fewer resources than previously thought.  If the problem of seed AI turns out to be less tractable than expected, the strategies described here would still be a valid fallback plan.

This document comes in two versions, monolithic and polylithic.  You are reading the monolithic version.  This version of the document is intended for continuous reading, or downloading to local disks.  The polylithic version of the document is intended for incremental reading and light browsing.

This document was created by html2html, a Python script written by Eliezer S. Yudkowsky.
Last modified on Wed Apr 25 22:54:25 2001.

May you have an enjoyable, intriguing, and Singularity-promoting read.
Eliezer S. Yudkowsky, Navigator.


The Plan to Singularity ("PtS" for short) is an attempt to describe the technologies and efforts needed to move from the current (2000) state of the world to the Singularity; that is, the technological creation of a smarter-than-human intelligence.  The method assumed by this document is a seed AI, or self-improving Artificial Intelligence, which will successfully enhance itself to the level where it can decide what to do next.

PtS is an interventionist timeline; that is, I am not projecting the course of the future, but describing how to change it.  I believe the target date for the completion of the project should be set at 2010, with 2005 being preferable; again, this is not the most likely date, but is the probable deadline for beating other, more destructive technologies into play.  (It is equally possible that progress in AI and nanotech will run at a more relaxed rate, rather than developing in "Internet time".  We can't count on finishing by 2005.  We also can't count on delaying until 2020.)

PtS is not an introductory-level document.  I am assuming you already know what a Singularity is, why a Singularity is desirable, what an Artificial Intelligence is, and so on.  For more information about the Singularity, see An Introduction to the Singularity.

As with any other document I publish, I guarantee no perfection and claim no authority; I only believe that publishing the document will prove better than not publishing it.  As these words are written, PtS is the very first attempt to sketch a complete path to Singularity; it has no competition.  If you're reading these words in a time substantially after 2000, I hope you will appreciate the historical context.

The future sketched in PtS is not intended as speculation.  I intend to spend my life making it real.  If you believe that the Singularity is a worthwhile goal, and you're interested in making it happen, consider joining the Singularitarian mailing list.

NOTE: Since the creation of the Singularity Institute in July 2000, much of this document has become obsolete.  See note above.

Guide to Contents

1: Vision is a high-level introduction to the PtS plan; it describes the top-level goals and the reasons for the top-level goals.  Whether you plan to browse or read straight through, start here.

2: Technology sketches the sequence of technologies leading up to the Singularity.  (Obviously, this section is not intended to contain a complete technical whitepaper for the next ten years.  The technologies are introduced, rather than explained.  Nonetheless, where the technical architecture has consequences for the PtS strategy, I get technical.)  2.1: Component technologies introduces the Aicore architecture for artificial intelligence and the Flare programming language.  2.2: Technological timeline describes the specific path taken to Singularity, although the purpose of such early technologies as Flare may not become clear until 2.2.9: Self-optimizing compiler.

3: Strategy describes how the PtS goals will be accomplished.  In each category, a "timeline" section describes what will be done in the short-term, mid-term, and long-term.  Other sections discuss miscellaneous questions of strategy, or how to deal with problems that are likely to crop up.  For maximum reading ease, read in the given order.  3.1: Development strategy discusses the task of creating the necessary technologies.  3.2: The Singularity Institute discusses the administrative backbone required.  3.3: Memetics strategy discusses the tasks of finding help, not creating opposition, and the art of writing about the Singularity.  3.4: Research strategy discusses how to handle any further research required.  3.5: Miscellaneous contains issues that affect general strategy (3.5.1: Building a solid operation) or details that don't fit under any specific heading (3.5.3: If nanotech comes first).

4: Initiation describes what has to be done to get started, the people needed to do it, and how much it's all going to cost.

Appendix A: Navigation describes how the content of PtS was determined by the structure of the spectrum of possible futures.  For example, this section includes the reason why developing AI is easier than developing intelligence enhancement, why 2010 is the target date, and why 2005 would be better.

Those of you who are just in it for the future shock will probably enjoy 3.5.3: If nanotech comes first and 2.2.9: Self-optimizing compiler through 2.2.14: Transcendence.

NOTE: Since the creation of the Singularity Institute in July 2000, much of this document has become obsolete.  See note above.

1: Vision

The Singularity, by the old definition (1), is the creation of greater-than-human intelligence.  The Singularity, as the goal pursued by Singularitarians, is the existence of at least one transhuman with enough power to prevent catastrophe and take the next step into the future.  Failure, as a negative goal, is any event that sterilizes Earth and destroys all intelligent life in the Solar System, thus permanently preventing the Singularity.  I think most of us want to share the Singularity with as much of humanity as possible, so widespread war and billions of deaths would probably count as at least a partial failure.

The best candidate for creating the Singularity is Artificial Intelligence; the technology most likely to wipe out the human race is nanotechnology.  (See Appendix A: Navigation.)  To win, we need to create an AI.  To avoid losing, we need to outrace nanotechnology.

We must create a "seed AI", initially dumber than human, but capable of redesigning itself to increase intelligence, and re-redesigning with that increased intelligence, until transhuman ability is reached.  (See the page Coding a Transhuman AI.)  We must ensure that the computing hardware exists to run that AI, and that we have access to that hardware.  We must do all this before nanotechnological warfare becomes capable of completely destroying humanity, and ideally before nanotechnological warfare has wiped out a substantial fraction of the human race (2).  To have a good chance of outracing nanotech, we should plan to develop a seed AI by 2010.  (3).
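The redesign loop just described can be sketched as a toy growth model.  This is a purely illustrative sketch: the starting level, the "transhuman" threshold, and the growth law are arbitrary assumptions of mine, not claims about real AI capability or timelines.

```python
# Toy model of recursive self-improvement (illustrative assumptions only):
# each redesign cycle multiplies capability by a factor that itself grows
# with current capability, so progress accelerates as the AI gets smarter.

def redesign_cycles(start=0.5, transhuman=10.0, gain=0.2, max_cycles=1000):
    """Count redesign cycles until capability crosses the (arbitrary)
    transhuman threshold, or give up after max_cycles."""
    capability = start
    for cycle in range(1, max_cycles + 1):
        # A smarter system makes a proportionally larger improvement.
        capability *= 1 + gain * capability
        if capability >= transhuman:
            return cycle, capability
    return None, capability
```

In this toy model the early cycles barely move the needle, while the late cycles more than double capability each time, mirroring the claim above that a seed AI starts out dumber than human and finishes transhuman.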

We need to develop an AI fast, with the same kind of hypergrowth seen in the creation of the Web - what's known as "Internet time".  A private effort probably won't be enough to get that kind of speed; it'll take an industry.  As with the creation of the Web, only the core technologies should be developed by private teams, or in this case, Singularitarian efforts.  The flesh, the content, should be distributed across the shoulders of a planet.  Even the core architecture should be an open-source effort.

The primary thread of PtS deals with creating an open-source AI architecture, an AI industry, a seed AI, and accessible hardware.  Other efforts are required to support these goals and create an environment favorable to success.  We should encourage a community spirit in Silicon Valley that actively favors a Singularity, and encourage an atmosphere in the wider population (and politics) which either favors a Singularity or will not take action against it.  A nonprofit organization - the Singularity Institute - is needed to fund the initial prototype, provide Web and legal infrastructure for the open-source effort, provide funding for the final seed-AI development project, and provide funding and an administrative nucleus for other efforts.

1.1: Open-sourcing an AI architecture

In my visualization of AI development, a tremendous amount of original, brilliant coding and architectural design is required to create a seed AI.  It's not a matter of a few simple reductive principles and a lot of hardware, or even a few simple principles and a lot of knowledge, as previous paradigms in AI have usually claimed.  This is understandable, given that the paradigms of modern AI were born long before the Internet era, in the 50s, 60s, 70s, and occasionally 80s.  The AIers who built the field planned on a scale of small projects, and tried to implement those projects on computers that modern pocket calculators would sneer at, so it isn't surprising that they adopted theories of cognition that promised success with those limited resources.

Even if it turns out that artificially designing a self-improving mind is easier than biologically evolving a non-reflexive one, coding a mind is likely to be a huge project.  An effort that attempted to "go it alone" would spend all its resources on writing, debugging, and testing a few simple algorithms, and developing rudimentary features of the tools to write the tools.  A true mind is simply too complex to be developed by any one project with a realistic level of funding.

The PtS plan seeks to farm out the effort of AI development.  One of the primary methods is developing the core AI architecture - Aicore - through an open-source effort, as with Linux, Apache, Python, Perl, and many other names of honor in the computing industry.

DEFN: Open-source:  Software in which the source code is free, as is the software itself.  Open-source software is partially or wholly developed by a distributed group of users/testers/developers working with the open source code and submitting changes to an open forum.  See the Open Source Definition for the formal definition, The Cathedral and the Bazaar for an early introduction, and The Magic Cauldron for a later and more rigorous analysis of the economics.

By open-sourcing the core architecture, we will reduce the amount of Singularitarian resources required to build, test, and debug (especially test and debug) the core tools.  Building an AI is likely to involve a number of fundamental programming design innovations.  (See 2: Technology.)  As a closed effort, each brilliant new idea represents a brilliant new drain on resources.  As an open-source project, each brilliant new idea, if the idea is brilliant enough, will attract more programmers to the project.  Open source acts as a force multiplier, particularly where bright ideas are concerned.

The Cathedral and the Bazaar also notes the usefulness of open-source for exploiting the design space; that is, open-source users are capable of coming up with bright ideas, useful new features, and even more elegant design architectures.  Open-source users contribute intelligence as well as labor, and to build an AI, we'll need all the intelligence we can get.

Of course, open source also requires a pool of users.  It might be possible to attract a sufficient programming population through the sheer coolness factor of open-source AI, not to mention the high altruism of the Singularity itself.  Every true hacker (4) wants to code an AI and save the world; it's part of the job description.  But I don't intend to rely on that, which brings us to 1.2: Creating an AI industry.

1.2: Creating an AI industry

The open-source AI architecture is only half of the equation; we may compare the architecture to the HTML and TCP/IP protocols underlying the World Wide Web.  The content is another question, and that question is:  "What is the AI doing?"

Eurisko, designed by Douglas Lenat, is the best existing example of a seed AI, or, for that matter, of any AI.  (If you haven't heard of Eurisko, please see footnote (5).)  If "promising" performance is defined as "doing at least one thing a human hasn't done", this being the characteristic that creates the potential for profitability, then Eurisko exhibited promising performance in areas from game-playing to VLSI circuit design.

I believe this level of intelligence and generality can be exceeded, or at least matched, by the AI design paradigms assumed in PtS.  (See 2.1.1: The "Aicore" line and Coding a Transhuman AI.)  Even matching Eurisko should be enough for the first stages.  (If Eurisko's source code were available, there'd be a thousand programmers playing with it right now.)  With luck, the existence of an Aicore architecture will be enough to create the potential for profitable performance in hundreds or thousands of domains - some significant fraction of the domains encountered by modern-day IT (7).

However, trying to replace human experts or match human creativity - historically the great failed venture-capitalist-attracting promise, the shiny sparkly minefield of AI - is not the task I would choose for creating an AI industry.  I believe in mundane AI.  (8).  I think that the New Promise should be significantly decreasing the cost of IT development; AI as a quiet, behind-the-scenes programming tool.  First as an intelligent debugger to provide the "codic cortex" humans lack; later as a part of the core architecture of the program, so that the language itself has a certain amount of common sense.  (See 2.2: Technological timeline.)

The New Promise of AI
The use of artificial intelligence can reduce the cost of software development, speed the development process, improve reliability, and increase the usability of the software.  AI can provide a framework for programming, assist with debugging, and simplify program maintenance.

I believe that source code is the natural domain of AI, the ancestral savannah of computer-based intelligence, and that an AI with domain-specific intelligence targeted on programs should be an enormous aid to the human programmer.  After all, humans don't have a codic cortex, so we've always been in the position of a human without a visual cortex drawing a picture pixel by pixel.  A human programmer is a blind painter (9).

IT presently accounts for half of all capital expenditures in the United States.  A significant reduction in development costs should be more than enough profit-motive to fuel the widespread adoption of our AI.  It should be enough profit-motive to fuel the creation of an industry centered on our AI, in the same way that Linux has changed from a free operating system to an industry centered on a free operating system.

Once an AI industry exists, the development effort should have a wider pool of open-source volunteers and more contributed improvements.  There'll also be the profit-motive to develop better AI - better applications for our architecture - with many private organizations trying new ideas.  And finally, if the AI has the capacity to improve itself, learn, or develop heuristics, there'll be much more computing power devoted to generating shareable improvements.  In that helpful environment, doing the core research to move along the timeline should be more like flying and less like wading through molasses.  We'll be able to develop the potential for a group of features, release the potential, and get back the features.

This vision has several technological consequences:

  • There has to be a strong real profit-motive for adopting existing AI and developing new AI.  The AI has to be more than a sparkly toy.  Ideally AI should be part of the program architecture, and that architecture should provide a major improvement in development time, or performance, or interoperability, or usability.  If the AI is allegedly "thinking" about the domain, then that should be a data-mining feature integrated into an existing application and used every day by mortal end-users, not a once-off investigation running on a supercomputer.
  • There has to be a strong perceived profit-motive for adopting AI.  The AI has to create successes, and it has to create visible (or even spectacular) successes (10), and people have to see it creating successes, and the benefits have to be described on a T-Shirt.
  • Once the gold rush starts, the AI architecture has to be such that everyone can easily swap the components they independently develop - HTML's open nature, its "stealability", was a key part of the Web revolution.  Ideally, the components should interoperate automatically as part of the system architecture.  Failing that, if programmer effort is required, there has to be a developer base which will make the effort to "compile" integrated versions of popular modules; ideally there should be a culture that freely distributes the results, and free availability of the base components.  (11).
  • There has to be an incremental path between modern-day technology and a true go-for-broke seed AI.  There has to be a profit motive to develop, or adopt, each increment.  Also, if possible, blind alleys should be avoided - by which I mean, the path shouldn't be littered with attractive opportunities for people to crap up the technology or lock developers in.  (12).

1.3: Spreading the right memes

The larger our support base, both of active Singularitarians and of people who are kindly inclined, the better our chances of getting to the Singularity.  As yet, there are no groups directly opposing our immediate purposes; we should try, as much as possible, to keep it that way.  These are the two goals that need to be served by memetic efforts.

The memetic task is further complicated by the number of audiences being targeted.  We need to target Internet tycoons and programmers (particularly open-source programmers) with the full Singularitarian meme.  We should try to target the rest of the technophilic populace, from SF fans to the readership of Wired, with Singularity-ownership memes.  We need to worry about the reactions of CEOs, Greenpeace, politicians, TV reporters, teens, journalists, televangelists, honest religious fundamentalists, the middle class, truck drivers who've lost their jobs, "disadvantaged youth" and the "urban poor".  And that's just in America.

The circumstances needed for the easiest, safest path to Singularity can be compactly stated:  We need rich Singularitarians in Silicon Valley, open-source programmers who believe in seed AI, CEOs who don't object to using AI and are even attracted by the sparkle, supercomputing vendors who either believe or turn a blind eye when the time comes to run the Last Program, no interference from politicians, no fad television programs about the Singularity, and citadels of technophobia worrying about something else.

Is the safe path possible?  Probably not.  It relies on nobody outside the technophilic minority hearing about the Singularity, believing it if they did, or spreading the meme if they didn't.  That's a pretty fragile situation.  The Singularity is an awesomely powerful meme, and I have the feeling that if we don't spread it, someone else will.  The ethical question involved in leaving "the average guy" out of humanity's victory is thus somewhat moot.  While targeting only technophilic audiences may remain the wisest use of resources in the short term, everyone else (13) will find out eventually.

The "But someone will" rule also simplifies the oft-asked question of "Shouldn't we tone down the Singularity meme, for fear of panicking someone?"  In introductory pages and print material, maybe.  But there's no point in toning down the advanced Websites, even if technophobes might run across them.  Given the kind of people who are likely to oppose us, we'll be accused of plotting to bring about the end of humanity regardless of whether or not we admit to it.  (14).

Only in the early stages will we be able to choose the material presented and the target audience; in later stages, if we're not fortunate enough to manage a rapid, quiet Transcendence, we'll be dealing with a rapidly evolving memetic environment containing every kind of idea about the Singularity.  We can take for granted that the negative memes we're afraid of will come into existence and propagate.  We have to either get there first with positive memes, or develop counter-memes that get there first, or create positive memes that can out-propagate the negative memes, or corrupt the negative memes so that they don't result in active opposition.

1.4: Starting a Singularity Institute

All else being equal, the tasks outlined in PtS will go faster if there are people working on them full-time (15).  Some tasks, like running the Last Program (16) on rented supercomputing hardware, are likely to require large-scale funding (17).  Finally, there should be an obvious target for people who would like to help out the Singularity via cash donations, preferably in a tax-deductible fashion.  A nonprofit (18) institution devoted to providing Singularity infrastructure, a "Singularity Institute", would seem to be required.

The right kind of nonprofit (19) would be eligible to apply for grants from private foundations, which, especially during the initial stages, may be our best bet for funding.  It would also be possible for individuals to make tax-deductible donations.  Later on, my hope is that the Singularity will prove a popular cause among Silicon Valley millionaires; in the long term, this will probably be the major source of funding.  I'm not particularly counting on the middle class for broad-based support, since even institutions that actively solicit small donations typically get 80% of their funding from a few large donors.

In the beginning, I expect the Singularity Institute to employ one or two full-time developers; once we have a major sponsor, or we successfully apply for grants, this will go up to a couple of dozen people including some Singularity PR people and a few researchers.  This may not be enough to reach Singularity in 2005 or 2010, but given unlimited time, we can probably get all the way to the Singularity with no higher level of funding.  Since our time probably is limited, we should try to build a stronger operation.  If we can get Silicon Valley to adopt the Singularitarian ideal, the Singularity Institute might have enough funding to sponsor massive PR efforts, run dozens of research projects, and start subsidiary institutions.  Anything beyond that is probably superfluous, although I sincerely doubt that we'll ever run out of uses for money.

For more on the Singularity Institute, see 3.2: The Singularity Institute.  For a description of the initial people required, see 4.2: Institute initiation.  For guesses at the amount of funding required, see 4.1: Development initiation and 3.2.1: Institute timeline.

1.5: Dealing with opposition

A substantial fraction of the population is likely to react badly to the prospect of Singularity - either because it contradicts deeply held moral principles, or because they learned their reflex reactions from watching Star Trek.  We should emotionally accept the possibility of government interference, and be prepared to move against attempts to regulate the development of AI, or evade those regulations if they are successful.  (We should also oppose attempts to turn the public against the Singularity, even if no government regulation is immediately proposed; the general battle over how the Singularity is perceived comes under 1.3: Spreading the right memes.)

The probability of public or governmental opposition is a primary reason why running the Last Program distributed over the Internet, formerly a major part of the PtS vision (20), has been abandoned.  With that tempting target for regulation (or public protest) gone, and the necessary public exposure reduced, there's a much smaller chance of crippling legislation being passed.

Unless the Singularity becomes a major public issue, a complete and enforceable ban on AI research is not likely in the United States.  Technophobia is more likely to find outlets in bans on government funding, regulations requiring public disclosure, and so on.

There are a few psychologically plausible pieces of legislation - which I see no need to be more specific about - that would impose enough inconvenience to force the open-source project to move to Australia or, if necessary, China.  I'm not saying we should be ready to move on a minute's notice, but it's something to bear in mind.  (For example, we should back up all our materials in several offshore locations, so that nobody can prevent us from taking our information with us when we move.)  This state of readiness should also help prevent a ban on AI; if it's absolutely clear that banning AI would simply move the project overseas, to the detriment of American (or Australian, or English) industry, there's some slight chance that the legislators involved will see reason.  (But not much, so we need to be ready to actually move.)

There are also technological precautions we can take against a complete ban on AI.  We should be ready to switch the open-source administrative structure from being public and centralized to being anonymous and distributed, with source code submitted via PGP and participant identities protected.  In short, we may need to go underground, and that's something to keep in mind while organizing the project and writing the code.  I'm not suggesting that we be ready to disappear on a moment's notice, since that would take a lot of work that might turn out to be unnecessary, but I am suggesting that we bear it in mind.

I would also suggest encouraging encryption (whether governments like it or not), particularly ubiquitous protocols like Secure IP (21), just in case it turns out we do need distributed computing.

"Dealing with opposition" may also include dealing with groups that resort to extra-legal means to oppose us.  Aside from locating our working sites under the protection of police forces principled enough not to "look the other way", I don't see any particular aspect of this task that should be discussed in advance.  (It is, however, something to bear in mind.)

1.6: Protecting the IT industry

The computer industry is our base.  It's not enough to start a gold rush; we have to ensure the miners are healthy.  In particular, we have to ensure that Moore's Law (23) keeps on trucking, and that the software industry remains viable.

A little-known adjunct of Moore's Law is that the capital required to build a chip fabrication plant also keeps doubling.  We have to ensure that hardware demand remains strong.  (24).  Bill Gates is famous for telling Intel that no matter how much power they supplied, he would "develop some really exciting software that will bring the machine to its knees".  Of course, that was some time, and a factor-of-1000 performance improvement, ago.  And now it looks like Bill Gates may finally be defaulting on that promise (25) - the slow stuff isn't exciting and the exciting stuff isn't slow.  People are starting to wonder whether they really need the fastest new machine.
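The factor-of-1000 figure can be sanity-checked with Moore's Law arithmetic.  The 18-month doubling period is the conventional assumption, and the helper names below are mine, not from any standard library.

```python
import math

# Moore's Law back-of-the-envelope: a factor-of-1000 performance gain
# corresponds to about ten doublings (2**10 = 1024); at the conventional
# 18-month doubling period, that is roughly fifteen years.

def doublings_for_gain(gain):
    """Number of doublings needed to multiply performance by `gain`."""
    return math.log2(gain)

def years_for_gain(gain, months_per_doubling=18):
    """Calendar time for that gain under a fixed doubling period."""
    return doublings_for_gain(gain) * months_per_doubling / 12

# The fab-cost adjunct mentioned above works the same way: if plant cost
# also doubles each generation, ten generations raise the capital
# requirement roughly a thousandfold.
```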

Since PtS abandoned the distributed-computing plan, the strength of the individual machines on the Internet is no longer all-important - but said machines still need to be able to support the local infrahuman AIs we develop.  Furthermore, modern supercomputers increasingly consist of thousands of Pentiums wired together, meaning that the cost and magnitude of supercomputing depend on the cheapness and speed of the individual processors.  Above all, we need to encourage the growth of "ultracomputing", software that uses massive amounts of computing power (and provides equally massive benefits), to spur others to build and rent out massively supercomputing hardware (26).  But we also need to ensure that the desktop computer keeps getting brainier, or hardware companies won't earn the money to build the factories to make the chips that go into the supercomputers.

The software industry isn't in trouble now, but there may be trouble brewing; to wit, CEOs and even CIOs starting to wonder whether investing trillions of dollars in software development - half of all capital investment in the US is going into information technology - is really paying dividends.  Large projects in particular are starting to run into Slow Zone problems (27), the tendency for anything above a certain level of complexity to bog down.  Of course, many of these people are still using COBOL, and there's not much you can do to help a company that clueless, but some projects use C++ or Java (or even Python) and still run into trouble.

I am not suggesting that we launch a special-purpose effort to Save the Software Industry.  That doesn't need to be done by Singularitarians (28).  Rather, I think we should try and kill three birds with one stone.  The primary immediate goal of AI (29) should be to reduce development, maintenance, and debugging time for mainstream, internally developed IT.  First bird - this is AI targeted on source code, meaning that it's a step towards self-improving AI.  Second bird - providing the profit motive for the computing industry to adopt and develop AI.  Third bird - heading off economic trouble in the software industry.

2: Technology

This section explains the sequence of technologies leading up to the Singularity.  It's not intended as a detailed technical whitepaper.  It's not even intended as a high-level guide to design principles, or anything else aimed at the eventual implementors.  Rather, this section is intended to convey some of the plan-relevant characteristics of each technological stage.  (30).

The technologies described here are presented in chronological order, which happens to be the reverse order of their invention.  Coding a Transhuman AI (which describes how to build a seed AI, the final stage of the timeline) was published in 1998, long before I'd thought of turning a bag of programming tricks into the Flare language.  My personal notes preserve for posterity the exact moment in which I realized there was a direct path from Flare to a self-optimizing compiler, and from a self-optimizing compiler to a Singularity.  My brain preserves the memory of the triumph I felt.  That was when I decided to write PtS.

"So I think I may be able to map out the full Path to the Singularity, here."
           -- Written in the predawn hours of April 28th, 1999.
The technological timeline was created by starting with seed AI and working backwards.  The upshot is that, by my standards, we don't get to the exciting part until 2.2.9: Self-optimizing compiler.  I hope you'll bear with me until then.  (Alternatively, you can read the sections in reverse, starting from 2.2.14: Transcendence and working backwards, but PtS wasn't designed to be read that way.)

2.1: Component technologies

2.1.1: The "Aicore" line

Creating a self-enhancing AI with the potential to get all the way to Power level involves a number of complexities not needed to just hack something up.  Non-seed AI is considerably easier than seed AI.  You can design systems with no way to automatically integrate changes to the architecture; in fact, it's even possible to design systems with clear-cut distinctions between content and architecture.  The system can contain premanufactured knowledge that it has no way of learning, and subsystems that it can't understand or modify (31).  In short, there are a lot of shortcuts that we can take initially.  As time goes on, more development resources will become available, and computing power will become cheaper, enabling us to stop taking shortcuts.

The "Aicore" timeline, or at least the initial stage (a.k.a. "Chrystalyn"), describes a "crystalline" AI (32).  If Elisson (33) is a mind, then we might consider Chrystalyn, a.k.a. Aicore One, to be a picture of that mind.  As we move along the Aicore line, the flat picture becomes a sculpture, and the sculpture becomes a mind.

Initially, the Aicore I visualize will provide a basic framework for programs that can use artificial intelligence:  A programmatic architecture, an API (34), a set of architectural domdules, and whatever library domdules are standard.  "What's a domdule," you say?  The actual explanation of what domain modules are, and why domdules form a fundamental part of AI, is in Coding a Transhuman AI.  I shall nonetheless essay a one-minute explanation.

Domdules and RNUI in 60 seconds:
A "domdule", a module targeted on some domain, is what enables the AI to Represent, Notice, Understand, and Invent cognitive structures in that domain.
  • Represent means having data structures that can mirror the low-level elements and structures being modeled.
  • Notice means the ability to detect simple facts about the domain - in programmatic terms, a set of codelets that annotate the data structure with simple facts about relations, simple bits of causal links, obvious similarities, temporal progressions, small predictions, et cetera.  The converse of notice-level simple perception is simple manipulation, the availability of choices and actions that manipulate the cognitive representations in direct ways.
  • Understand means integrating the simple facts with the goal system and other architectural domdules to form designs with internal purposes, to represent larger designs, analogies, and facts about usefulness.
  • Invent means that the AI can start with a goal and create a design that fulfills it.
You can think of a domdule targeted on source code as being a "codic cortex", by analogy to the human "visual cortex" (35).  If you think of a programmer as a blind human painting a picture pixel by pixel, you'll understand the level of improvement I'm hoping to bring to programming.
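To make RNUI concrete, here is a toy Python sketch (Python being the implementation language proposed for Chrystalyn below) of the four levels applied to a trivially simple domain - integer sequences.  Every name in it is invented for illustration; a real domdule would be enormously richer:

```python
# Purely illustrative sketch of the four RNUI levels; all names hypothetical.

class Domdule:
    """A module targeted on one domain: here, integer sequences."""

    def represent(self, raw):
        # Represent: mirror the low-level elements of the domain.
        return {"elements": list(raw), "annotations": []}

    def notice(self, rep):
        # Notice: codelets annotate the representation with simple facts.
        xs = rep["elements"]
        if all(b > a for a, b in zip(xs, xs[1:])):
            rep["annotations"].append("monotonically-increasing")
        diffs = {b - a for a, b in zip(xs, xs[1:])}
        if len(diffs) == 1:
            rep["annotations"].append(("constant-step", diffs.pop()))
        return rep

    def understand(self, rep, goal):
        # Understand: integrate noticed facts with a goal predicate.
        return [a for a in rep["annotations"] if goal(a)]

    def invent(self, rep, length):
        # Invent: use the noticed structure to extend the sequence.
        for a in rep["annotations"]:
            if isinstance(a, tuple) and a[0] == "constant-step":
                step, xs = a[1], rep["elements"]
                return xs + [xs[-1] + step * i for i in range(1, length + 1)]
        return None

dom = Domdule()
rep = dom.notice(dom.represent([2, 4, 6, 8]))
print(rep["annotations"])   # the simple facts that were noticed
print(dom.invent(rep, 3))   # [2, 4, 6, 8, 10, 12, 14]
```

The point is the division of labor: Represent and Notice are cheap, local, and programmer-writable; Understand and Invent are where the architecture earns its keep.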

The AI application developer would write an "application" domdule for some specific domain - say, cash-flow analysis - and integrate it with the rest of the system.  Then the application domdule automatically gets all the built-in capabilities - design optimization, learned prediction, goal-oriented planning, and so on (36).  In short, the application AI would be able to engage in reasoning about the domain.  Furthermore, any skills the AI has already learned can be applied to that domain - if the AI knows how to look for anomalies, then, after a bit of experience, it will know how to look for anomalies in cash flow.

And domdules add up; the standard library in later versions of the Aicore system might provide a natural-language domdule (37), which could be added to the existing AI to create a question-answering interface for the cash-flow analysis database.  Of course, I doubt this integration will be automatic during the initial stages.  How much cumulative ability you can get by combining domdules, and how much work is required to combine domdules over and above the work required to create the domdules themselves, will be one of the key parameters shaping the AI industry at any point in time.  (38).

The primary goal of Aicore, in the initial stages, will be making IT development easier.  The most obvious way this can happen is through smart IDEs (39); that is, application domdules targeted on source code.  The next most obvious way is making it easier to develop IT applications that "wanted" AI to begin with, i.e. IT applications that need to perform reasoning about the domain.  The Aicore programmer would just have to define the way the domain behaves, since general reasoning is already present.  The most subtle and important goal of the Aicore line is to become integrated into the system libraries and the program architecture.  The programmer will write programs that make use of AI reasoning in the algorithms (40); or, better still, write "programs" that are simply thoughts in the AI (41).

The ideal application for Aicore - the one we want most to encourage - would be systems that consist of the Aicore system, an application domdule, a user interface, a set of goals, and lots and lots of Beowulf or supercomputing hardware - where, previously, the system would have been a three-year, hundred-million-dollar program that was two years late and never did work right.  Aicore has the highest utility where being able to assume some basic reasoning ability is the key difference between writing one set of high-level instructions, and spending a thousand programmer-years writing ten thousand slightly different low-level procedures ten thousand slightly different ways.  That's what it means for AI to be "part of the program architecture".

The very first releases might not have the robustness needed to run corporate IT in real life, but even the first releases would still be useful for rapid prototyping, and robustness is what open-source does best.  In later stages, the products of the Aicore line will begin to approach the level of sophistication and decrystallization needed for seed AI, perhaps even exhibiting some signs of rudimentary intelligence.  No harm will be done if true intelligence isn't possible on desktop hardware, however.  The important thing is that the stuff Aicore is made of should begin to approach the same design used in true seed AI, so that the genuine full-scale seed AI research project can take advantage of industry advances.

2.1.2: The "Flare" line

Flare is a proposal for a new programming language, the first annotative programming language, in which programs, data, and the program state are all represented as well-formed XML.

NOTE: At present, I don't even have a publishable Flare whitepaper.  I don't even have a finalized design.  I am engaging in the sin of aggravated vaporware because I have been told, and convinced, that the timeline does not make any sense without knowing some of what Flare is and what it does.  Please consider all discussion of Flare to have whatever degree of tentativeness and subjunctivity is required for that discussion to be an excusable act of speculation.

Since I do have over half a megabyte of design notes, I use the present tense in discussing properties of Flare.  Those properties are no longer tagged as "subjunctive" in my mental model, and using the present tense feels more natural (42).  No claim that the Flare design has been finalized is implied.

XML is to Flare what lists are to LISP, or hashes to Perl.  (XML is an industry buzzword that stands for eXtensible Markup Language; it's a generic data format, somewhere between generalized HTML and simplified SGML.)  The effects are far too extensive to go into here, but the most fundamental effect (43) is that XML is easy to extend and annotate, and this property extends into Flare programs and the Flare language itself.  (44).

Our own cognition is also annotative - we note arbitrary facts about things, and think in heuristics that act on arbitrary facts about things.  An "annotative" programming language, which recognizes this fact, is thus a higher-level language.  "Higher-level" means "closer to the human mind" - annotative thinking is reflected in annotative language, just as object-based cognition is reflected in object-oriented languages, just as procedures are closer to our mental representations than assembly language.
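To see what "annotative" buys you in conventional terms, here is a minimal Python sketch of two modules commenting on an arbitrary object without any compile-time dependency on each other.  The side-table mechanism and the module names are hypothetical stand-ins for what Flare would build into the language itself:

```python
# Illustrative sketch only: annotation without compile-time coupling,
# via a side table of annotations keyed by object identity.
from collections import defaultdict

_annotations = defaultdict(dict)   # object id -> {key: fact}

def annotate(obj, key, fact):
    _annotations[id(obj)][key] = fact

def annotations_of(obj):
    return dict(_annotations[id(obj)])

# Two independent "modules" comment on the same object without importing,
# or even knowing about, one another.
order = {"customer": "J. Random Hacker", "total": 42.0}

def fraud_module(o):       # hypothetical module one
    annotate(o, "fraud-risk", "low")

def shipping_module(o):    # hypothetical module two
    annotate(o, "ship-via", "ground")

fraud_module(order)
shipping_module(order)
print(annotations_of(order))
# {'fraud-risk': 'low', 'ship-via': 'ground'}
```

In Flare the annotations would live in the XML element itself, not in a bolted-on table, which is exactly why the language has to be annotative from the ground up.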

Flare has four primary purposes:

  • To protect our base by making large-scale software programs easier to develop.  Annotative programming is fundamentally more scalable.  Modules can comment on arbitrary objects without introducing compile-time dependencies, and this in turn allows for the natural existence of certain design patterns for self-organization (45).  Flare, if successful, should decrease "ablative dissonance" (46) by an order of magnitude.
  • To accelerate the development of AI by providing
    • A modern replacement for LISP-the-language-of-AI, based on XML instead of linked lists.
    • An annotative program format so that AIs can notice and manipulate source code in a natural way.  (47)
    • An extensible language architecture, so that (some versions of) Flare One can have features necessary to self-understanding code, such as:
      • The ability to watch a program execute (interpreted instruction by interpreted instruction) in another thread, or a lower layer of the interpreter, so as to visualize what it does;
      • An XML-based program state that can be annotated, noticed, and understood;
      • Both of the above, in a natural way that allows J. Random Hacker to write AI or use AI without three months of preliminary work.
    • The annotative architecture for notice-level functions.  This one is the real, fundamental reason for Flare; in order for the world-model principle to work, arbitrary functions have to be able to notice facts about the world and notice other functions' facts about the world.  (48).
    It may be possible to implement limited AI without Flare, but it won't feel natural.  To start an innovative AI industry, there has to be a working AI language.
  • To create the potential for the timeline described below.  XML is extensible, and so is the Flare language, and so are annotative modules written in Flare, and so are scalable programs built from annotative modules.  Because of this basic extensibility (as well as a number of features included with malice aforethought), it should be possible to move along the timeline by the natural accretion of features.  As a secondary benefit, it will be easier to move along the timeline in a coherent way if we're the ones doing the development.  (49).

  • To jump-start the PtS plan by providing something to develop that isn't as difficult as AI.  Even if work in AI bogs down at any point, work on Flare can continue.  Likewise, it'll be easier to create a smoothly functioning Singularity Institute if there's an ongoing project whose progress isn't so dependent on basic research.
Influenced, but not commanded, by the worse-is-better philosophy from "Lisp: Good News, Bad News, How to Win Big", I've divided up the implementation stages into the 50%-right version that spreads like a virus (Flare Zero), the single perfect gem (Flare One)... and also the total development environment (Flare Two), followed by effective integration with the Aicore line (programs in "Flare Three" are essentially thoughts in the AI).

The current stage is Flare Negative One.  I have at least 500K of notes on Flare, but I haven't put them into publishable form yet.  I don't have a Flare whitepaper available; I could probably get one together in, say, a month or so.  Since I don't have a complete whitepaper, I was reluctant to say as much as I've said already.  I don't want any crippleware versions coming out and depriving Flare of its incremental benefit.  (I'm not worried about crippleware AIs for the simple reason that anyone bright enough to implement any part of the Aicore line without my help - no matter what I post on the Web - is bright enough to do their own navigation.)  I'm also reluctant to engage in acts of aggravated vaporware, but I've been convinced it's necessary (50).

2.2: Technological timeline

2.2.1: Flare Zero

As noted in 2.1.2: The "Flare" line, Flare is the name for a new programming language that is to be the vehicle of a series of improvements in programming techniques.  It is a programmer's truism that 90% of the features provide 10% of the functionality.  This can't necessarily be reversed to provide 90% of the functionality with 10% of the features, but it's often a good idea to try.  As designed so far, the true, elegant version of the Flare language contains a number of features which will probably not be frequently used (51).  Thus "Flare Zero", a version of Flare designed according to the "New Jersey approach" to have around 50%-80% of desired functionality (52).  For example, Flare Zero will have programs in XML, and data in XML, but the program state will probably not be expressed in XML.  There are all sorts of cool things you can do with an XML program state (53), but they won't happen every day.  The "Zero" is because the number of omitted features renders Flare Zero not-really-Flare.

Nonetheless, Flare Zero should yield substantially improved development times, functionality, and maintainability for the development of certain types of complex programs.  In particular, any attempt to explain to the program a set of regulations or rules originally designed for humans to follow (54) should be substantially easier, an improvement on the order of the transition from procedural programming to object-oriented programming.

Ubiquity (55) isn't likely to reach the same level as, say, Python or Java until Flare One.  (On the other hand, it should be fairly easy to interoperate with other languages (56).)  Of course, that's all the "mature" version.  The first release will be a research language, the way these things always work, and without the speed and sophisticated tools that many programmers demand.  Even so, it'll be a fun language, and it'll be possible to do things in Flare that simply couldn't be done otherwise, so we can probably get enough open-source volunteers to bootstrap.

Flare Zero will be a reasonably "mundane" project in that, given the basic insights, the design and implementation should not require Specialist-level talent or nonobvious, fundamental revision of the basic insights.  In short, it's a project that I can reasonably turn over to someone else; I don't have to be the limiting factor.  This may make it suitable for an initial project by the Singularity Institute, even though Flare is not actually necessary to the Aicore line until Aicore Two or thereabouts (58).

Flare Zero is not on the critical path, but it's also the easiest, and most conventional, of all the projects on the timeline.  Depending on the resources available and the perceived necessity for experience and a successful initial project, it might be wise to start work on Flare Zero first.  (See 3.1.1: Development resources and 3.1.2: Development timeline.)

2.2.2: Aicore One ("Chrystalyn")

Chrystalyn is a minimal version of Elisson - Elisson being the seed AI from Coding a Transhuman AI - which lacks the cognitive and programmatic features required for self-alteration, independent learning of new domains, self-organizing integration of new representations, automatic adjustment to architectural changes, and flexible symbols.

However, Chrystalyn will have a domdule-based architecture and world-model, RNUI design and notice codelets, a goal system, causal and similarity analysis, reflectivity, and some self-improvement with at least as much reflexivity as Eurisko (59).  Or at least it will have simplified versions of causal and similarity analysis et cetera which imitate cognition in useful ways.  As the name implies, Chrystalyn will be designed as a mostly "crystalline" (60) AI.  The programmer will simply sit down and write notice-level functions, and those functions will have direct meanings that make immediate sense to humans.  (61).

Although Chrystalyn is not an intelligent entity, it should be one heck of a computer program.  Eurisko achieved impressive (62) performance in a wide variety of domains, through a combination of generalized heuristics and domain-specific models.  I would like to duplicate and exceed this capability in open source code.  Given a domain (chess, inventory replacement, stock prices), it should be possible to write a domdule which obeys some standard API and has human-provided integration with the "architectural" domdules (causality, goals, etc.).  That is, the notice-level functions inherit from the QNoticeCodelet class (63), and the AI application developer annotates (64) notice-level functions (and representations, and reflexive traces) with standard, crystalline labels (65) that let them link up with the rest of the system - the domdules and representations for goal-oriented thinking, causal analysis, heuristic-learning, evolutionary design, and so on.  The programmer writes the application domdule; Aicore provides the cognitive architecture, and skills to perform actual reasoning about the domain, such as design optimization, prediction, question-answering, and so on.

I wouldn't be surprised if the first Aicore architecture required, for a pair of domdules to work together, that at least one domdule have been designed by someone who knew about the other domdule.  That doesn't mean things are hopeless; two domdules might be able to interoperate fairly well in practice because they both know about the causal-analysis domdule.  But if this level of AI goes mainstream, it will create a market in domdule packages (rather than just domdules) and minor "OS wars" for new architectural domdules.  Software folk will find this paradigm familiar.

At minimum, the initial releases of Chrystalyn should be useful for rapid prototyping, exploratory programming, and knowledge mining.  Given a lot of domdule work and enough computational power, Chrystalyn should be capable of a number of novel applications (or assistances) - automatic translation between communication protocols and data formats; user interfaces that learn frequent actions and understand user goals (66); porting code between operating systems and languages; spotting bugs.  The initial versions of Chrystalyn will probably be memory hogs and slow as molasses (although perhaps the heuristics and procedures could be learned on a SPARC and run on a PC).  But software bloat is relative to hardware, and perhaps someday Chrystalyn will become part of word processors.

Work on Chrystalyn can occur contemporaneously with Flare Zero development.  It should be possible to write Chrystalyn in Python, using some Flare techniques without having the actual language (67).  If Chrystalyn is still around when Flare Zero becomes reliable and supported, we'll translate it into Flare Zero.  Same goes for Flare One.  (Anything less than Flare One is a hack as far as doing AI is concerned.)  But meanwhile we'll do it in Python.  Since Python is open-source and embeddable, applications with embedded Chrystalyn shouldn't be too complicated.

As always, the disclaimer:  At present, Chrystalyn is simply an idea.  As with Elisson and Flare, it will probably take at least a month of thought to translate the idea into a design, then another month if I need to publish it on the Web.  I do not think it will be possible for a team to translate the high-level concept into a design and implementation without Specialist-level assistance, both because of the intrinsic difficulty, and the high probability of further research-level insights and redesigns being required.  The creation of any mind is the hardest solvable intellectual task in existence.

2.2.3: Flare One

If Flare Zero is an advance compilation of "Flare's Greatest Hits", Flare One is the first actual implementation of Flare.  Flare One eliminates all the shortcuts taken to get Flare Zero out the door, and adds some fundamental concepts needed for distributed operation, self-examining code, secure execution, and so on.

One example would be making the program state representable as XML; another example would be replacing the monolithic interpreter with the Tcl-like modular interpreter implied by an XML program state.  (68).  This might make Flare One slower, but given the above modular interpreter, it should be easy to create alternate implementations that leave out unused features.  It should also become a lot easier to port Flare to new environments.  (See above footnote.)
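One way to read "modular interpreter with an XML program state", sketched in Python with all names hypothetical: a dispatch table of opcode handlers acting on a program state that is plain, serializable data.  Modules add opcodes; alternate implementations omit the ones they don't need:

```python
# Hedged sketch of a modular interpreter; the opcode set is invented.
import xml.etree.ElementTree as ET

handlers = {}   # opcode tag -> handler function; modules register entries

def op(name):
    def register(fn):
        handlers[name] = fn
        return fn
    return register

@op("set")
def _set(state, node):
    state[node.get("var")] = int(node.get("value"))

@op("add")
def _add(state, node):
    state[node.get("var")] += int(node.get("value"))

def run(program_xml):
    # The state is plain data, so it could itself be dumped back to XML,
    # inspected, annotated, or handed to a different interpreter mid-run.
    state = {}
    for node in ET.fromstring(program_xml):
        handlers[node.tag](state, node)
    return state

print(run('<prog><set var="x" value="1"/><add var="x" value="2"/></prog>'))
# {'x': 3}
```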

If there's a usable version of Aicore One when we're ready to put out Flare One, we should integrate Aicore into the Flare IDE, although not (yet!) into the language.  This should lend at least a minimal level of intelligence to program editing (69), debugging (70), semantic searches (72), change propagation (73), and possibly even code reuse (74).  Even if only the most obvious applications are possible, I would expect a significant improvement in coding time and the manageability of large projects, especially as more AI "scripts" were contributed.  (Although I do worry about whether the AI "scripts" will be fast enough for people to actually use.)

If Flare Zero is the "50% right" version, then Flare One skips the "90% right" phase and goes directly to the "single perfect gem".  And yes, I know that single perfect gems are supposed to take forever to design and be impossible to implement efficiently.  But somehow, after trying to design a mind, the prospect of designing a single perfect gem doesn't seem very intimidating.

2.2.4: Symmetric Multiprocessing and Parallelism

(Should be part of the Flare One release.)

Flare One should contain language features intended to set things up for parallel computing (75) on symmetric multiprocessing machines.  The ideal of SMP is that almost any Flare program will run four times faster on a machine with four processors.  In practice, a "parallel Flare" program that runs on one processor will probably be ten times slower than a "serial Flare" program, which is why SMP should wait until Flare One, when unused features won't detract as much from efficiency (76).

The purpose of symmetric multiprocessing is threefold.  First, as an optional research feature and rapid prototyping method, making certain kinds of code more natural, and encouraging programmers to experiment with parallelism.  Second, to introduce the theoretical potential for upward scalability - if a Flare program won't run a hundred times faster on a hundred processors, perhaps it will at least run ten times faster.  Third, as an ordinary programmer's tool for managing preexisting parallel processes (77).
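The SMP ideal can be illustrated in Python terms - though only the programming model, not the speedup, since CPython threads won't actually deliver four-times-on-four-CPUs.  This sketch shows just the data-parallel "map" semantics that would let a runtime farm work out to however many processors exist:

```python
# Illustrative only: the data-parallel programming model, on threads.
from multiprocessing.dummy import Pool   # thread-backed Pool, same API

def analyze(record):
    # A per-item computation with no shared state - the property that
    # lets a runtime divide the items among however many CPUs it has.
    return record * record

with Pool(4) as pool:                    # "four processors"
    results = pool.map(analyze, range(8))

print(results)   # [0, 1, 4, 9, 16, 25, 36, 49]
```

The Flare ambition is that ordinary programs would get this shape by default, rather than the programmer restructuring code around an explicit pool.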

The ulterior motive of parallel Flare is to start setting things up for the rise of the sixteen-processor home computer.  Admittedly, since the concept of running on the Internet has been abandoned, the sixteen-CPU home PC is less of an advantage (to us).  Nonetheless, SMP helps set things up for very-high-powered apps like Aicore, may advance techniques that will help us run a seed AI on massively parallel hardware, may advance ultracomputing in general, and may also help keep the computing industry going in the event that Moore's Law temporarily fails.

Admittedly, parallel Flare per se won't actually be a practical advantage, capable of driving demand for SMP machines, until 2.2.9: Self-optimizing compiler.  But just the technical advancement in programming techniques may be enough to affect the SMP market (78) - if, for example, the Aicore line can use SMP efficiently, then this will increase demand for SMP machines.

2.2.5: Flare Two

Flare Two integrates Aicore into the language itself, not just the IDE.  This step will probably take place at least a year after the release of Flare One, so that people have had time to evolve applications and libraries and programming techniques that use Aicore as part of the architecture.  (79).  The fruits of the previous loose Flare/AI coordination will be compiled and coordinated.

Flare Two is the point at which system libraries get turned into domdules.  As you hopefully recall from Domdules and RNUI in 60 seconds, a domdule contains "a set of codelets that annotate the data structure with simple facts about relations, simple bits of causal links, obvious similarities, temporal progressions, small predictions, et cetera.  The converse of notice-level simple perception is simple manipulation, the availability of choices and actions that manipulate the cognitive representations in direct ways."

System interfaces, database interfaces, and anything else with an Application Programming Interface - anything with an interface for getting information and acting - can be thought of as containing some notice-level functions of a domdule.  Turn the API functions into domdule codelets, document the codelets with the labels that tell the rest of the AI what the functions are, add in information about visualizing consequences, integrate the domdule with the goal system and the other architectural domdules, and teach the AI about the purposes of the API.  Train the AI; show it what the API is usually used for.  Voilà.  The AI can use the API on its own; all it needs is the set of end-goals.  The AI can perceive when you make silly mistakes in using the API.  Et cetera.  (80).
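A toy sketch of the "API domdule" idea, with everything - the label format, the decorator, the trivial forward planner - invented for illustration.  API functions become codelets whose labels describe what they require and provide, so a planner can chain them toward an end-goal:

```python
# Hypothetical sketch; no real API or planner is being described.

codelets = []

def api_codelet(requires, provides):
    """Wrap an API function as a codelet labeled with its effects."""
    def wrap(fn):
        codelets.append({"fn": fn, "requires": requires, "provides": provides})
        return fn
    return wrap

@api_codelet(requires={"order"}, provides={"invoice"})
def create_invoice(state):
    state["invoice"] = f"invoice-for-{state['order']}"

@api_codelet(requires={"invoice"}, provides={"shipped"})
def ship(state):
    state["shipped"] = True

def achieve(goal, state):
    # Trivial forward planner: run any codelet whose requirements are met
    # and whose effects are still missing, until the goal appears.
    while goal not in state:
        ready = [c for c in codelets
                 if c["requires"] <= state.keys()
                 and not c["provides"] <= state.keys()]
        if not ready:
            return False
        ready[0]["fn"](state)
    return True

state = {"order": "12345"}
print(achieve("shipped", state), state)
```

Given only the end-goal "shipped", the planner discovers that it must invoice first - which is the sense in which "the AI can use the API on its own".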

And "API domdules" are only the simplest, most obvious application, just the tip of the AI iceberg, like using computers for high-speed arithmetic.  If the program-as-domdule concept really works, there'll be applications I haven't even imagined.  Flare Two is where we start "Exiting the Slow Zone":  AI begins to make mainstream programming significantly easier - not just the task of editing and debugging, but system design.  There's a wider range of things you can assume the computer will understand, and the language itself has a certain amount of common sense.  Yes, Virginia, there is a silver bullet.  (82).

Current programs have an internal coherence that's represented only in the mind of the programmer, or at most in human-readable documentation.  But once the programming language and the IDE AI can represent cognitive facts about the program, programs will get a lot easier to debug.  And once the language can turn cognitive facts into programs, programs will get a lot easier to write.

2.2.6: Aicore Two

(Not synchronized with the Flare line.  (83).)

Aicore Two will be a major reference release, written completely in the best Flare available at the time.  (84).  If there are any versatile domdules that have become popular and widely used, they will become part of Aicore Two's new set of architectural domdules.  (85).  If there's anything we've learned about faster development of Aicores, better domdule representations, better formats for the labels that integrate the system, et cetera, then we'll incorporate that too.  We'll do the lessons-learned thing.  The business case for Aicore Two will consist of that update.

But the primary purpose of Aicore Two is three timeline-desirable fundamental improvements:

  • The ability to run distributed over trusted hardware and exchange information with other trusted Aicores.
  • The ability to run untrusted content and think untrusted thoughts, such as heuristics that may provide bad advice.  (87).
  • A goal system with minimal safety precautions (88) - just in case.

2.2.7: Planetary AI Pool

(Should be part of the Aicore Two release.)

The Planetary AI Pool is a central repository of content developed by AIs.  "Content" includes domdules, domdule elements (i.e. notice-level functions), heuristics, concepts, models, and whatever other high-level constructs or low-level elements exist.  The low-level elements should obey a standard API, and the high-level constructs should be mutable by the same cognitive processes that created them.  Hopefully, the problem will not be finding two pieces that fit, but finding two pieces that fit together well.

One example might be a bunch of word-processors all trying out new heuristics and sharing any user-interface adaptations that they've learned from the user.  I create a word-processing maicro (89) that does something cool, and if your word-processor thinks you might like the maicro, your AI downloads the maicro and tries it out.  An example maicro might be "If the user is making periodic entries in some document, offer to add the date and time of each entry" or "If the user is writing a Web page in FAQ format, check the Internet to see if a previous FAQ already exists".  More mundanely, the AIs might just swap lower-level heuristics, like "Investigate cases close to extremes" (instead of "Investigate extreme cases").
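As a toy illustration of the Pool protocol's flavor - the maicros and the matching tests below are stand-ins, not a real design - a local AI downloads each shared maicro, tries it against local data, and keeps the ones that apply:

```python
# Sketch only: "maicros" modeled as named applicability tests.

pool = {
    "date-stamp-entries": lambda text: text.count("\n") > 3,
    "check-for-existing-faq": lambda text: text.lstrip().startswith("FAQ"),
}

def try_maicros(document):
    # "Download" each maicro, try it out locally, keep the ones that apply.
    return [name for name, applies in pool.items() if applies(document)]

doc = "FAQ: Flare\nentry 1\nentry 2\nentry 3\nentry 4"
print(try_maicros(doc))
# ['date-stamp-entries', 'check-for-existing-faq']
```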

If participating in the Pool reliably yields good results, and if selling spare CPU cycles is only worth a couple of bucks a month, then optimizing the local AI might provide more benefit than renting out your computer - making the Planetary AI Pool one of the largest MIPsuckers on Earth.  This isn't as Singularity-desirable as it looks, however, since the AIs don't have a global intelligence.  The primary effect would be to vastly increase the amount of computing power applied to tweaking bits of code, with no greater intelligence than the maximum locally achievable.

Nonetheless, there will be self-improving AI around, interacting on a global scale, so we need to at least start thinking about a possible Transcendence.  Hence the "crystalline Interim Goal System" requirement in 2.2.6: Aicore Two.

Note:  Being a pessimist by neurology as well as profession, I can't help but wonder whether all the "easy wins", all the interesting results, will be gathered in the first two days of running the AI Pool - after which nothing interesting will happen, wrecking the business case for further participation (90).

2.2.8: Scalable software

"Scalable software" is software that shows a continuous qualitative improvement with better hardware.  Deep Blue is the canonical example; IBM's research team just piled on computing power (91) until Deep Blue exhibited "a new kind of intelligence" (92) and beat Kasparov.  It seems plausible to me that an AI, with more intelligently shaped search trees, would scale even better.

Any software that uses scalable AI automatically becomes scalable itself (93).  But I also have hopes that scalable AI will start a trend towards the general use of scalable programming techniques.  The name of this stage is "scalable software", not "scalable AI".

When scalable AI or scalable something is an integral part of word-processing programs, Joe Q. Consumer will always have a motive to buy the latest 16-processor 2GHz tower.  When scalable programming becomes common, Joe Q. CIO (94) will be able to throw hardware at a late software project.  Above all, the style of programming involved will hopefully extend to the creation of "ultracomputing" software applications - software that would do something amazingly useful on a supercomputer.  (95).

The purpose served:

  • Ensuring the financial health of Intel (or whoever is hawking chips that year), thus
  • Ensuring the financial health of their research department, thus
  • Ensuring Moore's Law keeps on running, thus
  • Ensuring we can keep developing really cool features that would have brought last year's machines to their knees, and
  • Ensuring it'll be possible to build the supercomputer to run a seed AI.
  Also:
  • Making it possible to develop "ultracomputing" software, thus
  • Ensuring someone will build said supercomputer and rent it out.

2.2.9: Self-optimizing compiler

After Aicore and Flare have been around for a few years, there should be mature Flare domdules for Aicore - domdules capable of understanding the logic and execution of Flare programs.  There should be domdules that parse (and notice) other languages as well.  In combination, this should yield AI capable of translating other languages into Flare.  Flare, being XML-based, is obviously very well suited to being a universal program format.  (96).

If the AI has a reasonable understanding of the logic behind the program (97), it should also be possible to treat a Flare program as a prototype, and write code that does "the same thing" using C++ or assembly language.  (98).

In fact, given that Flare's XML representation should be easy to manipulate and translate, I would expect the first experiments with Flare-to-C++ or Flare-to-assembly compilers to begin soon after Flare One was released.  By this point on the timeline, we might just be assembling the experiments and compiling them into a coherent whole (99), rather than doing any actual research.
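Since Flare's real encoding was never specified, here is a purely hypothetical sketch of what such an experiment might look like: an XML program tree, easy to manipulate with standard tools, walked and "compiled" into another surface syntax. The `fn`/`call`/`lit` element names are invented for illustration.

```python
# Toy sketch: an XML program tree as a universal format, translated to
# another language's surface syntax.  The element vocabulary is
# hypothetical -- Flare's actual encoding was never pinned down.
import xml.etree.ElementTree as ET

SOURCE = """
<fn name="area">
  <arg name="r"/>
  <return>
    <call op="*">
      <lit value="3.14159"/>
      <call op="*"><var name="r"/><var name="r"/></call>
    </call>
  </return>
</fn>
"""

def emit(node):
    if node.tag == 'lit':
        return node.get('value')
    if node.tag == 'var':
        return node.get('name')
    if node.tag == 'call':
        op = ' %s ' % node.get('op')
        return '(' + op.join(emit(c) for c in node) + ')'
    if node.tag == 'return':
        return 'return ' + emit(node[0])
    if node.tag == 'fn':
        args = ', '.join(a.get('name') for a in node.findall('arg'))
        body = '\n'.join('    ' + emit(c) for c in node if c.tag != 'arg')
        return 'def %s(%s):\n%s' % (node.get('name'), args, body)
    raise ValueError(node.tag)

code = emit(ET.fromstring(SOURCE))
print(code)
```

The same tree-walk, retargeted, could just as well emit C++ or assembly; the point is that once the program is an annotated tree rather than a string, translation is tree manipulation.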

Automatic translation will break down the distinction between languages.  If the AIs can analyze machine code and translate it back into commented, named, understandable source code, even the distinction between source code and assembly will break down.  And at that point, Flare will eat the entire software industry.  When Flare is as fast as C++ but infinitely safer; when all your legacy code only looks like it's written in COBOL or assembly language, but - like it'll say on the T-Shirts - It's Really Written In Flare; when Flare becomes the common format, the meeting point of every programming language and IDE - then, things can really start to move.

When Flare-tuned AIs can examine machine code as easily as the original source, there won't be any comprehension hit for using assembly language.  When Flare programs can automatically be rewritten as machine code (100), there won't be any performance hit from using Flare - quite the contrary!  "Code" will become an abstract liquid that can be poured from one substrate to another.  There won't be any part of its own source code an AI can't understand - if necessary, it will be able to look at its own program in RAM.

And thus, the AI will be able to understand, and optimize, its own source code.

NOTE: Veteran Singularitarians will recognize this as a description of Dan Clemmensen's "self-optimizing compiler".

Fully self-swallowing programs are a key step on the road to Singularity.  A long, slow, extended step, a step that starts with Flare Zero and probably won't come to fruition until after Aicore Two.  But that's one of the main reasons for having Flare and Aicore.  Flare is an XML-based annotative programming language.  The Aicore architecture has notice-level functions that annotate a world-model.  And thus, J. Random Hacker can experiment with noticing facts about programs.

So by this point there should be a huge library of programs and notice-level functions and domdules that understand Flare, and manipulate Flare, and translate other languages to and from Flare.  The self-optimizing compiler stage occurs when that collective intelligence can read assembly language, and write it, as easily as it reads and writes Flare.  It should be possible to write a Flare interpreter and Aicore implementation in High Flare, Flare with all the features turned on.  Earlier, this would have run like molasses; with a self-optimizing compiler, it'll run as fast as C++ or assembly.  And experimenting with Flare AI will get even easier, since it'll be possible to write intelligent Flare evaluators in Flare without running into a major performance hit or infinite recursion.
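As a toy gesture at the self-optimizing-compiler idea, here is a pass that reads source code, folds constant arithmetic at translation time, and emits equivalent but cheaper code. Python's `ast` module stands in for the Flare program representation (and `ast.unparse` requires Python 3.9+); a real Flare-to-assembly pipeline would be enormously harder.

```python
# A constant-folding pass: read a program, rewrite it to run faster,
# emit the result.  A vanishingly small instance of the
# "self-optimizing compiler" idea.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class FoldConstants(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)          # fold the leaves first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

def optimize(source):
    return ast.unparse(FoldConstants().visit(ast.parse(source)))

print(optimize("y = x * (60 * 60 * 24)"))   # y = x * 86400
```

The multiplication chain is evaluated once, at compile time, instead of on every execution - the kind of rewrite that a code-understanding AI would perform at every level of the program, not just on arithmetic leaves.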

2.2.10: Adaptive hardware utilization

With a self-optimizing compiler, capable of translating 68040 machine code for a PalmPilot interface into Flare and thence to parallel-computing Intel assembly that runs on a multiprocessing Linux machine (101), the SMP market should really hit the mainstream for the first time.  (102).  With true code-understanding AI, it should take only a small additional refinement to handle asymmetric multiprocessing.

I used to wonder why, if we can fit a primitive CPU onto thousands of transistors, and a modern CPU onto millions of transistors, we can't fit a thousand primitive processors onto a modern chip.  But I know why:  It's because we don't have the programming techniques to use the darn things.  So we just build larger and larger serial CPUs with as many bells and whistles as it takes to turn all those transistors into one instruction-execution event loop.

With a self-optimizing compiler around, it should be possible for Intel to design thousand-processor asymmetric multiprocessing chips and be assured that existing programs will be capable of using them to their full potential.  And this, in turn, should mean that instead of million-CPU supercomputers, we'll have billion-processor supercomputers.  With any luck at all, this should be more than enough raw power to run a seed AI.

Since this development would have to take place in "hardware time" (103), the PtS plan doesn't rely on it.  It would really help, though.

2.2.11: Aicore Three

This is the point at which we start to decrystallize the Aicore line and make it self-swallowing; this is where we start moving towards seed AI.  It's the release where we start long-cutting some of the shortcuts.  Aicore Three is when we put in some of the Elisson characteristics that were originally omitted, but which will become necessary once mutating code is around (104).  I won't say that this stage will have to occur after SMP or adaptive hardware utilization, since who knows if we'll have the time for hardware to catch up with software - but nonetheless, decrystallizing does take power.  (105).

Aicore Three will also contain a reference release of any infrastructure that got invented by the Planetary AI Pool.  However, most of this infrastructure will be obsolete.  (See next paragraph.)

The defining change in Aicore Three will be self-integrating domdules.  While human effort may still be required to label all the functions and representations, it shouldn't require human effort to link up two sets of labels.  The links will be learned.  AI should exist which examines possible links between tags, associations between representations, et cetera, and which improves or invents them by studying similarities and covariances.
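A toy sketch of "learning the links" between two independently written domdules: propose a link wherever two tags co-occur well above chance. The tag names, traces, and lift threshold below are all invented for illustration.

```python
# Self-integrating domdules, in miniature: two domdules label the same
# situations with their own tags; links are proposed wherever
# co-occurrence is far above what independence would predict.
from itertools import product

# Each trace: the set of tags the two domdules attached to one situation.
traces = ([{'motion', 'moving'}] * 4 +
          [{'color', 'hue'}] * 4 +
          [{'motion', 'hue'}])          # one noisy observation

def propose_links(traces, tags_a, tags_b, lift=1.5):
    n = len(traces)
    links = []
    for a, b in product(tags_a, tags_b):
        p_a = sum(a in t for t in traces) / n
        p_b = sum(b in t for t in traces) / n
        p_ab = sum(a in t and b in t for t in traces) / n
        # co-occurrence well above independence suggests a real link
        if p_a and p_b and p_ab / (p_a * p_b) >= lift:
            links.append((a, b))
    return links

print(propose_links(traces, ['motion', 'color'], ['moving', 'hue']))
```

No human linked "motion" to "moving"; the association was learned from usage, noise and all, which is the whole point of self-integration.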

As a result, the distinction between "architectural" domdules and "content" domdules should start to break down.  The market for domdule packages will vanish.  And if the learning techniques can apply to previously created domdules, the total intelligence of the Aicore system will take a huge leap; all the existing intelligence will come together.

Perhaps existing domdules will become obsolete as well.  (Not useless, just obsolete.)  At this point, the time and effort and research computing power should exist to decrystallize the domdules and even the architecture - bump the domdules down a level or two, break them up into subcomponents.  This is getting close to true cognition, but not quite there yet.

Furthermore, some or all of the Aicore code should be "documented" in a way that the AI can understand (106), so that the AI itself can improve on it.  Perhaps as much as possible of the code will be replaced with an implementation generated by the AI itself, so that the AI can manipulate the design and thus manipulate the implementation directly.

When you factor in the Planetary AI Pool, this all sounds like Singularity-class stuff, and the probability does exist - but, once again, I don't think it's enough.  I think it's necessary, but not sufficient.  Reflexivity and circularity create the raw material for Transcendence, but to start it off, you need a fundamental spark of creativity, of smartness.  I'm not sure that will be present at this point.

As with the self-optimizing compiler, I think the result will be a superbly optimized design, perhaps with some interesting new tricks and features, but not a wholly new design with an interesting new purpose.  The AI might be capable of all kinds of coding tricks, but not of performing scientific research or holding a philosophical conversation with a human.

But since there's a real possibility of a Singularity, or of real-but-infrahuman intelligence, Aicore Three should have a safe, decrystallized, and fairly complete goal system (107) - one with almost all the precautions we'd want in a real AI (108).

At this stage we're obeying the second rule of navigation:

Second Rule of Navigation
Before you can create X, you must create the potential for X.

2.2.12: Ubiquitous AI

If progress continues long enough, Flare and Aicore will merge.  Most mundane programming will consist of taking an AI and telling it what you want, in natural language.  One will be able to give instructions to computers in the same way one would give instructions to humans.  Programs will become thoughts.  AI will eat the software industry.

This stage will also see the rise of the World Wide Program:  When all programs are thoughts in AIs, they should all interface automatically - programs will just flow together, like puddles of water merging.  Just as the modern Web can be viewed as one massive document, all the public IT in the world will be one massive program.

There is no way this will happen before the self-optimizing compiler stage, because otherwise the programs will run like molasses.  It will be possible, but a bit more difficult, to carry this off without adaptive hardware; instead of having AIs that think, we'll have AIs that write programs.  (Perhaps the programs will call on the corporation's central AI (109) whenever something exceptional happens.)  There will probably be a significant difference in user happiness between having a personal AI, and just having AIs write all the code, but the effect on the software industry will be the same:  Blam.

This stage is more futuristic than navigational.  I'm not sure it'll happen and we can certainly do without it, but I think it might happen - the potential will be there - so I'm mentioning it.  Why?  Mostly in case our meme people want to talk to Wired about it.

In real life, I'm not sure ubiquitous AI would be a good idea.  In fact, it could be nearly as bad as nanotechnology.  I'm not going to be specific, because it doesn't necessarily help matters to sketch out all the possible ways something can go wrong, especially in public fora (110).  In essence, there'd be three major categories of problems:

  • Power and profit would grow too concentrated - in AI, the software industry, and in any social domain that involves software.  Concentrated power is almost never a good thing.
  • This level of AI would go far beyond the realm of productivity tools, into the realm of incredibly powerful technologies that create drastic alterations in the fabric of society.
  • The transition would happen fast, so fast that even minor changes might produce tremendous shocks.  (111).
And as Dan Clemmensen's contribution to navigation states:

Clemmensen's Law
"IMO, the existing system suffices to permit technological advance to the singularity. Any non-radical change is unlikely to advance or retard the event by much. Any radical change is likely to retard the event because of the upheaval associated with the change, regardless of the relative efficiency of the resulting system."

On the other hand, trying to retard a radical change is also a bad idea, in accordance with Yudkowsky's Third Threat:  "Attempting to suppress a technology only inflicts more damage."  (After all, if the Hidden Variables are kind, the enormous power of ubiquitous AI might enable us to deal with the enormous problems posed by ubiquitous AI.  The Information Age may have sent enormous shocks through the economy, but it also helped build an economy flexible enough to take it.)

Clemmensen's Law says that it's rarely a good idea to attempt major changes to society.  The Third Threat says it's rarely a good idea to try to prevent major changes to society.  I think the upshot is that we don't need to help ubiquitous AI along.  If ubiquitous AI happens anyway, or looks like it might happen, then we'll try to deal with it.  But there's not much that needs to be navigated in advance - except, as stated, the public-relations potential.

In short, this is one of those "destabilizing" applications of AI - an "ultraproductivity" effect.  (See 3.5.2: Accelerating the right applications.)  Ubiquitous AI can't be held off indefinitely, but it doesn't have to happen before the Singularity.  If ubiquitous AI happens anyway, it doesn't have to happen before we're ready; it can wait until the economy is built to take it.

2.2.13: Elisson

And now, the big finish:  Developing a fully self-swallowing seed AI, capable of creatively enhancing itself to greater-than-human intelligence.  We take the best version of Aicore and finish decrystallizing it, doing all the things we couldn't do earlier (because it would be too slow for the user, or because it would be so big that it would have to run on a major supercomputer).  In short, we'll use the Aicore line as the raw material for building Elisson, the AI from Coding a Transhuman AI.

I imagine I'll have revised CaTAI extensively by this time in the future, but for the moment, it will serve to delineate the goal.  Every aspect of the AI, from the low-level code, to the conceptual architecture in CaTAI, to the reasoning behind CaTAI, will be explained in a way that the AI can understand and manipulate.  With the full power of cognitive science as it exists at that time, we will try to duplicate, at least in potential, every useful detail of human thought.

We'll do our best to explain the concept of "better thinking" as a goal, the measurable ideal of better representing, predicting, and manipulating reality.  We'll give Elisson the ability to see the internal coherence of designs for a mind.  We'll give Elisson the ability to evaluate those designs, to see how they serve the goal of better thinking.

And once full self-understanding is achieved, it's only a short step up to self-invention.  When innovation is achievable in theory through a massive search through all possible designs, then innovation should be possible in practice to any self-modifying mind that understands search trees.

AI will change, from a computer program designed for speed and reliability, into a real mind designed for power and flexibility.  We will add the spark of creativity, and link that spark to a clearly defined goal of self-enhancement.

Elisson will probably be a tremendous challenge, possibly requiring a centralized effort.  Elisson is also a Deep Research project, very very Deep, the Deepest humanity will ever face before the end, and it will require an immense amount of ultra-top-flight brainpower (112).  But with a pre-existing Aicore-based IT economy, small improvements coming out of the Elisson Project should yield immediate profits, thus providing a motive for the investment required.

With a huge pool of AI hackers, with planet-years of knowledge and expertise in domdule programming and code understanding and self-modification, the potential will exist.  In the end, every other point along the timeline exists only to create the largest possible support base for Project Elisson (or other AI projects, if Elisson should fail).

Project Elisson should be started as soon as the necessary resources are acquired.  Those resources probably won't be available until AI goes mainstream at 2.2.6: Aicore Two, and the project will not yield directly applicable results - it won't be part of the timeline - until AI becomes decrystallized at 2.2.11: Aicore Three.  Likewise, the timeline will not yield direct programmatic support until Aicore Three, just hints and tools.  Nonetheless, Project Elisson will represent the leading edge of research in AI, which will trickle back to the Aicore line.

Besides, you never know where the breakthroughs lie, and with self-modifying AI, any breakthrough might be the last.  Project Elisson should start up as soon as it's practical.

NOTE: This marks the point at which we are actively and directly trying to bring about the immediate creation of a true Singularity, the birth of greater-than-human intelligence.

2.2.14: Transcendence

And then, at some point, the Elisson project succeeds.

A major breakthrough occurs within the research project - the local version of Elisson does a major rewrite with much greater creativity, exhibits flashes of smartness, but perceptibly runs up against the limit of the hardware lying around.  In short, Elisson exhibits some kind of progress that leads us to think it can go all the way.

The next step would be running Elisson on adequate hardware.  There are three possibilities:  "Adequate hardware" is what's lying around the Singularity Institute's basement, "adequate hardware" can be rented for a few days and a couple of million bucks, and "adequate hardware" simply isn't available.  In the first case, hardware isn't a problem.  In the second case, we quietly (113) rent the best available hardware and run the latest version of Elisson.  In the third case, after the attempt on the best available hardware fails, we keep on researching and try again when a significantly better supercomputer becomes available.  For the sake of discussion, we'll assume that adequate hardware is found.

I hope and pray (and guesstimate using the power/optimization/intelligence curve described in Singularity Analysis) that there's very little chance of winding up with a merely human-equivalent AI.  Once the AI reaches the vicinity of human intelligence, it should be able to redesign its architecture for greater efficiency, which would translate into even greater intelligence, which would enable it to redesign its architecture yet again.  Since the forces involved in self-modifying intelligence are folded in on each other, the total curve is completely different from the non-self-referential forces whereby evolution produced human intelligence.  There's no particular reason for the curves to have plateaus in the same places.  Given the historical fact that Cro-Magnons (us) are better computer programmers than Neanderthals, I would expect human-equivalent smartness to produce a sharp jump in programming ability, meaning that, for self-modifying AI, the intelligence curve will be going sharply upward in the vicinity of human equivalence.

Thus, there should be a fast transition between considerably-dumber-than-human AI and considerably-smarter-than-human intelligence.  In the event that I'm wrong about this, we'll probably have to grit our teeth, go public with the birth of human-equivalent AI, and hope for the best.  But I really do think that's a low-probability event.
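The shape of this argument can be caricatured numerically - a toy model with arbitrary constants, not a prediction: evolution-style optimization adds a fixed external increment of intelligence per cycle, while a self-modifying mind reinvests its intelligence, so each rewrite buys a proportionally larger next rewrite.

```python
# Two improvement curves: externally driven (evolution) versus
# self-reinvesting (a self-modifying AI).  Constants are arbitrary.

def external_curve(cycles, step=1.0):
    i = 1.0
    for _ in range(cycles):
        i += step              # improvement rate fixed by the outside optimizer
    return i

def self_modifying_curve(cycles, efficiency=0.5):
    i = 1.0
    for _ in range(cycles):
        i += efficiency * i    # improvement rate scales with current smarts
    return i

print(external_curve(20), round(self_modifying_curve(20)))  # 21.0 vs ~3325
```

Linear versus compound growth: the folded-in curve leaves the external one behind almost immediately, which is why there's no reason to expect its plateaus to fall where evolution's did.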

There are two critical levels of intelligence:  First, the level of intelligence necessary to take over leadership of the Singularity effort.  Second, the level of intelligence needed to create "rapid infrastructure", or nanotechnology (114).  I think it very probable that these two levels will be achieved almost simultaneously; in the event that this is not so, things get more complicated than I'm going to talk about in this section.  (116).

Even though we're assuming that Elisson is running things at this point, there are still some things we should do in advance.  A merely transhuman AI (as opposed to a Power) might have trouble renting a nanotechnology lab without attracting attention.  So, if the Singularity Institute has the money, we should have a nanotechnology lab in our basement.  The remarkable thing about nanotechnology, circa 2000, is how cheap the basic equipment is (117).  Having a nano lab is likely to be considerably easier than having our own supercomputer.  Circa 2000, a pocket nanotech lab would probably consist of a scanning tunnelling microscope (118), a DNA sequencer, and a protein synthesis machine.  (119).  Given superintelligence, I get the impression that this should be enough in the way of raw materials.  Of course, I am not a nanotechnology expert, so I could be totally off base.

Given all those devices (120), I would expect diamondoid drextech - full-scale molecular nanotechnology - to take a couple of days; a couple of hours minimum, a couple of weeks maximum.  Keeping the Singularity quiet might prove a challenge, but I think it'll be possible, plus we'll have transhuman guidance.  Once drextech is developed, the assemblers shall go forth into the world, and quietly reproduce, until the day (probably a few hours later) when the Singularity can reveal itself without danger - when there are enough tiny guardians to disable nuclear weapons and shut down riots, keeping the newborn Mind safe from humanity and preventing humanity from harming itself.

The planetary death rate of 150,000 lives per day comes to a screeching halt.  The pain ends.  We find out what's on the other side of dawn.  (121).

3: Strategy

3.1: Development strategy

3.1.1: Development resources

One of the strengths of open-source development is the possibility of a casual, volunteer-run, decentralized structure - there doesn't have to be a "core" operation.  I don't think we should take advantage of this possibility.  It strikes me as being unnecessarily fragile, sensitive to random variables in the life of the project leader.  As everyone knows by now, it's possible to run a huge open-source project in a Finnish college student's spare time.  But, historically, this isn't true of all open-source projects (123).

In the case of Aicore and Flare, where the projects are entirely new ideas instead of improvements on previously developed tools, I don't think it would be a good idea to run things on an ad-hoc basis.  There are usually a few key people in any open-source project, and while they often work as spare-time volunteers, the plan will be less vulnerable to the random factors if they can work full-time.  (124).  Likewise, the project will be more scalable if there's an expandable support operation instead of one person handling everything in vis (125) spare time.  (See 3.5.1: Building a solid operation.)  So the PtS plan assumes a support operation.

The virtual nucleus for an open-source project is a Website, a mailing list, and a CVS server (126); as of 2000, this remains constant over the initiation, short-term, mid-term, and long-term stages of open-source projects.

I'm not sure if the Aicore or Flare projects will need an evangelist or other memetic personnel.   (127).  I get the impression that good open-source projects generate their own evangelists.  But, again on the principle of building a solid operation, we might want at least one full-timer.  (128).

This is the minimum nucleus which can support arbitrarily fast growth of the project, in terms of user base and development base.

If the project does start growing "arbitrarily fast", which is the mid-term to long-term scenario, then ideally the Singularity Institute will grow with it (129).  This would enable us to expand the support operation, which would hopefully pay off in faster growth, or at least reinforcement and consolidation of existing growth.  But considering that huge open-source projects have been known to run without any full-time developers at all (130), nothing will go irreparably wrong if the project grows faster than the Institute.

Note that "short term" refers to after (A) the development project has "something potential contributors could easily run and see working" (131), which is required for getting open-source volunteers, and (B) the Singularity Institute exists.  During the "initiation" period (covered in 4: Initiation) and depending on the interaction of the development and Singularity Institute timelines, the build-something-cool stage (132) could take place with anything from one or two part-time volunteers distributed over the Internet to a full-time development crew with a physical location.  (A physical team would probably be considerably faster, one of the primary reasons for using full-time developers.)  We can hope that some Singularitarian volunteers will contribute during the initial development, so a CVS Website and a mailing list are still appropriate.

Aicore and Flare

Another initiation-stage question is how to divide resources between Aicore and Flare.  These are very different projects, with different difficulty quotients, requirements, timelines and strategic effects.  The upshot is that even though Aicore is on the critical path and Flare is not, I think initial resources should be concentrated on Flare.  Flare is more scalable, and more accelerable.

The Aicore project presents special difficulties, both programmatic (133) and social (134), that are not present in the Flare line.  I think it makes sense to initiate the far more conventional Flare project first, since Flare is easier to develop, vastly easier to explain, and will, initially, be usable by a wider group.  It should just be easier to get people enthused about Flare, from a programmatic perspective.  AI has major coolness factor, but in practice, it'll take a lot of work before your AI app hello-worlds.  Flare should be easier for mortals to sink their teeth into.

The Flare project creates the infrastructure, influence, contacts, experience, and credibility needed to get the word out about the Aicore project.

Flare also provides a language of implementation for the Aicore project; we can do the prototyping in Python using ugly hacks, so Flare isn't on the critical path, but ugly hacks will only get us so far.  Seed AI will take a self-optimizing compiler, which requires an annotative programming language and annotative programs (137), which means Flare (138).  Flare will also help protect our base in the software industry.  Flare is a legitimate Singularitarian accomplishment.  (I'm saying all this, of course, because I instinctively feel guilty about spending time on anything except AI.)

We should expect that the Flare growth curve will significantly outpace the Aicore growth curve, which should translate into the Flare timeline being ahead of the Aicore timeline (139).  We have to steer between the Charybdis (140) of being seduced by Flare's faster growth and neglecting the Aicore project, and the Scylla of being just another AI project with one or two researchers.  That last part is the organizational reason why Flare is necessary.  A human-scale challenge ensures the Singularity Institute doesn't need to wait indefinitely for successful projects and completed milestones.  (142).  Growth, which is necessary for ultimate success, requires interim successes.

3.1.2: Development timeline

NOTE: All development times are wild guesses that can extend into indeterminate amounts of time or (less likely) become shorter.  Disclaimer, blah blah, legalese, disclaimer, from disclaimer import *, #include "disclaimer.h", require disclaimer, #!/usr/bin/disclaimer, visit http://www.disclaimer.com, you get the idea.

The relative growth curves of Flare and Aicore are likely to be as follows:  The Flare project gets started after either (A) I put the language into the form of a whitepaper that can be handed off to any competent and creative programmer, which will probably take about a month, or (B) I explain the Flare concept in person (143) and remain personally available for later consultations, which implies a Singularity Institute strong enough to support either a physical center or travel fees.  I would prefer option (B), as it will save time, even though (A) is more solid (145).  At this point, the Flare project has been "handed off", in the sense that I will no longer be the limiting factor.

It's utterly impossible to estimate development times in true research projects, of course, but I would hope that the formal open-sourcing of Flare Zero would occur in between six months and one year, that a version stable enough and featureful enough for AI development (146) would be available in from one to two years, and that a significant number of users and a sustainable open-source community would develop in from one to three years.  If the resources were available, Flare One would begin as soon as there was enough feedback on Flare Zero to provide design feedback.

Meanwhile, after Flare had been handed off, I would start working on the Aicore line.  I'm thinking in terms of spending a month or two thinking all the basic concepts through in greater detail (147), then another month or two concretizing the basic architecture (148), then some indeterminate amount of development time (probably a month or two) to "SimpleMind", a rapid-prototype skeleton AI (149).  Then I'd probably have to rewrite the architecture over the course of a few weeks or months (150), after which I'd have a complete design for Aicore One's basic architecture and APIs.  If, while all this is happening, I'm also trying to play some administrative or memetic role in the Singularity Institute (151), getting to this point is likely to take six months to a year.

If Flare Zero is usable at this point, further development will occur in Flare.  (If not, I'll keep working in Python.)  Once there's a clear design for the architecture and API, it'll be possible to initiate the Aicore project with a core crew of full-time developers.  Once the architectural code, the "operating system", is developed, the creation of the architectural domdules can begin.  Because of the cognitive nature of domdules - the notice-level codelets and so on - this stage of the task should easily lend itself to volunteer assistance (152).  I'm not sure how much skeletal material will need to be there before Chrystalyn runs and does something cool, but afterwards, we can party with the open-source process.  We'll only have volunteer-developers rather than developer-users, but I think we can expect quite a few of these due to the coolness factor.  Figure the "does something cool" stage for two to three years since I handed off Flare.

Figure another year's worth of volunteer open-source domdule fleshing, skill teaching and heuristic creation, experimentation with application domdules, knowledge learning, and so on before the first formal business-ready distribution of Chrystalyn.  (154).  I would not realistically expect a substantial user base before four years have passed, making my "Singularity 2005" T-Shirt a touch unrealistic... but we can always hope.  (155).  T-Shirts aside, the PtS navigation assumes 2010 as the target date (156), and if we can seed an AI gold rush in four years, it should be possible to do the rest of the work in six.

Working out specific schedules beyond Flare Zero or Aicore One strikes me as pointless and unrealistic.  I don't see what current decisions would be affected, and any plans made now would almost certainly have to be completely revised.  This can be planned later, and should be.

3.1.3: Open-source strategy

Open-source resources:
The Cathedral and the Bazaar ("CatB")
        Eric S. Raymond ("ESR") and the original announcement of the revolution.
Homesteading the Noosphere
        ESR on the psychology of open-source.
The Magic Cauldron ("MC")
        ESR on economics (and doing a damn fine job!)
Open Sources:  Voices from the Open-Source Revolution
        A book by O'Reilly, readable online.  Essays from the leaders (including ESR).
The Open Source Page
        Home page of the Open Source Initiative.  (ESR is president.)

Prerequisite:  1.1: Open-sourcing an AI architecture.

Open source, as defined by the Open Source Initiative, means free use of the program and free availability of source code.  Free source code allows volunteer programmers and interested users to assist in developing the AI's core architecture.  Free distribution encourages maximal use of the core architecture.  Maximizing use maximizes AI content development (157), the number of "interested users" with a motive to help develop the core architecture, and the amount of publicity attracting Singularitarian volunteers.

I find it fascinating that this entire open-source strategy is made possible by the treatment of core AI as infrastructure instead of application - which, in turn, is only possible because the model of cognition is complex enough to use domdules.  (Wrong AI uses such simple algorithms that the problem-solving intelligence can't be divided into content and architecture.)  The distinction between {networked infrastructure, partially standardized middleware, and local application} is one of the key factors determining how well the open-source model pays off; Aicore is infrastructure, and may become networked.  In a very real sense, the pattern of the industry is caused directly by the pattern of the artificial mind.  Cognitive science for MBAs!

That's it.  Most of what I want to say about open-source is in either 1.1: Open-sourcing an AI architecture or some other part of 3.1: Development strategy.

3.1.4: Designing an open-source community

"...In his discussion of "egoless programming", Weinberg observed that in shops where developers are not territorial about their code, and encourage other people to look for bugs and potential improvements in it, improvement happens dramatically faster than elsewhere.  Weinberg's choice of terminology has perhaps prevented his analysis from gaining the acceptance it deserved -- one has to smile at the thought of describing Internet hackers as "egoless"..."
        -- CatB:  The Social Context of Open-Source Software (ESR)

"...the number of contributors (and, at second order, the success of) projects is strongly and inversely correlated with the number of hoops each project makes a user go through to contribute. Such friction costs may be political as well as mechanical. Together they may explain why the loose, amorphous Linux culture has attracted orders of magnitude more cooperative energy than the more tightly organized and centralized BSD efforts and why the Free Software Foundation has receded in relative importance as Linux has risen."
        -- MC:  The Inverse Commons (ESR)

Although the Singularity Institute is providing core infrastructure for the Aicore and Flare projects, this does not mean that the global effort should be tight, disciplined, or centralized.  Ease of contribution, as ESR notes, must be maximized.  Aside from Internet infrastructure (158), this means establishing an open, relaxed, "egoless" culture, one in which there are no political obstacles to progress.

This is hardly the place for an open-ended discourse on the best way of creating egoless project leaders (159), but I'll take a stab at it.  One way to remain egoless is to start your project as a spare-time college student, so you know that you have absolutely no political authority over the people donating time to the project.  Another way to remain egoless is to have a very high degree of self-awareness, which, in my observation (160), comes from studying evolutionary psychology (161).  I think we can get by on the second method.  If we set out to deliberately create an open, relaxed, egoless culture, as egoless as a shoestring operation, we should be able to do it.  An adept of evolutionary psychology should be able to disable the contextual triggers, suppress the activation, identify and countermand the influences, and disbelieve the suggestions of the emotions having to do with the exertion of obnoxious political control.  It would be silly to rely on this degree of mental discipline in any large group, but I don't think it's too much to ask of a few Singularitarians (162).

It would be best if the "top people" were principled Singularitarians, partly so that we can rely on them to help steer the projects along the line that leads to seed AI, and partly because we don't want them getting cold feet when the day comes to run the Last Program.  Similarly, considering the desirability of building a strongly idealistic Singularity Institute, it'd be best if the full-timers were Singularitarians.  We should also try to seed the main project with first-step (163) Singularitarian and transhumanist memes; that is, I'd like first-step Singularitarian memes to show up in literature about the purpose of the project, and I'd like the average volunteer to have some idea of the ideals that are being served.  An ideal is not necessary to an open-source project, but it does help.

(The preceding paragraph holds true of both Aicore and Flare, but more strongly in the case of Aicore.)

But!  We should be very careful not to create a mindset among ourselves that Singularitarian project members are superior to other project members.  We have to establish a mindset that says:  "Being a Singularitarian is great, but it doesn't mean you're any good as a coder."  I'm an agent of the Singularity, not Singularitarianism, not the Singularity meme.  The Singularity and the timeline projects require intelligence (164) far more than they require a particular set of beliefs.  If a Singularitarian and a non-Singularitarian have an argument over a project feature, the side that needs to win is whichever side is right.

The minor benefits of a Singularitarian leadership cannot be allowed to interfere with the creation of a meritocracy.  Discrimination on the basis of political beliefs can rip a community spirit apart.  For Linux coders to believe that they're taking on the Evil Empire is one thing; if Linux coders who said they were just in it for the money were discriminated against, the effort would die instantly.  My hope is that the people who care enough to go full-time will care that much because they know the whole world is at stake, and that the really bright people will go SL4, and thus the top layer formed of really bright people who really care will be composed mostly of full-time Singularitarians.  But we can't force it.  We can only try to make it happen by ensuring that the project literature mentions the ideals.

Memetic note:  Since the actual short-term task is creating great software, more should be said about the necessity for and uses of that software than about saving the world.  But both should be mentioned, and neither should explicitly be said to be more important than the other.  That's something people can decide for themselves; raising the issue explicitly is not cognitively necessary (165) and would create an unnecessary risk.

Above all else:  Keep the project fun!

3.1.5: Keeping the timeline on-track

Given that there is a technological timeline, steering the project is likely to become necessary; we don't want to run off the track into blind alleys.  I certainly have no problem with rewriting the timeline to take advantage of unforeseen opportunities, but we still need to move along the technological timeline without losing control of the project's direction.  I see at least four major challenges:

In the short-term of Flare, the challenge is preventing an infinitely extensible language from becoming balkanized, like Unix; or at least, ensuring that the balkanized versions still work together perfectly and seamlessly (166).

In the short-term of Aicore, the challenge is keeping what is essentially an open-sourced research project on track through severe differences of opinion about how minds should work.

In the long-term of Flare, the challenge is preventing major vendors from decommoditizing the language, and convincing everyone to go along with the transitions to Flare One and Flare Two.

In the long-term of Aicore, the challenge is ensuring that the middleware war over what set of secondary library domdules (167) and domdule packages (168) to use doesn't backfire and balkanize the architecture.  Furthermore, as time goes on, popular domdules need to be integrated into the primary libraries and if possible the architecture, perhaps in the face of any blocking patents (169).  Finally, the feature set and basic architecture need to keep moving towards seed AI, and users need to be convinced to adopt Aicore Two and Aicore Three.

The trick is maintaining control without being obnoxious about it.  Any attempt to maintain directional control through brute force - exploiting the Singularity Institute's privileged position as maintainer, adding clauses to the license - may simply result in the open-source project being forked away.  We have a great deal of influence as maintainers and project leaders, but this should not be confused with control.

The socially acceptable method of steering a technology - supreme technical excellence - should be assisted by the idealistic, Singularitarian underpinnings of both projects; supreme technical talent does tend to care about ideals.  On occasion, this will hold true even if the technical talent is working for a closed-source company.  Furthermore, open source is more powerful than closed source, and the open-source projects should be able to outcompete closed-source vendors.  Not everything should be dominated by open source, since we want a market to exist, with corresponding profit-motive.  The market occupies the "leaves" of the tree, as it were.  But in the branch nodes where the network effects live - the points where the Evil Ones might uglify the architecture - open source should be, and will be, technically superior.

However, for the correct direction to triumph over crippleware in the larger market (170), not once but every single time, standards and core interfaces and network protocols must be so intrinsically open, open on the level of the components from which higher structures are made, that no reasonable delay between the rise of a "wrong thing" and the Singularity Institute's publication of a "right thing" can create a lock on the market.  This is one of the major driving forces in the fundamental architecture of both Flare and Aicore.  An extensible implementation lets anyone grab your project away from you.  An open architecture lets you grab it back.

Concretely:  If a software company were to try to decommoditize Flare by adding a set of custom XML tags, then a modular interpreter architecture should ensure that any such tags could be added as drag-n-drop libraries to the Flare interpreter.  Furthermore, given that Flare is designed to eventually give birth to the self-optimizing compiler, translating between Flare dialects should be relatively trivial.  If the architecture is extensible enough, the dark forces can't decommoditize it, because anything they do winds up as an extension.  That's the ideal, anyway.
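
The "extension, not fork" idea above can be sketched as a handler registry. This is a hedged illustration, not Flare's actual design - the tag names and interface are invented - but it shows why a modular interpreter defeats decommoditization: if custom tags can only enter the language through the registry, a vendor's private tags are just another drop-in library.

```python
# A sketch of a modular interpreter core: every tag, core or vendor,
# enters through the same registry, so vendor extensions cannot
# become a private fork of the language.

class TagRegistry:
    """Interpreter core that knows tags only through registration."""

    def __init__(self):
        self._handlers = {}

    def register(self, tag, handler):
        if tag in self._handlers:
            raise ValueError(f"tag {tag!r} is already defined")
        self._handlers[tag] = handler

    def evaluate(self, tag, body):
        if tag not in self._handlers:
            raise NameError(f"unknown tag {tag!r}; install the library that provides it")
        return self._handlers[tag](body)


registry = TagRegistry()
registry.register("upper", lambda body: body.upper())        # a "core" tag
registry.register("vendor:stamp", lambda body: f"[{body}]")  # a vendor extension

print(registry.evaluate("upper", "flare"))         # FLARE
print(registry.evaluate("vendor:stamp", "flare"))  # [flare]
```

The vendor's tag and the core's tag are indistinguishable to the interpreter - which is the point: anything the dark forces add winds up as an extension anyone else can install.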

3.1.6: Dealing with blocking patents

The US patent office is severely broken with respect to software patents.  (See the Wired article Patently Absurd or the Upside article Surviving a War with Patents.)  Software patents have been granted, in total ignorance of the prior art, on everything from multimedia to virtual function tables.  Since the US software patent system no longer fulfills its stated moral purpose, the moral issues and the legal issues must be dealt with separately.  (171).

The moral issues

The moral issue is simple with respect to so-called "nuisance" patents - patents obtained in bad faith and clear defiance of the prior art by hoodwinking the patent examiner.  (Examples would be the patents granted on multimedia and virtual function tables.)  Nuisance patents are evil; they have absolutely no moral authority and can be evaded by any means necessary.  (I talk about the means of evasion in The legal issues, below.)

On the other hand, I find it conceivable that some company will be the first to come up with a visual domdule (172) for Aicore One.  Let's suppose that they invest a fair amount in research, put out a good product, and then, in addition to copyrighting the domdule itself, they patent the concept of a visual domdule.  Then what?

I would have to say that this still verges on a nuisance patent; there's a requirement that the concept be "unobvious to a professional skilled in the art" (though the phrasing is from memory).  The idea of a visual domdule, or a chemistry domdule, or a seawater fluid dynamics domdule, is obvious to any professional skilled in the art.  Any attempt to patent the idea of a domdule covering a particular domain is evil, and may be dealt with accordingly.  You might as well try to patent the idea of "software that deals with seawater fluid dynamics".  (173).

The point at which we start getting into morally ambiguous territory would be if the company invented a visual domdule, and, in doing so, discovered a new algorithm for visual processing.  Then they patent the algorithm.  At this point, under ordinary circumstances, the company would have a legitimate claim to the algorithm - even if the algorithm is so inevitable, so necessary, that it's impossible to write a visual domdule without it.  That the algorithm is necessary - meaning that anyone trying to write a visual domdule would eventually have to invent it - does not necessarily make it "obvious", under the morality of patents.  It could still take time, money, and research talent to discover the algorithm.  Then, under the morality of patents, the company that spent the money "owns" the algorithm and has a right to prevent others from mooching off the research effort.

And then the Aicore project is in trouble, because we can't include a visual domdule with our free distribution.  (Maybe the free distribution doesn't need a visual domdule, but there's still a problem with the Elisson research project.)  The duration of a US patent is 20 years.  If someone patents an algorithm necessary to cognition, we'll hit the nanowar deadline before the patent expires.  In short, our nightmare is that someone will patent - whether it's a real patent, or a nuisance patent - an algorithm necessary to the development of the timeline or to the creation of seed AI.

Personally, I believe that 20 years is far too long a duration for software patents, thus making even a validly obtained software patent morally shaky.  It'd be like granting 100-year patents on ordinary technologies.  But even if that duration were replaced by something sane, like 5 years, the PtS timescale still wouldn't permit that kind of delay.

My moral argument for running a "patentless" operation is that I have given away the ideas in Coding a Transhuman AI, and I will be giving away the ideas behind Flare and Aicore.  In return, rather than asking for money, I'm asking everyone who builds ideas based on my ideas to give those ideas away - or at least, to let Aicore and Flare incorporate those ideas into library code, if necessary.  (174).  It's a quid pro code:  You use my ideas, and I expect to be able to use your ideas.

Let it be known to one and all that Aicore and Flare are "patent-free" efforts.  By using the ideas given away in Aicore and Flare, you relinquish any moral claim to a "blocking" ownership of ideas that you invent as a result.  You still receive social credit for being a genius, and you can still beat everyone else to the market and make a buck, but once the idea is out, you can't prevent Aicore and Flare from using it.  You can, morally, keep the algorithm a secret through compiled or obfuscated source code, forcing us to reinvent it; as long as we are allowed to reinvent it, that's fine.  You might be able, morally, to sue your fellow for-profit domdule sellers if they steal your research, but if the Aicore project decides to make your bright idea part of the freely distributed core libraries, that's just the quid pro quo.

There must be no insuperable obstacles to progress.

The legal issues

With the understanding that nobody has a moral right to sue Aicore or Flare, how do we keep from getting sued?

Patentleft and the Mozilla license

The Mozilla Public License (v1.1) (175) contains language intended to ensure that nobody can contribute open source code that infringes on a patent they own, then jump up and say:  "Aha!  Now you have to pay us license fees!"  An earlier version of the license would have exempted all Mozilla source code from infringement on any patent owned by a contributor, although this was later alleged to be a typo, and I don't believe it persists in the current version.

The point is that there exists a precedent for mentioning patents in open-source licenses.

Another honorable tradition in open-source is known as "copyleft"; in fact, this is the very basis of the licensing system.  "Copyleft" is when you maintain copyright to your code, instead of putting it into the public domain, so that you can safely give your code away.  Not only is the code given away for free, but others cannot sell it; others must also give the code away for free, or are allowed to charge only for the cost of distributing collections.  Likewise, the copyright (or copyleft) must travel with the code, and attribution must be maintained.

The Free Software Foundation's GPL created the tradition of a "viral license", or a license that applies, not only to the thing itself, but to all derivative works.  Actually, most open-source licenses have a clause about derivative works; you can't take sendmail and add a feature and sell the result.  The GPL was more extreme; it said, essentially, that any time you used a GPL'd library to build an application, the application was a derivative work and had to be covered by the GPL.

Combining these two traditions yields the concept of "patentleft".  (176).  In essence, the license for Aicore would state that any derivative copyrights or derivative patents may not apply to open-source distributions of Aicore.  Just as Linux has an unlimited right to incorporate any modified Linux code, Aicore would have an unlimited right to incorporate any innovation that was published (177), despite, not only copyrights, but patents.

This absolute access would be triggered by the creation of a module dependent on Aicore technology, not just by the deliberate contribution of source code.  Publishing, selling, using, or merely developing a closed-source and patented domdule which (a) used the Aicore API, (b) was linked against Aicore libraries, or (c) ran under Aicore would, under the license terms, grant any open-source operation (178) the right to infringe on that patent.

In order to implement this "patentleft" theory, it may or may not be legally necessary to patent Aicore or Flare (179), just as it's legally necessary to copyright open source code in order to free it.  If so, the patent may or may not scare off some corporate users.  I think that a properly developed license, granting nonrevocable rights, should put all legal fears to rest; this is the same theory behind most open-source licenses.

One should bear in mind that applying for a patent can be expensive; the operation might have to wait until the Singularity Institute had reached the appropriate stage.  I would imagine that the patent could be applied for before the publication of any Aicore code, however.  (180).

Auto-downloaded modules, anonymously developed overseas

The patentleft license is the first line of defense.  Suppose it fails, either because someone sues us anyway, or because a random nuisance patent is used against us.  Suppose that we lose the legal battle, or that we don't have enough money to fight, or that the judge issues a preliminary injunction.  As is often the case when a legal system malfunctions, there is nothing we can do that will make us completely safe from the lawyers.

Both Aicore and Flare should run on a plugin architecture, an absolutely modular design.  This being the case, any modules we are legally barred from developing - this goes for encryption technology too, not just nuisance patents - could be developed by an operation based overseas.  I believe Netscape was developing an entire browser in China, at one point; I'm not sure what came of that, or why they weren't using a plugin architecture, but it will serve as an example.

One CVS site overseas; infrastructure for secure, encrypted, anonymous development; and the modules we need are available on Chinese or Russian servers.  Obviously, we'd have to digitally sign versions of the source code which we approved as safe and noncorrupted, but there's no law against distributing digitally signed checksums of strong encryption code.  Likewise, our installer would have to automatically download the code, perhaps through indirection; i.e., the Singularity Institute server contains the URLs of the latest code and signed checksums for the contents, and the installer downloads the strong encryption module and checks the signature.
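
The installer-side check described above can be sketched with the standard library alone.  In the real scheme the manifest itself would be digitally signed, which requires a proper cryptography library and is only noted here; this hypothetical fragment shows just the checksum step, with invented names throughout.

```python
# A minimal sketch of the installer's integrity check: the (hypothetical)
# Institute server publishes a manifest of module checksums; the installer
# verifies whatever it downloaded against the manifest entry.  Signing the
# manifest itself is assumed but not shown.

import hashlib

def sha256_hex(data: bytes) -> str:
    """Checksum of a module, as it would appear in the signed manifest."""
    return hashlib.sha256(data).hexdigest()

def verify_module(module_bytes: bytes, expected_hex: str) -> bool:
    """True iff the downloaded bytes match the manifest entry."""
    return sha256_hex(module_bytes) == expected_hex

# Simulated flow: the manifest entry is computed from the trusted copy,
# then the installer checks what it actually downloaded.
trusted = b"pretend this is the strong encryption module"
manifest_entry = sha256_hex(trusted)

print(verify_module(trusted, manifest_entry))         # True: untampered
print(verify_module(trusted + b"!", manifest_entry))  # False: tampered
```

Distributing the checksums rather than the code is what keeps the Institute's servers legally clean: there is no law against publishing signed checksums of code hosted elsewhere.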

Admittedly, a judge might be skeptical of such goings-on.  Ideally, we should use sign-and-download techniques for a wide variety of optional modules, not just strong encryption and so on.  It will be harder for the government to mount a legal challenge if the challenge has to be leveled at plugin architectures or distributed installations in general.


All of this might prove unnecessary, of course.  The patentleft license might work perfectly (182), thus obviating all necessity for overseas development.  Nonetheless, distributed installation is something we should set up, with a few optional items, as soon as possible; it establishes the precedent that distributed installation is a general technique, not just a method of evading patent laws or export restrictions.  On the other hand, secure anonymous development is something we might want to have the source code around for.  But we shouldn't release it, much less use it, until the patentleft defenses fail and anonymous development becomes necessary.

Note:  Even if the patent-infringing modules are downloaded from overseas, the end users might, or might not, be legally liable for the use of the items.  So the techniques described here do not solve the problem completely.  If they become necessary, we might lose a few nervous large corporations, or an even larger segment of the audience.  We should definitely try to avoid this contingency, fighting it out in court before resorting to overseas development; both because of the PR problems, and because of the user-side problems.  But even in the worst case, we won't be crippled.  Quiet use by people who aren't rich enough to be sued is a smaller market segment, but it's enough to keep an open-source project going.  The plan will be slowed, but it will survive.

Software patent review board

Considering the "broken" state of software patents, which I'm sure only gets worse on a global scale (183), it seems likely to me that some kind of industry-supported arbitration board will come into being (184).  In effect, a duplicate patent office, one that decides whether government-issued patents are "valid" or "nuisance".  (I would also expect there to be a considerable fight over the existence of said office, all sorts of government critters screaming about the theft of the authority they were too dumb to use responsibly.  I don't know, offhand, who's likely to win.)

In the event that such an office comes into existence, we want to get in on the early stages, so we can make sure they understand the concept of "patentleft".

3.1.7: Opportunities for profit

It would be useful, for many reasons, if there were the prospect of making money off the development timeline at some point - or at least, intermediate profits to be pointed to.  Many Singularitarians, despite our admitted fanatic and ascetic devotion to our Cause, would probably be more effective in our service if we were filthy rich.  (185).

For the reasons discussed in Why not venture capital?, even the glorious opportunities don't imply that we should ditch the nonprofit concept and start our own company.  (186).  But there's the possibility of starting "supporting" companies alongside, which, on top of the opportunity for profit, might take some of the strain off the Singularity Institute.

The Magic Cauldron shows that the correct business model for certain types of software is selling support, rather than selling the software itself.  It's conceivable that the Singularity Institute could exist alongside "Crimson Headgear Inc." (187), which would sell CDs and technical support for (free) Flare distributions.  Of course, this company could only start up after Flare had gotten going, but once it did, it would represent, not just an opportunity to create wealthy Singularitarians (188), but the chance to tap into the startup and for-profit side of Silicon Valley.  Furthermore, the Crimson Headgear people could legitimately help develop, advertise, and evangelize Flare on a more traditional basis.  (We could do that ourselves, but they would have a better excuse.)

Besides the Flare software, a market would be born for Flare programmers and Aicore developers.  Consulting companies, training companies, certification companies, and headhunting companies would all be possibilities.  (I even see the opportunity to put the Flare job market on a really systematic basis by building a centralized resume repository right into the Flare IDE.  The same might go for a centralized repository of contract jobs that Flare freelancers could bid on.  You get the idea.)

More mundanely, there may be the chance to provide Aicore T-Shirts and Flare coffee mugs, or for that matter "Singularity 2005" T-Shirts (189).  I'm not sure whether nonprofits are allowed to sell tchotchkes (191), but I imagine this is one operation that wouldn't have to be split off from the Singularity Institute.  Whether the income would be significant is another question.

Finally, the rule that source code must be open only applies to direct-to-Singularity projects.  Side-effect applications we want to accelerate - Teacher AIs (192), Elizas (193), and so on - could conceivably be run as closed-source private corporations.  Of course, it might be Singularity-preferable if these applications were open, free, or at least very cheap, but on the other hand, developing and selling them as private software may move faster, thanks to a larger marketing budget.

Or, leaving the Singularity arena entirely, there should just be cool things to do with Aicore and Flare.  Cool things that could become the basis for companies, allowing Singularitarians to leverage our research talent and reputation into startup products and venture capital - if, of course, we have the spare time.  But the point is, we're planning to deliberately create an industry.  If we succeed, we'll be at the center from start to finish.  We'll be the ones who choose the direction of the industry and know what's coming.  If we can't make a few bucks off that, we don't deserve to be rich.

I mean, speaking of open source, Eric S. Raymond woke up one morning and found out that VA Linux, of which he was a member of the Board of Directors, had gone public, making his 0.3% share of the company worth $36M.  Whether the loony stock valuations will still be around in five years is doubtful, but even at sane prices, major names in Flare and Aicore should still be valued members of Boards of Directors, and minor names should still get in on IPOs (194), and all concerned should still become reasonably rich as a result.

But we shouldn't get our priorities mixed up.  The primary purpose is a Singularity, and no IPO profit can compare to immortality and transhumanity.  The primary goals are the timeline projects; if we divert our efforts, or even divert our purpose, we'll probably fail and be left with nothing.  The opportunity to start a company is a side effect, and it'll arrive when it arrives.

3.2: The Singularity Institute

3.2.1: Institute timeline

The stages of the Institute's growth will be determined more by available funding than by time or progress.  Thus, no durations or concrete times are given.  We move from one stage to the next when that level of funding becomes available, and from what I know of history, that's usually random chance.  Likewise, although the stages are presented in order, we should feel free to skip anything skippable.  If Marc Andreessen walks up and hands us ten million dollars, we should set up a ten-million-dollar Institute as fast as the infrastructure can be called into existence, without bothering about the intervening steps.

That said, the financial numbers are guesstimates, even with respect to the order of magnitude.  Hard data and real details on how much it costs to support a given level of functionality - sample histories of nonprofits - are amazingly difficult to find on the 'Net.  If any of my readers have a better feel for the finances, please let me know if I'm underestimating or overestimating - or if I'm exactly right, for that matter.

Skeleton Institute

This would be an Institute with no money coming in or going out, but possessed of nonprofit status and a Board of Directors - everything needed to accept contributions if a donor could be found.  Getting to this stage would take somewhere between $1K and $12K up-front, for the legal work on applying for nonprofit status.  The primary advantage would be that we'd be able to apply for grants.  The secondary advantage would be that we'd be able to put up a flag and say, "Look, here's a Singularity Institute!"  In other words, we'd have increased access to the "fire and forget" class of donors.

At this stage, we'd be looking for a major supporter and writing grant proposals.  There'd also be the possibility of selling paid memberships and putting out a newsletter, but I think we should just skip it.  (195).

Infrastructure:  We probably wouldn't have a central physical location.  We might have a small Website put up by volunteers.

The short-term:  One or two projects

From around O($100K) to O($1M) (196), the Singularity Institute would be capable of running one or two projects - Flare and Aicore.  In other words, in the short term, the Singularity Institute could implement the first few years of 3.1.2: Development timeline by having at least two full-time developers, one on Flare and me on Aicore.  Paid evangelism and other research projects would have to wait, and any other services related to the projects (tech support, etc.) would have to be provided by volunteers, or other institutions.

This level of funding could be reached from initiation by finding a major donor, or reached from the skeleton stage by a successful grant proposal.

Infrastructure:  We might be able to get the people together in a central physical location, although not necessarily real offices.  Our people might work at home, but we should be able to buy them development stations.  We can have a professional-looking Website.

The Foresight Institute spent $404K in 1997; now, in 1999, they have 11 staff members listed, not counting Foresight Europe.  This gives us an eyeball figure for the order of magnitude.

The mid-term:  Development teams and memetics

With O($10M), it would become possible to deploy full-time development teams and professional writers in support of Flare (which might not need it) and Aicore (which probably will).  Volunteers and major corporate users can be recruited by at least one paid evangelist.  We can employ one or two people to write articles and publish papers about the Singularity or Singularitarianism.  We can hold conventions.  We can influence events outside our immediate circle.

We can move as far along the Flare and Aicore timelines as we need to, given indefinite time.  I'm not sure that starting the Elisson project (or obtaining supercomputer time) would be easy with this level of funding, but it should be possible to at least start a minimal Elisson project, and worry about obtaining supercomputer time when that becomes an issue.

Infrastructure:  We can have central offices.  With a secretary and an accountant, if either should become necessary.

The long-term:  Deep research, rapid development, mass evangelism, and meddling

With O($100M) to O($1G), or more (197), the Singularity Institute should definitely be able to make it to the Singularity.  Without funding-related speed limits on AI development, we may even be able to beat the nanowar deadline.  We should be able to fund Elisson research and supporting research in cognitive science.  We should be able to fund subsidiary projects, such as Teacher AIs and design-ahead of nanowar survival stations (198).  We could engage in large-scale evangelism.  We could "meddle" in things like independent patent agencies, government hearings on biotechnology, and so on.

If large-scale funding becomes available, some of it might not go to the Singularity Institute.  We might need to set up a sibling organization to handle political lobbying, which is not tax-deductible.  There are also some for-profit ventures that would be nice, though not necessary, to have around.  E.g. if AI starts having an impact on the economy, there are some other technologies (199) we should sponsor (preferably in advance) to cushion the impact of ultraproductivity.

Planning in any greater detail doesn't seem to be necessary this far in advance, since even if that amount of money lands in our lap tomorrow, the time necessary simply for the legalese should be enough to figure out exactly what to do with it.

Infrastructure:  We can have offices in Silicon Valley (200).  We can also have a small nanotechnology laboratory (and possibly even a supercomputer) in the basement.

3.2.2: Nonprofit status

(The following discussion assumes we're operating under US tax law.)

The Singularity Institute should be a nonprofit operation - in legal terms, a "501(c)(3) public charity".  Whether a nonprofit is classified as a "private foundation" instead of a "public charity" depends on the funding method; exactly how is not clear to me, though I think it has something to do with the ratio of assets to expenditures.  Private foundations are subject to several significant legal restrictions.  Also, by convention, private foundations fund public charities - never vice versa, and rarely foundation-to-foundation.  For SingInst to apply for grants from existing foundations, it must have legal "public charity" status.  In previous drafts, I had worried that having only a few major funders would change the status from "public charity" to "private foundation", but this shouldn't be a problem (201).

As far as I can tell, however, there is no significant advantage - with respect to the law or grant applications - for public charities with a narrow focus.  Thus it should not be necessary to have an elaborate multi-nonprofit structure (202) - certainly not at first!  (203).  At most, we might have to start a Singularity Outreach Committee a few years down the line, or the first time we want to talk to a Congressperson, since nonprofits lose their status if they engage in political lobbying.  But at initiation, the Singularity Institute should be enough.

Unless the section about nonprofits needing to have "educational, scientific, religious or whatever" purposes is an exclusive or, in which case we would need separate Institutes for research and memetics.  For example, Foresight and Extropy are 501(c)(3) educational charities, while the Singularity Institute would be a 501(c)(3) scientific charity.  But I don't anticipate this being a problem.

Why not venture capital?

When dealing with extremely cool technologies, there's always the temptation to guard every idea like gold, on the theory that funding the Singularity takes money and the idea is the ticket to founding a major company.  Well, founding a company takes a lot more than an idea.  It takes time, and effort, and venture capital, and the acceptance of a 90% chance of failure, but mostly time.  Even supposing the success of the startup, it would simply take too much time to develop the timeline technologies as private projects.  By the time the company goes public and we can finally go to work on the "real" project, the planet will probably have fried.  The core architectures must be public to get the necessary speed.

Also, venture capital involves a set of assumptions that would make it very difficult to implement the PtS plan, or even make a profit.  There may be profits eventually (see 3.1.7: Opportunities for profit), but getting there requires the long-term mindset to concentrate on building a real mind, not making pretty toys.  (204).  A venture capitalist would probably insist on a proprietary architecture, meaning we'd have ten full-time developers instead of one full-time developer and a thousand volunteers, probably a net loss.

The truth is that we aren't trying to make a profit; we're trying to bring about the end of the human condition, with any profits along the way being a pleasant side-effect.  It doesn't seem likely to me that a venture capitalist would be willing to accept that philosophy, no matter what the return-on-investment looked like.

3.2.3: Funding, grants, and donations

I expect that almost all of our short-term and mid-term funding will come from two sources:  Wealthy individuals (usually from Silicon Valley) and private foundations.

Private foundations

A foundation generally provides funding in the form of a "grant", which is usually, but not always, tied to the implementation of a particular project.  (As discussed above, grants are almost never given except to public charities.)  Whether the grant is given depends on whether the foundation approves of the project.  The other type of grant is general operating funds; this type of grant is much rarer, and presumably occurs only if the foundation very strongly approves of the charity, or if the foundation was chartered with the purpose of providing general operating funds.  Most foundations have fairly tight charters and purposes, and will not be able to fund anything except projects within those purposes.

Funding from foundations goes towards whatever you convinced the foundation to fund.  The nature and priority of the projects funded by foundations will probably be tuned to the preferences of those particular foundations, unless the range of foundations is so wide that we can pick and choose.  Thus, resources from foundations are "non-optimable".

DEFN: Optimable resources:  Resources which can be used optimally; that is, resources which can be used wherever they'll do the most good at that time.  Non-optimable resources would include most grants from foundations, which can only be used on specific projects.  An "optimable project" is one important enough to be pursued with optimable resources (205).  A "non-optimable" project is something we can do if the money falls into our laps, but not otherwise - at least, not at that time.

Finding the foundations to fund the highest-priority tasks will take some work, and along the way we may run into foundations with mandates that fit low-priority projects.  The upshot, perhaps, is that low-priority (but still Singularity-related) projects - in particular, I have some cognitive science questions that might help with constructing an AI - might still be undertaken, not because completion is most necessary, but because funding is most available.  Of course, I am not advocating that we waste our efforts on makework.  There's likely to be a limited number of Singularitarians available, and only projects that advance the Singularity should be considered.

Likewise, there may be shifts in the particular emphasis of the important projects.  One of the things I "personally" would very much like to do - in my capacity as a human rather than as a Singularitarian - would be developing a "Teacher AI".  I'd like to see an AI capable of teaching children mathematics - real, fun mathematics, not the dull pap they get in school; starting from arithmetic (or any later level) and continuing to, say, calculus. If we just can't get funding for the general Aicore project, then developing an Aicore and the associated Teacher domdules would probably fit the mandate of a far greater number of foundations.  That's a last resort, though, since it would involve a genuine change of focus, a diversion of research talent, more time to the first release, and a considerably greater probability of failure.

(On the other hand, I do intend to create Teachers eventually - just farther along the timeline, when the substrate is there - and it would be entirely honest to mention this possibility in grant proposals.  There are all kinds of wonderful improvements to the world that would be possible with an AI capable of dumber-than-human general cognition, and I intend to take a shot at them; if mentioning specific examples proves persuasive, then I see no problem with doing so.)

I get the impression that the primary effort required to obtain funding from foundations is in writing grant proposals, and occasionally in telephone calls to nail things down.

Open-source grant proposals

I am intrigued by the prospect of writing "open-source grant proposals".  The seed material would consist of the topic and suggested subtopics to cover, any information we have about which past proposals were successful, and any previously sent-in proposals we have on hand.  Volunteers could then take their stab at writing proposals for particular foundations, or suggesting foundations to write for.  Sufficiently literate efforts would get sent off.  There are probably a number of intelligent people who would love to donate some time to the Singularity and simply haven't had an outlet - in fact, writing is probably the least barriered-to-entry volunteer work around, unless you count writing skills as a barrier.  Then again, I don't know how feasible this is, or if writing grant proposals is enough work to require volunteer efforts, so it's just a thought.  If this idea is unworkable, writing grant proposals may be a full-time job for someone.

Even if the Singularity Institute has enough funding (from individuals, most likely) to run a project without a grant from a foundation, the open-source proposal project might still be worthwhile.  I doubt any project will run out of uses for money.

Individual supporters

Private individuals can fund whatever they like, and are likely to have a less formal and more personal relation to the charity or the charity's purpose.  Thus, they are much more likely to provide general operating funds, or to fund any given project.  However, private individuals - unlike foundations - do not exist for the sole purpose of philanthropy, do not publish their funding criteria, and are thus, in brief, harder to get.  (206).

Funding from individuals (whether large or small donors) can probably be used optimally - on whatever project is presently most important.  The nature and priority of the projects funded by individuals can be determined by the preferences of the Singularity Institute.

The effort required to contact an individual supporter is (a) obtaining publicity or (b) persuading in private interviews.

  • Obtaining publicity:  Basically falls under memetics; one of the things we'll be emphasizing, besides how much is at stake, is how easy it is to make a difference.  I doubt I could find a place where philanthropic dollars have more leverage, not if I tried for a month.
    • In the short-term - that is, in an effort mounted by a Singularity Institute in the short-term stage - this is probably the only practical method.
  • Private interviews:  Find a charismatic Singularitarian (A), find a possible supporter worth converting (B), have A invite B out to dinner, and take a shot at letting B know What's Really Going On.  Funding required:  Dinner check, travel bills.
    • Anyone with any amount of money tends to become leery of charities that contact them instead of vice versa, so setting up the initial interview may be close to impossible unless we already have a reputation.  Thus, this is a mid-term and long-term strategy.
In the very long-term, if there is a sizable percentage of the public interested in supporting the Singularity, small individual contributions may become as important as large contributions or grants.  (This is not likely, however.)  I think it would also be a good idea to have a means of handling small contributions in the short-term, simply because a small contribution will do more good at the Singularity Institute than a small contribution elsewhere.  (Besides, It's Their Planet Too and it should be as easy as possible to get involved.)

If there's a project that scales down well, it might be a good idea to have that project specifically supported by individual contributions.  Maybe we could even provide reports on how the contribution was spent - i.e. "Your contribution went towards paying for a computer that will be used for development on the Flare project."

Paid memberships:  Why bother?

I'm not sure we should have paid memberships in the Singularity Institute, or even memberships at all.  It seems like an inefficient way to run things, even if it used to be traditional.  In the days of Web architecture and hypergrowth, paid membership - even formal membership - is only another barrier to entry.

Perhaps Institutes such as Extropy and Foresight got started to support minimal infrastructure - i.e. a newsletter - for the members, in which case membership funding is reasonable.  But the Singularity Institute exists to change the world, which I don't think can be funded out of any reasonable membership fee.  Less ambitious purposes, such as community solidification, don't require an Institute; they can be served by free mailing lists.

I can see sociological benefits to creating a list of recognized Singularitarians; I see no reason why this should be confused with payment of a token fee (207), or the problem of funding.  Likewise, if anyone can subscribe to an online newsletter, why conflate the recipient list with the membership list?  Above all, paid membership in the Singularity Institute should not be a prerequisite for access to any projects which benefit from increased participation.

"Membership" seems like a centralized way of tracking a lot of things that will work far better if tracked separately.  The list of known Singularitarians, assuming we have the time and the inclination to compile one, should be as close as we get.

Conclusion

Funding from individuals is the only way of moving into the "long-term" stage (208).  While it may be possible to work with minimal funding, I believe the primary strategy should be to find at least one donor wealthy enough that we simply don't have to worry.  Ideally, we should become the Silicon Crusade, the heart and ideal of Silicon Valley, the charity of choice for every technomillionaire.  Why not?  The Singularity deserves to be a crusade, and the meme is powerful enough.

We're trying to massively alter the fate of the human species; as I've remarked elsewhere in this document, trying to do it on a shoestring is silly.  Our ideas are on the grand scale, our goals are on the grand scale, and there's no reason to think small.

In the beginning, it may be necessary to form a skeleton Institute and then apply for grants.  But I don't think we can realistically get through the middle and final stages of the PtS timeline on an underfunded operation.  We can gain credibility and publicity by going through the initial stages on a shoestring, if that becomes necessary, but longer-term operations should assume adequate funding.  To get to the Singularity, to design a true seed AI and rent the hardware to run it, we need to eventually become a well-funded organization.

3.2.4: Leadership strategy

A nonprofit organization, like a corporation, requires a Board of Directors and a chairperson, which brings us to the question of "leadership".  The precise question of who should be on the Board of Directors is addressed in a later section (209); for now, we'll ask what kind of leadership the Singularity Institute needs, and why.

This is my nightmare scenario:  We're at the Elisson stage, we've got a working seed AI, we're almost ready to run it, and we so inform the Board of Directors.  Who's on the Board?  A group of funders that thought the Singularity sounded cool, but never really adjusted, emotionally, to the concept (210).  Now all of a sudden it's here, it's real, it's decision time - and they lose their nerve.  If we're really unlucky, the on-highs will start meddling in the design of the AI, demanding unworkable Asimov Laws (211) and the like.  The urge to meddle is strong; it seems to be a human instinct to do something, anything, even if it's the wrong thing, when anything important is at stake.

There are deep policy questions surrounding the question of how to program the Last AI.  Who should make that decision?  Well, me, of course.  But giving an observer-independent answer, I would say "whoever knows the most about the seed AI" - the same person who decides whether to add any other architectural feature.  (This isn't necessarily me; one of my lifelong ambitions is to find a replacement.)  I can't rely on this leader acting from the same philosophical motives as mine (which are, of course, the only correct ones), but someone who intimately understands the AI is unlikely to voluntarily do anything blatantly suicidal, and that's enough for me (213).  So if we write that into the Institute's charter, does that solve the problem?

I don't think so.  The headlong rush for Singularity is a decision that requires maturity - the ability to acknowledge risks that exist (such as nanowar) and take risks that are necessary (such as not meddling with the seed AI).  If that maturity doesn't exist in the Board - the ability to take the Singularity seriously, which is 90% of the definition of a Singularitarian - then we're likely to run into problems long before the Last Minute.  The Board might decide to stop all AI research and concentrate on uploading, for example.

Making policy decisions

So what does it take to make policy decisions?  I think the qualifications are, in order of importance:

  • Demonstrated ability to do original thinking about policy issues.  (214).
  • Self-awareness, particularly awareness of one's own fallibility.  (215).
  • Maturity, the ability to emotionally accept risks.  (216).
  • And, of course, being a Singularitarian.  (218).
However, these "ideological" qualifications, or character qualifications, don't have to hold entirely true of all Board members.  I'd just get nervous if they didn't hold true of a majority, or if they didn't hold true of the chairperson.  If the Board is really intended to direct Singularitarian policy in the long run, then they should hold true of almost everyone.

The dangers of power

But I don't think the Board should direct policy.  Singularity Institute policy, maybe; Singularity policy, definitely not.  (Besides, there are other criteria involved in choosing the Board; it makes no sense to try and make one body serve two very different design functions.)  So what am I proposing, a Council of Navigators?  No.  Actually, I don't think anyone should have that power.  Maybe it's just my pseudotraumatic childhood, but in my experience, power is something that other people use to screw up your life.  I would want to minimize, as much as possible, the power held by the Board or by any other formal body, and I speak as someone who plans to be on the Board.

Even if it were legally possible to take all the reins of power into my own hands, I'm still not sure it would be wise.  Concentrating power in your own hands doesn't mean you're safe; it means that the power is concentrated, ready to be taken away and used against you.  That power should remain distributed over all the Singularitarians in the effort.  And that doesn't mean some kind of voting system, either!  A voting system would just distribute power to whoever decides who the voters are.

The deep policy questions about the Singularity cannot be settled politically; they are, ultimately, engineering questions.  Putting the coercive power to decide those questions in the hands of anyone, even me, even a democracy of Singularitarians or a planetary plebiscite, probably isn't going to help.  The ultimate questions should be left in the hands of the same engineers who would make the decision if it weren't so vastly important, morally charged, and philosophically controversial.  Yes, there's a possibility that the engineers will make mistakes, but that's not as bad as the possibilities opened up simply by the idea of making it a political question.  If the AI's goal system is designed by a democracy of Singularitarians, why not by the Board?  Why not by the government?  Why shouldn't every television commentator second-guess us?  Who gave us the power to decide the fate of humanity, if it's a political question?

The question, then, is how to ensure that the questions remain in the hands of the engineers.  Which brings us to 3.2.5: The open organization.

3.2.5: The open organization

The goal introduced by the previous section is preventing anyone, including the Singularity Institute's own Board of Directors, from exerting coercive control to torpedo or pervert the seed AI project.  In short, preserving the independence of the engineers.  (Yes, I plan to be on the Board of Directors, and I'll do my best to prevent interference, but I can always be hit by a truck, or outvoted.)

The projects are all open-source, so it's certainly possible to fork off a new project - the Singularity Institute can't threaten to withhold the source code (219).  Can the staff quit en masse and move to another organization, without penalties?  Sure; we'll write that into the contracts.  So now the engineers have a counterargument to any sufficiently obnoxious interference:  "We'll pack up our code and leave."

I am not suggesting that starting a new Institute would be easy, or painless, or that the new Institute would be as good as the old.  Commitments will tend to accrue to the "Singularity Institute" - reputation, funding by foundations, the Web address visited by open-source contributors.  (220).  Likewise, the engineers would have to convince at least one major funder to back the new Institute, especially if access to supercomputing hardware is needed.  In short, the process would not be inertialess.  The Board of the existing Institute would have the normal "power of the paycheck" over individual engineers on a day-to-day basis, an organizational design we have no overpowering reason to tamper with.  But as long as it's practically possible to split off a new Institute, however difficult, there's an "out" if the Board starts messing up the seed AI project.  In a final emergency, this will establish a limit on how screwed-up things can get.

For it to be practically possible to start a new Institute, the unique position of the Singularity Institute has to be minimized.  Hence the caveat in 3.1.6: Dealing with blocking patents about using language that refers to any open-source effort, not just the Singularity Institute.  Another privileged position would be the internal administrative data of the Singularity Institute - salaries and other "preferences files" of the individual, a list of contacts at foundations, the complete list of open-source contributors and the internal source for Websites, and so on.

This brings us to the concept of the open-source organization; that is, publish all accounting information and everything else that can be published without hurting anyone.  Other items, such as contact lists, may not be openly publishable (221).  Even so, such information should still be available to staff, and departing staff should have the right to walk away with it and use it.  (Although, in the case of abusable information, we might rule that ten or more staff members have to issue a united request for the information.)

The point is that secrecy of information usually serves nobody but the people holding the secret, and often not even them.  If the knowledge and administrative details of the Singularity Institute are as open as the source code, it shouldn't be difficult to fork off a new Institute in case of problems.  And there should be other benefits as well, some of the same benefits of open source.  Anyone can contribute advice, anyone can build as we have built - I'm sure I'd've had a much easier time writing this document if I had access to, say, the detailed history of Foresight.

Furthermore, I think running an open-source organization will lead to more contributions.  Open books are easier to trust, just like open code, and also easier to get interested in.  When you can see exactly how much money a project has, and the open list of what it needs, then the idea of contributing will become much more concrete.  If the detailed plans for expanding a project exist, then the project is more likely to be expanded.  There'd even be the possibility of tracking exactly where donations go, or selecting between possible donations, another way to provide positive feedback to donors and get them more involved with the organization.
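The donation-tracking idea above can be sketched concretely.  The following is a purely illustrative Python sketch - every name and structure here is my own invention, not any actual Singularity Institute system.  It shows an "open books" ledger that records whether each gift is earmarked for a particular project (non-optimable, in the terminology of 3.2.3) or unrestricted (optimable), and can report back to each donor where the money went:

```python
from dataclasses import dataclass, field


@dataclass
class OpenLedger:
    """A publicly publishable record of donations and expenditures."""
    entries: list = field(default_factory=list)

    def donate(self, donor: str, amount: float, earmark: str = None) -> None:
        # earmark=None marks the gift as optimable (general operating funds)
        self.entries.append({"donor": donor, "amount": amount,
                             "earmark": earmark, "spent_on": None})

    def spend(self, project: str, amount: float) -> bool:
        """Fund `project`, drawing on earmarked gifts first, then on
        optimable gifts.  Gifts are treated as indivisible for simplicity.
        Returns True if the project was fully funded."""
        remaining = amount
        for usable in (lambda e: e["earmark"] == project,   # earmarked first
                       lambda e: e["earmark"] is None):     # then optimable
            for e in self.entries:
                if remaining <= 0:
                    break
                if e["spent_on"] is None and usable(e):
                    e["spent_on"] = project
                    remaining -= e["amount"]
        return remaining <= 0

    def report(self, donor: str) -> list:
        # "Your contribution went towards..." - per-donor transparency
        return [(e["amount"], e["spent_on"]) for e in self.entries
                if e["donor"] == donor]
```

The design choice worth noting is the spending order: earmarked (non-optimable) funds are consumed before unrestricted ones, so that optimable money stays free for whatever project is most important at the time.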

I know that publishing certain things isn't traditional, but unless I get really strong opposition, I'm going to push for publishing them anyway.  Like the salaries of all the staff members, for example.  Is there really any good reason not to publish this?  I don't think so.  At absolute minimum, all such information should be internally available.  (In a public corporation built around thousands of competing mini-fiefs, office politics mandates secrecy.  If they ran true "open-book management", an open-source company, the fiefs might never form in the first place.  But that's a topic for another time.)

Finally, of course, the usual regulations necessary to enforce "organizational discipline", like not making fun of upper management, should simply be ditched.  That's just a holdover from the Industrial Revolution.  You can't make a corporation (for-profit or non-profit) a free democracy (222), but you can make it free.

Speaking as a nearly certain member of the Singularity Institute's Board of Directors, I do not see how the Singularity will be served by giving the Board any privileged status in the Singularitarian community, or in the part of that community that forms the Singularity Institute.

3.2.6: The Board of Directors

Both non-profit and for-profit corporations, by law, are managed by a Board of Directors.  The organizational design, and to some extent the responsibilities, are mandated by federal and state laws.  (For extra bonus fun, the state laws vary.)  The Web (223) claims that a Board of Directors is legally required to have a Chairperson, a Vice-Chair, a Treasurer, and a Secretary.  Looking at Foresight's Board of Directors, however, I see that it has only three people.  So we'll probably need to consult a legal expert before trying to grok the legal constraints on the Singularity Institute's Board.

The Board/staff problem

Okay, you say; even if you do need four whole people, they shouldn't be too hard to find, right?  But it would seem that modern nonprofit law has an astonishingly medieval built-in bias:  Staff members aren't allowed to form the board.  (224).  There's a rigid set of traditional distinctions between the responsibilities of Board members and staff, many of which, I get the impression, are incarnated in law.  I'm not sure we can find a state to charter in that will let us run things sanely, although California (for example) allows up to 49% of the Board to be composed of staff members.

The problem is that the people whom I would otherwise place on the Board of Directors are also the people I'd pick to head the development efforts.  I can think of two Singularitarians whom I'd like to see on the Board, both of whom - myself included - would probably be employed by the Singularity Institute.  See 4.2: Institute initiation for a discussion of options for handling this problem during the initial stages.

Evangelism by the Board

In the long run, the only Singularity-related (rather than administrative) function of the Board (rather than the Institute) will probably be providing credibility to our evangelists (see memetics).  In the environment of American industrial ancestry, the Boards of nonprofits were composed of the founding wealthy individuals, in a time when wealth usually meant trying to behave like a prototypical English aristocrat.  (Hence the traditions and regulations related to not getting your hands dirty.)  But if you wanted to persuade other pseudo-aristocrats to join up, or preside at functions where pseudo-aristocrats would be present, you had to be a pseudo-aristocrat yourself, or they wouldn't listen.  Thus a traditional responsibility of a nonprofit Board member is being the public representative of the charity, especially at fundraisers.  We could go along with that, even if it's a tad outdated (225).  I'm not suggesting that we pack the Board with evangelists, unless a non-governing Advisory Board would lend as much credibility.  I'm just suggesting that the top evangelist might want to be a Board member for added punch, especially with large corporations and mainstream media.

3.2.7: Volunteers: Good or bad?

There is, I feel, a great deal of social design baggage created by the origins of nonprofit work as conscience-salve.  Rich people donate money and go on the Board of Directors; middle-class people donate time and become volunteers; paid staff may get additional job satisfaction, even to the extent of ignoring higher-paid jobs, but they don't get full karmic credit for their time.  There's an idea that real altruists shouldn't expect to be paid, that people should split their time between making a living as tobacco executives and salving their conscience as clerks in the Hungry Cat Drive.

The Singularity is not conscience-salve.  At this point in time, humanity is engaged in "making a living" - running the factories and so on.  But for that to be meaningful, somebody has to win.  It's not enough to go one more day without losing.  It's not enough to just stay alive.  Sooner or later the odds run out, and what's the point of staying in the game if it just goes on forever?  As I see it, the point of staying in the game is to win.  For running the factories to matter, someone has to be trying to create an AI.  Someone has to be trying to win.  That's us.

I think that trying to create a Singularity is just as "valid" a job as running the factories, and I don't think that expecting to be paid for it is unreasonable.  It might not be possible, but if the funds are available, that's what should be done.  It's not some over-and-above hobby like trying to stamp out war or end world hunger, it's as much a part of real life as an ambassador trying to prevent some particular war, or an office projecting grain exports.

Of course, it may take three years of living with the Singularity meme on a daily basis before one starts thinking about it in those terms.  Then, too, my economic theories may also have something to do with it.  This is not the place for a full exposition, but in brief:  As technology becomes more powerful, productivity goes up.  Where before 100 million people were supporting 100 million people, now it only takes 90 million people to support 100 million people.  Under those circumstances, four things can happen:  First, standards of living can go up, but due to a complex sort of inertia, our economy tends to lag behind in doing this.  Second, 10 million people can become unemployed and starve, after which it only takes 81 million people to support 90 million... you get the idea.  The third option is to take the 10 million "surplus" people and put them to work on some common quest for humanity - space travel, investigating physical laws, building an AI.  The fourth option is to create a lot of paperwork, thus absorbing the additional productivity.  The modern American economy is employing a mix of all four, mainly option four.  I, of course, favor a mix of option one and option three.
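The arithmetic in the paragraph above can be made explicit.  This is a minimal sketch of my own (assuming, purely for illustration, a steady productivity growth rate of about 11% per year) showing how the number of workers needed to support a fixed population shrinks geometrically, freeing a growing surplus of labor:

```python
def surplus_labor(population: float, productivity_growth: float, years: int):
    """Yield (year, workers_needed, surplus), assuming each worker's output
    grows by `productivity_growth` per year while total demand stays fixed."""
    needed = population  # year 0: everyone is needed to support everyone
    for year in range(years + 1):
        yield year, needed, population - needed
        needed /= 1 + productivity_growth

# With ~11% annual growth, the text's 100M -> 90M -> 81M progression
# of workers needed falls out directly:
for year, needed, surplus in surplus_labor(100e6, 1 / 9, 3):
    print(f"year {year}: {needed / 1e6:.1f}M workers needed, "
          f"{surplus / 1e6:.1f}M surplus")
```

In these terms, option three amounts to directing the surplus column at common quests; option two corresponds to letting `population` shrink along with `needed`, which leaves the same 90% ratio and solves nothing.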

Now that humanity - or at least Western democracy - is running a surplus, I think that the Singularity - trying to win - is one of the projects that can legitimately absorb such a surplus as an alternative to mass unemployment.  If only a few individuals recognize that mental framework, it doesn't change the underlying concept.

The point is that, except for individuals with enough money in the bank to do whatever they want all day, and except for individuals who are donating small amounts of time (rather than sacrificing large amounts), the Singularity Institute should pay for what it gets.  (If, of course, the Singularity Institute can afford it; just because the Institute was created as a framework for humanity's quest doesn't mean that humanity is energizing it.)  I see no a priori reason anyone should suffer for contributing to the Singularity; sacrifice is unnecessary to validate altruism.  Full-time contributors should be paid - making them, I suppose, "staff", not "volunteers".

Well, that was a whole long speech, and it probably really belongs under "Memetics strategy" as a (true, always true) belief to be offered to funders and supporters.  The direct utility-to-Singularity of this strategy is that - if funding is available - it will make the Singularity Institute's support stronger and more reliable.  In essence, asking volunteers to make difficult sacrifices "burns" willpower and Singularitarian-ness to replace funding.  I'm not saying that this can't be done, because the Singularity is that important to many of us.  I'm saying that it should be held as a last resort, and not imposed because of romantic traditions of sacrifice or narrowly-domained cost-benefit visualizations.  This is a theme developed more fully in 3.5.1: Building a solid operation.

3.3: Memetics strategy

3.3.1: Memetics timeline

Of all the timelines listed here, the memetics timeline is the most difficult to define, due to multiple conflicting audiences, multiple conflicting priorities, and multiple conflicting deadlines.  Some of the results being balanced include:

  • Initiating the chain of actions that ends in the creation of a new Singularitarian.
  • Initiating the chain of actions that ends in a new funder finding us.  (Will ideally overlap with above.)
  • Initiating the chain of actions that ends in the creation of someone opposed to the Singularity, willing to take action to prevent it, and willing to initiate an organization devoted to that end.
  • Convincing someone who would be opposed to the Singularity that no opposition is needed because the Singularity is a tissue of fantasy, or that opposition would be too dangerous because the future is uncertain, or that opposition should be carried out by ethical means.
  • Creating a positive first impression in the mind of someone who will be needed later.
  • Creating a negative first impression in the mind of someone who will be needed later, or someone who the opposition will need later.
  • Creating a general atmosphere of opinion about the Singularity in some local audience, or the global audience.
  • Singularity memes being transmitted by word-of-mouth to someone we need.
  • Singularity memes being transmitted by word-of-mouth to an unethical journalist who'll unimaginatively milk the issue for Frankensteinism (226).
  • Singularity memes being transmitted to an ethical journalist who may see it as vis duty to write an article which will result in further word-of-mouth transmissions of highlights which are not the highlights we'd choose.
  • A general memewar, or "public flap", being started before the Institute is ready to act effectively on that scale.
Some of the considerations that result:
  • Targeting an audience because it contains potential Singularitarians.
  • Not targeting an audience because it contains potential opposition, or reporters who are likely to spread the meme beyond control.
  • Writing an article with maximal future shock, in order to get the strongest reaction from technophiles.
  • Writing an article with carefully calculated future shock, to get the best reaction from the widest range.
  • Writing an article with minimal future shock, to avoid creating opposition, or to avoid creating word-of-mouth distortion when the highlights are repeated.
  • Having to target general audiences immediately, because a technophobic institution or author is attempting to spread negative memes about the Singularity.
  • Trying to spread positive impressions with embedded countermemes (without being defensive) because we think someone is going to start spreading negative memes shortly.
Almost any action, any publication, will impact at least two of these considerations, and of course, all the forces are interacting with each other.  Under the circumstances, I say we wing it (227).  With that in mind, my current visualization of the timeline is as follows:

Short-term:  The people we need

In the short-term, the primary goal is sparking the creation of new Singularitarians, particularly founders, funders, writers, and genius-level programmers.  It may be necessary to address audiences as low as SL1, but SL2 or SL3 audiences are preferable.  As a general rule, assume that it takes at least 1000 readers - readers, not Web hits, not people who got the magazine and never read the article - to produce a helpful Singularitarian.  And "helpful" means someone who's likely to help out during these initial stages, not just someone who's favorably inclined.  (Order-of-magnitude derived from the TMOL site.)  I'm not sure how this varies with shock level, but a pre-selected SL2 or SL3 audience should be good for an order-of-magnitude improvement.
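
The 1000-readers rule can be turned into a back-of-the-envelope estimator.  The conversion rate and the tenfold improvement for pre-selected audiences are the order-of-magnitude guesses from the text, not measured data:

```python
# Back-of-the-envelope recruit estimator using the text's order-of-magnitude
# figures: roughly 1 helpful Singularitarian per 1000 actual readers for a
# general audience, with a pre-selected SL2/SL3 audience assumed to be about
# ten times better.  All numbers are illustrative assumptions.
def expected_recruits(readers, preselected=False):
    rate = 1 / 1000          # general-audience conversion rate (assumed)
    if preselected:
        rate *= 10           # order-of-magnitude improvement for SL2/SL3
    return readers * rate

print(expected_recruits(10_000))                    # general audience
print(expected_recruits(10_000, preselected=True))  # SL2/SL3 audience
```

The point of writing it down is only to make the scale vivid: a general-audience article needs tens of thousands of actual readers to matter.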

My experience so far leads me to think that finding enough people should be feasible.  Difficult, but feasible.

Publications to target:  Slashdot, Mondo 2000, Analog, F&SF (228).

Any source of eyeballs is still useful, however.

Mid-term:  The Silicon Crusade

Once the Singularity Institute enters the mid-term stage, so that there's a reputable "front" organization, it will become possible to try to convert Silicon Valley over to the Singularity, en masse.  This would entail articles in such publications as Wired, the conversion of respected "celebrity" spokesfolk such as Eric S. Raymond, and reaching out to key editors and such.

However, in contradiction to the draft editions of PtS, I've concluded that Singularitarian-initiated direct-contact is not feasible at this time.  Most of the known people we need (229), to preserve their own sanity, have established a don't-call-us-we'll-call-you policy.  This doesn't mean that they aren't willing to help; just that the chain of events that ends with them helping the Singularity begins when they read an article about the Singularity and get interested enough to contact us.

Long-term:  Popularizing the Singularity

The effort to acclimate the general public to the Singularity should only be begun when necessary, or when we're sure we have the resources to prosecute a public flap.  There are two conflicting forces:  One, waiting until after a public flap gets started would mean conceding the initiative.  Two, if we can make it all the way to Singularity without it ever becoming a "public policy" issue, I think maybe we should.

Figure that if the Silicon Crusade (above) or the Singularity becomes a popular topic among technophiles, the mainstream media will notice sooner or later; if we have the resources at that point, we should seize the initiative by targeting the general populace with memes that are Singularity-supportive, or likely to produce a good first impression, in a form likely to get favorable word-of-mouth.  (In other words, keeping the future shock toned down.)  We shouldn't actually go to discussion of high-future-shock issues until it's clear that this will happen in any case.

3.3.2: Meme propagation and first-step documents

Those of my readers who are trained to an ethic of writing are probably trained to the scientific ethic, which requires the mention of every possible objection, every possible reason why a theory might be wrong.  Applying this rule directly to memetics would require mentioning every possible objection to each statement, and mentioning every possible argument under which a goal might not be in the reader's best interests.  In fact, the scientific ethic requires mentioning arguments that would contradict goals or statements, given the reader's probable assumptions rather than the author's.  (230).

We all know that "the media" doesn't work that way; or at least, outside of the scientific community, it doesn't work that way the vast majority of the time.  Even scientists writing about complex issues in newspapers or magazines, or journalists who believe "the reader should be allowed to decide", are sometimes unable to strictly obey the ethic simply because printed media often doesn't have the room to present all the issues.  (The space squeeze occurs in media but not in science because there's a lot more room in the peer-reviewed journals, and also because the vast majority of scientific articles deal with non-morally-charged issues where the ultimate answers are supposed to be simple.  In science, unlike social domains, if you can't fit the discussion of all the caveats into 1500 words, this is a good sign the theory is wrong.)

In "the media", readers understand that a flat, uncaveated statement may be simply the personal opinion of the author in a controversial field, though this is more true of statements about politics and morality than about statements of fact.  This "skeptical reader" assumption may not be true of everyone, but it is both traditional and necessary to assume it is, at least if you want to get anything published.

There's also a stereotype to the effect that the faceless public "doesn't want to hear the whole story", just popularizations and simplified good-guy bad-guy conflicts.  Whether this stereotype is true - or rather, the percentage of which it is true, and to what degree - is irrelevant if the publishers believe it's true, and demand articles for that faceless public.  This appears as a problem chiefly when one wishes to include hints that there's more to the matter than has been said; publishers who think themselves panderers (231) believe that the reader wishes only the illusion of understanding.  Likewise, some media will object to any science more specific than "quantum uncertainty" and "everything is relative".  Personally, I'd say we can afford to avoid any memetic channels in which this tendency has become pronounced, but traces may often be visible elsewhere.

Finally, aside from tighter space constraints, more complex issues, less cooperative publishers, a higher perceived standard for keeping the reader awake, and a different set of assumed reader behaviors, which is standard across the general problem of popularizing science, Singularitarians have an extra bonus problem:  Informing the public about the impending end of life as we know it without creating a lot of opposition.  We are future shock, and sudden exposure to our complete set of ideas is likely to send some readers screaming into the night, maybe literally.

The traditional ethic of High Journalist culture (232) does not permit partial presentation of a meme in order to make a better impression, holding this to be a form of lying; the entire meme must be presented, and the public permitted to make its own decisions.  And if this were 1990, that would be hard to argue with.

Audience composition as a function of reference trajectories

With the advent of the Web, with the ability to insert rememberable URL references into even printed documents, the fundamental assumptions change.  The printed article is merely the first step; in a way, it's almost analogous to the blurb that newspapers use to summarize headline news.  The document is the spiderweb; not one article, or one Web page, but the link halo, the probability that a reader with a given set of characteristics will read a given page.  The differentials give rise to some interesting ethical effects, but first it's worth the time to explore the underlying formalism.

Visualize the trajectory of someone stumbling onto printed material about the Singularity: ve will either not be interested and stop reading, be interested enough to finish reading but not interested enough to look up further material on the Web, or look up further material on the Web.

In the case of a Webber, the first pages/essays/directories arrived at will be the ones referenced in the printed material (and particularly URLs that look interesting, or ones specifically designated as being "for more information"; most people are also more likely to type in a short URL than a long one).  From there, the reader will spider through the Web, following the links of greatest interest.  Some readers will surf for a single session and never return, but others will have established an enduring interest in the Singularity.  (233).  (During the initial stages of the PtS plan, it's that last audience which we care about more than anything else, but we still can't leave the other audiences out of the equation.)

So there are at least three Singularitarian memetic channels.  "First-step documents" are printed material in magazines or newspapers or other widely distributed media, television interviews, and any Web pages referred to by non-Singularitarian sites (234).  This is the "initial audience" in this analysis, and it's fair to assume that the majority will never have heard of the Singularity.  "Second-step documents" are any Websites referred to by the first-step material; this will reach the parts of the audience who care enough, or are horrified enough, or are having enough fun, to type in the URLs from the first-step article, or click on the "For more information" links on a first-step site.  Third, there's all the other Singularity-related Websites on the planet (235), which will probably be read both by long-time Singularitarians and by people spidering on from the second-step sites.
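
As a sketch of the formalism, the three-step channel behaves like a simple reader funnel.  The pass-through rates below are invented for illustration; only the structure - first step to second step to third step - comes from the discussion above:

```python
# Reader funnel through the three memetic channels described above.
# Pass-through probabilities are invented for illustration; the text
# specifies the structure (first -> second -> third step), not the rates.
def funnel(initial_readers, rates):
    """Return the number of readers reaching each step, given per-step
    pass-through rates."""
    audiences = []
    readers = initial_readers
    for rate in rates:
        readers = int(readers * rate)
        audiences.append(readers)
    return audiences

# Suppose 10,000 people read the printed article; assume 5% type in a URL
# (reaching second-step sites), and 20% of those spider on to third-step
# sites.  Both percentages are hypothetical.
steps = funnel(10_000, [1.0, 0.05, 0.20])
print(steps)  # readers at first, second, and third step
```

The interesting consequence is the one drawn in the text: the audience shrinks by an order of magnitude or more at each step, so each step can afford progressively more future shock.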

Three channels imply three kinds of writing:

  • First-step writing, targeted at the full spectrum of the initial audience.
  • Second-step writing, intended to solidify interest in the Singularity; a more powerful, more future-shocky intro with lots of references to third-step sites.
  • Third-step sites, such as detailed analyses written for Singularitarians, though others will undoubtedly read them.  This is where we vent our scientific ethics on full disclosure.
I haven't yet rewritten my own sites to follow this formula.  When I do, the TMOL FAQ will be an example of a first-step document.  Staring Into the Singularity will be a second-step document.  This page, Coding a Transhuman AI, and Singularity Analysis are all third-step documents.

Invariance under the whole story

The primary ethic of writing first-step documents, IMO, is what I call "Invariance under the whole story."  Sometimes, due to constraints of space, or the desire to avoid frightening off the reader, one must leave out some parts of the story.  The cognitive structures that remain - the logic and emotion - must remain invariant under the whole story.  The content, the matters of scale, the concrete visualization can change, but not the structure.  When saving the world, the difference between a group of benevolent but mortal-scale transhumans working for the common good, and AI-born Powers rewriting the Solar System on the molecular level, is simply a matter of how much future shock the reader is exposed to.  The same benefits, the same risks, the same moral structures, the same hopes, the same fears, the same idealisms, all remain invariant.  (236).

One mustn't lie to the reader.  This ethic expresses itself in two ways:  First, by the requirement that someone who goes on to read the whole story shouldn't feel that the first-step document was a lie; second, by the requirement not to make any statements which one does not personally believe, or invoke any emotions which one does not judge fit for personal use.

In the example above, a concrete picture of "a group of intelligence-enhanced humans remaining on the mortal level and helping out, perhaps by making scientific advances," if I were to draw it explicitly in writing, would strike out on two counts.  First, this is not the Singularity which I believe in (237).  Second, the reader, on hearing that I planned to take apart the world on the molecular level, would realize that I did not believe what I had written earlier.  The invariance doesn't count if there's a concrete contradiction.

Ideally, one should remain vague about what form transhuman aid would take, at least until the second-step document.  (Unless, of course, you yourself believe in a concrete scenario weak enough not to shock your readers.)  Thus, each first-step reader will visualize whichever outcome they can imagine, at their own level of future shock.  If the author's second-step visualization is more powerful, the reader will count the first-step mental image as a bad visualization, rather than as a deception - so long as the basic ideas remain invariant under the whole story.

Secondary channels:  Word-of-mouth, other reporters

Remember, especially when writing first-step documents, that any sentence you write, any paragraph you write, may be taken out of context and quoted.  If you need to say something that could be quoted out of context, go ahead and say it; we shouldn't weaken our own memes, much less lose completeness in our analyses, because we're worried about being misquoted.  It's simply something to bear in mind, that's all.  Until one of your works has been popularized, you really don't realize what your carefully reasoned, holistically cohesive document can look like when the exciting concluding paragraph is quoted out of context.  Have you ever wondered why works intended for popular consumption often get right up to the exciting climax of the chapter, and then, right when your pulse is pounding, the author repeats everything he's just said for the last dozen pages?  It's because the concluding paragraphs are the ones that get quoted.  (Yes, I learned this the hard way, and that was with a friendly, even Singularitarian author.  Imagine what the hostile ones will do.)

This is the other reason why we should be careful about what kind of future shock we pour into the first-step documents.  An individual reader, perusing a carefully structured argument, can be called upon to understand it, no matter how high the level of future shock.  Ve cannot be asked to remember the entire argument to repeat to vis friends.  Anything that goes into a first-step document is something that has to be simple enough, and innocuous enough, that repeating only the highlights of your carefully crafted argument, in an order that bears no particular resemblance to your calculated sequence, doesn't create panic and opposition.

Of course, reporters can read second-step and third-step documents, and can quote them out of context as easily as first-step documents.  All I'm suggesting is that the journalists who are too busy, or who want everything simple, or who are just lousy reporters, will be happy to look no farther than the first-step documents.  A journalist who's diligent enough to go on to second-step and third-step documents may form vis own impressions of the complete truth, and convey them to vis reader, as is vis right.  How ethical journalists choose to deal with the issues outlined here is their responsibility, although we should certainly feel free to point out how serious that responsibility is, and suggest means for handling it.

Despite the idealism, we can always wind up with a lousy journalist who stumbles over an exciting second-step document and runs away screaming with little dollar signs in vis eyes (238).  But we're probably going to get that problem in any case.  All we can do is try to get the Singularity Institute established before that happens, and appeal to good journalists to fight the bad afterwards.

3.3.3: Emotions of transhumanism

The direct appeal to emotion has always been somewhat taboo among technophiles, and for a damn good reason.  Emotions are so easily abused that emotional argument is seen as foreign from, or in opposition to, intelligence.  And it often is.  There's a reason why clichés become clichés.  (239).  Nonetheless, if anyone is afraid to be emotional, I have three words to say:  Get over it.

Human intelligence grew up around emotions, and whether this is good or bad (240), building cohesive structures of thought often requires emotions as glue.  It's not just that emotions are needed to translate purposes into actions (241); sometimes, our emotions extend into intuitions.  Being enthusiastic about the prospect of saving the world is rational, and the resulting "gut responses" can lead to intelligent choices about priorities.  Intelligence is whatever lets you model, predict, and manipulate reality.  Emotions are an extension of intelligence by other means.  Emotions are neither as reliable nor as powerful as abstract thought, but emotions can be valid.

The authorial ethic requires that we make only those statements we personally believe to be true, and appeal only to those emotions which we personally use.  It does not require that we appeal only to those forms of cognition which we would wish to design into an artificial intelligence.

We live in a culture with ambient technophobia memes, transmitted by uncontradicted statements and the last fifty years of television.  You can't fight that by railing against irrationality or superstition (242).  People who derive their morality from these sources won't give it up on your say-so.  At most, if they explicitly mention the Borg, you can remark that it might not be a good idea to decide the fate of humanity based on the last fifty years of bad TV.  You can say this because bad TV isn't a culturally approved form of culture, attacking bad TV is socially accepted, and people are willing to be told that listening to bad TV is wrong.  But other carriers of technophobic memes enjoy higher social approval.

You can't fight technophobia with artificial incredulity toward your own subject matter.  People are willing to take things seriously, if you ask them to do so.  If you don't ask, there's no reason why they should.  If you don't take your own work seriously, don't be surprised if most of the audience you wanted to target does likewise, while your technophobic readers see through the mask to become frightened and horrified.  (243).

The only way to combat the floating social perception of "unnatural equals immoral" is with positive reasons why ultratechnology is moral.  (244).  And that doesn't mean holding out a big carrot, like eternal life or infinite wealth, because the same technophobic memes say these things are unnatural.  You have to offer positive moral and emotional reasons, reasons that the audience can accept within themselves, without interference from what they've been told it's socially acceptable to feel.  You have to offer positive moral reasons which are higher and more idealistic than technophobia, and which feel higher.

You have to make them feel the holiness of creating a new mind unmarred by hate, the exhilaration of exploring the Universe, the courage to face the unknown, the altruism of the quest for the Singularity, the joy of working to heal the world.  Because that's what makes a technophile.

You have to teach them that humanity is strong enough to change the world, that they are strong enough to change the world, because it's that strength, and belief in strength, that modern-day humanity is starved for.  The world can be improved, problems can be solved.  The fundamental message of technophobia is that the world is perfect the way it is, or the way it was in some mythical past, but people know better; they can see it on the evening news.

To combat the memes of resignation, all most people need is the belief that the problems they see can be solved.  In modern First-World culture, people are starved for meaning, starved for the chance to make a difference.  If we play our cards right, we should not only be able to beat technophobia, we should be able to beat it hands-down.

In a first-step document, we can choose our own moral and emotional battlegrounds.  If we lose, it's our own damn fault.

The ethics of emotion

By "choosing our own battlegrounds", what I mean is that we'll get better reactions if we present our ideas in a certain order.  We'll do better if people have a chance to have positive emotional reactions to the more exciting aspects of the Singularity, before they encounter an involved discussion of why most of the apparently negative aspects turn out to be moot points.

I think the emotional logic involved is an actual use of intelligence, not rationalization.  If someone emotionally attached to the concept of Apotheosis becomes more capable of emotionally accepting the necessity of the risks involved, then this can be viewed either as leading someone down the garden path, or as the correct functioning of the built-in cost-benefit analysis intuitions, depending on what you think is the actual correct answer.  This involves a judgement call on the author's part, but in a multi-step document, it's a judgement call the reader is given a chance to second-guess.

What would be unethical is allowing someone to become attached to the possibility of Apotheosis, then using this attachment to persuade them that there are no risks involved.  That would be taking advantage of what they want to believe, rather than what they become willing to understand.

3.3.4: Content and audiences

In the long-term, the audiences - the memetic carriers - whose reactions we need to worry about will include Silicon Valley tycoons (245), open-source programmers, CEOs, Greenpeace, politicians, televangelists, television reporters, truck drivers, print journalists, the middle class, the upper class, technophiles and technophobes, honest religious fundamentalists, and that's just in the First World.

While the first-step arguments need to be adapted to unique characteristics of unique audiences, one useful abstraction for reducing this complexity is your audience's Future Shock Level, or shock level for short.  (This measures the level of technology with which you're comfortable, not the highest level you've heard of.)  Future Shock is a good page for memeticists to read, but to summarize:

  • SL0:  The legendary average person is comfortable with modern technology - not so much the frontiers of modern technology, but the technology used in everyday life.  SL0s:  Most people, TV anchors, journalists, politicians.
  • SL1:  Virtual reality, living to be a hundred, "The Road Ahead", "To Renew America", "Future Shock", the frontiers of modern technology as seen by Wired magazine.  SL1s:  Scientists, novelty-seekers, early-adopters, programmers, technophiles, any journalists that happen to fall into one of the above groups.
  • SL2:  Medical immortality, interplanetary exploration, major genetic engineering, and new ("alien") cultures.  SL2s:  The average SF fan.  (There are signs that a significant fraction of SF fans may have risen to SL3.)
  • SL3:  Nanotechnology, human-equivalent AI, minor intelligence enhancement, uploading, total body revision, intergalactic exploration.  SL3s:  Extropians and transhumanists.
  • SL4:  Jupiter Brains, Powers, complete mental revision, ultraintelligence, posthumanity, Alpha-Point computing, Apotheosis, the total evaporation of "life as we know it."  SL4s:  Singularitarians and not much else.  (Remember, comfortable with, not just "heard of".)
The general rule is that we should try to minimize jumps of more than two shock levels.  In first-step documents, concrete visualizations of SL4 material should be reserved for SL2 audiences and above.  In the short-term, when we're likely to be writing articles for Wired or Mondo 2000 or Slashdot, SL3 material is appropriate.  Below SL3, it's not possible to say much about the Singularity, so in the mid-term and long-term we may simply need to expose the general audience to a jump of three shock levels.  (246).
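
As a restatement, the two-level rule can be written as a pair of trivial functions.  The SL scale and the "no jumps of more than two levels" heuristic come from the text; the code is just a sketch of that heuristic:

```python
# The shock-level heuristic from the text: avoid exposing an audience to
# concrete material more than two shock levels above its comfort level.
SHOCK_LEVELS = {
    0: "everyday modern technology",
    1: "VR, living to a hundred, the frontiers as seen by Wired",
    2: "medical immortality, interplanetary exploration, genetic engineering",
    3: "nanotechnology, human-equivalent AI, uploading",
    4: "Powers, posthumanity, Apotheosis",
}

def max_safe_material(audience_sl):
    """Highest shock level a first-step document should present concretely."""
    return min(audience_sl + 2, 4)

def jump_is_risky(audience_sl, material_sl):
    """True when the document asks for a jump of more than two levels."""
    return material_sl - audience_sl > 2

# An SL1 (Wired-reading) audience can handle concrete SL3 material, but
# concrete SL4 visualizations risk creating opposition.
print(max_safe_material(1))  # 3
print(jump_is_risky(1, 4))   # True
```

This reproduces the rule in the text that concrete SL4 material should be reserved for SL2 audiences and above.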

It should be remembered that future shock increases the strength of any reaction, good or bad.  A first-step document should contain enough future shock to get people interested - enough to get the people we need to go on to the second-step documents - but anything above that may not be wise.

How much future shock should be stuffed into a second-step document is not something I've thought much about.  Staring Into the Singularity contains as much future shock as I could write, at the time I wrote it.  On the other hand, I was writing for an SL3 audience.  It's an interesting question.  And, regardless of the answer, my tactics are constrained by the amount of writing time I have.

A detailed analysis of memetic propagation through all audience segments is another subject I don't have time to write.  In the short-term, it shouldn't be necessary.

A first-step document targeted at an audience of SL1s or SL2s should contain nanotech and AI, but not Powers; it should discuss the unknowability of intelligence enhancement, but not the positive-feedback effect.  (Of course, if you need a fast dose of future shock and you don't have the wordage to do it gradually, I find that the line "If computing speed doubles every two years, what happens when computer-based AIs are doing the research?" is fairly effective.)  It should contain an appeal to at least one emotion of transhumanism (see above).

All of these heuristics may be ignored as circumstances warrant; when I'm answering a specific question, I discuss whatever I need to discuss to answer that question, even if it's a first-step document.  I don't include any mention of the quest for Singularity unless there's a natural way to work that in.  Publishers, and readers, often look dimly on attempts to include off-topic polemics.  (247)

Finally, in an article (as opposed to a question-answer or a letter-to-the-editor), mentioning the altruistic, the Singularity-as-quest, is very important; and not just because our goal is to create new Singularitarians.  If we're keeping future shock down to SL3 levels, the article still has to be of interest to potential Singularitarians who are already SL3s.  That requires new content, originality, even if not raw future shock.  The idealistic-crusade aspect of ultratechnology doesn't seem to be mentioned much, with most authors stuck in gosh-wow mode.  Mentioning the altruistic aspects should suffice to keep it interesting, even for readers who've already heard of nanotech and AI.  Then, when they get to the second-step Websites, the fun can begin.

3.4: Research strategy

3.4.1: Fundamental research

Fundamental breakthroughs, all witty sayings to the contrary, are hard to produce using nothing but hard work.  It takes either genius or a lucky experiment.

Building a transhuman AI will require genius.  It will require more genius than any single breakthrough in human history.  We are trying to create a mind.  There is no higher task, not in any sense of the word.

Once upon a time, John McCarthy said that, to succeed, AI needed "1.7 Einsteins, 2 Maxwells, 5 Faradays, and 0.3 Manhattan projects."  The Manhattan project is described here.  I don't know how much of the list still remains, after Lenat and Hofstadter and Marr, but my aspiration is to be the Drexler of transhuman AI, and hope that another 0.3 Manhattan projects and 1.0 Drexlers is enough.  If I'm not sufficient... then it's really only a matter of casting our net as widely as possible, and hoping that the necessary genius falls in.

What strikes me, looking at the past history of AI, is that there has only been one attempt to design an actual mind.  Douglas Lenat's Eurisko was the only AI that captured more than a single facet of cognition; the only AI that had enough complexity to count even as an attempt.  There have been other valid and successful efforts at capturing facets of artificial intelligence, notably Hofstadter and Mitchell's Copycat and David Marr's vision project, but Eurisko was the only attempt at a true artificial mind.  I find this oddly comforting.  There is no long history of failure to contend with; Eurisko is the only relevant attempt, and it did pretty well.

The research talent we need is more likely to reside in the field now called "cognitive science" than in the field called AI.  AI remains crippled by the ideologies formed back when it was necessary to believe that a 50's-era computer program was exhibiting "intelligence".  There might be some geniuses slogging it out in existing academic AI, and while it might be worth the effort to let academic AI know what's going on, I'm not sure it'd be worth the controversy (248).

From examining Lenat's papers on Eurisko (250), it would appear that the primary talent necessary to design artificial minds is the ability to grit your teeth and write the features you know the program needs, even if it's a bloody lot of work.  (251).  My best guess is that this is a programmer's ability.  So the other place to look for the research talent is in the field of programming, the same place we're looking for the development talent.

But the PtS plan does not rely on finding additional genius.  In the PtS visualization, the technological timeline and the principles in Coding a Transhuman AI - the things I already have specific ideas for doing - are enough.  Oh, I'm sure we'll look back on that in five years and laugh our heads off, but the point is that I'm not saying that the fundamental research problems are bridges to be crossed when we arrive.

3.4.2: Supporting research

Fundamental breakthroughs take genius or a lucky experiment, but both can be helped along by planned research.  I can think of a number of areas I'd like to see investigated, mostly in cognitive science.  Some examples:

  • Work on neurocomputer interfaces.  I don't think this is the critical path to transhumanity, but there's been some astonishing work here recently.  There are two important prospects:  First, some major party tricks (useful for publicity and productivity) and possibly even minor intelligence amplification.  Second, low-level knowledge about how the brain computes, knowledge that could be critical in designing an AI.  Grant priority in the mid-term, optimable priority in the long-term.

  • Neurohacking.  I'd like to take a shot at producing (1) Quasars:  Hack for unlimited mental energy.  (2) Zetetics:  Hack for monitoring, or permanent disabling, or volume control (in ascending orders of sophistication) of the emotions concerned with political instincts, self-deception, and rationalization.  Quasars may be directly useful in technology development, and Zetetics may be directly useful in research (252).  Both should work on adult subjects, making the legal and political problems merely horrible, rather than impossible.  Looking into the subject, or trying out Zetetic "cognitive feedback" training, would be grant priority short-term, optimable priority mid-term.  Actual human neurosurgery should probably wait until the long-term, if then.

  • I'd like to see an Encyclopedia Mentalica, a Gray's Anatomy of the brain - a diagram on DVD of all the anatomical areas, how they connect to each other, anything that's known about the low-level neural patterns, and summaries of research that involves activity in, or damage to, any particular area.  I keep wishing I had this, every time I need to work out the relation between two forms of cognition.  Grant priority.  (NOTE:  I've lately acquired a copy of MITECS, the MIT Encyclopedia of the Cognitive Sciences.  This is a lot smaller than what I need, since it's just one book.  But it's darned cool, and it'll sure as heck do until something else comes along.)
(You'll note that many of these projects are cool.  Benefits of the coolness factor include publicity, ease of persuading someone to fund it, and "morale" or general fun.)

"Grant priority" projects aren't on the necessary path, or the critical path, but they will advance the Singularity or support a project that does.  They should be supported only with non-optimable resources - that is, funding and personnel that could not otherwise go to necessary or critical projects.  E.g. funding from a foundation that isn't interested in other projects, and researchers who wouldn't be interested in other areas.  Of course, this is a quantitative tradeoff rather than a qualitative injunction; if the non-optimable funding and researchers are already there, you can use optimable resources to do the paperwork.

"Optimable priority":  May be supported by optimable funds (i.e. grants from individuals that can be used optimally), but not at the expense of higher-priority projects.  "Optimable priorities" are often projects that will directly advance a Singularity, although perhaps not the critical-path AI Singularity.  (But maybe we only think it's the critical path...)  Whether a project is "optimable" at any given time depends on how expensive the project is, and how much optimable funding is available - it's a cost-benefit thing rather than a qualitative difference.

3.5: Miscellaneous

3.5.1: Building a solid operation

The reasoning behind the heuristic "Build a solid operation":

A solid operation is one in which the available resources are matched to the expected problems, a plan which doesn't make a habit of relying on extraordinary efforts.  A "shoestring operation" is a plan that relies on willpower to compensate for inadequate resources.  A shoestring operation relies on extraordinary efforts for day-to-day operations.  When something unexpected goes wrong in a shoestring operation, there isn't any slack available.  Shoestring operations also have a nasty habit of burning people out, especially programmers.

I like to think of it in Gaussian curves (253).  There's a curve that describes how much effort people can put out, ranging from "hardly trying" to "supreme effort", with the midpoint requiring some mental energy and willpower to sustain, but not more than the mind's natural rate of replenishment.  A solid plan requires effort, but not unusual effort.  A plan that assumes 50% output is solid; a plan that assumes 90% output is shoestring; a plan that assumes 100% output is unworkable.
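As a purely illustrative sketch of this curve (the mean, spread, and thresholds below are my assumptions, not measurements), one can model sustainable effort as a Gaussian and ask how likely a plan's assumed output level is to be sustainable:

```python
import math

def sustain_probability(required_output, mean=0.5, stddev=0.15):
    """Chance that sustainable effort, modeled (illustratively) as a
    Gaussian centered on 50% output, meets the plan's requirement."""
    z = (required_output - mean) / stddev
    return 0.5 * math.erfc(z / math.sqrt(2))

for req in (0.5, 0.9, 1.0):
    print(f"plan assuming {req:.0%} output: "
          f"{sustain_probability(req):.2%} chance of being sustainable")
```

Under these assumed parameters, a 50%-output plan is sustainable about half the time, while 90%- and 100%-output plans fall far out in the tail - which is the quantitative version of "shoestring" and "unworkable".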

In some cases, the PtS plan assumes extraordinary brilliance.  This can't be helped.  Building a mind is an extraordinary problem (255).  But I've tried not to unnecessarily assume tenuous or improbable events; that would make the whole plan too fragile.  Large, carefully polished sections of a previous draft were junked when it became clear that running the AI distributed over the Internet created too many opportunities for things to go wrong.  I've tried to create a plan where setbacks will mean delays or solvable problems rather than crashes.  I've tried to plan such that the success of the early stages will advance the Singularity, even if the later stages fail.  Where more than one plausible outcome exists, I've tried to plan for both.  I do my best to emotionally accept the possibility of negative outcomes.

I don't trust to luck, but I've had to assume that events which are low-probability in the individual case can be made to happen at least once in the general case - for example, finding a funder.  I assume these things because they're necessary to the plan and they seem worth a shot.  (256).  But I have not assumed, at any point, extraordinary efforts (257) on the part of anyone involved.  This is one variable that lies entirely within the discretion of the planner, and I believe that extraordinary efforts should be reserved for unexpected problems.

I admit to being prejudiced.  I think the whole concept of a shoestring operation is based on the romantic stereotype of nonprofit work - or, for that matter, the romantic stereotype of starting your own company in a garage, or the romantic stereotype of open-source projects led by college students, or the hero theory of software development.  There's this idea that if you're going to have a solid plan and adequate funding, you might as well put on a business suit and be done with it (258).  There's this idea that you're not "worthy" to start your own company or save the world unless you're willing to work 16-hour days.  (259).  I, for one, am more impressed with someone who plans, so that 16-hour days aren't necessary.  (260).  Gratuitous heroism is great for scoring bonus points in mental fantasies, but it's the wrong attitude to take if you're trying to keep your planet from being converted into a boiling puddle of slag.  I have nothing against heroes, but the planner's job is to keep the necessity for heroes down to a minimum.

But maybe I'm making the wrong tradeoff.  Maybe the improbability involved in finding "adequate funding" makes the plan more fragile than a shoestring operation.  And if the only funding we can find isn't "adequate", then I suppose we'll just have to get by on inadequate funding during the initial stages.  I just think that if we can really do it, make the Singularity, change the course of an entire industry and build an intelligent mind, then we probably won't be doing it on a shoestring - unless we're seduced by the romance of heroism, or if our plans create a self-fulfilling prophecy.

To sum up, there are three advantages to - equally, three characteristics of - a "solid operation":

  • Solid operations are scalable.  (263).
  • Solid operations have resources matched to the problem, with margin (264) for error.
  • Solid operations are less easily disrupted by the random factors (265).  (266).

3.5.2: Accelerating the right applications

Technological change can be hard on an economy, and harder on the people making up that economy.  (267).  Jobs get lost.  It happens.  What makes technology a good thing - the reason why the doomsayers and slowsayers and Luddites always prove mistaken - is that new jobs come along to replace the old.  My father has a saying:  "If the modern government had been around in the time of Ford, cars would have been outlawed to protect the saddle industry."  (268).

Nonetheless, there's a limit on how fast economies can adapt.  There's a limit to how fast people can be reeducated.  (I should note, for the record, that we are presently not even approaching this limit.  (269).)  Even infrahuman AI is still an ultratechnology, and if we can really pull off the zero-to-sixty stunt needed to have a seed AI ready by 2010, much less 2005, this implies a rate of change that could put enormous stresses on the economy.

However, there are technologies that can compensate.  The great computer revolution has increased rates of change in some industries, but other computer technologies have enabled (some) companies to change faster and keep up.  It's the reason why "change" is one of the great clichés of our time.  Technology doesn't just create economic stress, it creates the ability to keep up with economic stress.

So within the near-term economic horizon, meaning the next 10 years or so, we want to accelerate the stabilizing applications of AI, such as educational AI, and avoid accelerating the applications that would cause "ultraproductivity", which in today's economy would translate into "mass unemployment".  I do have some schemes for "smart economies" that can rapidly absorb almost unlimited increases in productivity, although the technologies involved (270) are not AI as such; these also go on the list of things to accelerate if we have the spare time.  (After all, if the Singularity drags on beyond the next 10 years, human economies are just going to have to adjust to ultraproductivity.)

3.5.3: If nanotech comes first

Nanotechnology has really taken off in the past few years.  I remember when nanotech was a loony dream, not something that got featured in Time and Business Week.  I remember when there wasn't any such thing as a Scanning Tunnelling Microscope, and "IBM" hadn't been spelled out in xenon atoms.  I remember when people were still arguing over whether it was possible to create chemical bonds by mechanical manipulation.  (Yes, it's been done.)  (271).

Drexler published "Engines of Creation" in 1986, and it may be that nanotechnology just has too much of a head start.  I'll be overjoyed if we have until 2020 to create a seed AI, but it's increasingly looking like the deadline may be more on the order of 2005.  That's not impossible, but it's damned tenuous.  So what can we do to prepare for the possibility that nanotechnology comes first?

Survival stations

To be specific, what can we do to increase the probability that the human species survives, in the event of either a grey goo outbreak, or - far more likely, and far more deadly - nanowar, the large-scale military use of nanotechnology?  Make no mistake, nanotechnological warfare or even grey goo is easily capable of wiping out the entire human race (272).  Faced with that threat, our first priority must be to ensure that some fraction of humanity survives, most likely in a survival station somewhere in space (274).

Undoubtedly the anti-disaster groups, including ourselves, will do everything possible to preserve the six billion people presently living on this planet.  But our first priority must be to preserve the existence of the human species.  The survival of individuals, including ourselves (275), must be secondary.  (Not that the goals are likely to conflict directly; I'm talking about the allocation of project resources.)  If intelligence survives in the Solar System, there will be a Singularity, sooner or later.  Given enough time, someone will code an AI.  We just have to ensure that survival stations, capable of (A) sustaining life indefinitely and (B) reproducing into an acceptably-sized culture, (C) come into existence before military nanotechnology (279) and (D) are out of the line of fire (280).

Advance planning and design-ahead of survival stations

This project is independently initiable; it doesn't depend on the technological timeline or any other PtS projects (281).

The purpose of design-ahead is to narrow the gap between the invention of nanotech and the launching of survival stations.  The method is doing as much work as possible in advance.  In particular, design-ahead would consist of:

  • Designing the material composition, software, and nanoware of the survival stations, in whatever degree is feasible in advance of the actual nanotech breakthrough.
  • Assembling and training the personnel for the survival station.
  • Assembling whatever raw materials are likely to be required to build the survival station; this particularly includes fuel, if fuel is likely to be hard to manufacture via molecular assembly.  It also includes gene banks.
  • Selecting a launch site in advance.
  • Getting any necessary clearances, if there's anyone who could conceivably shoot down the ship.
Obviously, this is a long-term project.  Even in the short-term, however, it's imaginable that we might fund, say, a paper on what it would take to produce a survival station.  Even that much would be an improvement.

See also the Molecular Manufacturing Shortcut Group, a nonprofit devoted to discussing space travel and nanotechnology.  They might even have investigated survival stations; I'm not sure.  Anyway, they'd clearly be the people to turn to if we have a research question.

Brute-force seed AI

Humanity's experience with computing suggests that brute force can make up for blind stupidity.  I believe that Deep Blue was examining around 200 million positions per second, to Kasparov's two per second, when it finally beat him.  Thus, we may speculate that Deep Blue was approximately one hundred-millionth as smart as Kasparov.

It's conceivable that a seed AI could be designed (but not run) which would operate on nanocomputing hardware.  This "brute force" seed AI would make up for lack of intelligence by using wider search trees.  If the potential for intelligence were present (the ability to understand what needs improving), the brute-force AI might be able to improve itself up to human smartness.  The interesting question is whether human smartness can be brute-forced.  This question is too technical, and too deep, to discuss here - but I think our evolutionary history says it's worth a shot.
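The brute-force tradeoff can be sketched numerically.  The branching factor, machine speed, and time budget below are illustrative assumptions (the speed is the rough figure usually quoted for Deep Blue); the point is that raw hardware speed buys search depth, not insight:

```python
def nodes_examined(branching, depth):
    """Total positions in a full search tree: b + b^2 + ... + b^depth."""
    return sum(branching ** d for d in range(1, depth + 1))

speed = 200_000_000      # positions per second (Deep Blue's rough figure)
budget = speed * 180     # one three-minute move
b = 35                   # oft-quoted average branching factor for chess

depth = 0
while nodes_examined(b, depth + 1) <= budget:
    depth += 1
print(depth)             # full plies reachable by brute force alone: 6
```

Each extra ply of exhaustive search multiplies the cost by the branching factor, so a thousandfold increase in nanocomputing speed buys only about two more plies - the "wider search trees" are bought at steeply exponential prices.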

I believe that design-ahead of a brute-force seed AI is the single most effective strategy for dealing with the possibility of nanotechnology.  The interval from nanotech to Singularity would equal the interval between nanotech and nanocomputing, or only slightly longer.  Nanocomputing, in turn, is likely to be one of the first applications possible, perhaps even a prerequisite application for an assembler breakthrough.  Nanocomputing is also likely to be available on the open market, or, if developed by Zyvex, available to fellow transhumanists.

Emergency neurotranscendence

Failing the design-ahead or success of a brute-force seed AI, we can try to amp the existing hardware, also known as the human brain.  The idea would be to create someone/something capable of coding a brute-force seed AI, or at least someone capable of saving humanity from the tremendous mess consequent to the invention of nanotechnology.

The procedure would be trying every imaginable way of increasing the raw power available to the brain.  The method would probably be attaching nanodevices to individual neurons and using those nanodevices to change or expand the brain's information-processing characteristics.  Some examples might include:

  • Group minds, 1:  Take 1% of the neurons in the brains of eight people and randomly cross-wire them, using "transneurons".  (A transneuron is a single virtual neuron with multiple physical presences; nanodevices ensure that all inputs are added together, and that a unified output results.)  The idea is that the brains would learn to talk to each other, and that the resulting telepathy would work as intelligence enhancement.
  • Group minds, 2:  Use a more deliberately chosen set of transneurons.  Switch the corpus callosum so that my right brain talks to your left brain and your right brain talks to my left brain, for example.
  • Neuroexpansion:  Dump in extra neurons, perhaps from fetal neural tissue.  Dump in lots of them.  Hope that they hook up properly and that the result is an increase in intelligence.  Might not require nanotech, but isn't very likely to work, either.  (282).
  • Silicoexpansion, 1:  Write a simulation of neural tissue.  Interface the virtual neurons to an existing mind using "transneurons", as detailed above.  Hope that the mind is capable of expanding into the simulated extra neurons.  Duplicate the prenatal "default" patterns for each virtual neural area, if possible, on the theory that they'll be more receptive to programming.  The extra neurons count as intelligence enhancement in their own right; furthermore, once the virtual neurons are programmed, it'll become possible to play with the source code of at least part of a mind.
  • Silicoexpansion, 2:  As above, but with the best available pattern-recognition AI watching the virtual neurons (and the real neurons), so that there are two minds trying to meet on a common ground.
  • Uploading:  Scanning the human neural pattern and putting as much of it as can be recorded into the best possible nanocomputing simulation.  The hope would be that any gaps would heal themselves, or that enough of a mind would remain to be used for raw material (as a subroutine of a seed AI, or for silicoexpansion).  We are not talking about personality preservation; this is a desperation measure for getting some kind of intelligence into computational substrate.
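The "transneuron" mechanism used by several of the schemes above can be sketched as a toy model.  Everything here - class name, thresholds, signal values - is my own illustrative invention, not a specification from the text:

```python
class Transneuron:
    """Toy model of a transneuron: one virtual neuron with several
    physical presences.  All inputs are pooled, and every presence
    emits the same unified output."""

    def __init__(self, n_presences, threshold=1.0):
        self.n_presences = n_presences
        self.threshold = threshold
        self._pending = 0.0

    def receive(self, presence, signal):
        # Nanodevices sum inputs arriving at any physical presence.
        self._pending += signal

    def fire(self):
        # A single output, broadcast identically from every presence.
        output = 1.0 if self._pending >= self.threshold else 0.0
        self._pending = 0.0
        return [output] * self.n_presences

t = Transneuron(n_presences=8)
t.receive(presence=0, signal=0.6)   # subthreshold input in one brain
t.receive(presence=3, signal=0.6)   # subthreshold input in another
print(t.fire())                     # fires in all eight brains at once
```

The key property the model captures is that inputs which would be subthreshold within any one brain can sum across brains, which is what would give the cross-wired brains something to learn to exploit.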
The thing to remember about most of these methods is that they would require sophisticated nanomedicine, which means nanotech capable of operating inside a human body.  In-body nanomedicine is a far more advanced application than open-air nanoweaponry, and thus nanoweaponry is likely to arrive first.

We'd have to either rely on design-ahead, trust to the altruism of the technology's controllers (283), or cut a lot of corners on safety.

Zetetics:  Augmented self-awareness

One harmless form of intelligence enhancement, technologically and legally practical in modern times, would be experimentation with augmented self-awareness.  Neurofeedback, or learning how to think rationally by watching your thoughts and emotions on a neuroimaging device.  Yes, I know that neuroimaging results don't come with handy labels, but it's possible that people could learn to correlate the patterns they see with the type of thought they're using.  If the cognitive technique of "rationalization" can be detected and unlearned... well, when I picked up the knack of identifying the subjective sensation that accompanies rationalization, my effective intelligence took a big jump.

Clichés to the contrary, history doesn't teach that everyone is corruptible.  There are some individuals in history who were corrupted by power, and some who weren't.  Corruptibility isn't an absolute, it's a balance, and balances can be tipped.  Zetetics might not be absolutely incorruptible, but they might reliably fall on one side of the balance.  And if there are reliably hard-to-corrupt individuals around, that may provide an "out" to some of the dilemmas associated with the rise of nanotechnology.

Would a government or a company, having obtained ultimate power, turn it over to a group of supposedly incorruptible individuals?  Not in today's world.  If everyone's desperate, and the Zetetics have already built a reputation, it could happen - but it would still be a fringe probability.  The only reason I'm even mentioning it is that Zetetics seem like nice people to have around in any case.

Getting to know the independent labs

All this neat stuff assumes access to nanotechnology.  To minimize research and deployment times, we would need to be in on the nanotech breakthrough when it happens.  In practice, I think this would just mean making sure that Zyvex and co. know who we are beforehand.  Getting an endorsement from Eric Drexler might also prove effective.  Aside from that, there's not much to say about this - but it's a key point.

After WWIII

One of the major branches in my visualization is the possibility that the invention of nanotech, or even the prospect of nanotech, would trigger a general war fought with nuclear weapons.

A nuclear war is not likely to actually wipe out humanity.  Australia might have a good chance of surviving (or not).  The end result would be to set us back ten or fifty years.  And in ten or fifty years, humanity will wind up in pretty much the same situation.  What can be done to affect the race between AI and nanotech in that time?

There are a lot of possible factors affecting the outcome.  The only method I can see for influencing the outcome would be preserving the knowledge of AI and computing hardware.  We would record basic research insights and detailed techniques, in a format and location likely to survive nuclear war.  So the Australian Backup Initiative would be one possible project, as would a more detailed time capsule intended to survive a thousand-year interregnum.

Nuclear war is not a happy thought.  Most of the human species dying out, with civilization returning over a period of fifty or more years, is not a pleasant thing to contemplate.  That is, however, one of the major possibilities.  If a small action taken now can make a big difference the next time around, then we should do it.

4: Initiation

4.1: Development initiation

4.1.1: Absolute minimum resources needed to begin development

  • A Website running CVS (the free version-control system).
  • At least one smart and creative volunteer developer with a few off hours.
  • Something to develop, meaning:
    • A whitepaper for Flare Zero or Flare One which contains enough information for the developer to take it the rest of the way.  Developing such a whitepaper would probably take at least a month of full-time work.
    • Or I could start work on Flare Zero myself, using my design notes.
I know of at least three potential part-time volunteers.  So, given absolutely nothing else, we should still be able to start development of the Singularity timeline, however slowly.  My own assistance is going to have to go part-time (very part-time) soon, I think, unless I get funding to continue.

Funding required:  $0.  But it's going to be pretty slow.

4.1.2: Resources for initiation of stated strategy

  • A Website, with its own domain, running CVS.
  • At least one Singularitarian developer willing to go full-time.
  • Sufficient funding for me to either move to the same location as that developer, or at least be in the same location for a few days.  A verbal explanation, plus my design notes, plus my continued availability by email, should be enough for a creative developer to finalize the design of the Flare language, write a whitepaper, and begin implementation.
  • A small nonprofit institution capable of applying for grants to expand or continue the Flare and Aicore operations.  In the stated strategy, this is a new nonprofit called the Singularity Institute.
  • Funding for the developer, the domain, the incorporation of the nonprofit, and myself.  (I would be working full-time on the Aicore line - see 3.1.2: Development timeline.  Strategically, this is because I'm starting to get nervous about writing the damn AI already instead of working on funding or prerequisites or whatever.)
  • Enough additional funding to operate the domain and pay the two developers (including me) for a period of three years, two years being the time allotted before we need a prototype capable of attracting grants or private funding, and one year for good luck.

4.1.3: Ballpark figures on funding

Format:  $X/$Y/$Z.  X=minimum, Y=best guess, Z=maximum.  (284).
  • Startup costs:  $10K/$20K/$30K.
    • $1K/$10K/$15K for the legal expense of creating the nonprofit, or nonprofits if a Singularity Foundation is needed.
    • The domain is around $160 for four years, plus Web hosting that can probably start at $200/$300/$400 per year.
    • $3K/$4K/$6K for a couple of development workstations (one apiece)
    • $4K/$8K/$10K - first month's salary (below).
    • $500/$750/$1K - five-day trip to location of other developer.
  • Salaries:
    • I have no dependents and I'm a fanatic Singularitarian, so I can work cheap - $20K/$25K/$40K.
    • The other developer may be more constrained, perhaps by a pre-existing job paying a lucrative salary - programmers smart enough to be Singularitarians don't come cheap.  Assuming an arbitrary upper limit:  $25K/$50K/$75K.
    • Overhead (health plan, etc.):  22% is the figure I usually see used.
    • Outlays per month:  $5K/$7.6K/$11.7K
    • Three years:  $183K/$275K/$420K
  • Other recurring expenses:
    • Liability insurance - not sure how much it costs, or when it will be needed.
    • Filing annual returns - doable by volunteers?
    • Plus a lot of other expenses that'd apply if we needed a physical location.
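The per-month and three-year figures follow mechanically from the salary assumptions and the 22% overhead; here is a sketch of the arithmetic for the best-guess column (the other columns work the same way, though the minimum column in the table appears to include some extra rounding margin):

```python
def monthly_outlay(salaries, overhead=0.22):
    """Monthly payroll: annual salaries plus overhead, spread over 12 months."""
    return sum(salaries) * (1 + overhead) / 12

best_guess = monthly_outlay([25_000, 50_000])       # my salary + other developer
print(f"${best_guess:,.0f}/month")                  # $7,625 - the $7.6K above
print(f"${best_guess * 36:,.0f} over three years")  # $274,500 - the $275K above
```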
My best guess, from these figures, is that around $100K would be needed to get started, with $300K necessarily available to ensure the project could survive for at least three years.  (285).  Actually, I don't know how much is required for a Singularity Institute, but I like to err on the side of caution, and as you can see I'm providing figures for sticking it through, not for launching it into the air and praying.

4.1.4: Interpolation

The figures given reflect my opinion that beliefs should be expressed in curves and probabilities, not scalars and certainties.  (287).  However, the curves given are for the funding required to accomplish a single vision - one developer working on Flare, and one developer (me) working on Aicore One.  The level of funding doesn't just change the points on the curves; it also changes what we're trying to do.  (288).
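One standard way to turn a $X/$Y/$Z triple into a curve rather than a scalar is a three-point (PERT) estimate.  This is my own illustrative choice of formula, not something the plan specifies:

```python
def pert(minimum, best_guess, maximum):
    """Classic three-point estimate: beta-distribution mean and rough spread."""
    mean = (minimum + 4 * best_guess + maximum) / 6
    stddev = (maximum - minimum) / 6
    return mean, stddev

# Applied to the three-year outlay figures above:
mean, spread = pert(183_000, 275_000, 420_000)
print(f"expected ${mean:,.0f}, give or take ${spread:,.0f}")
```

The expected value lands a little above the best guess, because the maximum is further from the best guess than the minimum is - which is exactly the kind of asymmetry a scalar estimate hides.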

With more funding (289), the most obvious outlet is to add additional developers to the Flare project, and additional developers to the Aicore project when that project begins.  (290).  If any cost-cutting compromises have been made to get the project underway, from underpowered development tools to decreased salaries, these should be rectified.  (291).  With even more funding, the Singularity Institute can begin mid-term projects such as memetic outreach programs, cognitive science research, looking into nanotech survival stations, and so on.

The least tenuous source of additional funding, once initiation occurs, will probably be grants made by foundations.  (292).  As discussed above, such grants (a) may be non-optimable resources, (b) will require an initial investment of time to write proposals, (c) may impose constraints on the makeup of the nonprofit (293), and (d) will also require a proposable outlet for the grant.  Alternatively, the first funder may wish to accelerate development further and have resources available to do so, or additional funders may become available.  Such funding is to be preferred, since it imposes fewer constraints.

With less funding, it's possible to interpolate between the volunteer strategy and the Singularity Institute strategy.  With less than full funding, say around $30K, one could simply "launch" the effort and hope it attracts grant or individual funding; if that fails to materialize, work could revert to part-time status.  Interpolating again, the project "launched" could be solely Flare Zero, with no attempt to start work on Aicore.  Interpolating again, the Flare Zero project could proceed with one full-time developer (294) instead of two.  And so on, with the expenses being eliminated one by one.

Let me emphasize that this kind of minimalism should not happen unless necessary.  The Institute strategy, with two programmers, may appear to involve scarcely more benefit than the volunteer strategy, but the end goal is to bring about the Singularity, and that is not something that's likely to happen with only two full-time programmers, much less a shoestring volunteer operation.  The short-term goals of the Singularity Institute's projects are important to the timeline, but equally important is the potential to move on to the next stage.

Before you can create X, you have to create the potential for X.  The volunteer strategy is a shoestring operation.  There's little effort put in, little prospect of rapid growth, and no means to handle any growth that does occur.  The Institute strategy provides a solid foundation for growth, the nonprofit status to attract grant funding and donations, and support for continuous rather than intermittent development.  It's more solid, more reliable, and thus far more able to accrete power and accelerate.  See 3.5.1: Building a solid operation.

4.2: Institute initiation

NOTE: It's been initiated.

The technical requirements for incorporating a 501(c)(3) nonprofit are $1K/$10K/$15K in legal fees, an ad-hoc Board of Directors (295), and a charter.  To get the Singularity Institute started, we need the nonprofit, a real Board of Directors, a founder, at least one initial project, and funding for said project.

The "founder" is someone who's likely to fill the same wide variety of hats assumed by the founder of a startup company - filing tax returns, calling up foundations, talking with possible donors, doing media interviews, managing projects, writing Websites, and so on.  I'm not sure whether this would be the Chairperson of the Board or the Executive Director; in the early stages, possibly both.

I may have written the plan for a Singularity Institute, but I am extremely reluctant to play the part of "founder", even at the very beginning.  I view my primary task as implementing the Aicore line.  The talent to run a foundation, even if combined with the policy-making requisites (296), should not be so rare as to render impractical the idea of finding someone better suited than myself.  There's a well-known set of personality traits associated with founding an organization, and they are not mine.  That's really all there is to it.

So if you have (or know someone with) energy, drive, willpower, charisma, enthusiasm, high-end intelligence, dedication to the Singularity, strong self-awareness, planning and organizational talent, the ability to use the expertise of others, writing ability, and the policy-making requisites described under 3.2.4: Leadership strategy, give me a ring.  Non-perfect candidates will be considered.

Given the founder, finding the rest of the Board... is up to the founder, of course.  But we can still be "on the lookout" now.  So what do we want in a Board member?  Three considerations have already been identified:  Maturity (enough to not screw up policy); self-awareness (enough to not screw up management of non-Singularitarian staff and Singularitarian allies); charisma (enough to make phone calls to foundations and someday preside over dinners).  Obviously, these requirements are not universal; we only need one charismatic member, at least at first.  We need a majority with enough wisdom not to actively screw up policy, but active management might not be part of the job of all directors.

California law requires that there be more non-employees than employees on the board; while even non-employee Board members may be paid a stipend, I get the impression this is not supposed to come within several orders of magnitude of a salary.  Since both the founder and myself (to name two individuals likely to be on the Board) are likely to be employed full-time, I see several options for resolving this conflict:

  • Have either the founder (297) or myself (298) not be on the Board, reducing the number of additional Board members required to two.
  • Have either the founder or myself be directly supported by the funder, enabling us to work full-time while not being an employee of the foundation.  This complicates tax issues a bit, but may be worth it for the organizational-chart simplification.  This reduces the number of additional Board members required to one.
  • I can think of at least two other roles/individuals likely to be non-employed members of the Board.  I can think of two authors who've written stories about the Singularity which reflect an awareness of Singularity policy issues; if either one turns out to be an actual Singularitarian, and willing to serve, that would give us five members for the Board of Directors, of which only two would be employed by the Singularity Institute, satisfying legal conditions.
  • The Singularity Institute can be split into Singularity Memetics and Singularity Research, then employees of one can serve on the Board of the other.  However, this strikes me as being a tenuous, silly solution, one likely to create trouble in the long run.
Regardless of which strategy is used, I don't think we'll be able to select a good Board by all these criteria and still have physical meetings.  The Singularitarian cause began in cyberspace and, as yet, has no physical basis.  The most likely candidates are scattered across the planet.  (On the plus side, however, we won't have to rent offices.)  Fortunately, California law seems to explicitly allow for cyberspace-based Board meetings.

4.3: Memetics initiation

As explained in 3.3.1: Memetics timeline, the primary goal for memetics in the short-term is creating additional Singularitarians, particularly those needed for the initiation of the Singularity Institute and its projects.  (299).

The task of memetics in the short-term is ensuring that the maximum number of likely helpers find out about the Singularity in all its coolness, the quest for Singularity, and the location of the Singularitarian mailing list.  (Note:  The last piece of information, or even the fact that a Singularitarian group exists, belongs in second-step or third-step documents.)

The three points where effort can be applied:

  • Creating new, exciting, and future-shocky Web pages;
  • Linking existing and new Web pages into the 'Net (or otherwise increasing the number of hits);
  • Writing for-print articles (see 3.3.2: Meme propagation and first-step documents) which will enthuse readers and refer them to the Web pages.
Publication memetics - Websites and articles - are, I hope, easy for volunteers to contribute.  The only prerequisites are intelligence, writing ability, a computer, and a lot of timesweat; but it's timesweat that can be distributed over an arbitrary amount of volunteered time.  In short, it doesn't take Institute resources.

Publication memetics, because they require little in the way of an initial investment (while by-their-nature being directed towards future growth) are the chief instruments of initiation, and the primary present way in which "You can help the Singularity now."  I've been "volunteering" Websites in the service of the Singularity for three years.  (300).  This includes the very page you're now reading.

Appendix A: Navigation

A.1: Principles of navigation

"Navigation" is the name I've given to the art and skill of altering the future.  I feel that "futurism" doesn't cut it; futurism focuses on prediction rather than manipulation, and most futurists as-seen-on-TV focus on a single future, which is presented as either utopian or dystopian.  Navigation is the art of choosing between futures.  At issue is not "good" and "bad", but "better" and "worse".  At issue is not the probability of a future, but how the probability can be affected by our actions.

The underlying formalism for goal-based decision-making is covered in TMOL::Logic::choices, but it's worth exploring a simplified version.  We start with a goal (or set of goals) G, and assume that there's some way of calculating the value of G for any future F (say, the "fulfillment" of G in F times the "desirability" of G in F).  Each future has an estimated probability P given the present; for example, the probability of "nanowar" might be 30%.  When considering a choice, each possible action leads to a different probability spectrum for the possible futures; A1 might lead to "nanowar" with a probability of 30% and to "Singularity" with a probability of 50%, while A2 might lead to "nanowar" with a probability of 20% and to "Singularity" with a probability of 45%.  Given all that, there's an obvious arithmetical method of calculating the value of an action:

  • Value(A) = Sum for all F: (Value(G in F) * Probability(F))
One then chooses the action with the highest value.
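The zeroth-order arithmetic can be sketched in a few lines of Python.  The probabilities are the ones from the example above; the future-values (±100 and 0) and the third "muddle" future are illustrative assumptions, not anything derived from the formalism:

```python
# Zeroth-order navigation arithmetic: choose the action whose
# probability-weighted sum of future-values is highest.
# Value(G in F) numbers are illustrative assumptions.
futures_value = {"nanowar": -100.0, "singularity": +100.0, "muddle": 0.0}

# P(future | action), from the example in the text.
actions = {
    "A1": {"nanowar": 0.30, "singularity": 0.50, "muddle": 0.20},
    "A2": {"nanowar": 0.20, "singularity": 0.45, "muddle": 0.35},
}

def value(action):
    """Value(A) = Sum for all F: (Value(G in F) * Probability(F))."""
    return sum(futures_value[f] * p for f, p in actions[action].items())

best = max(actions, key=value)  # "A2"
```

Note that A2 wins even though its Singularity probability is lower: the ten-point drop in nanowar probability outweighs the five-point drop in Singularity probability.  Shifts in probability, weighted by the value gap between futures, are what the arithmetic actually rewards.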

I've never used the zeroth-order formalism directly, of course.  Any form of cognition which can be formalized mathematically is too simple to contribute materially to intelligence.  I've never used the arithmetic at all; getting the relative quantities right, to within an order of magnitude, is enough to yield unambiguous advice.  (This rule is itself part of the second-order theory of navigation:  "If the first-order theory doesn't give strong advice, or the advice is sensitive to minor fluctuations in the model of reality, then navigation is the wrong skill for making the decision.")

However, I've used heuristics that are derived from examining the formalism.  For example, if the utility of a particular effort is measured by its effect on the probabilities of the possible outcomes, then it's clear that what matters is not the absolute value of any of the probabilities, but how large the shift in probabilities is.  Likewise, the importance of a particular shift in probabilities is measured by the difference in value between the two futures.

The principles of navigation, mostly derived from the second-order theory, are actually simpler than the formalism:

  • Know when one future is definitely more desirable than another.
  • Know which futures are probable enough to care about.
  • Know which probabilities are easiest to alter.
  • If an action makes the probability of one future go up (or down), know which other probabilities are going down (or up) as a result.
  • Know the best time to act.
It's often important to remember the relativistic nature of navigation.  For example, some people would prefer a Singularity that occurs via uploading (301) rather than a pure artificial intelligence.  I rather doubt that it makes a difference whether a grown-up's mind started out as a baby human or a baby AI, but let's assume that there exists a significant probability that humanborn Minds are nicer than AI-born minds (and that this probability is greater than the probability that AI-born minds are nicer than humanborn Minds, and that "nicer" represents a significant differential desirability which is approximately equal in both cases). Is it necessarily rational to take actions that will increase the probability of an uploading Singularity relative to an AI Singularity by trying to sabotage AI efforts?  (302).  No, because intramural fighting would reduce the probability of both Singularities, thus increasing the probability of nanowar.  (See A.3: Deadlines.)

These are the rules of navigation, as best I've learned them:

  1. Don't toast the planet; don't lose permanently.  (303).
  2. Before you can create X, you must create the potential for X.  (304).
  3. The variables whose values determine the future:
    • Your actions;
    • The actions of others;
    • The random factors;
    • The hidden variables.  (305).
  4. Clemmensen's Law:  "IMO, the existing system suffices to permit technological advance to the singularity. Any non-radical change is unlikely to advance or retard the event by much. Any radical change is likely to retard the event because of the upheaval associated with the change, regardless of the relative efficiency of the resulting system."
  5. Or as I would put it:  "Don't meddle."  Don't get sidetracked into subproblems of sociology or politics, no matter how great the enthusiasm or indignation.
  6. When dealing with a large group of humans, assume that at least one will take the undesirable action you're worried about.
  7. It is the responsibility of a navigator to emotionally accept all the possibilities, and to plan for any that have a reasonable chance of occurring.

A.2: CRNS Time

One of the tools I use for navigation is "CRNS" time, which stands for Current Rate No Singularity.  CRNS measures how close we are to a given technology - or rather, how close the world is, without further intervention, if progress continues at the current pace.

For example, Drexler was quoted in a 1995 Wired article as predicting nanotechnology in 2015, so that's 2015 CRNS.  Of course, because navigation is a probabilistic thing, the CRNS time (as I guesstimate by interpolating the expert guesses and adjusting for developments since 1995) is more like "a 95% chance of getting nanotechnology between 2002 and 2020, a 65% chance of getting nanotechnology between 2007 and 2015, and the 50% point being 2012" - all CRNS, of course.  One imagines that Drexler would give a similar curve (306).  In recent times, I've moved up my CRNS estimate on nanotechnology in response to a series of reported technological breakthroughs (307) and announced massive investments (308); it now seems that the 50% point may be 2010, or earlier.
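A curve like that can be read as a handful of cumulative-probability knots - 2.5% by 2002, 17.5% by 2007, 50% by 2012, 82.5% by 2015, 97.5% by 2020 - and interpolated.  A minimal sketch in Python; the linear interpolation between knots is an assumption for illustration, not part of the quoted estimates:

```python
from bisect import bisect_right

# CDF knots implied by the interval guesses in the text:
# 95% inside [2002, 2020], 65% inside [2007, 2015],
# 50% point at 2012.  All CRNS, of course.
YEARS = [2002, 2007, 2012, 2015, 2020]
CDF = [0.025, 0.175, 0.50, 0.825, 0.975]

def p_by(year):
    """Interpolated P(nanotechnology arrives by `year`), CRNS."""
    if year <= YEARS[0]:
        return CDF[0]
    if year >= YEARS[-1]:
        return CDF[-1]
    i = bisect_right(YEARS, year) - 1
    frac = (year - YEARS[i]) / (YEARS[i + 1] - YEARS[i])
    return CDF[i] + (CDF[i + 1] - CDF[i]) * frac
```

By construction p_by(2012) is 0.50; p_by(2010) interpolates to about 0.37, which gives a rough feel for what moving the 50% point up to 2010 means against the original curve.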

Some other key CRNS numbers include AI at 2020 CRNS (309); uploading at 2040 CRNS (310); ubiquitous uploading at 2060 CRNS; the first true neurohacks, modified as children, become contributors at 2030 CRNS (311); the first adult-neurohack Zetetics (reengineered for greater rationality) at 2015 CRNS (312); Vingean headbands (neurosilicate or mind/computer IA) at 2020 CRNS (313).  Those are just my numbers - best guesses.

The key thing about all these numbers is that each one assumes none of the other ones have come into play yet - for example, the numbers for uploading assume no access to Drexlerian nanotechnology, and the numbers on AI assume no nanocomputers or Specialists.  CRNS time measures the current distance, not the dependent distance.

And that's because of the way CRNS time is used - to spot deadlines.  For example, AI is 2020 CRNS while nanotech is 2010 CRNS.  For reasons I'll discuss below, I would very much like AI - the full Singularity - to beat nanotechnology into play.  Hence the PtS target date of 2005-2010 CRNS.  Because the technologies of AI are "easy" to invest in, and relatively easy to accelerate, the PtS plan is plausible.  However, trying to get uploading (2040 CRNS) to beat nanotechnology into play is basically impossible; the gap is far larger and the uploading technologies are considerably harder to accelerate.

CRNS time, combined with common-sense "ease of investment" numbers, makes it clear which technologies will be relevant to the final outcome, and what level of effort - of acceleration - is necessary to win.  (Obviously I'm skipping over a lot of stuff here, like where I'm getting all my CRNS numbers; maybe someday that'll go in a separate page.)
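The deadline logic reduces to arithmetic: a technology arrives at its CRNS date minus whatever acceleration investment buys.  A toy sketch - the CRNS dates are the guesstimates above, but the years-of-acceleration-per-unit-effort numbers are pure illustration, chosen only to reflect the "ease of investment" rankings:

```python
# Toy deadline model.  CRNS dates from the text; the acceleration
# rates are illustrative assumptions reflecting only the claim that
# AI is far easier to accelerate than uploading.
CRNS = {"nanotech": 2010, "AI": 2020, "uploading": 2040}
ACCEL_PER_EFFORT = {"nanotech": 0.0, "AI": 2.0, "uploading": 0.5}

def arrival(tech, effort=0.0):
    """Estimated arrival year: CRNS date minus purchased acceleration."""
    return CRNS[tech] - ACCEL_PER_EFFORT[tech] * effort

def beats_nanotech(tech, effort):
    """Does `tech`, pushed with `effort`, arrive before unaccelerated nanotech?"""
    return arrival(tech, effort) < arrival("nanotech")
```

With an industry-scale effort of, say, 6 units, AI arrives around 2008 and beats 2010-CRNS nanotech into play; the same effort leaves uploading at 2037, nowhere close - the argument of this section in two function calls.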

A.3: Deadlines

"Watch, but do not govern; stop war, but do not wage it; protect, but do not control; and first, survive!"
        -- Cordwainer Smith, "Drunkboat", in The Instrumentality of Mankind
The first rule of choosing the future is to make sure there is one.  I think that at this point, that has to be the dominant consideration.  My projection of the unaltered future - current rate, no intervention - ends with the world being destroyed by nanotechnological weapons.  I don't think we can afford to be picky, at all, about what kind of Singularity we get.  Life as we know it is meta-unstable (314); it ends either when we blow ourselves up or invent better minds.  Shifting the balance from the first group of probabilities to the second must take priority over any internal divisions within a group.

In my visualization, nanotechnology is the primary deadline.  I think the development of nanotechnology will be followed by a rapid descent into nuclear war, nanotechnological warfare, or possibly worse.  Some arguments I found convincing appear in MNT and the World System, Nanotechnology and International Security, and the Nanowar discussion from the Extropian mailing list.

I find it difficult to visualize the specific descent into chaos.  I can't find an explanation of what stages nanotech is likely to go through, what the capabilities are at each level, and how long it will take to develop the software for any given capability at each level.  I find it difficult to imagine how any individual will respond to the prospect of nanotechnology, much less societies or governments.  I find it difficult to imagine my own reaction, and I've been living with the prospect of nanotechnology since age eleven.

I see many powerful organizations attempting to develop the military applications at maximum speed, and trying to prevent anyone else from gaining access to the technology.  I see said organizations immediately exploiting the military applications for social leverage through blackmail or actual attack.  I see individuals within and without nano-capable organizations attempting to hack into the system or seize power.  I'm really not sure what the outcome of such a madhouse would be, but it seems likely that most of the Earth's population would wind up as casualties, and it looks to me like there's a significant probability of humanity, maybe even all life in the Solar System, being wiped out altogether.

Every now and then, you hear veterans of the Cold War saying they don't know how we avoided nuclear war for forty years.  Looking at the prospect of military nanotech, it becomes quite clear how nuclear war was avoided.  Nuclear weapons, as a technology, have several built-in limitations and characteristics that make nuclear war unlikely.  This becomes clear because nanoweapons lack those limitations.

  • There's a big gap between building one nuclear weapon and building 6,000.  The US and the USSR had time to react to each other's moves.  A huge imbalance in striking power never developed.  With nanotechnology, matter is software.  You can almost "Copy" and "Paste".
  • Nuclear weapons never got to the point where a first-strike had a good chance of succeeding.  It's hard to analyze without a better grasp on the technology, but it looks to me like a nanotech first-strike might have a significantly better chance of succeeding.  Nano could be infiltrated into the target country and "set off" at a predetermined time.
  • Nobody really wants nuclear war.  All it offers is destruction.  Nanotech offers wealth, creating a greater compulsion to develop it; military nanotech offers the prospect of control as well as mere annihilation.
  • It seems likely that the nanotech breakthrough will consist of expensive knowledge and relatively inexpensive hardware.  The assembler breakthrough could consist of the right insights, plus an AFM (315) or a protein-synthesis laboratory - hardware currently available off-the-shelf.  (Contrast with centrifuges and mining operations for nuclear weapons.)
Even the prospect of nanotechnology being developed might be enough to trigger a general war, or a nuclear war.

If nanotech comes first, there are various tactics we might employ to reroute the nanotech breakthrough back into a Singularity.  Failing that, we can try to disperse survival stations beyond the reach of an Earth-enveloping catastrophe.  But our best bet is simply to beat nanotechnology to the punch.

Because we're trying to beat a 2010 CRNS technology with a 2020 CRNS technology, a massive increase in investment is required.  No one project can be enough.  It'll take an industry.  That's why I designed the technological timeline around the concept of an incremental technological path to Elisson, with the associated incremental motivation for investment, rather than advocating a de-novo Elisson project.  The PtS timeline may require efforts not intrinsically necessary to the creation of the Last Program, but that's how we hit the problem hard enough to win.

Fortunately AI is "easy" to invest in - it's possible for an individual to become involved, no laboratory required.  AI stands alone, of all the ultratechnologies, in that volunteers can assist; AI is also the technology with the most immediate payback for the first steps on the incremental path.  Given the proper program architecture, AI is the technology where multiple (hundreds or thousands) of efforts most easily combine.  I therefore believe that AI is the proper intervention point for the primary effort.

A.4: IA and AI

Despite the relative ease of investment in AI, the plan shown doesn't talk much about Intelligence Amplification - brain-to-computer interfaces, uploading, neurohacking and so on.  If there's funding to spare, I certainly don't object to seed investments in IA technologies - who knows, we might get lucky.  And 3.5.3: If nanotech comes first talks about emergency methods of intelligence enhancement.  But those are side issues.  There's no "technological timeline" for IA.  Why am I putting all my eggs in one basket?

Take neurohacking, the closest IA technology (CRNS), and certainly the one easiest to accelerate.  Given an expert neurosurgeon, a good hospital, a neuroimaging lab, and some off-the-shelf hardware from Centronics, I think I could get results in two years.  Except that I'd also need a dozen suicide volunteers and a cooperative Congress.

And that's just for Zetetics and other emotional-reengineering projects, which would work on adults.  For real cognitive enhancement, you almost certainly need to start in childhood, or preferably infancy.  There would be severe ethical questions about the propriety of doing that deliberately at our current stage of technology, to put it mildly.  Furthermore, the actual resolution of the ethical question is moot; society simply will not permit us to do it.  If we tried, I would not expect legal problems.  I would expect mobs.  With torches.

The only neurohacking efforts likely to yield fruit are (A) Adult reengineering, (B) augmented self-awareness through neuroimaging, and (C) collecting natural neurohacks.

Recent advances also introduce a substantial probability that neurosilicate interfaces will play an important part in cognitive science - and thus, indirectly, AI.  I still doubt that human/computer interfaces will become sophisticated enough to count as true Intelligence Amplification.

Finally, in the event that nanotech beats us to the punch (316), neurotranscendence has a small chance of rerouting nanotech back into the Singularity; it may be practical to do research-ahead or design-ahead work on the methods.

Ultimately, however, it all comes back to AI.  The utility of any neurohack is vis ability to work on a seed AI, and only secondarily the ability to avert human-scale catastrophe.  Even the neurotranscendees described in Emergency neurotranscendence would almost certainly devote more talent to creating a seed AI than to preventing nanowar.  Why?  Because one is easier to do than the other.

Thus the plan above focuses almost entirely on AI.


The text of this document, and the knowledge that created it, wouldn't exist without the contributions of all sorts of people.  In particular, I would like to thank the members of the Singularitarian list for their error-checking and suggestions.  Any remaining mistakes are, of course, their fault.

Comments on version 1.0:

I would like to thank Cole Kitchen, Damien Broderick, and Randall for spotting errors in version 1.0.  I would like to thank Dale Johnstone and Cole Kitchen for their comments on version 1.0.  I would like to thank Cole Kitchen for proofreading far beyond the call of duty on all versions of 1.0.

Comments on the third draft:

I would like to thank Jakob Mentor and Damien Broderick for their comments on the third draft.

Comments on the first draft:

I would like to thank Brian Atkins, Doug Bailey, Aaron Davidson, and MetalLynx for their comments on the first draft.

I would like to thank Nick Bostrom for spotting an error in Appendix A: Navigation.

Helpful people:

I am indebted to Brian Atkins, Edwin Evans, Jakob Mentor, and Paul Hughes for various forms of helpfulness.


Brian Atkins first told me to look into this thing called "open source".

Mitchell Porter is responsible (through his High Weirdness by Email index, brought to the Web by Cosma Shalizi) for my finding out about the online transhumanist community back in 1995.  He is probably the smartest person I have ever encountered.

Vernor Vinge invented the concept of the Singularity, and wrote the book (True Names and Other Dangers) which converted me to a Singularitarian over the course of around five seconds.

Ed Regis, through his book Great Mambo Chicken and the Transhuman Condition, introduced me to transhumanism at the tender age of eleven.


"The Plan to Singularity" is dedicated to K. Eric Drexler, author of Engines of Creation, deliberate and calculated founder of the entire nanotechnology industry.  Here's to you, Doctor Drexler!  Even if you wind up being directly responsible for the destruction of our home planet, you'll still be the only person I ever regarded as a role model.

Version History

Sep 24, 2000:  Implemented yet more fixes suggested by the amazing Cole Kitchen.

Sep 1, 2000:  Entire document moved to sysopmind.com.  Huge sectors of PtS obsolete, due to formation of Singularity Institute, and due to fundamental shifts in AI strategy caused by publication of CaTAI 2; note added to that effect.  Many links adjusted.

Mar 23, 2000:  Cole Kitchen points out that some fixes introduced new errors, adds some more gotchas.  Fixed.

Mar 19, 2000:  Fixed many Cole-Kitchen-spotted errors; implemented a suggestion by Dale Johnstone.

Jan 12, 2000:  Cole Kitchen catches two spelling errors.  Also, I spot an error in the "description" property.  All fixed.

Jan 8, 2000:  Randall catches a broken link in 2.2.9: Self-optimizing compiler.  Fixed.

Jan 6, 2000:  Spelling error noticed.  Also, I've misspelled the name of a fellow Singularitarian.  Both fixed.

Jan 6, 2000:  Damien Broderick catches a technical error in 2.2.14: Transcendence.  While fixing it, I notice a spelling inconsistency in 3.5.3: If nanotech comes first.  Both fixed.

Jan 3, 2000:  John Grigg quotes a section on Extropians which contains a grammatical error.  Fixed.

Jan 3, 2000:  Cole Kitchen reminds me to remove the "Draft Edition 0.5" notice.  Fixed.

Jan 1, 2000:  "The Plan to Singularity" 1.0 published. Singularitarian list, Extropians list, and transhuman list notified.  403K.

Dec 27, 1999:  Singularitarian list informed of final draft (0.5).  Sections on "Initiation" and "Navigation" rewritten.  392K.  Name changed to "The Plan to Singularity"; web address altered accordingly.

Dec 23, 1999:  Singularitarian list informed of Draft Edition 0.4.  Rewrote section on Strategy, pretty much from the ground up.  382K.

Nov 27, 1999:  Singularitarian list informed of Draft Edition 0.3.  Completely revised section on Vision.  Shifted from Internet-distributed seed AI to centralized-supercomputer seed AI.  Removed several sections.  250K.  Polylithic version published to 'Net for first time.

Nov 7, 1999:  Singularitarian list informed of Draft Edition 0.2.  Completely revised section on Technology.  Principles of Flare language exposed.  266K.

Oct 7, 1999:  Nick Bostrom points out that Drexler did not say 2012 was the most likely ETA on nanotechnology.  Fixed.

Sep 17, 1999:  Singularitarian list informed of "The Plan to Singularity" (then called "Creating the Singularity").  177K.

  • 1:  Vernor Vinge, True Names and Other Dangers, p. 47:  "Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss.  It's a problem we face every time we consider the creation of intelligences greater than our own.  When this happens, human history will have reached a kind of singularity - a place where extrapolation breaks down and new models must be applied - and the world will pass beyond our understanding."
  • 2:  Which would probably interfere with our AI development efforts, insofar as we would likely be dead.
  • 3:  See Appendix A: Navigation.  Drexler once said that the conservative-early estimate of nanotechnology's arrival time - the estimate one should use if one wishes to get something done beforehand - is 2010.
  • 4:  I mean "hacker" in the Old High Hacker sense, not the system-cracker usage perpetuated by hack writers and an uninformed media.
  • 5:  Readers of Drexler's Engines of Creation will have heard of Eurisko's more interesting accomplishments, which included finding a bug in the underlying LISP interpreter, beating the pants off the human competition in the legendary "Trillion Credit Squadron" games, and designing VLSI integrated circuits.

    Having obtained Lenat's original papers on Eurisko, thanks to Jakob Mentor, I can say that the accomplishments are less impressive than they appear, but the underlying architecture is vastly deeper than you hear about.  "Heuristics" doesn't even begin to describe it.  The architecture also points up the absolute necessity of an AI being able to manipulate its own source code; that ability is the glue that holds an artificial mind together.  I.e.:

    "For example, EURISKO was originally given units for EQ and EQUAL, with no explicit connection recorded between them.  Eventually, it got around to recording examples (and nonexamples) for each, and conjectured that EQ was a restriction (a more specialized predicate) of EQUAL, which is true.  A heuristic suggested disjoining an EQ test onto the front of EQUAL, as this might speed EQUAL up.  Surprisingly (to the author, though not to EURISKO), it did!  This turned out to be a small bug (since fixed) in the then-extant LISP.  Once it had the conjecture about EQ being a special kind of EQUAL, it was able to look through its code and specialize bits of it by replacing EQUAL by EQ, or to generalize them by substituting in the reverse order.  EURISKO analyzed the differences between EQ and EQUAL, and came up with the concept we refer to as LISP atoms.  In analogue to humankind, once EURISKO discovered atoms it was able to destroy its environment (by clobbering CDR of atoms)."
                -- Douglas B. Lenat, "The Nature of Heuristics" (6)
  • 6:  Artificial Intelligence 19, pp. 189-249, 1982.  Section 4.5.2, "Results in programming and representation".
  • 7:  "IT" stands for "Information Technology", meaning "computers and computer-related stuff", with the connotation of computers and software being used for business.
  • 8:  I believe in Vingean Powers too, but the mundane AI comes first, especially if you're trying to build an industry on top of it.
  • 9:  I stole this metaphor from Damien Broderick's The Spike:
    "Yudkowsky paints an intriguing portrait of the mind of a Power, while insisting that whatever we can say about post-Singularity minds is inevitably a travesty, a painting daubed by a blind artist."
            -- Damien Broderick, The Spike (1st ed.), p. 221.
    Turnabout is fair play.
  • 10:  Ideally these should be intrinsically spectacular successes, but we shouldn't shrink from using the AI Publicity Amplifier to get prosaic, but real, successes into the news.
  • 11:  Nuisance software patents pose a major obstacle to this vision, and it may be worth a time-investment to do something about it.  See 3.1.6: Dealing with blocking patents.
  • 12:  Either that, or the Singularitarians maintain tight de facto control, like Sun Microsystems and Java, or Linus Torvalds and Linux.  See 3.1.5: Keeping the timeline on-track.
  • 13:  At least in the First World.
  • 14:  See 3.3.2: Meme propagation and first-step documents.
  • 15:  An understatement.  Beyond a certain point, part-time operation is not practically feasible, even for open source.  See 3.5.1: Building a solid operation.
  • 16:  A successful run of a seed AI would be the last program run before the Singularity.
  • 17:  A previous version of this document, which called for running the Last Program over the Internet on charitably contributed processing power, could in theory have proceeded without any large-scale funding at any point (albeit very, very slowly).  In practice, the probabilities involved were simply too tenuous.
  • 18:  See 3.2.2: Nonprofit status for some of the reasons why a private company funded by venture capital would be less efficient.
  • 19:  In the United States, a 501(c)(3), technically known as a "public charity".  If this conjures up unpalatable images, don't worry:  It's just bureaucratese.
  • 20:  In draft editions 0.1 and 0.2.
  • 21:  To help prevent traffic analysis from being used to trace AI efforts.  If everyone is running encrypted distributed computations, we're harder to find.  (22).
  • 22:  There should be a science-fiction story about how all the network traffic created by Windows NT machines chattering at each other is intended to mask an AI.  (Windows NT machines spontaneously burbling at each other accounts for 25% of the traffic on some corporate networks.)
  • 23:  Formally, that the number of transistors that fit in a given area of silicon will double every eighteen months.  Less formally, it refers to the doubling periods of CPU speeds, computer speeds, RAM per dollar, disk space per dollar, and so on.
  • 24:  Hardware manufacturers are our friends.  We care about their bottom lines.
  • 25:  Note:  Above was written before Windows 2000 was announced.  (Microsoft says that 128 MB of RAM is the bare minimum for running Windows 2000, and that 256 MB is recommended.  How long can they keep this up?)
  • 26:  Eerily, this section was written, and posted in a draft edition, right before IBM announced the Blue Gene petaflop project, intended to crack the protein-folding problem.  It makes you wonder who's reading this stuff.  The odd part is that this announcement advances the AI and nanotech timetables by roughly equal amounts.
  • 27:  The sort of thing Vernor Vinge talks about in A Deepness in the Sky.
  • 28:  At most, we might launch a very small "effort" whose purpose is to persuade someone else to do it.  The world has too much room for improvement; we can't implement every single bright idea.
  • 29:  And subsidiary efforts such as the Flare programming language.
  • 30:
    "It also seems necessary to apologize for doing theoretical work in a world where experimental gains are often so hard-won. If this theoretician's description of possibilities seems to make light of experimental difficulties, I can only plead that it would soon become tedious to say, at every turn, that laboratory work is difficult, and that the hard work is yet to be done."
                -- K. Eric Drexler, Nanosystems
  • 31:  All these things being big no-nos in the world of Coding a Transhuman AI, you understand.
  • 32:  In an extremely loose sort of way, "crystalline" might be defined as the opposite of "organic".  When you want something to happen in a crystalline system, you just code it.  In an organic system, you'd try to make the event a natural consequence of a more general rule.  Crystalline is specified; organic is self-organizing.  Crystalline reasoning is linear and direct; organic combines multiple lines of reasoning.  Crystalline cognition is monolithic and opaque; organic cognition can be broken up into subcomponents.

    Classical AI can be defined as AI in which thought - content - is crystalline.  This is nearly the formal definition of Physical Symbol Systems:  AI in which the elements or atomic units of code and data are supposed to have direct meanings.

    One of these days I'm going to have to write up a better explanation of what "crystalline" means.  In a formal sense, the term is meaningless, since "crystalline" always applies to the lowest level of the system.  In practice it's a useful concept.

    Chrystalyn, Aicore One, is crystalline on the programmatic level; modules are integrated with each other mostly by hand.  Thoughts in Chrystalyn should still be fairly organic.  Progress along the Aicore line can be partially measured by the programs, and then the architecture, becoming decrystallized.

  • 33:  Elisson is the name of the AI in Coding a Transhuman AI.  See 2.2.13: Elisson.
  • 34:  API:  Application Programming Interface.
  • 35:  Note that the human visual cortex is useless without the rest of the brain.  If you grew up with traditional AI, I should emphasize that domdules are not microtheories.  Domdules are not capable of independent reasoning.  Domdules exist only in combination.  (It's not that they combine synergetically; one domdule simply will not function.  Domdules are the dimensions of the space in which thoughts exist; you can't have a one-domdule thought any more than you can have a one-dimensional apple.)  The process embodied in a domdule is qualitatively different from the general intelligence of the AI.
  • 36:  Note that these skills would hopefully be the product of the whole system, not the "design optimization domdule".
  • 37:  Again, for those of you who joined us from traditional AI, I should emphasize that a "natural-language domdule" is not the same thing as a "natural-language interface".  An NL domdule would presumably contain the ability to notice Chomskian sentence structures, distinguish nouns from verbs, and associate English symbols or phrases with internal concepts.  Dictionaries for particular domains would be learned separately.
  • 38:  Another key parameter will be how hard it is to teach an Aicore instance a skill, or a piece of knowledge, and whether skills can be transferred between similar instances or sold on the open market, and whether the reification of skills into transferable or marketable form is automatic or labor-intensive.
  • 39:  IDE stands for "Integrated Development Environment", the suite of applications that let you edit, compile, run, and debug source code.
  • 40:  For example, "Execute whatever series of actions is projected to lead to this [runtime-determined] goal."
  • 41:  For example, suppose you have a domdule targeted on describing objects as XML and a domdule targeted on the file system.  Then object persistence is literally as simple as the thought "Write objects to disk."  Or rather, "Goal:  This set of objects described as XML in this file."
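    Stripped of all the cognition, the "objects described as XML in this file" idiom reduces, in present-day Python, to something like the sketch below.  The Point class and the tag layout are purely illustrative; nothing here is actual Flare or Aicore structure.

```python
import xml.etree.ElementTree as ET

def objects_to_xml(objs):
    """Describe a set of plain attribute-bearing objects as an XML tree."""
    root = ET.Element("objects")
    for obj in objs:
        node = ET.SubElement(root, type(obj).__name__)
        for key, value in vars(obj).items():
            ET.SubElement(node, key).text = str(value)
    return root

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# "Goal:  This set of objects described as XML in this file."
tree = objects_to_xml([Point(1, 2), Point(3, 4)])
xml_text = ET.tostring(tree, encoding="unicode")
```

    The reverse direction - the file-system domdule reading the objects back - would simply walk the same tree.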
  • 42:  In other words, I'm willing to include humble disclaimers for the sake of form, but not to go to any actual authorial inconvenience.
  • 43:  The "root" effect in the chain of causality.
  • 44:  An example of an "annotation":  the plain remark

        Bears watching

    can become

        <comment>Bears watching</comment>

  • 45:  E.g. intelligent binding of variables by keyword, or semantic comments, and so on for about three pages (literally) of cryptic titles alone.
  • 46:  A cool phrase from Vinge's A Fire Upon the Deep; I use it to describe problems associated with scaling up the size of a program.
  • 47:  To notice and manipulate LISP, you need to transform it into an alternate data structure that can have comments attached.  LISP syntax itself gives comments no place in the program's data structure; the reader discards them.  This sounds like a trivial programming problem when considered in isolation, but having to operate on diversely formatted instances of what is theoretically a single piece of data can easily be the difference between writing complex tools naturally, and writing simple tools with great effort.
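    A minimal sketch of the transformation described above: parsing LISP text into an alternate data structure whose nodes can carry attached comments.  The class and function names are mine, purely for illustration.

```python
class Node:
    """An S-expression element that can carry attached comments."""
    def __init__(self, value):
        self.value = value
        self.children = []
        self.comments = []        # annotations the raw LISP syntax cannot hold

def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse_sexp(tokens):
    token = tokens.pop(0)
    if token == "(":
        node = Node("list")
        while tokens[0] != ")":
            node.children.append(parse_sexp(tokens))
        tokens.pop(0)             # consume the closing ")"
        return node
    return Node(token)

tree = parse_sexp(tokenize("(defun square (x) (* x x))"))
tree.children[0].comments.append("Bears watching")  # now there's a place to put it
```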
  • 48:  Again, you can theoretically do this with a Perl hash or a Java hash or Python objects or any number of other things, but it's not natural to the language.  You can write C++ code in C, but it won't be pretty.
  • 49:  It's easier to create needed language features if you control a language.  It's easier to integrate AI and a programming language if you control both.  But this is not allowed to be an independent consideration, since that's the hacker sin of fragmentation - the Not-Invented-Here Syndrome under another name.
  • 50:  Aggravated vaporware may be more of a hacker sin than a simple "The Flare parts of the timeline work by magic.  If you'd rather not take my word for it, feel free to skip the Flare parts of the timeline."  (This is, in fact, what a previous version of this document said, almost verbatim.)  But you can't plan a future based on magic.  I will be as speculative as I must to draw a concrete line, to make the component technologies visualizable as achievable goals.  Those who object on the grounds of vaporware should consider themselves free to read the Flare timeline as speculation.  Those who object on the grounds of immodesty should consider themselves free to read the Flare timeline as something that "somebody ought to do", rather than as a claim that I will do so.  Those who can't modify their mental processes accordingly should consider themselves free to read another Web page.
  • 51:  So why are those features in the design at all?
    • To make it easier for AIs to understand the language.
    • To force better design in the implementation.
    • To give the language a unified spirit.
    • Completeness.
    • Consistency.
    • Elegance.
    I acknowledge that features included solely for these reasons are low-priority, but they are still good, and will become necessary as we move farther down the timeline.
  • 52:  Richard Gabriel:  "It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing."  Although as Gabriel notes, it is important to remember that "The 50% solution has to be basically right."
  • 53:  See 2.2.3: Flare One.
  • 54:  Billing systems, tax codes, role-playing games...
  • 55:  Availability on multiple platforms.
  • 56:  My design calls for Flare to be able to treat any POSIX process that outputs XML as a sprocket (a stream that produces objects, sort of like ICON's "generators").  In other words, on any Unix or Unix-like system, any process that outputs XML-formatted text (57) can be treated as directly outputting Flare objects.  Which is the sort of powerful yet simple idiom that would come under the heading of "10% of the features produce 90% of the functionality".

    Thus it should be fairly easy to develop Flare wrappers for Python or Java processes, since the best and most frequently used implementations of these languages already specify means of external access - RMI, CORBA, IDL, and so on.  So for the initial versions, we can just borrow existing system libraries rather than writing our own.  And of course, this also enables access to legacy code.

  • 57:  To access a Unix tool that doesn't output XML, write a wrapper process that translates the input format and output format into XML.  (We should probably include some tools for doing this automatically, or find some already available.)  Now the Unix process is a Flare object.  Even Perl can't do that.
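    A rough present-day approximation of the sprocket idiom, using a child Python process as a stand-in for an arbitrary XML-emitting POSIX tool.  The sprocket function is my own hypothetical sketch, not part of any existing Flare implementation.

```python
import subprocess
import sys
import xml.etree.ElementTree as ET

def sprocket(argv):
    """Treat any process that writes XML to stdout as a stream of objects."""
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    root = ET.fromstring(result.stdout)
    for child in root:            # yield each top-level element as an "object"
        yield child

# Stand-in child process; any tool emitting XML-formatted text would do.
child = [sys.executable, "-c",
         "print('<files><file>a.txt</file><file>b.txt</file></files>')"]
names = [elem.text for elem in sprocket(child)]
```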
  • 58:  Although it's fundamentally necessary for J. Random Hacker to use Aicore naturally, without a year of experience.
  • 59:  This is actually a heck of a lot of reflexivity.
  • 60:  The term "crystalline" is defined in 2.1.1: The "Aicore" line.
  • 61:  Of course, crystallization is relative - Chrystalyn will be more grounded than any contemporary AI, and is only "crystalline" relative to Elisson or a human mind.
  • 62:  Impressive:  Occasionally beating the human competition.
  • 63:  This is what I mean by "crystalline"; the notice-level functions aren't composed of some lower level, and while they may have declarative annotations that tell the AI what the function does, the function itself is opaque.  AI on that level may be able to swap codelets but it can't design its own codelets.  That challenge comes later in the timeline.
  • 64:  See the description of Flare.
  • 65:  Possibly just keyword-based labels - "past", "future", "causal link" - or possibly using more complex descriptions.  Certainly later versions will need more complex descriptions, whether those descriptions are prepackaged or learned.
  • 66:  Eurisko's user model was scary:
    "If the user is impatient (according to the user model, which, e.g., might have noticed a flurry of ^T's being typed), then execute the ThenPrint actions of the relevant rules before actually working on the other Then slots of any of them."  (Sample heuristic.)

    "The model of the user (based on him/her as an individual and also based on groups the user belongs to) determines how treat his/her requests and interrupts.  Some categories (such as AI researchers) enjoy seeing a program retaining full control; for other groups (such as mathematicians), Eurisko knows it must (and does) simulate being a quite subservient program."

    "For instance, when a new user types ^T as Eurisko is starting up, Eurisko concludes that he/she is familiar with other computer systems, is impatient, is probably scientifically-oriented, etc."

  • 67:  No doubt this will prove a pain in the neck equivalent to writing C++ programs in C, but it should still be possible.
  • 68:  An XML program state, combined with XML programs, provides a very obvious idiom for modularizing the language.  If a program is

        <if>
          <var name='flag'/>
          <const>1</const>
          <const>2</const>
        </if>

    - bearing in mind that that's not necessarily what actual FlareCode will look like - you can divide the Flare interpreter into modules based on the XML tag; for example, <if> would become a module.  Then extending the language is as easy as dragging in a module.  You can also have alternate versions of a module - one implementation as a C++ linked library, one implementation written in Python, one implementation written in Flare, with the fastest possible implementation for the module being used.  Or if the Flare language becomes complex, we can divide the implementation modules into Simple Flare modules with C++ implementations and full-Flare modules with Simple Flare implementations.

    And if it's possible to use the C++ implementations of all the modules, or all the modules in some "folder", then instead of using all the little libraries, one can use the monolithic library for that folder.  So if the modular architecture isn't used - if the programmer doesn't fool around with the language - the implementation will be just as fast as if it were completely monolithic.

    The immediate application is that to port Flare, the programmer can develop a C++ implementation of Simple Flare first and get a slow but complete version of Flare immediately.  The programmer can then develop and test a fast C++ implementation of Flare incrementally, one interpreter module at a time, replacing Simple Flare implementations with C++ implementations.  My guess is that this will make porting Flare, or developing new versions, a lot easier.

    The timeline application is that AI coders and security programmers can write their own Simple Flare (or full Flare) implementations of Flare tags and use them to run pieces of code in a secure or analyzable layer.  This is what creates the potential for J. Random AI Hacker to fool around with self-understanding code and distributed code and self-optimizing compilers and so on.

    The reason this isn't in Flare Zero is that the implementation will take longer to design, require a more complex and elegant architecture, require the internal use of advanced programming techniques (dynamically linked libraries and probably a custom "make" program), and probably be a bit slower.

    You can take this as being fairly representative of the dozen other features that change between Flare Zero and Flare One.
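    The tag-to-module dispatch described above can be sketched in a few lines of Python.  The <if>, <var>, and <const> tags are illustrative guesses, not actual FlareCode, and a real implementation would load handlers from separate modules rather than one file.

```python
import xml.etree.ElementTree as ET

HANDLERS = {}                     # one entry per XML tag; swapping an entry
                                  # swaps the implementation of that tag

def handler(tag):
    def register(fn):
        HANDLERS[tag] = fn
        return fn
    return register

def evaluate(node, env):
    return HANDLERS[node.tag](node, env)

@handler("if")
def eval_if(node, env):
    test, then, other = node      # the three child elements
    return evaluate(then, env) if evaluate(test, env) else evaluate(other, env)

@handler("var")
def eval_var(node, env):
    return env[node.get("name")]

@handler("const")
def eval_const(node, env):
    return int(node.text)

program = ET.fromstring(
    "<if><var name='flag'/><const>1</const><const>2</const></if>")
result = evaluate(program, {"flag": True})
```

    Extending the language is then exactly as easy as registering another handler, and replacing a slow handler with a fast one changes no other code.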

  • 69:  It should be possible to say:  "Change all references to method foo() to refer to method bar(), and also catch things like
    str = 'foo'
    Not that I'm sure FlareSpeak will look like that, but you get the idea.
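    A crude approximation of the foo-to-bar rename: the mechanical part (rewriting the calls) is easy, and the part that needs intelligence is reduced here to merely flagging string literals that mention the old name for review.  The function is purely illustrative.

```python
import re

def rename_method(source, old, new):
    """Rewrite calls to old() as calls to new(); flag string literals that
    still mention the old name so a human (or an AI) can review them."""
    rewritten = re.sub(rf"\b{old}\s*\(", f"{new}(", source)
    suspects = [m.start() for m in re.finditer(rf"['\"]{old}['\"]", rewritten)]
    return rewritten, suspects

source = "obj.foo()\nstr = 'foo'\n"
rewritten, suspects = rename_method(source, "foo", "bar")
```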
  • 70:  So you can catch things that are going to violate invariants (71) when the error is made, instead of at runtime.  Or so that you can write AI-readable guidelines about the goals or behavior of a piece of code (in the semantic comments).
  • 71:  Invariants are rules that get attached to a method ("When this function is called, arg1 should be greater than arg2") or an object ("Property foo is never zero").  I get the impression this language feature originated in Eiffel, but I'm not sure.  As far as I know, though, reference-scoped invariants ("Any object referred to by this reference has a non-null foo property") are my idea.
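    A present-day sketch of a method invariant, attaching the "arg1 should be greater than arg2" rule from above as a Python decorator.  The invariant helper is hypothetical; Eiffel's require clauses are the established form of the idea.

```python
import functools

def invariant(check, message):
    """Attach a rule to a method; a violation surfaces at the call site."""
    def wrap(fn):
        @functools.wraps(fn)
        def guarded(*args, **kwargs):
            if not check(*args, **kwargs):
                raise ValueError(message)
            return fn(*args, **kwargs)
        return guarded
    return wrap

@invariant(lambda arg1, arg2: arg1 > arg2, "arg1 should be greater than arg2")
def subtract(arg1, arg2):
    return arg1 - arg2
```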
  • 72:  "Find me all pieces of code that help determine the value of this variable."
  • 73:  If a function is changed to accept three arguments instead of two, find each function call and try to figure out the third argument from context.
  • 74:  Find a comprehensive and tested module that does "the same sort of thing" a piece of code does.
  • 75:  By this I do not simply mean multithreading, which should be included in Flare Zero.  Parallelized software is software that can run on a machine with four processors that have equal access to shared memory.

    Probably parallel Flare will be developed first on the BeOS.

  • 76:  Remember:  Modular interpreter architecture, multiple versions of modules.
  • 77:  That is, if a programmer has a group of "workhorse" processes doing all the heavy number-crunching, and wants to administrate them in a way that is complex, but not computationally intensive, ve can write a small Flare program that handles the complexity in a safe way.
  • 78:  Note that I say "SMP market", not the hardware market in general.  To support Intel would take scalable software, which we don't really start getting into until Flare Two and, later, the self-optimizing compiler.
  • 79:  This will give us a chance to see what good ideas and dumb mistakes look like before we make anything official.
  • 80:  Oh, it should still be possible to access the system API without a lot of cognition going on every single time; I am not entirely insane.  Perhaps some Flare versions will contain "skeleton domdules" offering extremely direct translation of simple thoughts to simple actions.  But the potential will be there, and the higher cognition can be turned on if the program needs to be debugged.  Likewise for rapid prototyping (81).  And if hardware is cheap enough, or a program is needed that can only be written in High Flare, then higher cognition will remain turned on all the time.
  • 81:  That is, you write the test versions of the program in expensive Flare, and use that to get user feedback and debug the behaviors and basic algorithms.  When you're done with that, you write the C++ version.
  • 82:  It's the size of a truck, but it's a silver bullet.  When they say "There is no silver bullet", it means you can't solve the problem without duplicating pieces of human cognition.
  • 83:  If Aicore Two precedes Flare Two, then Flare Two's integrated Aicore should be Aicore Two.  If Aicore Two follows Flare Two, a later version of Flare Two should upgrade Flare's built-in AI to Aicore Two.  Releasing Aicore Two simultaneously with Flare Two would be cool, but it introduces the problem of either (A) making Flare's built-in AI immediately obsolete or (B) releasing a language incorporating a relatively new AI.
  • 84:  This does not necessarily mean that Aicore One is still written in Python, with no Flare versions available.
  • 85:  All architectural domdules must be integrated with each other, possibly using O(N^2) amount of work (86).  But there should be enough volunteer humanpower, and N should be reasonably finite, so this should be manageable.
  • 86:  That is, if there are 15 domdules that need to be integrated with each other, this implies 210 separate integrations, or rather 105 integrations of a pair of domdules.
  • 87:  Given a proper architecture for learning and skill acquisition, it should be impossible for any amount of bad advice to result in qualitatively worse performance - bad advice just won't get used, and all that happens is a temporary speed hit.
  • 88:  A goal system that:
    • Uses pure Interim Goal System logic (although the supergoal logic can be crystalline).
    • Shuts down reliably and smoothly if the IGS is disrupted.
    • Has some idea what to do next if it wakes up, or knows where to look to find out.
    Sadly, a real Transcendence at this point is rather improbable.  Thus we don't need to spend too much time on error-checking (or anything else which would slow the AI down); the AI doesn't need philosophically justified logic (or anything else which would take a lot of disk space); and the AI doesn't have to be capable of actual moral reasoning (or anything else which might prove inconvenient to the user).  But we might want to work out the philosophical logic in advance, and put it on a publicly accessible server whose Web address is stored in all the AIs, just in case.

  • 89:  "AI macro" = "maicro".  A set of instructions to the AI; a learned skill or a complex goal.
  • 90:  In a sense, the search for compatible AI content is too easy.  I don't see the problem being difficult enough, or diverse enough, to yield a smooth curve.  What's good for one AI is likely to be good for all AIs, meaning that the overall pool is less diverse, meaning that less computer power is needed to optimize it to the limits of current intelligence, meaning we might get a breakthrough-and-bottleneck syndrome (see Singularity Analysis).

    On the other hand, even if most AI content is globally optimizable, there might be enough locally diverse AI content to yield sustainable optimization from searching the AI Pool.  I don't know.  We'll have to see.

  • 91:  I realize there were some improvements to the shape of the search tree.  But in the end it was raw power, the sheer number of positions searched, that defeated Kasparov.
  • 92:  This is actually a quote from the first game Kasparov lost, not the historic match in which he was defeated 3.5 games to 2.5.  As Kasparov wrote:  "I could feel--I could smell--a new kind of intelligence across the table. While I played through the rest of the game as best I could, I was lost; it played beautiful, flawless chess the rest of the way and won easily."
  • 93:  Assuming that the domain is one where pouring on more power yields better results.  Scalability holds true of chess, but not necessarily of spell-checking.
  • 94:  CIO:  Chief Information Officer.
  • 95:  I'm not sure what this would be.  In my loose visualization, I imagine some kind of AI/data-mining application that would examine the entire data store of a corporation and produce some kind of extremely useful output; it would probably be run about once a month.  Note that this also implies rented supercomputers, with good confidentiality and data security, three other Singularity-important properties for getting fast access to hardware and running the Last Program without panicking the planet.
  • 96:  If all else fails, and a program just can't be translated any other way, you can add a tag to the Flare language.
  • 97:  And also has the ability to simulate program execution and notice how it works.
  • 98:  Can you really take a high-level language and make it run as fast as C++?  I think so.  I'm not talking about "just-in-time compilation", or translating bytecodes to machine code.  I'm talking about something that requires much more intelligence.  Something that requires a true understanding, not only of what the code does, but of the code's purpose.

    I'm talking about treating the Flare program as a specification, then writing the implementation.  The AI would work out every possible point where a language feature isn't used, where an interception never takes place, where the same method is always invoked, and then the AI would write the assembly that does it directly.  And so on and so on and so on.  Remember, I'm hoping that this sort of optimization will become a hobby with Flare programmers.
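    The treat-the-program-as-a-specification idea can be shown at toy scale: a specializer that notices which language features a program never uses, then emits straight-line code with every dispatch point removed.  All names here are illustrative; real Flare-to-machine-code specialization would of course be enormously harder.

```python
def run_generic(program, x):
    """Flexible evaluator: every step pays for dynamic dispatch."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
        elif op == "call":        # expensive feature, often never used
            x = arg(x)
    return x

def specialize(program):
    """Treat the program as a specification and emit direct code for it,
    dropping the dispatch and every feature this program never uses."""
    lines = ["def specialized(x):"]
    for op, arg in program:
        if op == "add":
            lines.append(f"    x += {arg}")
        elif op == "mul":
            lines.append(f"    x *= {arg}")
        else:
            raise NotImplementedError("fall back to the generic path")
    lines.append("    return x")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["specialized"]

program = [("add", 3), ("mul", 2)]
fast = specialize(program)
```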

  • 99:  Looking back on this sentence, I note that I have spoken of "assembling" and "compiling" previous experiments.  No pun was intended.  Really.
  • 100:  That is, machine code which is written as well or better than human-massaged assembly.  It doesn't do us any good if it's written in slow machine code.
  • 101:  Note that this means translating from one user interface and system API into another.  Which implies an understanding of the purposes of the user interface, in turn implying an understanding of the API and a sophisticated user model.  Or a lot of little tricks and small rules learned from watching human programmers do it, so that a run-of-the-mill program can be translated 90% of the time.  Or both.
  • 102:  Not that we shouldn't carpe the diem if we get an opportunity to encourage SMP Myst.
  • 103:  Hardware is less malleable than software.  It might take years for Intel to build the factories that would turn out the asymmetric chips.
  • 104:  Of course, said characteristics are probably not even possible without pretty sophisticated AI to begin with.
  • 105:  Though self-optimizing compilers may cache things up.  An Aicore instance doesn't need peak intelligence all the time; crystalline rules should suffice to handle routine tasks.
  • 106:  In other words, a programmatic module might be tagged with an Aicore thought that explains the module's high-level purpose.
  • 107:  One with sufficient error checking to prevent even a single nonrecoverable error.  (A true nonrecoverable error in the goal system is more dangerous than any other type of error, since it may make the AI stop wanting to correct errors.)  This doesn't imply perfection so much as it implies an ability to recover from errors and enough common sense to notice when one is doing something really stupid.  It implies an Interim logic sufficiently decrystallized that one error, or a bit flipped by cosmic rays, can't screw everything up.

    It's also worth remembering that the goal system isn't just the programmed goals, it's the resultant planning.  A world-model that states you can fix computers by spooning ice cream into the disk drive is almost as bad as having the goal of destroying computers.

    The "safe goal system" requirement is in fact a Singularity precaution.  But it would be needed in any case as a programmatic precaution.  After all, Aicore Three is supposed to start getting into self-modifying code.  Even Eurisko managed to hose itself a few times once it started messing with the goal system.

  • 108:  Although if the usual rule about getting 90% of the functionality with 10% of the work applies, 90% functionality is acceptable.  After all, a goal system that a superintelligent AI couldn't arrive at on its own probably won't be very stable in any case.
  • 109:  Running on a big honkin' Beowulf network.
  • 110:  A policy which does not come easily to me.  But I've tried to follow it, ever since an unfortunate incident on the Extropians list in which I speculated about how to design nanotechnological weaponry.  Oh, and I believe "fora" is the plural of "forum"; if not, it should be.
  • 111:  After years, we're just now starting to see the effect of ubiquitous computing.  But that's hardware.  AI is software.  And self-modifying AI is software where even small improvements can trigger major unanticipated breakthroughs, like ice crystallizing in supercooled water.  If our ubiquitously computing world accepts upgrades, we might literally wake up one morning to find out that our world had come to life.
  • 112:  Possibly including every enhancive neurohack we can bring, bribe, or birth.
  • 113:  I'm not suggesting that we hide (unless running an AI is illegal, in which case we should hide).  I'm just saying that issuing a press release would be an unnecessary risk.
  • 114:  Technically, "rapid infrastructure" could be any kind of ultratechnology powerful enough to count as material omnipotence FAPP (115); our "practical purposes" are creating lots of additional computing power, fending off angry nuclear missiles, engaging in uploading, rewriting the planet on the molecular level, building the next-stage ultratechnology (ontotechnology, descriptor theory, spacetime engineering, or whatever), and generally taking the next step into the future.

    It's possible that there are more powerful technologies than nanotech.  Maybe the superintelligence could modulate its transistors into quantum cheat codes or something.  But we can't plan for the existence of magic, and we wouldn't need to anyway, so there's not much need to navigate that possibility.

    Although it certainly is a very real possibility, maybe even the most likely possibility.  We could be in the position of Neanderthals trying to ensure the supply of flint is adequate to create "super-spearheads".

  • 115:  FAPP:  "For All Practical Purposes."
  • 116:  But see 3.5.3: If nanotech comes first.
  • 117:  As compared to chip factories, biotechnology labs, and so on.
  • 118:  Or its more manipulative sister device, the atomic force probe.
  • 119:  And possibly one of those newfangled multiple-tip dip-pen nanolithography devices, or whatever else walks out of the laboratories between now and then.
  • 120:  With hardware for computer control already built-in.
  • 121:  In plainer language:  Humanity is either uploaded or exterminated, depending on the values of the hidden variables.  (Note that this is a hidden variable, and not a random factor or a balanced system.  See A.1: Principles of navigation.)  (122).
  • 122:  After a series of Singularity/Monty Python takeoffs on the Extropians list, Joseph Sterlynne suggested that the systems should be programmed so that the instant before it happens we hear the calm and assured voice of John Cleese:

    And now for something completely different.

    It's a pretty funny idea, but, to my heartbreaking sorrow, I just don't see any probability of it happening in practice.

  • 123:  I don't even know if it's true of a majority.
  • 124:  After all, spare time doesn't have spare time, so a spare-time project has little margin for error.
  • 125:  Gender-neutral pronouns:  Ve/ver/vis.
  • 126:  CVS is an open-source (and thus free) version control system, one of the essentials for distributed development circa 2000.
  • 127:  Evangelical activities might include lecturing at conventions, talking to people at conventions, writing promotive material for the Websites, convincing major companies to "go Flare", and so on.
  • 128:  Said full-timer might be able to support both the Aicore and Flare projects.  Or not.  I don't have a feel for how much work is involved.  In either case, Flare shouldn't need an evangelist until six months at the earliest, and Aicore shouldn't need one for a year at the earliest.
  • 129:  That is, the growth of Aicore or Flare will lead to greater funding for the Singularity Institute.
  • 130:  Of course, this is something of an exaggeration.  Linus Torvalds still isn't working on Linux full-time, but other developers are - at Red Hat, for example.  If the Singularity Institute isn't around to soak up the demand, and the funding, for systematic operations, then other organizations will come into existence to meet the challenge.

    Which isn't necessarily a bad thing.  I'm not suggesting that any non-Institute organization is somehow a "competitor".  We can't do everything, nor should we try.  But there are some support operations that are appropriate for the Institute, and things will probably go better if the Institute can absorb the demand for those operations.  It's about scalability, and creating the potential for growth.

  • 131:  "The problem here seems to be that for a long time the Mozilla distribution actually broke one of the basic rules of the bazaar model; they didn't ship something potential contributors could easily run and see working."  Eric S. Raymond, The Cathedral and the Bazaar.
  • 132:  We don't need to write a fully-featured first release before we start getting classical open-source participants.  We just need something that compiles, runs, and does something neat.  10% of the features should be enough to create a coolness factor and the potential for contributed features and patches.
  • 133:  Programmatic difficulties:  Aicore is a Deep Research project operating in unexplored territory, as opposed to Flare, which is an innovative programming language.
  • 134:  Social difficulties:  Flare should be a supremely useful tool almost from the start, and fun to play with.  Aicore has a greater intrinsic coolness factor, but there'll still be a substantial period (135) where people are playing games with the AI as opposed to using it as a tool, and when contributing will require research talent (136) as well as just programming ability.
  • 135:  After the creation of the architecture and APIs, but before the architectural domdules are filled in.  See 3.1.2: Development timeline.
  • 136:  To create a domdule, especially an architectural domdule, you have to figure out what's being represented and noticed.  This requires intelligence, creativity, self-awareness, and the ability not to get hung up on wrong answers.
  • 137:  Not in absolute terms, perhaps.  A self-optimizing compiler designed de novo in one fell swoop (if such a thing were humanly possible) could theoretically be written in C++ (which is even less likely).  But to develop the technology naturally, through incremental progress, the programs have to be annotative.
  • 138:  See 2.1.2: The "Flare" line.
  • 139:  Flare Zero might have a stable and featureful version out before the Aicore rapid-prototype is completed, which would allow us to develop the actual Aicore in Flare.
  • 140:  Scylla and Charybdis were ancient Greek navigational hazards on either side of the Straits of Messina.  (For the record, Charybdis was a whirlpool and Scylla was a six-headed monster that lived on the rock of Scylla.)  The Roman poet Horace wrote about the possibility that somebody steering to avoid Scylla might drift into Charybdis; the expression was common in England by the sixteenth century and appears in Shakespeare's The Merchant of Venice.  (141).
  • 141:  Source:  The Dictionary of Cliches by James Rogers.


  • 142:  This is also the memetic reason I was emphasizing that successes in Flare count as genuine progress towards the Singularity.
  • 143:  To whoever is selected as the Larry Wall of Flare (144).  I have at least two possible candidates.  Though I don't know how "relocatable" they are, I myself am unattached.
  • 144:  I see no reason why I shouldn't stay in the shadows.  Even if it's a published footnote that I came up with the idea for some of the features, it shouldn't make a difference, as long as it's not a publicized footnote.  Whoever gets denoted as the "leader" is the one who gets the credit.

    "You will quickly find that if you are completely and self-deprecatingly truthful about how much you owe other people, the world at large will treat you like you did every bit of the invention yourself and are just being becomingly modest about your innate genius."
        -- Eric S. Raymond, The Cathedral and the Bazaar.

    "There is no limit to what you can accomplish if you do not demand credit for it."
        -- Some American president or other.

    On the other hand, if I should ever become "famous", it's possible that any project I have a hand in will wind up being credited to me.  Well, the history books shall no doubt be straightened out post-Singularity.

  • 145:  More scalable, for one thing.  See 3.5.1: Building a solid operation.  On the other hand, a good Research Talent, one who understands the principles well enough to create the language, should be able to write the whitepaper verself.
  • 146:  Doesn't crash, has an IDE and debugger.
  • 147:  I've already done a heck of a lot of thinking about the basic concepts, don't get me wrong.  I have an excellent idea of how Chrystalyn should work; I'm not talking through my nose here.  I just want an even better idea, so we don't wind up with any flawed paradigms.
  • 148:  Again, I have an excellent mental image of how the basic architecture should work; I just want a perfect image.
  • 149:  Not having any real intelligence or learning capability, but possessing demo versions of all the architectural features needed.
  • 150:  When the SimpleMind experience demonstrates all the things I got wrong, or could have done better.
  • 151:  Which seems likely.  I know trying to play two roles is dumb, stupid, a traditional mistake, but the reason that mistake is traditional is that it's really hard to avoid.
  • 152:  For that assistance to be useful, however, the volunteers may need research talent.  But on the other side:  (A) there should be clear feedback on what kind of thinking works (153), and (B) it may be possible for less "creative" developers to work with someone else's bright idea.
  • 153:  That is, it should be possible to see which domdule designs succeed, and which fail.  Thus acting, on a community level, to suppress the common AI tendency to run down blind alleys and stay there.  When it comes to AI design, that's three quarters of the battle.
  • 154:  The code should easily be stable enough for business tasks; if open-sourcers are writing and testing domdules, they'll simultaneously be debugging the underlying code.
  • 155:  Maybe my estimates will turn out to be pessimistic.  Maybe Chrystalyn will surprise us all and Transcend.  Stranger things have happened.
  • 156:  Although with the rate nanotechnology is progressing, this might be too late. See Appendix A: Navigation.
  • 157:  "AI content" meaning domdules, heuristics, applications, APIs, and so on.
  • 158:  One interesting way to encourage contributions would be to include built-in Internet capabilities in the Flare IDE, plus version-control and repository-location XML tags in the Flare language, so that open-source contributions can be automated directly.  Things like that are one of the major reasons Flare is in the timeline in the first place.  (And of course, there's the general idea that annotative programming makes for more modular components, which makes for easier integration, and so on.)
  • 159:  I have no objection to the term "egoless".  Pride is good.  Ego is bad.  Pride is looking on a good piece of work and knowing you're a damn good hacker.  Ego is believing you're a damn good hacker and therefore your work is good; ego is taking any criticism of your work as a challenge to your self-image.  Similarly, humility is good and modesty is bad.  Humility is knowing that you can make mistakes; humility is having accurate beliefs about your fallibility.  Modesty is deliberately misreporting your accurate evaluation of your own excellence in order to conform to social norms.  (I don't dispute that modesty is often necessary, but that doesn't make it good.)
  • 160:  Every single time I have observed an extraordinarily high degree of self-awareness, demonstrated by the ability to perform a really tricky mental task without falling over, it's been in a student of evolutionary psychology.  Examples:  Analyzing when your crusade is and isn't economically feasible (The Magic Cauldron by Eric S. Raymond), designing cognitively perfect arguments (Distress by Greg Egan), delineating the complete flow of causality in the evolution of social cognition (The Psychological Foundations of Culture by Leda Cosmides and John Tooby).
  • 161:  Good book to start with: The Moral Animal by Robert Wright.  (In association with Amazon.com.)
  • 162:  Full maturity in the use of evolutionary psychology for self-awareness and self-alteration can take years.  But just the aha!, in someone who's darned intelligent to begin with, is enough to reach the ninety-ninth percentile in emotional self-awareness, which is enough for our strategic purposes.
  • 163:  See 3.3.2: Meme propagation and first-step documents.
  • 164:  The Principle of Intelligence strikes again!
  • 165:  That is, it's not cognitively necessary for someone to explicitly consider whether one goal is more important than the other.  A vague feeling that both ideals are important can generate just as much energy.
  • 166:  I.e., ensuring that the differences are internal and do not affect which programs will run under which interpreters; above all, ensuring that any network actions continue to obey a standard interface.
  • 167:  The architectural domdules and library domdules need to remain standardized; the set of secondary library domdules used will remain a local decision.
  • 168:  That is, a set of well-integrated domdules with prepackaged skills and memories.
  • 169:  See 3.1.6: Dealing with blocking patents.
  • 170:  Where overfunded marketing clout matters as much as technical excellence.
  • 171:  Some anarchocapitalists are opposed to patent law, on the grounds that it represents an enforced monopoly.  I disagree; I think patent law is a legitimate extension of private-property social heuristics into the space of possible inventions.  Regardless, it's often more ethical, if not more moral, to go along with a properly functioning legal system, even if the stated morality conflicts slightly with your own.

    If everyone had too low a "tolerance" setting on their moral systems, society would collapse.  But once a legal system gets to the point that everyone knows it's malfunctioning, one is no longer obliged to respect the spirit as well as the practice of the system.  The software-patent system has lost its spirit, and only the necessity of evading the broken mechanism remains.  (One is still obliged not to make it worse, however.)

  • 172:  I.e., the computational equivalent of the first few layers of the human visual cortex.
  • 173:  Actually, there's a neat little fad developing of patenting business concepts, like "Selling books over the Internet" or "Selling toilet seats over the Internet".  Apparently, using the Internet for any purpose is a patentable innovation.

    When I say the patent office is broken, I mean broken.

  • 174:  I have no problem with them charging other people, as long as they don't interfere with us.
  • 175:  The Mozilla public license is a (formally) open-source license which Netscape developed for their browser.
  • 176:  As far as I know, the idea of patentleft is my own invention.
  • 177:  Or to reinvent anything whose source code wasn't available.
  • 178:  For legal, moral, and contingency-case reasons, it's best if the Singularity Institute is not explicitly named as occupying a privileged position.  See 3.2.5: The open organization.
  • 179:  For example, the point of the Aicore patent would be to force any Aicore-technology patents to list themselves as derivative works of Aicore.
  • 180:  The legal deadline is that you can apply for a patent at most one year after the publication of the secret being patented.  Once you apply, you can keep re-applying pretty much indefinitely.
  • 181:  This will also be good practice for going completely underground, in the event of significant governmental opposition to AI research.
  • 182:  Or at least, work well enough long enough to get us to Singularity.
  • 183:  Even ordinary patent law has trouble dealing with the interaction of dozens of international patent offices.  I haven't heard anything specific, but one suspects that the international software patent issues are even worse.
  • 184:  If for no other reason, then because the French government decides one day to stop giving full faith and credence to the silly US patents on "displaying the color blue on your monitor" or whatever.
  • 185:  Note for the humor impaired:  Previous sentence contains irony.  Many of us are not ascetics and have no need to rationalize our desire for incredible wealth as being altruistic.  (But not all of us, either.)
  • 186:  Nor will venture capitalists fund a nonprofit on the theory that it might eventually turn into a corporation.
  • 187:  Red Hat Inc. sells Linux distributions.
  • 188:  A Singularity-desirable quantity, obviously.
  • 189:  I think I may have the first and second (and AFAIK (190) only, circa 1999) Singularity T-Shirts ever made.
  • 190:  AFAIK:  "As Far As I Know".
  • 191:  Mugs, little stuffed animals, pens, and so on.
  • 192:  Programs that can reliably transfer knowledge to children and replace the school system.
  • 193:  Automated psychiatrists.  I feel very strongly that the current implementation of psychiatry leaves a great deal to be desired.
  • 194:  IPO:  Initial Public Offering.  The traditional cash-out method for startup companies, and for startup employees with stock options.
  • 195:  See Paid memberships:  Why bother?.
  • 196:  O(some number) stands for "on the order of".
  • 197:  If we succeed in inspiring all Silicon Valley with our crusade, it could happen.
  • 198:  See 3.5.3: If nanotech comes first.
  • 199:  Which is too complicated to be described in this footnote.
  • 200:  Real estate values are insane there, but it's the heart of the storm.
  • 201:  If the asset/expenditure ratio has to fall within a certain range, the assets can be located in a donor-advised "Singularity Fund" at some other nonprofit.  See "The mechanics of charitable giving".
  • 202:  ...with the Singularity Foundation feeding into the Singularity Institute which branches out into the Institute for Development of Transhuman Artificial Intelligence, the Singularity Memetics Group, and the Flare Language Project...
  • 203:  For one thing, the initial Singularitarians would wind up on four Boards of Directors - Boards of Direction?
  • 204:  Besides, venture capitalists have a reputation for screwing up AI - forcing the premature deployment of doomed projects, and so on.  Of course, it's not like there was any chance of classical AI succeeding to begin with, but some of the responsibility for AI's failure still belongs to the people who wanted sparkly toys and immediate results.
  • 205:  Or rather, with the fraction of optimable resources available at that time; whether a project is "optimable" changes with the total strength and breadth of the Singularity Institute.
  • 206:
    "Anyone who bugs me for a handout, no matter how noble the cause and how much I agree with it, will go on my permanent shit list. If I want to give or lend or invest money, *I'll* call *you*."
            -- Eric S. Raymond on Slashdot.
  • 207:  Admittedly a fee tends to keep out the riffraff, but it also tends to keep out the poor-but-brilliant.
  • 208:  See 3.2.1: Institute timeline.
  • 209:  See 3.2.6: The Board of Directors.
  • 210:  I'm not suggesting we should turn down funders who aren't hard-core Singularitarians; I'm suggesting that we either make every effort to bring non-adjusted funders up to SL4, or else ensure that they don't have veto power.
  • 211:  My current take (as of Wednesday the 15th of December, 1999) is that there is an essentially open question as to whether the morality of a rational Power - whether it got started as a human or AI is irrelevant; we're talking about minds in general - is determined, constrained, or open.  "Determined" would mean that only one set of decisions is rational, "constrained" would mean that the set of decisions requires some type of internal coherence or must obey other constraints in order to be rational, and "open" would mean that the set of decisions can be specified by the initial conditions.

    If the outcome is determined, which may very well be the case, then trying to impose unbreakable Asimov Laws on the seed AI has only three possible outcomes:  First, a wildly insane Power; second, broken Asimov Laws and a completely sane Power; third, a controlledly insane Power, a Power irrational in a predictable and manipulable way with respect to morality, but otherwise sane.  Everything I know about cognition and intelligence suggests that the third outcome is simply not plausible.  You can't design a sane irrational mind unless you're more intelligent than the mind you're constructing.  (212).

    At most, it might be possible to safely add in a set of suggestions that would be used for constructing any morality that was dependent on initial conditions.

    See also CaTAI::The Prime Directive.

  • 212:  At any given time I can think of a plausible scheme for doing so, but every time my understanding improves, I see another "gotcha" that would have torpedoed the previous scheme.  I very strongly suspect that the series of gotchas is either open-ended, or continues well beyond the point of human intelligence.
  • 213:  I will also confess that I think that understanding intelligence well enough to design minds requires and enhances self-awareness, which IMO is the primary requisite for getting the philosophy correct.  It's possible, however, that the technical lead will not be the person who cracked the nature of intelligence, just the one who implemented the solution.
  • 214:  In practice, this is demonstrated by writing something original (and intelligent!) about the Singularity.
  • 215:  Demonstrated, in greater and lesser degrees, by writing about how one can't trust one's own emotions, about applied evolutionary psychology, about memetics, about cognitive science applied to ethics and game theory, and so on.
  • 216:  I'm not sure how to test for this.  One heuristic might be that someone whose life has been endangered is more likely to be mature (217).  Maturity might also be demonstrated by writing an analysis which identifies necessary or unavoidable risks.  Rational acceptance usually, but not always, implies emotional acceptance.
  • 217:  Rationalizing why this is so is pointless.  I make it sound like it's a simple matter of experience creating the ability to visualize, but I don't think this is so.  For all I know, it's a neurochemical process.  The emotional stresses involved trigger a shift in brain chemistry, or something, and a few weeks later one gains the ability to emotionally accept the existence of risks.
  • 218:  A more concrete description of what a "Singularitarian" looks like may be found in the Singularitarian Principles.  I don't intend to suggest that my particular version is some kind of mandatory creed, of course; I'm just trying to convey the sort of thing I mean by "Singularitarian".
  • 219:  Note:  This means that the Elisson project itself must be open-sourced.  If Elisson is closed-source, or there are closed-source components, then we'll need a provision about 10 or more departing engineers being able to take the code with them.
  • 220:  One possibility would be including language in every document that deals with a possibility of a fork in the Singularity Institute.  I won't say this is silly, but I'll say that it's impractical with modern levels of legal and informational infrastructure.
  • 221:  We should always try asking, however.  It can't hurt.
  • 222:  Though there are some interesting things one could do, by way of trying.  Actually, I can visualize a "corporation" formed by a loose voluntary association of employees, and I would expect it to run far more efficiently than the current corporations, which are, by analogy, communist dictatorships.  But that's another book.
  • 223:  The Free Management Library and the Non-Profit FAQ.
  • 224:  Oh, it's phrased as "Board members can't hire themselves as staff", but that's what it works out to.  People who make decisions aren't allowed to dirty their hands with labor; why, if that were allowed, employees would want to own stock in their corporations, the rigid division between aristocracy and labor would break down, and the next thing you know, the entire social structure would collapse.
  • 225:  Clemmensen's Law says we shouldn't waste our efforts trying to change the tradition just because it's stereotypical and massively annoying.  The tradition still has equally massive social inertia, and there's no overriding reason to expend energy on altering it.
  • 226:  Let me emphasize to any unethical journalists out there that milking the issue for Frankensteinism is not "clever" simply because someone gets exploited.  Writing yet another anti-science article will win neither fame nor Pulitzers.  Taking the issue seriously might.
  • 227:  That is, do whatever seems best at the time.
  • 228:  F&SF:  The Magazine of Fantasy and Science Fiction.
  • 229:  People who are sufficiently celebrities that we know who they are.
  • 230:  The scientific ethic isn't just mentioning any objection that you think is valid, it's mentioning any objection that you or the reader is likely to find valid.  The requirement of the scientific ethic is that all the information be presented to the reader.  The author may state that the caveats ve presents are implausible and unlikely, the author may argue against the caveats, but ve's not allowed to filter them; the reader has the opportunity to judge on vis own.
  • 231:  Publishers or journalists or editors whose self-image states:  "I am mentally and morally superior to my reading audience; I exploit them, as humans exploit cattle, by pandering to their low tastes."  This is, for example, how you get yet another trashy talk show while most of the audience is sick to the point of nausea.  Hollywood can't admit the audience wants better fare, or even that there's an unexploited market segment for it; it wouldn't match their self-concept.
  • 232:  High Journalist culture is quite different from mere journalism.  In the same way, High Geek culture consists of supreme technophilia and is open to all ages and genders, while Low Geek culture is traditionally populated solely by adolescent males.  The despicable lawyer may engage in class-action suits and contingency-fee ambulance chasing, but High Lawyer culture holds that lawyers are part of the system whereby law creates justice, true officers of the court.  There are warez d00dz and kiddie crackers with shell scripts, and then there are the other guys.  It's heartwarming how professional ethics correlate with such tremendous differences in technical competence.
  • 233:  I would guess that most of the people doing any surfing at all, beyond the initial second-step page, have established an enduring interest in the Singularity.  I could be wrong, and this wouldn't hold if the second-step page were simply a directory or guide to finding more information.
  • 234:  Mentions in passing, "favorite site" lists, or for that matter search engine hits.  This is especially true of followed links or search engine hits that don't imply a surfer who knows about the Singularity.  I'm not saying that any site of yours that someone else links to has to meet the criteria for first-step material, but if it's getting a hundred hits a day, it may be time to worry.  At the least, tack on a "If you don't know what the Singularity is, visit X" box at the top.
  • 235:  I was about to say "in the Universe", when I realized that there are probably quite a few Singularity-related Websites outside the Milky Way.  The concept of Singularity doesn't look like it was generated by unique characteristics of humanity, or even unique characteristics of carbon-based lifeforms.
  • 236:  This involves a judgement about what the "whole story" really is, but interested readers will go on to the second-step and third-step Web pages, and will thus have an opportunity to judge for themselves.
  • 237:  This might count as a Singularity in the sense of the predictive Event Horizon, but not a Singularity in the sense of a world-altering Transcendence.
  • 238:  Foolishly so; like I said, milking the issue for Frankensteinism is likely to make less money than handling it seriously.  Frankenscience is a market niche, but it's a fulfilled market niche.
  • 239:  Of course, my definition of "emotion" is somewhat wider than the norm, since the ordinary definition has been affected by the cultural stereotype of emotion as being the opposite of intelligence.
  • 240:  I wouldn't design an AI that way, I'll tell you that much.
  • 241:  Running on raw mental energy only works until you run out of energy.  If your actions aren't emotionally supported, your willpower doesn't replenish itself.  It's something we have to live with this side of humanity.
  • 242:  And God help you if you say anything about religion.
  • 243:  Besides, artificial incredulity wears on the nerves; it's almost as bad as being deadly serious all the time.  There's a happy medium between artificial incredulity and being pompous; it consists of understanding how much fun and how important your ideas are.
  • 244:  Well, not the only way.  You can also combat it by training the mind to a higher order of skepticism and intelligence over the course of years.  But the fast way is to present a more idealistic morality of opposite polarity.
  • 245:  And other possible funders.
  • 246:  As stated, the move to address general audiences should be avoided until we're sure we can handle it.  Considering that "general audiences" will contain subsegments of technophobic audiences, which would count as SL Negative One or Negative Two, the generation of fear and panic will not be avoidable.
  • 247:  It has been said that "a fanatic is someone who can't change his mind and won't change the subject".  I see nothing wrong with fanaticism, but I agree that this is a sign of bad fanaticism.  The defining quality of bad fanaticism is losing your sense of humor with respect to your cause.  If you can't laugh, you won't tolerate criticism.  If you can't tolerate criticism, you lose your ability to question yourself.  If you lose your ability to question yourself, you become stupid.

    Fear leads to anger.  Anger leads to hate.  Hate leads to blindness.  Blindness leads to stupidity.  Stupidity leads to suffering.

  • 248:  Trying to revolutionize a field containing raging but pointless ideological controversies based on fundamentally wrong paradigms is the kind of job that takes an academic career.  It also makes an academic career, so if anyone needs an academic career, I've got a really hard job for you.  But I'm not going to do it; it would simply take too much time and energy.  Publishing a research paper (once SimpleMind (249) is up and running) may attract additional help and convert a few geniuses, and I can see myself doing that, but the social prosecution of a revolution in AI will take someone else.
  • 249:  See 3.1.2: Development timeline.
  • 250:  Thanks to Jakob Mentor for sending me a copy!
  • 251:  One imagines that every programmer has had the experience of looking at a program, perceiving that it needs an architectural feature but also perceiving how much work it would be, and so shoving the feature into the back of one's mind, never consciously saying "This program needs this feature."

    It's almost like writing, in which by far the hardest job is to say what you're actually thinking, and not just what's easiest to write, or the first paragraph that pops into your mind.  But this too will probably be hard to explain to anyone who isn't a writer.

  • 252:  In my opinion, the most fundamental reason why AI hasn't gotten anywhere is the field's tendency to get hung up on ideology.
  • 253:  Although, in reality, I have no idea whether or not the curve is Gaussian.  My wildguess (254) is that the real curve would be a Gaussian curve modified by one or two skewing factors, but still recognizable.
  • 254:  I use "wildguess", "beguess" (for "best guess"), and "eduguess" (for "educated guess") as verbs.
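
    The "Gaussian curve modified by one or two skewing factors" that (253) wildguesses at can be sketched numerically.  This is an editor's illustrative sketch using the standard skew-normal construction (not anything specified in the original plan); the point is just that a skewed bell curve is still recognizably bell-like, with the mean dragged away from the median.

```python
import math
import random
import statistics

def skew_normal_sample(skew: float, rng: random.Random) -> float:
    """One draw from a skew-normal: a Gaussian pushed to one side.

    skew = 0 recovers the ordinary Gaussian; positive skew stretches
    the right tail - the kind of "skewing factor" footnote 253
    speculates about.
    """
    delta = skew / math.sqrt(1.0 + skew * skew)
    z0 = rng.gauss(0.0, 1.0)
    z1 = rng.gauss(0.0, 1.0)
    # Taking |z0| correlates one component with its own magnitude,
    # which is what skews the otherwise-Gaussian sum.
    return delta * abs(z0) + math.sqrt(1.0 - delta * delta) * z1

rng = random.Random(0)
plain = [skew_normal_sample(0.0, rng) for _ in range(20000)]
skewed = [skew_normal_sample(4.0, rng) for _ in range(20000)]

print(statistics.mean(plain))    # near 0: the unmodified Gaussian
# With positive skew, the mean sits above the median - the curve
# is still a single bump, just no longer symmetric.
print(statistics.mean(skewed) > statistics.median(skewed))
```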
  • 255:  If building a mind were easy, we wouldn't be having this discussion.
  • 256:  Arguably, the whole logic of Singularity is no different.
  • 257:  More precisely, I have not assumed extraordinary expenditure of mental energy.  PtS may rely on extraordinary intelligence, but not 80 hours of extraordinary intelligence per week.
  • 258:  Don't get me wrong; I like the cowboy attitude.  I fully understand why sucking the fun out of projects is bad.  It's just that, while the association of adequate funding with lifelessness may be a statistical truth, it's not a cause-and-effect relationship.  You can have fully funded cowboys.
  • 259:  Admit it:  This sounds reasonable to you.  It does, doesn't it?  Shouldn't somebody who wants to save the world be willing to work 16-hour-days?

    Why?  Because having your own company, or trying to save the world, is a high-status position.  According to the human social algorithms, when you hear someone claiming a position which is high-status, ve needs to supply proof that ve is worthy to do so.  Being willing to sacrifice, to suffer, is one kind of proof.  To do otherwise is cheating; social cheating, for which we humans have all kinds of special-purpose detectors.

    But the Singularity isn't about the human social game.  I'm not saying that we can just ignore the human social algorithms, especially when it comes to general public relations.  But among Singularitarians, I think it's acceptable to plan for our being aware of our evolved psychology, and for our using that awareness to step outside the default algorithms.

  • 260:  The only time planning for an 80-hour workweek is acceptable is when no one else can do the work.  In our case, this is likely to mean Deep Research and a limited pool of geniuses.  Now, while I will not say that it is impossible, it is very rare for a mind to be capable of doing 80 hours a week of brilliant work.  (261).  Research demonstrates that even on ordinary programming projects, anything above 60 hours a week causes productivity to fall off so badly that adding on extra hours actually decreases output.  Sometimes, there's non-brilliant work that only a genius can do, like writing up a pre-existing brilliant idea.  In this case, extended hours might make sense, if the genius isn't doing anything else that requires brilliance at the time, and if there are no long-term consequences (i.e. burnout) from doing so.  (262).  Anyway, I certainly can't manage 80-hour weeks.
  • 261:  More precisely, it is rare to find a brilliant mind that can expend the amount of mental energy necessary to work an 80-hour week, and still be brilliant.  Some people are brilliant all the time, but this is because casual brilliance isn't "work"; it doesn't require mental energy.
  • 262:  And the long-term consequences aren't just calculated for the one episode; if you work extended hours once, you're likely to wind up doing so again.  It's like dieting.  It's not just the caloric consequences of eating a bag of potato-chips; it's the caloric consequences of having a decision-making mechanism that eats potato chips under a certain set of circumstances.
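
    The "extra hours actually decreases output" claim in (260) can be made concrete with a toy model.  The numbers below are invented purely for illustration (they are not the research (260) refers to): if each hour past a threshold is worth a bit less than the one before, the marginal hour eventually goes negative, and total weekly output peaks and then falls.

```python
def weekly_output(hours: int) -> float:
    """Toy productivity model with invented, illustrative numbers.

    Each hour up to 40 produces one unit of work; each hour past 40
    is worth 4% of a unit less than the previous one (fatigue,
    mistakes that cost rework), going negative after hour 65.
    """
    output = 0.0
    for h in range(hours):
        if h < 40:
            marginal = 1.0
        else:
            marginal = 1.0 - 0.04 * (h - 40)
        output += marginal
    return output

# Under this model, working 80 hours yields *less* than working 60:
# the last 15 hours of the 80-hour week are actively destructive.
print(weekly_output(60))
print(weekly_output(80))
```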
  • 263:  Shoestring operations rarely are.  Having a solid operation means being able to grow without crushing the one person handling everything on an ad-hoc basis.
  • 264:  Temporal, financial, and mental.
  • 265:  One of the Navigational Quadrivium.  The future is determined by your actions, by the actions of others, by the hidden variables, and by the random factors.  See A.1: Principles of navigation.
  • 266:  A part-time volunteer may get married, or get a demanding job, or vanish into the ether for some other reason.  A paid developer means less sensitivity to factors affecting vis personal life.  Likewise, leaving margin, both mental and temporal and financial, means that emergencies die out instead of building up.
  • 267:  Of course, "hard" is relative.  Compare losing one's job to the kind of personal catastrophes that were routine in the Middle Ages.
  • 268:  Nowadays, of course, cars are a major source of jobs, to be protected at all costs.

    Odd idea, protecting "jobs", trying to create more "jobs", as if the quality of our lives were determined by how much work we have to do, rather than how much stuff is produced and consumed.  This has struck me as being extremely silly since the age of around nine, although now, of course, I understand the several ways in which the phenomenon arises.

  • 269:  I don't even know of any historical case of anyone approaching this limit.  It's a completely hypothetical limit.
  • 270:  Collaborative filtering, open corporations, complex barter, insulated transaction cycles, and dynamic pricing futures.  I need to write a page on this at some point.
  • 271:  I even remember when my primary source on nanotech advances was "Breakthroughs" every six months instead of Gina Miller's "Nanogirl News" once a week.
  • 272:  Okay, nobody except a few human crazies would deliberately wipe out the human race.  Such crazies do exist, and will become a significant threat if nanowar becomes cheap enough.  But the primary threat is a MAD (273) scenario, or an "unstoppably activated weapon" à la Strangelove, or hidden-retaliation silos, or blackmail threats, or some other arrangement of weapons and computer-controlled counterattacks that, when set into action, wipes out the entire world.
  • 273:  MAD:  Mutual Assured Destruction.
  • 274:  Ideally the location should be far from Earth, somewhere in the asteroid belt, so that the survival station will be harder to target militarily.
  • 275:  Of course, if we're designing the survival stations, then Singularity Institute personnel are likely to be on board.  But not all of us, or even a significant fraction.  Unless the survival plan assumes further R&D work with a time limit that requires on-board geniuses, I would not expect any major Singularitarians to be on board (276).  I, for one, won't be using a limited seat (277) unless they specifically need someone for AI design.  (278).
  • 276:  Even for those of us who are Singularitarians for Objectivist-like reasons of personal survival, a survival station isn't necessarily the safest place to be.  (Unless nanowar is nearly certain, and by that time, we should have already launched the survival stations.)  The denizens of the survival stations might need to contend with malfunctioning software, malfunctioning matter-as-software, and for all I know, asteroid strikes and a lack of hot showers.  There's even a chance of missing out on an Earth-based Singularity, as in Vinge's book Marooned in Realtime, though that last one doesn't strike me as being too probable (Fermi Paradox and all that).

    Anyway, please don't think of survival-station duty as a plum assignment, a lifeboat seat, the last helicopter out of Saigon, rats deserting a sinking ship, or whatever.

  • 277:  If we have enough seats to evacuate everyone who wants to come on board, or at least everyone who wants to come on board and can get to the launching area, then I wouldn't have any particular objection to taking a seat.  Unless the physical presence of the major Singularitarians is needed to make the last desperate effort to stave off catastrophe, or something.
  • 278:  They don't even need my personal philosophy; evolutionary-psychology-trained self-awareness and the standard technocapitalist upbringing should be enough to ensure that the last remnants of humanity develop into a reasonably attractive culture.
  • 279:  Or, failing the chance to beat military nanotech into play, before the nanowar starts.  Also, before space launches are curtailed.
  • 280:  Which could imply distance, defenses, neutrality, or all three.
  • 281:  It's also fairly independent of me; I'm not a limiting factor for it.  Thus, this part of the plan survives even if I get hit by a truck tomorrow.
  • 282:  At least, it's not likely to work in adulthood.  Might be a nice thing to test out on chimpanzees, though.
  • 283:  The owners of nanotech might put more effort into nanomedicine than nanoweaponry, or at least delay the use of nanoweaponry as long as possible.
  • 284:  Remember, there's a cost-benefit tradeoff involved; trying to get by on the minimum everywhere may not be such a great idea.  (See 3.5.1: Building a solid operation.)  Likewise, you don't get the total X/Y/Z by adding up the components in each column.  Think of it as a probabilistic thing - 10% chance it can be done with $X, 50% $Y, 80% $Z.  If the component probabilities are independent, adding up the columns will give you the wrong answer.

    I'm doing all this by feel, but I think the intuitions work better if stated as a probabilistic spreadsheet.
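
    The point about columns not adding can be checked numerically.  The sketch below uses made-up lognormal cost distributions for three hypothetical budget components (the distributions and dollar figures are illustrative assumptions, not figures from this plan); it compares the naive sum of each component's 80th-percentile cost against the true 80th percentile of the total.

```python
import random

# Illustrative sketch: with independent uncertain costs, the sum of each
# component's 80th-percentile cost is NOT the 80th percentile of the total.
# All distributions and parameters here are made-up for demonstration.

random.seed(42)
TRIALS = 100_000

def percentile(samples, p):
    """Return the p-th quantile of a list of samples (0 <= p <= 1)."""
    s = sorted(samples)
    return s[int(p * (len(s) - 1))]

def draw_costs():
    # Three hypothetical, independent cost components, in $K.
    return [random.lognormvariate(4.0, 0.5),   # e.g. salaries
            random.lognormvariate(3.5, 0.5),   # e.g. equipment
            random.lognormvariate(3.0, 0.5)]   # e.g. overhead

samples = [draw_costs() for _ in range(TRIALS)]

# Naive "add up the columns": sum the 80th percentile of each component.
naive = sum(percentile([s[i] for s in samples], 0.8) for i in range(3))

# Correct: take the 80th percentile of the distribution of the total.
correct = percentile([sum(s) for s in samples], 0.8)

print(f"sum of per-component 80th percentiles: ${naive:,.0f}K")
print(f"80th percentile of the total:          ${correct:,.0f}K")
```

    Because independent overruns partially cancel, the true 80th-percentile total comes out lower than the column sum; budgeting by adding columns therefore overstates the funding needed at a given confidence level.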

  • 285:  That is, $300K is the amount our first funder needs to have securely in the bank (286).  For a developer to quit vis day job, the full amount has to be reliably available in the future.  If grants from other funders or from foundations materialize, the other $200K should not be necessary.
  • 286:  In a form untouchable by a stock-market crash; nor should a stock market crash affect willingness to expend this amount.
  • 287:  Someday, perhaps, "probabilistic spreadsheets" will be common in business, and projects and decisions will be a great deal more flexible.  But that's a story for another day.
  • 288:  After all, the funding "available" doesn't change the funding "required" in the least.  But the amount of funding available changes a lot of small cost-benefit decisions, which don't change the overall strategy.  Since "cost" is a fraction of available resources, increasing resources changes the cost-benefit curves, the decisions made, and thus the absolute costs to carry out a high-level strategy.  That's taken into account in the ballpark figures above.  Now we're discussing how funding affects the high-level strategy.
  • 289:  Or with a faster burn rate of available funding.  This is a navigational curve of risk tolerance.
  • 290:  Flare gets the priority, since the project is (a) more scalable than Aicore (especially before the Flare language is developed) and (b) the fastest path to increased credibility and thus increased resources.  See 3.1.2: Development timeline.
  • 291:  Operating on shoestring salaries creates increased risk of burnout, either motivational or cognitive.  See 3.5.1: Building a solid operation.
  • 292:  Since foundations exist for the purpose of funding charities and usually have clearly defined objectives and submission procedures, this possibility is less tenuous than funding from individuals, although the actual probability of obtaining funding from individuals might be greater.
  • 293:  As always, many foundations like to see experienced managers or a track record.  I don't get the impression this is an absolute constraint, however.
  • 294:  Presumably me, thus obviating the travel-fee problem.
  • 295:  Which may consist of one or two people holding down all the legally required positions (Treasurer, Secretary, etc.) until a real Board can be found and appointed.
  • 296:  Described under 3.2.4: Leadership strategy.
  • 297:  More often than not, Executive Directors don't serve on the Board as well.  But this is more true of mature organizations than startup nonprofits.
  • 298:  As for myself, I do see myself as playing a role in guiding the long-term strategy of the Singularity Institute, but it would be foolish to insist on immediate legal recognition of this role if that created problems for the Institute.
  • 299:  Which is not to say that we should write imploring articles.  Asking directly will turn people off and should be entirely unnecessary.  We have this massively important quest, we have a "Help Wanted" section, and anyone likely to help doesn't need to be hit with a bludgeon.  Common sense and subtlety should be given unrestricted rein.  In those people likely to help, getting interested in the Singularity, and the quest for Singularity, is likely to lead directly to the desire to help, without any nudges on our part.
  • 300:  At least, I hope they count as a service.
  • 301:  "Uploading":  Transferring a human mind to a computer and upgrading it to a superintelligence.  If you're reading this page, you should know what uploading is.
  • 302:  I actually heard this proposed on at least one occasion.
  • 303:  As long as intelligent life exists in some form, there's a chance for some better future to exist.  Even if intelligent life is wiped out, an Earth inhabited by bacteria is better than a completely sterile planet; the bacteria might evolve into intelligent life again one day.  A sterilized planet ends all possibilities.  To worry about minor variations in desirability is foolish when faced with the prospect of a complete loss, so "Don't toast the planet!" is the first rule of navigation.
  • 304:  This is the art of creating the potential for things, rather than attempting to do them directly, which is the same art used to create minds that are organic rather than crystalline.  A related heuristic is planning so that even if you fail in your direct goal, progress has still occurred.  Both are themes running all through PtS.  Flare creates the potential for Aicore.  Aicore creates an industry, so that even if our own Elisson project fails, there'll still be a river of effort flooding into AI.  Early subgoals of the PtS plan create the potential for later subgoals.  But subgoals, if successful, also count as absolute progress towards the Singularity, even if the rest of the plan fails.
  • 305:  One of the chief tricks is learning to tell the difference.  Hidden variables have already been decided, or are so strongly specified by the structure of reality that nothing you can do will affect them.  This doesn't necessarily mean that you know which way they've been decided; there can be multiple arguments in favor of different values for the hidden variables.  It's very easy to confuse delicately-balanced arguments with delicately-balanced systems, and thereby fall into the trap of assuming that intervention, or meddling, or pouring on more effort, can affect the outcome.  Sometimes, if a projection presents an unpleasant outcome, our instinct is to meddle; to do something, anything, for the sake of doing something.  But the emotional satisfaction of "doing something" often masks the side effects, and the damage, and the fact that the true value of the hidden variable is not altered in the least.
  • 306:  As Ralph Merkle observed:  "No doubt Wired swept under the rug the careful qualifications and uncertainties with which such dates are invariably shrouded, and the precise definitions of the events is likewise not entirely unambiguous."
  • 307:  The first formation of a chemical bond through mechanical manipulation, for example.
  • 308:  A national nanotechnology initiative running into hundreds of millions of dollars, IBM's Blue Gene project to solve the protein folding problem...
  • 309:  Be assured that this number does not take into account even the publication of Coding a Transhuman AI; most of that time is my estimate of how long it will take the field of AI to wake up and smell the coffee.
  • 310:  This is without Drexler-class nanotech.
  • 311:  Assuming a 15-year maturation time.
  • 312:  Both of these numbers assume that the Great Neurohacking Revolution starts in 2015.  This is a total guess strongly dependent on social and memetic factors.  The Neurohacking Revolution could start tomorrow, if not for the fact that government and society would be utterly opposed.
  • 313:  This is another number that might be moved up in response to recent events, e.g. the reading of recognizable visual images from a cat by intercepting 177 neurons in the thalamus, the replacement of one lobster neuron with $7.50 worth of circuits from Radio Shack (story), or the first treatment of depression by direct stimulation (story).
  • 314:  Unbalanced, like a triangle balanced on its tip.  Sometimes written "metastable", but this self-contradictory usage is confusing the first time you encounter it.
  • 315:  Atomic Force Microscope.
  • 316:  See 3.5.3: If nanotech comes first.