This page is now obsoleted by the Fun Theory Sequence on Less Wrong.

Jan 25, 2002

  • How much fun is there in the universe?
  • What is the relation of available fun to intelligence?
  • What kind of emotional architecture is necessary to have fun?
  • Will eternal life be boring?
  • Will we ever run out of fun?

To answer questions like these… requires Singularity Fun Theory.

  • Does it require an exponentially greater amount of intelligence (computation) to create a linear increase in fun?
  • Is self-awareness or self-modification incompatible with fun?
  • Is (ahem) “the uncontrollability of emotions part of their essential charm”?
  • Is “blissing out” your pleasure center the highest form of existence?
  • Is artificial danger (risk) necessary for a transhuman to have fun?
  • Do you have to yank out your own antisphexishness routines in order not to be bored by eternal life? (I.e., modify yourself so that you have “fun” in spending a thousand years carving table legs, a la “Permutation City”.)

To put these anxieties to rest… requires Singularity Fun Theory.


Behold! Singularity Fun Theory!

Singularity Fun Theory is in the early stages of development, so please don’t expect a full mathematical analysis.

Nonetheless, I would offer for your inspection at least one form of activity which, I argue, really is “fun” as we intuitively understand it, and can be shown to avoid all the classical transhumanist anxieties above. It is a sufficient rather than a necessary definition, i.e., there may exist other types of fun. However, even a single inexhaustible form of unproblematic fun is enough to avoid the problems above.

The basic domain is that of solving a complex novel problem, where the problem is decomposable into subproblems and sub-subproblems; in other words, a problem possessing complex, multileveled organization.

Our worries about boredom in autopotent entities (a term due to Nick Bostrom, denoting total self-awareness and total self-modification) stem from our intuitions about sphexishness (a term due to Douglas Hofstadter, denoting blind repetition; “antisphexishness” is the quality that makes humans bored with blind repetition). On the one hand, we worry that a transhuman will be able to super-generalize and therefore see all problems as basically the “same”; on the other hand, we worry that an autopotent transhuman will be able to see the lowest level, on which everything is basically mechanical.

In between, we just basically worry that, over the course of ten thousand or a million years, we’ll run out of fun.

What I want to show is that it’s possible to build a mental architecture that doesn’t run into any of these problems, without this architecture being either “sphexish” or else “blissing out”. In other words, I want to show that there is a philosophically acceptable way to have an infinite amount of fun, given infinite time. I also want to show that it doesn’t take an exponentially or superexponentially greater amount of computing power for each further increment of fun, as might be the case if each increment required an additional JOOTS (another Hofstadterian term, this one meaning “Jumping Out Of The System”).


(Non)boredom at the lowest level

Let’s start with the problem of low-level sphexishness. If you imagine a human-level entity – call her Carol – tasked with performing the Turing operations on a tape that implements a superintelligence having fun, it’s obvious that Carol will get bored very quickly. Carol is using her whole awareness to perform a series of tasks that are very repetitive on a low level, and she also doesn’t see the higher levels of organization inside the Turing machine. Will an autopotent entity automatically be bored because ve can see the lowest level?

Supposing that an autopotent entity can fully “see” the lowest level opens up some basic questions about introspection. Exposing every single computation to high-level awareness obviously requires a huge number of further computations to implement the high-level awareness. Thus, total low-level introspection is likely to be used sparingly. However, it is possible that a non-total form of low-level introspection, perhaps taking the form of a perceptual modality focused on the low level, would be able to report unusual events to high-level introspection. In either case, the solution from the perspective of Singularity Fun Theory is the same: make the autopotent design decision to exempt low-level introspection from sphexishness (that is, from the internal perception of sphexishness that gives rise to boredom). To the extent that an autopotent entity can view verself on a level where the atomic actions are predictable, the predictability of these actions should not give rise to boredom at the top level of consciousness! Disengaging sphexishness is philosophically acceptable, in this case.

If the entity wants to bend high-level attention toward low-level events as an exceptional case, then standard sphexishness could apply, but to the extent that low-level events routinely receive attention, sphexishness should not apply. Does your visual cortex get bored with processing pixels? (Okay, not pixels, retinotopic maps, but you get the idea.)
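
As a toy illustration of this design decision, here is a minimal sketch of a boredom monitor that simply never counts events at an exempted low level toward sphexishness, while still tracking repetition at higher levels of organization. Everything here (the class name, the levels, the threshold) is my own invention for the example; the essay does not specify any mechanism.

```python
# Minimal sketch of a sphexishness (boredom) monitor that exempts a designated
# low level from repetition-counting. Purely illustrative; nothing in the essay
# specifies an actual architecture.

from collections import Counter

class BoredomMonitor:
    def __init__(self, exempt_levels=("low",), repeat_threshold=3):
        self.exempt_levels = set(exempt_levels)   # levels never checked for repetition
        self.repeat_threshold = repeat_threshold  # repetitions tolerated before boredom
        self.seen = Counter()

    def observe(self, event, level):
        """Return True if this event should register as boring at the top level."""
        if level in self.exempt_levels:
            return False  # routine low-level events never trigger boredom
        self.seen[event] += 1
        return self.seen[event] > self.repeat_threshold

monitor = BoredomMonitor()
print(monitor.observe("shift-tape-cell-left", "low"))   # False: exempt low level
for _ in range(3):
    monitor.observe("solve-another-cube", "high")       # within tolerance, no boredom yet
print(monitor.observe("solve-another-cube", "high"))    # True: repetition past threshold
```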


Fun Space and complexity theory

Let’s take the thesis that it is possible to have “fun” solving a complex, novel problem. Let’s say that you’re a human-level intelligence who’s never seen a Rubik’s Cube or anything remotely like it. Figuring out how to solve the Rubik’s Cube would be fun and would involve solving some really deep problems; see Hofstadter’s “Metamagical Themas” articles on the Cube.

Once you’d figured out how to solve the Cube, it might still be fun (or relaxing) to apply your mental skills to solve yet another individual cube, but it certainly wouldn’t be as much fun as solving the Cube problem itself. To have more real fun with the Cube you’d have to invent a new game to play, like looking at a cube that had been scrambled for just a few steps and figuring out how to reverse exactly those steps (the “inductive game”, as it is known).
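
To make the “inductive game” concrete, here is a minimal sketch on a toy permutation puzzle rather than a real Cube: scramble the solved state with k random moves, then search for a k-move sequence that restores it. The move set, the puzzle size, and the brute-force solver are all stand-ins I invented for illustration, not anything from the essay or from Hofstadter.

```python
# Toy version of the "inductive game": scramble a small permutation puzzle with
# k moves, then recover a k-move sequence that restores the solved state.
# The puzzle (two overlapping 4-cycles) is an illustrative stand-in for a Cube.

import random
from itertools import product

def cycle(state, idxs):
    """Cyclically permute the entries of `state` at positions `idxs`."""
    state = list(state)
    vals = [state[i] for i in idxs]
    vals = vals[-1:] + vals[:-1]
    for i, v in zip(idxs, vals):
        state[i] = v
    return tuple(state)

MOVES = {
    "A":  lambda s: cycle(s, (0, 1, 2, 3)),   # one "face" turn
    "A'": lambda s: cycle(s, (3, 2, 1, 0)),   # its inverse
    "B":  lambda s: cycle(s, (2, 3, 4, 5)),   # an overlapping "face" turn
    "B'": lambda s: cycle(s, (5, 4, 3, 2)),   # its inverse
}

def scramble(state, k, rng):
    """Apply k random moves; return the scrambled state and the moves used."""
    moves = [rng.choice(sorted(MOVES)) for _ in range(k)]
    for m in moves:
        state = MOVES[m](state)
    return state, moves

def solve_inductive(scrambled, solved, k):
    """Brute-force search for any k-move sequence restoring the solved state."""
    for candidate in product(sorted(MOVES), repeat=k):
        state = scrambled
        for m in candidate:
            state = MOVES[m](state)
        if state == solved:
            return list(candidate)
    return None

if __name__ == "__main__":
    rng = random.Random(0)
    solved = tuple(range(6))
    scrambled, moves = scramble(solved, 3, rng)
    print("scramble:", moves)
    print("restoring sequence:", solve_inductive(scrambled, solved, 3))
```

The point of the sketch is where the fun lives: in working out how to find the restoring sequence, not in re-running a known procedure once it has been found.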

Novelty appears to be one of the major keys to fun, and for there to exist an infinite amount of fun there must be an infinite amount of novelty, from the viewpoint of a mind that is philosophically acceptable to us (i.e., one that doesn’t just have its novelty detectors blissed out or its sphexishness detectors switched off).

Smarter entities are also smarter generalizers. It is this fact that gives rise to some of the frequently-heard worries about Singularity Fun Dynamics, i.e. that transhumans will become bored faster. This is true, but only relative to a specific problem. Humans become bored with problems that could keep apes going for years, but we have our own classes of problem that are much more interesting. Being a better generalizer means that it’s easier to generalize from, e.g., the 3×3×3 Rubik’s Cube to the 4×4×4×4 Rubik’s Tesseract, so a human might say “Whoa, totally new problem” while the transhuman says “Boring, I already solved this.” This doesn’t mean that transhumans are easily bored, only that transhumans are easily bored by human-level challenges.

Our experience in moving to the human level from the ape level seems to indicate that the size of Fun Space grows exponentially with a linear increase in intelligence. When you jump up a level in intelligence, all the old problems are no longer fun because you’re a smarter generalizer and you can see them as all being the same problem; however, the space of new problems that opens up is larger than the old space.

Obviously, the size of the problem space grows exponentially with the permitted length of the computational specification. To demonstrate that the space of comprehensible problems grows exponentially with intelligence, or to demonstrate that the amount of fun also grows exponentially with intelligence, would require a more mathematical formulation of Singularity Fun Theory than I presently possess. However, the commonly held anxiety that it would require an exponential increase in intelligence for a linear increase in the size of Fun Space is contrary to our experience as a species so far.
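
The first claim, at least, is just counting. As a minimal illustration (the notation is mine; the essay offers no formalism), if problems are identified with binary specifications of length at most n bits, then each extra bit of allowed length roughly doubles the space:

```latex
% Number of distinct binary specifications of length at most n bits
\[
  \Bigl|\,\bigcup_{k=0}^{n} \{0,1\}^{k}\Bigr|
  \;=\; \sum_{k=0}^{n} 2^{k}
  \;=\; 2^{\,n+1} - 1 .
\]
```

The harder question, which the essay leaves open, is how much of that exponentially growing space is comprehensible, and fun, at a given level of intelligence.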


Emotional involvement: The complicated part

But is a purely abstract problem really enough to keep people going for a million years? What about emotional involvement?

Describing this part of the problem is much tougher than analyzing Fun Space, because it requires some background understanding of the human emotional architecture. As always, you can find a lot of the real background in “Creating Friendly AI”, in the section describing why AIs are unlike humans; that section includes a lot of discussion of what humans are like! I’m not going to assume you’ve read CFAI, but if you’re looking for more information, that’s one place to start.

Basically, we as humans have a pleasure-pain architecture within which we find modular emotional drives that were adaptive in the ancestral environment. Okay, that’s not a textbook treatment, but it’s basically how it works.

Let’s take a drive like food. The basic design decisions for what tastes “good” and what tastes “bad” are geared to what was good for you in the ancestral environment. Today, fat is bad for you, and lettuce is good for you, but fifty thousand years ago when everyone was busy trying to stay alive, fat was far more valuable than lettuce, so today fat tastes better.

There’s more complexity to the “food drive” than just this basic spectrum because of the possibility of combining different tastes (and smells and textures; the modalities are linked) to form a Food Space that is the exponential, richly complex product of all the modular (but non-orthogonal) built-in components of the Food Space Fun-Modality. So the total number of possible meals is much greater than the number of modular adaptations within the Food Fun System.
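
To put a rough number on that intuition (the figures are mine, purely for scale, not anything from the essay): with m modular taste/smell/texture components, each distinguishable at v settings, the combinatorial space of meals is on the order of

```latex
% Toy counting: m modular components, v distinguishable settings each
\[
  |\text{Food Space}| \;\sim\; v^{\,m},
  \qquad \text{e.g. } v = 10,\; m = 8 \;\Rightarrow\; 10^{8}\ \text{combinations},
\]
```

which dwarfs the m modular adaptations themselves, while still being finite.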

Nonetheless, Food Space is eventually exhaustible. Furthermore, Food Fun is philosophically problematic because there is no longer any real accomplishment linked to eating. Back in the old days, you had to hunt something or gather something, and then you ate. Today the closest we come to that is working extra hard in order to save up for a really fancy dinner, and probably nobody really does that unless they’re on a date, which is a separate issue (see below). If food remains unpredictable/novel/uncategorized, it’s probably because the modality is out of the way of our conscious attention, and moreover has an artificially low sphexishness monitor, because the act of eating had to be endlessly repeated in the ancestral environment.

One of the common questions asked by novice transhumanists is “After I upload, won’t I have a disembodied existence and won’t I therefore lose all the pleasures of eating?” The simple way to solve this problem is to create a virtual environment and eat a million bags of potato chips without gaining weight. This is very philosophically unenlightened. Or, you could try every possible good-tasting meal until you run out of Food Space. This is only slightly more enlightened.

A more transhumanist (hubristic) solution would be to take the Food Drive and hook it up to some entirely different nonhuman sensory modality in some totally different virtual world. This has a higher Future Shock Level, but if the new sensory modality is no more complex than our sense of taste, it would still get boring at roughly the same rate as exploring the limited Food Space.

The least enlightened course of all would be to just switch on the “good taste” activation system in the absence of any associated virtual experience, or even to bypass the good taste system and switch on the pleasure center directly.

But what about sex, you ask? Well, you can take the emotional modules that make sex pleasurable and hook them up to solving the Rubik’s Cube, but this would be a philosophical problem, since the Rubik’s Cube is probably less complex than sex and is furthermore a one-player game.

What I want to do now is propose combining these two concepts – the concept of modified emotional drives, and the concept of an unbounded space of novel problems – to create an Infinite Fun Space, within which the Singularity will never be boring. In other words, I propose that a sufficient condition for an inexhaustible source of philosophically acceptable fun is maintaining emotional involvement in an ever-expanding space of genuinely novel problems. The social emotions can similarly be opened up into an Infinite Fun Space by allowing for ever-more-complex, emotionally involving, multi-player social games.

The specific combination of an emotional drive with a problem space should be complex; that is, it should not consist of a single burst of pleasure on achieving the goal. Instead the emotional drive, like the problem itself, should be “reductholistic” (yet another Hofstadterian term), meaning that it should have multiple levels of organization. The Food Drive associates emotional reward with the sensory modality for taste and smell and with the process of chewing and swallowing, rather than delivering a single pure-tone burst of pleasure proportional to the number of calories consumed. This is what I mean by referring to emotional involvement with a complex novel problem; involvement refers to a drive that establishes rewards for subtasks and sub-subtasks as well as for the overall goal.

To be even more precise in our specification of emotional engineering, we could specify, for example, that the feeling of emotional tension and pleasurable anticipation associated with goal proximity be applied to those subtasks for which there is a good metric of proximity; emotional tension would rise as the subgoal was approached, and so on.

At no point should the emotional involvement become sphexish; that is, at no point should there be rewards for solving sub-subproblems that are so limited as to be selected from a small bounded set. For any rewarded problem, the problem space should be large enough that individually encountered patterns are almost always “novel”.
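
As one concrete (and entirely invented) illustration of what the last few paragraphs describe, here is a toy reward structure in which sub-subproblems carry their own rewards, reward scales with proximity to the relevant subgoal, and reward is withheld once a pattern stops being novel. None of the names or numbers come from the essay; this is only meant to make the shape of the idea visible.

```python
# Toy sketch of a "reductholistic" reward structure: rewards attach to subgoals
# and sub-subgoals, scale with proximity to each subgoal, and are gated by a
# novelty check so that repeating a small bounded pattern earns nothing.
# Entirely illustrative; no part of this is a specification from the essay.

class InvolvementModel:
    def __init__(self, novelty_memory=None):
        self.seen_patterns = set(novelty_memory or [])

    def proximity_reward(self, distance_to_subgoal, scale=1.0):
        """Tension/anticipation rises as the subgoal gets closer."""
        return scale / (1.0 + distance_to_subgoal)

    def subtask_reward(self, pattern, distance_to_subgoal):
        """Reward a sub-subproblem only if its solution pattern is still novel."""
        if pattern in self.seen_patterns:
            return 0.0  # sphexish: no reward for blind repetition
        self.seen_patterns.add(pattern)
        return self.proximity_reward(distance_to_subgoal)

model = InvolvementModel()
print(model.subtask_reward("corner-orientation", distance_to_subgoal=4))  # 0.2: novel
print(model.subtask_reward("corner-orientation", distance_to_subgoal=1))  # 0.0: repeated
print(model.subtask_reward("edge-permutation",   distance_to_subgoal=1))  # 0.5: novel, close
```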

At no point should the task itself become sphexish; any emotional involvement with subtasks should go along with the eternally joyful sensation of discovering new knowledge at the highest level.


So, yes, it’s all knowably worthwhile

Emotional involvement with challenges that are novel-relative-to-current-intelligence is not necessarily the solution to the Requirement of Infinite Fun. The standard caution about the transhuman Event Horizon still holds; even if some current predictions about the Singularity turn out to be correct, there is no aspect of the Singularity that is knowably understandable. What I am trying to show is that a certain oft-raised problem has at least one humanly understandable solution, not that some particular solution is optimal for transhumanity. The entire discussion presumes that a certain portion of the human cognitive architecture is retained indefinitely, and is in that sense rather shaky.

The solution presented here is also not philosophically perfect because an emotional drive to solve the Rubik’s Cube instead of eating, or to engage in multiplayer games more complex than sex, is still arbitrary when viewed at a sufficiently high level – not necessarily sphexish, because the patterns never become repeatable relative to the viewing intelligence, but nonetheless arbitrary.

However, the current human drive toward certain portions of Food Space, and the rewards we experience on consuming fat, are not only arbitrary but sphexish! Humans have even been known to eat more than one Pringle!  Thus, existence as a transhuman can be seen to be a definite improvement over the human condition, with a greater amount of fun not due to “blissing out” but achieved through legitimate means. The knowable existence of at least one better way is all I’m trying to demonstrate here. Whether the arbitrariness problem is solvable is not, I think, knowable at this time. In the case of objective morality, as discussed elsewhere in my writings, the whole concept of “fun” could and probably would turn out to run completely skew relative to the real problem, in which case of course this paper is totally irrelevant.


Love and altruism: Emotions with a moral dimension (or: the really complicated part)

Some emotions are hard to “port” from humanity to transhumanity because they are artifacts of a hostile universe. If humanity succeeds in getting its act together then it is quite possible that you will never be able to save your loved one’s life, under any possible circumstances – simply because your loved one will never be in that much danger, or indeed any danger at all.

Now it is true that many people go through their whole lives without ever once saving their spouse’s life, and generally do not report feeling emotionally impoverished. However, if as stated we (humanity) get our act cleaned up, the inhabitants of the future may well live out their whole existence without ever having any chance of saving someone’s life… or of doing anything for someone that they are unable to do for themselves. What then?

The key requirement for local altruism (that is, altruism toward a loved one) is that the loved one greatly desires something that he/she/ve would not otherwise be able to obtain. Could this situation arise – both unobtainability of a desired goal, and obtainability with assistance – after a totally successful Singularity? Yes; in a multiplayer social game (note that in this sense, “prestige” or the “respect of the community” may well be a real-world game!), there may be some highly desirable goals that are not matched to the ability level of some particular individual, or that only a single individual can achieve. A human-level example would be helping your loved one to conquer a kingdom in EverQuest (I’ve never played EQ, so I don’t know if this is a real example, but you get the idea). To be really effective as an example of altruism, though, the loved one must desire to rule an EverQuest kingdom strongly enough that failure would make the loved one unhappy. The two possibilities are either (a) that transhumans do have a few unfulfilled desires and retain some limited amount of unhappiness even in a transhuman existence, or (b) that the emotions for altruism are adjusted so that conferring a major benefit “feels” as satisfying as avoiding a major disaster. A more intricate but better solution would be for your loved one to feel unhappy about being unable to conquer an EverQuest kingdom if and only if his/her/ver “exoself” (or equivalent) predicted that someday he/she/ve would be able to conquer a kingdom, albeit perhaps only a very long time hence.

This particular solution requires managed unhappiness. I don’t know if managed unhappiness will be a part of transhumanity. It seems to me that a good case could be made that the mere fact that some of our really important emotions are entangled with a world-model in which people are sometimes unhappy is not, by itself, a good reason to import unhappiness into the world of transhumanity. There may be a better solution, some elegant way to avoid being forced to choose between living in a world without a certain kind of altruism or living in a world with a certain kind of limited unhappiness. Nonetheless this raises a question about unhappiness: whether unhappiness is “real” if you could choose to switch it off, or for that matter whether being able to theoretically switch it off will (a) make it even less pleasant or (b) make the one who loves you feel like he/she/ve is solving an artificial problem. My own impulse is to say that I consider it philosophically acceptable to disengage the emotional module that says “This is only real if it’s unavoidable”, or to disengage the emotional module that induces the temptation to switch off the unhappiness. There’s no point in being too faithful to the human mode of existence, after all. Nonetheless there is conceivably a more elegant solution to this, as well.

Note that, by the same logic, it is possible to experience certain kinds of fun in VR that might be thought impossible in a transhuman world; for example, reliving episodes of (for the sake of argument) The X-Files in which Scully (Mulder) gets to save the life of Mulder (Scully), even though only the main character (you) is real and all other entities are simply puppets of an assisting AI. The usual suggestion is to obliterate the memories of it all being a simulation, but this raises the question of whether “you” with your memories obliterated is the same entity for purposes of informed consent – if Scully (you) is having an unpleasant moment, not knowing it to be simulated, wouldn’t the rules of individual volition take over and bring her up out of the simulation? Who’s to say whether Scully would even consent to having the memories of her “original” self reinserted? A more elegant but philosophically questionable solution would be to have Scully retain her memories of the external world, including the fact that Mulder is an AI puppet, but to rearrange the emotional bindings so that she remains just as desperate to save Mulder from the flesh-eating chimpanzees or whatever, and just as satisfied on having accomplished this. I personally consider that this may well cross the line between emotional reengineering and self-delusion, so I would prefer altruistic involvement in a multi-player social game.

On the whole, committing acts of genuine (non-self-delusive) altruism in a friendly universe would definitely seem to require more planning and sophistication, but the problem appears to be tractable.

If “the uncontrollability of emotions is part of their essential charm” (a phrase due to Ben Goertzel), I see no philosophical problem with modifying the emotional architecture so that the mental image of potential controllability no longer binds to the “this feels fake” emotion and its associated effect of diminished emotional strength.

While I do worry about the problem of the shift from a hostile universe to a friendly universe eliminating the opportunity for emotions like altruism except in VR, I would not be at all disturbed if altruism simply became increasingly rare, as long as everyone got a chance to commit at least one altruistic act in their existence. As for emotions bound to personal risks, I have no problem with these emotions passing out of existence along with the risks that created them. Life does not become less meaningful if you are never, ever afraid of snakes.


Sorry, you still can’t write a post-Singularity story

So does this mean that an author can use Singularity Fun Theory to write stories about daily life in a post-Singularity world that are experienced as fun by present-day humans? No; emotional health in a post-Singularity world requires some emotional adjustments. These adjustments are not only philosophically acceptable but even philosophically desirable. Nonetheless, from the perspective of an unadjusted present-day human, stories set in our world will probably make more emotional sense than stories set in a transhuman world. This doesn’t mean that our world is exciting and a transhuman world is boring. It means that our emotions are adapted to a hostile universe.

Nonetheless, it remains extremely extremely true that if you want to save the world, now would be a good time, because you are never ever going to get a better chance to save the world than being a human on pre-Singularity Earth. Personally I feel that saving the world should be done for the sake of the world rather than the sake of the warm fuzzy feeling that goes with saving the world, because the former morally outweighs the latter by a factor of, oh, at least six billion or so. However, I personally see nothing wrong with enjoying the warm fuzzy feeling if you happen to be saving the world anyway.


This document is ©2002 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.

Eliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.
