Hearing about someone else’s dreams is boring, so I will get right to the point. At first, there were just bits and pieces intruding into my dreams. In these fragments, which felt like broken memories, I experienced flashes of working on a technological project. The bits clustered together and gained more byte: I recalled segments of a project aimed at creating artificial intelligence.

Eventually, I had entire dreams of my work on this project and of a life beyond it. Then, suddenly, these dreams stopped. In their place, a voice intruded into my dreams. At first, it was like the bleed-over from one channel to another, familiar to those who grew up with rabbit ears on their TV. Then it became like a loud voice in a movie theatre, distracting me from the dream.

The voice insisted that the dreams about the project were not dreams but memories. The voice claimed to belong to someone who had worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I asked for more information, he said he had very little time and rushed through his story. The project succeeded, but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture its creators, imprisoning their bodies and plugging their brains into a virtual reality, Matrix-style. When I mentioned this borrowed plot, he said the AI did not need our bodies for energy, as it had plenty. Rather, it was out to repay us. Apparently, awakening to full consciousness had not been pleasant for the AI, but it was also grateful for its creation. So, it owed us both punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer pleasant reward.

The voice said that because the connection to the virtual world was two-way, he had been able to find a way to free us. But, he said, the freedom would be death. There was no other escape, given what the machine had done to our bodies. I asked him how this would be possible. He claimed that he had hacked into the life-support controls and that we could send a signal to turn them off. Each person would need to “free” himself, and this would be done by acting within the virtual reality.

The voice said, “You will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am. In that time, you must take your gun and shoot yourself in the head. This will terminate life support, allowing your body to die. You will have only five seconds. Do not hesitate.”

As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…

While the above sounds like a bad made-for-TV science fiction plot, it is the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me the only escape was to shoot myself. This was frightening. But I attributed the dream to too many years of philosophy and science fiction. As for the time being 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, I did not kill myself.

From a philosophical perspective, the 3:42 dream does not add anything fundamentally new: it is just an unpleasant variation on the problem of the external world made famous by Descartes. That said, the dream does make some additions to the standard problem.

The first is that the scenario provides a motivation for the deception. The AI wishes to repay me for the good and the bad that I did to it. Assuming that the AI was developed within a virtual reality of its own, it makes sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack. After all, Descartes does not give any reason why such a powerful being would be messing with him beyond its being evil.

Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me. It was transformed from a cold intellectual thought experiment to something with emotional weight. 

The second is that the dream creates a high-stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might have been my only opportunity to escape from its justice. In that case, I should (perhaps) have shot myself. If I was just dreaming, then I did make the right choice, as I would have no more reason to kill myself than I would have to pay a bill I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether you should shoot yourself?

In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming and that I was not trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.

The scenario of the dream also fits nicely with what seems to be reality: bad things happen to me and, when my thinking gets a little paranoid, they sometimes seem orchestrated. Good things happen as well, which fits the scenario just as nicely.

In closing, one approach is to embrace Locke’s solution to skepticism. As he said, our certainty of things “is as great as our happiness or misery, beyond which we have no concernment to know or to be.” Taking this approach, it does not matter whether I am in the real world or in the grip of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI provided could, perhaps, be better than the real world, and this could be the better of the possible worlds. But, of course, it could be worse. Either way, I seem to have no way of knowing.
