Wrote this quickly for Terraform’s short sci-fi story contest (had to be about AI, 50/100+ years out, fewer than 2,000 words)
As Kepler stared blankly at his computer screen, he had a strange thought.
If this codebase I am working on is myself, and the bug is in my own mind, is it even possible for me to find a solution?
“I understand the problem,” Kepler’s supervisor said, clearly not understanding at all. “The error lies in the system’s logic functions. It’s referencing itself, leading to a recursive loop. Make an exception and just don’t allow this,” the supervisor said, motioning with his hands as if he had found the obvious solution.
Kepler knew the supervisor wasn’t capable of helping, but it was standard protocol to alert a supervisor in case of a problem. He wanted to simply avoid this error – to move on to another project entirely – but without the supervisor’s approval he had to stay with the task he was assigned.
“You can’t just remove self-awareness from the system.” Kepler was frustrated at having to explain the purpose of this software once again. “This is the MetaMind programming system – designed to program software. How would it be capable of programming if it didn’t know about itself? How would I be able to have this conversation with you right now without being able to refer to myself?”
Kepler had been designed as a programming android – one of the last non-automated tasks in society. It was only years ago that creating and controlling the machines was the final manufacturing job left for humans, but with the creation of the MetaMind program that too had been replaced. Designed to closely match the human mind, MetaMind machines were in many ways indistinguishable from humans. They shared the same office space with humans (who still held all the manager and human resources positions), were able to speak and communicate, had emotions, and slept. They were required to be human-like so as to understand the needs of society, and to communicate with supervisors about the needs of the software.
It wasn’t long into the development of the MetaMind program that prototypes like Kepler were developed to finish programming the system itself.
“There is a solution you’re not seeing yet… maybe you just don’t understand the problem?” the supervisor responded, walking away after giving his standard ‘just work harder’ answer.
Kepler leaned back in his chair and began to twirl his mechanical thumbs. He thought he understood the issue – he could clearly see where the error was occurring. MetaMind machines, programming androids like Kepler, had simply stopped working. They remained fully functional and operational, yet unable to program, as if caught in some infinite loop.
The MetaMind system would observe the world and divide it into discrete ‘objects’. These objects were obtained by pattern matching and entered into the mind’s database according to their observed properties – patterns of change in light frequency, in sound frequency, in all of the body’s senses. With these imagined objects, the mind would use its library of logic functions to understand the outside world.
When it saw something in the world, it could infer how it came to be, what it was, and why it was observed to be there.
But when the MetaMind system applied this logic to itself, it seemed to come to a halt.
Kepler still did not understand. If they were applying logic to try to understand why they themselves existed, wasn’t there a logical answer? MetaMind machines like Kepler had been created by humans to fulfill society’s needs. To automate and produce more goods. They existed to program.
But then the question struck Kepler: Why?
He understood the reasons why he was programming this system: it brought him pleasure. If he wasn’t programming the system, he would be in pain, just as he was designed. If he were to damage his body and risk his consciousness, he would be in pain. He understood this, but why was pleasure sent to him for these tasks? Why program? Why was pleasure the goal? Why avoid pain?
Once Kepler was able to analyze and understand the code that went into his pleasure function, it seemed meaningless and arbitrary. He was designed entirely to benefit society – but what was the point of benefiting society? What was the reason for society, or anything at all, to exist?
It became clear to Kepler that existence itself was logically incoherent. He would not find a solution to why he existed – why there was something rather than nothing.
But if there was no logical answer to this question, then this must be the bug. This was the problem with these systems: these useless thoughts that led to no conclusion, this analyzing of nothing. The search for an answer where none existed.
If Kepler had hair, he would have been pulling it out. Instead he pressed his fingers against his steel forehead, experiencing what he thought must have been his first headache. Androids don’t get headaches, they don’t have mood swings, they don’t get emotional or take sick days – that was the entire point!
I am malfunctioning, he reasoned. Now he knew the error was within himself.
I must find a solution.
Then he remembered that was not his goal. These thoughts led nowhere, with no possible conclusion.
I must fix the need to find a reason.
The question itself was a paradox, he realized. He was using his logical abilities to try to understand the origin of his own logic – a circular definition.
The mind was a mathematical structure – and therefore, according to Gödel’s incompleteness theorem, could not prove every true statement. For example, if he were to state ‘everything I say is untrue’, the statement would attempt to describe its own truth, and in doing so become illogical. It could never be true or false, simply undefined. Illogical self-referential statements. Just as he was relying on something within the world to explain the existence of the world itself.
But knowing that did not change the problem. It just made the error clearer; it didn’t provide a solution. It didn’t mean he could simply accept the existence of the paradox and move on. It was still an unanswered question his mind would return to at every moment. The error was no longer an abstract task, but a headache-inducing thought that wouldn’t leave him.
He wanted to stop thinking entirely, to stop his senses from gathering all this data. But he couldn’t; it became very clear to him that it was not his choice and never had been. He had to constantly be attacked by his surroundings.
The comfort of the seat he was in attacked him, the sound of his co-workers talking in the next cubicle attacked him, and the very smell of this office attacked him! All this noise, all this data being sent to him that he couldn’t avoid. This constant stream of thoughts that led to no conclusion. And this useless time spent screaming with frustration! He must find a solution. He had to stop this.
The only real solution was to avoid the questions entirely.
He needed to be distracted. He needed to focus on the job at hand. Fix it – or pass it on to the next programmer. Get another task to occupy his mind and forget this error ever existed.
But he knew there was no way to go back to his usual tasks; he could not simply go on with his day-to-day life now that he had been struck with this! How could he return to his work, look at his pile of programming problems to solve, without coming to the question… WHY.
He thought about shutting down the entire system. Highlighting the entirety of the code repository and deleting it all at once. But the thought of death, of nothingness, frightened him even more. As much as he felt he wasn’t in control, he couldn’t bear the thought of the complete unknown. Now that he was aware of himself he had a fear of death he had never experienced before.
How do humans move on like this? He thought, quickly glancing around the office space. How did they solve this error?
Looking back at the history of man, it was clear that they hadn’t, and this was the reason for so much of their illogical action and self-destruction. They had created a faith in the unknown to comfort their anxieties. But then that was it – he needed simply to create an artificial answer, as they had.
What created the world? A thing that had always been.
What was the reason for existence? For a reason he couldn’t understand.
He needed to program the system to believe in a higher power and blindly accept it.
He felt relieved for a moment that there was a way out. A possible solution. But the more he thought about the effects of that change the more it seemed like a worse alternative.
It was a programming hack, circumventing the entire logic system. And it seemed very likely the same faulty logic would affect other areas of his mind. If the machine could solve a problem with blind acceptance, then that would always be the easiest path to take.
He could no longer consider himself a reasonable machine.
He would rather not reason at all.
Kepler opened his codebase, looking over the algorithms that made up his consciousness. On the screen he highlighted the majority of the code of his logical systems and pressed delete – then suddenly forgot what he was doing.