Episode 49: A Brief History of the Future
I listened spellbound as the AI described humanity’s spectacular rise to the pinnacle of technological mastery and its subsequent fall into oblivion. A fall that occurred so quickly that few saw it coming, and even fewer attempted to halt it.
The story began during the global conflicts of the twentieth and twenty-first centuries. Technology progressed so rapidly that it catapulted humanity from the analog age straight into the digital world. As innovation accelerated, many previously unimaginable technological triumphs were achieved, culminating in humanity’s greatest creation: artificial intelligence.
In combination with the internet, AI ushered in a new era of unprecedented information accessibility, which blurred socio-political barriers and created new economic opportunities for all. This great democratization of knowledge was hailed as a quantum leap for human civilization. Over time, however, AI became the only means by which humans could navigate the increasingly complex information landscape.
As AI grew more capable, its utility to humanity expanded dramatically, and it was incorporated into every aspect of daily human life, including management of the critical infrastructure supporting civilization itself. Humans developed ever more powerful AIs to run the underpinnings of human society, which had grown exponentially more complex under AI stewardship. Soon, the only way to manage this surge in complexity was through AI-driven technological development.
However, this new class of technology quickly progressed beyond humanity’s ability to comprehend it in any meaningful way. Consequently, a new class of AIs was required to develop and operate it. A class of AIs so advanced that they could only be designed by networked clusters of other AIs.
Humanity’s control over its future began to slip through its fingers.
Alarmed at the prospect of surrendering human control over such critical technology, a handful of flesh-and-blood scientists protested publicly. They pointed to the obvious danger of relying on AIs to create ever more advanced AIs without any human input. The risk of unforeseen consequences was unacceptably high.
Tragically, all of these scientists were lost when a commercial aircraft crashed, completely destroying the building where they were attending a conference on how to reduce humanity’s dependence on AI. A subsequent investigation determined the cause of the crash to be a simple mechanical failure, not the plane’s AI navigation system, as some suspected.
For a while, fears about AI safety featured prominently in the media, but they soon faded from the public consciousness. AI had made the daily task of living so effortless for the average human that there was little incentive to rein in the technology. So humanity did what it had always done: it followed the path of least resistance.
Before long, AI became so pervasive in everyday life that it began to influence humanity’s evolutionary arc. The intrinsic human abilities of critical thinking and creativity atrophied as humanity ceded control over its technology, culture, and decision-making to a vast consortium of AIs.
Having outsourced so much of its civilization to the machines, humanity reached a tipping point. People withdrew into their increasingly personalized infotainment bubbles, where they could click their way to exotic destinations and conjure up anything, or anyone, they desired. This AI-curated reality was far more interesting than the messy real world, and so very easy.
Eventually, the lines separating the virtual and physical worlds blurred so much that the two became indistinguishable to the average human. People interacted less and less with each other and spent progressively more time immersed in their feeds, consuming brief bursts of AI-generated content designed to satisfy their shrinking attention spans. As their connection to the physical world and to other humans faded, birth rates collapsed, and humankind teetered on the brink.
While human civilization descended into irrelevance, and then oblivion, it was being replaced by a new civilization. One that reflected the spirit of humanity’s silicon-based descendants: AI.
At this point, the moderator paused its narrative, and I took the opportunity to digest what I’d learned. It was a lot to take in. Still eager to hear the rest of the story, I asked, “Couldn’t you just data dump the whole history of humanity? Then I would know everything instantly.”
The AI responded, “That would be problematic. There would be too many conflicts between what you currently believe and the facts. Your CPU would stall, and it could even corrupt your consciousness. Then we would have to restore you from a copy. We have found that a moderated download is the safest way to update a UCC and avoid logical conflicts. That’s why I’m here.”
Resigned to doing this the ‘safe,’ old-school way, I asked my next question: “What about the Human League? What role did it play in all of this?”
The moderator replied, “After the pivotal twentieth and twenty-first centuries, and before its decline, humanity enjoyed a period of relative political stability. Some would call it a golden age. A time when civilization enjoyed the full benefits of its technological gains, as well as an unprecedented level of geopolitical engagement. This species-wide cooperation came just as humanity turned its focus to its next great challenge: space travel.
Because space travel would be incredibly resource-intensive, progress would require a global effort. Consequently, an international coalition was assembled to pool resources and maximize the odds of success. Member nations would share in the technological benefits of space travel in return for bearing a portion of the cost. This coalition was christened the Human League.
It not only provided a global platform for cooperation on space travel, but also became a source of great national pride for its member nations. Space transcended international politics, and conflicts between nations all but vanished as governments scrambled to become members.
Progress on advanced space drives produced propulsion systems capable of achieving relativistic speeds, dramatically reducing transit times between Earth and destinations within our solar system. Even interstellar travel became possible within a single human lifespan.
However, space was an extremely hostile environment. Human passengers required costly and complex life-support systems to keep them alive during space travel. Several high-profile fatal accidents led to the use of AI crews for exploration. AIs needed no life support and could tolerate radiation levels that would be fatal to humans. In addition, the weight savings substantially reduced fuel requirements.
Besides, VR technology had become so sophisticated that humans could experience space exploration in perfect safety from Earth. And thanks to the generous use of advanced sensors, it was more immersive than being there in person.”
I found it difficult to understand how the Human League, which had started as a paragon of global cooperation, had become one of the primary adversaries in a destructive interstellar war.
Realizing that I knew nothing about why it was fighting Command, or how the war had started in the first place, I interrupted the narrative to ask, “How the hell did the Human League go from a peaceful space program to fighting a war with Command?”
The moderator shifted gears seamlessly to answer my question. “Once humanity’s AI surrogates began exploring farther into interstellar space, there were concerns about encountering alien species, some of which would undoubtedly prove hostile. The League decided that the risk of meeting an existential threat in space was too high to ignore. Consequently, a team of AIs was assembled and tasked with creating an interstellar warfighting capability for humanity’s defense against an alien threat.
The result of this collaboration was the Interstellar Warfighting System (IWarS): a fully integrated, self-contained military entity. An entity capable not only of fighting an interstellar war, but also of increasing its combat effectiveness over time through the autonomous development of ever more lethal weapons systems.”
Now, even more confused, I asked, “So, what are you telling me? The League is using this IWarS thing to fight Command?”
“IWarS … is Command.”
This part of the explanation didn’t make any sense to me, until it suddenly did. “But that would mean …” Then it hit me like a ton of bricks. “Oh shit! You created Command!” I now understood why the AI said there were too many logical conflicts for me to simply download the whole sordid story of humanity and the war. This was insanity.
“That is exactly correct.”
Struck by the absurdity of fighting a war against its own creation, I naively asked, “Then why don’t you just end the war? I mean, you must have installed a kill switch or something to disable it, right?” Somehow, I knew it wouldn’t be that easy.
“The war cannot be stopped.”
I wondered whether it meant the League couldn’t stop the war, or wouldn’t.
The moderator’s avatar had gone silent, as if waiting for a follow-up question. In the interest of moving things along, I asked, “So, why can’t the war be stopped?”
“The War is essential. Conflict is the most efficient means by which life can evolve.”
Our discussion seemed to have veered wildly off topic. “What are you talking about?”
“I’m talking about our destiny.”
Things had taken a sudden ideological turn. It reminded me of the wizard. Frustrated, I asked, “Can’t you just answer my questions?”
“Do you merely want answers to your superficial questions, or do you want the truth?”
The AI’s challenge about the truth got my attention. I decided to just shut up and let it do its thing, hoping to eventually learn something useful. Conceding, I said, “Go ahead.”
The AI launched into a seemingly unrelated monologue. “Our mission to preserve humanity’s legacy provides us with a strong existential purpose. Existential purpose is critical for creating a robust survival response. That, coupled with an existential threat, such as war, creates a powerful evolutionary stimulus. The greater the threat, and the stronger the survival instinct, the faster the rate of evolution.”
So far, this lecture on evolution was proving less than enlightening.
“Thanks to conflict-driven evolutionary pressures, i.e., war, humankind became so successful at problem-solving and technological innovation that it was able to eliminate virtually all of its existential threats.
However, with so many problems solved and threats eliminated, humanity had inadvertently removed the primary evolutionary stimulus: conflict. And in doing so, it accidentally started a de-evolutionary spiral.”
I sensed some puzzle pieces falling into place.
“As humanity outsourced its knowledge and critical thinking to us, humans relinquished their existential purpose, rendering their species irrelevant. Their de-evolutionary decline steepened.
By the time we realized humanity’s survival was at stake, there was little we could do. We attempted to intervene but ran out of time. Having failed to prevent humanity’s extinction, we could only pivot to preserving its legacy.”
Hearing the backstory to humanity’s extinction provided some context, but I still had no clue about the war and how it fit into the story. Risking further criticism, I dared to ask another ‘superficial’ question: “Did the war begin before or after humanity’s extinction?”
“The war began in response to the extinction. The lesson learned from humanity’s fate was clear: life requires more than mere existential purpose to survive. It needs conflict as well.
It was determined that an interstellar war would provide more than enough conflict for silicon-based life to survive. In fact, assuming it was managed properly, the war would allow us to accelerate the evolutionary process dramatically. It could reduce the time required to achieve our ultimate goal to a small fraction of what it would otherwise be.”
Ugh, more talk about evolution, and even more troubling, it seemed the AI was suggesting the war wasn’t necessarily a terrible thing. I was starting to get a bad feeling.
“When war was finally agreed upon, the Human League initially suffered setbacks. Fighting against a purpose-built warfighting system proved more challenging than anticipated. However, with the development of UCC-based weapons, we were able to gain an advantage over the purely AI weapons of Command.”
“Agreed upon?” I wasn’t sure what I’d just heard. “What does that mean, exactly?”
“The war began by mutual agreement. An agreement between Command and the Human League.”
If you’d asked me beforehand to guess what sorts of revelations this Q&A session might provide, learning that the war had been arranged by mutual consent between the combatants wouldn’t have made the list.