Jack Arcalon

Planning Stage



  
Sliding high above the clouds, the heavy shuttle Dragonfly was a flying wing two hundred meters wide.
At full power, it rode a ribbon of white-hot gas to orbit. The variable air intake was a glowing strip on the bottom. Two triangular boosters fell away and began their long return glides.
Soon, the thunder was replaced by a greater silence.

In this version of the year 2050, more than ten thousand people lived in low Earth orbit.
Dozens of small stations carried out commercial research and manufacturing. The Antares Complex was a 3D grid of modules the size of a metropolis, with the mass of a skyscraper. The SynThex Complex was a cylinder rolling around its long axis, four kilometers long but only thirty meters wide, an endless tube from inside.
The heavy shuttle approached and docked with the Omega Research Complex.
It had come to deliver the largest single integrated circuit ever manufactured, a trillion-dollar carbon crystal more dangerous than any bomb.

For the past twenty years, any technology that might threaten human survival had been regulated by the World Government.
In that time, many brilliant philosophers had pondered the infinite future. The logic drove most beyond the brink of sanity, but it appeared consistent.
They concluded that human-level minds would always be around. They would always do most of the hard work and the hard thinking, no matter how far society might evolve.
Humanity could survive forever. From now on, society would focus on the long term.
Soon, humans would be replaced by improved cyborgs, but the Basic Law remained: these new beings could not become too smart.
Intelligence and awareness would be distributed among many small minds, instead of a few large ones.
By majority consensus, humans were already too "evolved". They needed more empathy, and less lust for intrigue and control.
Humanity's descendants would inhabit many strange environments, but few entities would be more than twice as smart as their peers.
All would be firmly anchored in reality.
Entrepreneurs were already designing the first beings to inhabit the atmosphere of Jupiter, asteroid interiors, iceworlds and comets, and open space; to be followed by immense, integrated biospheres.

Still, there were certain mysteries that only very large minds could solve, requiring flashes of ineffable insight. Such minds would have to be carefully controlled. They could not be permitted full freedom.
Controlling them was the purpose of the integrated circuit crystal.
After completing its duty, the Omega AI would immediately shut down (if not, it would be shut down by a redundant safety switch: a small atomic bomb).
It would then be repurposed for a more conventional task: to integrate downloaded human memories. Any awareness the giant CPU then evolved would be an extension of many human minds.

Before then, it would spend one hundred hours outside human control, performing a most dangerous but essential investigation. There was almost no chance it would evolve metahuman goals, but if it did, the operators would destroy it.
Assuming they could respond fast enough.
Most of the number crunching would actually occur online, borrowing spare computations from the Net.
Separated by half a million kilometers, the components of the first hyper-AI were linked by dozens of datalasers. The data in each beam would pass through the giant CPU core, where the most complex and dangerous computations would occur.
If anything went wrong, it could be shut down in 0.826 milliseconds.

After the test, a few enigmatic patterns would doubtlessly remain in the CPU's crystal matrix. These would be quarantined, and only recreated by the next transhuman AI, which might not happen for decades.
Even if no human understood the experiment, society would be smart enough to benefit. Its survival depended on it, like a school of fish solving an equation through its swarming pattern.
An advanced AI could only evolve to its full potential as an extension of the larger human-level civilization that controlled it.

The first hyper-AI experiment would give mankind unlimited energy, thereby allowing it to expand sufficiently to control future hyper-AIs.
Only a theory, the Quantum Splitter promised to generate unlimited power if all its paradoxes could be solved.
It wasn't quite perpetual motion: the process absorbed energy from the universe, slightly delaying its expansion.

The great simulation began at midnight (UT), Friday June 3, 2050.
The goal was to find every possible way to affect the smallest region of space, a standard Planck Bubble. The problem was that the control field interfered with itself. Empty space contained immense energy, half of it negative energy that was hard to separate.
This energy could not be extracted, but it could perform temporary work.
A ship might accelerate to near-lightspeed and come to a stop without gaining momentum.

It would take centuries to fully solve the puzzle.
The operators saw only a fraction of the simulation, and understood even less. Strange mathscapes evolved through imaginary space, more chaotic than psychedelic.
The simulation lasted 338,576.46 seconds. The hyper-AI printed a list of one thousand possible physics experiments, and shut down as promised.
There had been no trace of awareness at any point.

Ten of the suggested experiments could be carried out at a reasonable cost in a reasonable timeframe.
The most promising was a curved-track particle accelerator five billion kilometers long. It would be built across the solar system, using a chain of ten thousand satellites orbiting the sun with nanometer precision.
Its construction would trigger the wave of human expansion that would settle the solar system. Careful analysis of the simulation suggested that might have been the hyper-AI's real goal.
Reverse psychology came into play: if a transhuman AI wished to encourage human expansion, humans should beware.
First, the process would have to be simulated in great detail to find any hidden traps. That meant the WG Science Organization would have to run another hyper-simulation.
Perhaps that had been the hyper-AI's secret motive.

It would take hundreds of cognitively enhanced humans to manage the next simulation. Medical trials of mind enhancement drugs and RNA implants received top priority.
Genetically engineered humans would inhabit controlled environments to prevent them from becoming too powerful.

Future civilizations would contain many layers, but they would all be based upon future humans, who would live the most meaningful lives, embodying the hidden complexity.
For that to happen, countless less perfect lives would have to be simulated first.
The simulation devices, whether conventional CPUs, subatomic processors, or quantum plasma computers, would be much smaller than posthuman brains. Therefore they would generate much less awareness, though they would be far more powerful.
Surprisingly, processor size (or the number of particles involved) mattered when it came to consciousness.
Only the most meaningful human-level minds would be fully realized in the flesh, representing worlds of accumulated experience.

Inevitably, the relation between posthumanity and the organized cosmos would come to resemble the beliefs of traditional religions.

Finishing his presentation, the brilliant crank, a self-taught futurist from the science city of Ambion, seemed to savor the silence.
His final slide was a hologram of the universe. All matter had been rearranged into a cylinder spinning at almost the speed of light. Someone coughed.

"You have quite an imagination," the planning director said. Outside, the vast Organization Headquarters Complex gleamed white even under the overcast sky.
They still had 63 case studies to go, each more extreme than the last.
After hearing so many experts, he was beginning to see the outline of the posthuman future, a dim shape looming through the fog.
He checked his notes. The stages had a strange inevitability. The next trillion years would be interesting:

-Earth Empire
-Solar Empire
-Star Empire
-Galactic Empire
-Universal Empire

They didn't have to figure it all out today.
Still, this proposal seemed different. The best of all worlds. It might actually happen.
What was the hidden danger?
The future shimmered ahead, an endless corridor forever widening, no vanishing point in sight.

"Next!" he said.




Infinite Thunder by Jack Arcalon.