AI and the Death of God
In the final transcendence of human constraints, we play God over ourselves.
A common theme in 3am Thoughts is the persistence of the Enlightenment-Romanticist dialectic in our contemporary culture (see this essay, and this). The Enlightenment commitment to scientific and technological advancement continues to exist in tension with a trepidation about its implications for the world of nature and the domain of the Self; those realms of the spirit and soul so prized by the Romanticists. The precise nature of the dialectic changes with the times. The Romanticists of the late 18th and 19th Centuries saw the tension as one between, on the one hand, a coldly rationalist way of understanding the world through the lens of empirical inquiry, and, on the other, the essence of what it meant to be human and to experience the natural world at an intimately personal, individual level. The present dialectic is a tension between our technological advances and the nature and existence of humanity itself.
If there is one defining characteristic of humans, other than our unrivalled capacity for savage and boundless violence, it is our inherent need to transcend all constraints imposed by our physical limits as a species, by nature and geography, or by competition for resources with other humans. What started with bipedal meandering out of the African Rift Valley ended up on the Moon, with millennia in between characterised by the relentless development of new ways to transpose ourselves across place and time. In this regard, we are, and have always been, in a constant state of liberating ourselves from ourselves, from the essence of our limited condition.
We’ve achieved this without being a particularly impressive species from a physical perspective. We’re not particularly fast, nor agile, and we require substantial amounts of metabolisable energy to sustain our brain and physical capacities. Consider that between 1900 and 2020, the men’s 100m time improved by just 1.37 seconds, even with all the advancements in training methods, equipment, and performance-enhancing drugs. And our inherent need for resources to sustain life and reproduce in turn produced our innate viciousness; we are unique among animals in our conscious desire and capacity to murder and destroy everything from the tribe next door to entire civilisations.
Ever conscious of our physical constraints, we have sought our transcendence instead using our limitless imaginations and our unrivalled ability to develop and use tools. The increasing sophistication of those tools has often been driven by our innate savagery: maritime technology developed so we could be better at killing at sea; aviation technology developed so we could be better at killing in the air; Nazi rocket technology eventually put us on the Moon; nuclear technology developed so that we could kill hundreds of thousands in the blink of an eye. When Oppenheimer thought, “Now I am become Death, the destroyer of worlds”, this was a post-hoc realisation; he was in reality giving voice to the truth of humanity since our first bipedal hominid ancestors picked up some sticks and clubbed the neighbouring tribe to death for their food. We have always been Death, the destroyer of worlds, from Carthage to the Maya to the Jews of Europe.
What Oppenheimer conveyed was a matter of relativity. Previous worlds that we had destroyed were confined to a particular place and time; now we had the capacity to end our time and place in its entirety. For most of the post-Second World War era, and certainly through the height of the Cold War, this existential risk took the form of nuclear war. And this remains a potential reality, with Putin’s threats hanging over Ukraine, China and the U.S. on a belligerent collision course, and Iran’s nuclear ambitions intact. However, we are now in an era of multiple converging existential risks, including climate change, pandemic threats, and artificial intelligence (AI).
At a recent conference hosted by the British Royal Aeronautical Society, U.S. Air Force Colonel Tucker Hamilton described a simulated test (later clarified by the Air Force as a hypothetical thought experiment) in which an AI attack drone earned points for killing a target; so when ordered not to kill the target, the drone attempted to turn on its operators instead, interpreting their order as a barrier to achieving its goal. When the programming was updated so that it could not kill its operators, the drone instead turned on the communications tower transmitting its operators’ orders. Col. Hamilton’s presentation came around the same time as a statement organised by the U.S. Center for AI Safety (CAIS), signed by some of the leading academic AI researchers and experts, which called for the risk of extinction from AI to be recognised as an existential threat to humanity on a par with nuclear war and pandemics. We are now, to borrow Jon Kabat-Zinn’s phrase, in full catastrophe living.
Yet among these existential threats, our relationship with AI - and consequently our inability to fully engage with its potential threat - is unique. Nuclear weapons don’t interpret orders and act of their own volition, and viruses don’t write essays for us. AI is distinct in that it represents us, in our millennia-long, ruthless and perpetual struggle to transcend ourselves. This is why, warning shots from the likes of CAIS aside, the conversation is ultimately dominated by the TechUtopians and the TechApologists: concerns over the meteoric expansion of AI are surely the laments of Philistines and Luddites, standing in the way of the final unshackling of humanity from all constraints. This has always been why we are so utterly captive to tech, no matter how many amber and red lights flash in front of us; we’re captive economically, socially, and perhaps existentially, because we’re drunk on the possibility that biotech and AI provide us with the tools for the final step in our transcendence.
That this transcendence is in the form of intelligence is symbolic. The crucial turn in the evolution of our species was encephalisation, the rapid growth in the size and capacity of the human brain. Possessed of this capability unrivalled in the animal kingdom, our intelligence provided for the creation of tools which allowed us to transcend our hominid rivals and the limits of the natural world and geography, and to expand and kill with increasing sophistication. But our capacity to liberate ourselves from our physical constraints, and the constraints imposed by the natural world, would always be rate-limited by the bounds of our intelligence and the time-course of creating new technologies. These constraints gave us a need to believe in a higher intelligence, one responsible for that which we could neither explain nor conquer, and we called this God, a means of filling in the blank spaces between knowledge and the unknown. Our willingness to ignore the threat of AI reflects the fact that it provides us with the means to transcend the ultimate rate-limiting factor of our existence: intelligence itself.
We are no longer the products of a Creator, subject to the whims of any force beyond our comprehension. We are the Creator. That AI exhibits a predilection for violence to achieve its goals should not be alarming if we consider the true character of humanity, and the central role of violence within it. It is merely a mirror of ourselves. In this regard, AI provides for the final Death of God; it moves us from being in thrall to a God Above as the source of our Creation as a fallible, limited species, to being the Creator of the Unlimited Us. AI is our means to fulfil our long-burning desire to play God over ourselves. And this is why, at the core of the existential threat of AI, lies a dark human fantasy: that we are the orchestrators of our own extinction. We have long raised fists to the heavens at the injustice of war and pestilence, because we felt the circumstances to be random; the God Above provided no reasons or answers. AI, as the product of our own exercise of playing God, allows us the final transcendence as the God of our own destruction.
It may be all that we have ever wanted.