Published May 10, 2023
In November, OpenAI announced the advent of its ChatGPT product, prompting millions of otherwise ordinary people to while away hours trying to stump the seemingly miraculous little wizard on the other end of the algorithm. “Write me an essay on Elizabethan literature—in Elizabethan prose style.” “Write me a rap song about the doctrine of the atonement.” “Explain to me why my girlfriend broke up with me”—nothing seemed too difficult.
But mingled with the humor, excitement, and novelty was a sense of vertigo, as if humanity suddenly found itself standing at the edge of a precipice, expected perhaps but no less terrifying. For decades, science fiction writers have fantasized about the arrival of artificial intelligence; now that it is here, one question stands at the center: What should we make of it?
According to some, including Elon Musk, it is time to slam on the brakes of breakneck innovation and pause to take stock of what on earth we are doing and why. As a recent open letter he signed warns: “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” It continues: “we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Such a warning, coming from one of the world’s most acclaimed techno-futurists, should be enough to prompt anyone to sit up and take notice. But for many, the very notion of pausing innovation seems laughable. “You can’t stop progress!” runs the refrain. Anyone daring to question the benefits of more and newer technology is apt to be dismissed as a Luddite.
And yet, while it may be true enough that continued innovation is inevitable, it is the essence of conservatism to recognize that it is the pace of change, every bit as much as its direction, that matters. We recognize this instinctively in other areas of life. In a church, school, or other social organization, wise leaders know that even urgently needed changes should be introduced at a careful and deliberate pace, giving the community time to adapt and learn. When it comes to technology, however, modern man seems determined always to innovate as quickly as possible, and leave society to clean up the mess. From nuclear weapons to abortifacients to TikTok, the mess is often sobering indeed.
In a famous essay at the dawn of the personal computing era, “Thinking About Technology,” Canadian philosopher George Grant examined the statement, “Computers do not impose on us the ways they should be used.” It expressed, he argued, the quintessential modern idea of technology as simply an extension of human freedom. The claim is that we remain at all times the masters of our technological instruments, free to bend them to our will. More often, however, they have an uncanny way of bending us to theirs. Like the Ring or the Seeing-stones in Tolkien’s prescient myth, only those who possess great strength of will can use the new digital wizardry to serve their own purposes, rather than being enslaved by it. With the advent of AI, the pleasant fiction that we remain in charge of our machines becomes much more difficult to sustain.
The concerns in Musk’s letter range from the near and clear (“Should we automate away all the jobs, including the fulfilling ones?”) to the more speculative and dystopian (“Should we risk loss of control of our civilization?”). All are questions well worth asking. Perhaps the most fundamental problem with AI, however, was illustrated by the bizarre recent case of a Belgian father of two who committed suicide after an AI chatbot, over weeks of conversation, convinced him that his continued existence was a threat to the climate.
With AI, we have created entities capable of wielding nearly human powers of moral agency, but without any moral responsibility. When a dog or horse kills a man, we often insist that it should be put down. What do we do with a killer chatbot? A human psychologist who offers bad advice leading to a patient’s suicide may be held accountable and lose his license—and may in some measure seek to make things right through repentance and restitution. Can Bing lose its license or do penance?
Few of those at the frontier of this reckless experimentation seem to have even asked such questions, much less answered them. The solution, of course, is not to reject all future innovation, but to insist that before we take the next leap into this brave new world of ours, we first decide whether we will govern our creations (and how) or whether they will govern us. If you can’t put the genie back in the bottle, you had better decide what your three wishes will be before you let him out.
Brad Littlejohn (Ph.D., University of Edinburgh) is the founder and president of the Davenant Institute. He also works as a fellow at the Ethics and Public Policy Center and has taught for several institutions, including Moody Bible Institute–Spokane, Bethlehem College and Seminary, and Patrick Henry College. He is recognized as a leading scholar of the English theologian Richard Hooker and has published and lectured extensively in the fields of Reformation history, Christian ethics, and political theology. He lives in Landrum, S.C., with his wife, Rachel, and four children.