Published September 12, 2024
The internet was originally billed as “the information superhighway.” It’s turned out to be more like the LA freeway featured at the beginning of the movie La La Land: gridlock as far as the eye can see—though at least with plenty of good music videos to pass the time. As more and more information has piled up, we’ve increasingly despaired of ever sorting through it or separating fact from fiction. As so often, we’ve hoped to solve our technological woes with more technology—in this case, artificial intelligence bots that are so smart that they can answer any question, solve any problem, research any paper for you. But what if they aren’t so smart after all?
Thus far, while AI has proven disconcertingly good at creative tasks—including some we had thought were most uniquely human—it's tended to disappoint at what we might have thought was the more basic task of telling the truth. Researchers and casual users alike have found top AI platforms like SearchGPT or Perplexity regularly spitting out completely manufactured factoids with serene confidence. Attorneys rashly relying on such models for research have been caught citing cases that never existed. It seems as if AI may be even more human than we wanted it to be—just as prone as we are to embellish stories, imagine memories, and make up evidence to corroborate its claims.
Brad Littlejohn, Ph.D., is a Fellow in EPPC’s Evangelicals in Civic Life Program, where his work focuses on helping public leaders understand the intellectual and historical foundations of our current breakdown of public trust, social cohesion, and sound governance. His research investigates shifting understandings of the nature of freedom and authority, and how a more full-orbed conception of freedom, rooted in the Christian tradition, can inform policy that respects both the dignity of the individual and the urgency of the common good. He also serves as President of the Davenant Institute.