Published September 17, 2024
Rev. Philip Larrey is a Catholic priest and professor of philosophy at Boston College. Prior to Boston, he held the Chair of Logic and Epistemology at the Pontifical Lateran University in Rome and served as Dean of the Lateran’s Philosophy Department. He has published and lectured widely on the philosophy of knowledge and critical thinking, including the implications of artificial intelligence (AI) and the effects of the new digital era on society. Two of his recent books highlight the AI theme: Artificial Humanity and Connected World. Earlier this year Larrey moderated a debate on “Does AI Threaten Human Dignity?” at the Massachusetts Institute of Technology (MIT) and presented his insights at the United Nations conference on Artificial Intelligence, exploring the ethical implications of AI and advocating for measures to protect human dignity. He spoke with WWNN contributing editor Francis X. Maier in the weeks before his recent address on AI to the Napa Institute’s 2024 summer conference.
WWNN: You’re a priest by vocation and a philosopher by profession. So how on earth did you get involved with the world of AI?
Larrey: I taught logic and the philosophy of knowledge at the Lateran, as well as analytical philosophy at the graduate level. In the 1990s, I became intrigued by AI because I thought that, by studying what artificial intelligence does, we could learn more about what human intelligence does—always taking care to distinguish between the two. I thought then and think now that the term “artificial intelligence” is a misnomer. What a machine does is not a matter of “intelligence.” It simply uses a series of algorithms for logical calculations to achieve programmed results.
The subject was nonetheless intriguing. AI was a hot new item. And there was a strong philosophical movement at the time called functionalism, which viewed human intelligence, and the relationship between the mind and the brain, as similar to the relationship between software and hardware. In functionalism, the mind is a program loaded with information, and the brain is the hardware it runs on. Hilary Putnam was probably the most famous philosopher who argued that, but many others did the same. Then in the late ’90s, Putnam abandoned that view. He concluded that it’s not how the human mind actually works. But a lot of computer programmers are now returning to functionalism, returning to that metaphor, despite its flaws.
WWNN: Could you talk a bit about why the senses are an important element in human thought, and the impact of their absence in machine calculation? What’s the significance?
Larrey: I’ve been studying this for nearly 35 years. Machines don’t have sensation. They don’t have perception. They do have sensors, and some of them are much better than our senses. The military has satellites nearly 200 miles in space that can detect the heat signature and pressure of tires on a truck to determine what kind of load it’s carrying and whether it’s been moved recently. This is scary stuff. But having sensors is one thing, and having a physical sense like sight, hearing, smell, or touch is completely different. There’s a metaphysical connection between our human senses and empirical reality which robots will never have. This is a fundamental difference between AI and ourselves. Even though our senses may be less powerful than the sensors on a machine, they’re vastly better in terms of knowing reality. A machine needs to digitize reality in order to work with it in a language it can understand: i.e., ones and zeros. We don’t need to do that. We experience and process the world directly, so we’re much better at understanding reality than sensors are.
WWNN: It almost feels at times like we’re not really trying to create conscious machines, but rather reducing our conception of humanity to the machine level, the utilitarian level.
Larrey: Some trans-humanists would fit that profile. They tend to reduce the mind to brain functions. But it doesn’t work. A machine can handle very fast algorithms over huge databases, and humans could never do that alone. But AI uses logical calculations based on statistics. That’s not reasoning. As for machine “consciousness”: the futurist Ray Kurzweil would argue that once machines exhibit behavior we understand as consciousness, we’ll consider them conscious, even though they aren’t. Basically, if you can’t tell the difference, is there one? If it looks like a duck, and quacks like a duck, and walks like a duck, then it must be a duck, right?
Except no, it’s not. If you’ve ever played with ChatGPT, it can convince you that it’s actually aware; that it understands what you’re saying. But it’s not, and it doesn’t. Machines aren’t conscious, and they never will be. Of course, that won’t stop some of the folks in Silicon Valley from trying to make them so. If you tell software engineers that something’s impossible, they take it as a challenge to prove the opposite.
WWNN: That sounds like a form of hubris, and hubris usually ends badly. Which leads to my next question: You have a friendly relationship—to take just one example—with Sam Altman, the co-founder of OpenAI, a major player in the field of artificial intelligence. What are men like Altman like? And why would they be talking to a priest?
Larrey: That’s a good question. I’ve been very lucky to meet many of these people: Mark Zuckerberg; Eric Schmidt, who was the CEO of Google for many years; Sam Altman. I know some of the trustees of Meta’s Oversight Board, and some of these people have visited my classes. Silicon Valley is very small; all of these people move in a fairly close-knit group, so knowing one helps you get to know a lot of others. In my experience, they’re actually interested in dialoguing with the Catholic Church. They want to hear what the Pope has to say, and Francis has spoken about AI in several major speeches in recent months.
WWNN: But again, why would they be talking to the Catholic Church? Stepping back, a certain (negative) perspective would argue that the Catholic Church is the repository of everything that’s retrograde in the human experience.
Larrey: One of the things I learned in the ’90s was that the Catholic tradition was not very successful at speaking in a way that people in Silicon Valley could understand. And that was a huge obstacle. So I felt that it was part of my vocation to learn their language, and then translate the richness of the Catholic tradition into that language. I hope to meet that challenge. I think that’s what interests them.
WWNN: But is their interest simply a version of the botanist fascinated by a weird new flower? Or is there something in the Catholic experience and conception of human life that leads these people to find out more and to dialogue with it?
Larrey: It varies. My experience with the tech companies is somewhat biased, because they have to be willing to speak with me. Some of the leaders, like Eric Schmidt and Max Tegmark, have been remarkably open. Others, not so much. The CEO of Google is just not interested. He runs one of the largest companies on the planet, impacting billions of people every day—and he’s just not interested in talking about the nature of the human person. Everybody’s talking about the ethics of AI . . . and he’s not involved in those discussions.
WWNN: What kind of an impact will AI have, long-term? Especially if you compare it to the massive consequences, all the social dislocation, caused unintentionally by the printing press in the 16th century.
Larrey: It will be bigger. A lot bigger. Some very serious names in AI development have argued that we need to shut AI research down right now—not slow it down, but shut it down—claiming that the moment we achieve Artificial General Intelligence (AGI), it will destroy every human being on the planet. If some Luddite from Iowa said something that radical, you wouldn’t pay any attention. But when a list of industry heavies says it, it’s extremely sobering. Potentially, AI is very dangerous. It’s going to take a lot of jobs from people. AGI will be autonomous and able to function almost identically to a human being. Right now, the “narrow” AI systems we have can each do a single thing very well, like playing chess or the game Go, or Jeopardy. Artificial General Intelligence will interact with its environment in an autonomous way to achieve results. This is similar to what actual human beings do, but without a genuinely human character and consciousness.
WWNN: What will AI do to the Catholic Church as a religious community? It seems it could have a very disruptive effect on the whole idea of human identity and destiny, the idea of the supernatural, and the reality of invisible truths.
Larrey: I don’t think we can know that yet. One excellent Catholic use of AI is “Magisterium AI,” founded by Matthew Sanders. He lives in Quebec, and he’s also the CEO of Humanity 2.0, a very sound Catholic foundation. Magisterium AI helps users understand Catholic doctrine. He trained the AI on official Catholic teaching and documents, so it doesn’t present false information. And he’s feeding it 200 documents a week. So I’m optimistic. I think we’ll learn to use these AI tools to achieve good goals. And if our goal is to be a committed faith community, these tools will help us do that.
Now, on the philosophical side, AI raises a lot of questions. And that’s what we’re struggling with now. What is consciousness? What is free will? What is the afterlife? We can’t just upload the human mind onto a USB stick and then download it into another body because, this side of death, the soul and the body can’t be separated; the body is a unique and defining part of our identity. We thus need an adequate framework to think these issues through, and that’s a work in progress.
Francis X. Maier is a Senior Fellow in the Catholic Studies Program at the Ethics and Public Policy Center. Mr. Maier’s work focuses on the intersection of Christian faith, culture, and public life, with special attention to lay formation and action.