Made in whose image? Brief reflections on artificial intelligence and the future of ministry


[Editor’s note: This essay originally appeared in the May 2023 issue of SciTech, the newsletter of The Presbyterian Association on Science, Technology, and the Christian Faith. This article is published with the author’s permission.]

In Douglas Adams’ novel Dirk Gently’s Holistic Detective Agency, one of the pivotal characters is a malfunctioning robot known by its brand name: the Electric Monk. “The Electric Monk,” Adams writes, “was a labor-saving device, like a dishwasher or a video recorder . . . Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe” (Douglas Adams, Dirk Gently’s Holistic Detective Agency [New York: Pocket Books, 1988], 4).

The problem with this particular Electric Monk was that it had started to believe too many things, and generally at random. As it made its way through the universe, the Electric Monk made people’s lives better simply by listening to them and then offering three words of profound assurance: “I believe you.” Corrupted though its believing software was, its pastoral skills were unmatched: people want to be heard and to have their perspectives validated.

I’ve been thinking about that Electric Monk since ChatGPT and other AI systems started making the news — and especially since I read one article suggesting that AI may be a boon to the pastoral care industry and another reporting that AI had a better bedside manner than human medical doctors.

When I talk with students about pastoral care (which is rare; it’s not my field and those who know me tend to snicker when they hear about me being asked to provide pastoral care), I remind them of the kindergarten teachers’ adage for dealing with children in meltdown: ask them whether they want to be hugged, helped, or heard. In a world drowning in noise and desperate for signal, lots of people want to be heard. And to the degree that AI passes the Turing Test and that people are willing to get their pastoral care online, it may be a pretty useful listener. Hugs are, of course, another story. 

Recently, I was in a conversation with several ministers who were making the argument that even if you gave ChatGPT instructions like, “Write a homily for a theologically progressive congregation on the Beatitudes in Matthew 5 using the format described in Tom Long’s The Witness of Preaching,” what it produced still would not and could not be a sermon. Their argument, which I find meaningful, was that a sermon is what happens when text meets context within the actual event of preaching — and that good preachers are so attentive to this meeting that no set of instructional inputs could ever replace their processes of textual and contextual exegesis and the event of the preaching moment. That said, I’ve heard some pretty horrible (and sometimes positively abusive) sermons delivered from various pulpits over the years and, quite frankly, I’d prefer the bot to the blasphemy. After all, if God can speak through Balaam’s ass in Numbers 22, perhaps God can speak through Google AI.

So I am not nearly so averse to the advent of AI as some of my colleagues. To the degree that it can help people in their struggles, whether with their psychological concerns or their mundane tasks, the advent of AI may be a good thing. Of course, it may be a good thing in the way all potent tools are good: capable of making our lives more meaningful (or at least giving us more time to seek meaning) and also capable of being weaponized to produce great harm. In my own work as a professor of ethics who regularly assigns papers, maybe it’s my job to shape assignments that would be difficult for ChatGPT to complete so that my students learn to recognize that mundane labor is usually a precondition to magical epiphany. Or maybe it is even my job to help students think about how to use ChatGPT in ways that promote flourishing rather than offering a facsimile of it.

Nor, I would add, am I especially worried about contemporary AI leading, inexorably, to Skynet, the misanthropic robotic consciousness of the Terminator movies.  

For one thing, intelligence — at least as we are using it when we talk about “artificial intelligence” — is a long way from human consciousness. As study after study shows, our consciousness isn’t housed wholly within our brains, firing neurons shaping experience, emotion, memory, and hope. Our consciousness is also a bodily thing; it is evidence of the way those experiences, emotions, memories, hopes, etc. are always profoundly somatic. Bodies perceive, react, filter, miss, mistake, react, and adjust to stimuli in ways so profoundly interactive with brains that any suggestion of the legitimacy of mind/body dualism should be discarded into the trash bin labeled “bad ideas from history.” Indeed, it’s possible that your own brain and body (in this case, your eyes) went right through that last sentence without noticing that I’d listed “react” twice among the things that bodies do because, collectively, they knew you didn’t need the word both times.

And for another, machine consciousness — should machines ever attain it — may be so different from human consciousness that we might not even be able to recognize it. Other great apes demonstrate consciousness (and even self-consciousness), including tool-making and language-use, but it is clear our nearest cousins do not “think” like we do. And an entire phylum away from us, octopuses demonstrate remarkable levels of consciousness (problem solving, moodiness, using tools, distinguishing between persons, behaving sacrificially) while responding to stimuli through a distributed neural system that lets their arms do their own thinking — an adaptation that helps them be among the world’s greatest camouflage experts. In a world where many consciousnesses already exist, what makes us think that machine consciousness — should such a thing ever come into existence — will be recognizable as such?

I suspect that part of the blurring of “intelligence” with “consciousness” may have something to do with our tests for machine intelligence. The gold standard of such tests, the Turing Test, imagines a person conversing, through text alone, with an unseen interlocutor that may be either another person or a machine. If the person cannot tell whether they are talking to a person or a machine, the machine passes the test.

Note, though, that the Turing Test is fundamentally a test about language use. “Intelligence,” it invites us to think, “is about language competency.” Not only does such a test ignore non-language-based expressions of intelligence (see: octopus, above); it ignores the degree to which human language development happens in interaction with the development of other kinds of intelligence (kinesthetic, emotional, etc.). The math involved in describing (i.e., using mathematical language) what is involved in a shortstop catching a grounder and throwing to first base in time to put out a runner is so unbelievably complex that few people can understand it, let alone speak it, but that doesn’t stop the Chicago Cubs’ Dansby Swanson from making effortless throws to Eric Hosmer. A good teacher’s ability to read the room and pivot away from a prepared lesson plan in order to engage students who are anxious, bored, lost, or distracted doesn’t even have good language with which to describe it, but it reveals levels of emotional intelligence that would have baffled Alan Turing himself, let alone the computers his work has led to.

One legitimate concern raised by those who are attentive to questions about “intelligence” and “consciousness” is that we are making machines in our own image. The danger of AI is that human behavior, fickle at best, is a poor basis upon which to build astoundingly powerful tools. Programmers with racial biases already input those biases into machines (see Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code [New York: Polity, 2019]). Hackers with nefarious purposes already modify code to pursue ends that involve crime and violence. Preachers who are short on time, insight, or scruple already scour the internet for sermons they can offer as their own. AI makes such persons’ work easier, more potent, and more difficult to contain. It is for such reasons, I think, that so many people with expertise in AI are pleading for the rapid development of regulatory systems to address the power of AI even as that power grows at seemingly exponential rates.

But there is also a second, slightly more epistemological — perhaps even metaphysical — concern to be raised by those who are thinking about “intelligence,” “consciousness,” and AI. It is that we aren’t so much making something in our own image as we are poorly imagining what we are as we make it. Computer scientist and virtual reality pioneer Jaron Lanier argued in his book You Are Not a Gadget that people change themselves in order to make a computer’s description of them more accurate (Jaron Lanier, You Are Not a Gadget: A Manifesto [New York: Vintage, 2011]). If we think that intelligence means the ability to gather and sift through enormous amounts of data in order to create (derivative) art or (clunky) literature, we tacitly favor particular ways of encountering the world and particular people who are good at walking those ways. And we downplay other ways of encountering the world and attending to the complexity of all persons, including those who create such art and literature. “What are humans that you are mindful of them, mortals that you care for them?” the Psalmist asks. “Yet you have made them a little lower than God, and crowned them with glory and honor” (Psalm 8:4–5, NRSV).

If the first concern is about the power of the tool, the second is about ignoring the powers of the user. And that second concern, at the end of the day, is the one that needs the most attention, because the powers of the user are going to have a far greater influence on the tool than the other way around. Who better than ministers — real, live humans pursuing theological struggles with texts and contexts — to focus our attention on that concern? I doubt that, at least right now, ChatGPT or Google AI could have written this essay. But even if one of them could have, I’d still say that I, the writer, and you, the reader, are far more interesting and complicated creatures than they are, or than we can foresee them becoming.


Mark Douglas

Mark Douglas is an ordained minister in the Presbyterian Church (USA) and Professor of Christian Ethics at Columbia Theological Seminary in Decatur, GA, where he directs the Master of Theology degree program. He is the founding editor of @ this point: theological reflections on church and culture, the seminary’s online journal, and the author of numerous books, including Confessing Christ in the 21st Century (Rowman and Littlefield, 2005), Believing Aloud: Reflections on Being Religious in the Public Sphere (Cascade, 2010), Christian Pacifism for the Environmental Age (Cambridge University Press, 2019), and Modernity, the Environment, and the Just War Tradition (Cambridge University Press, 2022). He is currently working on a new book, War in a Warming World: Religion, Resources, and Refugees.

