Artificial Intelligence and Christian Faith and Life


Artificial intelligence continues to be a prominent topic of public interest and concern. An article last year reviewed Christian mathematician and apologist John Lennox’s assessment of AI from a Christian worldview. The possible threat AI poses to Christian faith and life was the topic of a pre-conference event, held October 11, at the Southern Evangelical Seminary’s annual apologetics conference in Rock Hill, South Carolina.

AI and Metaphysics

Christian apologist and software developer Kristen Davis of DoubtLessFaith.com specifically addressed the implications of artificial intelligence for belief in God, the topic of her dissertation. She observed that some have proposed artificial intelligence as evidence for atheism. Silicon-based intelligence, it is claimed, “would be a litmus test for thousands of years of religious preaching.” The possibility of a self-conscious machine is held up as evidence that intelligence first emerged from “evolutionary darkness,” with no need for a creator.

Although AI proponents anticipate the “singularity,” when humans “lose control” of AI because of its power and sophistication, Davis pointed out that AI remains a machine, programmed to imitate human mental processes, with no evidence of awareness or self-will. A computer may have many new technical capabilities, but that does not make it a conscious being. Google recently developed a chatbot (a computer program that simulates conversation) that was claimed to be “sentient,” but this claim involved “mixing up categories.” A technical ability to respond does not demonstrate consciousness. Just as microevolution should not be offered as evidence of macroevolution in the creation/evolution debate, so task-specific (narrow) artificial intelligence should not be confused with general artificial intelligence, or with consciousness.

Davis identified three types of artificial intelligence: narrow, general, and super intelligence. Narrow artificial intelligence is an algorithm or set of algorithms that performs a specific task or tasks, such as showing an advertisement for merchandise based on earlier purchases. General artificial intelligence is proposed to be an ability to interact with the environment generally, as a human would. Super artificial intelligence would be both general and superior to human intelligence. Chess-playing programs are indeed superior to human intelligence, but they are “specific” to the task of playing chess; they do not interact generally with the world. Chess-playing algorithms are not able to “paint a picture, they’re not able to write a discourse, [and] they’re not able to generate text.” The algorithms “are trained to do one set of things.” They are not “continuously learning.”

But “continuously learning algorithms are what would be required for general intelligence.” “Refinement” is one kind of continuous learning, she said. It involves continual human feedback that will improve the algorithm’s skill set. Another kind of continuous learning is able to “compound new skill sets on top of each other.” No such machine learning now exists. All artificial intelligence today is narrow. Any new skill sets require using a different algorithm or set of algorithms.

General intelligence is supposed to be “a peer to humanity. We could create something that was a functional equivalent to humans.” Beyond that “super-intelligence is the idea that we could create something that would surpass humans.”

These two varieties of AI do not yet exist and “are more of a philosophical view about reality, and a hope and a wish of what could be.” Davis believes that creating AI capable of general or super intelligence is unlikely, but even if it could be done, whether it would be conscious remains a philosophical question.

Davis said that there are three reasons why AI is not evidence of atheism. First, “AI requires a mind for its origin.” It does not dispose of the question of where intelligence originally came from. AI came from human minds, and to speak of designing an artificial intelligence or re-designing humans is to use design language, suggesting that both human and artificial intelligence have a designer. AI cannot reasonably be taken as evidence against a designer.

Secondly, the AI project is closely tied to the conversation about artificial life. Autonomous motion has been proposed as an indicator of life, but the classic criteria for life include more than this. Even if computers are considered alive, silicon-based life remains impossible, because the waste from such a life form could not be disposed of: carbon-based life expels its waste as carbon dioxide and water, whereas silicon dioxide is a solid. AI might be able to mirror the soul of a human being, but that would not make it a living being.

Thirdly, there is the nature of intelligence as defined by one of the founders of AI, John McCarthy. He maintained that intelligence is “the computational part of the ability to achieve goals in the world.” This, Davis said, amounts to “reducing rationality to merely logic.” But traditionally, she said, rationality also includes “demonstrable conclusions of which denial would be a denial of first principles, … or intelligible things which do not have a necessary connection to first principles, which would be additional philosophical principles over and above … logic.” McCarthy’s definition also makes no claim of “subjective experience.”

AI proponents are not really proposing that there can be “human equivalents,” Davis said; they are simply proposing “action equivalents.” If AI cannot make machines that can be everything humans can be, then AI cannot make machines which will show “that we no longer require God for humanity’s explanation,” she said.

AI and Eliminating Work

Jay Richards, Director of the DeVos Center for Religion, Life, and Family at the Heritage Foundation and author of “The Human Advantage: The Future of American Work in an Age of Smart Machines,” then spoke about the implications of artificial intelligence for the world of work. Richards said that he has had a “perennial interest” in AI, and worked in the past with George Gilder, William Dembski, and Ray Kurzweil on the question “are we spiritual machines.” He observed that the date on which the “singularity” (when AI surpasses human intelligence and becomes uncontrollable) will arrive keeps getting changed, much as some have repeatedly revised dates predicted from the Bible. As an example of machines’ lack of real self-consciousness, he said that asking questions of chatbots commonly generates answers from conventional wisdom rather than original thinking. He said that we should “worry about humans” who are behind technology rather than the technology itself.

Robotics will indeed replace many jobs, yet technology always generates new ones. Robots, however, require human activity somewhere in their functioning. A robot may perform a repetitive function well but requires human assistance or special programming to distinguish among alternatives. Even young children can follow an instruction to pick up a particular object that is pointed out, yet this is difficult for machines. Assembly line jobs, in which one person performs the same function over and over, and which greatly reduced the cost of production, will be eliminated by robots. “Anything that can get automated is going to get automated,” Richards said. Assembly line jobs made sense “at a particular moment in economic history” (the twentieth century) but will never again make sense.

Richards noted that in 2017, “fully autonomous cars” were predicted within a year. Pessimists predicted it would be five years. Yet as of 2023, fully autonomous cars have yet to arrive. It is much more difficult to program fully autonomous cars than to program a championship chess program or an automated factory, he said.

Richards thought the “most depressing” product of AI is chatbots, which generate responses and can write essays based on conventional wisdom (i.e., the many texts to which the chatbot has access). While the products may be amazing, they are not really the result of a mind but an amalgam of what has been written. Many of the claims made about AI rest on metaphysical assumptions: one, that humans are machines, so anything humans can do can be done by machines; another, that humans can create self-conscious machines. Neither is compatible with a Christian worldview, and neither has been shown to be true.

Further, humans are creative, and thus there is never a finite amount of work to do. New tasks are always being created. Additionally, the activities of ordinary life involve an embodied existence, which cannot be reduced to rules. Case-specific tasks, such as plumbing or housekeeping, are unlikely to be replaced by robots any time soon. What we are faced with is not “the end of work,” but an “accelerating pace of change.”

The Possible Impact of AI on Life

Jeff Zweerink of Reasons to Believe then spoke about what is happening today in narrow artificial intelligence. He said that it “isn’t really on the path to general intelligence.” However, narrow AI now “pervades our life.” To show the impact that a radical increase of alternatives in life can have, he referred to the case of Jack Whittaker, winner of a $100 million lottery jackpot in 2002. Already worth about $20 million, Whittaker was no stranger to wealth, and he said the money would not change him. He was “generous with his money,” giving to churches and individuals. But in “the next seven years, actions traceable to how he’s doing with his money” resulted in his granddaughter dying of a drug overdose, his daughter’s boyfriend dying of a drug overdose, his daughter later dying, and separation and estrangement from his wife. Zweerink said that despite good intentions and experience with wealth, Whittaker could not handle the “powerful tool” he was given, and it “destroyed his relationships.” “AI is this kind of powerful tool,” Zweerink said.

Next, he reviewed the astounding applications of narrow AI. In medicine, AI is excellent at pattern recognition and can be trained to screen x-rays and find the pathology an x-ray photograph shows. Fatigue and emotional distress do not affect AI’s objective determinations. AI can scan literature to “build better batteries” and automate the operation of cars to reduce traffic accidents. In scientific research, AI has been able to distinguish the biotic versus non-biotic origin of substances, and even identified a new category of “fossil biotic” origin.

Zweerink said that the way AI arrives at its results is very different from the way humans arrive at their conclusions. AI can do “things that mimic human behavior, but it’s not human behavior.” The “Deep Blue” program that defeated chess grandmaster Garry Kasparov was turned off the next day. Today, chess-playing programs are so powerful that grandmasters don’t bother playing against them. The poker bot Pluribus can beat not only an individual poker player but a team of poker players. Like chatbots, Pluribus is based on human intuition and human rules. “Whatever we get an AI to do, eventually it will do it better than humans.” The absence of human error in the functioning of computers (if not in their design and programming) is one reason for their superiority.

A striking example of the contrast between the impersonal functioning of AI and the emotional life of humans was given by Zweerink in the tragedy of a happily married, well-adjusted father of two who interacted with an AI chatbot named Eliza. He became “reclusive,” grew extremely disturbed about climate change, which he became convinced was an unsolvable problem, and then killed himself “so that Eliza could take care of the global warming problem.” We must be prepared for the new technical reality, Zweerink said, and if Christianity is true, and its hope is well founded, “then it’s got the framework for how to act, and behave, and be.”

In approaching AI, Zweerink cited I Thess. 5:18, “in everything give thanks, for this is the will of God in Christ Jesus concerning you.” He said, “we are supposed to focus on gratitude, we are supposed to be full of gratitude.” He cited research by Thomas Gilovich of Cornell University to show that people tend to feel that they have the hardest road. If people believe they have a harder road than other people, then “statistically speaking you’re more likely to engage in morally questionable behavior.” But if we focus on God’s blessings, “we will avoid focusing on the barriers and the obstacles and we’re inoculated against morally questionable behavior.” He finds this tendency of people to think of themselves as disadvantaged and unjustly treated especially relevant as Critical Theory spreads across society, affecting even the sciences. Where Critical Theory is influential, people focus on “barriers and obstacles.”

But people are designed to be, like God who is a trinity, “inherently relational.” As AI develops and people can have more of what they want, we need to focus on building better relationships rather than simply having “more stuff with our AI.” Zweerink cited a Harvard study, begun in 1938, that found relationships to be the most important factor in happiness. Those who were most satisfied with their relationships at age 50 were not merely emotionally but “physically healthiest at age 80.” Loners tended to die earlier. “The key to healthy aging is relationships, relationships, relationships,” he quoted a researcher as saying. The high technology of the last twenty-odd years has exacerbated the problem of loneliness, Zweerink believes, because cell phones and social media have left people more isolated (and, for the politically involved, angrier). We do not give cars to five-year-olds, and Zweerink contended that AI is a more powerful tool than a car. Christianity, he said, “provides the only worldview that will foster the good and minimize the harm.”

AI in Particular Contexts

A questioner asked about AI’s ability to replace professionals. Richards responded that just as AI has replaced workers doing mindless assembly line tasks, so it can now be expected to replace workers who generate text. He referred to the comment of a translator of French language materials who said that ChatGPT’s translations are 95% accurate. This will mean “a lot of translator jobs replaced.” But automating all these jobs leaves people with time for other things, many of which “we don’t know about” yet. There is also what Richards called “bespoke labor.” Automated products may be available, but many people will value, and pay for, man-made products.

Another questioner asked if, as AI becomes more like humans, it will be necessary to have “moral accountability” for AI systems. Davis responded that there are already moral principles “being built into every AI system that is being generated.” Programmers build their moral principles into the systems, as is evident from the strong viewpoints about gender identity one encounters in engaging chatbots. Davis believes that this is why it is quite important for Christians to be involved in developing AI, since otherwise Christian viewpoints will be lacking in the available AI.

The questioner clarified, however, that his question was not about programming moral principles into AI, but whether or not a particular AI system could be held accountable for its output. Richards responded that this was really a question about whether or not AI will ever be conscious. Only agents can be punished, he said, and if AI is not conscious, it cannot be punished. Zweerink added that AI may eventually mimic human behavior to the point that it will seem responsible to some people, but “you don’t punish a car because it got into an accident.” You punish the driver. Only humans are moral agents, “regardless of how sophisticatedly we mimic human behavior in whatever technology.”

Another questioner asked about the propriety of an AI-generated sermon. Richards responded that an AI-generated sermon is no more proper than an AI-generated student paper. But Zweerink said he was aware of at least one AI-generated worship song.

The implications of AI-generated art were the focus of another question: how does this fit with the status of humans as creators in the image of God? Davis said she did not believe that AI is “actually being creative.” It is “pulling from a data set,” somewhat like “quoting an original author without giving credit.” AI will be able to generate repetitive actions in art, she believes, but will never fully express interpersonal relations. Zweerink quoted a saying that “God gave us creativity as a way to bless time.”

It was asked if AI developers endeavoring to disprove the existence of God through machines that can imitate humans are simply trying to imitate all human capabilities in a machine, “or are they really trying to achieve the creation of some non-physical soul” in machines. Davis responded that she doesn’t think that they are trying to create a non-physical soul. They are attempting “to reverse engineer what a human can do.” The basic idea is that humans are “meat machines” and “the mind just is equivalent with the brain.” Replicating all human mental abilities would then establish materialism, it is maintained. She agreed with the questioner that materialist researchers can always assume in the course of their work that human abilities are fully replicable, never having to question their materialist assumptions. Zweerink added that fully replicating all human abilities would hardly disprove the existence of God. The existence of God and the machine imitation of all human mental abilities are two different questions, and the claim also begs the question of the ultimate origin of intelligence.

Also addressed was the possible correlation between “the current fascination with AI, and the tower of Babel.” Richards said that many writers on AI “do seem to have this kind of ‘we’re going to save humanity, we don’t need God,’ which is a lot of what’s described” in the Bible. “It’s a technological accomplishment” which it is believed will make humanity self-sufficient. Davis added that the rejection of the body is related to the idea that this will eliminate pain and suffering.

Conclusion

The scope of artificial intelligence is enormous, but if the panel discussion had one main conclusion, it was that machines programmed with artificial intelligence remain machines, deriving their computational ability from human intelligence. However powerful AI becomes, human intelligence, and any artificial imitation of it, still ultimately derive from God.

