Review | ChatGPT is nothing new. Machines have helped us write for centuries.


Before blocked writers could turn to ChatGPT for help, they might have consulted Wycliffe Hill’s 1930s manual, “The Plot Genie.” The genie’s magic was held in a series of numbered lists that, according to Hill, collectively contained every story ever told. Simply spin the cardboard “Plot Robot” wheel (sold separately) and generate “a complete plot framework every five minutes.” Perhaps a coffee taster (Unusual Male Character No. 148) and a dope addict’s sister (Usual Female Character No. 50) desire “vengeance against a rival in love [but are] opposed by inclement weather” (Problems List Nos. 5, 11). Or maybe a lighthouse tender (Usual Male Character No. 81) and an attorney (Unusual Female Character No. 23) are “about to permit an unrecognized sister to perish in a fire” (Crises No. 159), when all of a sudden “advantage is threatened by a race riot” (Predicament No. 126).

The Plot Genie is just one of modern AI’s many distant cousins described by Dennis Yi Tenen, a professor of comparative literature at Columbia University and a former Microsoft engineer, in his peculiar history of the modern chatbot, “Literary Theory for Robots: How Computers Learned to Write.” Instead of drawing a line from GPT-3 to GPT-4 to the robot apocalypse (as today’s AI alarmists tend to do), Tenen stretches much further into the past and beyond the boundaries of computer science, going from medieval Arabic divination circles to 17th-century German cabinetmakers to Boeing aviation incident reports.

Tenen’s goal is not only for readers to understand the contemporary chatbot’s predecessors but for them to reimagine their relationship to the written word. The rise of chatbots, he argues, is just the latest in a centuries-long trend of the line blurring between authors and the tools they use, between individual and collective intelligence. “The mistake,” Tenen writes, “was ever to imagine intelligence in a vat of private exceptional achievement. Thinking and writing happen through time, in dialogue with a crowd,” and we have built technologies to mediate that conversation since writing began.

Tenen does not go about making this argument the way one might expect, drawing direct lineages from, say, Noah Webster’s dictionary to auto-correct. Instead, he assembles a dollhouse of obscure “literary robots” throughout history, discussing them and their lovable weirdo inventors to trace parallels with our current AI moment. Tenen never totally justifies his larger worldview that authors are indistinguishable from the tools they use to write. But if you make the effort to wade through this sometimes confusing, erratic book, you will be rewarded with something else: the cool, soothing relief that these times are, in fact, precedented.

As Tenen shows throughout, ours is far from the first moment when scholars have attempted to uncover universal rules of language and then use them to make predictions, answer questions and write stories. Without computers to crunch through data to uncover these rules, however, early author-engineers had to make up their own. Take, for instance, the zairajah, a 14th-century Arabic divination circle and algorithmic soothsayer. The curious could ask the zairajah a question, and their wording would be converted into numbers and looked up in a web of connected tables to generate a vague but intelligible answer. As with modern chatbots, the futures it imagined for you depended on how you described your problems to it.

As new literary robots came onto the scene, they inspired familiar debates, including a heated exchange of letters between two 17th-century Germans. The polymath Athanasius Kircher had just invented the Mathematical Organ, a large wooden cabinet with movable slats that, depending on the manual (or “application”) consulted, could be used to compose music, encrypt secret messages or write poetry. Kircher sought to convince bad-boy poet Quirinus Kuhlmann that his invention was a boon to society. Imbuing this cabinet with intelligence, Kircher argued, had made the knowledge of the ages more accessible. Kuhlmann, however, argued that intelligence without understanding was worth nothing. Under the tutelage of the Mathematical Organ, Kuhlmann argued, a child could grow up to become only an “idiotic parrot.” More than 350 years later, similar conversations swirl around the educational applications of artificial intelligence.

Of course, the advent of the computer eventually led literary robots to read and write on their own. At last, they could infer the rules of language themselves, rather than merely encoding the worldview of a lone tinkerer. And the first thing computers read, Tenen says, was literature. Ada Lovelace, the first computer programmer, described that work as a “poetical science.” Much later, the Markov chain, a language-generation algorithm whose descendants include ChatGPT and Google search, was first created to emulate poetry in the style of Alexander Pushkin’s epic poem “Eugene Onegin.” Even when the U.S. military sought to use auto-generated text as part of its command-and-control approach to managing the Cold War, it trained the technology to create sentences using Lois Lenski’s 1940 children’s book “The Little Train.” From the beginning, computer science and literature have been mirrored arts, one allowing programmers to express logic with symbols, the other allowing writers to use symbols to create meaning.

It’s reassuring to know that while what we consider “intelligence” has always shifted in response to technological developments, creativity has never been extinguished. “Literary Theory for Robots” helps us recognize that over time, the seemingly extraordinary fades into the ordinary, becoming yet another tool through which we think and write in conversation with others. As Tenen writes: “Dictionaries, grammars, thesauruses, and encyclopedias were once hailed as monumental national achievements. Today, they are silently integrated into digital autocompletion or autocorrection tools.”

But that feeling of reassurance is fleeting. While Tenen’s book helps us understand how we got here, it doesn’t take stock of where “here” is and whether it’s where we want to be. Despite the long arc of technological history, it is hard to feel positive about chatbots’ effect on creative work, especially as once-trusted media outlets fill with AI-written sludge and coaxing ChatGPT becomes the central task of more creative jobs. It can be comforting to zoom out from the day-to-day tumult, the better to see our place as part of a larger historical pattern. But should we call that comfort a kind of wisdom? Or is it just complacency?

Gabriel Nicholas is a research fellow at the Center for Democracy & Technology and a nonresident fellow at the NYU Information Law Institute.

Literary Theory for Robots

How Computers Learned to Write

By Dennis Yi Tenen

W.W. Norton. 158 pp. $22
