Annals of Artificial Intelligence

Chaos in the Cradle of A.I.


More than seven hundred employees of OpenAI have signed a letter demanding that Sam Altman be reinstated as C.E.O. and that the organization’s board of directors resign. Photograph by Haiyun Jiang / NYT / Redux

In the 1991 movie “Terminator 2: Judgment Day,” a sentient killer robot travels back in time to stop the rise of artificial intelligence. The robot locates the computer scientist whose work will lead to the creation of Skynet, a computer system that will destroy the world, and convinces him that A.I. development must be stopped immediately. Together, they travel to the headquarters of Cyberdyne Systems, the company behind Skynet, and blow it up. The A.I. research is destroyed, and the course of history is changed—at least, for the rest of the film. (There have been four further sequels.)

In the sci-fi world of “Terminator 2,” it’s crystal clear what it means for an A.I. to become “self-aware,” or to pose a danger to humanity; it’s equally obvious what might be done to stop it. But in real life, the thousands of scientists who have spent their lives working on A.I. disagree about whether today’s systems think, or could become capable of it; they’re uncertain about what sorts of regulations or scientific advances could let the technology flourish while also preventing it from becoming dangerous. Because some people in A.I. hold strong and unambiguous views about these subjects, it’s possible to get the impression that the A.I. community is divided cleanly into factions, with one worried about risk and the other eager to push forward. But most researchers are somewhere in the middle. They’re still mulling the scientific and philosophical complexities; they want to proceed cautiously, whatever that might mean.

OpenAI, the research organization behind ChatGPT, has long represented that middle-of-the-road position. It was founded in 2015, as a nonprofit, with big investments from Peter Thiel and Elon Musk, who were (and are) concerned about the risks A.I. poses. OpenAI’s goal, as stated in its charter, has been to develop so-called artificial general intelligence, or A.G.I., in a way that is “safe and beneficial” for humankind. Even as it tries to build “highly autonomous systems that outperform humans at most economically valuable work,” it plans to insure that A.I. will not “harm humanity or unduly concentrate power.” These two goals may very well be incompatible; building systems that can replace human workers has a natural tendency to concentrate power. Still, the organization has sought to honor its charter through a hybrid arrangement. In 2019, it divided itself into two units, one for-profit, one nonprofit, with the for-profit part overseen by the nonprofit part. At least in theory, the for-profit part of OpenAI would act like a startup, focussing on accelerating and commercializing the technology; the nonprofit part would act like a watchdog, preventing the creation of Skynet, while pursuing research that might answer important questions about A.I. safety. The profits and investment from commercialization would fund the nonprofit’s research.

The approach was unusual but productive. With the help of more than thirteen billion dollars in investment from Microsoft, OpenAI developed DALL-E, ChatGPT, and other industry-leading A.I. products, and began to turn GPT, its powerful large language models, into the engine of a much larger software ecosystem. This year, it started to seem as though OpenAI might consolidate a lead ahead of Google, Facebook, and other tech companies that are building capable A.I. systems, even as its nonprofit portion launched initiatives focussed on reducing the risks of the technology. This centaur managed to gallop along until last week, when OpenAI’s four-person board of directors, which has been widely seen as sensitive to the risks of A.I., fired its C.E.O., Sam Altman, who came to OpenAI after running the startup accelerator Y Combinator. By way of explanation, the board alleged that Altman had failed to be “consistently candid in his communications”; in an all-hands meeting after the firing, Ilya Sutskever, a board member and OpenAI’s chief scientist, reportedly said that the board had been “doing its duty.” But OpenAI employees were not convinced, and chaos has ensued. More than seven hundred of them signed a letter demanding the board’s resignation and Altman’s reinstatement; meanwhile, Altman and Greg Brockman, a co-founder of OpenAI and a member of its board, were offered positions leading an A.I. division at Microsoft. The employees who signed the letter have threatened to follow them there; if enough of them do, then OpenAI—the most exciting company in tech, recently valued at eighty-six billion dollars—could be toast.

Today’s A.I. systems often work by noticing resemblances and drawing analogies. People think this way, too: in the days after Altman’s termination, observers compared it to the firing of Steve Jobs by Apple’s board, in 1985, or to “Game of Thrones.” When I prompted ChatGPT to suggest some comparable narratives, it nominated “Succession” and “Jurassic Park.” In the latter case, it wrote, “John Hammond pushes to open a dinosaur park quickly, ignoring warnings from experts about the risks, paralleling Altman’s eagerness versus the caution urged by others at OpenAI.” It’s not quite a precise analogy: although Altman wants to see A.I. become widely used and wildly profitable, he has also spoken frequently about its dangers. In May, he told Congress that rogue A.I. could pose an existential risk to humanity. Hammond never told park-goers that they ran a good chance of getting eaten.

In truth, no one outside of a small inner circle knows what really motivated Altman’s firing. Still, on X (formerly known as Twitter) and Substack, speculative posts have multiplied on an industrial scale. At first, many characterized the move as a coup by Sutskever. But this came to seem less likely on Monday, when Sutskever tweeted his remorse: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company,” he wrote, and, in a startling about-face, signed the letter demanding Altman’s reinstatement.

Altman’s firing, then, wasn’t a power grab, exactly; still, it could be plausibly interpreted as the consequence of long-simmering tensions between “accelerationists” and “doomers” within OpenAI. With basic questions about A.I. safety still unanswered, did it make sense to launch a “GPT Store,” where developers would be able to sell GPTs that could “take on real tasks in the real world”? Some observers, like the veteran tech reporter Eric Newcomer, raised the possibility that Altman might have been fired for cause. “We shouldn’t let poor public messaging blind us from the fact that Altman has lost [the] confidence of the board that was supposed to legitimize OpenAI’s integrity,” Newcomer wrote. “Once you add the possibility of existential risk from a super powerful artificial intelligence . . . that only amplifies the potential risk of any breakdown in trust.” Perhaps the board felt itself excluded, by Altman, from updates about OpenAI’s rapidly advancing technology, or came to believe that he was ignoring safety concerns, and decided to use the only truly powerful tool at its disposal—dismissal—to put on the brakes. It has now appointed Emmett Shear, the former C.E.O. of the live-streaming site Twitch, as interim chief executive; Shear has said that he wants to drastically slow the speed of A.I. research. (“If we’re at a speed of 10 right now . . . I think we should aim for a 1-2 instead,” he tweeted, in September.)

When ChatGPT doesn’t know the details of a story, it makes them up. That’s also a human tendency. More details may emerge about what motivated OpenAI’s board. The most extreme possibility may be that OpenAI was on the verge of inventing A.G.I.—an all-purpose, potentially autonomous form of the technology—in a way that the board deemed unsafe. (“Why did you take such a drastic action?” Elon Musk tweeted, at Sutskever. “If OpenAI is doing something potentially dangerous to humanity, the world needs to know.”) The bottom line, for now, is that the story is murky. Meanwhile, the longer-term consequences for OpenAI may not be so easy to discern, either. On X, OpenAI’s employees have taken to tweeting, en masse, that “OpenAI is nothing without its people,” and seem ready to leave. But many of them own OpenAI stock, and were, presumably, looking forward to seeing it go up in value as the tech got commercialized. There are also reports that Altman, with Sutskever’s support, is still trying to return. OpenAI is a unique and valuable organization, to which many proudly belonged; it’s not inconceivable that those involved could find a way to save it.

It’s also unclear what a large transfer of personnel to Microsoft would mean. The company was already a key player in OpenAI’s work; it provided OpenAI’s researchers with computing power and possessed a perpetual license for almost everything OpenAI might invent. It’s possible that Microsoft, unencumbered by OpenAI’s nonprofit mission, could push for faster progress on A.I. But, as a company, Microsoft is hardly known for speed; although it has lately grown more “agile,” it still possesses one of tech’s biggest org charts. If Cyberdyne Systems had been acquired by Microsoft in 1991, it might have helped design Clippy, the company’s “office assistant.” Microsoft is now adapting OpenAI’s tech into something called Copilot—Clippy with a mind and a gift for conversation.

There’s something a little absurd about the saga. It’s remarkable to see so many prominent people in A.I. acting so human—being impulsive, enraged, and confused. The scary part is that the confusion has deep roots. It’s real, and inherent to the field. How dangerous is A.I.? How close are we to inventing A.G.I.? Who should be trusted to keep it safe, and how should they go about doing that? No one really knows the answers to those questions, and, as a result, some of the most qualified people in the world are fighting among themselves.

In the movies—from “The Terminator” to “The Creator”—reining in A.I. is as simple as blowing it up. But in the real world, control is exercised through corporate and financial means, and so the interests of humanity are inevitably tangled up with commercial concerns. OpenAI’s board of directors has chosen to wield its inhibitory power, but its members are not A.I. sovereigns; the company’s employees can always go elsewhere, and now they no longer trust the board that was charged with overseeing them. Further revelations may deepen or complicate the story. But one certainty is that no one is going to arrive and tell us how to control artificial intelligence. This is us, muddling through. ♦

