An Innovation System that Works


ARPAs are everywhere. The granddaddy of them all is the sexagenarian Defense Advanced Research Projects Agency (DARPA), which claims credit for the development of stealth technology, the Internet, and the GPS receivers embedded in our cell phones. But the Department of Homeland Security and the CIA launched their own ARPAs in the wake of 9/11, when national security considerations loomed large in Washington. New ARPAs were established at the Department of Energy in 2009 and the National Institutes of Health (NIH) in 2022, when the country confronted economic, environmental, and public health challenges. The Biomedical Advanced Research and Development Authority, which works on the medical response to bioterrorism and pandemic disease threats, has been portrayed as a “BioDARPA.” The Democrats are pushing for a climate ARPA.

For proponents of the ARPA model, it is something of a panacea: the solution to a host of infrastructure, education, and workforce challenges, among others. It is seen as a way out of a conservative approach to innovation that strangles development. Advocates tend to share a skeptical attitude toward traditional research funding, including the “investigator-initiated” grants administered by the NIH, the National Science Foundation (NSF), and other federal agencies since the end of World War II. They hold that traditional approaches are overly cautious; that their review practices and metrics reward low-risk, low-return investments; and that they are ill-suited to the “grand challenges” of the twenty-first century, including global poverty, pandemics, and climate change.

What’s needed, ARPA supporters argue, are “mission-oriented” alternatives: programs where program managers drawn from industry or academia mimic venture capitalists by funding a “portfolio” of high-risk, high-return projects. The idea is that the enormous payoffs to a few wins will more than cover the many losses. “What matters is how the overall portfolio does, and not how individual projects do,” explains Dani Rodrik. “This is, of course, a point that every investor operating in a high-uncertainty environment, such as venture capital, understands well.”     
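The arithmetic behind this claim is worth making explicit. The sketch below uses invented numbers (the project count, win rate, and payoff multiples are illustrative assumptions, not estimates of any agency’s actual returns) to show how a portfolio of long shots can beat a portfolio of safe bets in expectation, even when nearly every individual long shot fails.

```python
# A toy portfolio comparison. All figures are invented for illustration;
# none reflect actual agency or venture returns.

N = 100  # projects funded under each model

# Low-risk model: every project reliably returns 1.5x its cost.
low_risk_return = N * 1.5

# High-risk model: 2 in 100 projects return 100x; the other 98 return nothing.
win_rate, win_multiple = 0.02, 100.0
high_risk_return = N * win_rate * win_multiple

print(f"low-risk portfolio:  {low_risk_return:.0f} units back on {N} invested")
print(f"high-risk portfolio: {high_risk_return:.0f} units back on {N} invested")

# Output: 150 vs. 200. The high-risk portfolio wins in expectation even
# though 98 of its 100 projects fail outright. But cut the win rate to 1%
# and it merely breaks even, which is why the portfolio (not the project)
# is the right unit of evaluation, and why the model's appeal rests on
# assumptions that are hard to verify in advance.
```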

There’s much that is true in this argument and the critique upon which it’s based. The traditional approach to research funding is indeed risk-averse. Peer reviewers tend to minimize the likelihood of false positives and, in so doing, tolerate an unknown number of false negatives. But there’s something to be said for risk aversion, especially in a world of hard budget constraints. And insofar as the new ARPAs draw resources from existing agencies and programs—and justify their own growth by attacking traditional funding models—they may well do more harm than good. What the country needs instead is a balanced portfolio of high- and low-risk investments, not a civil war in an already fragile innovation ecosystem.


We would do well to remember that DARPA came to prominence in an era of growing inequality and insecurity. The VC model invoked by proponents of the new ARPAs—and the Silicon Valley Consensus on which both are based—are linked to a winner-take-all economy in which the losers not only grow in number but grow increasingly poor, resentful, and, perhaps understandably, suspicious of the very science and expertise on which the ARPA model rests. Where was DARPA, one might ask, when the deindustrialization of the country took hold in the early 1970s? Or when the rise of the environmental movement more or less created a “policy window” that allowed climate scientists to secure more funding from federal agencies? Or when the NIH sponsored a high-profile conference on emerging viruses in 1989, long before DARPA began to take the issue seriously? DARPA wasn’t a leader on these issues. It was at best a follower, deciding that health, climate change, and supply chain disruption constituted national security concerns after others had shown the way. The growing list of new ARPAs therefore testifies, in some sense, to the original’s myopia.

Take, for example, the recent Nobel Prize in Medicine, which was awarded to Katalin Karikó and Drew Weissman for their contributions to the development of mRNA vaccines. While Karikó’s struggle to secure funding has been invoked by critics of the traditional model, she and Weissman—who served as a postdoctoral fellow at the NIH—actually received millions of dollars in NIH funding beginning in the late 1990s, and acknowledge many of these grants in the “key publications” cited by the Nobel Committee. DARPA didn’t get into the mRNA game until much later, when the NIH had already laid the foundations. Its relative contribution—or value added—is therefore hard to assess.

The problem isn’t just that programs designed to stimulate “one-in-a-thousand ideas, much less one-in-a-million ideas” are all but immune to cost-benefit analysis, as Pierre Azoulay and his colleagues explain. It’s that we have no way to “compare DARPA to other funding agencies with a different organizational structure and approach,” in the words of Michael Piore, Phech Colatat, and Elisabeth Beck Reynolds, let alone to evaluate the broader opportunity costs of DARPA’s budget. How would DARPA’s resources have been spent, and what would they have bought, had they been allocated to different agencies? Would they have been allocated toward different goals, toward similar goals but in a different fashion, or no differently at all? And how would one know? The answer isn’t obvious, especially given the changing historical context.

One thing that’s certain is that DARPA’s successes have always involved collaborators. The agency is part of a broader innovation ecosystem made up of firms, universities, and public programs, including those that back the nominally low-risk, low-return investments that are currently out of favor. When DARPA began to take infectious diseases seriously, for instance, it was at the behest of participants in the earlier NIH conference—not least of all Nobel Laureate Josh Lederberg, who convinced DARPA director Larry Lynn “that it was necessary to get into biology and seriously consider biological threats.” When Intel and the California Institute of Technology developed a computer fast enough to model climate change, their support came from more than a dozen institutions and multiple federal agencies, including not only DARPA but NASA, the NSF, and most of the DOE’s national laboratories. Similar stories can be told for artificial intelligence, semiconductors, and the Internet. “DARPA programs across these areas drew on basic research supported by other agencies such as the National Science Foundation,” explains Arati Prabhakar, “which pumped more funding in as that research’s potential was revealed.”

And if traditional funding agencies are really complements to the ARPAs rather than their competitors, efforts to prioritize new ARPAs over the incumbents could backfire in several ways.

First, and most obvious, is the budget constraint. Barring a dramatic increase in science funding, every dollar that goes to a new ARPA has to come from somewhere else in the budget, and “delicate trade-offs” are therefore an inevitable aspect of program design. If the new ARPAs cannibalize basic research and training in their effort to fund transformative projects, they may easily wind up doing more harm than good.

Second, and less obvious, are human resource constraints. Effective program managers are already hard to recruit given their family obligations, salaries, prior commitments, and competing opportunities. The new ARPAs may therefore have trouble finding capable managers, or they may find themselves competing with each other for the best talent.

Third, I fear, are territorial conflicts. When DARPA came of age, it had broad jurisdiction and no competitors. Imitation may be the sincerest form of flattery, but it’s also a source of jurisdictional conflict and confusion.

Finally, and most insidious, is the cultural tension between the new ARPAs and traditional institutions and models. Insofar as proponents of the former deride the latter to justify their own existence and growth, they inject unnecessary and unproductive tension into the innovation ecosystem.


The NSF’s new Directorate for Technology, Innovation and Partnerships (TIP) has the potential to bring these tensions to a head. The architects of the 2021 Endless Frontier Act, which gave birth to TIP, envisioned a nimble organization that would endow program managers with “DARPA-like” autonomy. But it’s hard to “instill a wholly new mission culture in an organization that not only dates to the 1950s but is fundamentally oriented toward basic R&D,” as Harry Broadman argues. The NSF has therefore adopted a more cautious approach. In fact, it now goes out of its way to assert that TIP “does not aim to replicate the DARPA model, nor the approach employed by NSF’s existing directorates,” but will instead “pursue a model that builds vibrant public and private partnerships in alignment with NSF’s longstanding mission and allows for transformative advances.”

But just what that model looks like remains to be seen. It will develop over the next few years as the NSF builds what it calls “a roadmap to guide TIP research and development and workforce investments,” and will continue to evolve going forward. This is hardly surprising: any new organization needs time to take shape. But as TIP does so, the NSF, its supporters in Washington, and stakeholders in the broader U.S. innovation ecosystem should remember that the federal government is neither a venture capitalist, who invests on behalf of a small number of profit-maximizing investors, nor the investors themselves, who in reality hold diversified portfolios of high- and low-risk investments. It is instead the representative of American taxpayers, voters, and citizens who have disparate beliefs, priorities, and values. Insofar as policymakers in Washington ignore those differences—by treating resource allocation as a purely technocratic exercise in cost-benefit analysis—they will put their own success, and perhaps even their political survival, at risk, especially in an era of profound skepticism toward science.

It’s also worth keeping in mind that, unlike their private counterparts, public investments are inherently interdependent. If one succeeds enormously but the rest fail miserably, the former is unlikely to compensate for the latter. The success story will eventually fall victim to the troubles produced by the failures. If the NSF and NIH can’t produce basic research and human capital, the new ARPAs will have trouble producing transformative research. Unlike high-risk investors in the private sector, after all, the federal government is a price-maker. It sets the stage on which the other actors in the economy carry out their roles. And naive analogies to those actors—whether engineers a generation ago or VCs today—offer no basis for effective public policy.

What the government needs instead is a broader stocktaking of individual agencies, programs, and their interconnections. It should recognize that the mere existence of an agency or program does not imply it is efficient, or even potentially efficient. Some organizations have been established or designed for maladroit political reasons, or because they’re in vogue at a given moment in history. Others really are established to meet a particular goal, and designed with that goal in mind. And their goals need not be measured—or measurable—in monetary terms. But the relative weight of efficiency, politics, values, and style in science policymaking is an empirical question, not a baseline assumption.

If policymakers do carry out a stocktaking exercise, moreover, they’ll need to develop a common language. In the existing literature on innovation policy, experts use different terms to mean the same thing and the same terms to mean different things. “Industrial policy” can mean everything from tariffs to tax breaks, procurement contracts to public ownership, and has recently been re-branded as “industrial strategy” in any event.

The same holds for the “ARPA model.” Some portray it as “top-down,” others label it “bottom-up,” and some position themselves in between. It’s possible, of course, that different authors are explicitly or implicitly comparing DARPA to different organizations, or ideal types, and that it is top-down relative to one and bottom-up relative to another. Or it could just be a difference of opinion or interpretation. But without at least a working common language, it’s hard to have a productive debate or dialogue at all.

We’ll also need a complete map of U.S. innovation programs and agencies. The system is currently so disjointed that few, if any, have a complete vision of all the actors involved—let alone their responsibilities, relationships, capabilities, and budgets. A comprehensive but concise handbook of these efforts would go a long way toward addressing this muddle, minimizing the risk of redundancy and conflict as policymakers try to address gaps in the ecosystem.

None of this would address the significant challenge of program evaluation, and the fact that we have little idea of what works and what doesn’t—let alone the precise returns given the diffuse goals at stake and the complexity and interdependency of the ecosystem as a whole. But program evaluation of this sort is likely to be impossible without first carrying out a thorough stocktaking exercise to clear the way. Rather than rushing to build a host of new—and potentially competitive or conflictual—programs, we need to make sense of the old ones.

