Health AI “nutrition label” template enables “apples-to-apples” comparisons


In August, Dr. Brian Anderson, co-founder and CEO of the Coalition for Health AI (CHAI), told Newsweek that health AI is the only major “sector of consequence” that remains unchecked in the United States. Today, his organization introduced a tool that aims to create accountability in the largely unregulated industry.

CHAI publicly shared its Applied Model Card on Thursday, encouraging health tech companies to “stress-test” it and provide feedback through January 22. The card acts like a nutrition label for health AI models, giving potential users insight into the tool’s development and any known risks.

Since 2021, CHAI has been working to define best practices for responsible AI use in health care. Its membership has grown to include more than 3,000 organizations from the private and public sectors—including health systems, insurers and health tech companies—that collaborate on the guidelines, including this initial public draft of the Applied Model Card.

CHAI made the model card open source in hopes of setting a standard across the whole health care industry, Anderson told Newsweek on Thursday.

“A common agreement about what the minimum bar for transparency needs to be, as articulated in this AI nutrition label, is the first step in building more trust and a deeper understanding of how these models can be used more strictly,” he said.

AI Nutrition Labels
The Coalition for Health AI publicly shared its Applied Model Card on Thursday, January 9, encouraging health tech companies to “stress-test” it and provide feedback through January 22.
Photo-illustration by Newsweek

CHAI is not the only organization with that belief. As part of its HTI-1 Final Rule, the Office of the National Coordinator for Health Information Technology (ONC) identified 31 source attributes for predictive decision support interventions (which may include generative AI) that could be used as a baseline to build model cards.

“What the ONC didn’t do, the U.S. government didn’t do, is go into detail about each one of those 31 sections,” Anderson said, “because candidly, as an industry, we haven’t come to consensus about what data needs to go into those spaces.”

“And frankly,” he continued, “some of those sections are really hard to come to consensus.”

One of those sections is data input. Health systems want to know what kind of data an AI model was trained on to ensure it can provide answers for a specific population—but AI vendors are concerned about sharing their processes in too much detail and losing their competitive advantage.

However, CHAI has finally reached an initial agreement among the various stakeholders, according to Anderson. He emphasized that this model card is a working draft, not a final version: “We fully anticipate that we’re going to need to update it.”

The current iteration includes spaces for AI developers to share basic information about their model, such as its release date, global availability and any regulatory approval it has received. There is a section to give directions for intended use and a box for warnings—including known risks, limitations, biases and ethical considerations.

There is also space for “trust ingredients,” where companies can share facts about the AI system (for example, does it require ongoing maintenance? How does it mitigate bias?) and offer transparency on funding sources and stakeholders consulted during the design process.

CHAI’s principles of responsible AI informed three more specific disclosure sections: usefulness, usability and efficacy; fairness and equity; and safety and reliability.
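Taken together, the sections described above suggest a simple structure. The sketch below models a hypothetical Applied Model Card as a Python dictionary; the field names are illustrative assumptions for this article, not CHAI’s official schema:

```python
# Hypothetical sketch of the Applied Model Card's sections as a Python dict.
# Field names are illustrative; CHAI's published template defines the real schema.
applied_model_card = {
    "basic_info": {
        "release_date": "2025-01-09",    # example value, not a real product
        "global_availability": True,
        "regulatory_approval": None,     # e.g., FDA clearance, if any
    },
    "intended_use": "Directions for the model's intended clinical use.",
    "warnings": {
        "known_risks": [],
        "limitations": [],
        "biases": [],
        "ethical_considerations": [],
    },
    "trust_ingredients": {
        "ongoing_maintenance_required": True,
        "bias_mitigation": "Description of the bias-mitigation approach.",
        "funding_sources": [],
        "stakeholders_consulted": [],
    },
    # The three disclosure sections informed by CHAI's responsible-AI principles:
    "usefulness_usability_efficacy": {},
    "fairness_equity": {},
    "safety_reliability": {},
}

# A completeness check a health system might run during procurement:
# flag any top-level section that was left empty by the vendor.
missing = [key for key, value in applied_model_card.items()
           if value in (None, "", [], {})]
print(missing)
```

Running the check on this partially filled example flags the three empty disclosure sections, which is the kind of quick “apples-to-apples” screening the article describes health systems wanting during vendor evaluation.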

The Applied Model Card could help health systems navigate a ballooning AI market. Health AI amassed approximately $11 billion of venture capital investments in 2024 in the United States alone, according to the World Economic Forum. It can be challenging to cut through the noise and find a trustworthy product that fits an organizational need. Last fall, Dr. Daniel Yang, vice president of AI and emerging technologies at Kaiser Permanente, told Newsweek he receives multiple pitches from AI developers every day—many of which are irrelevant.

CHAI APPLIED MODEL CARD
An example of CHAI’s Applied Model Card, or AI nutrition label template.
Coalition for Health AI

As health systems evaluate a vast swath of AI vendors, having a nutrition label for each would enable an “apples-to-apples comparison” and expedite the research process, according to Anderson.

“Initially, we’re hearing from health systems that they want to have a model card digitally shared with them as part of procurement processes or as part of AI governance processes,” Anderson said. “If the model has already been bought and deployed at a health system, health systems want these model cards for internal inventory and internal AI governance.”

But for now, the nutrition labels are entirely voluntary for AI companies, and Anderson is unsure whether the government will mandate similar disclosures in the future. On Tuesday, the FDA published its own model card that sponsors of AI-enabled devices can submit to the agency for regulatory approval. Its sample model card bears similarities to CHAI’s, including sections for risk management, development, performance and limitations.

“Our model card is, candidly, a little bit more robust than the FDA’s or ONC’s, but it’s in strong alignment,” Anderson said. “It’s nice to see the private sector and public sector coming to alignment around a common standard for transparency and disclosure, to build trust in a model card.”

