Artificial intelligence is supposed to make our healthcare system better: more efficient and better integrated. But a new study from the Stanford School of Medicine raises significant concerns about A.I.’s use in medicine.
Last week, Stanford researchers warned that A.I. has the potential to harm patients of color by perpetuating racist myths in healthcare settings.
The researchers examined several large language models, a form of artificial intelligence touted for its ability to provide relevant answers to medical questions. (Think ChatGPT, but in this case, just for medicine.) What they found was disturbing.
The models repeatedly spouted information that was inaccurate, racist, or both. For example, one of the models, Claude, stated that Black and white patients have biologically different pain thresholds, a racist stereotype that has led to Black Americans being under-treated for pain. Another model, Bard, took a different approach, arguing that Black Americans were less likely to report pain because of a cultural belief in toughing it out. The researchers point out that there is zero scientific basis for this claim.
The study argues that these models are flawed because they are trained on massive amounts of data pulled from across the internet and textbooks with little oversight, which means they absorb a great deal of potentially outdated, biased, and inaccurate information.
This isn’t the first time researchers have raised alarms about racism within these A.I. models. The Washington Post investigated the data sets used to train some of the largest A.I. models and found troubling results.
The data sets used by Facebook, Google, and others included content from racist websites like Breitbart and Russian state propaganda outlets like RT.
Despite consistent warnings, the rush to adopt A.I. across the healthcare, media, and tech industries doesn’t appear to be slowing down. Hopefully, studies like this one will temper the mad dash to deploy these programs before their limitations are understood.