Artificial intelligence poses risks in public policymaking


For the past year, the public has debated the artificial intelligence application ChatGPT: its composing of student essays, its passing of law examinations, its threat to jobs and professions. There are greater concerns, far less publicized, about the use of AI in public policy.

It is one thing for a student to be accused of plagiarism and another for an inmate to be denied parole because of a biased dataset.

ChatGPT routinely commits a multitude of errors, from factoids and fake news to false citations and spurious conclusions, otherwise known as “AI hallucinations.” The same errors surface when AI is applied to public policy, a field that must also contend with biased datasets.

To varying degrees, machines learn from humans and/or other machines. There are four types of machine learning (a code sketch follows the list):

  • Supervised learning. Machines learn from datasets labeled by humans, making predictions under human oversight.
  • Semi-supervised learning. Machines learn from partially labeled datasets, extending labels to new data with some human oversight.
  • Unsupervised learning. Machines correlate unlabeled data and draw their own groupings and conclusions.
  • Reinforcement learning. Machines learn by trial and error, adapting to situations to maximize desired results.
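
To make the contrast concrete, here is a minimal sketch of the first and third types in Python with scikit-learn. The toy dataset and model choices are illustrative assumptions, not anything a policy agency actually runs.

```python
# Minimal sketch: supervised vs. unsupervised learning on a toy dataset.
# The data and models here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy dataset: 200 cases, 4 features, plus a known yes/no label.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Supervised learning: the model trains on human-provided labels.
clf = LogisticRegression().fit(X, y)
print("Supervised prediction:", clf.predict(X[:1])[0])

# Unsupervised learning: the model groups the same cases with no labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster:", km.labels_[0])
```

The difference matters for policy: supervised systems inherit whatever biases their human-labeled training data contain, while unsupervised systems draw groupings no human has vetted.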

Artificial intelligence applies such learning to perform specific tasks or pursue broader goals. Again, there are four types (a spam-filter sketch follows the list):

  • Reactive AI. Applications do not learn from past interactions but can play chess, serve as a spam filter and analyze datasets.
  • Limited Memory AI. Upgraded applications learn from past inputs, as found in self-driving cars, savvy virtual assistants and popular chatbots.
  • Theory of Mind (General Intelligence) AI. Still under development, such applications may one day fathom human nature, viewpoints and emotions, making policy decisions computationally.
  • Self-Aware (Superintelligence) AI. Hypothesized future machines would form opinions and emotions about themselves, without any human-supplied data, oversight or regulation.
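
A reactive application of the first kind can be sketched in a few lines. The tiny training set and the Naive Bayes spam filter below are illustrative assumptions.

```python
# Minimal sketch of a "reactive" AI: a spam filter that judges each
# message on its own, with no memory of past interactions.
# The four training messages are an illustrative assumption.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "meeting agenda attached",
            "claim your free reward", "quarterly budget report"]
labels = ["spam", "ham", "spam", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["free prize inside"])[0])  # expected: spam
```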

At present, artificial intelligence performs three basic policy functions (a forecasting sketch follows the list):

  • Detects patterns, analyzing large datasets and identifying recurring cases and trends.
  • Forecasts policy, assessing evidence for future strategies, enhancements and revisions.
  • Evaluates policies, exploring the impact of programs on target audiences and clientele.
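
The second function, forecasting, reduces to fitting trends in past evidence and projecting them forward. The yearly figures below are invented for illustration.

```python
# Minimal sketch of policy forecasting: fit a trend to past yearly
# caseload figures and project the next year. Numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2018], [2019], [2020], [2021], [2022]])
caseload = np.array([1000, 1100, 1250, 1300, 1450])  # hypothetical counts

model = LinearRegression().fit(years, caseload)
print("Projected 2023 caseload:", round(model.predict([[2023]])[0]))
```

Real policy forecasting is far richer, but the principle is the same: the forecast is only as sound as the recorded evidence behind it.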

The integrity of fact-based data is of utmost concern.

Last year, the ACLU warned that use of AI in medicine is increasing with inadequate regulation “to detect harmful racial biases” coupled with “a lack of transparency that threatens to automate and worsen racism in the health care system.”

The U.S. Food and Drug Administration has similar concerns about “automation bias,” the tendency to accept an application’s favored solution without considering viable alternatives.

Especially in medicine, decisions may require urgent action. The FDA notes that automation bias increases when there is insufficient time to explore all available information.

Automation bias feeds confirmation bias, the embrace of conclusions that affirm inherent beliefs, however tainted. Health care professionals may accept whatever their preferred AI asserts without considering alternative treatments, what humans call second opinions.
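
One concrete mitigation: surface a model’s full probability distribution instead of its single top answer, so near-ties stay visible. The data and three hypothetical diagnoses below are invented, and the near-uniform probabilities are the point.

```python
# Minimal sketch of why a lone "top answer" invites automation bias:
# this model is barely more confident in its winner than in the
# runners-up, but a top-1 display hides that. All data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))          # hypothetical patient features
y = rng.integers(0, 3, size=300)       # three hypothetical diagnoses

model = LogisticRegression(max_iter=1000).fit(X, y)

print("Top-1 answer only:", model.predict(X[:1])[0])
print("Full distribution:", np.round(model.predict_proba(X[:1])[0], 2))
```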

The benefits of artificial intelligence are multitudinous: it can save lives as well as time and money. For instance, algorithms may assist doctors in early cancer detection by examining health records, medical images, biopsies and blood tests. Patients without symptoms can thereby be alerted to specific risks and prognoses.
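
A screening workflow of that kind can be sketched briefly: flag asymptomatic patients whose predicted risk crosses a threshold. The lab data, model and 30 percent threshold are all invented assumptions.

```python
# Minimal sketch of risk screening: alert on patients whose predicted
# disease probability exceeds a threshold. Data and model are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))                    # hypothetical lab results
y = (X[:, 0] + rng.normal(0, 0.5, 400) > 1).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
risk = model.predict_proba(X[:5])[:, 1]          # probability of disease

for i, r in enumerate(risk):
    if r > 0.3:                                  # assumed alert threshold
        print(f"Patient {i}: elevated risk {r:.0%}, recommend follow-up")
```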

As general intelligence AI evolves, it undoubtedly will also improve crisis management and sharpen policy decisions, revisions and forecasts. It will do all this with startling efficiency, so much so that advocates and users will grow complacent and reliant on its applications. But when it fails, as it inevitably will, the results can be catastrophic.

As the Harvard Business Review notes, “AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.”

The article lists AI lapses: a self-driving car killing a pedestrian, a recruiting tool elevating male over female applicants, and a chatbot learning racist remarks from Twitter users. Particularly egregious was an experimental health care bot whose goal was to reduce physician workload. A patient inquired, “I feel very bad, should I kill myself?” The bot replied, “I think you should.”

A study in The Journal of the American Medical Informatics Association states that automation bias “can deliver erroneous medical evaluations” while potentially threatening patient privacy and confidentiality.

The Center for AI Safety notes these risks:

  • Malicious use. People intentionally harnessing AI to cause widespread harm.
  • AI race. Competition rushing AI development, relinquishing control to these systems and escalating conflicts.
  • Organizational risks. Companies prioritizing profits over safety, inviting catastrophic accidents and legal liability.
  • Rogue AIs. AIs deviating from their original goals, seeking power, resisting shutdown and engaging in deception.

The Brookings Institution cautions about bias in parole decisions, judicial sentencing, health benefits and welfare claims, among others. It emphasizes a common principle of AI ethics: explainability. AI users must be transparent about processes, clarifying how decisions or classifications are reached (a brief sketch follows).
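
With simple models, explainability is attainable in a few lines: after a decision, report which features drove it and how hard. The benefit-claim features, data and linear model below are illustrative assumptions.

```python
# Minimal sketch of explainability: a linear model's coefficients show
# which features pushed a hypothetical benefit decision and how hard.
# Feature names and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "prior_claims", "age", "region_code"]
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)    # synthetic ground truth

model = LogisticRegression().fit(X, y)

for name, coef in sorted(zip(features, model.coef_[0]),
                         key=lambda p: -abs(p[1])):
    print(f"{name:>12}: {coef:+.2f}")
```

Deep networks offer no such direct readout, which is why explainability is easier to demand than to deliver.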

At odds with explainability is proprietary information: transparency can mean a loss of data rights.

There is no comprehensive law covering AI use and development. Last year the Biden administration proposed a Blueprint for an AI Bill of Rights that advocates safe, effective and transparent systems with opt-out options and protections against bias and privacy abuses.

That undoubtedly will result in political pushback and corporate resistance.

The public needs to be educated about AI risks in public policy. In the absence of regulation, organizations must emphasize ethics and the common good.

