Social scientist raises concerns over AI defining or mischaracterizing individuals


Artificial intelligence is being woven into more aspects of everyday life.

And Douglas Yeung, a senior behavioral and social scientist at RAND, wrote about his concerns in an opinion article that appeared this week in the San Francisco Chronicle.

Yeung raised concerns about AI intruding on our lives, defining us, mischaracterizing us, or generalizing us.

He also raised questions about whether AI systems will properly protect our information.

Yeung said some of these AI concerns stem from digital “tradeoffs” we make to get something we need, which could be as simple as using facial recognition to unlock a smartphone.

“Even before the internet, this trade might have included placing our inked index finger onto a notary’s notepad or reciting our mother’s maiden name to a government official or a bank teller,” he wrote. “But powerful technologies like artificial intelligence could make such trades too lopsided, forcing us to give up too much of who we are to get the things we need.”

Yeung, in an interview Wednesday with The National Desk, said he noticed research about the public losing trust in AI. That got him thinking about digital security, which ultimately motivated him to write the commentary.

“So, I started thinking there’s this possible future where a lot of us as humans are trying to convince machines all the time basically that we’re human,” he said.

He said AI developers generally recognize these concerns.

But he’s concerned that flawed AI systems could be deployed in more and more places, potentially limiting our ability to grow as individuals or access what we need if the systems generate inaccurate or biased results.

“Given how prevalent AI already is and how it’s going to be even more integrated in everything, all of us need to have a say in how it’s done,” he said.

Anton Dahbura, an AI expert and the co-director of the Johns Hopkins Institute for Assured Autonomy, said AI bias and AI hallucinations are separate issues, though they are both real concerns.

AI will never get it right all the time, nor should it be expected to, Dahbura said.

“We’re throwing very complex problems at AI and in the process are accepting a tradeoff of the potential of achieving orders of magnitude increases in what we can achieve at the expense of sometimes having AI not quite do the right thing,” Dahbura said via email. “We can chip away at the corners by different means, but the AI-specific issues won’t go away. In fact, in some sense they’re a feature, not a bug.”

Dahbura said there are a number of things that can be done to assuage these concerns.

And it starts with research — a lot of it.

We should exercise caution before implementing AI where there is excessive risk.

We need increased awareness of the benefits, risks and inherent trade-offs, he said.

And we need informed policymaking by the government.

President Joe Biden has called AI “the most consequential technology of our time.”

AI is accelerating at “warp speed,” holds “incredible opportunities” and “must be governed” in order to reap the rewards while minimizing the risks, the president said last fall.

An expert in how technology affects education previously told The National Desk that schools risk harming student learning if they don’t ask the right questions before taking the leap on “shiny new, very promising” generative AI tools.

And government officials are trying to crack down on the emerging threat of AI-powered robocalls.

That’s just a sampling of the concerns over AI, which has been top of mind for many Americans after ChatGPT ushered in the technology’s breakout year.

AI, however, has been around in some form for decades.

Dahbura previously said that AI has “been very evolutionary” in its development, which has usually taken place behind corporate, government or university walls.

ChatGPT brought “visibility” and “wide-ranging utility” to the table, he said.

ChatGPT’s maker, OpenAI, has established a “Preparedness Framework” to guide its safe development of increasingly powerful systems. The company said its framework is a “living document” that will help it “to track, evaluate, forecast, and protect against catastrophic risks” AI breakthroughs could bring.

