EDINBURGH, Scotland — In a world increasingly dominated by artificial intelligence (AI), a group of experts is calling for a shift towards more human-centered technology. The international team argues against creating new AI technologies simply for the sake of making new and more advanced computers. Instead, they urge developers to focus on tech that genuinely meets human needs and enhances the human experience.
Their new book, “Human-Centered AI,” which includes contributions from 50 experts across 12 countries and disciplines ranging from computer science to law and sociology, explores how to shift AI development from technology-driven goals to human-focused ones. This would ensure that technology aligns with the well-being of people, rather than replacing or devaluing human workers.
Shannon Vallor, a leading expert from the University of Edinburgh, emphasizes that human-centered AI aims to support and empower humans, contrasting sharply with technology developed merely to showcase its power. She highlights the rise of generative AI, critiquing its development as driven by corporate ambition rather than human necessity, producing technology that people must adapt to and compete with, rather than technology designed to make someone’s life easier.
“What we get is something that we then have to cope with as opposed to something designed by us, for us, and to benefit us. It’s not the technology we needed,” Vallor explains in a media release. “Instead of adapting technologies to our needs, we adapt ourselves to technology’s needs.”
The book raises concerns about the current trajectory of AI development, including systemic biases and privacy concerns. Malwina Anna Wójcik points out that marginalized communities are often excluded from the AI design process, resulting in technologies that reinforce existing power structures and discrimination. Matt Malone discusses how AI challenges privacy, with many unaware of how their data is collected and used, threatening individuality as technology becomes more integrated into our lives.
“These consent and knowledge gaps result in perpetual intrusions into domains privacy might otherwise seek to control,” Malone explains. “Privacy determines how far we let technology reach into spheres of human life and consciousness. But as those shocks fade, privacy is quickly redefined and reconceived, and as AI captures more time, attention and trust, privacy will continue to play a determinative role in drawing the boundaries between human and technology.”
The international team also explored the behavioral impacts of AI, with the authors showing how platforms like Google can alter core human aspects such as rationality and memory, diminishing personal control over our lives. They scrutinized the use of AI in social media for potentially narrowing users’ interests and pushing them towards extremism through biased content recommendations.
The experts propose practical solutions for integrating a human-centered approach to AI, including diversity in research, interdisciplinary collaborations, and transparent data practices. They also stress the importance of applying existing laws to AI rather than seeking entirely new regulations, encouraging policymakers to confidently regulate AI to prevent irresponsible innovation.
“Nobody has a magic wand. So, I’d say the following to policymakers: Take the issue seriously. Do the best you can. Invite a wide range of perspectives—including marginalized communities and end users—to the table as you try to come up with the right governance mechanisms. But don’t let yourself be paralyzed by a handful of voices pretending that governments can’t regulate AI without stifling innovation. The European Union could set an example in this respect, as the very ambitious AI Act, the first systemic law on AI, should be definitively approved in the next few months,” concludes Benjamin Prud’homme, the Vice-President, Policy, Society and Global Affairs at Mila – Quebec Artificial Intelligence Institute.