‘We don’t have the guardrails’ for companies to rush into deploying AI, experts warn


  • Biden follows up on AI company voluntary commitments with sweeping executive order
  • G7 agrees guiding principles and voluntary code of conduct to complement EU’s AI Act
  • Ethics experts warn of ‘catastrophic risk’ and call for immediate measures like watermarking
  • But many western governments including UK and Germany fear regulation will stifle innovation
  • View different in Global South where ‘trustworthy’ AI deemed necessary for fair access to data

November 28 – ChatGPT’s arrival last year both amazed and shocked. And it threw lawmakers into a spin.

The EU, which has been working on AI legislation since 2021, was forced to think afresh, while others have been scrambling to catch up. AI researchers and entrepreneurs have urged governments to regulate the technology.

The abrupt sacking and ultimate reinstatement of OpenAI co-founder and chief executive Sam Altman in late November seemed to point to tensions between ensuring the safety of AI and profiting from it. The chaos also demonstrated how few people have any say in how the technology develops.

We now have a raft of principles and codes. Having extracted a series of voluntary commitments this summer from some of the big AI companies as to how they would develop and test their models, President Biden followed up with a sweeping executive order. It addresses every aspect of government, from procurement to standards-setting, and covers not just generative AI, but any system that makes decisions or recommendations.

The G7 (which includes the EU) agreed international guiding principles and a voluntary code of conduct for AI developers that will complement the EU’s AI Act. The UK held an AI safety summit that brought China into the discussion, but Rishi Sunak, the country’s prime minister, is wary of regulating AI for fear of stifling innovation.

Policymakers are wrestling with where to draw lines and how widely to cast their net. The challenge is that AI is a whole field, one that uses computing power and data to solve problems, ranging from machine learning models trained to recognise ships (and only ships) all the way to much more general systems that can produce images, speech or text.

It holds the promise of curing cancer or tackling climate change, but some fear it could threaten democracy or even the future of our species.


OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled ‘Oversight of A.I.: Rules for Artificial Intelligence’ on Capitol Hill in Washington, U.S., May 16, 2023. REUTERS/Elizabeth Frantz

Leading AI expert Yoshua Bengio told a US Senate committee hearing in the summer that “we have a moral responsibility to mobilise our greatest minds and make major investments in a bold and internationally coordinated effort” to both reap the benefits and protect society against its potential perils.

The question is whether the restraints should be on the AI itself, or its uses. “You can’t really legislate the training of an AI model, but you could damn well legislate the deployment of AI models,” suggests Sasha Luccioni, an AI ethics researcher and climate lead at Hugging Face.

The New York-based company offers a platform to allow users to share AI models and datasets, and has launched an open-source alternative to ChatGPT. “There are certain applied aspects that are very, very enforceable, but people have gotten too caught up on trying to enforce the actual models themselves,” she says.

“If in Canada, we wanted to make sure that people couldn’t use AI chatbots to provide mental health therapy, which we’re seeing happen, that is something that (can be legislated).”

Legislators should also tackle the here and now, she says: copyright infringement, biases in models that influence recruitment or can lead to wrongful arrest. “Conceivably, catastrophic risk could be one of the long-term impacts of AI, but there’s so many things going on now that are not existential and that are not hypothetical: they’re present. We should be regulating them. (For example) requiring companies (to) embed watermarks in their system so that, at a minimum, you can detect whether a content has been generated by a for-profit model.”

Business can also demand more transparency from AI developers, but while there are ways of restricting models, “the gist of it is not to rush headfirst into sticking generative AI into everything, because we don’t have the guardrails for deploying it in business in a way that’s consistent with the constraints of doing business.”


A robotic dog is shown at the Responsible Artificial Intelligence in the Military (REAIM) summit, on responsible use of military artificial intelligence, in The Hague, Netherlands, February 15, 2023. REUTERS/Toby Sterling

The Chinese government now requires watermarking of AI-generated output, though text-based watermarks may not be entirely reliable. The EU, too, is pressing for watermarking and content-labelling as part of its proposed AI Act. It wants to classify AI applications according to the potential risks they pose – so the greater the risk, the greater the level of regulation. That would place restrictions on so-called foundation models, on which applications like ChatGPT are built.

But in early November, Germany, France and Italy began questioning the proposals, preferring instead self-regulation through codes of conduct. The argument, it seems, is fear of stifling innovation, and the dispute could wreck any chance of tying up the legislation before next year’s EU elections, delaying its implementation.

In the developing world, the view is very different. “The only sustainable AI is lawful AI, trustworthy AI, responsible AI. It has nothing to do with stopping innovation,” says Emma Ruttkamp-Bloem, who, among her many roles, led a UNESCO expert group that drafted the 2021 Recommendation on the Ethics of AI. She now sits on the UN’s High-Level Advisory Body on Artificial Intelligence, which is looking at how to leverage AI to accelerate delivery of the sustainable development goals and at how the technology’s governance should be overseen.

In the Global South the concerns centre on how to ensure fair access to, and ownership of, data. “It’s an issue that we consistently raise from the African side, and it’s an issue that is almost as consistently ignored in global conversations,” she says.

Nor does the continent have the computing infrastructure to immediately become a big player in generative AI, she says, which “means that our legislation must have a different focus … it should protect against manipulation from the north”.

The African Union is working on a continent-wide AI strategy, and there’s a commitment to implement the UNESCO recommendation, which sets out core principles of accountability and human oversight, and provides strategies for policy implementation.

In the end, AI affects all humans, says Ruttkamp-Bloem. That necessitates protecting “the most vulnerable groups, at least at a minimum level. It will be the case that in many countries in the Global South, this is the only protection people will have. And that in itself is enough reason to push for global governance of AI.”

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Ethical Corporation Magazine, a part of Reuters Professional, is owned by Thomson Reuters and operates independently of Reuters News.


