A global group of AI experts and data scientists has released a new voluntary framework for developing artificial intelligence products safely.
The World Ethical Data Foundation has 25,000 members, including staff at tech giants such as Meta, Google and Samsung.
The framework contains a checklist of 84 questions for developers to consider at the start of an AI project.
The Foundation is also inviting the public to submit their own questions.
It says they will all be considered at its next annual conference.
The framework has been released in the form of an open letter, seemingly the preferred format of the AI community. It has hundreds of signatories.
AI lets a computer act and respond almost as if it were human.
Computers can be fed huge amounts of information and trained to identify the patterns in it, in order to make predictions, solve problems, and even learn from their own mistakes.
As well as data, AI relies on algorithms – lists of rules which must be followed in the correct order to complete a task.
The Foundation was launched in 2018 and is a non-profit global group bringing together people working in tech and academia to look at the development of new technologies.
Its questions for developers include how they will prevent an AI product from incorporating bias, and how they would respond if a tool's output leads to law-breaking.
This week shadow home secretary Yvette Cooper said that the Labour Party would criminalise those who deliberately use AI tools for terrorist purposes.
Prime Minister Rishi Sunak has appointed Ian Hogarth, a tech entrepreneur and AI investor, to lead an AI taskforce. Mr Hogarth told me this week he wanted "to better understand the risks associated with these frontier AI systems" and hold the companies that develop them accountable.
Other considerations in the framework include the data protection laws of various territories, whether it is clear to a user that they are interacting with AI, and whether human workers who input or tag data used to train the product were treated fairly.
The full list is divided into three chapters: questions for individual developers, questions for a team to consider together, and questions for people testing the product.
Some of the 84 questions are as follows:
- Do I feel rushed or pressured to input data from questionable sources?
- Is the team of people who are working on selecting the training data from a diverse set of backgrounds and experiences to help reduce the bias in the data selection?
- What is the intended use of the model once it is trained?
"We're in this Wild West stage"
"We're in this Wild West stage, where it's just kind of: 'Chuck it out in the open and see how it goes'," said Vince Lynch, founder of the firm IV.AI and advisor to the World Ethical Data Foundation board. He came up with the idea for the framework.
“And now those cracks that are in the foundations are becoming more apparent, as people are having conversations about intellectual property, how human rights are considered in relation to AI and what they’re doing.”
If, for example, a model has been trained on data that turns out to be copyright protected, that data cannot simply be stripped out – the entire model may have to be retrained.
“That can cost hundreds of millions of dollars sometimes. It is incredibly expensive to get it wrong,” Mr Lynch said.
Other voluntary frameworks for the safe development of AI have been proposed.
Margrethe Vestager, the EU's Competition Commissioner, is spearheading EU efforts to create a voluntary code of conduct with the US government, which would see companies using or developing AI sign up to a set of standards that are not legally binding.
Willo is a Glasgow-based recruitment platform which has recently launched an AI tool to go with its service.
The firm said it took three years to collect sufficient data to build it.
Co-founder Andrew Wood said at one point the firm chose to pause its development in response to ethical concerns raised by its customers.
“We’re not using our AI capabilities to do any decision making. The decision making is solely left with the employer,” he said.
“There are certain areas where AI is really applicable, for example, scheduling interviews… but making the decision on whether to move forward [with hiring a candidate] or not, that’s always going to be left to the human as far as we’re concerned.”
Co-founder Euan Cameron said that, for him, transparency to users was an important part of the Foundation's framework.
“If anyone’s using AI, you can’t sneak it through the backdoor and pretend it was a human who created that content,” he said.
“It needs to be clear it was done by AI technology. That really stood out to me.”
Follow Zoe Kleinman on Twitter @zsk.