Living guidelines for generative AI — why scientists must oversee its use


Nearly one year after the technology firm OpenAI released the chatbot ChatGPT, companies are in an arms race to develop ‘generative’ artificial-intelligence (AI) systems that are ever more powerful. Each version adds capabilities that increasingly encroach on human skills. By producing text, images, videos and even computer programs in response to human prompts, generative AI systems can make information more accessible and speed up technology development. Yet they also pose risks.

AI systems could flood the Internet with misinformation and ‘deepfakes’ — videos of synthetic faces and voices that can be indistinguishable from those of real people. In the long run, such harms could erode trust between people, politicians, the media and institutions.

The integrity of science itself is also threatened by generative AI, which is already changing how scientists look for information, conduct their research and write and evaluate publications. The widespread use of commercial ‘black box’ AI tools in research might introduce biases and inaccuracies that diminish the validity of scientific knowledge. Generated outputs could distort scientific facts, while still sounding authoritative.

The risks are real, but banning the technology seems unrealistic. How can we benefit from generative AI while avoiding the harms?

Governments are beginning to regulate AI technologies, but comprehensive and effective legislation is years off (see Nature 620, 260–263; 2023). The draft European Union AI Act (now in the final stages of negotiation) demands transparency, such as disclosing that content is AI-generated and publishing summaries of copyrighted data used for training AI systems. The administration of US President Joe Biden aims for self-regulation. In July, it announced that it had obtained voluntary commitments from seven leading tech companies “to manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety”. Digital ‘watermarks’ that identify the origins of a text, picture or video might be one mechanism. In August, the Cyberspace Administration of China announced that it will enforce AI regulations, including requiring that generative AI developers prevent the spread of mis-information or content that challenges Chinese socialist values. The UK government, too, is organizing a summit in November at Bletchley Park near Milton Keynes in the hope of establishing intergovernmental agreement on limiting AI risks.

In the long run, however, it is unclear whether legal restrictions or self-regulation will prove effective. AI is advancing at breakneck speed in a sprawling industry that is continuously reinventing itself. Regulations drawn up today will be outdated by the time they become official policy, and might not anticipate future harms and innovations.

In fact, controlling developments in AI will require a continuous process that balances expertise and independence. That’s why scientists must be central to safeguarding the impacts of this emerging technology. Researchers must take the lead in testing, proving and improving the safety and security of generative AI systems — as they do in other policy realms, such as health. Ideally, this work would be carried out in a specialized institute that is independent of commercial interests.

However, most scientists don’t have the facilities or funding to develop or evaluate generative AI tools independently. Only a handful of university departments and a few big tech companies have the resources to do so. For example, Microsoft invested US$10 billion in OpenAI and its ChatGPT system, which was trained on hundreds of billions of words scraped from the Internet. Companies are unlikely to release details of their latest models for commercial reasons, precluding independent verification and regulation.

Society needs a different approach1. That’s why we — specialists in AI, generative AI, computer science and psychological and social impacts — have begun to form a set of ‘living guidelines’ for the use of generative AI. These were developed at two summits at the Institute for Advanced Study at the University of Amsterdam in April and June, jointly with members of multinational scientific institutions such as the International Science Council, the University-Based Institutes for Advanced Study and the European Academy of Sciences and Arts. Other partners include global institutions (the United Nations and its cultural organization, UNESCO) and the Patrick J. McGovern Foundation in Boston, Massachusetts, which advises the Global AI Action Alliance of the World Economic Forum (see Supplementary information for co-developers and affiliations). Policy advisers also participated as observers, including representatives from the Organisation for Economic Co-operation and Development (OECD) and the European Commission.

Here, we share a first version of the living guidelines and their principles (see ‘Living guidelines for responsible use of generative AI in research’). These adhere to the Universal Declaration of Human Rights, including the ‘right to science’ (Article 27). They also comply with UNESCO’s Recommendation on the Ethics of AI, and its human-rights-centred approach to ethics, as well as the OECD’s AI Principles.

Living guidelines for responsible use of generative AI in research

A first version of the guidelines and their underlying principles.

Researchers, reviewers and editors of scientific journals

1. Because the veracity of generative AI-generated output cannot be guaranteed, and sources cannot be reliably traced and credited, we always need human actors to take on the final responsibility for scientific output. This means that we need human verification for at least the following steps in the research process:• Interpretation of data analysis;• Writing of manuscripts;• Evaluating manuscripts (journal editors);• Peer review;• Identifying research gaps;• Formulating research aims;• Developing hypotheses.

2. Researchers should always acknowledge and specify for which tasks they have used generative AI in (scientific) research publications or presentations.

3. Researchers should acknowledge which generative AI tools (including which versions) they used in their work.

4. To adhere to open-science principles, researchers should preregister the use of generative AI in scientific research (such as which prompts they will use) and make the input and output of generative AI tools available with the publication.

5. Researchers who have extensively used a generative AI tool in their work are recommended to replicate their findings with a different generative AI tool (if applicable).

6. Scientific journals should acknowledge their use of generative AI for peer review or selection purposes.

7. Scientific journals should ask reviewers to what extent they used generative AI for their review.

LLM developers and companies

8. Generative AI developers and companies should make the details of the training data, training set-up and algorithms for large language models (LLMs) fully available to the independent scientific organization that facilitates the development of an auditing body (see ‘An auditor for generative AI’) before launching it to society.

9. Generative AI developers and companies should share ongoing adaptations, training sets and algorithms with the independent scientific auditing body.

10. The independent scientific auditing body and generative AI companies should have a portal where users who discover biased or inaccurate responses can easily report them (the independent scientific auditing body should have access to this portal and actions taken by the company).

Research funding organizations

11. Research (integrity) policies should adhere to the living guidelines.

12. Research funding organizations should not (completely) rely on generative AI tools in evaluating research funding proposals, but always involve human assessment.

13. Research funding organizations should acknowledge their use of generative AI tools for evaluating research proposals.

Guidelines co-developed with Olivier Bouin, Mathieu Denis, Zhenya Tsoy, Vilas Dhar, Huub Dijstelbloem, Saadi Lahlou, Yvonne Donders, Gabriela Ramos, Klaus Mainzer & Peter-Paul Verbeek (see Supplementary information for co-developers’ affiliations).

Key principles of the living guidelines

First, the summit participants agreed on three key principles for the use of generative AI in research — accountability, transparency and independent oversight.

Accountability. Humans must remain in the loop to evaluate the quality of generated content; for example, to replicate results and identify bias. Although low-risk use of generative AI — such as summarization or checking grammar and spelling — can be helpful in scientific research, we advocate that crucial tasks, such as writing manuscripts or peer reviews, should not be fully outsourced to generative AI.

Transparency. Researchers and other stakeholders should always disclose their use of generative AI. This increases awareness and allows researchers to study how generative AI might affect research quality or decision-making. In our view, developers of generative AI tools should also be transparent about their inner workings, to allow robust and critical evaluation of these technologies.

Independent oversight. External, objective auditing of generative AI tools is needed to ensure that they are of high quality and used ethically. AI is a multibillion-dollar industry; the stakes are too high to rely on self-regulation.

Six steps are then needed.

Set up a scientific body to audit AI systems

An official body is needed to evaluate the safety and validity of generative AI systems, including bias and ethical issues in their use (see ‘An auditor for generative AI’). It must have sufficient computing power to run full-scale models, and enough information about source codes to judge how they were trained.

The auditing body, in cooperation with an independent committee of scientists, should develop benchmarks against which AI tools are judged and certified, for example with respect to bias, hate speech, truthfulness and equity. These benchmarks should be updated regularly. As much as possible, only the auditor should be privy to them, so that AI developers cannot tweak their codes to pass tests superficially — as has happened in the car industry2.

The auditor could examine and vet training data sets to prevent bias and undesirable content before generative AI systems are released to the public. It might ask, for example, to what extent do interactions with generative AI distort people’s beliefs3 or vice versa? This will be challenging as more AI products arrive on the market. An example that highlights the difficulties is the HELM initiative, a living benchmark for improving the transparency of language models, which was developed by the Stanford Center for Research on Foundation Models in California (see go.nature.com/46revyc).

Certification of generative AI systems requires continuous revision and adaptation, because the performance of these systems evolves rapidly on the basis of user feedback and concerns. Questions of independence can be raised when initiatives depend on industry support. That is why we are proposing living guidelines developed by experts and scientists, supported by the public sector.

The auditing body should be run in the same way as an international research institution — it should be interdisciplinary, with five to ten research groups that host specialists in computer science, behavioural science, psychology, human rights, privacy, law, ethics, science of science and philosophy. Collaborations with the public and private sectors should be maintained, while retaining independence. Members and advisers should include people from disadvantaged and under-represented groups, who are most likely to experience harm from bias and misinformation (see ‘An auditor for generative AI’ and go.nature.com/48regxm).

An auditor for generative AI

This scientific body must have the following characteristics to be effective.

1. The research community and society need an independent (mitigating conflicts of interest), international (including representatives of the global south) and interdisciplinary scientific organization that develops an independent body to evaluate the generative AI tools and their uses in terms of accuracy, bias, safety and security.

2. The organization and body should at least include, but not be limited to, experts in computer science, behavioural science, psychology, human rights, privacy, law, ethics, science of science and philosophy (and related fields). It should assure, through the composition of the teams and the implemented procedures, that the insights and interests of stakeholders from across the sectors (private and public) and the wide range of stakeholder groups are represented (including disadvantaged groups). Standards for composition of the team might change over time.

3. The body should develop quality standards and certification processes for generative AI tools used in scientific practice and society, which cover at least the following aspects:• Accuracy and truthfulness;• Proper and accurate source crediting;• Discriminatory and hateful content;• Details of the training data, training set-up and algorithms;• Verification of machine learning (especially for safety-critical systems).

4. The independent interdisciplinary scientific body should develop and deploy methods to assess whether generative AI fosters equity, and which steps generative AI developers can take to foster equity and equitable uses(such as inclusion of less common languages and of diverse voices inthe training data).

See ‘Living guidelines for responsible use of generative AI in research’ for a list of guideline co-developers.

Similar bodies exist in other domains, such as the US Food and Drug Administration, which assesses evidence from clinical trials to approve products that meet its standards for safety and effectiveness. The Center for Open Science, an international organization based in Charlottesville, Virginia, seeks to develop regulations, tools and incentives to change scientific practices towards openness, integrity and reproducibility of research.

What we are proposing is more than a kitemark or certification label on a product, although a first step could be to develop such a mark. The auditing body should proactively seek to prevent the introduction of harmful AI products while keeping policymakers, users and consumers informed of whether a product conforms to safety and effectiveness standards.

Keep the living guidelines living

Crucial to the success of the project is ensuring that the guidelines remain up to date and aligned with rapid advances in generative AI. To this end, a second committee composed of about a dozen diverse scientific, policy and technical experts should meet monthly to review the latest developments.

Much like the AI Risk Management Framework of the US National Institute of Standards and Technology4, for example, the committee could map, measure and manage risks. This would require close communication with the auditor. For example, living guidelines might include the right of an individual to control exploitation of their identity (for publicity, for example), while the auditing body would examine whether a particular AI application might infringe this right (such as by producing deep fakes). An AI application that fails certification can still enter the marketplace (if policies don’t restrict it), but individuals and institutions adhering to the guidelines would not be able to use it.

These approaches are applied in other fields. For example, clinical guidelines committees, such as the Stroke Foundation in Australia, have adopted living guidelines to allow patients to access new medicines quickly (see go.nature.com/46qdp3h). The foundation now updates its guidelines every three to six months, instead of roughly every seven years as it did previously. Similarly, the Australian National Clinical Evidence Taskforce for COVID-19 updated its recommendations every 20 days during the pandemic, on average5.

Another example is the Transparency and Openness Promotion (TOP) Guidelines for promoting open-science practices, developed by the Center for Open Science6. A metric called TOP Factor allows researchers to easily check whether journals adhere to open-science guidelines. A similar approach could be used for AI algorithms.

Obtain international funding to sustain the guidelines

Financial investments will be needed. The auditing body will be the most expensive element, because it needs computing power comparable to that of OpenAI or a large university consortium. Although the amount will depend on the remit of the body, it is likely to require at least $1 billion to set up. That is roughly the hardware cost of training GPT-5 (a proposed successor to GPT-4, the large language model that underlies ChatGPT).

U.S. President Joe Biden, Gavin Newsom and Dr. Arati Prabhakar at an artificial intelligence meeting

US President Joe Biden (centre) at a US panel discussion on artificial intelligence in June.Credit: Carlos Avila Gonzalez/Polaris/eyevine

To scope out what’s needed, we call for an interdisciplinary scientific expert group to be set up in early 2024, at a cost of about $1 million, which would report back within six months. This group should sketch scenarios for how the auditing body and guidelines committee would function, as well as budget plans.

Some investment might come from the public purse, from research institutes and nation states. Tech companies should also contribute, as outlined below, through a pooled and independently run mechanism.

Seek legal status for the guidelines

At first, the scientific auditing body would have to operate in an advisory capacity, and could not enforce the guidelines. However, we are hopeful that the living guidelines would inspire better legislation, given interest from leading global organizations in our dialogues. For comparison, the Club of Rome, a research and advocacy organization aimed at raising environmental and societal awareness, has no direct political or economic power, yet still has a large impact on international legislation for limiting global warming.

Alternatively, the scientific auditing body might become an independent entity within the United Nations, similar to the International Atomic Energy Agency. One hurdle might be that some member states could have conflicting opinions on regulating generative AI. Furthermore, updating formal legislation is slow.

Seek collaboration with tech companies

Tech companies could fear that regulations will hamper innovation, and might prefer to self-regulate through voluntary guidelines rather than legally binding ones. For example, many companies changed their privacy policies only after the European Union put its General Data Protection Regulation into effect in 2016 (see go.nature.com/3ten3du).However, our approach has benefits. Auditing and regulation can engender public trust and reduce the risks of malpractice and litigation.

These benefits could provide an incentive for tech companies to invest in an independent fund to finance the infrastructure needed to run and test AI systems. However, some might be reluctant to do so, because a tool failing quality checks could produce unfavourable ratings or evaluations leading to negative media coverage and declining shares.

Another challenge is maintaining the independence of scientific research in a field dominated by the resources and agendas of the tech industry. Its membership must be managed to avoid conflicts of interests, given that these have been demonstrated to lead to biased results in other fields7,8. A strategy for dealing with such issues needs to be developed9.

Address outstanding topics

Several topics have yet to be covered in the living guidelines.

One is the risk of scientific fraud facilitated by generative AI, such as faked brain scans that journal editors or reviewers might think are authentic. The auditing body should invest in tools and recommendations to detect such fraud10. For example, the living guidelines might include a recommendation for editors to ask authors to submit high-resolution raw image data, because current generative AI tools generally create low-resolution images11.

Another issue is the trade-off between copyright issues and increasing the accessibility of scientific knowledge12. On the one hand, scientific publishers could be motivated to share their archives and databases, to increase the quality of generative AI tools and to enhance accessibility of knowledge. On the other hand, as long as generative AI tools obscure the provenance of generated content, users might unwittingly violate intellectual property (even if the legal status of such infringement is still under debate).

The living guidelines will need to address AI literacy so that the public can make safe and ethical use of generative AI tools. For example, a study this year demonstrated that ChatGPT might reduce ‘moral awareness’ because individuals confuse ChatGPT’s random moral stances with their own13.

All of this is becoming more urgent by the day. As generative AI systems develop at lightning speed, the scientific community must take a central role in shaping the future of responsible generative AI. Setting up these bodies and funding them is the first step.


Leave a Reply

Your email address will not be published. Required fields are marked *