How Europe’s AI convention balances innovation and human rights


The story so far: The global governance of artificial intelligence (AI) is becoming more complex even as countries try to govern AI within their borders in various ways, ranging from acts of law to executive orders. Many experts (as well as the Pope) have called for a global treaty to this effect, but the obstacles in its path are daunting.

What is Europe’s AI convention?

Although many ethical guidelines, ‘soft law’ tools, and governance principles are enshrined in various documents, none of them is binding or likely to result in a global treaty. There have also been no ongoing negotiations for an AI treaty at the global level.

Against this background, the Council of Europe (COE) took a big step by adopting the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law — a.k.a. the ‘AI convention’ — on May 17. The COE is an intergovernmental organisation formed in 1949, with 46 members today, including the countries of the EU bloc and others; observer states such as the Holy See, Japan, and the U.S. also participated in drafting the convention.

The agreement is a comprehensive convention covering AI governance and its links to human rights, democracy, and the responsible use of AI. The framework convention will be opened for signature in Vilnius, Lithuania, on September 5.

What is a framework convention?

A ‘framework convention’ is a legally binding treaty that specifies the broader commitments and objectives under the Convention, and sets mechanisms to achieve them. The task of setting specific targets, if required, is left to subsequent agreements.

Those agreements that are negotiated under the framework convention will be called protocols. For example, the Convention on Biological Diversity is a framework convention while the Cartagena Protocol on Biosafety is a protocol under it that deals with living modified organisms. Similarly, in future, there may be a ‘Protocol on AI Risk’ under Europe’s AI convention.

The framework convention approach is useful because it allows flexibility even as it encodes the core principles and processes by which the objectives are to be realised. Parties to the Convention have the discretion to decide the ways in which to achieve the objectives, depending on their capacities and priorities.

The AI convention can catalyse the negotiation of similar conventions at the regional level elsewhere. Moreover, because the U.S. participated in the negotiations through the COE, the convention can indirectly affect AI governance in the U.S. as well, which matters because the country is currently a hotbed of AI innovation. A related disadvantage (of sorts) is that the convention could be perceived as being shaped more by European values and norms in technology governance.

What is the scope of the convention?

Article 1 of the convention states:

“The provisions of this Convention aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law”.

The definition of AI is similar to the one in the EU AI Act, which is based on the OECD’s definition of AI: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Article 3 states:

“The scope of this Convention covers the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy, and the rule of law as follows:

a. Each Party shall apply this Convention to the activities within the lifecycle of artificial intelligence systems undertaken by public authorities or private actors acting on their behalf.

b. Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered in subparagraph a, in a manner conforming with the object and purpose of this Convention.”

How does the text address national security?

Exempting the private sector from the scope of the convention was a contentious issue, and the text reflects the compromise struck between two contrasting positions: total exemption for the private sector and no exemption at all. Article 3(b) allows Parties flexibility in this matter but doesn’t allow them to exempt the private sector completely.

Further, the exemptions in Articles 3.2, 3.3, and 3.4 are broad, pertaining to the protection of national security interests; research, development, and testing; and national defence, respectively. As a result, military applications of AI are not covered by the AI convention. While this is a matter of concern, it is a pragmatic move given the lack of consensus on regulating such applications. Even so, the exemptions in Articles 3.2 and 3.3 — while broad — don’t completely rule out the convention’s applicability vis-à-vis national security and testing, respectively.

Finally, the ‘General Obligations’ in the convention pertain to the protection of human rights (Article 4) and to the integrity of democratic processes and respect for the rule of law (Article 5). While disinformation and deepfakes haven’t been addressed specifically, Parties to the convention are expected to take steps against them under Article 5 — just as they are expected to assess risks arising from the use of AI and to mitigate them.

Notably, the convention also indicates (in Article 22) that Parties can go beyond the commitments and obligations specified.

Why do we need the AI convention?

The AI convention doesn’t create new and/or substantive human rights specific to AI. Instead, it asserts that existing human and fundamental rights that are protected by international and national laws will need to stay protected during the application of AI systems as well. The obligations are primarily directed towards governments, which are expected to install effective remedies (Article 14) and procedural safeguards (Article 15).

In all, the convention takes a comprehensive approach to mitigating risks from the application and use of AI systems for human rights, democracy, and the rule of law. There are bound to be many challenges to implementing it, particularly at a time when AI regulation regimes are yet to be fully established and technology continues to outpace law and policy.

However — and although the European notion of the rule of law can be debated — the convention itself is the need of the hour because of the balance it codifies between innovation in AI and risks to human rights.

Krishna Ravi Srinivas is Adjunct Professor of Law, NALSAR University of Law, Hyderabad, and Associate Faculty Fellow, CeRAI, IIT Madras.
