The EU Artificial Intelligence (AI) Act: A Pioneering Framework for AI Governance


Like the GDPR, the AI Act’s scope is broad, covering all AI systems that are sold, offered, put into service, or used within the EU. Providers or deployers of AI systems based outside the EU are captured by the AI Act if the output of their systems is used in the EU. Companies based in the EU that provide AI systems are captured even if they do not deploy those systems in the EU. There are limited exceptions for personal and research use.

Key aspects of the AI Act include:

1. Broad Definition of Regulated AI Systems

The AI Act broadly defines a regulated AI System:

A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

2. A Tiered, Risk-Based Approach to Regulation

The AI Act categorizes AI systems based on four tiered risk levels: unacceptable, high, limited, and minimal or no risk. 

As the risk associated with a particular category of system rises, stricter rules apply.
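The four-tier structure can be sketched as a simple classification. The tier names come from the Act itself, but the example use-case mapping and the one-line obligation summaries below are simplified illustrations, not legal guidance; actual classification depends on the Act's annexes and case-by-case legal analysis.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated


# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_TIERS = {
    "subliminal manipulation": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    """Return a one-line, simplified summary of obligations for a tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment, risk management, human oversight",
        RiskTier.LIMITED: "transparency (e.g. disclose that users interact with AI)",
        RiskTier.MINIMAL: "no mandatory obligations",
    }[tier]
```

For example, `obligations(EXAMPLE_TIERS["spam filter"])` returns `"no mandatory obligations"`, reflecting that minimal-risk systems sit outside the Act's mandatory requirements.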

Unacceptable AI Systems: Certain AI systems that violate fundamental EU rights are banned outright.

Examples of prohibited AI systems include:

AI Systems That Deploy Subliminal and Manipulative Techniques

  • Systems that subtly influence behavior or decision-making fall into this category. Such techniques can be harmful and undermine individual autonomy.
