
Commission welcomes political agreement on Artificial Intelligence Act

The Commission welcomes the political agreement reached between the European Parliament and the Council on the Artificial Intelligence Act (AI Act), proposed by the Commission in April 2021.

Ursula von der Leyen, President of the European Commission, said: “Artificial intelligence is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society. Therefore, I very much welcome today's political agreement by the European Parliament and the Council on the Artificial Intelligence Act. The EU's AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era. By focusing regulation on identifiable risks, today's agreement will foster responsible innovation in Europe. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI.” 

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach:

Minimal risk: The vast majority of AI systems fall into this category. Minimal-risk applications such as AI-enabled recommender systems or spam filters will face no obligations, as they present only minimal or no risk to citizens' rights or safety. Companies may nevertheless voluntarily commit to additional codes of conduct for these systems.

High-risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.

Examples of such high-risk AI systems include certain critical infrastructure, for instance in the fields of water, gas and electricity; medical devices; systems that determine access to educational institutions or that are used to recruit people; and certain systems used in the fields of law enforcement, border control, the administration of justice and democratic processes. Biometric identification, categorisation and emotion recognition systems are also considered high-risk.

Unacceptable risk: AI systems considered a clear threat to people's fundamental rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow 'social scoring' by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used in the workplace, some systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).

Specific transparency risk: When employing AI systems such as chatbots, users should be made aware that they are interacting with a machine. Deepfakes and other AI-generated content will have to be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design their systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.
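
To make the marking requirement concrete, here is a minimal, purely illustrative sketch of how synthetic text might be bundled with a machine-readable provenance label. The Act does not prescribe this format: the field names below are assumptions, and real providers would rely on recognised provenance and watermarking standards rather than ad-hoc JSON.

```python
# A minimal, purely illustrative sketch of machine-readable marking of
# synthetic content. The field names are assumptions for demonstration,
# not a format prescribed by the AI Act.
import json
from datetime import datetime, timezone

def mark_as_synthetic(content: str, generator: str) -> str:
    """Bundle generated text with a machine-readable provenance label."""
    record = {
        "content": content,
        "provenance": {
            "ai_generated": True,  # detectable as artificially generated
            "generator": generator,
            "created": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

print(mark_as_synthetic("Example synthetic paragraph.", "demo-model"))
```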

Fines

Companies that do not comply with the rules will be fined. Fines are tiered: up to €35 million or 7% of global annual turnover (whichever is higher) for violations of the banned AI applications, up to €15 million or 3% for violations of other obligations, and up to €7.5 million or 1.5% for supplying incorrect information. More proportionate caps are foreseen for administrative fines on SMEs and start-ups in case of infringements of the AI Act.
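
As a purely illustrative aid, the sketch below computes the applicable cap for each tier as the higher of the fixed amount and the percentage of global annual turnover; the function name and the example turnover figure are assumptions, not part of the Act.

```python
# Illustrative only: the fine cap for each tier described above is the
# higher of the fixed amount and the share of global annual turnover.

def fine_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the applicable cap: fixed amount or percentage of global
    annual turnover, whichever is higher."""
    return max(fixed_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical €2 billion global annual turnover
print(fine_cap(turnover, 35_000_000, 0.07))    # banned AI practices   -> 140000000.0
print(fine_cap(turnover, 15_000_000, 0.03))    # other obligations     -> 60000000.0
print(fine_cap(turnover, 7_500_000, 0.015))    # incorrect information -> 30000000.0
```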

General purpose AI

The AI Act introduces dedicated rules for general purpose AI models that will ensure transparency along the value chain. For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing. These new obligations will be operationalised through codes of practice developed by industry, the scientific community, civil society and other stakeholders together with the Commission.

In terms of governance, national competent market surveillance authorities will supervise the implementation of the new rules at national level, while a new European AI Office within the European Commission will ensure coordination at European level. The AI Office will also supervise the implementation and enforcement of the new rules on general purpose AI models. Along with the national market surveillance authorities, it will be the first body globally to enforce binding rules on AI and is therefore expected to become an international reference point. For general purpose models, a scientific panel of independent experts will play a central role by issuing alerts on systemic risks and by contributing to the classification and testing of the models.

Next Steps

The political agreement is now subject to formal approval by the European Parliament and the Council, and the AI Act will enter into force 20 days after publication in the Official Journal. It will then become applicable two years after its entry into force, with some exceptions: the prohibitions will already apply after six months, while the rules on general purpose AI will apply after 12 months.
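
The following sketch, again purely illustrative, applies this timeline to a hypothetical publication date; the real dates depend on formal approval and on the actual date of publication in the Official Journal.

```python
# Illustrative only: derives the key application dates described above
# from a hypothetical publication date in the Official Journal.
from datetime import date, timedelta

publication = date(2024, 7, 1)  # hypothetical publication date
entry_into_force = publication + timedelta(days=20)

def months_later(d: date, months: int) -> date:
    """Approximate 'n months after d', keeping the day of the month."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

print("Entry into force:    ", entry_into_force)                   # 2024-07-21
print("Prohibitions apply:  ", months_later(entry_into_force, 6))  # 2025-01-21
print("GPAI rules apply:    ", months_later(entry_into_force, 12)) # 2025-07-21
print("Generally applicable:", months_later(entry_into_force, 24)) # 2026-07-21
```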

To bridge the transitional period before the Regulation becomes generally applicable, the Commission will launch an AI Pact. It will convene AI developers from Europe and around the world who voluntarily commit to implementing key obligations of the AI Act ahead of the legal deadlines.

To promote rules on trustworthy AI at international level, the European Union will continue to work in fora such as the G7, the OECD, the Council of Europe, the G20 and the UN. Just recently, we supported the agreement by G7 leaders under the Hiroshima AI process on International Guiding Principles and a voluntary Code of Conduct for Advanced AI systems.

Background

For years, the Commission has been facilitating and enhancing cooperation on AI across the EU to boost its competitiveness and ensure trust based on EU values. 

Following the publication of the European Strategy on AI in 2018 and after extensive stakeholder consultation, the High-Level Expert Group on Artificial Intelligence (HLEG) developed Ethics Guidelines for Trustworthy AI in 2019 and an Assessment List for Trustworthy AI in 2020. In parallel, the first Coordinated Plan on AI was published in December 2018 as a joint commitment with Member States.

The Commission's White Paper on AI, published in 2020, set out a clear vision for AI in Europe: an ecosystem of excellence and trust, setting the scene for today's political agreement. The public consultation on the White Paper elicited widespread participation from across the world. The White Paper was accompanied by a 'Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics', which concluded that the current product safety legislation contained a number of gaps that needed to be addressed, notably in the Machinery Directive.

Independent, evidence-based research produced by the Joint Research Centre (JRC) has been fundamental in shaping the EU's AI policies and ensuring their effective implementation. Through rigorous research and analysis, the JRC has supported the development of the AI Act, informing AI terminology, risk classification, technical requirements and contributing to the ongoing development of harmonised standards.   


SOURCE: EUROPEAN COMMISSION
