
Navigating the EU AI Act: What Businesses Need to Know


The EU AI Act, the world’s first comprehensive legal framework for artificial intelligence (AI), is set to transform how businesses approach AI. This landmark regulation aims to harness AI’s potential while mitigating risks across critical areas such as health, safety, fundamental rights, democracy, and the environment. Additionally, the Act seeks to drive innovation, growth, and competitiveness within the EU’s AI market.

As businesses increasingly turn to AI, particularly Generative AI (GenAI), to boost efficiency, they must adopt a responsible and controlled approach to implementation. However, determining where to begin can be daunting.

Understanding AI exposure is crucial for businesses to manage associated risks effectively. Robust AI governance is essential to comply with the EU AI Act and safeguard against these risks. In this insight, we outline who the EU AI Act applies to and what you need to know to manage AI-related risks effectively.

Who Does the AI Act Apply To?

The EU AI Act applies to businesses that create, use, sell, distribute, or import AI systems within the EU and to entities outside the EU if their AI systems produce outputs within the EU. The Act employs a risk-based approach, classifying AI systems into categories based on their potential impact on individuals and society.

Providers of general-purpose AI models, including large models like ChatGPT and Bing Chat, have specific obligations under the Act. However, providers of free and open-source models are largely exempt unless their models pose systemic risks. For instance, if a Generative AI model is used in a high-risk process, it will be treated as high-risk.

The Act’s obligations do not apply to research, development, and prototyping activities before market release nor to AI systems used exclusively for military, defence, or national security purposes.

Understanding the Risk Categories

The European Commission has established a risk-based framework to create effective and proportionate rules for AI systems. There are four risk categories:

  • Unacceptable Risk
  • High Risk
  • Limited Risk
  • Minimal Risk

These categories are based on the AI system’s intended purpose, potential harm to fundamental rights, severity of possible damage, and likelihood of occurrence. The Act also highlights specific transparency requirements and systemic risks. Below are examples of use cases within each risk category.

Unacceptable Risk

AI systems that fall into this category are prohibited due to their potential to cause significant harm. Examples include:

  • Social scoring for both public and private purposes.
  • Exploiting individuals’ vulnerabilities, or using subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making in a way that causes harm.
  • Real-time remote biometric identification in public spaces by law enforcement, with limited exceptions (e.g., searching for abduction victims).
  • Biometric categorisation to infer race, political views, union membership, religious beliefs, or sexual orientation (with some exceptions for law enforcement).
  • Predictive policing on an individual level.
  • Emotion recognition in workplaces and educational institutions, except for medical or safety reasons (e.g., monitoring a pilot’s fatigue).
  • Untargeted scraping of the internet or CCTV footage for facial images to create or expand databases.

High Risk

AI systems in this category require strict regulatory oversight due to their potential impact on individuals and society. Examples include:

  • Essential private and public services (e.g., financial institutions using credit scoring models that could deny citizens loans).
  • Employment, worker management, and access to self-employment (e.g., CV-sorting software for recruitment).
  • Critical infrastructures (e.g., transport systems) that could endanger citizens’ lives and health.
  • Educational or vocational training that determines access to education and career paths (e.g., exam scoring).
  • Safety components of products (e.g., AI in robot-assisted surgery).
  • Law enforcement activities that may impact fundamental rights (e.g., evaluating evidence reliability).
  • Systems used to decide or influence eligibility for health and life insurance.
  • Migration, asylum, and border control management (e.g., verifying travel document authenticity).
  • Administration of justice and democratic processes (e.g., applying laws to specific cases).

Limited Risk

AI systems classified under limited risk have less stringent compliance requirements, focusing primarily on transparency. Users must be informed when interacting with an AI system unless it is evident that the outputs are AI-generated. Examples include:

  • Informing users that they are communicating with a chatbot.
  • Labelling deep-fakes so that they are easily recognisable as such.

Minimal Risk

AI systems not falling into the three specified categories are exempt from compliance under the EU AI Act. Providers should primarily focus on high-risk and limited-risk categories. Other AI systems, such as those used in video games, can be developed and used under existing laws without additional legal obligations.

However, some other risks must be considered:

  • Specific Transparency Risk: Certain AI systems, such as chatbots, have specific transparency requirements due to the risk of manipulation. Users should be informed before interacting with a chatbot.
  • Systemic Risks: General-purpose AI models, including large GenAI models, can pose systemic risks. These versatile models form the basis for many AI systems in the EU and could cause serious accidents or be misused for extensive cyberattacks. If these models propagate harmful biases, many individuals could be affected.
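To make these tiers concrete, the sketch below shows what one row of an AI exposure register (mentioned earlier in this insight) might look like, mapping each tier to the headline obligations described above. The class names, fields, and obligation strings are purely illustrative assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict oversight: conformity assessment etc.
    LIMITED = "limited"            # transparency duties (e.g., chatbot disclosure)
    MINIMAL = "minimal"            # no additional obligations under the Act


@dataclass
class AISystemEntry:
    """One row of a hypothetical AI exposure register (illustrative only)."""
    name: str
    intended_purpose: str
    tier: RiskTier
    is_gpai: bool = False  # general-purpose AI models carry extra obligations


def obligations(entry: AISystemEntry) -> list[str]:
    """Rough mapping from risk tier to the headline duties named in the Act."""
    base = {
        RiskTier.UNACCEPTABLE: ["prohibited: must not be placed on the EU market"],
        RiskTier.HIGH: [
            "conformity assessment before market entry",
            "ongoing risk management and quality control",
            "EU database registration (public authorities)",
        ],
        RiskTier.LIMITED: ["transparency: inform users they interact with AI"],
        RiskTier.MINIMAL: ["no additional obligations"],
    }[entry.tier]
    if entry.is_gpai:
        base = base + ["general-purpose AI model obligations"]
    return base
```

For example, a customer-service chatbot would sit in the limited tier and attract only the transparency duty, while flagging `is_gpai` on a foundation model appends the separate general-purpose obligations.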

What are the obligations of providers of high-risk AI systems?

Service providers must conduct a conformity assessment before introducing or using a high-risk AI system in the EU market. This ensures the system meets mandatory requirements for trustworthy AI, such as data quality, documentation, transparency, human oversight, accuracy, cybersecurity, and robustness. This assessment must be repeated if the system or its purpose changes significantly.

Providers of high-risk AI systems must also establish robust AI governance focusing on quality control and risk management to ensure ongoing compliance and minimise risks for users and affected individuals, even after the product is on the market.

High-risk AI systems used by public authorities or their representatives must be registered in a public EU database.

When Will the AI Act Be Fully Applicable?

Following approval by the European Parliament and the Council of the EU, the EU AI Act will take effect 20 days after its publication in the Official Journal. It will be fully applicable 24 months after coming into force, with a phased implementation:

  • Six months: Prohibited (unacceptable-risk) AI systems must be phased out.
  • 12 months: Obligations for general-purpose AI governance will take effect.
  • 24 months: The AI Act’s rules will apply, including those for high-risk systems listed in Annex III.
  • 36 months: Obligations for high-risk systems listed in Annex II (EU harmonisation legislation) will apply.
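Because each milestone above is an offset from the entry-into-force date, the phased deadlines can be computed mechanically. The sketch below does this with the standard library; the function names and milestone labels are illustrative, and the example date is hypothetical rather than an official one.

```python
import calendar
from datetime import date, timedelta


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day if needed."""
    y, m = divmod(d.month - 1 + months, 12)
    y, m = d.year + y, m + 1
    return date(y, m, min(d.day, calendar.monthrange(y, m)[1]))


def compliance_milestones(publication: date) -> dict[str, date]:
    """Derive the phased deadlines from an Official Journal publication date."""
    entry = publication + timedelta(days=20)  # in force 20 days after publication
    return {
        "entry_into_force": entry,
        "prohibited_systems_phased_out": add_months(entry, 6),
        "gpai_obligations_apply": add_months(entry, 12),
        "high_risk_annex_iii_apply": add_months(entry, 24),
        "high_risk_annex_ii_apply": add_months(entry, 36),
    }


# Hypothetical example: a mid-July publication date.
milestones = compliance_milestones(date(2024, 7, 12))
```

With that example date, entry into force falls on 1 August 2024, so the Annex III high-risk rules would apply from 1 August 2026.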

What Are the Penalties for Infringement?

Penalties for non-compliance with the EU AI Act will be stringent and enforced by the designated AI authority in each EU member state. The Act specifies the following thresholds:

  • Up to €35 million or 7% of the total worldwide annual turnover of the previous financial year (whichever is higher) for violations of prohibited practices or non-compliance with data requirements.
  • Up to €15 million or 3% of the total worldwide annual turnover of the previous financial year for non-compliance with other requirements or obligations, including rules on general-purpose AI models.
  • Up to €7.5 million or 1.5% of the total worldwide annual turnover of the previous financial year for providing incorrect, incomplete, or misleading information to notified bodies and national authorities.

The threshold for each category of infringement will be lower for SMEs and higher for other companies. With advice from the EU AI Board, the Commission will develop guidelines to harmonise national rules and practices in setting administrative fines.
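The fine caps above all follow the same "fixed amount or percentage of worldwide turnover, whichever is higher" formula, which the short sketch below makes explicit. The tier names and function are illustrative assumptions; only the euro amounts and percentages come from the article.

```python
# Illustrative fine-cap tiers from the Act: (fixed amount in EUR, turnover share).
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # €35M or 7% of turnover
    "other_obligations":    (15_000_000, 0.03),   # €15M or 3% of turnover
    "misleading_info":      (7_500_000, 0.015),   # €7.5M or 1.5% of turnover
}


def max_fine_cap(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a tier: the higher of the fixed amount
    and the percentage of the previous year's worldwide turnover."""
    fixed, pct = PENALTY_TIERS[tier]
    return max(fixed, pct * worldwide_annual_turnover_eur)
```

For a company with €1 billion in turnover, 7% (€70 million) exceeds the €35 million floor, so the higher figure sets the cap for a prohibited-practice infringement; a smaller firm would instead hit the fixed amount.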

We Are Here to Help You

The EU AI Act is a complex regulation, but our Trust in AI team is here to guide you. We assist in creating your AI exposure register and establishing strong AI governance. This helps you manage AI risks effectively while reaping the benefits of AI. Contact our experts to learn more about how we can support you.

Notes on Biometric Identification

Using AI-powered real-time remote biometric identification (RBI) is permissible only when not using the tool would result in significant harm, and the rights and freedoms of affected individuals must be considered.

Prior to deployment, the police must complete a fundamental rights impact assessment and promptly register the system in the EU database. In cases of urgency, deployment may commence without registration, but it must be registered without delay.

Before deploying, they need authorisation from a judicial or administrative authority. In urgent cases, deployment can begin without authorisation, but it must be requested within 24 hours. If authorisation is denied, the deployment must stop immediately, and all data must be deleted.

Disclaimer:

The information shared in this article is for general guidance and informational purposes only. It does not constitute legal, financial, or professional advice. Always consult with qualified experts or professionals in specific fields for personalised recommendations.