How the EU AI Act Classifies AI Systems: Prohibited, High‑Risk, Limited‑Risk and Minimal‑Risk Explained

⚖️ EU AI Act · Risk Classification Guide · 2025–2026

The EU AI Act does not regulate all artificial intelligence the same way. Instead, it uses a four-tier risk-based framework — ranging from outright prohibition to entirely voluntary compliance — to match the intensity of regulatory requirements to the severity of potential harm. Understanding which tier applies to your AI system is the first and most consequential step in any EU AI Act compliance programme.


Section 1 — The Risk-Based Architecture: Purpose and Legal Design

The EU AI Act's central innovation is its rejection of blanket AI regulation in favour of a graduated, risk-proportionate framework. Rather than imposing the same obligations on an AI-powered chess app and a facial recognition system used by border authorities, the Act calibrates the intensity of regulation to the potential severity of harm to fundamental rights, health, safety, and democratic processes. This architecture — built on four tiers ranging from absolute prohibition to entirely voluntary compliance — is what makes the AI Act structurally different from every prior attempt at AI governance.

The Four Tiers at a Glance

🚫 Tier 1: Prohibited AI (Article 5 — in force since 2 February 2025)

Absolute ban. Eight specific AI practices are prohibited regardless of claimed purpose or benefit. No conformity assessment, no exemption process, no transition period beyond the initial implementation window. Any system that falls within Article 5 must be discontinued or never placed on the market.

Examples: social scoring by public authorities, subliminal manipulation causing harm, exploitation of vulnerabilities, most real-time biometric identification in public spaces for law enforcement.

⚠️ Tier 2: High-Risk AI (Annex II & III — applies from 2 August 2026)

Permitted but heavily regulated. High-risk AI systems may be placed on the EU market but only after satisfying a comprehensive set of pre-market and ongoing obligations — conformity assessment, technical documentation, CE marking, EU database registration, QMS, and post-market monitoring.

Covers AI embedded in safety-critical products (Annex II) and AI in eight sensitive sectors including biometrics, employment, law enforcement, and access to essential services (Annex III).

💬 Tier 3: Limited-Risk AI (Article 50 — applies from 2 August 2026)

Permitted with specific transparency obligations. Systems in this tier interact directly with people (chatbots, emotion-detection tools, synthetic media generators) but do not carry the systematic risks of high-risk AI. The single mandatory obligation is transparency: affected persons must be informed they are interacting with, or subject to, an AI system.

Penalties for non-disclosure (up to €15M or 3% of global turnover, the same ceiling that applies to most high-risk infringements) sit below the Article 5 maximum but remain significant.

Tier 4: Minimal-Risk AI (no mandatory obligations)

Permitted with no mandatory AI Act obligations. The vast majority of AI applications in current commercial use fall in this tier — spam filters, recommendation engines, AI-assisted content creation tools, predictive text, and most AI in video games and productivity software.

Providers and deployers may voluntarily adhere to codes of conduct (Article 95) and the EU AI Pact, but there is no legal compulsion to do so.

⚖️ The proportionality principle: the EU AI Act's recitals make clear that the risk-based model is grounded in the proportionality principle enshrined in Article 5(4) TEU. Regulatory measures must not go beyond what is necessary to achieve the legitimate objective — in this case, protecting fundamental rights and safety without unnecessarily restricting the development and deployment of beneficial AI. The result is that most AI systems are entirely unregulated by the Act, and even high-risk AI is not prohibited — it is subject to requirements designed to make it trustworthy, not to eliminate it.

How the AI Act Interacts with Existing EU Product Law

The AI Act is built on top of the EU's existing New Legislative Framework (NLF) for product safety — the same legal architecture used for medical devices, machinery, toys, and construction products. This integration is deliberate: many high-risk AI systems are embedded in physical products already regulated under sectoral EU law, and the AI Act avoids creating parallel obligations that would duplicate that existing framework.

🏗️ New Legislative Framework integration — key features
How the AI Act builds on the NLF
CE marking carried forward: high-risk AI systems use the CE marking system already familiar from NLF product legislation. For Annex II product-embedded AI, the CE mark for the product and the AI system are consolidated.
Notified bodies adopted: where an Annex II product already uses a third-party notified body for conformity assessment, the AI Act routes high-risk AI in that product through the same notified body — no new body required.
Market surveillance integrated: national market surveillance authorities (MSAs) responsible for NLF products take on enforcement of the AI Act for Annex II AI systems — the same authority regulates both the product and its AI component.
Where the AI Act creates new requirements
Annex III standalone systems: AI systems that do not sit inside a physical product and are classified as high-risk under Annex III (e.g. CV-screening software, credit-scoring AI) follow a purely AI Act conformity pathway — no NLF product legislation applies.
EU AI database: a new centralised registration requirement with no NLF equivalent — providers must register all high-risk AI systems before market placement.
GPAI model obligations: general-purpose AI model regulation under Chapter V is entirely novel — no equivalent exists in NLF product law. These obligations apply to AI model providers, not finished-product providers.

Which Actors Are Most Affected by Which Tiers

The four tiers do not affect all actors equally. Providers of high-risk AI systems bear the heaviest burden. Deployers of the same systems carry a secondary but independent set of obligations. Importers and distributors have lighter duties, primarily related to verification and compliance checking. The table below summarises which tier creates significant obligations for which supply chain actor.

📊 Obligations by tier and supply chain role

| Tier | Provider | Deployer | Importer | Distributor |
| --- | --- | --- | --- | --- |
| Prohibited (Art. 5) | Full prohibition — must not place on market | Full prohibition — must not use | Full prohibition — must not import | Full prohibition — must not distribute |
| High-Risk (Annex II/III) | Most obligations: conformity, QMS, docs, CE, registration, PMM | Significant: human oversight, logs, FRIA, incident reporting | Verify CE, Declaration of Conformity, and registration before import | Verify CE and documentation before distribution; report serious incidents |
| Limited-Risk (Art. 50) | Design transparency features into system; provide disclosures in instructions | Inform users of AI interaction and emotion-recognition use | No specific obligations | No specific obligations |
| Minimal-Risk | No mandatory AI Act obligations | No mandatory AI Act obligations | No mandatory AI Act obligations | No mandatory AI Act obligations |

Section 2 — Prohibited AI: Article 5's Hard Limits

🚨 In force since 2 February 2025. The eight Article 5 prohibitions apply without transitional relief. Any AI system or product feature that falls within the prohibited categories was required to be discontinued by this date. National competent authorities are empowered to investigate violations from 2 February 2025 onwards, with the penalty framework under Chapter XII applying from 2 August 2025.

Article 5 of the EU AI Act establishes the absolute floor of AI regulation: eight categories of AI practice so harmful to fundamental rights, human dignity, and democratic values that no justification — commercial, scientific, or governmental — can outweigh the prohibition. Unlike every other tier in the Act, there is no conformity pathway, no exemption process, and no derogation available under the general safety framework. A system falling within Article 5 may not be placed on the market, put into service, or used in the European Union.

Art. 5(1)(a)
Subliminal, Manipulative, or Deceptive Techniques

AI systems that deploy subliminal techniques beyond a person's consciousness, or other manipulative or deceptive techniques that exploit psychological weaknesses, to distort a person's behaviour in a way that causes or is likely to cause significant harm are prohibited. The prohibition covers both direct harm to the individual and harm to third parties.

This extends beyond purely subliminal (sub-perceptual) techniques to include any approach that bypasses rational agency through deception or exploitation of cognitive bias.

Art. 5(1)(b)
Exploitation of Vulnerabilities

AI systems that exploit the vulnerabilities of a specific group of persons — due to their age, disability, or social or economic situation — through techniques that distort their behaviour in a way that causes or is likely to cause significant harm to that person or another person are prohibited.

This prohibition is distinct from the general manipulation ban: it specifically addresses targeted exploitation of structural vulnerability rather than generic deceptive AI. An AI marketing system that specifically targets persons with identified cognitive disabilities to drive purchases would fall within this prohibition.

Art. 5(1)(c)
Social Scoring

AI systems that evaluate or classify natural persons based on social behaviour or personal characteristics over a period of time are prohibited where the scoring leads to detrimental treatment that is either disproportionate to the original behaviour or unjustifiably applied in unrelated social contexts. Unlike the original Commission proposal, the final text is not limited to public authorities: private-sector scoring systems are equally caught.

This prohibition targets China-style social credit systems applied at population scale. It does not prohibit standard risk assessment tools used in regulated contexts (credit scoring by private lenders, fraud detection by banks) provided these are not used to make broader social determinations unrelated to the specific regulated activity.

Art. 5(1)(d)
Assessing Risk of Criminal Offending Based on Profiling

AI systems used by law enforcement to assess the risk of a natural person committing a criminal offence based solely on profiling or personality trait assessment, rather than on objective and verifiable facts directly linked to criminal activity, are prohibited. This prohibition specifically covers "pre-crime" prediction tools.

The prohibition does not extend to AI systems that support human assessment of offending risk where that assessment is grounded in documented behavioural history and verified facts — the specific target is automated personality-based prediction without factual grounding.

Art. 5(1)(e)
Untargeted Scraping of Facial Images for Biometric Databases

AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage are prohibited. This prohibition applies regardless of the stated purpose of the database.

This provision is a direct response to the databases built by companies such as Clearview AI, which scraped billions of facial images from social media to create commercially available biometric identification databases. The prohibition extends to public authority databases built the same way.

Art. 5(1)(f)
Emotion Recognition in the Workplace and Educational Settings

AI systems used to infer the emotional states of natural persons in the workplace and educational institutions are prohibited. This applies to employers using AI to monitor employee emotions through facial analysis, voice analysis, or physiological signals, and to educational institutions monitoring student engagement or emotional states through similar means.

Exception: AI systems used for medical or safety reasons are carved out — for example, AI monitoring driver fatigue for safety purposes on commercial vehicles, or AI used in medical diagnosis of emotional disorders by qualified healthcare providers.

Art. 5(1)(g)
Biometric Categorisation to Infer Protected Characteristics

AI systems that use biometric data — including physiological, behavioural, or psychological signals — to categorise natural persons to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation are prohibited.

This prohibition extends to any system that uses observable physical characteristics as a proxy for protected class membership, regardless of claimed accuracy or stated purpose. It covers both direct and indirect inference methods.

Art. 5(1)(h)
Real-Time Remote Biometric Identification in Public Spaces (Law Enforcement)

AI systems used by law enforcement for real-time remote biometric identification — typically facial recognition in live CCTV feeds — in publicly accessible spaces are prohibited as a general rule. "Real-time" means that identification and searching happen before, during, or shortly after the biometric capture, without a significant delay that allows for human review before action.

Three narrow exceptions (Article 5(2)): real-time RBI is permitted for: (1) targeted searches for specific missing persons, victims of trafficking, and missing children; (2) prevention of specific, substantial, and imminent threat of a terrorist attack; (3) identification of suspects in connection with offences listed in the Framework Decision on the European Arrest Warrant where the offence is punishable by a custodial sentence of at least 4 years. Each use requires prior judicial or independent administrative authorisation except in cases of urgency. Post-use notification to the relevant supervisory authority is required in all cases.

⚖️ Penalty for Article 5 violations: up to €35,000,000 or 7% of worldwide annual turnover for the preceding financial year, whichever is higher. This is the highest penalty tier in the AI Act. It applies to providers, deployers, importers, and distributors. For SMEs and startups, fines are capped at whichever of the two thresholds is lower. Each prohibited practice violation is a separate infringement under Article 99.
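
To make the cap arithmetic concrete, here is a minimal sketch of the "whichever is higher" ceiling logic, plus the inverted "whichever is lower" rule for SMEs. The function name and the turnover figure are illustrative assumptions; actual fines are set by national authorities within these ceilings, not computed by formula.

```python
def article5_fine_ceiling(worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Ceiling for an Article 5 fine under Article 99 (illustrative sketch only)."""
    fixed_cap = 35_000_000                        # EUR 35M fixed ceiling
    turnover_cap = 0.07 * worldwide_turnover_eur  # 7% of worldwide annual turnover
    # Standard rule: whichever is higher; for SMEs and startups: whichever is lower
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# An undertaking with EUR 600M turnover: 7% = EUR 42M > EUR 35M, so the ceiling is EUR 42M
print(article5_fine_ceiling(600_000_000))               # 42000000.0
print(article5_fine_ceiling(600_000_000, is_sme=True))  # 35000000.0
```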

Section 3 — High-Risk AI: Annex II, Annex III, and the Classification Rules

The high-risk tier of the EU AI Act is defined by Article 6, which sets out two distinct routes through which an AI system becomes subject to the full set of provider and deployer obligations. An AI system is high-risk if it is a safety component of a regulated product listed in Annex II — or if it independently falls within one of the eight sectors and use cases enumerated in Annex III. These two routes follow different conformity assessment pathways and interact differently with existing EU product law.

🏭 Route 1 · Annex II · Article 6(1)
AI as Safety Component in Regulated Products

An AI system is automatically classified as high-risk under Article 6(1) if it is itself a product covered by, or a safety component of a product covered by, one of the EU harmonisation legislation instruments listed in Annex II — and that product is required to undergo a third-party conformity assessment by a notified body.

This route does not require analysis of risk to fundamental rights — classification is determined solely by the product category. If a medical device, a vehicle, or a piece of industrial machinery incorporates AI as a safety-critical component, that AI component is high-risk regardless of how it is designed or what it does.

Machinery (Regulation (EU) 2023/1230)
Toys (Directive 2009/48/EC)
Recreational craft & personal watercraft
Lifts (Directive 2014/33/EU)
Equipment for explosive atmospheres (ATEX)
Radio equipment (Directive 2014/53/EU)
Pressure equipment (Directive 2014/68/EU)
Medical devices (MDR, IVDR)
Civil aviation safety (various regulations)
Motor vehicles (type-approval)
Agricultural tractors
Marine equipment (Directive 2014/90/EU)
🎯 Route 2 · Annex III · Article 6(2)
AI in Eight Sensitive Sectors and Use Cases

Under Article 6(2), an AI system used in one of the eight sectors or use cases listed in Annex III is classified as high-risk — subject to the Article 6(3) filter. Unlike Annex II, classification here requires a contextual assessment: not every AI system used in these sectors is high-risk. The classification depends on what the system does and who it affects.

The eight Annex III categories are: (1) biometric identification and categorisation; (2) critical infrastructure management; (3) education and vocational training; (4) employment, workers management and access to self-employment; (5) access to essential private services and public services and benefits; (6) law enforcement; (7) migration, asylum and border control management; and (8) administration of justice and democratic processes.

A key difference from Annex II: Annex III systems do not automatically require a notified body. Most Annex III high-risk systems follow a self-assessment conformity pathway based on internal control (Annex VI) — the provider conducts the conformity assessment internally and signs the EU Declaration of Conformity. The main exception is biometric systems under Annex III point 1, which must go through a notified body where harmonised standards or common specifications have not been applied in full.

The Article 6(3) Filter — When Annex III Systems Are Not High-Risk

Article 6(3) introduces an important escape hatch: an AI system listed in Annex III is not classified as high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. The provider must assess whether this filter applies and, if so, document that assessment — and notify the European Commission of the determination.

🔎 Article 6(3) — Criteria for determining a system is NOT high-risk

The AI Act specifies that an Annex III AI system is not high-risk if it satisfies at least one of the following conditions:

1. The AI system is intended to perform a narrow procedural task — for example, formatting, categorising, or routing data — with no significant impact on the outcome of any decision affecting a natural person.
2. The AI system is intended to improve the result of a previously completed human activity that does not directly affect natural persons — for example, an AI tool that post-processes data generated by a human operator, where the human output (not the AI output) is what reaches the affected person.
3. The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the human assessment that matters — it is purely an anomaly-detection layer feeding into a human review process.
4. The AI system is intended to perform a preparatory task to an assessment relevant for the purposes of Annex III use cases but is not itself making or significantly influencing the outcome of that assessment — for example, an AI tool that organises application documents before a human case worker reviews them.

Key procedural requirement: a provider who concludes their Annex III system is not high-risk because it satisfies one of these conditions must document the reasoning and notify the European Commission. This notification requirement means the Article 6(3) filter is not a quiet internal decision — it creates a regulatory record that national competent authorities can scrutinise. Providers who rely on the filter without adequate documentation face the same enforcement risk as those who fail to classify a high-risk system correctly.
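
Because the filter decision must be documented and defensible, some compliance teams capture it as structured data rather than free-form memos. The sketch below is one hypothetical way to record such an assessment; the class name, field names, and condition paraphrases are illustrative assumptions, not a schema prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

# The four Article 6(3) conditions, paraphrased from the list above
CONDITIONS = {
    "a": "narrow procedural task only",
    "b": "improves a previously completed human activity",
    "c": "detects patterns or deviations without replacing human assessment",
    "d": "preparatory task that does not influence the assessment outcome",
}

@dataclass
class Article63Assessment:
    """Hypothetical record of an Article 6(3) 'not high-risk' determination."""
    system_name: str
    annex_iii_category: str
    conditions_met: dict[str, str] = field(default_factory=dict)  # condition id -> reasoning
    assessed_on: date = field(default_factory=date.today)

    def filter_applies(self) -> bool:
        # At least one condition must be satisfied, with documented reasoning
        return any(self.conditions_met.get(key) for key in CONDITIONS)

record = Article63Assessment(
    system_name="Application document sorter",
    annex_iii_category="Annex III, § 5 (access to essential services)",
    conditions_met={"d": "Organises incoming documents; a human case worker makes the assessment."},
)
assert record.filter_applies()  # the determination must also be notified to the Commission
```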

Conformity Assessment Pathways — Annex II vs. Annex III

📋 How conformity assessment differs between the two routes
Annex II — Product-Embedded AI
Third-party notified body required — the same notified body that assesses the regulated product (e.g. a medical device notified body) also assesses the AI component
Conformity follows the sectoral product legislation (MDR, Machinery Regulation, etc.) with AI Act requirements layered on top
CE marking consolidates both the product CE mark and the AI Act CE requirement in a single mark on the product
National market surveillance authority competent for the product also enforces AI Act obligations for that product's AI component
Application date follows the sectoral product legislation timeline, which may differ from the AI Act's 2 August 2026 general date
Annex III — Standalone High-Risk AI
Self-assessment for most systems — provider conducts the conformity assessment internally under the Annex VI internal-control procedure; no notified body required
Exception: biometric systems under Annex III point 1 must use a notified body where harmonised standards or common specifications have not been applied in full
Provider signs the EU Declaration of Conformity (Annex V) and affixes CE marking — these are provider-generated documents, not notified body certificates
Registration in the EU AI public database is mandatory before market placement — all Annex III high-risk systems must be registered
National AI supervisory authorities (not sectoral product authorities) are competent for enforcement of standalone Annex III systems

Section 4 — The Eight Annex III Sectors: What High-Risk Looks Like in Practice

Annex III of the EU AI Act identifies eight sectors in which AI systems are presumptively high-risk — subject to the Article 6(3) filter discussed in Section 3. Within each sector, the Annex defines specific use cases rather than sweeping category-level inclusion. This means the classification question is not merely "is this AI used in healthcare?" or "is this AI used by law enforcement?" — it is "does this specific AI system perform the particular functions enumerated in Annex III for this sector?" The sector-by-sector breakdown below maps each category to what is in scope, what is out of scope, and the examples most likely to present classification difficulty.

1. Biometric Identification, Categorisation, and Emotion Recognition (Annex III, § 1)
High-risk (in scope)
Remote biometric identification systems used in publicly accessible spaces by or on behalf of law enforcement or public authorities
Biometric categorisation systems that assign individuals to categories based on biometric data — including race, sex, political opinion, religion
Emotion recognition systems used in any context where the output influences decisions about natural persons
Not high-risk (examples)
Biometric verification (1:1 matching) for authentication where only the person themselves initiates the check — e.g. smartphone face unlock
Liveness detection tools used solely to prevent spoofing in digital identity checks
AI-based age estimation at physical retail points of sale where no personal data is retained
Classification grey zone
Workplace time-and-attendance systems using facial recognition — likely in scope as biometric identification in employment context
Customer satisfaction tools using facial analysis in retail — depends on whether emotional categorisation influences individual treatment
2. Critical Infrastructure Management and Operation (Annex III, § 2)
High-risk (in scope)
AI used as a safety component in the management of critical digital infrastructure — road traffic management, railway, water supply, gas, electricity, and heating networks
AI making or influencing automated operational decisions in critical infrastructure where failure could affect significant populations
Not high-risk (examples)
AI-based monitoring dashboards used purely for human situational awareness, with no automated control authority over infrastructure systems
Predictive maintenance tools that flag potential failures for human engineers to review — where human decision intervenes before any action is taken
Classification grey zone
AI-assisted grid balancing for electricity networks — likely high-risk if the system makes near-autonomous load-shedding decisions
AI cybersecurity tools protecting critical infrastructure networks — may qualify if they autonomously block or reroute traffic
3. Education and Vocational Training (Annex III, § 3)
High-risk (in scope)
AI used to determine access to or admission into educational and vocational training institutions — including AI-based admissions scoring tools
AI used to evaluate and assess students — including automated grading systems that determine academic qualifications or progression
AI used to detect prohibited behaviour during assessments — exam proctoring AI that makes or influences decisions about student misconduct
Not high-risk (examples)
AI-powered tutoring and learning support tools that personalise content delivery but do not determine grades or qualifications
Administrative AI tools used for scheduling, timetabling, or student record management with no impact on educational outcomes
Spell-check, grammar correction, or writing assistance tools used in educational settings
Classification grey zone
AI tools that score student essays and provide teacher recommendations — high-risk if the score directly influences progression; not high-risk if purely advisory
Learning analytics platforms that flag at-risk students for intervention — depends on whether flagging leads to automatic restriction of access to resources
4. Employment, Workers Management and Access to Self-Employment (Annex III, § 4)
High-risk (in scope)
AI used for recruitment and selection — including CV screening, automated shortlisting, and interview scoring tools that influence hiring decisions
AI used to make or significantly influence decisions on promotion, task allocation, termination, and performance evaluation of workers
AI used to monitor and evaluate employee performance and behaviour
Not high-risk (examples)
HR chatbots that answer employee questions about policies and benefits, with no involvement in performance or employment decisions
AI tools used for workforce planning at an aggregate level, without making individual employment decisions
Automated payroll and time-tracking tools where the AI role is purely computational and process-administrative
Classification grey zone
AI-assisted job matching platforms — likely high-risk for the platform if its ranking algorithm determines which candidates employers see
Algorithmic work scheduling systems for gig workers — may qualify as task allocation monitoring under Annex III § 4(b)
5. Access to Essential Private and Public Services and Benefits (Annex III, § 5)
High-risk (in scope)
AI used to evaluate creditworthiness of natural persons or establish their credit score — mortgage lending AI, consumer credit scoring
AI used for risk assessment and pricing in life, health, and sickness insurance, affecting individual premiums or eligibility
AI used by public authorities to evaluate eligibility for and grant, reduce, revoke, or reclaim public benefits and social services
AI used to dispatch or prioritise emergency services — police, fire, ambulance dispatch systems
Not high-risk (examples)
AI fraud detection tools that flag transactions for human review, where no automated decision to deny service is made without human sign-off
Customer segmentation AI used for marketing purposes — provided it does not influence access to a financial product or service
Classification grey zone
AI-based anti-money laundering / KYC systems — potentially high-risk if they make or influence decisions to deny account opening or terminate banking relationships
Insurtech pricing AI for property and casualty insurance — may be in scope if pricing determinations constitute individual risk assessment
6. Law Enforcement (Annex III, § 6)
High-risk (in scope)
AI used to assess the risk of a natural person becoming a victim of crime — victimisation prediction tools used by police services
AI used as polygraph and similar truth-testing tools in law enforcement interrogations or interviews
AI used to evaluate the reliability of evidence in criminal proceedings — evidence assessment tools used in investigations
AI used to predict offence recidivism or the future criminal behaviour of individuals
Not high-risk (examples)
Administrative AI tools used by police for scheduling, resource planning, or internal communications without affecting any individual's rights
Data management systems used to maintain and search criminal records databases where the AI performs only indexing and retrieval
Classification grey zone
Predictive policing tools that identify geographic "hotspots" — border cases: if the output is area-level (not individual), the Article 6(3) filter may apply
AI tools used in investigative journalism to analyse crime data — not law enforcement unless used by or on behalf of a competent authority
7. Migration, Asylum, and Border Control Management (Annex III, § 7)
High-risk (in scope)
AI used for lie detection and similar tools during border checks on third-country nationals
AI used to assess risk and security risks posed by persons seeking to enter EU territory
AI used to assist in examination of applications for asylum, visa, and residence permits — including credibility assessment of applicants
AI used to detect, recognise, or identify natural persons in the context of border management
Not high-risk (examples)
Language translation tools used to facilitate communication with applicants — where the translation is purely informational and does not influence any determination
Administrative case management AI used to route and schedule applications without assessing their merits
Classification grey zone
AI tools used by immigration lawyers to assess the likely outcome of applications — not high-risk if used purely as legal advice tools not submitted to authorities; context-dependent
Document verification AI at e-gates — likely high-risk as it performs biometric identification in a border management context
8. Administration of Justice and Democratic Processes (Annex III, § 8)
High-risk (in scope)
AI used to assist judicial authorities in researching and interpreting facts and the law, and in applying the law to specific facts — judicial decision-support AI
AI used to influence the outcome of elections and democratic processes — including AI that targets individuals with political messaging based on profiling
Not high-risk (examples)
Legal research tools used by private lawyers to identify relevant case law — where the tool assists the lawyer's research but has no influence on judicial proceedings
Court administrative management systems for scheduling hearings and managing filings — where the AI performs only process administration
Classification grey zone
AI-assisted dispute resolution platforms (ODR) — potentially high-risk if the AI makes outcome recommendations that are formally binding or carry significant weight in settlement processes
Voter modelling and demographic analytics tools used by political campaigns — likely high-risk if used to target and influence individual voters based on profiling

Section 5 — Limited-Risk and Minimal-Risk: Transparency and the Voluntary Framework

The lower two tiers of the AI Act's risk framework reflect a deliberate policy choice: not every AI system that interacts with people warrants the heavy compliance burden imposed on high-risk systems. Limited-risk AI systems — those that create specific transparency risks without posing systematic safety or fundamental rights concerns — are subject to targeted disclosure obligations under Article 50. Minimal-risk AI systems, which represent the large majority of AI in commercial use today, face no mandatory obligations at all under the AI Act.

Limited-Risk AI — Article 50 Transparency Obligations

Article 50 of the AI Act creates four specific transparency obligations, each tied to a distinct interaction type. These obligations are primarily aimed at ensuring that people who interact with AI — or whose behaviour or state is assessed by AI — are aware they are doing so. The underlying rationale is informational autonomy: people should be able to choose how to engage with AI-generated or AI-mediated content and experiences once they know it is present.

Art. 50(1)
AI Systems Interacting with Natural Persons — Chatbots and Conversational AI
The obligation
Providers of AI systems designed to interact directly with natural persons must ensure that those systems disclose to the natural person that they are interacting with an AI system. The disclosure must be given before the interaction begins or at the latest at the start of it — it cannot be buried in terms of service.
Key limits and exceptions
Exception — obvious context: the obligation does not apply where it is obvious from the circumstances and context that the person is interacting with an AI system. A clearly robotic voice response system, for instance, may not require an explicit "I am an AI" disclosure.
Exception — law enforcement and authorised AI: the obligation does not apply to AI systems that have been authorised by law for lawful purposes, including for the purposes of detecting, preventing, investigating, or prosecuting criminal offences.
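
As a concrete illustration of the timing requirement, the sketch below wraps a chatbot so the disclosure is delivered at the start of the interaction rather than buried in terms of service. The constant, the function, and the reply_fn callback are all hypothetical; the Act mandates the disclosure, not any particular wording or API.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_conversation(first_user_message: str, reply_fn) -> list[str]:
    """Prepend the Article 50(1) disclosure before the first AI reply.

    Sketch only: reply_fn stands in for whatever model call the deployer
    uses, and the wording and placement of the notice are product decisions.
    """
    return [AI_DISCLOSURE, reply_fn(first_user_message)]

# The disclosure appears at the start of the interaction, before any AI output
transcript = start_conversation("What are your opening hours?", lambda m: "We open at 9am.")
print("\n".join(transcript))
```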
Art. 50(3)
Emotion Recognition and Biometric Categorisation Systems
The obligation
Providers and deployers of AI systems that recognise or infer the emotions or intentions of natural persons, or that categorise natural persons based on biometric data into groups such as race, ethnicity, political opinion, or sexual orientation, must inform the persons exposed to these systems of the operation of the system.
The disclosure obligation applies to any context where the system is used — not just high-risk deployment contexts. Even a low-stakes emotion-sensing retail system that tracks customer mood to adjust in-store music falls within this provision.
Relationship to Article 5 prohibition
Important: Article 50(3) operates alongside — not instead of — the Article 5 prohibition on emotion recognition in the workplace and educational institutions. An employer using an emotion-detection tool does not comply with the AI Act simply by disclosing its use; the prohibition under Art. 5(1)(f) continues to apply in those contexts (subject to the medical/safety exception).
Art. 50(2) & (4)
AI-Generated Content (AIGC) and Deepfakes
AIGC disclosure — Art. 50(2)
Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deepfake must disclose that the content has been artificially created or manipulated. The disclosure must be machine-readable as well as visible to viewers where technically feasible.
Deployers of AI systems generating text published to inform the public on matters of public interest must disclose that the text was AI-generated. This is particularly relevant for news media, public communications, and political content.
Exceptions and carve-outs
Creative and artistic exemption: the deepfake disclosure obligation does not apply where the content forms part of an evident artistic, creative, satirical, or fictional work — provided it does not risk seriously misleading the public about a real person.
Authorised AI systems: the disclosure obligations do not apply to AI systems authorised by law for criminal investigation purposes, or where disclosure would obstruct a lawful activity.
⚠️ The deepfake disclosure obligation and political content

The Article 50 deepfake and AIGC disclosure requirements carry particular significance for political communications. AI systems used to generate or edit audio, video, or images of real political figures — or to generate persuasive political text — trigger mandatory disclosure requirements regardless of whether the content would otherwise fall within the limited-risk tier. Providers of general-purpose AI models used to generate such content are required to ensure their systems technically support the labelling of AI-generated outputs (Article 50(5)). This places an obligation on foundation model providers — not only on the downstream deployers who create the actual content.

There is no minimum harm threshold for the disclosure to apply: any AI-generated deepfake requires disclosure unless it clearly falls within the artistic/satirical carve-out. The practical challenge for media organisations, political campaigns, and communications agencies is designing workflows that automatically apply the required machine-readable labels to all AI-generated or AI-edited content before publication.
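
As one hypothetical shape such a workflow could take, the sketch below builds a simple machine-readable JSON label to attach to an asset before publication. The schema and field names are invented for illustration; the Act requires machine-readable disclosure but does not prescribe a format, and production pipelines typically embed provenance metadata in the media file itself.

```python
import json
from datetime import datetime, timezone

def build_disclosure_label(content_id: str, generator: str, content_type: str) -> str:
    """Hypothetical machine-readable label for AI-generated content (illustrative schema)."""
    label = {
        "content_id": content_id,
        "ai_generated": True,
        "disclosure": "This content has been artificially generated or manipulated.",
        "generator": generator,        # e.g. the model or tool used
        "content_type": content_type,  # "image", "audio", "video", or "text"
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

# Attach a label to every AI-generated asset before it is published
print(build_disclosure_label("press-image-0042", "in-house-image-model", "image"))
```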

Minimal-Risk AI — No Mandatory AI Act Obligations

The vast majority of AI systems currently in commercial use fall into the minimal-risk tier. There are no AI Act compliance obligations — no conformity assessment, no registration, no technical documentation, no transparency disclosure — for these systems. The Act deliberately avoids imposing costs on beneficial, low-risk AI to preserve the EU's competitiveness and innovation capacity.

🎮 AI in Video Games
NPC behaviour, procedural generation, difficulty scaling, and game AI that does not interact with players in ways that could affect their rights or wellbeing

📧 Spam Filters
Email classification, content filtering, and anti-phishing AI — standard inbox protection tools used by businesses and consumers

🔍 Search and Recommendation Engines
General web search ranking algorithms and content recommendation systems — including streaming service recommenders and e-commerce product suggestions

✍️ AI Writing Assistance
Grammar correction, autocomplete, predictive text, and style suggestions — productivity tools that assist users in drafting content

📊 Inventory and Operations AI
Demand forecasting, logistics optimisation, and supply chain AI that operates at a business operations level without making decisions about natural persons

🎨 Creative AI Tools
Image generation, music composition, video editing assistance, and design AI — where the output is clearly artistic and the system is not used to create deepfakes of real persons

Voluntary Codes of Conduct and the EU AI Pact

🤝 Voluntary commitment framework — Article 95 and EU AI Pact
Article 95 — Voluntary Codes of Conduct
The AI Act encourages providers of non-high-risk AI systems to voluntarily adopt codes of conduct aligned with the mandatory obligations that apply to high-risk systems — risk management, data governance, transparency, human oversight, and accuracy
Codes of conduct are developed by providers themselves with Commission facilitation — they are not government-imposed but are designed to create industry-level best practice standards above the minimal-risk floor
Participation in an Article 95 code is voluntary and does not create legal liability for the high-risk obligations — but may be taken into account by national authorities in enforcement discretion
Codes are particularly encouraged for AI systems that interact with natural persons or that process large volumes of personal data, even where no mandatory Article 50 obligation applies
EU AI Pact — Voluntary Early Compliance
The EU AI Pact is a Commission-led voluntary commitment programme launched in 2024, inviting AI providers and deployers to commit to early compliance with key AI Act obligations ahead of the legal application dates
Pact signatories commit to: implementing internal AI governance policies; mapping and classifying AI systems; applying the prohibited practices framework from 2 February 2025 (ahead of or in line with the legal requirement); and contributing to code of practice development
The Pact is open to all organisations — not only EU-based companies. Non-EU providers and deployers with EU market exposure can sign up and use it as a structured early compliance roadmap
Participation is tracked and published by the Commission — reputational signalling to enterprise customers and regulators that the organisation is engaging proactively with AI Act obligations

Section 6 — Classifying Your AI System: Step-by-Step Decision Framework

Risk classification under the EU AI Act is not a single question with a binary answer. It is a sequential decision process that must be applied system-by-system, use-case by use-case, across every AI system your organisation develops, provides, or deploys. Getting the classification right matters enormously: under-classification exposes your organisation to enforcement risk and penalties; over-classification wastes resources on compliance work that is not legally required. The framework below provides a structured, article-by-article classification methodology, followed by a short code sketch of the same decision flow.

The 7-Step Classification Framework

Step 1: Define the system's intended purpose and determine whether it is an "AI system" under Article 3(1)
Not every automated or algorithmic tool is an "AI system" under the Act. The statutory definition requires inference capability — the system must generate outputs (predictions, recommendations, decisions, content) from inputs using machine learning, logic-based or knowledge-based approaches, or statistical methods. Simple rule-based automation, deterministic decision trees, and look-up-table systems are generally not AI systems under the Act and are excluded from scope entirely. Start here to avoid misclassifying systems that do not need classification.
Not AI system → no AI Act obligations
Is an AI system → proceed to Step 2
Step 2: Check Article 2 territorial scope — does the AI Act apply to this system at all?
Even for a confirmed AI system, the AI Act only applies if the system is placed on the EU market, put into service in the EU, or its output is used in the EU. For non-EU providers, the key question is whether a deployer in the EU uses the system's output — if yes, the Act applies. Pure third-country use with no EU nexus falls outside scope. Also confirm your role: are you the provider (developer and market placer), or the deployer (user under your own authority)?
No EU nexus → no AI Act obligations
EU nexus confirmed → proceed to Step 3
Step 3: Check Article 5 — does the system fall within any prohibited practice?
Before any other classification analysis, check whether the system or any of its features falls within the eight prohibited categories. Review each of Art. 5(1)(a)–(h) against the system's actual functionality and use context — not just its stated purpose. Pay particular attention to: manipulation or deceptive techniques affecting behaviour; exploitation of vulnerability; real-time biometric identification in public spaces; emotion recognition in employment or education; and biometric categorisation for protected characteristics. If any prohibition applies, the system cannot be placed on the market, full stop.
Prohibition applies → discontinued immediately
No prohibition applies → proceed to Step 4
Step 4: Check Annex II — is the system a safety component of a regulated product?
Is the AI system a product regulated under Annex II harmonisation legislation (medical devices, machinery, vehicles, aviation safety, etc.)? Or is it a safety component of such a product — meaning that if the AI component failed, it could endanger the safety of the product or its users? If yes to either, the system is automatically high-risk under Article 6(1) — regardless of what the system actually does. The conformity assessment follows the sectoral legislation with AI Act obligations layered on top, using the relevant notified body.
Annex II product or safety component → High-risk (Route 1)
Not an Annex II product → proceed to Step 5
Step 5: Check Annex III — does the system fall within one of the eight high-risk use case categories?
Read Annex III carefully. Match the system's actual use case — not its general sector or technology — against the specific enumerated uses in each of the eight categories. A healthcare AI system that recommends treatment is not automatically in scope; an AI system used by a credit institution to determine whether to grant a consumer loan is. The match must be specific: the system must actually perform the function described in Annex III, not merely operate in the same industry. Sector adjacency is not classification.
Annex III use case matched → proceed to Step 6 (filter check)
No Annex III match → proceed to Step 7 (lower tiers)
Step 6: Apply the Article 6(3) filter — is the system excluded from high-risk despite the Annex III match?
Even if the system falls within an Annex III category, it is not high-risk if it satisfies the Article 6(3) filter: narrow procedural task only; or it improves a previously completed human activity without direct impact on natural persons; or it detects patterns without replacing human assessment; or it performs only preparatory tasks for an Annex III assessment. Document the reasoning thoroughly and notify the Commission if you rely on this filter. If the filter applies, the system falls into the limited-risk or minimal-risk tier. If the filter does not apply, the system is confirmed high-risk under Route 2 — provider obligations (Art. 9–17, 43–49, 72) apply in full from 2 August 2026.
Filter does not apply → High-risk (Route 2) — full obligations
Filter applies → proceed to Step 7 (lower tiers) + notify Commission
Step 7: Check Article 50 — does the system trigger any limited-risk transparency obligation?
For systems confirmed as not prohibited and not high-risk, check whether any Article 50 transparency obligation applies: chatbot/conversational AI (Art. 50(1) — disclose AI identity); emotion recognition or biometric categorisation (Art. 50(3) — inform affected persons); deepfake generation (Art. 50(2) and (4) — machine-readable disclosure); or AI-generated text for public information (Art. 50(4) — disclosure required). If none of these apply, the system is minimal-risk — no AI Act obligations. Optionally consider voluntary codes of conduct under Article 95.
Art. 50 trigger present → Limited-risk — transparency obligations only
No Art. 50 trigger → Minimal-risk — no mandatory obligations
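
The seven steps compose into a strictly sequential decision procedure. The sketch below encodes that flow; the boolean fields stand in for the legal analysis each step requires, and all names are illustrative assumptions, so treat it as a structural aid rather than a classification tool.

```python
from dataclasses import dataclass

@dataclass
class AISystemFacts:
    """Hypothetical inputs; each field stands in for the legal analysis at one step."""
    is_ai_system: bool               # Step 1: Article 3(1) definition met?
    has_eu_nexus: bool               # Step 2: Article 2 territorial scope
    prohibited_practice: bool        # Step 3: any Article 5(1)(a)-(h) match
    annex_ii_safety_component: bool  # Step 4: regulated product or safety component
    annex_iii_match: bool            # Step 5: specific Annex III use case matched
    art_6_3_filter_applies: bool     # Step 6: documented filter condition satisfied
    art_50_trigger: bool             # Step 7: chatbot, emotion rec., deepfake, AIGC text

def classify(facts: AISystemFacts) -> str:
    """Walk the 7-step framework above in order (a sketch, not legal advice)."""
    if not facts.is_ai_system:
        return "Out of scope: not an AI system under Article 3(1)"
    if not facts.has_eu_nexus:
        return "Out of scope: no EU nexus under Article 2"
    if facts.prohibited_practice:
        return "PROHIBITED (Article 5): must not be placed on the market"
    if facts.annex_ii_safety_component:
        return "HIGH-RISK, Route 1 (Article 6(1))"
    if facts.annex_iii_match and not facts.art_6_3_filter_applies:
        return "HIGH-RISK, Route 2 (Article 6(2))"
    # Relying on the Article 6(3) filter requires documentation and Commission notification
    if facts.art_50_trigger:
        return "LIMITED-RISK (Article 50 transparency obligations)"
    return "MINIMAL-RISK (no mandatory obligations; Article 95 codes optional)"

# Example: a CV-screening tool matches Annex III § 4 and the filter does not apply
facts = AISystemFacts(True, True, False, False, True, False, False)
print(classify(facts))  # HIGH-RISK, Route 2 (Article 6(2))
```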

Common Misclassification Traps

⚠️ Trap 1: Classifying by sector instead of by specific use case

The most common misclassification error. Annex III classifies by specific use case functions, not by industry. An AI system used by a hospital is not automatically high-risk; an AI system that performs triage scoring that influences admission decisions likely is. Always identify the precise function the AI performs and match it to the specific text of Annex III — not the general sector heading.

⚠️ Trap 2: Assuming "human in the loop" removes high-risk classification

Having a human review AI output does not automatically remove a system from the high-risk classification. Annex III captures systems that "make or significantly influence" decisions — even advisory or recommender systems can meet this threshold if their recommendations are routinely followed or if the human reviewer lacks meaningful ability to assess or override them. Human oversight is a compliance obligation for high-risk systems, not a classification escape hatch.

⚠️ Trap 3: Misapplying the Article 6(3) filter without documentation

The Article 6(3) "not high-risk" filter requires documented reasoning and Commission notification — it is not a quiet internal decision. Providers who rely on the filter without a documented assessment and notification are effectively self-classifying as lower than high-risk without the procedural protections that make that determination defensible. Best practice: treat the filter analysis as a mini-conformity assessment — document why each of the four criteria is or is not satisfied, and keep the record available for national authority inspection.

⚠️ Trap 4: Overlooking the Article 25 reclassification as a provider

A deployer that substantially modifies a high-risk AI system, or that puts a general-purpose AI system to a high-risk use case not covered by the original provider's conformity assessment, is reclassified as a provider under Article 25. This catches companies that fine-tune, retrain, or materially adapt third-party AI systems and then deploy them in high-risk contexts — the downstream company inherits all provider obligations. This is particularly relevant for enterprise deployers using third-party AI models and customising them for regulated industries.

Classification Grey Areas — GPAI, Bundled AI, and SaaS Deployment

🔘 Hard classification cases requiring legal analysis
GPAI dual use
General-purpose AI models — large language models, multimodal models, foundation models — are regulated under Chapter V of the AI Act as GPAI models, not under the high-risk framework. However, a GPAI model that is fine-tuned or integrated by a downstream provider for a specific high-risk use case (e.g. a legal AI assistant used to support court proceedings) is subject to high-risk provider obligations for that downstream system. The GPAI model provider and the high-risk system provider may be different legal entities, each with separate obligations.
Bundled AI features
Software products that include AI as one feature among many — an enterprise HR platform, an ERP system, or a CRM tool that includes an AI-powered analytics module — require feature-level classification. The product as a whole is not high-risk; the specific AI feature must be assessed. If that feature performs CV-screening or employee performance scoring, it is high-risk regardless of the platform it is delivered through. The provider of the product is the provider of the AI feature for classification purposes.
SaaS and API delivery
AI systems delivered as a service (SaaS or API) reach EU deployers without a physical product placement. The AI Act applies regardless of delivery model — cloud-based, API-based, or on-premises delivery does not affect classification. For non-EU providers delivering to EU deployers via API, the service is considered "placed on the EU market" when the API is made available to EU customers. The EU authorised representative requirement under Article 22 applies equally to SaaS and API-delivered high-risk systems.
Post-market reclassification
AI systems that are initially classified as not high-risk may become high-risk if their intended purpose materially changes — either through provider update, marketing expansion, or deployer modification under Article 25. Providers must monitor how their systems are used in practice and reassess classification when use patterns diverge from the originally assessed intended purpose. This is particularly relevant for general-purpose tools whose user base evolves to include regulated sector deployers.
⚖️ AI Governance & Risk Legal Advice

Need expert help classifying your AI systems under the EU AI Act?

AI Act classification is the foundation of every compliance programme — and the analysis is rarely straightforward. Our AI law team advises technology providers, enterprise deployers, and non-EU organisations on system-by-system classification assessments, Article 6(3) filter documentation, Commission notification procedures, and the full high-risk compliance pathway from technical documentation to CE marking and EU database registration. We work across all eight Annex III sectors and across both Annex II product-embedded and standalone AI systems.

Speak to our AI Law Team →

Oleg Prosin is the Managing Partner at WCR Legal, focusing on international business structuring, regulatory frameworks for FinTech companies, digital assets, and licensing regimes across various jurisdictions. He works with founders and investment firms on compliance, operating models, and cross-border expansion strategies.