🇪🇺 EU AI Act Compliance Guide · 2025–2027

The EU Artificial Intelligence Act: What Non-EU Providers and Deployers Must Know About Placing AI Systems on the EU Market

Regulation (EU) 2024/1689 reaches far beyond Europe's borders. Any company whose AI system is placed on the EU market, put into service in the EU, or whose AI output is used within the EU is subject to the AI Act — regardless of where that company is incorporated. This guide covers extraterritorial scope, the risk classification system, prohibited practices, high-risk obligations, and the compliance timeline that every non-EU provider and deployer needs to understand.

📋 Regulation (EU) 2024/1689 ⛔ Prohibited practices: in force Feb 2025 🏛 High-risk obligations: Aug 2026 🌍 Extraterritorial reach

Section 1 — What the EU AI Act Is and Who the Key Actors Are

Regulation (EU) 2024/1689 — the EU Artificial Intelligence Act — is the world's first comprehensive binding legal framework for artificial intelligence. It entered into force on 1 August 2024 and applies in the main from 2 August 2026, with earlier dates for prohibited practices (2 February 2025) and general-purpose AI models (2 August 2025), and a later date (2 August 2027) for AI embedded in Annex I regulated products. Unlike most EU regulations, the AI Act is designed with explicit extraterritorial reach: it applies to any AI system that is placed on the EU market, put into service in the EU, or whose outputs are used within the EU, regardless of where the provider is established.

The AI Act does not regulate AI systems in the abstract — it regulates actors in the AI value chain according to their role and the risks their AI systems present. Understanding which role your organisation plays is the starting point for any compliance analysis. The obligations, liability exposure, and timelines differ significantly depending on whether you are a provider, deployer, importer, or distributor.

📋 Legal Basis — Article 3 AI Act

Article 3(1) sets out the threshold definition on which the entire compliance analysis builds:

"An AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." — Article 3(1), Regulation (EU) 2024/1689

This definition is intentionally broad and technology-neutral. It captures machine learning models, generative AI systems, expert systems with adaptive elements, and AI components embedded in products. Software that processes inputs deterministically according to fixed rules — without inference — is generally not an AI system under the Act.

The Four Key Actors — Definitions and Obligations

The AI Act creates distinct obligations for each actor in the AI supply chain. The same organisation may occupy more than one role simultaneously — for example, a company that develops an AI model (provider) and also uses it internally to make decisions about employees (deployer) holds obligations in both capacities.

🏭
Provider
Article 3(3) AI Act

A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model and places it on the market or puts it into service under their own name or trademark — whether for payment or free of charge.

Providers bear the heaviest obligations under the AI Act: conformity assessments, technical documentation, quality management systems, EU declarations of conformity, CE marking, and post-market monitoring. For non-EU providers, the obligations apply the moment a system reaches the EU market.

Highest obligation tier
🖥
Deployer
Article 3(4) AI Act

A natural or legal person, public authority, agency or other body that uses an AI system under its authority — except where the system is used in the course of personal, non-professional activity.

Deployers are typically enterprise customers of AI providers — businesses that integrate a third-party AI system into their operations. Deployers of high-risk AI systems have significant independent obligations: human oversight, use in accordance with instructions, fundamental rights impact assessments, and incident reporting.

Significant obligations for high-risk AI
📦
Importer
Article 3(6) AI Act

A natural or legal person established or located in the EU that places on the EU market an AI system bearing the name or trademark of a natural or legal person established outside the EU.

Importers function as a compliance gateway for non-EU-provider products entering the EU market. They must verify that the provider has conducted conformity assessments, that documentation is complete, and that the provider has appointed an EU authorized representative. If a provider fails its obligations, the importer may become liable.

EU-established; gateway role
🔗
Distributor
Article 3(7) AI Act

A natural or legal person in the supply chain — other than the provider or importer — that makes an AI system available on the EU market without modifying its properties.

Distributors must verify that the AI system bears the CE marking (for high-risk systems), that it is accompanied by the required documentation, and that it has not been modified in a way that may affect its compliance. A distributor that modifies an AI system or places it on the market under its own name may be reclassified as a provider.

Supply chain compliance role
⚖️
When a deployer becomes a provider: the AI Act reclassifies a deployer as a provider — with full provider obligations — in three situations: (1) the deployer places a high-risk AI system on the market under their own name or trademark; (2) the deployer makes a substantial modification to a high-risk AI system; or (3) the deployer modifies the intended purpose of a system that was not high-risk in a way that makes it high-risk. Businesses that fine-tune foundation models and deploy the resulting system to customers should assess carefully whether they have crossed into provider status under Article 25.

Section 2 — Extraterritorial Scope: How Non-EU Providers Are Caught

The AI Act's extraterritorial reach is one of its most commercially significant features. Article 2 establishes that the regulation applies not on the basis of where an AI system is developed or where its provider is established, but on the basis of where the system enters the EU market or where its output is used. A company headquartered in the United States, Singapore, or the United Kingdom that provides AI systems to EU customers, or whose AI outputs are used by EU-located parties, is subject to the AI Act in the same way as an EU-based company.

This approach mirrors the EU's GDPR extraterritorial model — and like GDPR, it is not merely theoretical. The AI Act establishes enforcement mechanisms specifically designed to reach non-EU entities: the EU authorized representative requirement, market surveillance authority powers, and a cross-border enforcement framework coordinated by the European AI Office.

The Four Article 2 Triggers — When Non-EU Entities Are Caught

1

Placing an AI system on the EU market

"Placing on the market" means making an AI system available for the first time on the EU market — whether for payment or free of charge. A non-EU provider that offers its AI system to EU businesses or consumers through a website, API, or distribution channel has placed its system on the EU market and is within the regulation's scope. Physical presence in the EU is not required.

2

Putting an AI system into service in the EU

"Putting into service" means supplying an AI system directly to a deployer or user for first use in the EU. Where a non-EU company installs or configures an AI system for use by an EU-located business, it has put the system into service in the EU. This covers bespoke deployments, enterprise software integrations, and SaaS platforms configured for EU use.

3

Output of the AI system is used in the EU

This is the broadest trigger: even if an AI system is never placed on the EU market and is operated entirely outside the EU, if its output — predictions, decisions, content, recommendations — is used within the EU, the regulation applies to the provider and deployer. A non-EU AI system generating credit decisions, medical diagnoses, or hiring recommendations that affect EU-located individuals triggers the regulation at the output-use stage.

4

Deployers established or located in the EU (including EU arms of non-EU groups)

Where a company established outside the EU has a subsidiary, branch, or representative office in the EU and uses an AI system through or from that EU presence, the deployer obligations under the AI Act apply to the EU-established entity. Multi-national groups with EU subsidiaries that use AI systems developed or procured from outside the EU cannot use the non-EU parent structure to avoid deployer compliance obligations.

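The four triggers above lend themselves to a simple first-pass screen. The sketch below (in Python, with entirely hypothetical field and function names) over-includes by design: any trigger firing means a full legal scope analysis is warranted, not that liability is established.

```python
# Illustrative first-pass Article 2 scope screen — a triage aid, not legal advice.
# All names are hypothetical; the four checks mirror the triggers listed above.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    offered_to_eu_market: bool     # sold or offered to EU customers (trigger 1)
    put_into_service_in_eu: bool   # installed/configured for EU deployers (trigger 2)
    output_used_in_eu: bool        # predictions or decisions consumed in the EU (trigger 3)
    eu_established_deployer: bool  # an EU subsidiary or branch uses the system (trigger 4)

def article_2_triggers(profile: AISystemProfile) -> list[str]:
    """Return the Article 2 triggers that appear to fire; empty list = likely out of scope."""
    hits = []
    if profile.offered_to_eu_market:
        hits.append("placing on the EU market")
    if profile.put_into_service_in_eu:
        hits.append("putting into service in the EU")
    if profile.output_used_in_eu:
        hits.append("output used in the EU")
    if profile.eu_established_deployer:
        hits.append("EU-established deployer")
    return hits
```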
Supply Chain Obligations — How Responsibility Flows

The AI Act distributes compliance obligations across the supply chain according to actor role. The following maps how obligations flow for a common scenario: a non-EU provider whose AI system is imported and distributed to EU deployers.

Non-EU Provider → EU Importer → EU Deployer (common structure)
Non-EU provider
Must comply with all provider obligations (conformity, documentation, QMS). Must appoint EU authorized representative. Remains primarily responsible for AI Act compliance of the system.
EU importer
Must verify provider compliance before placing on market. Cannot place non-compliant system on market. Bears liability if provider obligations are unmet and importer fails its verification duty.
EU deployer
Must use system within intended purpose per provider instructions. Independent obligations for high-risk AI: human oversight, impact assessment, monitoring, incident reporting.
Non-EU Provider → EU-Based Direct Customer (no importer)
Non-EU provider
Directly responsible for all provider compliance obligations vis-à-vis the EU market. The absence of an importer does not reduce obligations — it increases the importance of the EU authorized representative.
EU authorized representative
Mandatory appointment for high-risk AI providers not established in EU. Acts as the primary contact point for national competent authorities (NCAs). Responsible for maintaining documentation and cooperating with enforcement.
EU deployer (direct customer)
Takes on deployer obligations immediately upon use. For high-risk systems, should ensure that contractual arrangements with the provider cover the AI Act compliance framework, instructions for use, and data governance.
Non-EU Provider → Distributor → EU Deployer (multi-step distribution)
Non-EU provider
Remains responsible for system compliance. Any modification of the system by the distributor that affects its properties may shift provider liability to the distributor. Contractual protection is essential.
Distributor
Must verify CE marking and documentation before making system available. Must notify provider and authorities of non-compliance discovered after distribution. Cannot modify system without consequence analysis.
EU deployer
Same independent deployer obligations apply regardless of the number of supply chain steps between provider and end use. The deployer's obligations are determined by the AI system's risk classification, not by how it was acquired.
🛂 EU Authorized Representative — Article 22 Requirement for Non-EU Providers
Mandatory for high-risk AI providers: non-EU providers placing high-risk AI systems on the EU market must appoint, by written mandate, an EU authorized representative before the system is placed on the market. The representative must be established in the Union.
Role and responsibilities: the authorized representative acts as the legal contact point for national competent authorities and market surveillance authorities. They must maintain a copy of the EU Declaration of Conformity and technical documentation for 10 years after the last system has been placed on the market — and must provide this to authorities upon request.
Mandate requirements: the written mandate must empower the representative to cooperate with NCAs, provide documentation on request, and take corrective action as directed. A representative without an adequate mandate cannot effectively fulfil the Article 22 requirement — NCAs may challenge inadequate mandates as non-compliance with the appointment obligation.
Representative liability: the authorized representative may be held liable alongside the non-EU provider for violations of the AI Act. This creates a strong incentive for EU representatives to conduct due diligence on providers before accepting mandates — the market for AI Act authorized representatives will develop similarly to the GDPR EU representative market.
Situations excluded from Article 2 scope — AI Act does not apply
AI systems used exclusively for military, defence, or national security purposes
AI systems used solely for research and development not yet placed on the market
AI used by natural persons for purely personal, non-professional purposes
Open-source general-purpose AI models released under free and open-source licences (with limitations: the copyright policy and training-data summary obligations still apply)
AI systems covered by Union legislation on vehicle type approval where AI-specific requirements already apply
Third-country public authorities using AI under international law enforcement cooperation agreements

Section 3 — The Risk Classification System: Four Tiers and GPAI

The AI Act is a risk-based regulation: obligations are calibrated to the potential harm an AI system can cause. The classification system has four tiers — prohibited practices, high-risk AI systems, limited-risk AI systems, and minimal-risk AI systems — plus a distinct track for general-purpose AI (GPAI) models that cuts across the risk hierarchy. Correctly classifying an AI system is the foundational step of any AI Act compliance programme: classification determines which obligations apply, when they apply, and what enforcement consequences attach to non-compliance.

Classification is determined by the intended purpose of the AI system — the use for which a system is specifically designed according to its provider's instructions for use, technical documentation, and marketing materials. A system used for a purpose other than its intended purpose may need to be re-assessed against the classification criteria applicable to its actual use. Deployers that repurpose AI systems beyond their intended use take on heightened compliance obligations.

The Four Risk Tiers

⛔ Prohibited
Unacceptable risk — banned outright
Article 5
Eight specific AI practices are banned under Article 5 because their risk to fundamental rights, human dignity, or democratic values is deemed unacceptable regardless of the use case. Providers and deployers whose AI systems fall within any prohibited practice category face immediate enforcement liability — there is no conformity process or authorisation that permits a prohibited practice. The prohibition applies from 2 February 2025. See Section 4 for a detailed breakdown of each prohibition.
In force from: 2 February 2025
Max penalty: €35M or 7% global turnover
Authorisation available: No — banned outright
⚠ High risk
Significant potential harm — full compliance required
Annexes I & III
High-risk AI systems are permitted but subject to the most comprehensive pre-market and ongoing compliance obligations in the AI Act: conformity assessments, technical documentation, quality management systems, human oversight mechanisms, and registration in the EU database. High-risk systems are defined by reference to two annexes: Annex I (AI as a safety component of a product covered by EU product safety legislation) and Annex III (eight stand-alone categories of high-impact use cases, detailed below). Obligations become applicable on 2 August 2026 (Annex III) and 2 August 2027 (Annex I products).
In force from: 2 Aug 2026 (Annex III) · 2 Aug 2027 (Annex I)
Max penalty: €15M or 3% global turnover
Conformity assessment: Required before market placement
ℹ Limited risk
Transparency obligations only
Article 50
Limited-risk AI systems face only transparency obligations — primarily disclosure requirements so users know they are interacting with an AI. Chatbots and conversational AI must disclose their non-human nature at the outset of interaction. AI-generated content, deepfakes, and synthetic audio or video must be labelled as artificially generated. General-purpose AI models that generate content must implement technical solutions to mark their outputs as AI-generated in a machine-readable format. No conformity assessment or pre-market registration is required.
Key obligation: Disclosure / labelling of AI nature
Pre-market requirements: None beyond disclosure design
Examples: Chatbots, deepfakes, AI-generated text
✓ Minimal risk
No mandatory obligations — voluntary codes of conduct
Article 95
The vast majority of AI applications fall into the minimal-risk category — spam filters, AI-based video games, AI-enabled product recommendations, basic process automation. No mandatory compliance obligations apply. Providers and deployers of minimal-risk AI are encouraged but not required to adhere to voluntary codes of conduct developed under Article 95 of the AI Act. Classification as minimal-risk does not exempt an AI system from other applicable EU laws — GDPR, consumer protection law, and sector-specific regulations continue to apply.
Mandatory obligations: None
Voluntary measures: Article 95 codes of conduct
Examples: Spam filters, recommendation engines, AI in video games
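A note on the limited-risk tier's machine-readable marking duty: the Act requires that AI-generated content be detectable as such, but prescribes no particular format. As a loose illustration only (real deployments may rely on emerging provenance standards such as C2PA content credentials or watermarking, and every field name here is an assumption), a provider might attach a provenance record to each generated artefact:

```python
# Illustrative only: Article 50 requires marking AI-generated content in a
# machine-readable format but does not prescribe this (or any) schema.
import json
from datetime import datetime, timezone

def tag_generated_content(content: str, model_name: str) -> dict:
    """Bundle generated text with a machine-readable provenance record."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,  # the disclosure itself
            "generator": model_name,  # hypothetical field names throughout
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

print(json.dumps(tag_generated_content("Example synthetic paragraph.", "demo-model-v1"), indent=2))
```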

High-Risk AI — The Eight Annex III Categories

Any AI system whose primary intended purpose falls within one of these eight categories is classified as high-risk under the AI Act and subject to the full provider and deployer obligation framework from 2 August 2026.

1

Biometric identification and categorisation

Remote biometric identification systems; biometric categorisation systems attributing natural persons to specific categories

2

Critical infrastructure management

AI used as safety components in management and operation of critical digital infrastructure, road traffic, water, gas, heating, and electricity supply

3

Education and vocational training

AI determining access to educational institutions, assessing learners, evaluating learning outcomes, monitoring student behaviour

4

Employment and workers management

AI for recruitment, candidate selection, task allocation, performance monitoring, promotion and termination decisions, access to self-employment

5

Essential private and public services

AI determining access to healthcare, social benefits, creditworthiness assessment, insurance risk assessment, emergency services dispatch

6

Law enforcement

AI used to assess individual risk of criminal offending or reoffending, polygraphs, crime analytics, evidence reliability assessment, facial recognition by law enforcement

7

Migration, asylum, and border control

AI assessing risk of irregular migration, verifying authenticity of travel documents, processing asylum applications, screening at borders

8

Administration of justice and democratic processes

AI assisting courts in legal research or fact-finding; AI used in electoral campaigns; AI influencing elections or referendums

🤖 General-Purpose AI (GPAI) — A Separate Compliance Track

General-purpose AI models — large language models, multimodal foundation models, and other AI models capable of performing a wide range of tasks — are subject to a dedicated compliance framework under Chapter V of the AI Act (Articles 51–56), applicable from 2 August 2025. GPAI obligations apply at the model level, not the application level, and are directed at GPAI model providers — companies that develop and make GPAI models available to downstream providers and deployers.

All GPAI Model Providers

Technical documentation, transparency to downstream providers, a copyright compliance policy, and a sufficiently detailed summary of training content. Open-source GPAI models: reduced obligations (the copyright policy and training-content summary are still required).

GPAI with Systemic Risk (>10²⁵ FLOPs)

All base obligations plus: model evaluation against state-of-the-art benchmarks, adversarial testing, incident and serious malfunction reporting to the European AI Office, cybersecurity measures, and documentation of energy consumption.

How to classify your AI system — a practical starting sequence

1
Does the system's intended purpose fall within any Article 5 prohibited practice? If yes → prohibited regardless of other classification.
2
Is the AI system a GPAI model (trained on broad data, capable of diverse tasks, made available to others)? If yes → GPAI obligations apply independently of product-level classification.
3
Does the AI system serve as a safety component of a product covered by Annex I EU product safety legislation (medical devices, machinery, aviation, vehicles)? If yes → Annex I high-risk from 2 August 2027.
4
Does the system's intended purpose fall within any of the eight Annex III categories? If yes → high-risk from 2 August 2026. Check against actual use in practice — deployers repurposing a system may trigger high-risk classification even if the provider did not intend it.
5
Does the system involve interaction with humans that could mislead them about its AI nature, or does it generate synthetic content? If yes → limited risk transparency obligations under Article 50.
6
If none of the above: the system is minimal risk. No mandatory AI Act obligations apply — but other EU law continues to apply based on the specific use case.
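The six steps above are, in effect, a decision procedure, and encoding them can help keep an inventory review consistent. A minimal sketch, assuming self-assessed inputs (every field name is hypothetical; the output is a triage label, not a legal determination):

```python
# First-pass encoding of the six-step classification sequence above.
from dataclasses import dataclass

SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold for GPAI systemic risk

@dataclass
class SystemFacts:
    matches_article_5_practice: bool   # step 1 — prohibited practice
    is_gpai_model: bool                # step 2 — general-purpose AI model
    training_compute_flops: float
    annex_i_safety_component: bool     # step 3 — safety component of regulated product
    annex_iii_category: bool           # step 4 — stand-alone high-risk use case
    human_facing_or_generative: bool   # step 5 — Article 50 transparency trigger

def classify(facts: SystemFacts) -> list[str]:
    labels = []
    if facts.matches_article_5_practice:
        return ["prohibited (Article 5) — discontinue immediately"]
    if facts.is_gpai_model:
        labels.append("GPAI obligations (Chapter V)")
        if facts.training_compute_flops > SYSTEMIC_RISK_FLOPS:
            labels.append("GPAI with systemic risk (Article 51)")
    if facts.annex_i_safety_component:
        labels.append("high-risk (Annex I) — from 2 Aug 2027")
    elif facts.annex_iii_category:
        labels.append("high-risk (Annex III) — from 2 Aug 2026")
    if facts.human_facing_or_generative:
        labels.append("transparency obligations (Article 50)")
    return labels or ["minimal risk — voluntary codes of conduct only"]
```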

Section 4 — Prohibited AI Practices: What the AI Act Bans Outright

Article 5 of the AI Act prohibits eight categories of AI practices that the EU legislature determined present risks so severe that no conformity process, authorisation, or justification can make them acceptable. These prohibitions are absolute — they apply to providers, deployers, importers, and distributors alike, without regard to the commercial purpose or technical sophistication of the system. Critically, the prohibited practices chapter became applicable on 2 February 2025, ahead of the main high-risk obligations framework.

In force since 2 February 2025: the Article 5 prohibited practices apply now — not from 2026. Any AI system or practice that falls within one of the eight prohibitions must be discontinued immediately. Non-EU providers whose systems are used within the EU are within scope from the same date. The maximum penalty for prohibited practices is €35 million or 7% of worldwide annual turnover, whichever is higher — the highest sanction level in the AI Act.
Art. 5(1)(a)
⛔ Prohibited

Subliminal, manipulative, or deceptive techniques

AI systems that deploy techniques operating below the threshold of conscious perception — subliminal techniques — or that exploit psychological weaknesses or biases to distort behaviour in a way that causes or is likely to cause significant harm. Covers systems using dark patterns, personalised persuasion at scale, or manipulative recommendation mechanics designed to bypass rational decision-making.

Art. 5(1)(b)
⛔ Prohibited

Exploitation of vulnerabilities of specific groups

AI systems that exploit vulnerabilities of specific groups — based on age (children, elderly), disability, or socio-economic situation — in a way that distorts their behaviour significantly and is likely to cause them harm. An AI system targeting financially distressed individuals with manipulative lending offers, or targeting children with addictive content mechanics, falls within this prohibition.

Art. 5(1)(c)
⛔ Prohibited

Social scoring

AI systems that evaluate or classify natural persons or groups based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to detrimental or unfavourable treatment in social contexts unrelated to those in which the data was generated, or treatment that is unjustified or disproportionate. Unlike earlier drafts, the final Act applies this prohibition to private actors as well as public authorities. It targets generalised social scoring, not sector-specific assessments such as creditworthiness within financial services.

Art. 5(1)(d)
⛔ Prohibited

Individual criminal risk assessment

AI systems making risk assessments of natural persons to predict future criminal or reoffending behaviour based solely on profiling or assessment of personality traits and characteristics. The prohibition targets predictive policing tools that generate individual-level risk scores without grounding in objective, verifiable individual facts and circumstances directly related to the individual's past conduct.

Art. 5(1)(e)
⛔ Prohibited

Untargeted facial image scraping for recognition databases

AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. The prohibition targets the practice of building large-scale face recognition databases by collecting images without individual knowledge or consent — a practice used by several surveillance technology companies to build commercial facial recognition services.

Art. 5(1)(f)
⛔ Prohibited

Emotion recognition in workplace or educational settings

AI systems used to infer emotional states of natural persons in the context of the workplace or educational institutions. Systems that analyse employee facial expressions, voice stress, or physiological signals during work interactions to assess their emotional state — for performance review, monitoring, or productivity management — are prohibited. AI-enabled wellbeing monitoring tools marketed to employers are within the scope of this prohibition if they infer emotion.

Exception: emotion inference systems used for medical or safety purposes — such as driver drowsiness detection — are excluded from the prohibition.
Art. 5(1)(g)
⛔ Prohibited

Biometric categorisation inferring sensitive attributes

AI systems that categorise individuals based on biometric data to infer or deduce their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This prohibition targets systems that use facial analysis or voice analysis to infer these protected characteristics — a common feature of surveillance and profiling technologies.

Note: real-time remote biometric identification for law enforcement purposes (under authorisation and strict conditions) is treated separately under Article 5(1)(h); those narrow law enforcement exceptions do not extend to commercial operators.
Art. 5(1)(h)
⛔ Prohibited (with exceptions)

Real-time remote biometric identification in public spaces (law enforcement)

The use of real-time remote biometric identification systems in publicly accessible spaces by law enforcement is prohibited — with three narrow exceptions requiring prior judicial or independent administrative authorisation: (1) targeted search for missing persons or victims of trafficking or sexual exploitation; (2) prevention of a specific and present terrorist threat; (3) identification of persons suspected of the most serious criminal offences carrying imprisonment of at least 4 years.

Critical point for non-law-enforcement providers: this prohibition applies specifically to law enforcement use. However, commercial biometric identification systems that could be used for these purposes face the high-risk classification framework under Annex III Category 1 — not the prohibited practices framework.
⚖️ Compliance implications for non-EU providers and deployers
Immediate scope: the prohibitions apply to any provider or deployer whose system's output is used within the EU — a US-based company whose emotion recognition software is used by EU employers, or whose manipulative recommendation engine reaches EU users, is within scope of Article 5 from 2 February 2025.
Product audit required: non-EU providers with EU market exposure should conduct an immediate audit of their AI systems and features against the eight prohibited practice categories. Systems marketed for emotion detection, biometric analysis, social scoring applications, or manipulative personalisation require priority review.
Deployer liability: deployers that use systems for prohibited purposes — even if the system was not designed or marketed for that purpose — are independently liable under Article 5. The prohibition attaches to the use, not only to the design. EU enterprise customers using AI systems in prohibited ways will face direct enforcement exposure.
Contractual risk allocation: non-EU providers supplying AI systems to EU deployers should include explicit use restrictions in their terms of service prohibiting deployment for Article 5 purposes — both to comply with their own provider obligations and to manage downstream liability exposure if EU customers misuse the system.

Section 5 — High-Risk AI: Obligations for Providers and Deployers

For AI systems classified as high-risk under Annex I or Annex III of the AI Act, both providers and deployers face significant mandatory obligations. Provider obligations are the most extensive in the Act — they span the full system lifecycle from design to post-market monitoring. Deployer obligations, while narrower in scope, are substantial and independently enforced. Both sets of obligations apply from 2 August 2026 for most high-risk systems, with Annex I product-embedded systems following from 2 August 2027.

Provider Obligations — Articles 9–17 and 43–72

Providers of high-risk AI systems must satisfy a comprehensive set of pre-market and ongoing obligations. These obligations cannot be contracted out to deployers — the provider remains directly liable for compliance with the technical and procedural requirements regardless of how the system is distributed or deployed.

Art. 9
Risk Management System

A continuous, iterative risk management process throughout the system's lifecycle. Providers must identify and analyse known and reasonably foreseeable risks, estimate and evaluate risks arising from intended use and foreseeable misuse, and adopt risk mitigation measures. Residual risk must be judged acceptable before market placement.

Art. 10
Data and Data Governance

Training, validation, and testing data must meet quality criteria: datasets must be relevant, sufficiently representative, and free of errors to the extent possible. Providers must address known biases and ensure data is appropriate for the system's intended purpose. Data governance practices must be documented.

Art. 11
Technical Documentation

Providers must draw up and maintain comprehensive technical documentation (Annex IV format) before placing the system on the market. Documentation covers: system description and intended purpose, development methodology, training data, testing and validation results, risk management output, and monitoring procedures. Must be kept current throughout the system's lifetime.

Art. 12
Record-Keeping and Logging

High-risk AI systems must have automatic logging capability to record events relevant to system operation — including periods of use, reference databases consulted, input data, and decisions. Providers and deployers must retain logs for at least six months (Articles 19 and 26(6)), unless applicable Union or national law requires a longer period. A minimal log-record sketch follows this list of provider obligations.

Art. 13
Transparency and Instructions for Use

Systems must be designed and developed to ensure adequate transparency so deployers can interpret outputs and use them appropriately. Providers must supply instructions for use covering: system identity and purpose, accuracy and performance metrics, human oversight requirements, technical measures available, and circumstances requiring operator intervention.

Art. 14
Human Oversight

Systems must be designed and built to enable effective human oversight. This includes technical measures allowing natural persons to monitor, understand, and intervene in system operation — including the ability to override, interrupt, or disregard system output. The level of oversight must be commensurate with the risk and context of use.

Art. 15
Accuracy, Robustness & Cybersecurity

High-risk AI systems must achieve appropriate levels of accuracy throughout their lifecycle and be resilient against errors, faults, and inconsistencies. They must be sufficiently cyber-secure to prevent adversarial attacks that could alter behaviour, output, or performance. Providers must specify accuracy metrics in technical documentation.

Art. 17
Quality Management System (QMS)

Providers must put in place a documented quality management system covering: compliance strategy; techniques for system design and development; testing and validation procedures; technical standards applied; data management procedures; risk management processes; post-market monitoring; incident reporting; and staff training. The QMS must be proportionate to the size of the provider organisation.
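Article 12's logging duty translates, in practice, into structured event records the system emits as it runs. The Act names event types but no schema, so the following sketch is an assumption throughout, field names included:

```python
# Sketch of an Article 12-style event log entry. The AI Act requires automatic
# logging of events such as periods of use, reference data consulted, and input
# data; it does not prescribe this structure.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HighRiskEventRecord:
    system_id: str
    session_start: str
    session_end: str
    reference_databases: list[str]  # databases against which inputs were checked
    input_reference: str            # pointer to the input data, not the data itself
    output_summary: str             # the decision or prediction produced
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Retention: Articles 19 (providers) and 26(6) (deployers) require logs to be
# kept for at least six months, unless other Union or national law says longer.
```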

📋 Conformity Pathway — Articles 43–49
43
Conformity assessment: most high-risk systems follow a self-assessment pathway, with the provider conducting the conformity assessment internally under the Annex VI internal-control procedure. Systems in certain Annex I product categories (medical devices, machinery, vehicles) follow the third-party notified body assessment required by the relevant sectoral legislation. Remote biometric identification systems require notified-body involvement unless the provider has applied harmonised standards in full.
47
EU Declaration of Conformity: providers must draw up and sign an EU Declaration of Conformity (Annex V format) declaring that the system complies with all applicable requirements of the AI Act. The Declaration must identify the provider, the system, the applicable standards, the conformity assessment procedure followed, and the notified body where applicable.
48
CE marking: once conformity is assessed and the Declaration drawn up, the provider must affix the CE marking — including to documentation, packaging, and, for digital systems, the digital interface. The CE marking on an AI system indicates conformity with the AI Act and any other applicable EU legislation (e.g., the Medical Devices Regulation). Non-EU providers remain responsible for CE marking even where their EU authorized representative manages the documentation.
49
Registration in the EU AI database: before market placement, providers must register high-risk AI systems in the public EU database maintained by the European Commission. Registration requires: provider identity, system name and version, intended purpose, countries of intended deployment, Declaration of Conformity reference, and post-market monitoring contact (an illustrative registration record follows this list). Non-EU providers register through their EU authorized representative.
72
Post-market monitoring (PMM): providers must operate a continuous post-market monitoring system to proactively collect and review experience from deployers and users. Serious incidents must be reported to the relevant national market surveillance authority no later than 15 days after the provider becomes aware of them, shortened to 10 days where a person has died and to 2 days for widespread infringements (Article 73). PMM data must feed back into the risk management system.
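The Article 49 registration step is essentially the submission of a structured record. A rough sketch of the fields listed above as a data structure; the binding field list is in Annex VIII, and these names are illustrative:

```python
# Illustrative Article 49 registration record for the EU AI database.
# Field names are assumptions; the authoritative list is Annex VIII.
from dataclasses import dataclass

@dataclass
class EUDatabaseRegistration:
    provider_name: str
    authorized_representative: str  # required for non-EU providers (Article 22)
    system_name: str
    system_version: str
    intended_purpose: str
    countries_of_deployment: list[str]
    declaration_of_conformity_ref: str
    pmm_contact: str                # post-market monitoring contact point

entry = EUDatabaseRegistration(
    provider_name="Example Corp (non-EU)",
    authorized_representative="Example EU Rep BV",
    system_name="HiringScreen",
    system_version="2.1",
    intended_purpose="CV screening for recruitment (Annex III, employment)",
    countries_of_deployment=["DE", "FR"],
    declaration_of_conformity_ref="DoC-2026-001",
    pmm_contact="compliance@example.com",
)
```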

Deployer Obligations — Articles 26 and 27

Deployers of high-risk AI systems — businesses and public bodies using a high-risk system under their authority — carry a distinct and independently enforced set of obligations. These obligations are not discharged by provider compliance; the deployer is directly responsible for how the system is deployed and used.

⚖️ Provider obligations vs. Deployer obligations — high-risk AI systems
🏭 Provider — pre-market and ongoing
Design, train, and test the system — technical obligations (Art. 9–15)
Draw up technical documentation (Annex IV) and maintain it
Operate a quality management system (Art. 17)
Conduct conformity assessment and draw up EU Declaration of Conformity (Art. 43, 47)
Affix CE marking and register in EU AI database (Art. 48, 49)
Maintain post-market monitoring and report serious incidents (Art. 72)
Appoint an EU authorized representative if established outside the EU (Art. 22)
🖥 Deployer — deployment and use obligations
Use the system in accordance with provider's instructions for use (Art. 26(1))
Assign competent persons to implement human oversight measures specified by provider (Art. 26(2))
Monitor system operation and report serious incidents to provider and national authority (Art. 26(5))
Retain logs automatically generated by the system for at least 6 months (Art. 26(6))
Inform and notify affected persons where required (Art. 26(11)) — relevant for employment and credit decisions
Conduct a Fundamental Rights Impact Assessment (FRIA) before first deployment if a public body, or if deploying in certain high-risk categories (Art. 27)
🔎
Fundamental Rights Impact Assessment (FRIA) — Article 27: deployers that are bodies governed by public law or private entities providing public services, and deployers using high-risk AI for creditworthiness assessment or for life and health insurance risk assessment and pricing, must complete a FRIA before first deployment. The assessment must evaluate: the purpose and context of deployment; affected persons and groups; concrete risks to fundamental rights (including discrimination, privacy, and access to justice); mitigation measures; and the deployer's procedures for managing identified risks. The FRIA must be submitted to the relevant market surveillance authority on request. A minimal working structure follows.
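Article 27 lists the elements a FRIA must contain but supplies no template (the AI Office is expected to provide a questionnaire). As a working structure only; every field below is an assumption layered on the Article 27 element list:

```python
# Sketch of a FRIA working record following the Article 27 element list.
from dataclasses import dataclass

@dataclass
class FRIARecord:
    deployment_purpose: str              # processes in which the system will be used
    deployment_context: str              # period and frequency of intended use
    affected_groups: list[str]           # categories of persons likely to be affected
    fundamental_rights_risks: list[str]  # e.g. discrimination, privacy, access to justice
    human_oversight_measures: str
    mitigation_and_governance: str       # steps if risks materialise; complaint channels
```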

Section 6 — Compliance Timeline, Penalties, and Your Action Plan

The EU AI Act does not apply all at once. It follows a graduated implementation schedule tied to four specific application dates, with the most severe restrictions — the prohibited practices — already fully in force since February 2025. Non-EU providers with EU market exposure cannot treat August 2026 as a single deadline: the compliance preparation window for high-risk AI systems involves technical, legal, and organisational work that typically requires 12–18 months to complete properly.

The Four Key Application Dates

1 August 2024
✓ In force

Entry into Force

Regulation (EU) 2024/1689 entered into force on 1 August 2024. The Act is binding as a matter of EU law from this date. General provisions, definitions, and institutional framework apply — including the establishment of the European AI Office and the framework for national competent authorities. Member States must begin designating national supervisory authorities from this date.

2 February 2025
✓ In force

Prohibited Practices Apply + GPAI Codes of Practice

All eight Article 5 prohibitions apply from this date — any AI system or feature falling within the prohibited categories must have been discontinued. The penalty for prohibited practices (€35M / 7% global turnover) is live. The development of codes of practice for general-purpose AI models (Article 56) also commences, with the European AI Office coordinating industry and civil society participation.

2 August 2025
⚡ Upcoming

GPAI Model Obligations Apply

Chapter V of the AI Act — covering general-purpose AI models — applies from this date. Providers of GPAI models placed on the EU market must comply with transparency obligations, draw up and maintain technical documentation (Annex XI/XII format), establish copyright compliance policies, and publish summaries of training data. Providers of systemic-risk GPAI models face additional obligations including adversarial testing, incident reporting, and cybersecurity measures. The codes of practice finalized before this date create the compliance framework.

2 August 2026
⚠️ Full application

Full Application — High-Risk AI, Conformity, Registration

The main high-risk AI obligations framework applies in full. All providers of Annex III high-risk AI systems must have: completed conformity assessments; drawn up technical documentation and EU Declarations of Conformity; affixed CE marking; registered systems in the EU AI database; and established quality management and post-market monitoring systems. Deployers must have implemented human oversight measures, completed Fundamental Rights Impact Assessments where required, and established log retention procedures. National market surveillance authorities begin enforcement from this date. Non-compliance after 2 August 2026 exposes providers and deployers to the full penalty framework.

Penalties — The Three-Tier Fine Structure

The AI Act uses a tiered penalty structure linked to the severity of the violation. Fines are imposed by the national competent authority of the Member State where the infringement occurred. For non-EU providers, the authority of the Member State where the EU authorized representative is based has jurisdiction. Fines are capped at the higher of the absolute amount or the percentage of worldwide annual turnover — meaning global revenue, not only EU revenue, is used to calculate the cap.

Tier 1 — Most severe
€35,000,000 or 7% global turnover
Applies to
Violations of Article 5 — prohibited AI practices. Any provider or deployer that places on the market, puts into service, or uses an AI system falling within the eight prohibited categories.
SME / startup note
For SMEs and startups, national authorities may impose fines up to the lower of the absolute cap or the percentage figure — but the Article 5 prohibition violations remain within this highest tier regardless of company size.
Tier 2 — Significant
€15,000,000 or 3% global turnover
Applies to
Violations of provider and deployer obligations under Chapters III and IV — including failure to conduct conformity assessment, absence of technical documentation, no QMS, no CE marking, no registration, breach of GPAI model obligations under Chapter V.
Scope
This tier covers the full range of high-risk AI compliance failures. It is the most relevant tier for non-EU providers with EU market exposure who have failed to build a compliant pre-market pathway. Each obligation failure is a separate infringement.
Tier 3 — Administrative
€7,500,000 or 1% global turnover
Applies to
Supplying incorrect, incomplete, or misleading information to notified bodies, national competent authorities, or the European AI Office. Applies to both providers and deployers in the context of audits, inquiries, and supervisory investigations.
Practical relevance
The information-supply obligation is ongoing — national supervisory authorities have power to request technical documentation, QMS records, conformity assessment files, and post-market monitoring reports. Inaccurate responses to these requests engage Tier 3 exposure.
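Because every cap is the higher of a fixed amount and a share of worldwide turnover, the exposure arithmetic is mechanical. A sketch using the tier structure from the table above (figures in euros):

```python
# Fine caps under the AI Act's penalty tiers: the cap is the HIGHER of the
# absolute amount and the percentage of total worldwide annual turnover.
# (For SMEs and startups, the Act flips this to the LOWER of the two.)
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Tier 1 — Article 5 violations
    "high_risk_obligations": (15_000_000, 0.03),  # Tier 2 — provider/deployer/GPAI duties
    "misleading_information": (7_500_000, 0.01),  # Tier 3 — incorrect info to authorities
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    absolute_cap, pct = TIERS[tier]
    return max(absolute_cap, pct * worldwide_turnover_eur)

# Example: €2bn global turnover, prohibited-practice case:
# max(35_000_000, 0.07 * 2_000_000_000) = €140,000,000 cap.
print(f"€{max_fine('prohibited_practices', 2_000_000_000):,.0f}")
```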

Your 6-Step AI Act Action Plan

For non-EU providers and deployers with EU market exposure, the following roadmap provides a structured approach to achieving compliance before 2 August 2026. Given the technical and organisational investment required, organisations that have not yet started compliance work should treat this as urgent.

1
Conduct an AI systems inventory and role assessment
Map every AI system your organisation develops, provides, imports, or deploys. For each system, determine your role in the AI Act supply chain (provider, deployer, importer, distributor) and the relevant application date. Identify any systems that may touch the eight prohibited categories and discontinue them immediately if not already done.
⚠️ Immediate — prohibited practice audit overdue since Feb 2025
2
Apply the risk classification framework to each system
For each system in your inventory, determine its risk tier using the Annex I / Annex III classification criteria. Assess whether any system falls within the eight Annex III high-risk categories. Apply the six-step sequence from Section 3: prohibited practice → GPAI → Annex I safety component → Annex III category → transparency trigger → minimal risk. Document the reasoning for each system's classification — this will be required for supervisory inquiries.
Q1–Q2 2025 — complete before technical documentation work begins
3
Appoint an EU authorized representative (providers only)
Non-EU providers with any high-risk AI system reaching the EU market must appoint an EU authorized representative under Article 22 before market placement. The representative must be established in the Union, must hold a written mandate, and may be held liable alongside the provider. If your system is already on the EU market without a representative, this is a current infringement — rectify immediately.
⚠️ Urgent — required before or concurrent with market placement
4
Build the technical compliance package for high-risk systems
For each high-risk AI system, build the full technical compliance package: risk management system (Art. 9); data governance documentation (Art. 10); Annex IV technical documentation (Art. 11); automatic logging capability (Art. 12); instructions for use (Art. 13); human oversight technical measures (Art. 14); accuracy and robustness testing (Art. 15); and quality management system documentation (Art. 17). This is the most time-intensive element — allow 6–12 months for complex systems.
H2 2025 – H1 2026 — must complete before conformity assessment
5
Complete conformity assessment, CE marking, and registration
Once the technical compliance package is complete, conduct the conformity assessment (self-assessment under Annex VI for most Annex III systems; a notified body for Annex I product-embedded systems and, where harmonised standards are not fully applied, biometric identification systems). Draw up the EU Declaration of Conformity (Annex V format) and affix CE marking. Register each high-risk AI system in the EU AI database before market placement. Registration requires: provider details, system description, intended purpose, countries of deployment, and a Declaration of Conformity reference number.
Q1–Q2 2026 — must complete before 2 August 2026
6
Establish ongoing compliance infrastructure
AI Act compliance is not a one-time project — it requires permanent compliance infrastructure: post-market monitoring with incident reporting procedures (Art. 72); technical documentation maintenance processes; deployer notification procedures for serious incidents; log retention and retrieval systems; and a contact point for national supervisory authority requests. Deployers must additionally establish human oversight procedures, log retention for 6 months, and complete Fundamental Rights Impact Assessments where required. Integrate AI Act compliance into your standard product lifecycle governance.
By August 2026 — and maintained on an ongoing basis
⚖️ AI Act Legal Advice

Need legal guidance on your EU AI Act compliance strategy?

Whether you are a non-EU provider mapping your systems for the first time, a GPAI model developer preparing for the August 2025 obligations, or an enterprise deployer building your human oversight and FRIA framework, our AI law practice advises on every stage of the EU AI Act compliance lifecycle — from risk classification and technical documentation to authorized representative appointments and national authority interactions.

Speak to our AI Law Team →

Oleg Prosin is the Managing Partner at WCR Legal, focusing on international business structuring, regulatory frameworks for FinTech companies, digital assets, and licensing regimes across various jurisdictions. Works with founders and investment firms on compliance, operating models, and cross-border expansion strategies.