EU AI Act Compliance Checklist: A Practical 10-Step Guide for AI Deployers (2026)
The EU AI Act is the world's first comprehensive AI regulation. Enforcement has already begun — prohibited practices and AI literacy obligations (Article 4) have been enforceable since February 2, 2025. Full enforcement of high-risk system requirements starts August 2, 2026. This checklist focuses on what enterprises using AI tools (deployers) must do — not organizations building them.
Who does the EU AI Act apply to?
The EU AI Act distinguishes three primary roles. Providers develop and place AI systems on the market. Deployers use an AI system under their authority in the course of a professional activity. Distributors make an AI system available on the market without modifying it. The vast majority of European enterprises are deployers.
The regulation applies wherever an AI system is placed on the EU market or its output is used in the EU. It therefore also covers non-EU companies whose AI systems affect individuals located in the EU. In practice, this means any global enterprise with customers, employees, or users in Europe is in scope.
Most enterprise AI usage — ChatGPT, Claude, Microsoft Copilot, Gemini — falls into the limited or minimal risk categories. But this does not mean zero obligations: AI literacy (Article 4), risk management, and transparency apply regardless.
Critical Timeline
February 2, 2025
Already in force
Prohibited practices enforceable + AI literacy obligations (Article 4). If you have not started, you are already behind.
August 2, 2025
GPAI model obligations
Foundation model providers (GPT-4, Claude, Gemini...) must comply. This affects deployers through their vendor contracts.
August 2, 2026
Full enforcement — high-risk systems
All requirements for high-risk AI systems become enforceable. Conformity assessments, EU database registration, incident reporting procedures.
August 2, 2027
Extended deadline for certain existing high-risk AI systems
Additional deadline for high-risk AI systems already deployed before August 2, 2026, subject to conditions.
The 10-Step Compliance Checklist
Inventory all AI tools
You cannot govern what you cannot see. The inventory covers both enterprise-sanctioned tools and shadow AI used without IT validation. For each tool, document: who uses it, how often, in which department, and what categories of data are processed.
Noxys automatically discovers 15+ AI platforms via a browser extension deployed in minutes. No infrastructure changes required.
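As a concrete starting point, the inventory fields listed above can be captured in a simple structured record. This is an illustrative sketch with hypothetical field names, not the schema of any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in the AI tool inventory (illustrative fields only)."""
    name: str
    department: str
    users: list[str] = field(default_factory=list)
    weekly_uses: int = 0
    data_categories: list[str] = field(default_factory=list)  # e.g. "PII", "financial"

inventory = [
    AIToolRecord("ChatGPT", "Marketing", ["alice"], 40, ["PII"]),
    AIToolRecord("Copilot", "Engineering", ["bob", "carol"], 120, ["source code"]),
]

# Example query: which tools process personal data?
pii_tools = [t.name for t in inventory if "PII" in t.data_categories]
```

Even a spreadsheet with these columns satisfies the same purpose; the point is that each tool, its users, and its data categories are recorded somewhere queryable.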
Classify by risk level
The EU AI Act defines four risk levels. Prohibited practices (social scoring, subliminal manipulation, biometric categorization inferring sensitive attributes, emotion recognition in the workplace and education) are banned outright, with only narrow, strictly regulated exceptions such as certain law enforcement uses of remote biometric identification. High-risk systems include credit scoring, automated hiring, medical devices, and law enforcement applications. Limited-risk systems, such as chatbots and most other emotion recognition uses, carry transparency obligations. Minimal-risk systems, such as spam filters and AI-assisted games, carry no specific obligations.
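A first-pass triage of the inventory can be sketched as a lookup over example use cases. The mapping below is a hypothetical simplification for illustration; real classification requires legal analysis of each system, and anything unmatched should go to manual review:

```python
# Hypothetical mapping of example use cases to EU AI Act risk tiers.
RISK_TIERS = {
    "prohibited": {"social scoring", "subliminal manipulation"},
    "high": {"credit scoring", "automated hiring", "medical device"},
    "limited": {"chatbot"},
    "minimal": {"spam filter", "ai-assisted game"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, else flag for legal review."""
    for tier, cases in RISK_TIERS.items():
        if use_case.lower() in cases:
            return tier
    return "unclassified"  # escalate to manual legal review
```

The useful property of this shape is the explicit "unclassified" outcome: a triage script should never silently default a tool to minimal risk.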
Determine your role
The scope of your obligations depends on your role in the AI value chain. The provider builds or trains the AI system. The deployer uses an AI system under their own authority in a professional context — this is the role of the majority of enterprises. The distributor makes the AI system available on the market without substantial modification.
The same organization can simultaneously be a provider for some systems (in-house developed AI) and a deployer for others (ChatGPT Enterprise, Microsoft Copilot). Each system must be analyzed independently.
Implement AI literacy (Art. 4)
Article 4 has been enforceable since February 2, 2025. It requires organizations to ensure that staff using AI systems have a sufficient level of AI literacy. The EU Commission has confirmed there is no mandatory training format — each organization must determine what constitutes a sufficient level based on its context.
In practice, the first step is knowing which tools are in use. You cannot ensure AI literacy around tools you are not aware of. Shadow AI discovery is therefore the natural starting point for Article 4 compliance. The AI Office maintains a repository of AI literacy practices that organizations can consult.
Noxys provides a complete inventory of AI tools in use across your organization, forming the documentary basis to substantiate your AI literacy efforts.
Document risk assessments (Art. 9)
Article 9 requires providers of high-risk systems to establish and maintain a risk management system throughout the AI system's lifecycle, and deployers carry parallel monitoring duties under Article 26. For a deployer, this means continuous risk monitoring per tool and per interaction — not just a one-time annual assessment.
Documentation must cover: identification of residual risks, mitigation measures in place, ongoing monitoring, and results of periodic testing. For high-risk systems, this documentation becomes mandatory and must be kept available to surveillance authorities.
Noxys provides automated risk scoring per interaction, with classification of sensitive data (PII, IBANs, credentials) detected in prompts.
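The kind of sensitive-data detection mentioned above can be approximated with pattern matching. This is a minimal sketch with illustrative regexes only; production detectors (for PII, IBANs, credentials) are far more extensive and not purely regex-based:

```python
import re

# Illustrative patterns only; real detectors cover many more categories.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(text: str) -> set[str]:
    """Return the categories of sensitive data found in one prompt."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}
```

Each flagged category can then feed the per-interaction risk score and the documentation trail required by the risk management system.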
Ensure transparency and auditability (Art. 13)
Article 13 requires high-risk AI systems to be designed so that deployers can interpret their output and use them appropriately. For a deployer, this translates to maintaining an immutable audit trail of AI interactions: who used which tool, when, and with what classification of data involved.
The audit trail must be integral and non-alterable. It must allow faithful reconstruction of the interaction history to respond to regulatory audit requests or internal investigations following an incident.
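One common way to make a log "integral and non-alterable" is hash chaining: each entry includes the hash of the previous one, so any retroactive edit breaks the chain. This is a minimal sketch of the idea, not a complete tamper-evidence design (which would also need signing and off-system anchoring):

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log where each entry commits to the previous entry's hash."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, user: str, tool: str, data_class: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"user": user, "tool": tool, "data_class": data_class,
                  "ts": time.time(), "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute every hash; any edited entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A verifier can replay the chain at audit time to demonstrate that the interaction history has not been altered since it was written.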
Implement human oversight controls (Art. 14)
Article 14 requires that high-risk AI systems be designed to allow effective human oversight. Deployers must assign that oversight to people with the competence and authority to review, intervene, and override AI decisions. In practice, this means having a configurable policy engine: block, coach, or log per department.
Administrators must be able to examine individual interactions, trigger alerts, and override non-compliant AI behaviors in real time. Human oversight cannot be purely retrospective — it must be operational at the moment interactions occur.
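The per-department policy engine described above can be reduced to a lookup from (department, risk level) to an action. The table contents here are hypothetical examples, not recommended policy:

```python
# Hypothetical policy table: action per (department, risk level).
POLICY = {
    ("Finance", "high"): "block",
    ("Finance", "limited"): "coach",
    ("Engineering", "high"): "coach",
}
DEFAULT_ACTION = "log"  # everything unlisted is still recorded

def decide(department: str, risk: str) -> str:
    """Return block / coach / log for a single AI interaction."""
    return POLICY.get((department, risk), DEFAULT_ACTION)
```

Defaulting to "log" rather than silently allowing keeps oversight operational: every interaction produces evidence even when no rule blocks it.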
Appoint an AI compliance officer
EU AI Act compliance requires clearly assigned accountability. This role can be filled by an existing DPO, CISO, compliance officer, or a dedicated role. The essential requirement is that this person has operational visibility into the organization's AI usage patterns.
The AI compliance officer must have access to AI usage dashboards, incident reports, risk assessments, and must be able to respond to requests from national market surveillance authorities (in France, AI surveillance will be coordinated with the CNIL).
Update vendor contracts
Your AI vendors (OpenAI, Anthropic, Microsoft, Google...) are data processors under the GDPR and potentially providers under the EU AI Act. Your contracts must include up-to-date data processing agreements (DPAs), verification of GDPR compliance and data residency, and a guarantee of opt-out from training data usage.
Verify that your vendors' standard commercial agreements include the necessary clauses. Many do not by default — ChatGPT Team and Enterprise have separate DPAs from free accounts, for example. European data residency may require specific tiers (Azure EU, Google Cloud EU...).
Prepare regulatory documentation
For high-risk systems, formal conformity assessments are required before deployment. Certain systems must be registered in the EU database of high-risk AI systems. Incident reporting procedures must be established and tested. This documentation must be kept current and available on request from surveillance authorities.
Even if your primary AI usage is limited or minimal risk, building this documentary infrastructure now allows you to more easily absorb future obligations if your AI usage evolves toward high-risk use cases.
Penalties
Penalties under the EU AI Act are structured across three tiers of severity. The applicable amount is the higher of the fixed ceiling in euros and the stated percentage of total worldwide annual turnover for the preceding financial year.
| Maximum fine | Infraction |
|---|---|
| 35M EUR or 7% of turnover | Use of prohibited AI practices (social scoring, subliminal manipulation, unauthorized biometric categorization...) |
| 15M EUR or 3% of turnover | Non-compliance with AI system obligations (risk management, transparency, human oversight, AI literacy...) |
| 7.5M EUR or 1% of turnover | Supplying incorrect, incomplete, or misleading information to surveillance authorities |
Note: for SMEs and startups, ceilings may be reduced. National authorities have discretion in applying penalties.
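The "higher of" rule in the table works out as a simple maximum. A quick illustration with integer arithmetic (the figures are examples, not a legal calculation):

```python
def max_fine(cap_eur: int, pct: int, turnover_eur: int) -> int:
    """Applicable ceiling: the higher of the fixed cap and pct% of turnover."""
    return max(cap_eur, turnover_eur * pct // 100)

# Prohibited-practice tier, company with EUR 1bn worldwide turnover:
# 7% of 1bn = 70M, which exceeds the 35M fixed cap.
fine = max_fine(35_000_000, 7, 1_000_000_000)  # 70_000_000
```

For smaller companies the fixed cap dominates: at EUR 100m turnover, 3% is 3M, so the 15M ceiling applies for the middle tier.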
Article 4 Deep Dive: AI Literacy
Article 4 is the most immediate obligation for deployers, having been enforceable since February 2025. It states that AI providers and deployers must take measures to ensure their staff have a sufficient level of AI literacy.
What the EU Commission says
The European Commission has confirmed that there is no one-size-fits-all mandatory approach to AI literacy. No specific training module is required, no minimum duration, no mandatory certification. The obligation is one of means, not of prescribed form: you must take, and be able to demonstrate, measures appropriate to your context to bring staff to a sufficient level of AI understanding.
Practical interpretation
In the absence of a prescribed format, regulators will assess your good-faith efforts. Documentation is therefore essential: inventory of AI tools in use, awareness programs implemented, acceptable use policies communicated. Shadow AI discovery is not just a security matter — it is also your first piece of evidence for Article 4.
The EU AI Office maintains a repository of AI literacy practices that organizations can use as a reference and additional documentation of their compliance efforts.
Direct connection with Noxys: the automatic AI tool inventory constitutes your baseline documentation for Article 4. Knowing which tools are used, by whom, and in what context is the prerequisite for any credible AI literacy program.
Start your compliance journey today
Discover in minutes which AI tools are in use across your organization. Free up to 10 users, no credit card required.
FAQ
My company is not in the EU — am I concerned?
Yes, if your AI systems process data or produce outputs affecting individuals located in the EU. The regulation applies based on the impact on people in the EU, not just your registered office location. If you have employees, customers, or users in Europe, the regulation likely applies to you.
Are free tools like ChatGPT covered?
Yes. The EU AI Act applies regardless of the commercial model of the tool. A free ChatGPT account used in a professional context is subject to the same obligations as an enterprise license. The relevant distinction is professional use (deployer) versus purely personal use, not free versus paid.
Do I need to audit existing AI usage?
Yes, and Article 4 has been enforceable since February 2025, so that deadline has already passed. An audit of existing usage is the mandatory starting point: you cannot manage risk or ensure literacy around tools you are not aware of. The audit must cover both sanctioned tools and shadow AI used without IT validation. There is no prescribed format, but documenting the audit process itself constitutes a compliance element.