EU AI Act Compliance: What Your Business Needs to Do Before August 2026

The EU AI Act's broadest enforcement wave hits August 2, 2026, bringing high-risk AI rules, transparency obligations, and penalties up to €35M or 7% of global revenue. This guide covers risk classification, obligations by tier, and what to do before the deadline.

Concord Team · Published April 27, 2026

The EU AI Act is the world's first major regulation to treat artificial intelligence as a distinct category of legal risk. On August 2, 2026, its broadest enforcement provisions take effect, including obligations for high-risk AI systems, Article 50 transparency requirements for AI-generated content and chatbots, and full European Commission enforcement powers over general-purpose AI models. The penalties are scaled to get attention: up to 35 million euros or 7% of global annual revenue, whichever is higher.

This is not a future-state concern. The Act's implementation timeline is already well underway. Prohibitions on the highest-risk AI practices have been in force since February 2, 2025. General-purpose AI model obligations activated on August 2, 2025. What August 2, 2026 brings is the broadest wave: the provisions that touch the widest range of organizations, including any company that deploys AI systems classified as high-risk and any company whose AI interacts with people in the EU or generates content consumed in EU markets.

The Act applies extraterritorially. If your AI system's output is used in the EU, you are in scope regardless of where your company is headquartered. That makes the EU AI Act relevant not only to European organizations but to any mid-market company with EU customers, EU-based users, or AI features that produce outputs consumed within the EU.

This guide covers the four risk tiers, the specific obligations each tier triggers, the penalty structure, the current state of implementation guidance, and where AI governance connects to your broader data privacy and compliance stack.

The four risk tiers determine your obligations

The EU AI Act organizes AI systems into four risk categories. Your obligations (and whether your system can operate in the EU at all) depend on which tier your system falls into. The official framework defines each tier by the potential harm the system poses to health, safety, and fundamental rights.

Prohibited (unacceptable risk)

These AI practices are banned outright. No conformity assessment or mitigation plan can make them lawful. They cannot be placed on the EU market or used within the EU under any circumstances. The prohibitions have been enforceable since February 2, 2025, and include:

  • Social scoring systems that evaluate or classify people based on social behavior or personal characteristics, leading to detrimental treatment
  • AI that deploys subliminal, manipulative, or deceptive techniques to distort behavior in ways that cause significant harm
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions for specific serious crimes, subject to judicial authorization)
  • Emotion recognition systems in workplaces and educational institutions
  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
  • AI that exploits vulnerabilities related to age, disability, or socioeconomic situation

If your organization operates any system that falls into this category, the obligation is straightforward: stop.

High-risk

This is the most heavily regulated category of AI systems that are permitted on the market. High-risk systems carry the full weight of the Act's compliance obligations (risk management, documentation, logging, human oversight, and conformity assessment), all of which must be in place before August 2, 2026.

High-risk AI systems fall into two subcategories. The first includes AI that serves as a safety component of products already covered by existing EU harmonization legislation, including medical devices, machinery, toys, aviation systems, vehicles, elevators, and similar regulated products. These systems must meet EU AI Act requirements in addition to the sector-specific rules they already comply with.

The second subcategory covers standalone AI systems listed in Annex III of the Act, operating in areas the EU considers high-stakes:

  • Biometrics: Remote biometric identification (beyond what's prohibited), biometric categorization, emotion recognition (outside the prohibited workplace/school contexts)
  • Critical infrastructure: AI managing the operation or safety of digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity
  • Education and vocational training: Systems that determine access to education, evaluate learning outcomes, or monitor prohibited behavior during exams
  • Employment and worker management: AI used in recruitment, screening, hiring decisions, task allocation, performance monitoring, or termination decisions
  • Access to essential services: Credit scoring, insurance risk assessment and pricing, evaluation of eligibility for public assistance, emergency service dispatch prioritization
  • Law enforcement: Individual risk assessment, polygraph and deception-detection tools, evidence evaluation, crime prediction (where it evaluates individuals)
  • Migration, asylum, and border control: Polygraph tools, risk assessment of irregular migration, examination of visa and asylum applications
  • Administration of justice and democratic processes: AI systems used by judicial authorities in researching, interpreting, or applying the law

For compliance teams at mid-market companies, the practical question is whether any AI system you build, deploy, or procure operates in one of these areas. An AI-powered hiring tool, an automated credit-scoring model, an AI system that routes customer support tickets based on risk assessment of the customer: each of these could fall within Annex III.
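
Risk classification is ultimately a legal judgment, but a first-pass triage can be automated. Below is a minimal sketch in Python; the area names are shorthand for the Annex III categories listed above, not legal text, and a "potentially high-risk" flag should trigger legal review rather than replace it.

```python
# Minimal triage sketch for flagging systems that may fall under Annex III.
# The area keywords are shorthand for the categories listed above, not the
# legal text; a "potentially high-risk" flag should trigger legal review.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border",
    "justice_democracy",
}

def triage(system_name: str, areas_of_use: set[str]) -> str:
    """Return a coarse triage result for one AI system."""
    hits = sorted(areas_of_use & ANNEX_III_AREAS)
    if hits:
        return f"{system_name}: potentially high-risk (Annex III areas: {hits})"
    return f"{system_name}: no Annex III match; check transparency duties"

print(triage("resume-screening-model", {"employment"}))
print(triage("support-ticket-router", {"customer_support"}))
```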

Limited risk

Limited-risk AI systems face transparency obligations only. They do not require conformity assessments, risk management systems, or technical documentation at the high-risk level. But they do require disclosure, and the disclosure rules are specific:

  • AI systems that interact directly with people must inform users that they are interacting with an AI
  • AI-generated or AI-manipulated content (text, audio, images, video) must be labeled in a machine-readable format
  • Deepfakes must be disclosed as artificially generated or manipulated
  • Emotion recognition and biometric categorization systems (those not already prohibited) must disclose their function to the people they affect

These obligations take full effect on August 2, 2026, and they apply broadly. If your organization uses AI chatbots for customer support, generates marketing content with AI tools, or deploys any system that produces synthetic media, the transparency requirements are relevant.

Minimal risk

The majority of AI systems fall here: spam filters, AI-enhanced video games, inventory management algorithms, recommendation engines, and most general-purpose business software. The EU AI Act does not regulate minimal-risk systems. Voluntary codes of conduct are encouraged, but no binding obligations apply.

The distinction matters because organizations often overestimate their exposure (assuming everything with "AI" in the name is regulated) or underestimate it (assuming that because they don't build autonomous weapons, the Act doesn't apply). The risk tier is determined by what the system does and the context in which it operates, not by the underlying technology.

What high-risk AI systems must do before August 2

Organizations that deploy, develop, or import high-risk AI systems into the EU market face a specific set of obligations. These are not aspirational. They are conditions for market access. Meeting them requires changes to how AI systems are designed, documented, and governed.

Risk management system. High-risk AI systems must have a documented risk management process that runs throughout the system's lifecycle, not a one-time assessment at launch. This includes identifying foreseeable risks, estimating their likelihood and severity, and adopting mitigation measures. The risk management system must be updated as the system evolves and as new risks emerge from real-world use.

Data governance. Training, validation, and testing datasets must meet quality criteria specified in the Act. This includes relevance, representativeness, freedom from errors, and completeness relative to the intended purpose. Bias detection and mitigation procedures must be documented and applied. For organizations that build or fine-tune AI models, this means formalizing data lineage and quality practices that may currently exist only informally.
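
What these criteria look like in practice will vary, but even a coarse automated report makes gaps visible. A minimal sketch, assuming a simple tabular training set; the field names are illustrative, since the Act defines criteria rather than specific tests:

```python
# Illustrative dataset quality checks echoing the Act's data governance
# criteria (representativeness, errors, completeness). Field names are
# assumptions; the Act defines criteria, not specific tests.
from collections import Counter

def quality_report(rows: list[dict], label_key: str) -> dict:
    """Coarse completeness and class-balance summary for a training set."""
    total = len(rows)
    missing = sum(1 for r in rows if any(v is None for v in r.values()))
    labels = Counter(r[label_key] for r in rows)
    return {
        "rows": total,
        "rows_with_missing_values": missing,
        "label_distribution": dict(labels),  # skew here warrants bias review
    }

sample = [
    {"age": 41, "income": 52_000, "approved": True},
    {"age": None, "income": 38_000, "approved": False},
    {"age": 29, "income": 61_000, "approved": True},
]
print(quality_report(sample, "approved"))
```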

Technical documentation. High-risk systems require documentation maintained throughout the system's lifecycle. The documentation must demonstrate compliance with EU AI Act requirements in a way that allows national authorities to assess it. This includes system design, development methodology, testing procedures, and monitoring plans.

Automatic event logging. The system must include built-in logging capabilities that record events relevant to identifying risks, facilitating post-market monitoring, and enabling traceability. This is an architecture-level requirement: logging must be designed into the system, not added after deployment.
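
The Act requires logging for traceability but does not prescribe a schema. Here is a minimal sketch of an append-only, JSON-lines event log; every field name is an assumption chosen to support the traceability goals described above:

```python
# Illustrative append-only event log for a high-risk AI system.
# The EU AI Act requires logging for traceability but does not prescribe
# a schema; every field name here is an assumption, not a legal mandate.
import json
import time
import uuid

def log_event(log_path: str, system_id: str, event_type: str, payload: dict) -> None:
    """Append one traceability record as a JSON line."""
    record = {
        "event_id": str(uuid.uuid4()),  # unique ID for cross-referencing
        "timestamp": time.time(),       # when the event occurred
        "system_id": system_id,         # which AI system produced it
        "event_type": event_type,       # e.g. inference, override, incident
        "payload": payload,             # inputs/outputs relevant to the risk
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_event("ai_events.jsonl", "credit-scoring-v2", "inference",
          {"model_version": "2.3.1", "decision": "declined", "human_review": False})
```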

Transparency to deployers. Organizations that deploy high-risk AI systems must receive clear, sufficient information about the system's capabilities, limitations, intended purpose, foreseeable misuse scenarios, and the human oversight measures built into it. If you procure a high-risk AI system from a third-party provider, you are entitled to this documentation, and obligated to act on it.

Human oversight. High-risk AI systems must include mechanisms that allow natural persons to understand the system's outputs, intervene in real time, and override or reverse the system's decisions. The level of oversight must be proportionate to the risks, but the principle is non-negotiable: a high-risk AI system cannot operate as a black box with no human in the loop.
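
In code, the principle reduces to a simple invariant: a human intervention, where given, must win over the model's proposal. A minimal sketch, with an interface that is an illustrative assumption rather than a prescribed oversight design:

```python
# Sketch of a human-in-the-loop gate: the system proposes, a person can
# approve or override before the decision takes effect. The interface is
# an illustrative assumption, not a prescribed oversight design.

def finalize_decision(model_output: str, reviewer_override: str | None) -> str:
    """A human override, when given, always wins over the model's proposal."""
    if reviewer_override is not None:
        return reviewer_override  # intervention is possible and decisive
    return model_output

# Model proposes rejection; the reviewer reverses it after inspecting the case.
print(finalize_decision("reject_application", reviewer_override="approve_application"))
```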

Accuracy, robustness, and cybersecurity. The system must achieve levels of accuracy, resilience to errors, and security that are appropriate to its risk level. These must be declared in the technical documentation and validated through testing.

Conformity assessment. Before placing a high-risk AI system on the EU market, the provider must complete a conformity assessment. For most Annex III systems, this is a self-assessment. For remote biometric identification systems, a third-party assessment by a notified body is required. The assessment must demonstrate that all of the above obligations are met.

CE marking and EU database registration. Compliant high-risk AI systems receive a CE marking and must be registered in the EU database before market placement.

For compliance leaders at mid-market companies, the operational implication is clear: if you have any AI system that falls into the high-risk category, the work to document, assess, and formalize its governance must be underway now. August 2 is not a start date. It is a deadline.

Transparency obligations apply even to limited-risk AI

The high-risk obligations receive the most attention, but Article 50 of the EU AI Act creates a separate set of transparency requirements that apply to a much broader range of organizations. These provisions take full effect on August 2, 2026.

The rules are specific:

AI systems that interact with people must say so. If your company operates a chatbot, a virtual assistant, or any AI system that communicates directly with users, those users must be clearly informed, before or at the start of the interaction, that they are engaging with AI. The exception is where this is obvious from the circumstances and context to a reasonably informed person, but the threshold for "obvious" is high.
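
One way to make the disclosure requirement hard to skip is to enforce it in the session logic itself. A minimal sketch; the wording and session handling are illustrative assumptions:

```python
# Sketch: guarantee an AI disclosure is the first thing a user sees in a
# chat session. The wording and session handling are illustrative.

DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

class ChatSession:
    def __init__(self) -> None:
        self.disclosed = False
        self.transcript: list[str] = []

    def send(self, bot_message: str) -> None:
        # Article 50 requires disclosure before or at the start of the
        # interaction, so it is emitted before any other reply.
        if not self.disclosed:
            self.transcript.append(DISCLOSURE)
            self.disclosed = True
        self.transcript.append(bot_message)

session = ChatSession()
session.send("Hi! How can I help with your order today?")
print("\n".join(session.transcript))
```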

AI-generated content must be machine-readably labeled. Text, audio, images, and video that are generated or substantially modified by AI must carry machine-readable markers. This applies to organizations that produce the content, not only to the AI system providers. If your marketing team uses AI to generate blog posts, social media content, product images, or video, the outputs must be labeled.
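
The Act requires machine-readable marking but does not mandate a single format; provenance standards such as C2PA manifests and IPTC's digital source type metadata are emerging candidates. The sketch below uses a plain JSON sidecar purely as an illustrative placeholder, not a compliant format:

```python
# Placeholder sketch: attach a machine-readable "AI-generated" marker to a
# content file via a JSON sidecar. The Act requires machine-readable
# marking but does not mandate this format; production systems would more
# likely use a provenance standard (e.g. C2PA manifests or IPTC metadata).
import json
from pathlib import Path

def write_ai_marker(content_path: str, tool_name: str) -> Path:
    sidecar = Path(content_path).with_suffix(".ai-provenance.json")
    sidecar.write_text(json.dumps({
        "content_file": content_path,
        "ai_generated": True,          # core disclosure
        "generation_tool": tool_name,  # which system produced it
    }, indent=2), encoding="utf-8")
    return sidecar

print(write_ai_marker("hero-image.png", "internal-image-model"))
```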

Deepfakes require explicit disclosure. Any artificially generated or manipulated image, audio, or video that resembles existing persons, objects, places, or events and could falsely appear authentic must be disclosed as AI-generated. The disclosure must be visible and intelligible.

The practical reach of Article 50 is wider than many organizations realize. Customer-facing AI chatbots are an obvious case. But the content-labeling requirements also touch marketing teams, content operations, and product teams that use generative AI in their workflows. Compliance teams should audit every customer-facing AI touchpoint and every content pipeline that involves generative AI tools.

General-purpose AI models face their own rules

The EU AI Act includes a dedicated regulatory track for general-purpose AI (GPAI) models, the foundation models and large language models that power a growing share of business applications. The GPAI provider obligations have technically been in effect since August 2, 2025, but the European Commission's enforcement powers activate on August 2, 2026, ending a one-year grace period during which the AI Office could issue guidance but not enforce.

All GPAI model providers must:

  • Maintain and make available technical documentation describing the model's training, capabilities, and limitations
  • Provide information and documentation to downstream deployers who integrate the model into their systems
  • Establish a policy for complying with EU copyright law
  • Publish a sufficiently detailed summary of the training data used

GPAI models classified as posing "systemic risk" (currently defined as models trained with computational resources exceeding 10^25 floating-point operations, or FLOPs) face additional obligations. These include adversarial testing (red-teaming), serious incident monitoring and reporting to the AI Office, and energy consumption documentation. Today, this tier applies primarily to frontier models from a small number of providers.
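
To get a feel for the scale of that threshold, a widely used rough estimate of dense transformer training compute is 6 × parameters × training tokens. This heuristic comes from the ML literature, not from the Act, and the model sizes below are hypothetical:

```python
# Rough scale check against the 10^25 FLOP systemic-risk threshold, using
# the common "6 * parameters * training tokens" heuristic for dense
# transformer training compute. The heuristic is an approximation from the
# ML literature, not a method defined by the EU AI Act.

THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for name, params, tokens in [
    ("mid-market fine-tune", 7e9, 2e9),    # 7B model, 2B tokens
    ("frontier-scale run", 1e12, 15e12),   # hypothetical 1T model, 15T tokens
]:
    flops = estimated_training_flops(params, tokens)
    flag = "above" if flops >= THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({flag} threshold)")
```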

For most mid-market organizations, the practical implication is layered. If you use a third-party GPAI model (GPT-4, Claude, Gemini, Llama, or similar) as a component in your products, the model provider carries the GPAI-specific obligations. But you still carry deployer obligations if your use case places the system in a high-risk category. From a regulatory standpoint, using Claude to power an internal knowledge base is very different from using it to make automated hiring recommendations.

The distinction between model provider obligations and deployer obligations is one of the Act's most operationally important boundaries. Compliance teams should map every GPAI integration, identify whether the downstream use case triggers high-risk classification, and ensure that the deployer-side obligations are addressed regardless of what the model provider documents.

The penalty structure makes non-compliance existential

The EU AI Act's enforcement teeth are proportional to the obligations it creates. The fine structure is tiered by violation category and scaled to revenue, a design borrowed from GDPR that has already proven effective at motivating compliance investment.

Prohibited practices: Up to 35 million euros or 7% of global annual turnover, whichever is higher. Because the higher figure applies, a company with 200 million euros in annual revenue still faces the full 35 million euro ceiling (7% of its turnover would be only 14 million). The percentage overtakes the fixed amount only above 500 million euros in turnover; at 1 billion euros, the ceiling rises to 70 million.

High-risk non-compliance: Up to 15 million euros or 3% of global annual turnover. This covers failures in documentation, risk management, conformity assessment, logging, transparency to deployers, and human oversight obligations.

Providing incorrect or misleading information to authorities: Up to 7.5 million euros or 1% of global annual turnover. This provision incentivizes accuracy in self-assessments and documentation, a direct lesson from GDPR enforcement, where misrepresentations to Data Protection Authorities have drawn penalties distinct from the underlying violations.
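
The "whichever is higher" structure is worth internalizing, because intuition often gets it backwards (see the corrected arithmetic above). Expressed directly, with the Act's published ceilings and a hypothetical turnover:

```python
# The "X million euros or Y% of turnover, whichever is higher" structure,
# expressed directly. The ceilings are the Act's published figures; the
# example turnover is hypothetical.

def fine_ceiling(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    return max(fixed_eur, pct * global_turnover_eur)

turnover = 200_000_000  # hypothetical 200M EUR company
print(fine_ceiling(35_000_000, 0.07, turnover))  # prohibited practices -> 35,000,000
print(fine_ceiling(15_000_000, 0.03, turnover))  # high-risk non-compliance -> 15,000,000
print(fine_ceiling(7_500_000, 0.01, turnover))   # misleading information -> 7,500,000
```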

Enforcement is decentralized. Each EU member state designates national competent authorities responsible for market surveillance and enforcement within their jurisdiction. Finland became the first member state with full AI Act enforcement powers in December 2025, and other member states are following. The European AI Office, housed within the European Commission, handles GPAI model enforcement directly.

For organizations accustomed to GDPR enforcement patterns, the trajectory is instructive. GDPR fines have reached billions of euros since 2018. The EU AI Act's penalty ceilings are higher in relative terms (7% vs. 4% of turnover), and the political will behind AI regulation is at least as strong as the momentum that drove GDPR enforcement in its early years. The Act requires penalties to be proportionate and to account for the interests of small and medium enterprises, but proportionality does not mean immunity.

The guidance gap is real, and standards are not ready either

Organizations preparing for August 2 face a practical obstacle: the supporting guidance and technical standards that would make compliance more concrete are behind schedule.

The European Commission missed its own deadline to publish guidelines on high-risk AI system obligations. These guidelines were intended to provide practical clarity on how to meet the Act's requirements, the kind of interpretive guidance that DPAs provided (eventually) under GDPR and that proved essential for organizations building compliance programs.

Meanwhile, the two European standardization bodies responsible for developing harmonized technical standards (CEN and CENELEC) missed their fall 2025 deadline. Current projections aim for publication by the end of 2026, well after the August 2 enforcement date. Without harmonized standards, there is no presumption of conformity: an organization cannot point to a harmonized standard and say "we followed this, therefore we comply."

The Council of the EU acknowledged this reality on March 13, 2026, when it agreed to streamline the Act's implementing rules, a recognition that the regulatory framework's ambition has outpaced its operational scaffolding.

What does this mean for compliance teams? Waiting for perfect guidance is not an option. The obligations are defined in the Act itself. The missing standards would have provided one path to demonstrating compliance, but they are not the only path. Organizations should:

  • Start with GDPR-style documentation discipline: document what AI systems you operate, what data they process, what decisions they influence, and what oversight mechanisms exist (a minimal inventory sketch follows this list)
  • Conduct internal risk classifications against the Annex III categories, even without harmonized standards to benchmark against
  • Build and maintain AI policies that describe governance structures, risk management procedures, and incident response processes
  • Treat the August 2 deadline as the starting line for compliance posture, not the finish line. Enforcement will ramp gradually, and organizations with demonstrable good-faith efforts will be better positioned than those with nothing in place
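
As referenced in the first item above, a minimal inventory record might look like the following. The field names are illustrative, not a mandated schema; what matters is that every system has an answer for each field:

```python
# Minimal inventory record mirroring the first bullet above: what the
# system is, what data it touches, what decisions it influences, and what
# oversight exists. Field names are illustrative, not a mandated schema.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: str  # prohibited / high / limited / minimal
    personal_data: list[str] = field(default_factory=list)
    decisions_influenced: list[str] = field(default_factory=list)
    oversight: str = "undocumented"  # who can intervene, and how

inventory = [
    AISystemRecord(
        name="resume-screening-model",
        purpose="rank inbound job applications",
        risk_tier="high",  # Annex III: employment
        personal_data=["CVs", "assessment scores"],
        decisions_influenced=["interview shortlisting"],
        oversight="recruiter reviews every ranking before outreach",
    ),
]
print(inventory[0])
```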

The regulatory framework is clear in its requirements, even if the technical standards lag. Organizations that wait for guidance they can point to will find themselves preparing under enforcement pressure rather than ahead of it.

Where AI policy generation fits into your compliance stack

The EU AI Act does not exist in isolation. It intersects with GDPR (where personal data is used to train or operate AI systems), the Digital Services Act (where AI is used in content moderation or recommendation systems), the Product Liability Directive (updated to cover AI-caused harm), and the national implementations that each member state is building. For compliance teams, this creates a coordination problem: AI governance must connect to existing privacy governance, not sit beside it in a separate silo.

The practical requirements add up quickly. Organizations need documented AI policies that cover:

  • What AI systems they operate and their risk classifications. An inventory is the foundation: you cannot assess what you haven't mapped
  • Data processing bases for AI training and inference. GDPR's lawful basis requirements apply to personal data used in AI systems, and the EU AI Act's data governance obligations add quality and bias criteria on top
  • Human oversight procedures. Who reviews AI outputs in high-risk use cases, how they intervene, and how interventions are documented
  • Transparency disclosures. What users are told, when, and through what mechanism
  • Incident response. How failures, misclassifications, or harmful outputs are detected, escalated, and reported to authorities

These policies must stay current as regulations evolve, as AI systems change, and as new guidance and standards are published. Static documents (a PDF drafted once and filed) go stale the moment the regulatory environment shifts. This is the same dynamic that makes manual cookie policies unreliable: the moment your site adds a new tracker or a regulation updates its consent requirements, a static policy becomes inaccurate.

The intersection is clear when you trace the data flows. Data mapping tells you what data feeds your AI systems and where it originates. Consent management ensures that data used for AI training or personalization was collected with appropriate legal basis. Policy generation maintains up-to-date AI policies, privacy policies, and cookie policies that reflect the current state of your systems and the regulations that govern them. Privacy request handling (DSARs) ensures that individuals can exercise their rights over data used in AI processing.

These are interconnected compliance obligations, not separate workstreams. Organizations that handle consent in one tool, AI governance in another, data mapping in a third, and policies in a shared drive are recreating the same fragmentation problem that point-tool privacy stacks create: gaps between systems where violations live.

A unified approach to data privacy and AI compliance keeps these obligations connected. When your data map updates, your AI inventory updates. When regulations change, your policies reflect it. When a person exercises a data rights request that touches data used in AI training, the request routes to the right system because the data map already knows where that data lives.

Key takeaways

  • August 2, 2026 is the EU AI Act's broadest enforcement date. High-risk AI system rules, Article 50 transparency obligations, and European Commission enforcement powers all activate. The Act applies extraterritorially, and EU market exposure puts you in scope.

  • Risk classification drives everything. Identify which tier each of your AI systems falls into. High-risk systems require risk management, documentation, logging, human oversight, and conformity assessment. Limited-risk systems require transparency disclosures. Minimal-risk systems are unregulated.

  • Transparency obligations reach further than most organizations expect. AI chatbots must disclose they are AI. AI-generated content must be machine-readably labeled. These rules apply to deployers, not only to model providers.

  • Guidance and standards are behind schedule, but the obligations are not. Start with what the Act requires (documentation, risk assessment, policy generation, and governance structures) rather than waiting for harmonized standards that may not arrive before enforcement begins.

  • AI governance and data privacy are the same problem. Data mapping, consent management, policy generation, and privacy request handling are interconnected with AI compliance obligations. Unified platforms close the gaps that fragmented tools leave open.