The EU AI Act is not coming. It is here. It entered into force in August 2024 and its obligations are rolling out on a phased timeline that has already started. If your company develops AI systems, deploys AI tools in its products, or sells into European markets, you are likely already subject to some of its requirements.
Most compliance guides make this sound more complicated than it needs to be. Here is the short version of what finance and compliance leaders at growing companies actually need to understand.
The Phased Timeline - What's Already Active
This is where most companies are underestimating their exposure. The EU AI Act is not a future obligation - it is a phased regulation with live deadlines:
August 2024 - Act enters into force. General obligations begin.
February 2025 - Prohibited AI practices banned. AI literacy obligations for all companies using AI become active. If your company uses AI tools in any business process, you already have an obligation to ensure your staff understand how to use them appropriately.
August 2025 - Rules for general-purpose AI (GPAI) models and governance obligations for GPAI providers enter into force. If your product is built on a foundation model - OpenAI, Anthropic, Google, Mistral - this affects how you document and govern that dependency.
August 2026 - High-risk AI system obligations fully active. This is the deadline most growing companies need to be planning toward now.
August 2027 - Final provisions for certain legacy systems.
If you are waiting for 2026 to start thinking about this, you are already behind.
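For teams tracking which obligations already bind them, the phased deadlines above can be expressed as a simple lookup. The day-level dates are from the Act's published timeline (entry into force on 1 August 2024, then 2 February and 2 August milestones); the short labels and the `active_obligations` helper are illustrative shorthand, not official terms.

```python
from datetime import date

# Phased EU AI Act milestones; labels are informal summaries.
MILESTONES = [
    (date(2024, 8, 1), "entry into force; general obligations"),
    (date(2025, 2, 2), "prohibited practices banned; AI literacy duties"),
    (date(2025, 8, 2), "GPAI model and governance rules"),
    (date(2026, 8, 2), "high-risk system obligations fully active"),
    (date(2027, 8, 2), "final provisions for certain legacy systems"),
]

def active_obligations(today: date) -> list[str]:
    """Return every obligation whose deadline has already passed."""
    return [label for deadline, label in MILESTONES if deadline <= today]

print(active_obligations(date(2026, 1, 1)))
```

Run against any date in early 2026 and three of the five milestones are already live, which is the point of the section: most of the Act is not a future obligation.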
Who the EU AI Act Applies To
The Act applies to any company that places an AI system on the EU market, puts one into service, or uses one - regardless of where the company is headquartered. This includes:
Providers - companies that develop AI systems or place them on the EU market. If you build AI into your product and sell it in Europe, you are a provider.
Deployers - companies that use AI systems in their operations or products. Using OpenAI, Microsoft Copilot, Google Gemini, or any other AI service in a way that affects customers or employees makes you a deployer with active obligations.
Importers and distributors - companies in the supply chain of AI systems entering the EU market.
The EEA dimension: Norway, Iceland, and Liechtenstein are EEA members. The EU AI Act is expected to apply to EEA states through incorporation into the EEA Agreement - which means Norwegian companies should plan for the same obligations as EU-headquartered companies on the same timeline, rather than wait for formal EEA transposition.
The Risk Classification System - Where You Sit
The Act classifies AI systems into four risk levels, and your obligations depend entirely on which category your AI falls into:
Unacceptable risk - prohibited. AI systems that use subliminal or manipulative techniques to distort behaviour, exploit the vulnerabilities of specific groups, enable social scoring by governments, or conduct real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions). These are banned outright as of February 2025.
High risk - most demanding obligations. AI used in hiring and employment decisions, credit scoring, educational assessment, critical infrastructure, law enforcement, border control, and administration of justice. Also AI used as safety components in regulated products. If any of your AI touches these areas, this is your compliance tier.
Limited risk - transparency obligations. Chatbots, AI-generated content, deepfakes. You must disclose that users are interacting with AI.
Minimal risk - no specific obligations. Spam filters, AI in video games, most recommendation systems. These are covered by the Act but face no specific requirements beyond general good practice.
The critical question for most growing tech companies is whether any part of their AI application touches the high-risk categories - even indirectly.
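One way to make that question operational is a first-pass triage over your AI system inventory. The sketch below is an illustrative simplification - the category names paraphrase the Act's tiers (the high-risk set loosely follows Annex III), and every set name and the `classify` function are assumptions for the example, not a legal determination.

```python
# Illustrative category sets; a real inventory needs legal review.
PROHIBITED = {"social_scoring", "subliminal_manipulation",
              "realtime_public_biometric_id"}
HIGH_RISK = {"hiring", "credit_scoring", "education_assessment",
             "critical_infrastructure", "law_enforcement",
             "border_control", "justice"}
LIMITED_RISK = {"chatbot", "generated_content", "deepfake"}

def classify(use_cases: set[str]) -> str:
    """Return the strictest risk tier any use case touches."""
    if use_cases & PROHIBITED:
        return "unacceptable"
    if use_cases & HIGH_RISK:
        return "high"
    if use_cases & LIMITED_RISK:
        return "limited"
    return "minimal"

print(classify({"chatbot", "hiring"}))  # high
```

Note the ordering: a product that is mostly a limited-risk chatbot but also screens job applicants sits in the high-risk tier, which is exactly the "even indirectly" trap the text describes.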
What the EU AI Act Requires From Leadership
Like NIS2, the EU AI Act explicitly places accountability at the top. Article 4 requires providers and deployers to ensure their staff have sufficient AI literacy, Article 26 requires deployers of high-risk AI to assign human oversight to trained, competent people, and Article 27 requires fundamental rights impact assessments before certain high-risk deployments.
The governance layer comes down to the same three workflows:
1. Sign-offs. AI literacy policy approved at leadership level. Risk classification decisions for each AI system documented and signed off. Fundamental rights impact assessments for high-risk applications. Human oversight policy with named accountable owners. These require documented approval trails.
2. Disclosures. Technical documentation for AI systems, conformity assessments for high-risk AI, transparency notices for limited-risk AI, and incident reports to market surveillance authorities. These must be current, retrievable, and accurate.
3. Information requests. Regulatory evidence requests from national competent authorities, customer AI governance questionnaires, and internal data collection for risk assessments and impact assessments.
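As a sketch of what a documented approval trail behind the first workflow might look like, the record below captures a single sign-off. All field names here are hypothetical - the Act prescribes outcomes (documented decisions, named accountable owners, retrievable evidence), not a schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SignOff:
    """One documented, attributable approval for one AI system."""
    ai_system: str
    decision: str            # e.g. "risk classification: high"
    accountable_owner: str   # a named individual, per the oversight policy
    approved_on: date
    evidence_links: list[str] = field(default_factory=list)

record = SignOff(
    ai_system="resume-screening-model",
    decision="risk classification: high (employment use case)",
    accountable_owner="CFO",
    approved_on=date(2025, 9, 1),
    evidence_links=["fria-2025-09.pdf"],
)
print(record.accountable_owner)  # CFO
```

The design point is the `accountable_owner` field: whatever tooling you use, each sign-off should resolve to a named person, not a team.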
What the Penalties Look Like
The EU AI Act has the steepest penalty structure of any regulation in this set:
- Prohibited AI practices: up to €35 million or 7% of worldwide annual turnover, whichever is higher
- Most other violations, including high-risk obligations: up to €15 million or 3%
- Supplying incorrect or misleading information to authorities: up to €7.5 million or 1%
For SMEs and startups, the Act specifies that penalties should be proportionate - capped at whichever of the fixed amount or the percentage is lower - but the percentages of turnover remain. A €5M ARR company faces a potential €350,000 penalty (7% of turnover) for prohibited practices. That is not an abstract risk.
How Long EU AI Act Compliance Takes - and What It Costs
With governance automation tooling: €6,000-15,000 first year.
Traditional consultant route: €20,000-60,000 depending on the number of AI systems in scope and risk classification complexity.
Annual ongoing with tooling: €6,000-12,000. The EU AI Act creates a continuous obligation - AI systems must be monitored, incidents reported, and documentation kept current as systems evolve.
For companies already pursuing ISO 42001, the governance infrastructure overlaps almost entirely. The AI system inventory, risk assessments, impact assessments, and sign-off workflows are shared - making EU AI Act compliance materially cheaper alongside ISO 42001 than as a standalone exercise.
Want the EU AI Act compliance checklist, the risk classification guide, and the full leadership sign-off framework?
Download: EU AI Act for Growing Companies - What Leadership Needs to Know →