AI Governance

EU AI Act Preparation for UK Companies: What You Actually Need to Do Before August 2026

Muhammad Waleed, Lead Consultant, Pixelette Certified

On 2 August 2026, the EU AI Act becomes generally applicable. From that date, the obligations on providers and deployers of the high-risk AI systems listed in Annex III come into force, joining the prohibitions on unacceptable-risk AI systems that took effect in February 2025 and the general-purpose AI model obligations that took effect in August 2025. High-risk systems embedded in products covered by existing EU product-safety legislation (Annex I) have an extended transition until August 2027. If you are a UK technology company that builds, integrates, deploys or sells AI to EU customers, the EU AI Act applies to you post-Brexit, regardless of where you are headquartered.

The extraterritorial reach is the part most UK founders underestimate. The EU AI Act applies to providers placing AI systems on the EU market or putting them into service in the EU regardless of where the provider is established; to deployers of AI systems located in the EU; and to providers and deployers located in third countries, including the UK, where the output produced by the system is used in the EU. Suppose you operate an AI-enabled SaaS platform from the UK, with a UK legal entity, hosted on UK infrastructure, and one of your customers is a French bank using your output to make decisions about French data subjects: the EU AI Act applies to you. Brexit does not exempt UK companies from the EU AI Act any more than it exempted them from the GDPR's extraterritorial reach.
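The three jurisdictional limbs above can be sketched as a simple triage check. This is an illustrative simplification, not the legal test (which sits in Article 2 of the Act), and the `ScopeCheck` names are our own, not terms from the regulation:

```python
from dataclasses import dataclass


@dataclass
class ScopeCheck:
    """Rough triage of whether the EU AI Act may apply to a company.

    Field names are hypothetical shorthand for the three limbs of the
    Act's territorial scope; real analysis needs legal advice.
    """
    places_on_eu_market: bool     # provider placing the system on the EU market
    deployer_located_in_eu: bool  # deployer established or located in the EU
    output_used_in_eu: bool       # third-country provider/deployer whose output is used in the EU

    def act_may_apply(self) -> bool:
        # Any single limb is enough to bring a UK company in scope.
        return (self.places_on_eu_market
                or self.deployer_located_in_eu
                or self.output_used_in_eu)


# The UK SaaS example: UK entity, UK hosting, but a French bank uses the output.
uk_saas = ScopeCheck(places_on_eu_market=False,
                     deployer_located_in_eu=False,
                     output_used_in_eu=True)
print(uk_saas.act_may_apply())  # True
```

Note that all three flags must be false before a company can even begin to argue it is out of scope, which is why "we are headquartered in the UK" is not, by itself, an answer.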

The Act classifies AI systems into four risk tiers. Unacceptable-risk systems have been prohibited since February 2025 and include social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions), emotion recognition in workplaces and educational institutions, and AI systems exploiting the vulnerabilities of specific groups. High-risk systems face full obligations from August 2026 for the use cases listed in Annex III, including AI used in biometric identification, critical infrastructure management, education and vocational training, employment and worker management, access to essential services such as credit scoring and insurance pricing, law enforcement, and migration; high-risk systems embedded in Annex I products have until August 2027. Limited-risk systems carry transparency obligations requiring users to be informed they are interacting with AI. Minimal-risk systems have no specific obligations.
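The four tiers lend themselves to a first-pass triage table. The sketch below is a deliberately naive lookup for internal inventory work, using example use cases paraphrased from this article; the real classification turns on Annexes I to III and legal analysis, not string matching:

```python
# Illustrative tier -> example use-case mapping; not a legal classification tool.
RISK_TIERS = {
    "unacceptable": ["social scoring",
                     "real-time remote biometric identification",
                     "workplace emotion recognition"],
    "high": ["credit scoring", "insurance pricing",
             "recruitment screening", "critical infrastructure management"],
    "limited": ["customer-facing chatbot", "ai-generated content"],
    "minimal": ["spam filter", "game ai"],
}


def classify(use_case: str) -> str:
    """Return the first tier whose example list contains the use case.

    Defaults to 'minimal' for unknown cases, which is exactly the kind of
    optimistic assumption a real compliance review must NOT make.
    """
    normalised = use_case.lower()
    for tier, examples in RISK_TIERS.items():
        if normalised in examples:
            return tier
    return "minimal"


print(classify("credit scoring"))  # high
```

The value of even a crude table like this is that it forces every AI feature in the product to be named and placed somewhere, which is the Day 1 task in the priority list further down.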

The obligations on general-purpose AI (GPAI) models have applied since 2 August 2025; models placed on the market before that date have until 2 August 2027 to comply. GPAI obligations include technical documentation of the model, covering training data sources and energy consumption; information and documentation for downstream providers; compliance with EU copyright law for training data; and publication of a sufficiently detailed summary of the content used for training. If your product is built on a foundation model, whether your own or a third party's, and used in the EU, these obligations already shape your supply chain. The 2 August 2026 deadline is when the full obligations on high-risk AI systems become effective: conformity assessment, CE marking, post-market monitoring, registration in the EU database, and a quality management system.

ISO/IEC 42001:2023 is the international management system standard for AI. It is not legally equivalent to EU AI Act compliance, but it is the most efficient evidence base for the management system obligations the Act imposes. Specifically, ISO 42001 provides a documented AI management system meeting the quality management system requirement, an AI risk assessment and treatment process aligned with the Act's risk-based approach, AI inventory and lifecycle management satisfying the technical documentation obligations, human oversight and impact assessment processes, and continual improvement through the management review cycle. Holding ISO 42001 does not automatically make you EU AI Act compliant, but an ISO 42001 certified company has roughly 70 percent of the structural work done.

If you are a UK technology company with EU exposure and AI in your product, here is the 90-day priority list. Days 1 to 14: map your AI systems against the four risk categories and specifically identify any high-risk systems. Days 14 to 30: map your customer base by geography, identifying which customers are in the EU and which use your AI output for decision-making. Days 30 to 60: build an AI inventory covering which models you run, what data trains them, who the human in the loop is, and what decisions they influence. Days 60 to 90: decide on the certification path: ISO 42001 alone, ISO 42001 plus ISO 27001, or formal EU AI Act conformity assessment for high-risk systems.
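The Day 30 to 60 inventory is concrete enough to sketch as a record structure. The field names below are our own suggestion for capturing what the article lists (model, training data, human in the loop, decisions influenced), not a format mandated by the Act:

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """One row of the AI inventory built in days 30-60. Illustrative schema."""
    name: str
    model: str                 # which model you run (own or third-party)
    training_data: str         # what data trains or fine-tunes it
    human_in_loop: str         # who reviews or can override outputs
    decisions_influenced: str  # what decisions the output feeds into
    eu_exposure: bool          # is the output used in the EU?
    risk_tier: str = "unclassified"


inventory = [
    AISystemRecord(
        name="credit-risk-scorer",
        model="gradient-boosted ensemble (in-house)",
        training_data="historical loan repayment records",
        human_in_loop="credit analyst signs off every decline",
        decisions_influenced="consumer loan approvals",
        eu_exposure=True,
        risk_tier="high",
    ),
]

# The systems that drive the certification-path decision in days 60-90:
high_risk_eu = [r.name for r in inventory if r.eu_exposure and r.risk_tier == "high"]
print(high_risk_eu)  # ['credit-risk-scorer']
```

Filtering the inventory for high-risk systems with EU exposure, as the last line does, is what tells you whether ISO 42001 alone is enough or whether formal conformity assessment is on the table.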

EU AI Act non-compliance penalties are deliberately punitive. Maximum fines are 35 million euros or 7 percent of global annual turnover for prohibited AI practices, 15 million euros or 3 percent for non-compliance with most other obligations, and 7.5 million euros or 1 percent for supplying incorrect information to authorities, in each case whichever is higher. These figures exceed the GDPR maximums of 20 million euros or 4 percent. The likelihood of enforcement is also higher than in GDPR's first years: the EU AI Office established in 2024 has explicit enforcement powers over GPAI models and is staffing up specifically to supervise the new regime.
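Because each cap is "whichever is higher" of a fixed sum and a turnover percentage, the exposure scales with company size. A minimal worked example (the tier names are our own labels for the three bands, and the percentages follow the figures above):

```python
def max_fine(turnover_eur: float, tier: str) -> float:
    """Maximum administrative fine: the higher of the fixed cap and the
    turnover percentage for the given band. Illustrative only; SMEs are
    subject to different (lower) capping rules under the Act.
    """
    caps = {
        "prohibited_practice":   (35_000_000, 0.07),
        "other_obligation":      (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed_cap, pct = caps[tier]
    return max(fixed_cap, pct * turnover_eur)


# A company with EUR 1bn global turnover using a prohibited practice:
print(f"{max_fine(1_000_000_000, 'prohibited_practice'):,.0f}")  # 70,000,000
```

For a 1-billion-euro-turnover company, 7 percent (70 million euros) dwarfs the 35-million fixed cap; for a small firm the fixed cap dominates, which is why the fixed sums exist at all.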


Ready to get certified?

Book a free consultation to discuss your certification needs. Our team will assess your current position and recommend the fastest path to compliance.