27 October 2025

The EU AI Act through a Cyber Risk Transfer Lens: What UK & EU Businesses Need to Know

The EU Artificial Intelligence Act is no longer a ‘coming soon’ event; it’s here. The Act entered into force on 1 August 2024; bans on ‘unacceptable risk’ systems began applying on 2 February 2025; transparency and governance rules for general-purpose AI (GPAI) kicked in on 2 August 2025; and the bulk of requirements will become fully applicable from 2 August 2026 (with some high-risk obligations extending to 2 August 2027).

Crucially for UK businesses, the Act has extraterritorial reach. If you place AI systems on the EU market or your outputs are used in the EU, you can be in scope, even without an EU legal entity. In practical terms, the Act is becoming a de facto standard that many UK and global firms selling into Europe must follow.

What is the EU AI Act?

The AI Act is widely regarded as the world's first comprehensive legislative framework for AI. The European Parliament’s priority is to ensure that AI systems placed on the EU market and used in the EU are safe, while harnessing AI's potential to deliver technological advancement and promote innovation and growth.

Who does it apply to?

The framework applies to both public and private organisations, inside and outside the EU, whose AI systems are placed on the EU market or whose outputs are used in the EU. Entities in scope include providers (i.e., those developing the systems), users (referred to as ‘deployers’), importers, distributors, and product manufacturers of AI systems.

What does this mean in risk terms?

The AI Act organises obligations by risk tier. Unacceptable-risk uses (e.g., certain manipulative systems, social scoring) are banned; high-risk systems (such as safety components in regulated products or certain uses listed in Annex III) face the heaviest duties. These duties include: a risk management system, data governance and quality controls, technical documentation, logging, human oversight, robustness/cybersecurity measures, post-market monitoring, and serious-incident reporting. GPAI providers have transparency and model-governance duties, now guided by an EU Code of Practice released in July 2025. Fines are eye-watering: up to 35m euros or 7% of global turnover (whichever is higher) for prohibited practices; 15m euros or 3% for other key breaches; 7.5m euros or 1% for supplying misleading information.
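
To make the exposure concrete, here is a minimal sketch of how those fine ceilings combine with turnover, assuming the Act’s ‘whichever is higher’ formulation for undertakings (Article 99); figures are illustrative only and this is not legal advice:

```python
# Illustrative sketch of EU AI Act fine ceilings (Article 99).
# Assumes the "whichever is higher" rule for undertakings; not legal advice.

FINE_TIERS = {
    "prohibited_practice":    (35_000_000, 0.07),  # EUR cap, % of global turnover
    "other_key_breach":       (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(breach: str, global_turnover_eur: float) -> float:
    """Return the maximum fine ceiling for a given breach tier."""
    fixed_cap, turnover_pct = FINE_TIERS[breach]
    return max(fixed_cap, turnover_pct * global_turnover_eur)

# Example: a firm with EUR 600m global turnover faces a ceiling of
# max(EUR 35m, 7% x EUR 600m) = EUR 42m for a prohibited-practice breach.
print(f"EUR {max_fine('prohibited_practice', 600_000_000):,.0f}")
```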

From a cyber risk transfer perspective, three parts of the Act stand out:

1. Post-market monitoring and logging. Providers of high-risk AI must monitor performance, collect evidence of compliance, and keep automatically generated logs for specified periods (often at least six months). These records become critical evidence in claims, incident response, and regulatory defence.

2. Serious-incident reporting clocks. For high-risk AI, providers must notify national authorities as soon as a causal link (or likely link) is established, and in any event within 15 days, with faster timelines for widespread incidents and fatalities. That compresses internal triage windows and should be reconciled with cyber policy notice provisions and GDPR/NIS2 notification plans (a worked sketch of the reporting clock follows this list).

3. GPAI governance and the Code of Practice. The new Code helps model providers and downstream deployers align on technical documentation, transparency and safety controls before enforcement ramps up. Underwriters will increasingly look for objective evidence that you’re mapping to this Code (or an equivalent control framework).
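
On point 2, here is a minimal sketch of how the reporting clock might be operationalised in an incident-response runbook. The 15-day outer limit is stated in the Act; the shorter 10-day (fatality) and 2-day (widespread incident) limits reflect our reading of Article 73 and should be verified against current guidance:

```python
# Sketch: latest permissible notification date for a serious incident
# under the EU AI Act (Article 73). The 15-day outer limit is in the Act;
# the 10-day and 2-day limits are our reading and should be verified.
from datetime import date, timedelta

REPORTING_LIMITS_DAYS = {
    "serious_incident": 15,
    "death": 10,        # assumed shorter limit for fatalities
    "widespread": 2,    # assumed shorter limit for widespread incidents
}

def notification_deadline(awareness_date: date, incident_type: str) -> date:
    """Latest date to notify the national authority, counted from awareness."""
    return awareness_date + timedelta(days=REPORTING_LIMITS_DAYS[incident_type])

# Example: awareness on 1 March leaves until 16 March for a standard
# serious incident, but only until 3 March if the incident is widespread.
print(notification_deadline(date(2026, 3, 1), "serious_incident"))  # 2026-03-16
print(notification_deadline(date(2026, 3, 1), "widespread"))        # 2026-03-03
```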

How does this change Cyber (and E&O) underwriting?

Regulatory exposure becomes model-specific

Underwriters will want to know which systems are in scope (GPAI vs. high-risk vs. limited risk), where they’re deployed, and what controls apply. Expect supplemental AI questionnaires covering: data provenance, red-teaming, prompt-injection defences, model registry/inventory, approval workflows, evaluator bias testing, rollback plans, and third-party model dependencies. Firms that can evidence AI risk management (documentation, logs, and post-market monitoring outputs) will look better at renewal.
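
As an illustration of the kind of model registry underwriters may probe, here is a minimal sketch of an inventory record; all field names are our own illustrative assumptions, not taken from the Act or any insurer questionnaire:

```python
# Sketch of a minimal AI system inventory record for underwriting and
# AI Act scoping. All field names are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    GPAI = "gpai"

@dataclass
class AISystemRecord:
    name: str
    risk_tier: RiskTier
    markets: list[str]              # e.g. ["EU", "UK"]
    third_party_models: list[str]   # upstream model dependencies
    owner: str                      # accountable business owner
    red_teamed: bool = False
    rollback_plan: bool = False

registry = [
    AISystemRecord(
        name="claims-triage-assistant",
        risk_tier=RiskTier.HIGH,
        markets=["EU", "UK"],
        third_party_models=["hypothetical-foundation-model-v2"],
        owner="Head of Claims",
        red_teamed=True,
        rollback_plan=True,
    ),
]
# A registry like this answers renewal questions directly: which systems
# are high-risk, where they run, and which vendor models they depend on.
```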

Notification discipline matters

The AI Act’s incident-reporting deadlines may be shorter or different from your cyber policy conditions. Brokers should help insureds align: define what a serious incident is for high-risk AI, flow it into incident response playbooks, and pre-agree with insurers how rapid regulatory notices integrate with carrier notification, privilege considerations, and forensic workstreams.

Defence costs vs. fines

Most cyber policies cover regulatory investigations and defence costs, but cover for fines and penalties is a patchwork and is often uninsurable where local law forbids it (member states will implement penalties nationally under Article 99). Expect tighter wording around insurability, especially for prohibited-practice breaches carrying the 7%/35m euros tier. Coordinate cyber with Tech E&O and product liability programmes to prevent coverage gaps for algorithmic harm, discrimination claims, IP/copyright issues from generative AI outputs, and service outages from model failure.

Third-party/vendor risk elevates

Many enterprises will deploy models from others. The Act effectively pushes diligence upstream: demand contractual warranties (e.g., conformity with the Act, logging retention, security controls), audit rights, and indemnities aligned to the new risk tiers. Insurers will probe how you manage your model supply chain and whether your contracts mirror statutory duties (including data governance and monitoring).

Systemic accumulation

GPAI flaws, supply-chain attacks against popular model providers, or widely used safety-component AI in regulated products could drive correlated loss. Expect reinsurers and carriers to introduce AI-specific exclusions/conditions, or sublimits for specific harms unless you can demonstrate robust segmentation (e.g., ring-fenced deployments, kill-switches, model rollback) and vendor diversification.

UK context: don’t wait for Westminster

The UK still lacks a comprehensive AI statute, although a Private Member’s Artificial Intelligence (Regulation) Bill was re-introduced in 2025 to close the gap with the EU’s more prescriptive regime. Regardless of Westminster’s timetable, UK companies selling into or affecting the EU market must work to EU standards now. Insurers will treat EU compliance posture as a baseline for risk selection and pricing.

Practical playbook for insureds and brokers

1) Map your exposure. Build and maintain a live inventory of AI systems/models, tagging each by EU AI Act risk tier, market(s) of use, and business criticality. Tie this to a RACI for ownership and evidence preservation. Underwriters will ask for it.

2) Prove post-market vigilance. Stand up your post-market monitoring programme with defined KPIs, near-miss capture, and retraining triggers. Ensure log retention is operational (and retrievable) across providers and deployers; a simple retention check is sketched after this playbook.

3) Rehearse the reporting clock. Update incident response runbooks to handle AI serious-incident triage and the 15-day deadline; align counsel, DPO, and brokers on sequencing notifications to regulators and insurers.

4) Tighten contracts. Flow down AI Act obligations to model vendors and integrators. Require transparency artefacts (model cards, training data summaries where applicable), documented safety testing, and security attestations. Reference the EU GPAI Code of Practice where relevant.

5) Tune your insurance stack. With your broker, map AI perils to policies:

• Cyber for security incidents, privacy breaches, business interruption, and regulatory defence;

• Tech E&O / MPL for performance failures, hallucination-driven service errors, or negligent integration;

• Product liability where AI is a safety component.

Consider endorsements addressing regulatory investigations under the AI Act.

6) Document, document, document. The AI Act is evidence-heavy. Keep conformity assessments, testing results, bias evaluations, and human-oversight procedures up to date; they are both your regulatory shield and your insurance renewal narrative.
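
Returning to step 2 above: a minimal sketch of an automated check that required logs remain retrievable for the Act's minimum period. We assume the six-month floor noted earlier; confirm the period that actually applies to your system, and treat all names here as illustrative:

```python
# Sketch: verifying that automatically generated logs for a high-risk AI
# system cover at least the retention floor (assumed six months here,
# per the Act's minimum; confirm the period applying to your system).
from datetime import datetime, timedelta

RETENTION_FLOOR = timedelta(days=183)  # roughly six months

def retention_gap(oldest_log: datetime, now: datetime) -> timedelta:
    """Positive result = shortfall against the retention floor."""
    covered = now - oldest_log
    return RETENTION_FLOOR - covered

# Example: if the oldest retrievable log is 120 days old, roughly two
# more months of history are still missing from the evidence trail.
now = datetime(2026, 9, 1)
gap = retention_gap(now - timedelta(days=120), now)
print(f"Shortfall: {gap.days} days" if gap > timedelta(0) else "Floor met")
```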

The EU AI Act is reshaping operational risk—and therefore cyber risk transfer—on both sides of the Channel. Companies that can show disciplined AI governance, swift incident reporting, and verifiable post-market monitoring will not only lower their regulatory risk; they’ll also present as better risks to insurers, preserving coverage breadth and price in a market that’s now pricing AI controls into every deal.

Let's talk


Nick Barker

Technology and Cyber Practice Leader

Nick_Barker@ajg.com

James Wall

Executive Director

James_Wall@ajg.com


Arthur J. Gallagher (UK) Limited is authorised and regulated by the Financial Conduct Authority. Registered Office: The Walbrook Building, 25 Walbrook, London EC4N 8AW. Registered in England and Wales. Company Number: 119013.

The information provided in this article is for general informational purposes only and does not constitute legal, financial, or professional advice. While we have made every effort to ensure the accuracy and reliability of the information presented, the content is based on our interpretation of the EU AI Act and its potential implications for cyber risk transfer as of the date of publication.

Laws, regulations, and industry practices are subject to change, and the application of these laws may vary depending on specific circumstances. Readers are encouraged to consult with qualified legal, insurance, or risk management professionals to obtain advice tailored to their individual needs and circumstances.