Navigating the New Reality: A Practical Guide to EU AI Act Compliance for Insurers
The European Union's Artificial Intelligence Act (AI Act) is now law, establishing the world's first comprehensive regulatory framework for AI. For insurance companies operating in or serving the EU market, this is not a distant concern—it's an immediate operational and strategic imperative. Whether you're using AI for risk assessment in life insurance, automated claims processing, or customer service chatbots, the Act sets binding rules. In this expert analysis, Christian Nölke, Principal Consultant at adesso SE, outlines a clear, seven-step roadmap to achieve AI compliance while harnessing the technology's potential for innovation in health insurance, property & casualty insurance, and beyond.
Why the AI Act Matters for Insurance: Clarity and High-Risk Designations
While the 144-page regulation may seem daunting, it ultimately provides crucial clarity, especially for pan-European insurers. Instead of navigating a patchwork of national rules, you now have a single, unified standard across the EU internal market. However, the Act's Annex III demands particular attention from insurers. It definitively classifies certain systems as "high-risk AI systems," including:
- Systems for risk assessment and pricing concerning natural persons in life and health insurance.
- Systems for recruitment and selection of natural persons, such as CV screening tools.
Both applications are widespread across the industry, and their "high-risk" status triggers stringent requirements for risk management, data governance, transparency, and human oversight.
The 7-Step Roadmap to AI Act Compliance
Moving from regulatory anxiety to operational readiness requires a structured approach. Here is the actionable seven-step process:
Step 1: Understand the Regulation
Don't be overwhelmed by the volume. Systematically analyze the AI Act, focusing on articles relevant to your business. Pay close attention to the recitals, which provide essential context for interpreting the articles, and the aforementioned Annex III. Assemble a cross-functional team from legal, compliance, IT, and business units to lead this analysis.
Step 2: Conduct an AI Inventory
You cannot manage what you don't measure. Create a comprehensive register of all AI systems currently in use, in development, or under evaluation. This includes:
- Internally developed tools for underwriting automation or fraud detection.
- Third-party services and vendor products that incorporate AI, such as certain customer relationship management (CRM) or claims management systems.
This inventory is the foundation for all subsequent compliance work.
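An inventory register like the one described above can start as a simple structured record per system. The sketch below is purely illustrative — the field names (`owner`, `vendor`, `purpose`) and status values are assumptions, not a prescribed schema; a real register would typically live in a GRC tool or asset database.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    IN_USE = "in use"
    IN_DEVELOPMENT = "in development"
    UNDER_EVALUATION = "under evaluation"

@dataclass
class AISystemRecord:
    """One entry in the AI inventory register (illustrative fields only)."""
    name: str
    owner: str                      # accountable business unit
    status: Status
    purpose: str                    # e.g. "underwriting automation", "fraud detection"
    vendor: Optional[str] = None    # None for internally developed systems

# Example register covering both internal tools and vendor products
register = [
    AISystemRecord("FraudNet", "Claims", Status.IN_USE, "fraud detection"),
    AISystemRecord("CRM-Scoring", "Sales", Status.UNDER_EVALUATION,
                   "lead prioritisation", vendor="VendorX"),
]

# The register makes simple compliance queries trivial, e.g. all third-party AI:
third_party = [r.name for r in register if r.vendor is not None]
```

Even this minimal structure already supports the queries later steps depend on, such as listing every vendor-supplied system that must go through procurement checks.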
Step 3: Develop an AI Strategy
Your approach to AI must be strategic, not ad-hoc. Formulate a dedicated AI Strategy that is derived from and consistent with your overall business strategy, IT strategy, and frameworks like DORA (Digital Operational Resilience Act). This strategy should adopt a risk-based approach and incorporate principles from existing guidelines, such as the BaFin's (German Federal Financial Supervisory Authority) paper on "Big Data and Artificial Intelligence." Consider adopting a holistic framework like Trustworthy AI to guide ethical and compliant implementation.
Step 4: Establish Governance & Processes
Translate strategy into action by embedding AI compliance into your core processes. This includes:
- Updating project approval and architecture governance to include AI-specific checks and documentation requirements.
- Integrating AI compliance criteria into software procurement and vendor management processes.
- Clearly defining roles and responsibilities (e.g., an AI Compliance Officer).
- Formally engaging key internal stakeholders: Data Protection, Risk Management, Legal & Compliance, Works Council, and Enterprise Architecture.
Step 5: Implement a Risk Classification & Blueprint System
Create clear criteria to classify your AI systems according to the Act's risk tiers (unacceptable, high-risk, limited risk, minimal risk). For each risk class, develop a catalog of mandatory control measures. A highly effective tactic is to create compliance blueprints—detailed, exemplar implementations for common system types (e.g., a high-risk underwriting tool). These blueprints, developed with all relevant stakeholders, become reusable templates, ensuring consistency and speeding up future compliance efforts.
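The classification-plus-control-catalog idea can be sketched as a small triage function. This is a deliberately simplified illustration under assumed rules — the Annex III matching, the `interacts_with_humans` heuristic, and the control lists are placeholders; actual classification requires legal assessment of each system against the Act's full criteria.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Hypothetical shortlist of Annex III use cases relevant to insurers
ANNEX_III_USE_CASES = {
    "life/health risk assessment and pricing",
    "recruitment and selection",
}

def classify(use_case: str, interacts_with_humans: bool) -> RiskTier:
    """Simplified triage -- a real classification needs legal review."""
    if use_case in ANNEX_III_USE_CASES:
        return RiskTier.HIGH
    if interacts_with_humans:  # e.g. chatbots carry transparency duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Per-tier catalog of mandatory control measures (illustrative entries)
CONTROL_CATALOG = {
    RiskTier.HIGH: ["risk management system", "data governance",
                    "human oversight", "technical documentation"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}

tier = classify("recruitment and selection", interacts_with_humans=False)
required_controls = CONTROL_CATALOG[tier]
```

Pairing the tier with a fixed control catalog is what turns a one-off legal analysis into the reusable blueprint the text describes: every newly classified system inherits the full set of measures for its tier.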
Step 6: Assess Existing Systems
Apply the new classification rules and control catalogs to all systems identified in your inventory (Step 2). This assessment will validate your processes and blueprints, revealing any gaps that need immediate remediation for legacy AI applications.
Step 7: Operationalize & Monitor
Integrate the new AI governance fully into business-as-usual operations. All new projects, purchases, and services must flow through the established compliant processes. Crucially, implement ongoing regulatory monitoring. As with GDPR and DORA, expect continuous guidance from supervisory authorities, court rulings, and best practice evolution. Your AI strategy and controls must be dynamic to adapt to this evolving landscape.
The Path Forward: Time for Action
The implementation clock is ticking. While some details—like which German authority will be the lead supervisor—remain unclear, insurers cannot afford to wait. The experience gained from implementing GDPR and DORA is invaluable. Proactive, structured action now is the only way to ensure compliance, maintain competitive parity, and responsibly unlock AI's potential to transform customer experience, operational efficiency, and product innovation in the insurance sector.
Christian Nölke is a Principal Consultant at adesso SE, specializing in digital transformation and regulatory compliance for the financial services and insurance industries.