11 November 2025

AI and the Human Edge in Aviation

The aviation industry has a long legacy of precision engineering, establishing its reputation as one of the world’s safest, most trusted and technologically advanced sectors. Today, GenAI is elevating this legacy even further, delivering advanced analytics, enhancing anomaly detection, supporting real-time decision-making and unlocking operational efficiencies.

Reflecting this momentum, the global AI market in the aerospace and defence sector is projected to reach USD 43.02 billion by 2030, growing at a compound annual growth rate (CAGR) of 9.8%.

While the potential of AI is undeniably transformative, its adoption continues to be met with a degree of hesitancy across the aviation sector. This caution stems from a range of complex challenges, spanning technical limitations, operational disruptions and ethical considerations. Steering this AI revolution, therefore, requires a strategic, cross-functional collaboration to interpret AI’s evolving capabilities and ensure its responsible integration throughout the aviation ecosystem.

AI in aviation today

AI is no longer a future-facing concept in aviation; it’s a present-day operational asset powering automation, scheduling and system optimisation. With GenAI, this integration is undergoing a major shift across a myriad of aviation sectors, elevating safety standards and redefining operational models. Today, AI is transforming aviation through:

  • Predictive maintenance: AI-powered virtual replicas of aircraft systems (digital twins) analyse critical avionics data, such as engine performance and hydraulics, to optimise maintenance schedules, design efficient systems and enhance component lifecycles, saving millions in repair costs, cancellations and downtimes.
  • Proactive risk mitigation: AI-driven asset-health monitoring systems predict component failures, and even specific failure scenarios, with over 90% accuracy, weeks in advance.
  • Runway intelligence: AI-powered Aircraft Braking Action Report (ABAR) systems integrate Synthetic Vision Technology, Automatic Dependent Surveillance – Broadcast (ADS-B) data and GPS telemetry to monitor aircraft movement, predict runway conditions and enhance braking efficiency, grounding critical decisions in objective data. This improves taxi guidance in low-visibility and congested conditions for both pilots and controllers.
  • Flight path optimisation: GenAI-enabled collision avoidance systems, like Airborne Collision Avoidance System X, assess weather, traffic, and terrain data to minimise false alerts, recommend fuel-efficient routes, and enable dynamic mid-flight rerouting.
  • Autonomous flight systems: AI-driven drones and electric vertical take-off and landing aircraft serve cargo, medical and urban air transport functions, using real-time kinematic networks for centimetre-accurate navigation in complex environments.

Despite AI’s technological prowess, its greatest strength lies in augmenting human decision-making: translating complex data into strategic foresight. “Technology is transforming us into strategic thinkers rather than just general doers,” says Bill Parnell, senior partner of the Gallagher Specialty Aerospace team. “The next generation of engineers is being trained to leverage AI and will be, in many ways, products of this technology.”

Bridging AI’s trust gap

Even as AI continues to be integrated into aviation systems, a lingering trust gap remains, particularly around automation complacency, cautions Roger Sethsson, senior advisor and consultant at Sethsson Aerospace Advisory AB. "Automation has significantly improved flight safety, but there have been numerous incidents where humans and technology have failed to interact. Therefore, a successful integration of AI will also require improvements in human and AI interaction. Non-punitive reporting of incidents in AI will be of paramount importance to improve safety and build trust."

While AI-driven insights offer clarity and strategic foresight, adoption in aerospace remains cautious, a reluctance stemming from concerns around job displacement, system opacity and emerging liabilities.

  • The trust barrier: AI’s decision-making opacity, often dubbed the black box, undermines transparency and erodes user trust. Additionally, even in scenarios where AI may offer superior support, an inherent ‘algorithm aversion’ may pose a significant barrier to adoption.
  • Exposure to new liabilities: AI systems introduce fresh risks, from cyber vulnerabilities to ethical bias, that impact mission-critical aviation functions like scheduling and alerting, reinforcing hesitation in adoption.
  • Evolving liability landscapes: As AI systems become more autonomous, liability models are gradually expanding to account for machine-led errors alongside human ones. While regulators are moving towards a more shared-accountability framework to foster collective responsibility, this evolution has stakeholders reassessing their risk exposures.
  • Workforce impact: The rise of AI has intensified job security concerns, stemming from perceived loss of relevance and expertise. While these apprehensions persist, most organisations are moving toward integrated roles over outright displacement. Additionally, operator overreliance on AI, resulting in reduced situational awareness and agility, remains a critical operational risk.

Addressing these challenges requires human-centred AI design and ongoing upskilling to ensure people remain active decision-makers, not passive monitors. "AI’s challenge isn’t technological," notes Parnell. "It’s in integration: ensuring those who engage with it truly understand it."

Regulatory signals for the aerospace AI revolution

As AI’s operational complexities continue to surface, regulatory bodies are actively shaping frameworks to balance innovation with oversight. The challenge is that technological innovation routinely outpaces regulation: by the time a rule is in place, newer technologies that also need regulating are already in use.

Airworthiness authorities such as the US Federal Aviation Administration (FAA) and the European Union Aviation Safety Agency (EASA) have each shown their readiness to integrate AI. Their roadmaps highlight safety modelling, system certification and human-AI teamwork, affirming that AI is a force multiplier for human decision-makers and not a substitute. However, this acceptance comes with clear guardrails designed to uphold safety and public trust.

Currently, aircraft AI/ML (Machine Learning) certification is limited to low-criticality functions, including predictive maintenance and passenger services, which are classified as ‘Software Level D’. The Level D certification does not require verification of lower-level requirements (LLRs), source code reviews and structural coverage that are necessary at higher levels (A, B and C). As such, certification of safety-critical AI functions, such as flight control and collision avoidance, presents challenges.

New legal doctrines now assign liability across a broader ecosystem: developers, operators, manufacturers and even data providers. Additionally, the EU’s proposed updates to the Product Liability Directive and AI Act shift how AI is viewed entirely: not as a service but as a product. This subtle change in language signals a move towards shared accountability, where the burden of proof may rest on proving system transparency and robustness.

For stakeholders, this represents both a mandate and a defining inflexion point: harnessing AI’s transformative potential while preserving human oversight and judgment will become common practice.

The role of insurers in fostering safe, scalable AI adoption

“With today’s fast-paced progress that comes with limited time to test and adapt, the biggest question is how to onboard AI while upholding quality and safety,” notes Sethsson. “That is where insurance plays a vital role.”

As AI adoption accelerates, insurance is evolving beyond risk mitigation to become a strategic enabler of innovation, shaping how AI is deployed, trusted and scaled. In the aviation ecosystem, insurers can drive impact through:

  • Modelling transparency: By demanding explainable AI, insurers can incentivise algorithmic visibility and help align the sector with safety, security, ethical and regulatory standards. Insurers need to know how AI software is used, whether in critical or non-critical applications, along with its data sources (objective or subjective), algorithms, mathematical modelling, safety management systems (SMS) and lessons learnt.
  • Scenario modelling for emerging risks: With traditional risk models falling short, insurers can collaborate with aviation stakeholders to simulate edge cases, such as AI misinterpretation of sensor data or predictive bias in maintenance prioritisation, and develop coverage frameworks that reflect these risks.
  • Enabling scalable adoption: Through tailored underwriting and performance-based coverage plans, insurers can support responsible deployment of AI at scale, protecting against algorithmic liability, autonomous system failures and even reputational risks tied to AI decisions.
  • Protecting against cyber and privacy risks: Insurers can offer cyber insurance solutions and incentivise robust data protection measures. As AI systems rely on vast amounts of data, insurers can help aviation stakeholders implement steps to protect sensitive information, ensure compliance with privacy regulations and manage AI-driven cyber threats.
  • Incentivising proactive safety measures: Examples include AI monitoring systems that use facial tracking to scan for signs of pilot fatigue and provide real-time alerts. Similar to Driver Monitoring Systems (DMS) in automobiles, these systems demonstrate AI’s powerful role in proactively mitigating accident risks.

Effective human-AI teaming requires clear communication, shared situational awareness and a well-defined allocation of tasks. Without these foundational elements, reluctance to adopt AI is likely to grow.

Aviation brokers can serve as a critical bridge to AI adoption, helping clarify the technology’s implications and associated risks, notes Parnell. “This evolving landscape offers aviation brokers and insurers a strategic opportunity to harness AI-driven data insights, identifying emerging risks and exposures while guiding stakeholders through complex claims and shifting risk appetites. The result: greater confidence and readiness to adopt AI.”

Tech-enabled risk intelligence: The Gallagher Aeropulse advantage

Bespoke exposure analysis tools, such as Gallagher’s Aeropulse, enable brokers to bridge the gap between human hesitancy surrounding AI complexities and successful adoption. By delivering actionable insights, including incident reports and post-accident claims, Aeropulse strengthens risk assessment and enhances forecasting accuracy, empowering stakeholders to manage liability coverage with greater precision and clarity.

Evolving from operators to strategists

As AI transforms aviation, the industry is shifting from a stance of passive adoption to proactive collaboration, centred on transparency, agile learning and forward-looking risk models that anticipate tomorrow’s challenges. A tripartite commitment among stakeholders, regulators and insurers to shared safety metrics will be key to ensuring AI adoption in aviation remains safe, scalable and resilient.

“AI adoption in aviation requires deliberate human engagement,” concludes Parnell. “Flight crews and operators must embrace the technology with confidence. As manufacturers and regulators refine how AI systems are designed and interpreted, insurers have a pivotal role to play, evolving coverage models and supporting the AI-human synergy.”

Talk to our experts and navigate the human-AI challenges in aviation with confidence.

Let's talk


Bill Parnell

Senior Partner

bill_parnell@ajg.com

Roger Sethsson

Sethsson Aerospace Advisory AB

roger@sethssonaerospace.com


Arthur J. Gallagher (UK) Limited is authorised and regulated by the Financial Conduct Authority. Registered Office: The Walbrook Building, 25 Walbrook, London EC4N 8AW. Registered in England and Wales. Company Number: 119013.