As artificial intelligence rapidly finds its way into cockpits, control rooms, and maintenance hangars, civil aviation faces a familiar dilemma in a new guise: how to harness technological progress without eroding the human authority that underpins safety. A joint position paper from the European Cockpit Association (ECA) and the International Federation of Air Line Pilots’ Associations (IFALPA) delivers a clear and uncompromising message: AI has a role in aviation, but only as a support tool. Command, accountability, and final decision-making must remain firmly human.
The aviation sector has always been an early adopter of automation. From fly-by-wire systems to sophisticated flight management computers, technology has reshaped how aircraft are flown and managed. Yet every major safety advance has been underpinned by one constant principle: humans remain in control. According to ECA and IFALPA, that principle must not be diluted in the race to deploy AI-driven systems.
The Promise of AI, and Its Limits
There is little dispute that AI offers genuine opportunities. Properly designed systems can sift through vast datasets, identify patterns invisible to the human eye, and support pilots, air traffic controllers, and maintenance personnel with timely, data-driven insights. In areas such as flight data monitoring, predictive maintenance, and decision support during high-workload phases of flight, AI may enhance situational awareness and safety margins.
However, the pilots’ organisations stress that these advantages come with inherent limitations. AI systems operate within defined parameters, known as operational design domains. When conditions fall outside those boundaries, as they inevitably do in abnormal or novel situations, human perception, experience, and intuition become decisive. Unlike algorithms, humans can recognise when something “does not look right”, even when data appears nominal.
The danger, the paper warns, lies in over-reliance. Excessive or uncritical dependence on automated assistance risks dulling pilots’ sensory awareness and decision-making skills, precisely when they are needed most. History has already shown that automation complacency can have fatal consequences.
Why Human Perception Still Matters
One of the paper’s strongest arguments centres on perception. Pilots do not merely process information; they interpret it in context. They know what to prioritise, what to question, and when to deviate from standard solutions. This capability becomes critical in complex, time-pressured, or degraded scenarios where data may be incomplete, misleading, or contradictory.
AI systems, by contrast, rely on training data and predefined logic. They may fail silently when pushed beyond their design limits, or worse, generate plausible-sounding but incorrect outputs, a phenomenon increasingly referred to as “hallucination”. In a safety-critical environment, such behaviour is unacceptable unless it is clearly identifiable and fully transparent to the human operator.
For ECA and IFALPA, this reinforces a non-negotiable requirement: AI must never replace human judgment. It must reinforce it.
Calibrated Trust, Not Blind Faith
Trust is often cited as a prerequisite for the successful deployment of AI. The pilots’ organisations challenge this framing. Trust, they argue, should not be engineered through branding or expectation management. It should be the outcome of transparency, certification, training, and operational experience.
They advocate “calibrated trust”, a level of confidence that accurately reflects a system’s capabilities and limitations. Pilots must be able to understand how AI-generated options are produced, how reliable they are, and under what conditions they may fail. AI systems should present options, not directives, and must always be overridable by pilot action.
So-called “black box” AI, whose internal logic cannot be scrutinised or reconstructed, is deemed incompatible with civil aviation. If a system influences safety-critical decisions, it must be explainable, auditable, and certifiable to the same standards as any other aircraft system.
Data: A New Fault Line in Aviation
Beyond the cockpit, the paper highlights a growing and often overlooked issue: data rights. Modern aviation generates vast amounts of operational data, much of it produced directly by pilots, controllers, and engineers. Flight Data Monitoring programmes, for example, are accepted by professionals on the basis of strict confidentiality and exclusive use for safety purposes.
ECA and IFALPA warn that the expansion of AI threatens to blur these boundaries. Data collected for safety must not be repurposed for commercial exploitation, performance surveillance, or disciplinary action without explicit, collective consent. The organisations argue that professionals who generate operational data retain legitimate rights over how it is used—and should share in any commercial value derived from it.
Particular caution is urged regarding biomonitoring technologies, such as eye-tracking or physiological sensors. While potentially useful for safety and training, such data is deeply personal. Without strict safeguards, transparency, and voluntary participation, biomonitoring risks undermining trust and violating fundamental privacy rights.
Cybersecurity: The New Safety Frontier
As AI becomes embedded in aviation systems, cybersecurity emerges as a critical safety concern. AI models are vulnerable to data poisoning, adversarial manipulation, and supply-chain attacks that could distort outputs or compromise system integrity. Even well-functioning systems may introduce risk if their outputs are poorly contextualised or insufficiently validated by human operators.
The pilots’ organisations call for robust cybersecurity frameworks tailored to AI, including strong authentication, encryption, anomaly detection, and secure update mechanisms. Crucially, they insist that human override must remain possible at all times, ensuring that pilots and other professionals can intervene if an AI system behaves unexpectedly.
Training for the AI Era
AI will also reshape training, but not in the way some predict. Rather than reducing the need for human skill, automation increases the premium on critical thinking, decision-making under stress, and intuitive judgment. Training programmes must ensure that pilots continue to develop these competencies, even as AI-supported tools become more prevalent.
While AI may assist in training design, performance analysis, and feedback, the paper is unequivocal on one point: pilot performance must only ever be assessed by trained pilots. Algorithms may inform, but they must not decide.
Accountability Cannot Be Automated
At the heart of the ECA–IFALPA position lies a simple principle: accountability and authority must align. If pilots, controllers, and engineers are held responsible for operational outcomes, they must retain commensurate control and understanding of the systems they use.
AI should therefore remain a support tool, not an executive agent. Clear regulatory frameworks are required to define its role, limits, and certification requirements. AI systems must be fully integrated into existing safety management systems and subject to continuous oversight and improvement.
A Human-Centred Future
The position taken by Europe’s pilots and IFALPA is neither anti-technology nor nostalgic. Aviation has always evolved through innovation—but safely, cautiously, and with humans at the centre. AI has the potential to strengthen safety and efficiency, but only if it is deployed in a way that respects human expertise, protects professional data rights, and preserves clear lines of responsibility.
In civil aviation, where the margin for error is unforgiving, one principle remains unchanged: technology may assist, but humans must decide. AI must support, not replace, the people who keep aircraft, passengers, and skies safe.