Aerospace: human intervention and collaboration in AI and cyber security

From: techUK
Published: Thu Mar 10 2022


Recently, I was invited to take part in a discussion with Oxford University academics and other industry leaders. It was wide in scope, but focused on how to use AI to protect the aerospace industry from an increasing range and number of cyber threats and attacks.

Here are some of the key points...

The role of human intervention in aerospace security

Human/machine teaming is an emerging, but critical, research area in autonomy. Deploying any type of autonomous system first requires a big-picture view of technical robustness, safety, and governance - who can access data, how humans and machines are teamed, and how the interfaces between them are built.

Relying on human intervention is a key security tenet of autonomous solutions. But it's fraught with peril - for example, an unstable system can still pose a threat whether you have a well-trained pilot in the cockpit or not.

Even if a system is effective, can we be sure the human intervention will work? The more reliable an autonomous system becomes, the easier it is for operators to become complacent, and skills that are rarely exercised can atrophy - making effective intervention harder when it is finally needed.

And what about public confidence? Even though the technology could save millions of lives, the public still expects a record of zero harm before trusting autonomous cars. Typically, people only remember the mistakes.

It was the same when robots and humans started sharing the factory floor, so it's up to technologists and academics to convey the harder-to-see data that enlightens and educates.

Building cyber security into design

AI was being used to build autonomous models long before cyber security became a pressing concern, so security must now be built in as part of the design process rather than bolted on afterwards. Data is key to this: it offers a potential advantage over attackers, allowing us to gather and respond to real-life examples that theoretical models cannot adequately capture.

"We have to start thinking about cyber security as we're designing systems."

- Dr Danette Allen, NASA

Safety architecture is also crucial - both internally and from a regulatory perspective. How do you prove that the thing that needs to be secure actually is secure? Without many years of field trials, we can never really know. And we can't necessarily prove a system is secure - only that it is likely to be - by showing it is as safe as is reasonably practicable and that human intervention remains feasible.

Trust will be crucial to autonomy succeeding. When introducing AI, we must consider how it supports trust and how we can ensure it identifies anomalies rather than contributing to them. And increasingly complex systems mean we can't think about academic, industrial or regulatory concerns in isolation. We need a joined-up approach covering everything - technology, policy, law, and strategy.

Collaboration is crucial

Collaboration between industry and academia is key to tackling cyber security threats. The Trustworthy Autonomous Systems programme was set up to foster exactly this kind of collaboration and works for everyone involved, but more international collaboration is needed. While the US and Europe have typically been at the forefront of global aviation standards, we'll need a more globalised approach - exemplified by the US-led international spaceflight programme, Artemis, where the US and its allies work on systems, protocols and standards that meet everyone's needs.

Standards and technologies are emerging from new countries and regions, too. As we start to leave behind the legacy of "safety but no security", it'll be interesting to see what problems arise from systems co-opted from different domains with different threats. Each will require the transfer of capabilities across domains (sometimes referred to as permeable boundaries).

This article was authored by Paul Gosling, CTO of Thales UK, and Fellow of the Royal Academy of Engineering.
