Trust in Increasingly Autonomous Aircraft
As we move towards increasingly autonomous systems, traditional human roles will inevitably change. How do we ensure trust in these systems is maintained as human control continues to evolve?
"Autonomous aircraft" is a term we hear a lot; however, it is not an accurate reflection of the technology that exists within the industry today. When we talk about a fully autonomous aircraft, we are talking about an aircraft that would require neither a pilot nor air traffic controllers.
"Increasingly autonomous" is a more accurate description of existing capabilities, which fall somewhere between automatic systems and the more complex systems that move towards full autonomy. We are not at the fully autonomous stage yet; that stage would involve more adaptive and learning systems.
An adaptive system is one that changes its behaviour in response to its external environment. For example, an increasingly autonomous aircraft with a detect-and-avoid capability is an adaptive system, as it is able to detect and respond to potential conflicts.
Such a system would do a good job of avoiding static obstacles, and even other autonomous aircraft, particularly if communication is involved. But what about more unpredictable obstacles, such as a flock of birds, noting that drones will predominantly fly in low-altitude airspace?
A learning component added to an adaptive system should ideally improve its outputs over time. If a learning component were introduced to an avoidance system, it could gradually optimise avoidance behaviour based on data from previous encounters, such as the appearance, size and heading of the birds.
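To make that distinction concrete, here is a minimal sketch in Python. It is purely illustrative: the class names, the fixed separation margin and the crude update rule are all assumptions, not a description of any real detect-and-avoid system. The adaptive rule only reacts to what the sensors report now; the learning variant also tunes its margin from logged encounters.

from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float        # current range to the obstacle
    closing_speed_ms: float  # positive if converging

class AdaptiveAvoider:
    """Adaptive: reacts to the current sensor picture, never changes its rule."""
    def __init__(self, margin_m: float = 150.0):
        self.margin_m = margin_m  # minimum separation before manoeuvring

    def should_avoid(self, obstacle: Obstacle) -> bool:
        # Manoeuvre if the obstacle is converging and inside the margin.
        return obstacle.closing_speed_ms > 0 and obstacle.distance_m < self.margin_m

class LearningAvoider(AdaptiveAvoider):
    """Learning: additionally adjusts its margin based on past encounters."""
    def record_encounter(self, miss_distance_m: float, desired_m: float = 50.0):
        # Crude update rule: widen the margin after close calls,
        # shrink it slightly after comfortable misses.
        error = desired_m - miss_distance_m
        self.margin_m = max(50.0, self.margin_m + 0.5 * error)

# Usage: the adaptive rule stays fixed; the learning one drifts with experience.
learner = LearningAvoider()
for miss in [40.0, 30.0, 45.0]:   # three close encounters, e.g. with birds
    learner.record_encounter(miss)
print(learner.margin_m)            # margin has widened after the close calls

The point of the sketch is simply that the learning variant's behaviour tomorrow depends on the data it saw today, which is exactly what makes such systems harder to predict and to certify.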
If we are to move toward increasingly autonomous systems, and ultimately, fully autonomous aircraft, trust must be engendered in every facet of the civil aviation system. Trust is a multifaceted concept, an amalgam of elements including accountability, transparency and predictability.
A common means of achieving higher levels of trust in autonomous systems is to retain certain human roles as a form of oversight and redundancy. This approach is effective because the elements that make up trust, such as quality and credibility, are perceived properties. Embedding human oversight therefore promotes a higher perception of trust.
The downside is that these human roles can become superficial, and redundant from a performance perspective. It's a tough balance to strike, especially given the popular narrative of machines taking over and making human roles obsolete. Perhaps we need to reconsider these roles through a more pragmatic lens: traditional human roles being augmented by machines rather than replaced by them.
Human-machine integration is a compelling means of leveraging the strengths of each party to achieve a mutually beneficial outcome. However, if human roles are dictated by the demand for perceived properties, such as trust, we lose the strategic benefits of that integration.
Adaptive learning systems pose an interesting problem: because we cannot predict how they will adapt, they are inherently unpredictable. Engendering trust in such a system will be very challenging. How can we trust an adaptive learning system which, by its nature, carries some degree of unpredictability?
When considering how to integrate human roles with increasingly autonomous systems and aircraft, the concept of meaningful human control offers a useful framework for understanding this space.
At present, there is no single agreed-upon definition of meaningful human control. Broadly speaking, it implies non-superficial human involvement in functions that are considered necessary for a human to perform.
Meaningful human control is most commonly associated with autonomous weapons systems, where it captures the argument that humans must remain involved in systems capable of taking life. However, its implications and significance are not limited to lethal weapons; the concept arises in most high-stakes contexts, including increasingly autonomous vehicles.
One reason for this is our ability to respond to scenarios that are unusual or uncommon. Increasingly autonomous systems are designed around a training data set, which means they are likely to underperform in atypical or novel scenarios that are unfamiliar to the system. In this context, humans provide a level of redundancy, aiding error detection and recovery.
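As an illustration of that kind of redundancy, the short Python sketch below shows one way a system might defer to a human operator when its confidence that a scenario resembles its training data drops below a threshold. The threshold value, function names and actions are invented for illustration; they do not represent any certified handoff architecture.

from typing import Callable

# Hypothetical handoff rule: act autonomously only when confidence that the
# current scenario resembles the training data is high; otherwise hand off
# to a human operator for error detection and recovery.
CONFIDENCE_THRESHOLD = 0.85  # assumed value, for illustration only

def decide(scenario_confidence: float,
           autonomous_action: Callable[[], str],
           human_review: Callable[[], str]) -> str:
    if scenario_confidence >= CONFIDENCE_THRESHOLD:
        return autonomous_action()   # familiar scenario: proceed autonomously
    return human_review()            # atypical scenario: defer to a human

# Example: a novel scenario (low confidence) triggers the human fallback.
print(decide(0.6,
             autonomous_action=lambda: "continue planned avoidance manoeuvre",
             human_review=lambda: "alert remote operator and await instruction"))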
While our adaptability is beneficial in outlier cases, our variability impedes predictability and consistency. With the addition of human control, autonomous systems will require the capacity to monitor, forecast and react in an environment with a higher degree of uncertainty.
An additional motivation is legal liability and what's known as an accountability gap. If a mid-air collision occurs between two fully autonomous aircraft, who is responsible or liable for the incident?
This leads to one of the more complex reasons for meaningful human control: moral and ethical decision-making. Machines do not make decisions based on empathy and compassion. As humans, we are capable of interpreting situations in ways that are sensitive to context and human values. This moral dimension of our involvement is difficult to quantify and to train a system against.
The counter-argument is that humans bring our own biases to the table, whether intentionally or not, so the claim that humans will inherently make ethical decisions does not always hold true.
An additional layer to consider when determining human control is efficiency. While humans excel at novel tasks thanks to our creativity, imagination and critical thinking, we will always be outmatched by autonomous systems when it comes to computation and processing large volumes of data.
Some non-safety-critical tasks are so trivial that the potential reduction in performance from removing the human element is almost negligible compared with the time and efficiency benefits gained from autonomy.
Overall, when considering how human roles will evolve with the transition to increasingly autonomous systems, it's important to examine present human roles: which elements are critical in a high-risk scenario, and which can safely and advantageously be replaced or transitioned.
Eliminating the need for continuous human oversight requires a system architecture that supports more autonomous technology, in addition to supporting intermittent human oversight and control.