AI Accountability: Should Artificial Intelligence Be Held Liable When Things Go Wrong?

My thoughts on whether we can hold AI, or any other emerging technology, liable when something goes wrong.

I was at a workshop the other week on AI in the military domain and the topic of liability came up. The age-old question of “if something goes wrong, can we hold machines liable?” was asked. This is a question that has been floating around the AI and emerging technology spheres for a while now.

What I find interesting about this question is that people don’t apply it outside of AI or emerging technology more generally. Car accidents aren’t blamed on cars and plane crashes aren’t blamed on planes, so why would incidents involving AI be blamed on AI?

If a car accident is caused by a malfunction with the car - for example, faulty brakes - the liable party would be the car manufacturer, the mechanic who repaired or replaced the brakes, or the manufacturer of the brakes.

If a plane crashes, an entire investigation has to take place before a liable party is identified. That investigation interrogates everyone and everything, from the pilots and maintainers to the manufacturers, air traffic controllers and regulators.

AI is no different. At the end of the day, AI is a capability that is designed by humans and used by humans. What I think people conflate when talking about liability and AI are the ideas of intelligence and agency.

The independent decisions that AI systems are capable of making - or are claimed to be capable of making - can leave people with the impression that these systems are intelligent to the point of reasoning. Reasoning implies sentience, an awareness or understanding of one’s feelings. AI is not “intelligent” in this way.

Decisions made by AI are based on objective functions and available data. Objective functions that are determined and programmed by humans. Data that is made available by humans. What AI offers is really an illusion of intelligence.
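
To make that concrete, here is a minimal, hypothetical sketch (every value and name below is made up for illustration) of how an “AI decision” reduces to picking whichever option scores best against a human-written objective evaluated on human-supplied data:

```python
# Hypothetical illustration: an "AI decision" is just the option that scores
# best against a human-written objective function, evaluated over
# human-supplied data. No reasoning, no awareness - only arithmetic.

# Data made available by humans (e.g. past observations).
historical_data = [0.2, 0.4, 0.9, 0.7]

# Objective function determined and programmed by humans: prefer options
# close to the average of the data.
def objective(option, data):
    average = sum(data) / len(data)
    return -abs(option - average)

# The "decision": pick the candidate with the highest objective score.
candidates = [0.1, 0.5, 0.9]
decision = max(candidates, key=lambda option: objective(option, historical_data))
print(decision)  # 0.5 - whichever option best satisfies the human-set objective
```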

The intelligence of these systems lies in their design and application more than in their capability. Yes, they can process copious amounts of data to produce outputs, but they lack relevant context and reasoning. Interact with a chatbot or an image recognition program for long enough and you’ll quickly be disabused of the idea that these systems are “intelligent”. There’s a great paper, aptly titled “Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It”, that explores this further.

Nonetheless, the debate around liability still needs to be addressed. From my perspective, the idea of holding AI accountable should simply be thrown away. Not just because these are not moral or sentient beings with the capacity for intent or reasoning, but also because it removes responsibility from humans, and that is where the responsibility lies completely.

The Australian trucking industry employs a “chain of responsibility” (COR) model to prevent road and freight accidents. It’s been around for some time and is part of the legal landscape in Australia.

A colleague (Dr Brendan Walker-Munro) and I collaborated a while ago on how we could leverage this idea and apply it to human-machine teaming with robotics, autonomous systems and AI (RAS-AI). The outcome was a COR model for human-machine teaming, which we outlined in our paper “The Guilty (Silicon) Mind: Blameworthiness and Liability in Human-Machine Teaming”.

In essence, the COR model imposes a duty on every person involved with a RAS-AI capability, from conception and design, through manufacture and testing, to procurement and deployment. At each stage, those involved carry responsibilities that extend across the system’s lifecycle. In the event of an accident or incident, an investigation is carried out, examining the entire logistic chain to determine where there was a breach of responsibility and by whom.
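
As a rough, purely illustrative sketch (not the formal model from the paper; the stages, parties and outcomes below are all hypothetical), the idea can be pictured as a chain of lifecycle stages, each with a responsible party, that an investigation walks end to end:

```python
# Hypothetical sketch of a chain-of-responsibility record for a RAS-AI
# capability: each lifecycle stage has a responsible party and a duty,
# and an investigation examines the whole chain to find where a duty
# was breached. Stages, parties and outcomes are illustrative only.
from dataclasses import dataclass

@dataclass
class LifecycleStage:
    name: str               # e.g. "conception and design"
    responsible_party: str  # who carries the duty at this stage
    duty_met: bool          # did they discharge their responsibility?

chain = [
    LifecycleStage("conception and design", "design team", True),
    LifecycleStage("manufacture and testing", "manufacturer", True),
    LifecycleStage("procurement", "procuring agency", True),
    LifecycleStage("deployment and use", "operating unit", False),
]

def investigate(chain):
    """Examine the entire chain and return every stage where a duty was breached."""
    return [(stage.name, stage.responsible_party) for stage in chain if not stage.duty_met]

print(investigate(chain))  # [('deployment and use', 'operating unit')]
```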

Our focus was on human-machine teaming because we were conscious that these systems rarely operate in isolation. There is a lot of research around the design and manufacturing phases of emerging tech, but the implementation phase is just as critical. Just because something is designed and built to an impeccable standard doesn’t mean it can’t be misused. This is why the implementation stage is included in the COR model we developed.

I don’t believe we can or should ever hold AI, or any other complex system, liable in the event of an accident or incident. Doing this removes the onus from humans and incorrectly positions them as passive bystanders instead of active participants. 

We are responsible for how these systems are designed. We are responsible for how they are developed. We are responsible for how we use them. So we need to be responsible when something goes wrong.
