Is There a Place for AI Ethics and Governance?

A summary of a recent talk I gave on AI ethics and governance in an Australian context.

Ethics and governance are currently at the forefront of many discussions around AI. I see it all over social media. I constantly get asked about it at events and forums. Yet, despite it being such a prominent topic, it’s not very clear what ethical AI is, or why it seems to have been slotted under the umbrella of governance.

Broadly speaking, governance is the process of making and enforcing decisions within an organisation. From that perspective, if ethics is a product of decision making, then it makes sense to capture it under governance.

But the tricky thing about ethics is the complexity of the concept itself. Within philosophy, there are countless debates about ethics. It falls under the enormous body of knowledge around morality, exploring what is and isn’t morally right.

What’s interesting about ethics is that, prior to the AI craze we are seeing today, I personally don’t recall ever seeing ethics within the governance of any organisation I have worked in. I don’t recall seeing “ethical practices for board members” or “ethical requirements for CEOs”.

Organisations may have elements of ethics within their governance processes that are not labelled specifically as ethics or ethical, but then my question is: how did we end up with ethical AI?

The AI boom of the last few years has been accompanied by a number of inaccurate and, quite frankly, unrealistic narratives about AI and emerging technologies more broadly. Many of those narratives have latched on to the idea of technology mimicking human traits. This is exacerbated when figures like Sam Altman, the CEO of OpenAI, publicly say things like “ChatGPT hallucinates” [1].

This kind of narrative reinforces the false idea that technology embodies human traits, when in fact it does not. I put a lot of emphasis on this in my work and my research because I feel the idea of technology embodying human traits is not only factually incorrect, it also detracts from the reality of human decision making and human influence over how technology is designed, developed and implemented.

By saying ChatGPT hallucinates, Sam Altman is able to sidestep the fact that the system, like all systems, is flawed and needs work. Work that is done by humans. Using human traits to describe technology faults and errors is a fantastic marketing scheme: it gives these capabilities a sense of magic and whimsy, and this is dangerous.

It’s dangerous because it gives people an incorrect idea of what technology is and isn’t capable of. Our perceptions of technology influence how we interact with and use it. This is why education and training are a fundamental part of technology adoption. How can we adopt technology effectively and safely if we don’t understand what it can and can’t do?

But the thing with a giant technology company like OpenAI is that they don’t actually care about people understanding technology; they care about the number of people using their products, because that’s their bottom line. From this perspective, I believe ethics absolutely has a place in the discussions around AI and AI governance, but these ethical endeavours should be focused on humans, not technology.

If we lift the lid even slightly on the technology sector, we uncover a litany of ethical issues, from the poor working conditions of data labellers in low socioeconomic regions through to privacy concerns around data collection. The list is long and messy. In defence of the technology industry, these issues are not specific to it and have been around for a long time; the fast fashion industry is a clear example. But the point is that the idea of ethical AI applies well beyond the technology itself and sits in the broader ecosystem within which technology is designed, developed and implemented.

From an Australian perspective, the Australian Government released its interim response to the Safe and Responsible AI in Australia discussion paper earlier this year. The response, which is publicly available [2], mentions ethics three times:

  1. There is a line item on page 20 of the report which states: “related work under the Data and Digital Minister’s Meeting to develop a nationally consistent approach to the safe and ethical use of AI by governments”.

  2. On that same page there is another mention of ethics in a statement which reads: “cyber security considerations consistent with the Cyber Security Strategy, as well as work underway in the Australian Signals Directorate through its Ethical AI Framework”.

  3. And finally on page 22 of the report, there is a line item which states: “agreeing on an Australian Framework for Generative AI in Schools by education ministers to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools and society while protecting privacy, security and safety”.

Similar to most initiatives around AI ethics and governance, these statements are broad and not very clear. It is not clear what the standard is for ethics. It is not clear whether this standard will focus on the technology or the people behind the technology - which, in my opinion, is what any form of ethical work should be focusing on. It is not clear how these ethical milestones can be demonstrated or achieved. And above all of these things, it is not clear what ethics means.

What is ethics? What does it mean for technology to be ethical? What does ethical use of technology look like?

If we can’t define some benchmark for ethics, then directives such as those outlined in the Australian Government’s interim response are meaningless. We can’t achieve something or strive towards something if we don’t know where the goalposts are.

I believe there is a place for ethics discussions and initiatives for emerging technologies; however, this space has been suffocated under an avalanche of token and often surface-level frameworks, roadmaps and principles. Many of them lack specificity and even basic definitions of concepts. What is missing from the ethical AI ecosystem is depth.

Australia has an opportunity to chart a clear and effective path forward if we can sidestep the noise and focus on the practicalities of AI ethics and governance. This should include a focus on human roles and responsibilities, and a clear understanding of what we mean by ethical AI.

References

[1] OpenAI CEO Sam Altman sees “a lot of value” in AI hallucinations

[2] Australian Government’s interim AI response
