Navigating the Tensions Between Technology and Politics
It’s been less than a week since Trump’s inauguration and a lot has happened.
Revoking Biden’s executive order on AI was one of the many decisions he made this week. The revocation comes alongside the announcement of a $500 billion investment in AI through partnerships with OpenAI, Oracle and SoftBank.
The executive order was originally announced in October 2023, and its purpose was to promote the safe, secure and trustworthy development and use of artificial intelligence (AI). It included measures on AI safety and security; privacy; equity and civil rights; protections for consumers, patients and students; support for workers; innovation and competition; US leadership abroad; and government use of AI.
Of particular note in this executive order was a requirement for developers of AI systems deemed a threat to US national security, the economy, public health or safety to share the results of safety tests with the US government, under the Defense Production Act, before releasing those systems to the public. The executive order also directed agencies to set standards for that testing to address related chemical, biological, radiological, nuclear and cybersecurity risks.
This statement comes after Meta’s announcement that it will abandon fact-checking programs across its social media platforms - Facebook, Instagram and Threads - starting in the US. Similar to the Republicans’ statement, Meta’s CEO Mark Zuckerberg claims they are “get[ting] back to our roots around free expression”.
Meta will now rely on users to flag inappropriate or false content through a “community notes” model, similar to the one used by Twitter (now stupidly named X).
Social media is a powerful tool and it impacts people's perceptions of the world. For a lot of people, social media is their primary source of information, and when that information is littered with hate speech, racist comments and homophobic slurs, people’s benchmarks for what is and isn’t acceptable begin to shift.
There’s a really interesting study on Facebook’s facilitation of hate speech in Myanmar, which was linked to multiple instances of violence against Muslim minorities in the region. The study demonstrates the dangers of unchecked narratives and highlights the outsized role social media platforms play in people’s lives.
It’s important to note that the argument around fact-checking and content censorship has two sides.
Censorship is a double-edged sword because the decision of what is deemed appropriate and what is not lies with a group of people who may not always be impartial. We saw this with Meta’s censorship of Palestinian-related content over the last year.
The reality is the pendulum of free speech can swing too far in either direction - censoring whatever content a platform chooses, or not censoring any content at all. Both these extremes have repercussions, and in the case of social media, both create very warped perceptions of the world we live in.
I go on social media for ten minutes and I feel like the world is ending. Between the climate crisis, insane people being elected into power and the ongoing genocide in Palestine, I’m left feeling overwhelmed and miserable about the state of our world. But then I spend time away from social media and things look a little different. Those problems still exist, but they exist in broader socio-political contexts where I feel there is still hope and opportunities to make better decisions.
This past week has somewhat challenged that position. Seeing the US take ten steps back in its initiatives to safeguard technology development is cause for concern. The front row of Trump's inauguration featured a line-up of tech giants, an unsettling glimpse of shifting power dynamics.
What’s different about Trump’s election the second time round is the prominence of tech giants in his presidential campaign. The narratives they have shaped and the influence they have exerted are no small thing.
The ubiquity of technologies like social media or generative AI tools, e.g. ChatGPT, is what makes them so powerful. We all use them, we all share personal information on them, and we are all willing - to some extent - to compromise our privacy and security to use the various functions these platforms offer. It is this ubiquity that makes them challenging to safeguard and regulate, and the intersection of monopolistic tech companies and US politics makes that challenge even more difficult to address.
My work on safeguarding the development and use of technology has never felt more important. I took some leave over the holidays and spent that time really thinking about the impact of my work and where it all fits within all these changes we’re seeing.
This year will look a little different for me. I’ll still be at ANU, but there are some additional projects I’ll be working on outside of my ANU role, which I’ll share when I can. Stay tuned.