How Can the Public Sector ‘Keep it Real’ in the Digital Age?

Tackling Social Media Misinformation & AI Deepfakes

Heather Dailey 4 August 2024


What are the two most valuable but also most easily influenced entities in any country? Youth and government. Our youth are growing, maturing and learning to be the individuals who influence, participate in, and even run our countries. Governments are those who already make the decisions on how to run our countries as effectively and efficiently as possible (ideally). 

Why are we giving attention to both here? Because as two bodies that hold their countries together, it’s important to recognise how fragile both the electoral campaigns of our governments and the minds of our young people can be. Both are vulnerable to the external influences and persuasive tactics of social media and AI deepfakes, and both can be swayed by misinformation and strategic messaging.  

Here, let’s explore how this wicked duo of deepfakes and social media is used to influence electoral campaigns and young people, undermining two of our most vital assets: societal trust and democratic processes. We’ll then look at how governments must step up and safeguard these most targeted groups against the negative effects of these innovations, to maintain the trust and wellbeing of their countries now and for the future.  

First, we’ll take a chance here and assume that we all know what social media is (Facebook, TikTok etc.). Importantly for this conversation, it functions as the distribution channel for the AI technologies that cause these negative societal effects, and the two forces work together with extreme efficiency. 

So let’s jump right into deepfake technology, which some may still be unfamiliar with. What exactly is it? It’s a powerful tool that creates realistic and convincing videos of people using artificial intelligence (AI) and machine learning (ML) techniques, in particular deep learning: neural networks with many layers (hence “deep”) that analyse and generate realistic images, videos, and audio. By feeding videos and other content of a person into the tool, it can generate a unique video in which the individual being impersonated says and does things they never really did. 
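To make the “deep” in deep learning a little more concrete, here is a toy Python sketch of a neural network’s forward pass. The weights here are hand-picked purely for illustration; real deepfake generators use vastly larger networks trained on enormous datasets, but the basic idea of stacking layers is the same.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums of the inputs,
    each passed through a sigmoid activation (a common non-linearity)."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-z)))  # sigmoid squashes z into (0, 1)
    return outputs

def forward(x, layers):
    """Pass an input through a stack of layers. Stacking many such
    layers is what makes a network 'deep'."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# A tiny two-layer network with hand-picked weights (illustrative only).
network = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1]),  # layer 1: 2 inputs -> 2 outputs
    ([[1.0, -1.0]], [0.0]),                    # layer 2: 2 inputs -> 1 output
]
score = forward([0.6, 0.9], network)
print(score)  # a single value between 0 and 1
```

A real generator chains many such layers (with far more sophisticated architectures) and learns its weights from training data rather than having them set by hand.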

 

How Are These Technologies Influencing Our Youth Negatively? 

Young people are at a critical stage of developing their views and are heavily influenced by the content they consume on social media, making them prime targets for disinformation campaigns that can distort their understanding of reality and responsibilities as community members.  

Let’s look at how: 

  • False news articles or manipulated videos can influence young users' understanding of political events, health information, and societal norms
  • Constant exposure to manipulated content and unrealistic standards on social media can exacerbate feelings of inadequacy, anxiety, and depression
  • AI deepfakes can be used to create compromising or harmful content targeting young individuals, leading to cyberbullying and harassment
  • The prevalence of deepfakes and manipulated content can erode young people's trust in digital media and the information they consume. Unfortunately, this doubt can extend to legitimate information sources, making it harder for them to distinguish between what is false and what is real

 

How Are Electoral Campaigns Swayed by These Innovations? 

Electoral campaigns are foundational to the democratic process. And this year, a record number of voters from at least 64 countries, along with the European Union, will go to the polls, representing nearly half of the world's population. The outcomes of these national elections are expected to have consequential, long-lasting effects on these nations.  

A reality threaded through this momentous period, as the World Economic Forum highlights, is the ongoing instability in geopolitical and geoeconomic relationships among major global powers, which looms as the top concern for chief risk officers across both public and private sectors. This unrest, coupled with fierce competition, can fuel the spread of deepfakes and social media dis- and misinformation, swaying voter opinions unfairly, eroding the integrity of elections, and leading to potentially unfair and preferential outcomes.  

Here’s how: 

  • AI deepfakes can create convincing but false videos of candidates making controversial statements or engaging in inappropriate behaviour. This disinformation can rapidly spread through social media, misleading voters and damaging the reputations of candidates.
  • When people can’t distinguish between real and fake information, trust in the electoral process and democratic institutions can be easily destroyed.
  • Deepfakes can be used as tools for propaganda by creating targeted content that exploits emotional triggers to manipulate voter behaviour. This can be particularly effective in spreading fear, anger, or false narratives.
  • Fuelled by algorithms that prioritise engagement during elections, deepfake content is often shared within like-minded groups, further entrenching partisan views and making it harder to achieve constructive political dialogue.
  • Deepfakes can disrupt and derail political campaigns by forcing candidates to spend time and resources debunking false information. This distracts from their actual campaign messages and policy discussions.

 

The Solution: 

Protecting these groups ensures that young people grow into informed, critically thinking citizens (and, soon enough, voters). Protecting electoral processes means they remain fair and representative of the true public will, maintaining the fundamental principles of democracy. 

In Which Case, Governments MUST: 

  • Implement stricter regulations and legislation to govern the use of social media and AI technologies. This includes laws specifically targeting the creation and distribution of deepfakes and misinformation. In Australia’s case, it could even mean a complete ban on social media accounts for anyone under 16.
  • Invest in and promote the development of advanced technologies capable of detecting and mitigating the spread of deepfakes and misinformation. This includes AI tools that can identify fake content and flag it for removal or review.
  • Launch comprehensive public awareness and education campaigns to inform citizens, especially young people, about the dangers of deepfakes and misinformation. These campaigns should focus on teaching critical thinking skills and digital literacy.
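To illustrate what “flag it for removal or review” might look like in practice, here is a minimal Python sketch of a triage workflow. Everything here is an illustrative assumption: the function names, the thresholds, and the stand-in classifier are hypothetical, not any real platform’s API — a production system would run a trained detection model over the media itself.

```python
def triage(posts, classifier, remove_at=0.95, review_at=0.70):
    """Route each post based on a model's fake-probability score:
    high-confidence fakes are removed, borderline cases are queued
    for human review, and the rest are left up."""
    decisions = {}
    for post_id, content in posts.items():
        score = classifier(content)
        if score >= remove_at:
            decisions[post_id] = "remove"
        elif score >= review_at:
            decisions[post_id] = "human_review"
        else:
            decisions[post_id] = "allow"
    return decisions

# A stand-in "classifier" for this sketch: it just reads a pre-computed
# score. A real detector would analyse the video or image directly.
def toy_classifier(content):
    return content["fake_score"]

posts = {
    "a": {"fake_score": 0.98},  # almost certainly fake
    "b": {"fake_score": 0.80},  # borderline
    "c": {"fake_score": 0.10},  # almost certainly genuine
}
print(triage(posts, toy_classifier))
# {'a': 'remove', 'b': 'human_review', 'c': 'allow'}
```

The key design point is the middle band: because detectors are imperfect, borderline content goes to a human reviewer rather than being removed automatically, which helps avoid suppressing legitimate material.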

 

As we can see here (and have most likely experienced first-hand), the rapid integration of AI-generated media across social channels is already posing significant threats to societal trust and democratic systems. 

But by taking purposeful action through AI safety and regulation, governments can protect these two most vulnerable segments of our society, ensuring that people are empowered with the knowledge to navigate the digital landscape responsibly.  

However, it's crucial to remember that the responsibility to maintain trust and uphold democratic values lies not only with governments but with all of us as informed citizens in the digital age. 


Published by

Heather Dailey Content Strategist, Marketing