How are you navigating the narrow AI pathway?

Learn what it means to take a ‘human at the centre’ approach to AI and why it’s critical for government and society.

Fiona Armstrong 18 August 2025

AI is a double-edged sword when it comes to creating an equitable society. It has real potential to improve inclusiveness or to entrench bias. To level the playing field or widen the digital divide. Navigating the narrow pathway for community good will take conscious effort and a deep human-at-the-centre approach. 

This approach goes beyond ‘human in the loop’ validation; it’s about understanding what sort of community we want to be and what’s important to us. It’s about redesigning processes, government services and delivery models so that all people and communities are equitably supported to achieve their best possible outcomes. How AI is designed, deployed and governed determines whether that vision is achieved.

So how might we create a considered, responsive, human-at-the-centre approach to AI that encourages and enables AI innovation for good while avoiding unintended negative consequences?

“With us, not to us” needs to be the motto for AI implementation.

The social licence to use AI to deliver government services will be built through trust, opportunity and a sense of fairness that goes beyond “do no harm” or improving productivity. AI needs to be seen to make life fairer and to be accessible to everyone. The benefits must outweigh the threats. Building that trust starts with engagement.

Job taker or job maker?

Humans are hardwired to need economic security, so systemic changes to people’s ability to earn an income have always been a flashpoint. Just look back at the Industrial Revolution and the origins of the word ‘saboteur’. Protest takes many forms. For the last fifty years or so we’ve been living with the threat of technology taking our jobs, and with AI that threat has accelerated again.

Late last year, Treasurer Jim Chalmers articulated the Government’s strategy to use AI as a key pillar in addressing Australia’s long-standing national productivity challenges. More recently, the Productivity Commission announced it would review the regulatory settings for AI as a driver of productivity. While this is a realistic and prudent economic approach, there is a narrow pathway to walk.

Australians are already wary of AI (according to a recent University of Melbourne global study), and if AI becomes synonymous with productivity alone, it will be seen as a job taker, with many government roles replaced by automation and AI.

The social licence for AI in government comes from building trust, building skills and addressing the elephant in the room: we need to make AI a job maker.

AI and other tech advances are expected to have a positive net effect on global employment (World Economic Forum’s Future of Jobs Report 2025), with much of the growth coming from new roles in expanding sectors such as fintech, cyber and sustainability. But those new employment opportunities won’t help build adoption among today’s workers who may be facing displacement. Their acceptance of AI comes from understanding that it can help them do their jobs better and focus on the things that matter by removing repetitive or low-level tasks. Augmentation, not redundancy.

Let’s explore that narrow path between augmenting and replacing jobs in government. An employee can be:

  • Supported: A social worker, inspector or investigator armed with AI to compile and analyse fragmented, unstructured data in a way that helps them prioritise and make effective decisions is more likely to see AI as a tool that boosts their effectiveness and helps them better protect the community.
  • Enabled: An AI that makes the easy decisions may also be acceptable if it gives the employee space to focus on the harder cases.
  • Replaced: An AI that makes all judgements autonomously may be more efficient, but may be a step too far. Not just because it’s a job taker, but because some services and decisions require contextual understanding, empathy or discretion – things AI cannot replicate. Automating too far could lead to dehumanising experiences or unjust outcomes, which would undermine societal acceptance of AI.

Done well, AI has the potential to create more satisfying and impactful jobs that enable stretched government services to achieve more.

From "do no harm" to creating community value

Fear of AI also comes from experiences like Robodebt, where AI was seen to do significant harm. Used for good, however, AI can enable better targeted, inclusive services that extend the impact government can have in tackling entrenched social inequalities.

Examples such as AI-driven diabetic retinopathy screening in rural Indian and Thai clinics, or AI-enabled adaptive learning platforms like DIKSHA in India, are already levelling the playing field between rich and poor, urban and rural. Bringing education and healthcare to remote, disadvantaged and underserved communities builds social value and acceptance.

It can also make government services easier to access for people with language or ability barriers that stop them using traditional channels. Whether that’s an app like Seeing AI that can describe surroundings or enable speech, automated translations that bring equity of access to information and learning, or apps that provide communication and emotional support for people with autism, dementia or neurodivergence – AI is helping connect people who are often marginalised.

Despite the positives, there is a narrow path between improving equality and inadvertently doing harm when designing AI-powered solutions. AI is capable of entrenching discrimination. No LLM today can guarantee equity or inclusivity.  

The problem is how it is trained. AIs can pick up the worst of human bias or discrimination, often from social media, and amplify it at scale. Even when an AI is trained on government or trusted data, care is needed not to bake in the historical inequalities in housing, healthcare, justice and employment that are present in that data. Fairness metrics need to be applied to the data to check for diversity, equality and parity. For government applications, knowing how a model has been trained is essential.
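To make that idea concrete, here is a minimal, hedged sketch of one such fairness check – a demographic parity gap over decision outcomes. The column names, toy data and the 0.05 threshold are illustrative assumptions, not a government standard:

```python
# Minimal sketch of one fairness check: the demographic parity gap,
# i.e. the spread in favourable-outcome rates across groups.
# Column names ("group", "approved") and the 0.05 threshold are
# illustrative assumptions, not a government standard.
from collections import defaultdict

def demographic_parity_gap(records, group_key="group", outcome_key="approved"):
    """Return (largest gap in favourable-outcome rates, per-group rates)."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        favourable[group] += int(record[outcome_key])
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: approval decisions across two demographic groups.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap, rates = demographic_parity_gap(decisions)
print(f"Approval rates by group: {rates}; gap: {gap:.2f}")
if gap > 0.05:  # illustrative threshold only
    print("Warning: outcome rates diverge across groups - review the data.")
```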

Seeing how an AI makes its decisions is also vital. The pervasive ‘black box’ approach risks doing harm. To build trust and ensure decisions are appropriate, government employees need to be able to audit or challenge AI decisions. There are solutions that provide this transparency, but it is not yet the norm.
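What might that transparency look like in practice? As a hedged sketch (all field names and the storage format are hypothetical), the simplest building block is a decision record that captures enough context for a staff member to audit or challenge the outcome later:

```python
# Minimal sketch of an auditable decision record. All field names and the
# JSON-lines storage choice are hypothetical illustrations, not a standard.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str   # which model produced the decision
    inputs: dict         # the data the model actually saw
    outcome: str         # the decision that was made
    reasons: list        # human-readable factors behind the decision
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the record so staff can later audit or challenge the decision."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage: an automated triage decision routed to a human.
log_decision(DecisionRecord(
    case_id="example-001",
    model_version="triage-model-v3",  # hypothetical model name
    inputs={"income_band": "low", "region": "remote"},
    outcome="refer_to_human",
    reasons=["incomplete data", "high-impact decision"],
))
```

The design choice that matters here is not the format but the principle: every automated decision leaves a trail a human can interrogate and contest.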

Walking the narrow path between equality and bias, and ensuring the data does not undermine AI’s potential to do good, will require careful oversight and attention to detail at the data and design stages, along with significant community involvement in deciding which problems to solve and what is acceptable and usable.

Acceptance comes through access

Building community acceptance also requires a concerted effort not to leave anyone behind. That demands policy, infrastructure, change management and skills building at the micro, meso and macro levels.

To encourage adoption, we need the clarity and empathy of a long-term human-centred vision, combined with the opportunity to experiment at home, in the community and in the workplace. While governments have a lead role, all of us have a part to play.

Many basic AI tools are available for free, making it easier for many to start experimenting at home. This makes it fun and interesting, rather than ‘yet another change at work’.

However, if we are to avoid widening the digital divide, higher-capacity infrastructure that supports access to AI-enabled job opportunities in regional and remote Australia is essential. So is access to low-cost or free devices and internet for those who can’t afford them.

While skill building across the community is needed, automation via AI may disproportionately harm workers in low-skilled or routine jobs, deepening economic inequality. To address this, inclusive and proactive reskilling programs are a necessity. On the positive side, AI can help here, with:

  • new and more intuitive interfaces, using natural language processing or voice prompts, that make technology more accessible
  • AI-enabled advances in personalised education platforms that support learning and skill building suited to an individual’s pace and style.

With us, not to us

How government agencies approach and use AI will determine the social acceptance and appetite for broader reform. Three factors are critical:

  • ensuring it is a job maker not a job taker  
  • showing it can be used for good to create equity of access to services and opportunities
  • involving the community in shaping changes and building AI skills.

Going into the design, deployment and governance of AI solutions with eyes wide open will ensure we can walk the narrow pathway to creating better outcomes for all Australians.


A version of this article was published in The Mandarin as part of Liquid's AI and innovation content series.

Published by

Fiona Armstrong CEO & Partner, Liquid

About our partner

Liquid

For over 25 years, Australian SME Liquid has supported local, state and federal government to achieve the best possible futures for their customers and staff. Experts in strategic design, applied technology and human interaction, the Liquid team works alongside policy and operational teams to shape, design and implement transformation and reform initiatives. Trusted innovation partners, Liquid is supporting government departments to understand the real potential of composite, LLM, ML and Generative AI in optimising their service delivery and operations. Liquid’s human-centred approach ensures change is embedded, sustainable and supported by building the right skills and capabilities in department teams.
