Innovate South Australia 2025 Key Takeaways: Data, Analytics & AI

Exploring AI-driven transformation, cybersecurity resilience, and strategic digital innovation shaping the future of public service delivery.

Patrick Joy, 10 March 2025

Leveraging AI to Maximise Efficiencies in Operational Performance and Impact: Tangible ML/AI Use Cases

Shikha Sharma, Chief Information Officer, Department of Human Services
Greg Van Gaans, Director Geospatial Data Science and Analytics, Department for Housing and Urban Development
  1. AI adoption must be strategic and incremental
    Public sector agencies should begin with small, practical AI use cases that align with business needs, such as summarising case files, automating administrative tasks, and improving service planning. Executive buy-in and workforce engagement are essential for scaling AI initiatives effectively.

  2. Data quality and governance are prerequisites for AI success
    AI solutions are only as effective as the data they rely on. Agencies must invest in robust data governance, metadata management, and security frameworks to ensure AI models are accurate, unbiased, and compliant with ethical and privacy standards.

  3. Balancing automation with human oversight
    While AI can enhance decision-making, it should serve as an assistive tool rather than replace human judgment. Ensuring transparency, mitigating bias, and maintaining strong oversight in AI-driven processes will be critical for maintaining public trust and compliance with evolving regulations.


Unlocking the Next Realm of Data & AI Opportunities Within Education

Dan Hughes, Chief Information Officer, Department for Education

  1. AI must complement, not replace, educators
    The introduction of EdChat in South Australia’s education system demonstrates how AI can support teachers by reducing administrative burdens, improving student engagement, and providing personalised learning assistance. AI should act as an augmenting tool, not a substitute for human expertise.

  2. Data security and contextual accuracy are critical
    For AI to be effective in education, it must be built with localised data and safeguards. EdChat prioritises South Australian curriculum content, student data protection, and responsible AI use to ensure relevance and reliability while preventing misinformation and bias.

  3. AI fosters student learning beyond the classroom
    AI-powered tools like EdChat enable students to access personalised, 24/7 academic support, aiding comprehension, inquiry-based learning, and language translation. By acting as an interactive tutor, AI can bridge educational gaps, particularly for students with diverse learning needs.


Modern Data Management in the Era of AI

Tom Stamatopoulos, Head of Solution Architecture (Australia & New Zealand), Informatica

  1. Ensuring data is fit for business use
    Public sector organisations must manage data that is accessible, accurate, secure, and integrated across systems. Poor data quality can lead to flawed AI outputs, inefficiencies, and increased risk exposure.

  2. AI-powered automation accelerates data processes
    AI-driven tools, such as Informatica’s CLAIRE engine, streamline data ingestion, transformation, and governance, significantly reducing manual workloads. Automating complex data tasks enhances accuracy while maintaining human oversight; a simple illustration of this kind of automated quality check follows this list.

  3. Building a strong data foundation for digital transformation
    Organisations must establish a single view of data to improve decision-making and operational efficiency. Case studies such as La Trobe University’s demonstrate that a well-structured data strategy can lead to cost savings and improved service delivery.
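
To make the idea of automated data-quality checks concrete, here is a minimal sketch. It is not Informatica’s CLAIRE or any specific product API; it assumes a small, hypothetical set of service records and uses pandas to report completeness, duplication, and validity. Column names, values, and rules are illustrative assumptions only.

```python
import pandas as pd

# Hypothetical service-records extract; column names and values are illustrative only.
records = pd.DataFrame({
    "client_id": [101, 102, 102, None],
    "postcode": ["5000", "5006", "5006", "ABCD"],
    "service_date": ["2025-01-10", "2025-01-12", "2025-01-12", "2025-02-30"],
})

def quality_profile(df: pd.DataFrame) -> dict:
    """Return a few basic data-quality indicators for governance reporting."""
    return {
        # Completeness: share of non-null values in each column.
        "completeness": df.notna().mean().round(2).to_dict(),
        # Uniqueness: duplicate rows that would distort counts or model training data.
        "duplicate_rows": int(df.duplicated().sum()),
        # Validity: postcodes must be four digits; dates must parse as real dates.
        "invalid_postcodes": int((~df["postcode"].str.fullmatch(r"\d{4}")).sum()),
        "invalid_dates": int(pd.to_datetime(df["service_date"], errors="coerce").isna().sum()),
    }

print(quality_profile(records))
```

In practice, indicators like these would feed a governance dashboard and help decide whether a dataset is fit to be released for analytics or AI training.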


Assessing the Threat and Potential of LLMs: Retaining AI Sovereignty Within SA

Professor Anton van den Hengel, Director, Centre for Augmented Reasoning (CAR), Australian Institute for Machine Learning (AIML), University of Adelaide

  1. Australia risks being left behind in the AI revolution
    While AI is driving global economic shifts, Australia’s approach remains overly focused on regulation rather than capability development. Without sovereign AI investment, the nation may become a passive consumer rather than an active innovator, leading to economic and strategic vulnerabilities.

  2. Foreign AI dominance threatens local industry and economic independence
    The rise of AI-powered platforms that extract value without local investment mirrors the disruption seen in industries like taxis (Uber) and advertising (Google). Without sovereign AI development, Australian companies risk losing control of critical infrastructure and business models to foreign AI firms.

  3. Investment in local AI capability is essential for economic resilience
    Large-scale AI investments, such as Telstra’s $700M AI partnership with Accenture (benefiting India rather than Australia), highlight the missed opportunities in developing homegrown expertise. A shift towards domestic AI R&D, training, and industry collaboration is necessary to build a competitive, future-proof economy.


Putting Trusted Data in the Hands of the Public Sector

Paul Tatum, Executive Vice President, Solution Engineering, Salesforce

  1. Agentic AI is shifting from passive tools to proactive, autonomous digital employees
    Unlike traditional AI, digital agents can take action, collaborate, reason, and improve over time. Public sector organisations can use these agents to augment human decision-making, streamline workflows, and provide 24/7 multilingual support.

  2. AI-driven case processing and compliance automation reduce complexity
    Demonstrations of AI-powered agents analysing regulatory documents, summarising policy requirements, and assisting caseworkers (e.g., Medicare claims processing) highlight the potential for AI to improve efficiency in complex, document-heavy government services; a simplified sketch of this agent pattern follows this list.

  3. Trust, security, and human-centric design are essential for AI adoption
    Successful AI implementation requires secure data handling, policy-aligned guardrails, and user-friendly experiences. Emotional intelligence (AI with "a heart") enhances interactions, ensuring AI-driven services remain empathetic and effective in real-world scenarios.
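
The agent pattern described above can be sketched generically. The example below is not Salesforce’s platform; it assumes a hypothetical llm_summarise helper standing in for whichever model an agency has approved, applies a purely illustrative escalation rule, and keeps a human caseworker in the loop for every recommendation.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    text: str

def llm_summarise(text: str) -> str:
    """Stand-in for a call to an approved language model; here it simply
    truncates, so the sketch runs without any external service."""
    return text if len(text) <= 120 else text[:117] + "..."

def triage(claim: Claim) -> dict:
    """One pass of a minimal agent loop: observe the claim, reason about it,
    recommend an action, and route the result to a human caseworker."""
    summary = llm_summarise(claim.text)
    # Illustrative rule only; real policy-aligned guardrails would live here.
    escalate = any(word in claim.text.lower() for word in ("appeal", "complaint", "urgent"))
    return {
        "claim_id": claim.claim_id,
        "summary": summary,
        "recommended_action": "escalate_to_caseworker" if escalate else "auto_acknowledge",
        # The agent assists; a person reviews every recommendation before action is taken.
        "human_review_required": True,
    }

print(triage(Claim("C-1042", "Urgent: claimant requests review of a rejected Medicare claim.")))
```

The design point is the last field: however capable the agent, its output is a recommendation for a person, not an action taken on their behalf.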


Applied Ethics in Artificial Intelligence: Trustworthy AI – Are Your AI Systems Safe and Responsible?

Dr Melanie McGrath, Honorary Fellow, Melbourne School of Psychological Sciences, University of Melbourne
Dr Melissa McCradden, AI Director, Women's and Children's Hospital Network
Ameya Sawant, Director, Plan SA, Department for Housing and Urban Development
  1. AI literacy as a foundation for responsible adoption
    Effective and safe AI use in government requires more than technical responses—it demands AI literacy among public sector workers, stakeholders, and the public. A consistent, whole-of-government glossary of AI terms could support better understanding and risk mitigation. AI-related risks, such as misinformation and disinformation, are often human-driven, reinforcing the need for education and clear frameworks.
  2. Emerging risks from technology convergence and environmental impacts
    As AI integrates with immersive and emerging technologies, risk profiles shift and, in some cases, amplify. Governments need to assess these intersections to anticipate ethical challenges. Additionally, AI’s energy consumption raises concerns—while powerful AI models drive innovation, they also exacerbate environmental pressures, such as increasing reliance on energy-intensive infrastructure.
  3. Ethical AI extends beyond algorithms to organisational culture
    Developing trustworthy AI models is not enough if they are deployed in unethical environments. AI ethics should encompass organisational governance, risk management, and broader socio-technical systems. Ethical AI is not a checklist but a continuous process, requiring long-term commitment rather than a one-time compliance effort.

Your Citizens Are Ready for the Future of Digital Government Services. Are You?

Neil Hathaway, Executive, The Factor
  1. AI is reshaping government services faster than expected
    Generative AI has rapidly become an essential tool, transforming how governments operate. Despite early AI ethics principles and frameworks, public sector readiness remains uneven. Lessons from past failures, like Robodebt, highlight the risks of blind automation without human oversight. To realise AI’s benefits, governments must strike a balance between regulation and innovation, ensuring responsible use without stifling progress.
  2. Challenges and pitfalls in AI adoption for government agencies
    AI adoption brings practical challenges, including data security, regulatory gaps, and system integration issues. Many agencies struggle with data silos and outdated infrastructure, limiting AI’s effectiveness. Transparency statements now require departments to disclose their AI use, but inconsistencies remain in implementation. AI security policies must address risks beyond basic compliance, ensuring sensitive data isn’t misused in public AI models; a minimal redaction sketch along these lines appears after this list.
  3. AI should be an accelerator, not just a compliance exercise
    To maximise AI’s potential, governments must focus on tangible citizen outcomes, such as improving service delivery and resource allocation. AI can streamline tasks like document processing, summarisation, and decision support, reducing administrative burden. Successful implementation depends on strong data foundations, careful risk management, and an iterative approach to AI adoption, starting with small, practical improvements before scaling up.
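
As one concrete illustration of keeping sensitive data out of public AI models, the sketch below redacts a few common identifier patterns before any text leaves the agency. The regular expressions and the example note are illustrative assumptions, not a complete redaction policy.

```python
import re

# Illustrative patterns only; a real redaction policy would cover many more
# identifiers (names, addresses, case numbers) and be reviewed against agency rules.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),
    "ID_NUMBER": re.compile(r"\b\d{4}[ -]?\d{5}[ -]?\d{1,2}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with labelled placeholders before the text
    is sent to any external or public AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Call Jan on 0412 345 678 or email jan.citizen@example.com about claim 2123 45678 1."
print(redact(note))
# -> "Call Jan on [PHONE] or email [EMAIL] about claim [ID_NUMBER]."
```

A pre-processing step like this is cheap to run on every outbound request and pairs naturally with the transparency statements and security policies mentioned above.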