Embedding AI Expertise into Public Sector Servicing

Part 3 of Ian Oppermann’s Vision for the Future of Public Sector Servicing

Ian Oppermann 26 August 2024

In today’s rapidly evolving technological landscape, governments around the world are under increasing pressure to integrate AI and data-driven solutions into their operations. Navigating this complex field, however, can be a daunting task. Whether you're looking to hire AI experts, build expertise within your agency, or simply get a handle on the latest developments, this guide is here to help. It breaks down the essentials for embedding AI and digital literacy into government functioning, ensuring that your team is equipped not just to meet today’s challenges but to stay ahead of tomorrow’s. By understanding data and AI standards, appreciating why they matter, and building a specialised labour pool, government agencies can position themselves to tackle impending and necessary service transformations, and lean into innovations in service delivery.

It's Time to Treat Data like a Financial Asset 

The use of data, and its analysis using sophisticated algorithms including AI, are not just passing trends; they are increasingly essential components of modern business. As we’ve seen with cybersecurity, where adherence to standards like ISO 27001 has become a staple, AI and data standards are set to become increasingly important. Understanding and adopting these standards is not just about compliance; it’s about recognizing that every government agency is now, at its core, a data-driven organization.

Consider this: every action, decision, and policy within a government agency generates data. When properly managed, this data can provide insights that are as valuable as financial assets. 

Imagine now if all your data were as valuable as money—how would you treat it? 

You’d likely ensure that it’s protected, accurately managed, and used to its fullest potential. This is where AI and data standards come into play. These standards provide frameworks for treating data as an asset, ensuring that it is governed with the same rigor and attention as any other critical resource.

The shift towards viewing data as a strategic asset is not just theoretical; it has real-world implications for how government agencies operate. For example, the adoption of AI standards like ISO/IEC 42001 will help ensure that AI technologies are implemented in a way that aligns with ethical considerations and operational effectiveness. Similarly, data standards help agencies develop robust data governance frameworks that protect sensitive information, ensure data quality, and enable more informed decision-making. 

Moreover, as AI continues to evolve and become more integrated into government functions, being certified against these standards will no longer be optional—it will be a fundamental requirement. Agencies that embrace these standards will not only enhance their operational capabilities but also build trust with the public by demonstrating a commitment to responsible data management and AI deployment. 

Starting at Standards 

When it comes to data and AI, the landscape can seem overwhelming, but the good news is that there are some well-established international standards that can guide your efforts. Over the past few years, numerous high-quality standards have been developed to ensure that AI and data technologies are used responsibly and effectively. Australia, through Standards Australia, is actively participating in the development and adoption of these standards on a global scale. 

The two primary organizations responsible for these standards are the ISO (International Organization for Standardization) and the IEC (International Electrotechnical Commission). These bodies collaborate through a joint technical committee known as JTC 1, which, despite its unassuming name, plays a crucial role in setting the standards for cybersecurity, data management, and now AI.

You may already be familiar with the ISO/IEC 27000 series, which focuses on cybersecurity. Many managers and directors recognize the importance of ISO/IEC 27001 certification for protecting their organizations against cyber threats. 

However, it's essential to note that standards related to AI are also gaining traction. For example, the ISO/IEC 42001 standard, published in December 2023 and rapidly adopted by Australia, provides a framework for AI management systems, ensuring that AI technologies are implemented in a way that is ethical, transparent, and effective.

Another key area of development is within SC32, the subcommittee that brought us SQL, the backbone of many data management systems. This group has now expanded its focus to include standards for data sharing and usage, which are fundamental to any organization looking to leverage data effectively. These standards, including those in the 42000 series for AI and the 32000 series for data sharing, are critical tools for any government agency looking to stay ahead in the digital age. 


Embedding AI and Data Literacy in Government 

To fully realize the potential of AI and data within government, it’s crucial to embed these technologies and the literacy surrounding them into the fabric of government operations. 

This goes beyond just having a few experts on staff—it’s about ensuring that every government employee has a basic understanding of data and AI, and that there is a culture of continuous learning and experimentation. 

In some countries, such as Finland, there is a strong relationship between government and universities, where relevant technical expertise is readily available and actively engaged. Government agencies regularly tap into this expertise to inform their decision-making and strategy. Unfortunately, Australia doesn’t yet have this level of integration, but it absolutely should. Establishing closer ties between government, universities, and industry is essential for building a knowledgeable and skilled workforce capable of navigating the complexities of AI and data.

One of the most effective ways to embed AI and data literacy into government is to make it a fundamental requirement for all employees. This could be as straightforward as a knowledge and skills test, like getting a driver’s licence, but focused on data literacy. Just as every government employee must adhere to privacy laws and ethical standards, they should also have a basic understanding of data governance and AI, including an awareness of the relevant standards. This knowledge should be seen as a prerequisite for working in government, ensuring that every employee can contribute to data-driven decision-making.

However, basic literacy is just the starting point. Every government agency should also have a dedicated group of specialists—people who truly understand the intricacies of data and AI. These specialists would not only support the rest of the team but also help to drive innovation and best practices across the agency. In New South Wales, for example, the Data Analytics Centre was established to be this expert group, initially performing detailed analytics itself and, over time, supporting agencies as they developed their own in-house data analytics expertise.

Having such groups within agencies is crucial for staying ahead of the curve. These specialists can act as a bridge between the latest developments in AI and data science and their practical applications within government. They can also ensure that best practices are disseminated throughout the agency, helping to build a culture of continuous improvement and innovation.


You’ve Got the Standards and the Skills, Now What?

So, how do we make this vision a reality? The first step is fostering a culture where experimentation is not only accepted but encouraged. 

In many government agencies, there is a significant fear of failure, which can stifle innovation. 

However, without taking risks and trying new approaches, progress stalls, and opportunities for improvement are missed. 

One way to overcome this fear is by creating a safe environment for experimentation, where employees are encouraged to test new ideas without the fear of punitive consequences. No AI or data-driven service should be deployed without first being trialled in these safe environments, where risks and mitigations can be more readily identified. This doesn’t mean there should be no accountability for uses of data and AI—rather, it’s about finding a balance where calculated risks are rewarded, and mistakes are seen as learning opportunities. For instance, one government CEO recently emphasized the importance of running fast and occasionally “scraping your knees” as a sign of genuine effort and innovation. This mindset is crucial for fostering a culture of experimentation and learning within the public sector.

Additionally, it’s vital to establish a safe space for collaboration between government, universities, and industry. Too often, concerns about privacy, procurement, or potential conflicts of interest prevent meaningful engagement with external partners. However, this lack of collaboration can hinder innovation and slow down progress. Governments must find ways to engage with these groups in a way that respects privacy, probity and ethical considerations while allowing for the free exchange of ideas and expertise. 

One potential approach is to create structured forums or partnerships where government agencies can work closely with universities and industry to address big challenges – think again of the housing crisis, the rising cost of health care, the transition to net-zero, or dealing with the impact of climate change. Such collaborations would allow for the sharing of knowledge and resources, enabling all parties to bring their perspectives to bear, and to develop more effective ways of learning together.  

It is often said that understanding the problem is most of the challenge, yet very rarely do people actually allow the time and space to do so. Discussing and refining the problems which underpin major challenges, rather than jumping to solutions, is one great way to learn together. Getting very clear on the nature of the underpinning problem takes time, especially if the complexity of the challenge is truly embraced. This then allows universities to reconsider how some of their cutting-edge research and technical expertise could be brought to bear on these problems, while industry partners could offer practical insights and tools that are directly applicable to better explore or probe these problems.

Improving digital literacy in the public sector is not just about training—it’s about creating a culture that values data, embraces complexity, and isn’t afraid to push boundaries. This means encouraging government employees to think critically about how they can use data to solve problems, rather than seeing data as a magic bullet that will automatically provide answers. It’s about understanding that while data may not always deliver a single, clear-cut solution, it can provide valuable insights that, when combined, can lead to meaningful improvements.


The Future of AI in Government Looks Promising, If...

By embedding AI and data literacy into government operations, encouraging experimentation, and fostering collaboration with external partners, we can build a public sector that is not only more efficient and effective but also better equipped to meet the challenges of the future. This approach will enhance the capabilities of individual agencies and contribute to the strength and resilience of government.