Trust, Transparency and Practical AI: A Conversation with Darshil Mehta

Darshil Mehta (AustralianSuper, formerly ASIC) shares where AI is delivering practical value, what’s slowing adoption, and how stronger governance can build trust and scale responsible AI in the public sector.

James Ireland 14 January 2026

In this interview, Grace Fea, Government Innovation Showcase Producer at Public Sector Network, sits down with Darshil Mehta (Principal, Data Governance and Capability, AustralianSuper, formerly Australian Securities and Investments Commission (ASIC)) to discuss where the biggest opportunities for data and AI are emerging, what’s getting in the way of progress, and how governance must evolve to support responsible AI at scale.


Grace Fea:
What is top of mind for you in terms of potential opportunity and gains for data and AI coming into 2026?

Darshil Mehta:
AI has evolved rapidly over the last couple of years, and organisations are now seeing the true use cases rather than just a software sales pitch.

In terms of opportunity and gains, I see a few of them in different parts of the business. For example, in the finance team of any organisation: can AI speed up invoice processing? Can AI speed up purchase order creation using historical data or contract types? AI has a huge opportunity on the finance side, and in the closely related area of procurement as well. Procurement is always a very process-heavy activity in any organisation. Can AI, with some good data, fast-track procurement activity, contract formation and eventually contract execution? These are some of the good ideas within the finance area.

In terms of customer engagement, there is always ongoing enrichment of data required, especially with ongoing changes to the laws governing what data can be collected from customers. Leveraging historical data, can AI predict customer engagement behaviour depending on the type of industry? There are some really good use cases in customer management as well.

And the third one I can think of is on the security side. As AI usage grows rapidly, there are always increasing risks and potential issues with respect to security. Can we use AI to identify flaws in the security posture of your apps, websites or data? These are three specific areas where I think AI has huge opportunity, as well as potentially tangible, measurable gains.


Grace Fea:
And where are the biggest speed bumps going to lie?

Darshil Mehta:
I think the biggest speed bump is risk appetite: how much risk organisations are willing to take by trialling new AI solutions on a variety of use cases, because the benefits are not yet known or widespread. For example, in the procurement area, if I want to implement an AI solution, will an organisation accept that new trial and take the risk of trying out AI?

So the first speed bump is risk appetite, and the other is what sort of governance there is on AI. There are a lot of discussions happening in public forums about what the right governance for AI looks like, and unfortunately no one has nailed it down; there are different versions of it. The speed bump could be setting right-sized governance that still accelerates the adoption and usage of AI.


Grace Fea:
That's definitely been a big issue: AI governance and how to get it right. It's a key session on the agenda, and there aren't many people who have mastered it yet. So it'll be interesting to see …

How do we balance risk with innovation? And how do you balance innovation speed with public trust, particularly when deploying AI systems that affect citizens directly?

Darshil Mehta:
Yeah, absolutely. To balance risk with innovation, there needs to be a play area to do the innovation—essentially a sandbox type of environment in every organisation. Do anything you can, break anything you can, but within a controlled environment. That sort of environment needs to be enabled because innovation does come with a high degree of failure. Allowing users to experiment in a sandbox type of environment will balance risk with innovation.
Now when you think of public trust and trying things out on public services, it becomes even more important that the play area or sandbox environment has the appropriate guardrails, which helps to retain public trust. For example, in the sandbox area, can I download any publicly available data—like from ABS or from a website like data.gov.au—and do experimentation using that data? Absolutely. So there are different ways you can deploy an innovation sandpit environment with appropriate guardrails and still balance the risk.

Don’t miss Darshil Mehta live at Government Innovation Showcase Victoria 2026. Darshil will be speaking as part of the panel “Privacy, Data Governance, Transparency and Trust” (12:10 PM – 12:40 PM), exploring how we embed responsible AI governance into workflows, protect privacy rights, and strengthen trust and transparency in decision automation. Register to join us and hear Darshil’s perspectives in conversation with fellow public sector and industry leaders.


Grace Fea:
Right. And what have been the biggest practical barriers to rolling out AI solutions in Victoria—and how have you navigated regulatory, cultural, or legacy system constraints?

Darshil Mehta:
One of the biggest practical barriers is risk appetite. The moment you talk about rolling out an AI solution for the benefit of the public, you want to ensure that it cannot fail, so the biggest barrier is showing what practical approaches and steps you have taken to prevent any sort of mishap. The quickest way to get started is to start small. Start with smaller use cases that will not impact a larger community, rather than large, sensitive ones—try some internal use cases and make sure they are successful before scaling them out for public benefit.

It goes back to the earlier question about balancing risk with innovation. So innovate in-house, experiment with in-house use cases, ensure the appropriate risk is managed, and then once those use cases are successful, start rolling them out for the benefit of the public.


Grace Fea:
Great. And final question, how do you approach data governance at scale to support AI reuse, consistency, and long-term sustainability across programs?

Darshil Mehta:
This is the time when data governance also needs to be uplifted. Yes, there is a very well-defined and well-known world of data governance: in a standard governance framework, there is a policy followed by a series of standards, then governance bodies, working groups and so on.

But when it comes to AI, it goes above and beyond traditional data governance. AI brings different kinds of use cases, users and risk appetites. How do you flex the governance model so that it also supports AI for reusable use cases? For example, if there is a solution to identify high foot traffic in the City of Melbourne, and there is an ML model behind it, how do you ensure that model provides the right level of accuracy across the different kinds of data we collect? How do we ensure the model is measured for success, and how frequently can we measure that success?

Now these are some of the additional components which are very specific to AI use cases, which are not necessarily in traditional data governance. So data governance needs to be enhanced to make sure that AI-specific requirements are measured, monitored and governed on an ongoing basis.


Grace Fea:
Any final thoughts, comments or, I suppose, trends that you want to mention just to finish off?

Darshil Mehta:
AI is here to stay. It's not going to go away, and it definitely has its own risk and security challenges. But at the same time, AI can definitely be used for the good of humanity. So why not embrace AI and make the best use of it, by ensuring that we use it safely and that it doesn't misuse our information.


Hear Darshil Mehta speak live at Government Innovation Showcase Victoria 2026. Join the panel “Privacy, Data Governance, Transparency and Trust” (12:10 PM – 12:40 PM) to dive deeper into responsible AI governance, privacy, transparency, guardrails and frameworks, and how culture shapes AI values and ethics. View the agenda and register now to secure your place and be part of the conversation.

Published by

James Ireland, Marketing Manager