APRA calls for a step-change in AI-related risk management and governance

AI is moving from experiment to infrastructure. It is now embedded in products, processes, third-party platforms and everyday decision-making. The risk is that governance is not moving at the same speed.

That is why APRA’s recent letter matters. It is a clear reminder that AI needs more than enthusiasm. It needs mature governance, quality risk assessment, strong assurance and evidence that controls are actually working.

The core message is clear:

AI adoption is accelerating, but governance, risk management, assurance and operational resilience practices are not always keeping pace.

APRA has observed that regulated entities are moving from experimentation into more embedded and customer-facing AI use cases, while governance maturity, board challenge, contingency planning and assurance practices remain uneven.

That matters because AI is no longer a side experiment sitting in an innovation team. It is increasingly embedded in business processes, third-party platforms, software tools, cyber capabilities, customer service models, data analytics, decision support, development environments and operational workflows.

This creates real opportunity. It also creates risk that may not be obvious until something fails.

The real issue is not AI. It is unmanaged AI.

AI can improve efficiency, decision-making, customer experience, risk analysis and operational insight. It can help organisations identify patterns faster, test scenarios more effectively, review evidence, support control assessments and improve the quality of risk conversations.

But the benefits do not remove the need for discipline.

In fact, the more powerful and embedded AI becomes, the more important mature governance becomes. Organisations need to know where AI is being used, what it is being used for, what data it relies on, what decisions it influences, what controls exist, what assurance has been performed and what happens if the AI-enabled service fails or changes.

This is not just a technology issue. It is a governance issue, a risk management issue, a third-party risk issue, an operational resilience issue and an accountability issue.

“The vendor has it covered” is not a risk assessment

One of the biggest traps with AI is assuming that a third party, software provider or platform vendor has already dealt with the risk. This is where organisations need to be more demanding.

It is not sufficient to ask whether a provider uses AI and then accept a broad assurance statement or whatever evidence the provider chooses to share, including the fact that the service is used elsewhere. Organisations need to understand the quality of the provider’s risk assessment. They need to see evidence. They need to know what has actually been tested, what assumptions have been made, what controls are in place and whether the assurance is proportionate to the importance of the service.

That means asking practical questions:

  • Where is AI used in the product or service?
  • What customer, operational, security or confidential data is exposed to the AI model?
  • How are model changes governed?
  • What controls prevent inappropriate, biased, insecure or unreliable outputs?
  • How is the service monitored?
  • What assurance has been completed?
  • What evidence supports the provider’s claims?
  • What contingency plans exist if the AI service fails, is compromised or materially changes?
  • How does the organisation know the provider’s AI risk assessment is fit for purpose?

Trust may be part of a commercial relationship. But trust is not a control.
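
One way to make these questions operational rather than rhetorical is to capture them as a structured due-diligence record that must be populated with evidence before the service is approved. The sketch below is a minimal illustration; the class, field names and gap checks are assumptions for the example, not an APRA-mandated template.

```python
# Minimal sketch of a third-party AI due-diligence record. The class name,
# fields and gap checks are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass, field


@dataclass
class VendorAIAssessment:
    provider: str
    service: str
    ai_used_where: str                 # where AI sits in the product or service
    data_exposed: list[str]            # customer, operational or confidential data sent to the model
    model_change_governance: str       # how model changes are notified and approved
    output_controls: list[str]         # controls over biased, insecure or unreliable outputs
    monitoring: str                    # how the service is monitored in production
    assurance_completed: list[str]     # independent assurance performed, with dates
    evidence_refs: list[str] = field(default_factory=list)  # documents that support the claims
    contingency_plan: str = ""         # fallback if the AI service fails or materially changes

    def gaps(self) -> list[str]:
        """List the areas where the answer is still 'trust us' rather than evidence."""
        missing = []
        if not self.evidence_refs:
            missing.append("No evidence supplied for provider claims")
        if not self.assurance_completed:
            missing.append("No independent assurance completed")
        if not self.contingency_plan:
            missing.append("No contingency plan for failure or material change")
        return missing
```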

Boards and executives need enough literacy to challenge

APRA also highlighted that boards are often enthusiastic about the benefits of AI and keen to see it adopted, but may not yet have the information or literacy needed to effectively challenge management on AI-related risks.

Boards and executives do need enough understanding to ask better questions. They need to be able to challenge whether AI is being adopted safely, whether risks are being assessed consistently, whether assurance is meaningful and whether management has visibility over the most material AI dependencies.

Good AI governance should help leaders understand:

  • which AI use cases are material;
  • which business processes rely on AI;
  • which third parties are critical;
  • what could go wrong;
  • what controls are relied upon;
  • what evidence supports the risk position;
  • what risk appetite applies; and
  • whether current oversight is strong enough.

AI governance cannot sit only in technology. It needs to be connected to strategy, risk appetite, procurement, operational risk, cyber security, data governance, privacy, compliance, internal audit and business ownership.

Risk professionals should not sit on the sidelines

There is another important point: this is not only a defensive issue for risk professionals. AI is also an opportunity.

Risk teams should be embracing AI responsibly. Used well, AI can help improve the quality, speed and depth of risk maturity assessments. It can help identify gaps, structure evidence, compare practices against standards, support scenario analysis, improve board reporting and make risk conversations more accessible to business teams.

The risk function should not be the team that simply says “no” to AI.

Mature AI risk assessment is now essential

A mature AI risk assessment process should not be a one-off checklist. It should be part of how the organisation governs change, technology, third parties and operational resilience.

At a minimum, organisations should be able to demonstrate:

  • an inventory of AI use cases, including embedded third-party AI;
  • clear ownership and accountability;
  • risk assessments based on materiality and use case;
  • evidence of controls and assurance;
  • consideration of data, privacy, cyber, operational, conduct and resilience risks;
  • board and executive reporting for material AI exposures;
  • contingency planning for critical AI-enabled services; and
  • ongoing monitoring, not just approval at implementation.
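
As a simple illustration of what an inventory and "ongoing monitoring, not just approval at implementation" can look like in practice, the sketch below assumes use cases are held as plain records and flags material exposures with stale assurance or no contingency plan for board reporting. The field names, the example entry and the 12-month threshold are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of an AI use-case inventory review. Field names, the example
# entry and the 12-month assurance threshold are illustrative assumptions.
from datetime import date

inventory = [
    {
        "use_case": "Customer complaint triage",
        "owner": "Head of Operations",
        "third_party": "Embedded vendor model",   # embedded third-party AI belongs in the inventory too
        "materiality": "high",
        "controls": ["human review of outputs", "access restrictions"],
        "last_assurance": date(2024, 11, 1),
        "contingency_plan": True,
    },
    # ... remaining use cases
]


def board_report(entries: list[dict], assurance_stale_days: int = 365) -> list[dict]:
    """Flag material AI exposures with stale assurance or no contingency plan."""
    flagged = []
    for entry in entries:
        stale = (date.today() - entry["last_assurance"]).days > assurance_stale_days
        if entry["materiality"] == "high" and (stale or not entry["contingency_plan"]):
            flagged.append(entry)
    return flagged
```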

The key word is evidence.

Policies are useful. Frameworks are useful. Governance forums are useful. But if the organisation cannot show the evidence behind the risk position, it will struggle to demonstrate that AI is being managed in a mature and controlled way.

Where we are focused

At Innovation of Risk, we are doing a lot of work on how AI can be used to support better risk maturity and risk assessment practices. That includes using AI to help organisations explore maturity, identify gaps, structure evidence, assess governance practices, support regulatory self-assessments and make risk insights more practical for boards, executives and business teams.

The goal of AI in risk is not to replace judgement. It is to improve the quality of the assessment process and make risk conversations more useful.

The organisations that get this right will move with discipline

The answer is not to blindly trust AI. It is also not to block it out of fear.

The organisations that perform best will be the ones that adopt AI with discipline. They will understand where AI is being used, assess the risks properly, challenge third-party assurances, demand evidence, monitor changes and make sure boards and executives have enough visibility to govern effectively.

APRA’s message is timely.

AI is moving quickly. Risk governance needs to move with it.

And for risk professionals, this is a moment to step forward.
