AI Agents, Non-Human Identity Risk, and the Transparency Problem Leaders Cannot Ignore

The 30-second take

AI agents and non-human identity risk are not just technical topics. They test whether leaders can connect AI use to decisions, customers, operations, accountability and ethics. The practical challenge is visibility and evidence.

Can leaders see how AI use cases are identified, assessed, approved, monitored and challenged? Can risk managers show that governance is helping the organisation take better, clearer and more deliberate risk? Can assurance teams test whether controls are working in practice?

If the answer is no, the organisation has an AI transparency and maturity problem.


AI is no longer just a tool that waits for a human to type a prompt. Increasingly, AI systems can take action: retrieve information, trigger workflows, draft responses, analyse data, interact with systems, call APIs, and even make decisions. These are often described as AI agents, systems that can perform tasks with a degree of autonomy on behalf of a person, team, function or organisation.

This matters because every action needs an identity.

In the past, most organisations managed identity risk by focusing on people: employees, contractors, administrators and privileged users. But AI changes that. An AI agent may operate through a service account, a bot account, a third-party platform, an integration, or an automated workflow. It may not look like a person, but it can still access data, influence decisions, create records, trigger actions and affect customers.

That is non-human identity risk.

And it matters even when AI sits “behind the scenes”.

If a third-party platform uses AI inside its product, if an automated process relies on AI-generated outputs, or if a workflow uses an AI agent to support internal decisions, your organisation still needs to understand what the AI is doing, what access it has, what decisions it influences, and who remains accountable.

Transparency is not optional. It is the foundation of ethical AI use.

If leaders cannot see where AI is being used, how it acts, what data it touches, and who owns the outcome, they cannot make informed decisions. They cannot properly assess customer impact, privacy risk, cyber exposure, operational dependency, fairness, accountability or assurance. They are not governing AI. They are hoping it behaves.

AI agents and non-human identity risk are therefore not narrow technology issues. They are leadership tests.

They test whether the organisation can see its real AI use cases, understand the trade-offs, challenge the evidence, and act before small AI decisions become unmanaged business dependencies.


Why this matters

AI governance is moving from principles to proof.

Policies, frameworks and committee papers have value, but they are not enough. Leaders need evidence that AI use is visible, understood, controlled and accountable across the lifecycle.

The danger is not simply that AI creates new risks. The greater danger is that AI quietly changes existing risks before the organisation has named the activity as an AI use case.

Customer fairness, privacy, cyber security, operational resilience, outsourcing, model risk, conduct, records management, accountability and assurance can all shift when AI becomes part of a process.

This can happen in obvious ways, such as a chatbot interacting with customers.

It can also happen quietly, such as:

  • a vendor embedding AI into an existing platform;
  • a workflow using AI to classify, prioritise or escalate work;
  • a system account giving an AI tool access to sensitive information;
  • an internal team using AI to draft customer communications;
  • an automated process relying on AI-generated summaries, recommendations or decisions.

The organisation may still see the process as business-as-usual. But the risk profile has changed.

AI risk management is not about slowing progress. It is about making sure progress is visible, explainable, ethical and controlled.


The real issue is leadership

The practical issue is visibility.

If leaders, particularly boards and executives, do not see where AI is being used or consider the non-human identity risk, they cannot make informed decisions about risk appetite, investment, controls or assurance.

Consider leadership from the perspective of each group:

  • If boards do not set the direction for AI agents and their transparency to the people who use the services, risk appetite remains unclear and decisions are made without proper consideration of the real impacts.
  • If executives treat AI agents as the solution without understanding the trade-offs and implications, they will not make the best decisions for the people who use their services or products.
  • If business leaders cannot see or understand how AI is used in third-party tools, they cannot properly challenge whether the organisation understands the dependency or the implications of the AI's role.
  • If technology teams cannot identify which non-human identities are connected to AI-enabled workflows, they cannot properly manage security, access, logging, monitoring or misuse.
  • If risk and assurance teams cannot see the evidence trail, they cannot give meaningful comfort that controls are working.

This is why AI agents and non-human identity risk need to be discussed as business issues, not just risk or technology issues.

The organisation needs to understand:

  • what decision, process or customer outcome is changing;
  • what identity the AI or automated process uses;
  • what systems and data it can access;
  • who owns the use case;
  • what could go wrong;
  • what would be unacceptable;
  • what evidence shows the controls are working;
  • who can pause, change or retire the use case if risk increases.

Good AI governance is not a policy on a shelf.

It is the ability to explain a real AI use case, the decision or process it supports, the risk it creates, the identity it uses, and the evidence that controls are working.
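
To make that concrete, the sketch below shows one way a single use case could be captured as a structured register entry. It is an illustration only: the field names, the example use case and the values are hypothetical, not a prescribed schema or tool, and Python is used simply to keep the structure explicit.

    # A minimal, illustrative register entry for one AI use case.
    # Field names mirror the questions above; they are examples, not a standard.
    from dataclasses import dataclass

    @dataclass
    class AIUseCaseRecord:
        name: str                          # what the use case is
        decision_or_process: str           # what decision or outcome it changes
        owner: str                         # who owns it after go-live
        non_human_identity: str            # service account, bot or integration it acts through
        systems_and_data: list[str]        # what it can access
        could_go_wrong: list[str]          # plausible failure modes
        unacceptable_outcomes: list[str]   # what would make it unacceptable
        control_evidence: list[str]        # evidence that controls are working
        pause_authority: str               # who can pause, change or retire it

    # Hypothetical example: an AI agent that triages customer complaints.
    complaint_triage = AIUseCaseRecord(
        name="Complaint triage agent",
        decision_or_process="Prioritises and routes incoming customer complaints",
        owner="Head of Customer Operations",
        non_human_identity="svc-complaint-triage (service account)",
        systems_and_data=["CRM", "complaints inbox", "customer contact records"],
        could_go_wrong=["misroutes urgent complaints", "summarises complaints inaccurately"],
        unacceptable_outcomes=["complaints from vulnerable customers deprioritised"],
        control_evidence=["monthly routing accuracy review", "access log sampling"],
        pause_authority="Head of Customer Operations, with CRO escalation",
    )

The point is not the format. It is that every question in the list above has a named, written answer that an owner, a risk manager or an assurance team can test.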


Why transparency and ethics sit at the centre

AI ethics is often discussed in broad terms: fairness, accountability, privacy, explainability and human oversight.

Those principles only become real when the organisation can see what is happening.

Transparency allows leaders to ask the right ethical questions:

  • Are customers aware AI influences this process?
  • Could the AI produce unfair, misleading or harmful outcomes?
  • Does a human remain accountable for the decision?
  • Is the organisation using customer, employee or sensitive data in a way people would reasonably expect?
  • Can the organisation explain the outcome if challenged?
  • Can it prove who or what accessed the data?
  • Can it stop the AI-enabled process quickly if something goes wrong?

Without transparency, ethical AI becomes branding.

With transparency, ethics becomes operational.

That is the difference between saying “we use AI responsibly” and being able to prove it.


What business leaders should challenge

Business leaders do not need to become data scientists. But they do need to ask better questions.

They should challenge whether AI use is connected to business outcomes, customer impact, accountability and evidence.

Key questions include:

  • Which AI use cases are currently active, including those embedded in third-party platforms?
  • Which of those use AI agents, automated workflows, bots, service accounts or other non-human identities?
  • Who owns the use case after it moves from trial to business-as-usual?
  • What data, systems and decisions can the AI-enabled process access or influence?
  • What would make the use case unacceptable, even if it improves efficiency?
  • How would the organisation know if the AI-enabled process started producing poor, unfair, inaccurate or unreliable outcomes?
  • What changes if the AI capability sits inside a vendor platform rather than an internally built tool?
  • Who can pause, change or retire the use case if the risk profile changes?

These are not technical questions dressed up as governance. They are basic leadership questions for any organisation using AI.


What risk managers should evidence

Risk managers should avoid becoming the team that only says “be careful”.

The stronger role is to help the organisation take better risk.

That means turning AI governance into practical evidence that supports decision-making.

Risk managers should be able to evidence:

  • a current inventory of AI use cases, including material third-party AI capabilities;
  • identification of AI agents, bots, service accounts, integrations and other non-human identities connected to AI-enabled processes;
  • clear criteria for materiality, customer impact, operational dependency and risk level;
  • approval, challenge and risk acceptance at the right level;
  • controls tailored to the use case, not generic statements about responsible AI;
  • access controls, logging and monitoring for non-human identities;
  • review triggers when models, vendors, data, permissions, use cases or operating conditions change;
  • assurance activity that tests whether the governance process works in practice.

The goal is not more paperwork.

The goal is better visibility, better decisions and stronger accountability.
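
As a small illustration of what that visibility can look like in practice, the sketch below runs simple completeness checks over a use case register and flags entries with no named owner, identity, evidence or pause authority. The required fields, messages and example entry are hypothetical, not a framework.

    # Illustrative completeness checks over a simple AI use case register.
    # The required fields and messages are examples only, not a standard.
    def register_gaps(records: list[dict]) -> dict[str, list[str]]:
        required = {
            "owner": "no named owner",
            "non_human_identity": "non-human identity not identified",
            "control_evidence": "no evidence that controls are working",
            "pause_authority": "no one authorised to pause or retire it",
        }
        gaps: dict[str, list[str]] = {}
        for record in records:
            issues = [msg for key, msg in required.items() if not record.get(key)]
            if issues:
                gaps[record.get("name", "unnamed use case")] = issues
        return gaps

    # Hypothetical entry: the identity and some evidence are recorded,
    # but no owner or pause authority has been named yet.
    example_register = [{
        "name": "Complaint triage agent",
        "non_human_identity": "svc-complaint-triage",
        "control_evidence": ["monthly routing accuracy review"],
    }]
    for use_case, issues in register_gaps(example_register).items():
        print(f"{use_case}: " + "; ".join(issues))

Checks like this do not replace judgement. They simply make gaps visible early enough for someone accountable to close them.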


AI governance and decision rights

AI governance needs clear decision rights.

The organisation should know who can approve, change, pause, escalate or retire AI use cases. This becomes even more important when AI agents or non-human identities can access systems, trigger workflows or influence operational decisions.

Questions to ask:

  • Who owns the AI use case after launch?
  • Who owns the non-human identity or system access the AI uses?
  • Who can approve, pause, change or retire the use case?
  • Are risk acceptance decisions made at the right level?
  • Can the organisation evidence challenge, escalation and decision records?
  • Does the business owner understand they remain accountable, even when AI operates behind the scenes?

Risk assessment, testing and assurance

AI risk assessment should happen before use and continue after deployment.

A one-off review is not enough, especially where AI capabilities change, vendors update models, data sources shift, or workflows become more automated.

Questions to ask:

  • Has the use case been assessed against impact, complexity and materiality?
  • What testing occurred before release?
  • What could go wrong if the AI agent takes the wrong action or uses the wrong data?
  • What assurance evidence exists after deployment?
  • Who independently challenges the risk assessment and control design?
  • What monitoring shows the AI-enabled process is still working as intended?

Data, privacy, security and technology controls

AI relies on data, access and technology foundations.

The issue is not only whether controls exist. The issue is whether they are specific enough for the data, model, platform, identity and decision being supported.

Questions to ask (a small monitoring sketch follows the list):

  • What data does the AI use, and is it appropriate for the purpose?
  • What systems can the AI-enabled process access?
  • What permissions does the non-human identity have?
  • Have privacy, cyber and access controls been assessed for this use case?
  • Are data quality, retention and records risks understood?
  • Can the organisation monitor and audit AI-related access and activity?
  • What changes if data, permissions or system integrations change?
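
The last two questions are where non-human identities often fall through the cracks. As a rough sketch only, monitoring can start from something as simple as summarising what a given service account touched, assuming a generic audit log export; the file name, column names and identity below are hypothetical.

    # Illustrative only: summarise what a non-human identity accessed, using a
    # generic CSV audit log with columns: timestamp, identity, system, action.
    # The file name, column names and identity are hypothetical examples.
    import csv
    from collections import Counter

    def activity_summary(log_path: str, identity: str) -> Counter:
        summary: Counter = Counter()
        with open(log_path, newline="") as handle:
            for row in csv.DictReader(handle):
                if row.get("identity") == identity:
                    summary[(row.get("system"), row.get("action"))] += 1
        return summary

    # Example usage: which systems did the triage agent's service account
    # touch, and what did it do there?
    for (system, action), count in activity_summary("audit_log.csv", "svc-complaint-triage").items():
        print(f"{system}: {action} x {count}")

Real environments will rely on whatever logging their platforms provide. The point is that someone can answer, with evidence, what the non-human identity actually did.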

What an AI maturity assessment should show

A useful AI maturity and risk assessment should not simply ask whether a framework exists.

It should help leaders test whether the organisation can evidence how AI risk is understood, governed and challenged across the lifecycle.

That includes the human side.

Do business owners understand their accountability? Are risk teams involved early enough? Can technology teams identify and control non-human identities? Do assurance teams test what matters? Can executives see enough evidence to make informed decisions?

The assessment should also distinguish between activity and maturity.

Having a policy, committee or checklist is useful. But it is not the end point.

The better test is whether the organisation can show:

  • consistent decisions;
  • clear ownership;
  • transparent AI use;
  • controlled non-human identities;
  • practical controls;
  • ethical consideration of customer and stakeholder impact;
  • enough evidence to support confidence.

The final question

Can your organisation evidence how AI agents and non-human identity risk are identified, assessed, approved, monitored and challenged across the lifecycle — including where AI is embedded inside third-party platforms?

If not, the issue is not just AI risk.

It is a transparency, accountability and ethics risk hiding in plain sight.


Innovation of Risk provides AI maturity and risk assessment tools to help organisations have better internal risk, governance and assurance discussions. This post is general information only and is not legal, regulatory, audit or professional advice.
