Many boards naturally think of AI as a technology, and treat it as such. That is understandable, but a deeper understanding of artificial intelligence has led us to guide boards and executives to think of it as a capability (more than a technology).
This distinction matters: it changes how AI is governed, clarifies the shared responsibility for its use, and opens up a much larger opportunity space for its strategic value.
Thinking about AI as a capability also improves the quality and scope of risk analysis. IT and Digital teams often lead these conversations; however, AI is already embedded in our everyday working lives and will influence outputs (and perhaps judgements) at many levels.
When AI is understood as a capability, it naturally sits with the leaders who deploy it, not only the team that supports it. It gets governed the same way boards already govern other material capabilities: with clear accountability, defined boundaries, and oversight proportionate to the consequences. This is familiar ground for most directors.
Consider what AI actually does inside your organisation. It shapes decisions. It influences which customers receive what offer, which claims are flagged, which candidates progress. These are consequential choices, and boards already have strong instincts for governing consequential choices. The opportunity is to apply those instincts here.
The governance principles themselves are well established. Risk appetite. Delegations of authority. Oversight and assurance. Boards have governed complex, opaque capabilities before: treasury operations, actuarial models, algorithmic trading. AI is no different in kind. It is different in speed and scale, which makes governance more timely, not less relevant.
What we observe in boards that govern AI well is a willingness to ask three straightforward questions: what decisions is AI making or influencing, what data do those decisions rely on, and who is accountable for the outcomes? These are governance questions, and experienced directors already know how to ask them.
The regulatory landscape is moving in this direction too. The EU AI Act, Australia's proposed mandatory guardrails, and sector-specific guidance from APRA and ASIC all reflect a growing expectation that boards will be accountable for AI outcomes. Treating AI as a capability now positions boards to meet these expectations with confidence rather than urgency.
A practical starting point is a simple question at your next board meeting: if we were to treat AI as a capability rather than a technology per se, how would we govern and leverage it differently? The conversation that follows will often reveal how much of the governance foundation is already in place, where it needs strengthening, and where there are outright gaps.
AI is evolving faster than most organisations can track. AI governance, however, should be stable: it rests on the core values of the organisation, its strategy, established principles of accountability, and an understanding of how such a capability (even as it evolves) can help the organisation move forward and grow.