OBSERVE Magazine



What Happens When AI Enters the Boardroom, and Who’s Accountable?


As artificial intelligence reshapes decision-making, boards across Asia-Pacific must move beyond curiosity to confident, ethical oversight. This article explores how directors can lead with clarity in an era where machines may mislead, and governance has never mattered more.

Earlier this year, a global consulting firm was forced to issue a partial refund to a government client after it was revealed that artificial intelligence had been used to generate a high-value report riddled with inaccuracies, including fabricated academic references and a fictional quote from a court judgment.

This incident was not isolated. Across industries, organisations have faced reputational damage, regulatory scrutiny, and financial liability due to AI-generated content that was misleading, factually incorrect, or entirely fictional.

AI is powerful, but not infallible.

To illustrate this, we posed a simple question to an AI assistant and were startled by the response.

[Image: AI Copilot prompt and response]

Let’s be clear: AI can mislead, misquote, and misrepresent. Not because it intends to deceive, but because it lacks intent altogether. It doesn’t lie in the human sense. It has no motives, no awareness. But it can produce false or misleading content — what technologists call “hallucinations.” Sometimes, AI presents uncertain information with unwarranted confidence, leading users to believe it is fact.

This reality raises a critical question for boards: how should organisations govern a tool that is both transformative and flawed?

The AI-Literate Board

An AI-literate board is not necessarily composed of technologists. Rather, it is made up of directors who understand AI’s strategic potential and risks, who ask informed questions about data, ethics, and governance, and who stay current through structured education and peer learning. These directors engage with management on AI strategy, oversight, and performance, ensuring that AI is not just a tool but a well-governed asset.

Globally, leading boards are embedding AI into committee charters, conducting quarterly AI risk reviews, and forming ethics committees to oversee high-impact applications. They are treating AI not as a novelty but as a core governance issue.

Aligning AI with Corporate Values

Responsible AI governance is now a boardroom imperative. Directors must ensure that AI use aligns with corporate purpose and stakeholder expectations. Ethical principles — fairness, transparency, and accountability — must be embedded into every AI deployment.

Human oversight remains essential, especially in high-impact decisions such as hiring, lending, welfare, or healthcare. Boards must monitor stakeholder trust through regular reporting, audits, and engagement, and challenge management on how AI decisions are made, what data is used, and how risks are mitigated.

Do Boards Need a Dedicated AI Director?

Not necessarily. Most governance experts agree that boards do not need a dedicated “AI expert director”, but they do need a degree of AI fluency on the board as well as access to deep expertise when needed. External advisors can provide such technical insight, and many boards are integrating AI oversight into existing committees such as audit, risk, or technology.

However, a dedicated AI director may be valuable in certain contexts — particularly if AI is core to the business model, the company faces high regulatory exposure, or the board lacks digital fluency. In these cases, appointing a director with AI governance, data ethics, or algorithmic risk experience can be a strategic asset.

Practical Advice for Directors

On a personal level, directors are encouraged to use AI to enhance their effectiveness — summarising board papers, generating questions, and exploring scenarios. However, they must always verify facts, sources, and assumptions. Curiosity is encouraged, but judgment is essential.

At an organisational level, boards should embed AI into agendas and committee oversight, establish a Responsible AI framework, and ensure management is accountable for ethical and strategic AI use.

For continuing professional development, directors may consider attending AI governance programs such as Quantium’s AI Board Edge or AICD’s AI Fluency Sprint.

So, does AI lie? Not exactly, but it certainly has a flair for fiction, a tendency toward overconfidence, and a habit of inserting em dashes where commas once stood. Boards must evolve from passive consumers of AI-generated insight to active stewards of its responsible use. That means asking sharper questions, demanding transparency, and embedding ethical oversight into every algorithmic decision.

Whether AI is used to summarise board papers or shape strategic foresight, the human element remains irreplaceable. Directors must stay curious, stay critical, and above all, stay educated. After all, in a world where AI can hallucinate with eloquence, the real power lies in knowing when to trust and when to verify.

________________________________________________________

Get in touch. Ready to strengthen your board’s approach to AI governance? Contact the dedicated leadership experts at your local Odgers office. We’re here to help leaders navigate the opportunities and risks of AI with clarity, confidence, and purpose.
