The AI Governance Gap: A Finance Leader’s Wake-Up Call

In August, I wrote that cybersecurity’s journey from an IT concern to a board-level issue was inevitable. A few months later, I find myself watching another transformation unfold, and this time it feels even faster. Generative AI is advancing at a pace that even those of us in senior finance roles are struggling to keep up with.

I am not a data scientist. I am a finance and governance professional who is learning. Over the past few months, I have been researching AI concepts, attending discussions, and speaking to experts. Each time I think I have understood it, I realise there’s still more to learn.

But one thing is clear: when AI goes wrong, the consequences are not technical; they are financial. It can affect brand value, market confidence, and the trust that takes years to build.

The AI Surge: Through A CFO Lens

AI adoption has already outpaced governance and is no longer experimental. It has entered mainstream business functions.

  • 78 per cent of global companies now use AI in at least one business area.
  • McKinsey’s 2025 State of AI Report found that organisations where senior leadership (such as the CEO) directly oversees AI governance tend to show stronger financial outcomes.
  • Private investment in generative AI crossed USD 33.9 billion in 2025, up nearly 19 per cent from the previous year.

These numbers signal optimism, but as a CFO, I am asking different questions:

  • Where does the investment end and the oversight begin?
  • What is the cost of a wrong decision made by a machine that sounds convincing?

In many boardrooms, AI proposals are being approved as efficiency tools rather than strategic transformations. The issue is not just adoption, but assurance. Boards often lack frameworks to verify AI outputs, track model reliability, or quantify the cost of error.

As finance professionals, we are trained to measure return on investment. But with AI, we also need to measure exposure: the unseen cost when models produce errors or when employees use unapproved tools.

Our role must be to ask the questions others overlook: What assumptions are inside this model? What is the fallback if it fails?

What I Am Learning About Generative AI

At its simplest, generative AI refers to systems that create content based on patterns they have learned. That content can be text, images, code or audio.

Tools like ChatGPT, which many of us have experimented with, are powerful, but they are not encyclopaedias of facts. They predict the most plausible continuation based on patterns in their training data. That means they can sound confident even when they are entirely wrong, a behaviour called hallucination: the system produces information that looks believable but is false. It might invent a financial ratio, misinterpret a regulation, or cite a source that does not exist.

It’s one of the most unsettling things for a finance person to accept: a system that can fabricate data with absolute confidence.

And it’s not a theoretical risk. Recently, a large consulting firm had to refund a major government client after an AI-written report was found to contain fabricated citations and false facts. No one had intentionally done anything wrong; they simply trusted the AI too much. That single example captures the essence of the AI governance gap: efficiency without verification is not progress; it is exposure.

The Numbers Behind the Risk

AI promises to make us faster, but not necessarily safer.

  • IBM’s 2025 Cost of a Data Breach Report found that the average global breach cost is USD 4.44 million, a 9 per cent decline from 2024’s USD 4.88 million.
  • The report also found that 97 per cent of organisations that experienced an AI-related security incident lacked proper AI access controls, and 63 per cent had no formal AI governance policy in place.
  • In the financial sector, the cost per incident jumps to USD 6.08 million (according to IBM’s 2024 financial-industry analysis), which is higher because of regulatory and reputational sensitivity.

For CFOs, these are not IT metrics. They represent financial exposure: potential write-downs, reputational loss, regulatory costs, and investor reaction.

When Brand and Market Value Are on the Line

In finance, we talk about intangible assets like goodwill and reputation, but AI has given those terms a new urgency.

A false AI-generated post, a deepfake video, or a misquoted executive can create panic before the truth catches up. Studies show that 43 per cent of companies hit by reputational crises underperform their peers for at least two years. In emerging markets, startups have already lost investor confidence because of AI-generated rumours and misattributed statements.

That’s why, when I think about AI, I don’t see a tool; I see a new kind of financial liability that needs its own risk framework.

What “Shadow AI” Taught Me

One of the more surprising discoveries in my research is how widespread shadow AI has become.

Employees (or business units) may use AI tools on their own to write reports, summarise contracts, or analyse data, often without IT or governance oversight. The risk arises when internal or sensitive data is uploaded to third-party systems outside our control.

As CFOs, we have long managed gaps between policy and practice. AI has opened a new one: between what we think people are doing and what they are actually doing online.

Shadow AI isn’t about malicious intent; it is about convenience. But it forces us to extend behavioural governance beyond budgets and models.

How I’ve Started Thinking About Controls

Every finance leader has a process-driven mindset, and that’s what AI governance needs. Here’s how I now approach it:

  1. Validation before trust. Every AI-generated number or forecast must be cross-checked against audited data or reliable sources.
  2. Audit trails. Any AI system we use should log prompts, data sources, and user activity. If a figure is challenged, we must be able to trace how it was produced.
  3. Human review. AI can assist in analysis. Decisions remain human. Content that reaches investors, regulators or customers should have a clear owner who signs off.
  4. Transparency. Internal and external documents should identify where AI assisted the work. This is not to create fear. It is to create accountability.

These are not IT controls; they are financial controls, reimagined for a new kind of risk.
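As an illustrative sketch of the audit-trail control above, an AI system's interactions can be captured in an append-only log. Every field name and the JSON Lines format here are my own assumptions, not a standard schema; a real deployment would also record model version, parameters, and reviewer sign-off.

```python
import json
from datetime import datetime, timezone

def log_ai_interaction(user, prompt, data_sources, output,
                       log_path="ai_audit_log.jsonl"):
    """Append one AI interaction to an append-only audit log (JSON Lines).

    Illustrative sketch only: field names are assumptions, not a standard.
    The goal is traceability, so that if a figure is challenged, we can
    show how it was produced and who produced it.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "data_sources": data_sources,
        "output_summary": output[:200],  # truncate to keep the log lean
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage: record a forecast narrative produced with AI assistance.
entry = log_ai_interaction(
    user="fpa.analyst",
    prompt="Summarise Q3 revenue variance drivers",
    data_sources=["audited_q3_ledger.csv"],
    output="Revenue variance driven by FX movements and one-off licence renewals.",
)
```

The append-only JSON Lines format is chosen deliberately: like a ledger, entries are added but never edited in place, which is what makes the trail auditable.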

Protecting Data Is Protecting Value

AI runs on data. Data is the new inventory of a digital business: an asset, but also a potential liability. When we feed it into AI tools, we must know where it goes and who can see it.

Regulators in the UAE, Saudi Arabia, and across the GCC are already tightening rules on data storage and residency. As a CFO, I treat this the same way I would treat treasury control. Every asset, even digital, must be traceable and recoverable.

The questions I use are simple:

  • Where is our data stored? 
  • Who can access it? 
  • When does it get deleted?

If I can’t answer these, I can’t sign off on risk.

Practical steps that help:

  1. Use secure, enterprise AI platforms

Free public tools are designed for open learning, not confidentiality. Enterprise platforms such as Azure OpenAI, Google Vertex, or AWS Bedrock offer data isolation—ensuring company inputs stay private and aren’t reused to train public models. This is vital when handling client or financial data under an NDA.

  2. Restrict public AI use for confidential data

Set a clear policy prohibiting uploads of reports, contracts, or customer details into public AI tools. It’s not about blocking innovation; it’s about preventing accidental data exposure and ensuring compliance with privacy laws.

  3. Classify data and limit access

Sort data into Public, Internal, and Confidential categories, and match model access accordingly. Just as not everyone can post accounting entries, not every model should process sensitive data.

  4. Include retention and deletion clauses in AI contracts

Vendor agreements must define how long data is stored, when it is deleted, and guarantee that it won’t be reused to train other models.

These clauses are the financial equivalent of audit controls; they ensure that what leaves your books (in data form) doesn’t resurface elsewhere without consent.
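The classification step above can be sketched as a simple policy table that maps each data tier to the AI destinations cleared for it. The tier names and destination labels are hypothetical examples of mine, not a standard taxonomy; the point is the mechanism, that model access follows data class the way posting rights follow the chart of accounts.

```python
# Illustrative policy table. Tier and destination names are hypothetical
# assumptions, not a standard taxonomy.
ACCESS_POLICY = {
    "Public": {"public chat tool", "enterprise platform"},
    "Internal": {"enterprise platform"},
    "Confidential": {"enterprise platform (isolated tenant)"},
}

def is_upload_allowed(classification: str, destination: str) -> bool:
    """Allow an upload only if the destination is cleared for the data tier.

    Unknown classifications default to deny, mirroring the accounting
    principle of rejecting entries that lack a valid account code.
    """
    return destination in ACCESS_POLICY.get(classification, set())

# A confidential contract must never reach a public chat tool.
assert is_upload_allowed("Confidential", "public chat tool") is False
assert is_upload_allowed("Public", "public chat tool") is True
```

Defaulting unknown tiers to "deny" is the design choice worth noting: it forces data to be classified before it can travel, rather than leaving unlabelled data in a permissive grey zone.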

Fine-Tuning: Promise and Peril

Many organisations are now training large models on their own data. This is called fine-tuning. Done well, it makes AI more useful for the business. Done poorly, it creates a new surface for risk. 

But it’s also where financial discipline meets ethical responsibility. Fine-tuning costs money, requires infrastructure, and involves sensitive data.

Before approving spending on fine-tuning, a CFO ought to ask four questions:

  • Who owns the resulting model and the intellectual property?
  • Can the vendor reuse our data or learn from it for other clients?
  • Is the training environment secure and auditable, and has it been tested in a sandbox first?
  • Do the expected benefits outweigh the long-term risk and the cost of ongoing monitoring?

Fine-tuning can be an asset. Without controls, it becomes a contingent liability that sits off the balance sheet until a problem brings it on.

Fiduciary Duty in the Age of AI

I have come to see AI governance as an extension of fiduciary duty. The same principles apply:

  • Duty of care means verifying outputs, managing bias and monitoring model drift.
  • Duty of loyalty means protecting shareholder value by safeguarding brand and trust.
  • Duty of obedience means complying with AI and data regulations, including the EU AI Act, Singapore’s AI Verify guidance and GCC data protection rules.

AI is not just a technology challenge. It is a test of financial leadership: how we interpret, assess, and manage something we don’t fully control. AI risk is measurable, and what is measurable can be managed. The question is whether boards are ready to act.

The CFO’s Evolving Role

CFOs are used to being custodians of financial accuracy. Now we must become custodians of digital accuracy.

That means embedding AI oversight into the same structures that protect cash, inventory, and information. It means viewing AI projects not only as cost centres but as potential sources of risk exposure.

Our teams will soon be using AI for forecasting, reconciliation, and even reporting. Before that becomes routine, we must set standards for validation, approval, and disclosure.

AI governance, I’ve realised, is not an IT framework; it’s an accounting one, written in a new language.

In practice, this means:

  • Challenging business cases for AI before funding. The case must include controls and a contingency plan.
  • Embedding AI accountability in internal control frameworks and the enterprise risk register.
  • Partnering with audit and IT to validate data quality and integrity.
  • Setting ROI metrics that include risk-adjusted returns, not only productivity gains.
  • Reporting AI exposure to the board on a regular schedule, the same way we report liquidity and compliance.
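The risk-adjusted ROI metric above can be made concrete with a toy expected-value calculation. Both the formula and the figures are illustrative assumptions of mine (a simple probability-weighted loss, not a standard model); only the USD 6.08 million incident cost echoes the IBM financial-sector figure cited earlier.

```python
def risk_adjusted_roi(expected_gain, cost, incident_probability, incident_cost):
    """Risk-adjusted return for an AI initiative.

    Deliberately simple sketch: subtract the probability-weighted cost of
    an AI incident from the expected gain before computing return on cost.
    """
    expected_loss = incident_probability * incident_cost
    return (expected_gain - expected_loss - cost) / cost

# Hypothetical case: USD 2.0m productivity gain, USD 0.8m project cost,
# 5% chance of an incident costing USD 6.08m.
roi = risk_adjusted_roi(2_000_000, 800_000, 0.05, 6_080_000)
# Expected loss of USD 304,000 turns a 150% headline return into 112%.
```

Even this crude adjustment changes the conversation: a proposal that clears the hurdle rate on productivity alone may not clear it once the incident exposure is priced in.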

The goal is not to slow innovation. It is to make it sustainable.

A CFO’s Reflection

When I sit with AI practitioners from the industry, I often feel like I’m back in my early audit days, surrounded by technical jargon that takes time to decode.

But that’s what makes this phase so interesting. We are all learning. And finance leaders have a unique advantage: we already understand the discipline of governance and the power of questions.

Our job is not to understand every algorithm, but to make sure it doesn’t create liabilities on the balance sheet.

Before we approve any AI initiative, we ought to ask ourselves these three questions:

  • Does it protect value? 
  • Does it limit exposure? 
  • Does it align with our fiduciary duty?

I believe that mindset is how finance professionals can lead responsibly in this new AI-driven world.

Closing Thought

Generative AI is shaping audit, forecasting and decision-making today. It will not wait for us to feel ready. The task for finance leaders is to stay curious, stay humble and keep learning, while placing governance at the centre.

Fiduciary duty today extends beyond financial statements. It includes data, algorithms, and brand trust. If we learn together, ask questions, and apply the same principles that built strong balance sheets, we can make AI work for business, not against it.

Satish Bangera
A dedicated Financial Strategist, I am passionate about sharing insights and expertise in finance through engaging and informative columns. With a solid foundation in financial leadership across diverse industries in the Middle East and Africa (MEA), I bring over 28 years of experience to the table. As a seasoned financial executive, I have a proven track record of driving transformative financial performance and fostering sustainable growth. My tenure as Group Head of Finance at Emitac was marked by orchestrating strategic initiatives that optimised financial leverage, resulting in substantial savings and margin release. I consistently delivered tangible results through meticulous analysis and proactive measures, driving revenue growth and enhancing financial stability. Beyond my corporate roles, I am deeply committed to contributing to the broader financial discourse. My expertise extends to strategic financial planning, risk management, and investment analysis, all of which are essential components of informed financial decision-making. Whether discussing market trends, investment strategies, or regulatory changes, I aim to provide valuable insights that empower readers to navigate the complexities of the financial landscape. With a Bachelor's degree in Accountancy, Certified Management Accountant (CMA) certification, and a wealth of experience in financial leadership, I am excited about the opportunity to leverage my expertise to deliver compelling and actionable content for financial.me. I aim to engage, educate, and inspire readers, fostering a deeper understanding of finance and empowering them to achieve their financial goals.
