When AI makes things up: Deloitte’s costly lesson in artificial intelligence hallucinations

The Digital Guide breaks down Deloitte's AI-generated errors

As The Digital Guide, I have spent decades guiding Australian businesses to understand and leverage emerging technologies responsibly. When headlines broke about the Deloitte AI ‘scandal’ earlier this month, eyebrows rose and boardroom conversations ignited. The consulting giant was found to have used AI to draft parts of a government-commissioned report, and the software had invented facts. In AI terms, this is called a “hallucination”: when a system generates confident-sounding but false information.

When confidence outweighs truth

AI hallucinations aren’t new. Large language models (LLMs), such as OpenAI’s ChatGPT or Anthropic’s Claude, generate text by predicting the most likely next word. They don’t fact-check; they simulate understanding. In most cases, this works astonishingly well. But when a model’s training data is incomplete or ambiguous, it can simply invent details that sound plausible.
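To make that concrete, here is a minimal Python sketch of next-word (strictly, next-token) prediction. It uses the small open-source GPT-2 model via Hugging Face’s transformers library as an illustrative stand-in, since the commercial systems named above rest on the same principle, and prints the model’s top candidates for the word that follows a prompt.

```python
# A toy illustration of next-token prediction, using the small
# open-source GPT-2 model via Hugging Face's transformers library.
# (GPT-2 stands in here; commercial models are far larger but
# generate text by the same mechanism.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of Australia is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # scores for every token at every position

# Probabilities for the token that would come next after the prompt.
next_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_probs, k=5)
for p, i in zip(top.values, top.indices):
    # The ranking reflects statistical likelihood, not verified truth.
    print(f"{tok.decode(int(i))!r}: {p:.3f}")
```

Nothing in that loop consults a source of truth. Ask the same machinery for a citation and it will assemble one that is merely probable, which is exactly how a hallucinated reference is born.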

The Deloitte incident is a textbook case of misplaced trust in automation. AI can produce sentences that look authoritative, but without human verification, errors can slip through even in high-stakes contexts.

Labor senator Deborah O’Neill, who served on a Senate inquiry into the integrity of consulting firms, said, “Deloitte has a human intelligence problem. This would be laughable if it wasn’t so lamentable. A partial refund looks like a partial apology for substandard work.” She added, “Anyone looking to contract these firms should be asking exactly who is doing the work they are paying for and having that expertise and no AI use verified.”

Corporate shortcuts and credibility risks

Deloitte is hardly alone in experimenting with AI for productivity gains. A recent report by McKinsey showed that employees are three times more likely to be using generative AI tools than their leaders expect.

That gap exposes organisations to reputational and legal risks. When clients pay for expertise, they expect accuracy backed by human judgment. Using AI as a silent co-author without disclosure undermines that trust. The latest case from Deloitte shows the cost of opacity. If a report bears your logo, you are accountable for every sentence, no matter how it was generated.

The ethics of artificial authorship

AI hallucinations raise a thorny question: how much automation is too much? Many professionals now rely on AI to draft emails, analyse data, or summarise legislation. But the boundary between assistance and authorship is blurry.

As someone who works daily with organisations navigating digital transformation, I believe transparency must underpin all professional AI use. Disclosing when and how AI contributes to work is not only honest; it safeguards credibility. The public deserves to know when a machine had a hand in shaping a message that claims to be human-made.

From hype to humility

The Deloitte episode may become a turning point in corporate Australia’s relationship with AI. For years, consultants and businesses have rushed to showcase their digital credentials. Now, the emphasis is shifting from adoption to accountability and transparency.

Contract clauses that require disclosure of AI use and preserve human oversight deserve serious consideration. Universities, too, are adapting: the University of Melbourne recently launched an AI literacy module for postgraduate business students to help them identify hallucinated or fabricated information. Transparency, human review, and clear accountability frameworks are becoming the new hallmarks of responsible AI use.

Sidebar: 3 ways to spot an AI-generated error before it costs you and your executive team

  1. Verify every claim. Treat AI outputs as drafts, not deliverables. Check names, figures and citations against credible sources (a small automation sketch follows this list).
  2. Watch for overconfidence. AI tends to write with authority even when it’s wrong. Phrases like “as reported by” or “according to” without linked sources can be red flags.
  3. Keep a human in the loop. Always involve subject-matter experts in review stages, especially for public or policy documents.
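The first of those checks can be partly automated. Here is a minimal Python sketch (the function name check_cited_urls and the sample draft are my own illustrations, not part of any firm’s workflow) that pulls URLs out of a draft and flags any that fail to resolve, since a dead or non-existent link is a common fingerprint of a fabricated citation.

```python
# Flag cited URLs in a draft that do not resolve. Illustrative only:
# a dead link suggests a fabricated citation, but a live link does
# not prove the page supports the claim being made.
import re
import requests

URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def check_cited_urls(draft_text: str, timeout: float = 5.0) -> list[str]:
    """Return every URL in the draft that fails to resolve."""
    suspect = []
    for url in sorted(set(URL_RE.findall(draft_text))):
        try:
            # HEAD keeps the check lightweight; a few servers reject
            # HEAD requests, so treat the result as triage, not proof.
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            if resp.status_code >= 400:
                suspect.append(url)
        except requests.RequestException:
            suspect.append(url)
    return suspect

if __name__ == "__main__":
    draft = "The findings appear at https://example.com/annual-ai-report-2024"
    for url in check_cited_urls(draft):
        print(f"Could not verify: {url}")
```

Passing this check is a floor, not a ceiling: it is a triage step before expert review, never a substitute for it.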

A cautious path forward

AI is here to stay, but the Deloitte refund underscores a vital truth: automation without accountability erodes trust. As more Australian organisations integrate AI into everyday work, the challenge will be to pair technological speed with ethical restraint.

In the end, the question isn’t whether machines can write reports. It’s whether we can remain honest about when they do.

This article was researched and structured with assistance from Linus, my personal AI system, whose suggestions I have fully reviewed, checked, and approved. Transparency in authorship matters. It is how we build the ethical foundations for a future in which humans and machines collaborate honestly.


Tracy Sheen is The Digital Guide, an award-winning author and media commentator on technology and AI. She travels Australia helping organisations build digital confidence. www.thedigitalguide.com.au