
Deloitte Australia to Partially Refund Government Over AI-Generated Errors in Report

Deloitte Australia will partially refund the Australian government for a report containing AI-generated errors, including a fabricated court quote and citations to nonexistent research, even though the department says the report's substance is unchanged.

Overview

A summary of the key points of this story verified across multiple sources.

  • Deloitte Australia will partially refund the Australian government for a submitted report found to contain significant errors generated by artificial intelligence.
  • The identified inaccuracies included a fabricated quote from a federal court judgment and citations to academic research papers that do not exist.
  • Deloitte confirmed the errors and, in a revised version of the report, disclosed that a generative AI large language model had been used in its preparation.
  • Despite the presence of AI-generated mistakes, the department confirmed that the core substance and recommendations of the report remained unaltered.
  • This partial refund highlights the firm's accountability for the quality and factual basis of its deliverables, particularly when incorporating advanced AI technologies.
Written by AI using shared reports from 6 articles.



Analysis

Compare how each side frames the story — including which facts they emphasize or leave out.

Center-leaning sources frame this story by emphasizing Deloitte's negligence and the severe nature of the AI-generated errors in its government report. They highlight the firm's partial refund and the strong criticisms from a researcher and a senator, while presenting Deloitte's response as minimal and evasive. The narrative focuses on the negative implications of AI "hallucinations" and the firm's accountability.

"Deloitte Australia will partially refund the 440,000 Australian dollars ($290,000) paid by the Australian government for a report that was littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to nonexistent academic research papers."

ABC News · 6d

"Deloitte Australia will partially refund the 440,000 Australian dollars ($290,000) paid by the Australian government for a report that was littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to nonexistent academic research papers."

Associated Press · 6d

"Deloitte will repay the final instalment of its government contract after conceding that some footnotes and references it contained were incorrect."

Reason · 6d

"Deloitte’s deal with Anthropic is a referendum on its commitment to AI, even as it grapples with the technology."

TechCrunch · 7d

"Deloitte Australia will offer the Australian government a partial refund for a report that was littered with AI-hallucinated quotes and references to nonexistent research."

Ars Technica · 7d

Articles (6)

Compare how different news outlets are covering this story.

FAQ

Dig deeper on this story with frequently asked questions.

What did the report assess?
The report assessed the Targeted Compliance Framework (TCF), part of the Australian IT system that manages welfare and benefits payments, on behalf of the Department of Employment and Workplace Relations (DEWR).

What errors were found in the report?
The errors included nonexistent academic references, a fabricated quote from a Federal Court judgment, and more than a dozen incorrect footnotes and references, which were corrected in an updated version of the report[1].

Was AI used to produce the report?
Deloitte confirmed that its methodology included the use of a generative AI large language model, specifically Azure OpenAI GPT-4o hosted on DEWR's Azure tenancy, as disclosed in the updated report[1].

Did the errors change the report's findings?
No. The Department of Employment and Workplace Relations confirmed that the corrections to the report's references and footnotes did not change its overall findings or recommendations regarding the TCF system.

Why does this matter?
The incident highlights the risks of relying on generative AI for factual accuracy in professional documents, even when the conclusions remain sound, and underscores the need for rigorous human review and quality control in AI-assisted consulting work.

History

See how this story has evolved over time.

  • 6d — 3 articles: Reason, TechCrunch, Ars Technica