A new report has raised concerns about the accessibility and quality of artificial intelligence transparency statements published by Commonwealth government departments and agencies, despite publication of such statements having been mandatory since February 2025.
The analysis evaluates compliance with the Australian Government’s Policy for the responsible use of artificial intelligence in government, which requires in-scope entities to publish an AI transparency statement outlining how they use AI systems.
Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) found that many statements are difficult to locate and vary widely in quality and detail. Very few were accessible via a clear, direct link, as recommended by the Digital Transformation Agency (DTA), limiting their practical value for public scrutiny.
The report identified 30 government entities potentially within scope of the policy for which no AI transparency statement could be found. While the DTA considers these entities out of scope, the researchers argue that the absence of a clearly defined and publicly available list of bound entities undermines clarity and accountability.
Among the statements that were located, compliance levels were inconsistent. Some agencies published detailed explanations of how AI systems are used, including governance structures and risk mitigation measures. Others provided minimal information that, while technically meeting formal requirements, did not offer meaningful insight into how automated systems affect decision-making, service delivery or public outcomes.
The report concludes that without clearer publication practices and stronger compliance mechanisms, the policy risks failing to achieve its transparency and accountability objectives. In particular, fragmented publication approaches make it difficult for citizens, researchers and oversight bodies to assess the scale and nature of AI use across government.
The authors recommend that AI transparency statements be published in a single centralised location to improve discoverability and consistency. They also call on the DTA to clarify which entities are strictly bound by the policy through an explicit public list, reducing ambiguity about scope.
Further recommendations include exploring enforcement or compliance mechanisms to ensure obligations are met, and revising the Standard for AI transparency statements so agencies cannot satisfy requirements through minimal or purely formal disclosures that lack substantive information.
The report, titled AI Transparency in Practice: An evaluation of Commonwealth entities’ compliance with their obligations regarding AI transparency statements, was authored by Professor Kimberlee Weatherall, José-Miguel Bello y Villarino and Alexandra Sinclair, with research assistance from Shuxan (Annie) Luo. It forms part of the Regulatory Project at ADM+S.
As AI systems are increasingly embedded in public administration — from service eligibility assessments to fraud detection and operational optimisation — the findings highlight the ongoing challenge of translating high-level AI governance principles into consistent, enforceable practice across government.
You can read the full report here.
