When “audit ready” meets ESG reality: what companies and AI reporting tools often miss
Since I began working independently, I have been having more conversations with founders, product teams, and sustainability leads than ever before. One pattern stands out: many teams are building AI-powered tools to support ESG reporting, particularly greenhouse gas emissions reporting. The ambition is exciting and the need is real. The reporting burden is heavy, and better systems are long overdue.
But recently, during a conversation with a team working on an AI-powered emissions reporting tool, one phrase stopped me in my tracks. They described the output as “audit ready”. That is a bold claim.
It is bold because I have worked on the other side of that statement. In my previous role, I provided limited assurance over non-financial ESG metrics in sustainability reports. When I hear “audit ready”, my mindset immediately shifts to a very specific set of expectations. Not reporting expectations. Assurance expectations.
So I asked a simple question: “What do you mean by audit ready?” The answer was reasonable: structured data capture, referenced emissions factors, a systematic approach. All good signals. Then I asked the next question: “Have you tested this with an external assurance provider, or designed it with assurance scrutiny in mind?” The answer was no.
That moment revealed something important. A lot of ESG reporting innovation is happening quickly, but the assurance reality is often not fully understood. And for companies moving toward external assurance, that gap matters.
Why “audit ready” is not a slogan
In practice, “audit ready” means that your data and your process can withstand challenge, and that the information you report is prepared in line with the applicable criteria and is free from material misstatement. That point matters because assurance is not simply checking whether a number exists. It is an evidence based assessment of whether reported information is credible, complete, consistent and prepared in accordance with a defined reporting basis.
For greenhouse gas emissions, that reporting basis is often the GHG Protocol. An assurance provider will typically evaluate whether you have applied the criteria correctly, including decisions on organisational and operational boundaries, emissions source categorisation, calculation methodology, emissions factor selection, and the treatment of estimates and assumptions.
Even limited assurance involves structured procedures such as inquiry, analytical review, and sample based testing. In practical terms, this means an assurer will test whether the numbers can be traced back to source data, whether calculations are performed correctly, whether the methodology aligns to the stated criteria, and whether the evidence supports the disclosures made.
For greenhouse gas emissions reporting, this typically comes down to five themes.
Completeness
Are you capturing all relevant emissions sources and activity data, using defined boundaries with documented decisions?
Consistency
Are methods applied consistently across sites, business units and reporting periods, and are any changes explained, justified and controlled?
Accuracy and reasonableness
Are calculations correct, are units and conversions correct, are emissions factors appropriate to the criteria, and do results make sense when tested against activity trends?
Traceability and evidence
Can you trace each reported figure back to source data and provide an evidence trail showing where it came from, who provided it, and when it was captured or updated?
Governance and control
Who owns the data, who reviews it, who approves it, how are changes tracked, and how are judgement calls, estimates and assumptions documented?
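To make these themes tangible, here is a minimal sketch in Python of what a single traceable emissions data point could look like. The schema and field names are illustrative assumptions rather than any prescribed standard; the point is simply that each reported figure carries its source, its owner, its unit and its factor reference, so every one of the five themes above can be tested against it.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class EmissionsRecord:
    """Illustrative schema: one activity data point with its evidence trail."""
    site: str                # where the activity occurred (completeness, boundaries)
    source_document: str     # e.g. invoice or meter-reading reference (traceability)
    provided_by: str         # who supplied the data (governance)
    captured_on: date        # when it was captured or updated (traceability)
    activity_amount: float   # quantity of activity data
    activity_unit: str       # must match the factor's denominator (accuracy)
    factor_id: str           # reference to a versioned emissions factor (consistency)
    factor_value: float      # kgCO2e per activity_unit

    def emissions_kgco2e(self) -> float:
        """Emissions = activity data x emissions factor."""
        return self.activity_amount * self.factor_value

record = EmissionsRecord(
    site="Plant A",
    source_document="INV-2024-0183",
    provided_by="facilities.manager@example.com",
    captured_on=date(2024, 11, 3),
    activity_amount=12_500.0,
    activity_unit="kWh",
    factor_id="grid-electricity-UK-2024-v2",
    factor_value=0.207,  # illustrative factor value only
)
print(f"{record.emissions_kgco2e():.1f} kgCO2e from {record.source_document}")
```

Even a structure this simple changes the assurance conversation, because the evidence trail travels with the number rather than having to be reconstructed later.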
This is why the phrase “audit ready” should never be treated as a product feature. It is a discipline across people, process, data, and governance, anchored to the reporting criteria you claim to follow.
Where AI genuinely helps, and where it can create risk
AI can absolutely support sustainability reporting, and I encourage it. I see value in a lot of use cases. It can help structure and clean messy datasets, automate the mapping of activity data to categories, summarise qualitative narrative and draft disclosures, identify anomalies and outliers for review, speed up documentation and evidence collation, and create faster iteration cycles for reporting.
But for assurance readiness, AI also introduces new questions that need good answers. What is the source of truth for each data point? What assumptions did the model make, and are they recorded? How are emissions factors selected, updated, and version controlled? Can you reproduce the same number later, or does the model change its outputs over time? How are overrides handled, are any applied automatically by the model, and who approves them? How is data quality flagged, and what happens when it is poor?
If a tool cannot answer these questions clearly, it may still be useful for reporting, but it should not be described as “audit ready”.
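As one illustration of what a good answer to the reproducibility question could look like, here is a hedged sketch. The registry and function names are hypothetical; the idea is simply that emissions factors are version controlled, every calculation is pinned to a specific factor version, and the same inputs always reproduce the same output.

```python
import hashlib
import json

# Illustrative version-controlled factor registry: factors never change in
# place; a revision gets a new version key, so old calculations stay reproducible.
FACTOR_REGISTRY = {
    ("grid-electricity-UK", "2024-v1"): 0.212,
    ("grid-electricity-UK", "2024-v2"): 0.207,  # revision; old version retained
}

def calculate(activity_kwh: float, factor_name: str, factor_version: str) -> dict:
    """Recompute emissions from pinned inputs and fingerprint the calculation."""
    factor = FACTOR_REGISTRY[(factor_name, factor_version)]
    inputs = {
        "activity_kwh": activity_kwh,
        "factor_name": factor_name,
        "factor_version": factor_version,
        "factor_value": factor,
    }
    fingerprint = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return {"emissions_kgco2e": activity_kwh * factor,
            "inputs": inputs,
            "fingerprint": fingerprint}

# Re-running with the same pinned inputs must reproduce the same number and hash.
first = calculate(12_500.0, "grid-electricity-UK", "2024-v2")
second = calculate(12_500.0, "grid-electricity-UK", "2024-v2")
assert first == second
```

The design choice that matters here is that nothing changes silently: a revised factor becomes a new version, and the fingerprint lets anyone confirm later that a reported number came from exactly these inputs.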
What an assurer will challenge
If you want to reality test whether your emissions data is moving toward assurance readiness, these are examples of the types of questions an assurance team will ask (and I can attest to these, because they are the questions I asked my previous clients):
“Show me the evidence for this activity data point.”
“Explain how you ensured completeness across all sites.”
“Why did this emissions source increase while activity decreased?”
“What is the emissions factor used, what is the source, and is it appropriate for this geography and year?”
“How do you handle estimations, and how do you document the basis for them?”
“Who reviewed and approved the final numbers?”
“What controls exist to ensure consistent application of methodology?”
“If I sample ten items, can you produce an audit trail confidently? Can you provide the supporting documents?”
This is not to make assurance sound intimidating. It is simply to highlight that assurance is a robustness test, not a reporting exercise.
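For the sampling question in particular, a minimal sketch of how a well-prepared system might respond could look like the following. The ledger structure is invented for illustration; what matters is that a random sample of reported figures can be tied back to supporting documents on demand.

```python
import random

# Illustrative ledger: each reported figure keyed to its supporting evidence.
ledger = {
    f"REC-{i:04d}": {"source_document": f"INV-2024-{i:04d}", "site": "Plant A"}
    for i in range(1, 201)
}

def evidence_pack(sample_size: int = 10, seed: int = 42) -> list[dict]:
    """Pull the audit trail for a random sample, as an assurer would request."""
    rng = random.Random(seed)  # seeded so the sample itself is reproducible
    sampled_ids = rng.sample(sorted(ledger), sample_size)
    return [{"record_id": rid, **ledger[rid]} for rid in sampled_ids]

for item in evidence_pack():
    print(item["record_id"], "->", item["source_document"])
```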
What companies should ask any AI-powered ESG reporting tool vendor
If you are considering an AI-powered ESG reporting tool, it helps to ask a small number of well-chosen questions that quickly reveal whether the system is built for credibility as well as efficiency. Start with traceability and evidence. Can each reported figure be traced back to the original source data, and can the platform store supporting evidence in a structured way? It is also worth understanding whether there is a clear audit trail showing uploads, edits, approvals and any changes made over time.
Next, explore methodology and reproducibility. Ask how organisational and operational boundaries are defined and controlled within the system, and how emissions factors are sourced, referenced and updated. One of the most important questions is whether the same calculation can be reproduced later using the same inputs, with clear version control so that factors and methodologies do not change silently in the background.
Then look at controls and governance. A credible platform should allow roles and permissions to be configured, and it should support review and approval workflows. If users can override calculations or adjust assumptions, those overrides should be logged with clear rationale and approval. It is also valuable if the tool can generate an evidence pack that aligns with typical assurance information requests, because that is often where organisations feel pressure later.
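To illustrate the override point, here is a short sketch of what logged, approved overrides could look like. The function and field names are assumptions for illustration; the design choice that matters is that the approver must be different from the requester, and that entries are written once and never edited.

```python
from datetime import datetime, timezone

override_log: list[dict] = []  # append-only in spirit: entries are never edited

def log_override(record_id: str, old_value: float, new_value: float,
                 rationale: str, requested_by: str, approved_by: str) -> None:
    """Record a manual override with its rationale and an independent approver."""
    if approved_by == requested_by:
        raise ValueError("Overrides need independent approval")  # segregation of duties
    override_log.append({
        "record_id": record_id,
        "old_value": old_value,
        "new_value": new_value,
        "rationale": rationale,
        "requested_by": requested_by,
        "approved_by": approved_by,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

log_override("REC-0042", 2587.5, 2430.0,
             rationale="Meter misread; corrected against supplier invoice",
             requested_by="analyst@example.com",
             approved_by="reporting.lead@example.com")
```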
Finally, ask about assurance alignment. Has the tool been designed with input from assurance practitioners? What expectations or assurance standards has it been built to support? And importantly, when a vendor says “audit ready”, ask them to define exactly what they mean and what evidence supports that claim.
The goal is not perfection. The goal is to avoid buying a system that accelerates reporting but weakens credibility, because in ESG, speed without trust can become a risk.
What an assurance readiness assessment actually is
Many companies only discover weaknesses in their ESG data and processes once an external assurance engagement has already started. When that happens, the experience can quickly become expensive, stressful, and reactive. It often leads to last minute remediation, rushed evidence gathering, and a lot of internal confusion about who owns what and where information actually sits. I have seen this first hand, and it can become messy very quickly, especially when timelines are tight and expectations are high.
An assurance readiness assessment is a practical step that happens before you go to external assurance. It is essentially a diagnostic review designed to test whether your ESG reporting process is likely to withstand assurance scrutiny. The goal is to identify gaps early, strengthen the foundations, and reduce surprises later.
In a greenhouse gas emissions context, this type of assessment typically reviews how boundaries have been defined and documented, how data sources are mapped across the organisation, and whether there are completeness checks to ensure all relevant sources have been captured. It also looks closely at methodology, including the logic behind calculations, the suitability and governance of emissions factors, and how estimates and assumptions are treated.
A key focus is traceability, meaning whether there is a clear audit trail and whether evidence can be produced efficiently for sampled data points. The assessment will also examine review and approval controls, including who is responsible for quality checks and how changes or overrides are logged and justified. Finally, it will highlight common failure points such as unit inconsistencies, unclear estimation approaches, or uncontrolled manual adjustments, and it will provide practical recommendations to strengthen controls and documentation before assurance begins.
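As a small illustration of the kind of automated check a readiness assessment might encourage, the sketch below flags two of the failure points mentioned above: missing sites and unit inconsistencies. The expected sites and units are invented for the example.

```python
EXPECTED_SITES = {"Plant A", "Plant B", "Head Office"}  # from the documented boundary
EXPECTED_UNITS = {"electricity": "kWh", "natural_gas": "m3"}

def readiness_checks(records: list[dict]) -> list[str]:
    """Flag two common failure points: missing sites and unit inconsistencies."""
    findings = []
    missing = EXPECTED_SITES - {r["site"] for r in records}
    if missing:
        findings.append(f"Completeness gap: no data for {sorted(missing)}")
    for r in records:
        expected = EXPECTED_UNITS.get(r["source_type"])
        if expected and r["unit"] != expected:
            findings.append(
                f"Unit inconsistency at {r['site']}: {r['unit']} vs expected {expected}"
            )
    return findings

records = [
    {"site": "Plant A", "source_type": "electricity", "unit": "kWh"},
    {"site": "Plant B", "source_type": "natural_gas", "unit": "kWh"},  # wrong unit
]
print(readiness_checks(records))
```

Checks like these do not replace the assessment itself, but they turn recurring manual review points into controls that run every reporting cycle.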
Done well, an assurance readiness assessment is one of the fastest ways to build confidence in your ESG disclosures, regardless of whether you are using an AI tool, a dedicated platform, or a manual reporting process.
My assurance view on AI-powered reporting
I am optimistic about AI in sustainability reporting. We need innovation. We need better tools. We need less manual friction. But credibility still comes from fundamentals. Clear boundaries. Trusted data. Traceability. Documented assumptions. Strong review controls. And governance that treats ESG information with the same seriousness as financial information.
So when you hear “audit ready,” treat it as an invitation to ask better questions. Not because you want to slow down progress, but because you want to build trust that lasts.
If your organisation is preparing for external assurance, or exploring how to strengthen emissions reporting processes before that step, an assurance readiness assessment can be a practical place to start.