105 claims. All of them checkable.
An ongoing research program into AI capability across Australian organisations. Transparent methodology. Published sample sizes. Open to scrutiny.
The research program
Most claims about AI in the workplace come from vendors or consultancies with something to sell. Ours come from a structured research program that's been running since 2023. We survey organisations, measure capability using the 48-cell framework, and publish what we find — including the parts that are inconvenient.
The sample isn't enormous and we don't pretend it is. What it is: carefully constructed, consistently measured, and transparently reported. Every claim traces back to data. Every methodology decision is documented.
105 verified claims
2 retracted findings
adaptive questions
1,967 indicators
The evidence chain
Here's how a claim gets verified: raw survey data is collected through the adaptive assessment instrument. Responses are scored against the 1,967 indicators in the 48-cell framework. Statistical analysis produces a finding. The finding is reviewed for methodological soundness. If it holds, it becomes a claim. If it doesn't, it goes in the bin.
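The pipeline above can be sketched in code. This is an illustrative sketch only: the `Finding` fields, the review thresholds (`min_n`, `alpha`), and the function names are hypothetical, not the program's actual tooling or criteria.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A finding produced by statistical analysis of scored responses."""
    statement: str
    sample_size: int
    p_value: float

def review(finding: Finding, min_n: int = 30, alpha: float = 0.05) -> bool:
    """Methodological review: a finding survives only if the sample is
    large enough and the result is statistically significant.
    The thresholds here are placeholders, not the program's real criteria."""
    return finding.sample_size >= min_n and finding.p_value < alpha

def verify(findings: list[Finding]) -> tuple[list[Finding], list[Finding]]:
    """Split findings into verified claims and retracted findings
    (the ones that 'go in the bin')."""
    claims = [f for f in findings if review(f)]
    retracted = [f for f in findings if not review(f)]
    return claims, retracted
```

The key property is that every claim is the output of the same mechanical gate, so a retraction is the gate doing its job rather than an exception to it.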
Two findings didn't survive review. We pulled them. That's not a failure — it's the methodology working.
We don't hedge with "our research suggests" when the data is clear, and we don't overclaim when the sample is small. Every finding published in our Analysis section includes the methodology, sample size, and confidence level. If you want to argue with the conclusions, you have everything you need to do it properly.
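Reporting a confidence level alongside a small sample size is commonly done with a Wilson score interval, which stays sensible where the naive normal approximation breaks down. A minimal sketch, assuming a simple survey proportion; this is a standard statistical technique, not a description of the program's actual analysis code:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a proportion (z=1.96 gives ~95% confidence).
    Unlike the normal approximation, it behaves well at small n."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - margin), min(1.0, centre + margin))

# e.g. 30 of 100 respondents: the interval is noticeably wider than
# it would be at n = 1,000, which is exactly what honest reporting shows.
low, high = wilson_interval(30, 100)
```

Publishing the interval rather than just the point estimate is what lets a reader argue with the conclusion properly.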
Where to find the findings
The research produces two outputs: verified claims that feed into the 48-cell framework, and published analysis for anyone who wants to understand what's actually happening with AI capability in Australian organisations.