Learning programs that outlast the consultant
Not training courses. Not workshops. Structured programs that build measurable capability over months, tied to the 48-cell framework.
Why training courses don't work
Here's the pattern: an organisation buys AI tools, runs a training day, measures attendance, and reports success. Six months later, usage has dropped to the same three people who would have figured it out anyway. The training didn't fail because it was bad. It failed because knowing about a tool and being capable with it are different things.
A training course is an event. Capability development is a trajectory.
The consultant-who-charms-and-leaves model has the same problem at a higher price point. Impressive workshops, polished slides, a flurry of enthusiasm that dissipates within a fortnight. We've watched it happen enough times to build something different.
Four dimensions of development
Every program is structured around the same four dimensions we measure in the assessment. This isn't coincidence: you can't develop what you haven't measured, and you can't measure what you haven't defined. A rough sketch of how the dimensions fit together follows the list.
Strategy. Where AI fits in the organisation's direction. Not an AI strategy, but a strategy that accounts for AI.
Culture. How people actually respond to AI in practice. Adoption resistance, learning appetite, risk tolerance.
Capability. What people can do with the tools. Not IT infrastructure, but human technical capability.
Governance. How the organisation manages AI decisions, risk, ethics, and accountability.
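To make the framework's shape concrete, here is a minimal sketch of the capability map as a data structure. The dimension labels follow the list above; the even twelve-cells-per-dimension split and the one-score-per-cell idea are illustrative assumptions, since the text doesn't specify how the 48 cells divide or how cells are scored.

```python
# A minimal sketch of the 48-cell capability map, under stated assumptions:
# four named dimensions, an assumed even split of 12 cells per dimension,
# and one score slot per cell (None until that cell is assessed).

from dataclasses import dataclass

DIMENSIONS = ("strategy", "culture", "capability", "governance")
CELLS_PER_DIMENSION = 12  # assumption: 48 cells / 4 dimensions

@dataclass(frozen=True)
class Cell:
    dimension: str  # one of DIMENSIONS
    index: int      # position within its dimension, 0..11

def blank_assessment() -> dict[Cell, int | None]:
    """One entry per cell; a cell holds None until it has been assessed."""
    return {
        Cell(d, i): None
        for d in DIMENSIONS
        for i in range(CELLS_PER_DIMENSION)
    }

assert len(blank_assessment()) == 48
```

However the real cells divide, the point of the structure is the same: every cell belongs to exactly one dimension, so a score can always be rolled up to the dimension it sits in.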
Measurable outcomes
Every program begins and ends with the 48-cell assessment. You can see exactly what changed, where it changed, and by how much. If a program doesn't shift the capability map, that's useful information too — it means the intervention wasn't the right one, and we adjust.
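To make "what changed, where it changed, and by how much" concrete, here is a minimal sketch of the begin/end comparison. The tuple keys mirror the structure sketched earlier, and the integer scores and per-dimension averaging are illustrative assumptions, not the actual scoring method.

```python
# A sketch of diffing two assessments cell by cell, then summarising the
# shift per dimension. Cell keys are (dimension, index) tuples; scores are
# illustrative integers.

def capability_shift(
    before: dict[tuple[str, int], int],
    after: dict[tuple[str, int], int],
) -> dict[str, float]:
    """Average score change per dimension across that dimension's cells."""
    deltas_by_dimension: dict[str, list[int]] = {}
    for cell, baseline in before.items():
        dimension = cell[0]
        deltas_by_dimension.setdefault(dimension, []).append(after[cell] - baseline)
    return {d: sum(ds) / len(ds) for d, ds in deltas_by_dimension.items()}

# Example: one dimension moved, one didn't; both results are findings.
before = {("capability", 0): 2, ("capability", 1): 2, ("governance", 0): 3}
after  = {("capability", 0): 4, ("capability", 1): 3, ("governance", 0): 3}
print(capability_shift(before, after))
# {'capability': 1.5, 'governance': 0.0}
```

A flat shift, like the governance row in the example, is exactly the "useful information" described above: it tells you the intervention didn't reach that dimension.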
We don't promise transformation. We promise measurement. The measurement tells you whether transformation happened. That distinction matters more than most organisations realise.