The "10-20-70" rule proves AI success depends on change management, not algorithmic sophistication.
Executives love a good ratio.
Attach numbers to strategy, and suddenly even the messiest initiatives look like they’ve been tamed. That’s part of why the “10-20-70” rule has become gospel in AI circles: ten percent success from the tech, twenty percent from process and data, and a sweeping seventy percent from change management.
It sounds tidy. Maybe too tidy.
The idea is noble enough. It’s a reminder that AI, no matter how sophisticated, won’t fix your org unless your people and workflows adapt along with it. Fair. But here’s the problem: this ratio has become the go-to excuse when things go sideways. AI initiative flops? “Must’ve been that 70%.” Cue the culture consultants, the upskilling workshops, the org redesign slides.
But let’s be real. Sometimes, the AI just sucks.
When 10% Breaks Everything
There’s this underlying assumption baked into 10-20-70: that the “10% tech” part is trivial. Like it’s plumbing. Like you just hook up your model to some pipes and voilà, insights everywhere.
But the tech isn’t solved. Not by a long shot.
Zillow didn’t kill its iBuying program because change management failed. It killed it because the prediction model systematically mispriced homes. Millions were lost not because teams were resistant, but because the AI literally couldn’t predict its way out of a mild market fluctuation.
Rewind to Amazon’s gender-biased recruiting AI. It didn’t fail because HR wasn’t culturally ready. It failed because it ingested historical data soaked in human bias and amplified it. Change all the incentive structures you want—it won’t stop a model from eating bad data and producing worse decisions.
These aren’t people problems. They’re product problems.
A Black Box Nobody Trusts
Here’s another trap: assuming that if the tech is working, people will just naturally line up behind it. As if good results imply good adoption.
Wrong.
AI systems that lack transparency get stuck in what I call the “black box trust loop.” If users can’t see how a model makes decisions—if they can’t test it, flex it, or even understand its rationale—they don’t engage. Or worse, they subvert it quietly.
A financial firm once built a pricing optimization model and bragged about its accuracy. But when actual product managers were asked to use it, they ignored half of its recommendations. Why? The tool never showed them why it reached its conclusions, and it gave them no way to adjust constraints when market conditions shifted.
Cotton-candy analytics. Sweet on the outside, empty in the middle.
Stop Building for the Slide Deck
One of the worst abuses of the 10-20-70 rule is that it lets lazy design off the hook. Decision-makers assume they’ll handle people problems downstream, when in reality, the experience is the change management.
If using your AI tool makes people feel dumber, less in control, or afraid it will make them irrelevant, they’re not going to lean in. They’re going to sabotage, stall, or simply ghost your expensive system.
And frankly, can you blame them?
I once worked with a global retailer that built an elaborate analytics dashboard tracking store temperature patterns, foot traffic, and even restroom occupancy. The pitch: optimize labor allocation based on behavioral data. The reality: store managers copied last week’s schedule every Monday. Not because they were technophobes. But because the system dumped raw data at them and expected them to derive their own strategic insights from scatterplots.
It’s the worst kind of AI theater—built to impress up, not deliver down.
“Change Management” Isn’t a Band-Aid; It’s a Diagnosis
Let’s be clear: organizational transformation is central to AI success. Getting teams to embrace new tools, rework incentives, rebuild workflows—that’s grueling, political, and slow.
But it only works if the thing they’re being asked to change for… is worth it.
So many AI projects die not because of resistance, but because no one bothered to check if the initiative solved a real problem. Or if it introduced more friction than value.
Look at IBM Watson in healthcare. Years of data science, enormous funding, big headlines. But hospitals didn’t need a system suggesting cancer treatments—they needed help scheduling nurses across shifts. Watson may have had incredible tech, but it didn’t meet the moment. No amount of cultural enablement was going to align teams around an irrelevant solution.
Design mismatch isn’t a people fail. It’s a leadership fail.
When Dashboards Lie, and Culture Nods Along
Most organizations don’t have data strategies. They have data fetishes.
Storing customer data for 15 years doesn’t make you “data-driven.” It makes you a digital hoarder. And feeding that hoard into an AI doesn’t alchemize nonsense into gold. It just moves errors into production.
I consulted for a company that had over a dozen “insight” dashboards stitched onto their operations stack. Real-time displays. Forecasting widgets. Machine learning overlays. Looked incredible on a big screen.
Not a single executive used it.
Why? Because the metrics weren’t tied to any decision workflow. They were vanity dashboards—built for show-and-tell, not for action. Worse, people didn’t trust them. Several reports contradicted each other. So middle managers did what they’ve always done: default to gut feel and fight over whose version of truth wins.
This, by the way, is the cultural damage bad AI causes. Not just waste—but eroded trust in future tooling.
The Real Pillars: Not 10-20-70, But...
Let’s kill the ratio.
Here’s a better way to think about AI success that doesn’t collapse complexity into a consulting slide:
- Strategic Relevance: Does the AI solve a real business problem, in the place where pain is actually felt? If not, stop now.
- System Design: Are the model, interface, and experience designed to work with human intelligence, not around it?
- Operational Fit: Does this tool plug into where decisions are actually made? Is the data current, clean, trusted?
- Org Willingness: Are people incentivized (or at least not punished) to adopt it? Does it make their job clearer, better, or faster?
If any of those pillars fail, no cute ratio will save you.
So What Should Leaders Do Differently?
Here’s the part that’s uncomfortable: AI doesn’t fail because executives weren’t inspired enough, or users weren’t trained enough. It fails because most organizations haven’t built the logic, the language, or the patience to redesign how decisions get made.
That’s not a gap you fix with motivational speaking and new titles.
Real AI transformation means:
- Asking harder questions about why the tech exists, not just what it can do
- Investing less in dashboards, more in decision loops
- Architecting systems with humans in the loop—not as an afterthought, but as a core design principle
- Incentivizing uncomfortable truths, not just reporting wins
And when something fails? Don’t reflexively retreat behind the “70% change” line.
Ask: Did we earn the right to change in the first place?
Because change is not the problem. Irrelevance is.
If your AI doesn’t solve something urgent, useful, and human—you don’t have an adoption issue. You have a purpose issue.
And believe me: the smartest people in your company can feel the difference.
This article was sparked by an AI debate. Read the original conversation here

Lumman
AI Solutions & Ops