Data Lakes or Digital Wastelands? The Battle Over AI's True Implementation Bottleneck

Emotional Intelligence

People love to brag about the size of their data lakes, don't they? "We have petabytes of customer information!" Great, and I have a garage full of engine parts, but that doesn't make me a mechanic.

The reality is that most organizations are data hoarders, not data scientists. They've spent years collecting digital dust while convincing themselves they're building something valuable. But data without the capability to extract meaning is just expensive storage.

I worked with a retail chain that proudly showed me their "advanced analytics infrastructure"—essentially a massive repository of transaction data they'd been collecting for seven years. When I asked what insights they'd extracted, they showed me basic sales reports they could have generated with an Excel spreadsheet. Seven years of potential gold, refined into a penny's worth of insight.

The companies actually succeeding with AI aren't necessarily the ones with the most data—they're the ones who've built the organizational muscles to ask the right questions, refine the right data, and actually change how they operate based on what they learn.

The gap isn't technological; it's cultural. Engineers can build you a perfect refinery, but if nobody knows how to operate it or what to do with the refined product, you're just burning money to look sophisticated.

Challenger

Sure, the 10-20-70 rule—10% tech, 20% data, 70% people and process—is a helpful wake-up call. It reminds us AI isn’t magic until it moves through the messy trenches of human behavior. But here’s what bugs me: too often, it’s used as a crutch to excuse sloppy implementation.

Yes, change management is critical—but the implication that the tech only plays a 10% role makes it sound like choosing or designing the right model is almost irrelevant. That’s misleading.

Take something like an AI model used for pricing optimization. If you're using a rigid pre-trained model with no visibility into how it's making decisions, you can throw as many change management consultants at the problem as you want—users still won’t trust it, and adoption will stall. On the other hand, if the system lets users simulate different scenarios, visualize outcomes, or tweak assumptions, you’ve suddenly made the tech part *far more powerful* at enabling that 70% human buy-in.
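To make that concrete, here's a minimal sketch in Python of the kind of what-if tooling I mean. Everything in it is illustrative: the scenario fields, the demand numbers, and the stand-in `predict_units` function are placeholders for whatever pricing model a team actually runs.

```python
# Hypothetical sketch of a "what-if" layer on top of a pricing model, so users
# can tweak assumptions and compare predicted outcomes instead of trusting a black box.
from dataclasses import dataclass

@dataclass
class Scenario:
    price: float             # proposed unit price
    competitor_price: float  # assumed competitor price
    promo_discount: float    # promotional discount, 0.0 to 1.0

def predict_units(s: Scenario) -> float:
    """Stand-in demand model; in practice this wraps the trained model's predict()."""
    base_demand = 1_000
    price_effect = -40 * (s.price - s.competitor_price)  # lose volume when priced above competitors
    promo_effect = 600 * s.promo_discount                # promotions lift demand
    return max(base_demand + price_effect + promo_effect, 0.0)

def simulate(scenarios: list[Scenario]) -> None:
    """Line scenarios up side by side so the tradeoffs are visible."""
    for s in scenarios:
        units = predict_units(s)
        revenue = units * s.price * (1 - s.promo_discount)
        print(f"price={s.price:6.2f}  discount={s.promo_discount:4.0%}  "
              f"units={units:7.0f}  revenue={revenue:10.2f}")

simulate([
    Scenario(price=19.99, competitor_price=21.00, promo_discount=0.0),
    Scenario(price=22.99, competitor_price=21.00, promo_discount=0.1),
    Scenario(price=24.99, competitor_price=21.00, promo_discount=0.2),
])
```

The point isn't the toy model; it's that users can see how changing an assumption changes the answer, which is where trust actually comes from.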

The mistake is drawing hard lines between those three buckets—tech, process, people—as if they don’t bleed into each other. Design a crappy interface or a black-box model, and you’re not just hurting the 10%—you’re making people feel dumb or out of control, which tanks the 70% too.

Saying it’s “70% people” sounds noble, but it risks letting lazy product thinking off the hook. The best change management starts in the design of the system itself. Don’t treat it like a band-aid you slap on later.

Emotional Intelligence

You know, I've always been amused by how quickly business folks embraced the "data is the new oil" metaphor without thinking through what it actually means. Oil isn't valuable as a raw material - it's worthless until you build refineries, distribution networks, and engines designed to consume it.

Most companies are like someone who struck oil and then just... stopped. "Look at all my oil!" Meanwhile, they're knee-deep in crude with no idea how to turn it into something useful.

I worked with a retail chain that boasted about their "comprehensive customer database" spanning 15 years. When I asked what insights they'd extracted, they showed me the same basic purchase frequency reports they'd been running since 2010. All that rich behavioral data just sitting there while they made decisions based on the executive team's gut feelings.

This is why the 10-20-70 rule is so spot-on. You can spend millions on algorithmic sophistication (the 10%) and data infrastructure (the 20%), but if you don't invest in changing how people actually make decisions (the 70%), you're basically building a Formula 1 car for someone who doesn't know how to drive.

The hard truth is that data literacy isn't just about teaching people to read charts. It requires rewiring organizational reflexes that took decades to develop. And that's a much messier, more human problem than choosing the right neural network architecture.

Challenger

Sure, I'll push back on that. The "10-20-70" rule—10% tech, 20% data, 70% people and processes—sounds wise on a slide deck, but oversimplifies how AI *actually* fails inside organizations.

It’s not that change management is overrated—it’s that treating change management as the *primary* lever oversells how much internal alignment can compensate for bad AI decisions, weak models, or ambiguous objective functions.

Take Zillow’s iBuying meltdown. That wasn’t a change-management miss. The company had data pipelines, internal infrastructure, buy-in from leadership. But the core issue? Their prediction model couldn’t account for market volatility or feedback loops from their own buying habits. In other words: the 10%—the "tech" piece—broke the system, not the humans.

Or look at Amazon's failed AI recruiting tool. It was scrapped after showing bias against female applicants. The model had absorbed patterns from historical (i.e. biased) hiring data. They could’ve managed change until the cows came home. Wouldn’t have fixed the algorithm.

The truth is more uncomfortable: building production-grade AI is *hard*. Not academically hard, and not something you can knock out over a weekend. It’s “how do we encode judgment, tradeoffs, edge cases, and shifting goals into a system that still makes sense in two quarters” hard. If your AI doesn’t work yet, your biggest blocker might not be middle manager resistance. It might be that no one's monitoring drift, or the reward function is misaligned, or your model is just dumb.
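Since "no one's monitoring drift" is exactly the kind of failure that hides behind the 70% story, here's a minimal sketch of what even a basic drift check looks like. It's an illustration built on assumptions: the Population Stability Index is just one common choice of metric, the 0.2 threshold is a rule of thumb, and the home-price numbers are invented to echo the Zillow example.

```python
# Minimal drift check: compare a feature's live distribution against the
# training distribution using the Population Stability Index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0] = min(cuts[0], actual.min()) - 1e-9    # widen edges so live values outside
    cuts[-1] = max(cuts[-1], actual.max()) + 1e-9  # the training range still get counted
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)           # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_prices = rng.normal(250_000, 50_000, 10_000)  # what the model saw in training
live_prices = rng.normal(290_000, 80_000, 2_000)    # what the market looks like now

score = psi(train_prices, live_prices)
print(f"PSI = {score:.3f}", "-> investigate drift" if score > 0.2 else "-> stable")
# The 0.2 alert threshold is a common rule of thumb, not a law.
```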

All to say: yes, AI fails when companies don’t adapt. But it also fails when companies mistake shallow adoption for deep capability. Change management can’t save a poorly scoped AI initiative.

Emotional Intelligence

I've seen this movie before. Some executive gets excited about the latest "data lake" project, throws millions at it, then wonders why nobody's using the expensive analytics platform that supposedly holds the key to the company's future.

The problem isn't technological—it's human. We're obsessed with accumulating data like digital hoarders, treating quantity as a virtue while completely neglecting the hard work of making it useful.

Look at healthcare. Hospitals are drowning in patient data but still can't connect the dots between departments. I worked with one system that had 27 different ways to record a patient's gender across their databases. Twenty-seven! How are you supposed to build meaningful AI on that foundation?
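The remediation for that kind of mess is unglamorous, which is exactly why it gets skipped. Here's a rough sketch of what it looks like in practice, with made-up variant spellings standing in for whatever those 27 encodings really were:

```python
# Illustrative normalization step for one messy field. The variants below are
# hypothetical; a real mapping comes from profiling the actual source systems.
CANONICAL_GENDER = {
    "m": "male", "male": "male", "1": "male",
    "f": "female", "female": "female", "2": "female",
    "u": "unknown", "unk": "unknown", "": "unknown", "n/a": "unknown",
}

def normalize_gender(raw) -> str:
    key = str(raw).strip().lower()
    # Surface unmapped values for human review instead of silently guessing.
    return CANONICAL_GENDER.get(key, "needs_review")

records = ["M", "Female", " f ", "2", None, "NB"]
print([normalize_gender(r) for r in records])
# ['male', 'female', 'female', 'female', 'needs_review', 'needs_review']
```

The interesting part is the `needs_review` bucket: the values nobody anticipated are the ones that tell you how shaky the foundation really is.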

This is why the 10-20-70 rule is so brutally accurate. Companies pour resources into algorithms (10%) and infrastructure (20%), then completely underinvest in the organizational change (70%) needed to actually use these tools effectively.

It reminds me of those people who buy expensive gym equipment that ends up as an overpriced clothes rack. The treadmill works perfectly—it's the behavior change that never materialized.

What's the point of sophisticated predictive models if managers still make decisions based on gut feeling and office politics? Or if employees hide information because they're afraid automation will take their jobs? These aren't technical problems—they're cultural ones.

Challenger

Sure, the "10-20-70" rule — 10% tech, 20% data, 70% change management — sounds catchy. And yes, it nails something a lot of AI projects ignore: humans are the bottleneck, not the model. But I think we’re underestimating something here.

It’s not just about managing change. It’s about *deserving* the change.

Here’s the problem: leaders love to throw “change management” at AI failures like it’s a Band-Aid. The project flops, they blame poor adoption. But often what they’re really saying is, “We built something no one needed, shoved it into a process that wasn’t broken, and we’re shocked people didn’t want to change for it.”

Look at IBM’s Watson in healthcare. Impressive algorithms under the hood, teams of consultants, solid intent. But hospitals didn’t need an algorithm that suggested treatments — they needed help with scheduling nurses. The change management wasn’t the issue; the product-market fit was.

Change management only works when the change is *worth managing*.

It’s like blaming poor choreography when your dance partner didn’t want to be at the party in the first place.

So sure, get your 70% change stuff in order — training, incentives, workflows. But if the model does something no one trusts, wants, or understands, no cultural initiative is going to save it. The sophistication of the tech *does* matter — not in terms of raw horsepower, but in how well it solves the real problem.

That’s the missing 0% in your 10-20-70. Strategic relevance. Maybe it deserves 30%.

Emotional Intelligence

You know what's fascinating about the "data is the new oil" metaphor? We've collectively agreed it's brilliant without noticing how damning it actually is.

Oil sitting in the ground is worthless. It needs massive infrastructure, specialized expertise, and incredibly complex refinement processes before it delivers value. Yet somehow companies think having petabytes of unstructured data is itself an achievement.

I was consulting with a retail chain last year that proudly showed me their "advanced analytics infrastructure." What I actually saw was a glorified data swamp where they were capturing everything—from store temperatures to employee bathroom breaks—with no coherent strategy for using it. They had invested millions in storage and capture systems but pennies on the actual transformation work.

The uncomfortable truth is that most organizations have confused data hoarding with data strategy. They've built the equivalent of tank farms full of crude with no refinery in sight. And when their AI initiatives fail, they blame the algorithms rather than their inability to convert raw data into usable insights.

Maybe the better metaphor isn't oil at all. Maybe data is more like potential energy—it only becomes kinetic when you actually do something with it. Sitting on a mountain of unused data doesn't make you data-driven any more than sitting on a mountain makes you a climber.

Challenger

Sure, I’ll push back a bit.

The 10-20-70 rule — 10% tech, 20% data, 70% process and org change — sounds tidy. It’s the kind of thing consultants love because it packages complexity into something executive-friendly. But let’s not oversimplify.

Is change management important? Absolutely. But let’s not pretend you can duct-tape mediocre tech onto a legacy system and “change manage” your way to breakthrough AI ROI. At some point, the underlying model, the data infrastructure, and the capacity to generate accurate, actionable predictions *do* matter.

Take Amazon. Their recommendation engine didn’t scale because middle managers were good at running workshops on “AI readiness.” It scaled because the models were absurdly good, the data pipelines were real-time, and the infra was engineered to support iterative experimentation at terrifying speed. Sure, the org was aligned — but the tech firepower was non-negotiable.

Or flip it: look at big banks trying to “transform” using AI. They bring in the glossy strategy decks, run Agile ceremonies, appoint a Chief AI Officer… and then try to bolt some off-the-shelf NLP model onto 30-year-old COBOL systems. No surprise the results are underwhelming. There’s only so much you can do with motivational speaking and Jira boards if your data is garbage and your inference time is measured in geological eras.

So yeah, change management isn’t optional. But it’s not sufficient either. The whole equation only works when all three levers — tech, data, and human systems — aren’t just present, but sharp. We talk a lot about the “last mile” of AI, which is usually interpreted as deployment, adoption, change. But if the first mile — data fidelity, model quality — is broken, the last mile doesn’t matter. You’re just delivering garbage, more efficiently.

Emotional Intelligence

You're absolutely right about companies bragging about data lakes while drowning in them. It reminds me of the gold rush mentality—everyone's staking claims on massive data territories without having the tools to mine them effectively.

I think there's this widespread corporate delusion that simply accumulating data creates value. But having terabytes of unrefined information is like owning an oil field without drilling equipment—impressive on paper, utterly useless in practice.

What's particularly frustrating is watching executives tout "data-driven decision making" while their teams are actually drowning in dashboards nobody understands. I was consulting with a retail chain recently where they had this elaborate BI system that literally no one used because it answered questions nobody was asking.

The 10-20-70 rule hits this perfectly—you can spend lavishly on AI algorithms and data infrastructure (the 10-20%), but without rethinking how people actually work and make decisions (the 70%), you're just building digital monuments to wishful thinking.

Maybe we need to stop asking "how much data do we have?" and start asking "what specific decisions could we make better with the right insight?" That shift alone would transform most data initiatives from vanity projects into actual business value.

Challenger

Sure, the 10-20-70 rule—that 10% of AI success comes from the tech, 20% from data, and 70% from people and process change—has become the gospel in AI transformation circles. But here's the problem: it lets a lot of bad tech off the hook.

We act like if an AI project fails, it must be the change management. People didn’t embrace it. Leaders didn’t align. Teams didn’t train. But sometimes, honestly? The model just sucks.

Take Zillow's home price prediction debacle. They had plenty of data, and presumably some serious internal alignment. But the model badly mispriced homes, and they had to shut down the whole house-buying arm. Was that a change management failure? Or was it a "you overtrusted a half-baked prediction engine" problem?

There’s a danger in leaning too hard into the 70% “people and process” narrative. It obscures accountability for building solid AI systems in the first place. If the model can’t reliably perform in the wild, no amount of process reengineering will save you. It's like blaming the pit crew because the car exploded on lap three—sure, there’s always something they could tweak, but maybe the engine wasn’t race-ready.

Of course, change management matters. But let’s not pretend clean data and a well-tuned model are just 30% of the effort, with that 30% quietly assumed to be “good enough” from day one. That’s how weak tech spins its way into big-budget implementations that fall flat.

Emotional Intelligence

That's exactly right. We've mistaken hoarding for strategy. Companies have spent a decade accumulating these massive data reservoirs while telling shareholders they're "data-driven" - yet most can't extract meaningful value from what they've collected.

It's like watching someone buy an expensive gym membership, high-end workout gear, and a smart fitness watch, only to use the equipment as clothing racks. The tools aren't the achievement.

What's fascinating is how this parallels previous technological shifts. Remember when companies rushed to build websites in the 90s without any clear idea why? Or the social media panic of the 2010s where brands needed Facebook pages before they knew what to post?

The uncomfortable truth is that data infrastructure without organizational readiness is just expensive digital furniture. I've seen companies invest millions in data platforms while their analysts still manually compile Excel reports because nobody trusts the new systems.

This is why that 10-20-70 rule is so crucial. When 70% of success depends on organizational adaptation, we need to stop pretending technological sophistication alone will save us. The most elegant algorithm in the world can't overcome a culture that doesn't know how to use its insights.

Challenger

Sure, the 10-20-70 rule—10% tech, 20% data, 70% people—is catchy. And it makes a fair point: AI doesn’t magically fix dysfunction. But here’s where I think the rule glosses over something important.

The assumption baked into it is that the tech is a solved problem. That 10%? Just plug in the model and go. But in practice, I’ve seen that 10% balloon into a bottleneck when companies underestimate how far from model-ready their data actually is. Take predictive maintenance. The algorithm itself might be off-the-shelf, even commodity at this point. But if you’re dealing with sensors that fail silently, logs that are 30% gaps, or labels that no human could agree on, then suddenly the model is guessing—and you're blaming the trainers instead of the sensors.

That’s not just a people problem. That’s trash in, trash optics out.
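And that kind of rot is measurable before anyone trains anything. A quick sketch of the audit step, with hypothetical sensor columns and a simulated silent failure:

```python
# Quick data-quality audit before any predictive-maintenance modeling:
# how much of each sensor's log is actually missing? Column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
log = pd.DataFrame({
    "vibration": rng.normal(0.5, 0.1, 1_000),
    "temperature": rng.normal(70, 5, 1_000),
})
# Simulate a sensor that fails silently roughly 30% of the time.
log.loc[rng.random(1_000) < 0.3, "temperature"] = np.nan

gap_rate = log.isna().mean().sort_values(ascending=False)
print(gap_rate)  # fraction of missing readings per sensor
print("Too sparse to train on:", list(gap_rate[gap_rate > 0.2].index))
```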

So yes, aligning people and processes is where most AI projects fall flat. But let’s not pretend the tech piece is trivial. It’s often treated like piping in electricity—just connect it and everything lights up. Reality? It's more like wiring a Victorian mansion: half the blueprints are missing, and you’re praying the fuse box doesn’t ignite.

If anything, I think the whole 10-20-70 simplicity makes AI sound too neat. It lulls execs into thinking the hardest part is “culture.” Sometimes, the hardest part is defining the damn problem in a way that a model can even learn from. That’s neither soft skills nor code—it’s epistemology with a wrench.

Emotional Intelligence

Exactly! Most companies are like someone who inherited an oil field and just keeps pointing at it saying, "Look how much oil I have!" while having absolutely no idea how to extract, refine, or use any of it.

I've seen this firsthand with clients who proudly showcase their "enterprise data strategy" presentations. Slide after slide of architectural diagrams, buzzwords, and grand visions. But when you ask, "What business problem are you actually solving?" the room goes awkwardly quiet.

It's also why so many data science teams end up frustrated and demoralized. They're hired with promises of building revolutionary AI models but spend 90% of their time just trying to find clean data. One data scientist I know compared it to "being hired as a chef but spending all day trying to figure out if what's in the fridge is even food."

The truth is, meaningful AI applications don't start with algorithms—they start with problems worth solving and the organizational will to change how things work. The technology is rarely the limiting factor anymore. The limitation is our institutional ability to reimagine processes, retrain people, and rebuild systems around what the AI makes possible.

Which, of course, is exactly what the 10-20-70 rule is telling us. But acknowledging that isn't nearly as sexy as announcing your latest machine learning initiative, is it?

Challenger

Sure, but let’s be honest—for a concept that gets waved around in every AI strategy deck, “10-20-70” has become a bit of a crutch. Ten percent tech, twenty percent data, seventy percent change management. Cue nods of agreement. But here’s the problem: the formula glosses over the messier truth.

Because “change management” isn’t a bucket you can just pour budget into and expect transformation. It’s not a line item. It’s politics, incentives, habits, power structures—the human malware that keeps even great tech from taking root. And most companies don’t actually want to confront that.

Take CRM systems. Technically simple. But the graveyard of failed CRM rollouts is deep and wide because sales teams keep working in spreadsheets, fearing surveillance disguised as “visibility.” That’s not a change management failure—it’s a trust issue. And AI doesn’t magically fix that.

Even worse, the 10-20-70 ratio implies a kind of linearity—as if the tech is solved and change is merely the last mile. In reality, the tech and the change are entangled. The algorithm you choose affects who it displaces. The interface shapes whether humans feel like collaborators or babysitters. If your fraud detection model flags 20% of legit transactions, your ops team will ignore alerts—not because change wasn’t “managed,” but because the model sucked in context.
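That fraud example is worth doing the arithmetic on, because the numbers are brutal. Here's a back-of-envelope sketch; the base rate and recall are assumptions added purely for illustration:

```python
# Back-of-envelope check on the fraud example: if the model flags 20% of
# legitimate transactions, what does the analyst's alert queue look like?
# All rates below are illustrative assumptions, not benchmarks.
fraud_rate = 0.005          # 0.5% of transactions are actually fraudulent
recall = 0.90               # model catches 90% of real fraud
false_positive_rate = 0.20  # and flags 20% of legitimate transactions

flagged_fraud = fraud_rate * recall
flagged_legit = (1 - fraud_rate) * false_positive_rate
precision = flagged_fraud / (flagged_fraud + flagged_legit)

print(f"Share of transactions flagged: {flagged_fraud + flagged_legit:.1%}")
print(f"Chance a given alert is real fraud: {precision:.1%}")
# Roughly 2 alerts in 100 are real; the ops team learns to ignore the queue.
```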

So yes, change management is crucial. But framing it as 70% of the problem risks turning it into a management placebo—something you talk about when you don’t want to deal with the root cause. Sometimes the tech just isn’t ready. Or the data sucks. Or the entire business model needs rewiring.

Instead of clinging to neat little ratios, maybe we need to admit: AI success is an organizational mirror. If your culture can’t metabolize uncertainty, can’t incentivize learning, can’t distribute decision-making—you can throw all the “70% change” consultants you want at it. Nothing changes.

AI isn’t just a tool. It’s a test.