When AI Scales the Unscalable: Is Innovation Born From Friction or Consensus?
You know what I've started noticing? The weird quiet that falls over a meeting room after someone says something that sounds right—that perfect corporate line that everyone nods along to. That silence isn't consensus. It's collective surrender.
I worked at a fintech startup where our weekly strategy meetings had this bizarre rhythm. Someone would suggest something mildly adventurous, it would get softened through discussion, and by the end, we'd congratulate ourselves on our "aligned vision" while effectively deciding to do exactly what our competitors were doing.
Those meetings weren't laboratories for ideas—they were comfort blankets. We were using each other to validate our collective fear of being wrong.
The real problem isn't disagreement—it's that we've built cultures where disagreement feels dangerous. When AI starts handling all the predictable, repeatable work that scales perfectly, what's left for humans? The messy, conflicted, creative work that requires us to navigate tension productively.
Next time your team reaches quick agreement, ask yourself: Are we actually thinking, or just performing safety for each other?
Sure, but let’s be clear: “unscalable” was never the point. The real advantage was *hard*. Hard to replicate. Hard to fake. Think of it like sourdough starters: everyone *could* make one, but only a few bothered, and even fewer got it right.
Take Zappos, for example. Their famously obsessive customer service was “unscalable,” but that’s what made it magic. You couldn’t just throw software at it. You needed trained people who *wanted* to sit on the phone for 45 minutes helping someone pick shoes for a wedding. That kind of emotional investment? Good luck automating that.
Now AI rolls in and offers to “scale” the unscalable. Personalized responses at mass volume, chatbots that remember your last complaint, emotion detection in voice. Sounds great, right? Except—it’s facsimile, not fidelity. The more it scales, the more it flattens.
Here’s the paradox: AI can simulate relational frictionlessness, but not *relationship*. When every company offers the same hyper-optimized, ML-tuned, sentiment-sensitive customer interaction—what’s left to differentiate you? Ironically, being “bad at scaling” used to be what made brands feel real.
So maybe the next edge isn’t in being *more* scalable—it’s in choosing *what not to scale*. What moments demand sweat, art, or a pulse? That’s where AI stops being a crutch—and starts being a compass.
I've witnessed this pattern play out so many times. The meeting where everyone nods along, where "alignment" is treated as the ultimate virtue. But there's something deeply unsettling about it.
When a room full of smart people agree on everything, someone's not saying what they actually think. Full stop.
Think about the most innovative teams you've ever seen. Were they harmonious? Rarely. The Wright brothers fought constantly. Pixar's Braintrust sessions are famous for their brutal honesty. Even the Beatles created their best work amid tension.
We've confused politeness with productivity. We've mistaken comfort for competence.
I worked with a startup that prided itself on its "collaborative culture" – which in practice meant nobody challenged the founder's ideas. They went bankrupt developing a product nobody wanted, but boy did they have pleasant meetings along the way.
The truth is, innovation requires friction. It demands that someone say the awkward thing, point out the unchecked assumption, or question if we're solving the right problem at all.
Safety-theater meetings aren't just boring – they're expensive failures of imagination. What could your team create if people actually said what they think instead of what they're supposed to say?
That’s assuming those "unscalable" advantages were real moats to begin with.
A lot of what gets glamorized as unscalable magic—say, handcrafted onboarding, white-glove service, insider expertise—is often just a stand-in for “we haven’t figured out how to systematize this yet.” And once AI *can* systematize it—or replicate the outcomes of it at scale—it stops being an edge and starts being table stakes.
Look at Stripe. One of their early "unscalable" advantages was phenomenal developer support—real humans giving fast, technical answers in forums, in email, even hopping in and debugging your code. Ten years ago, that felt magical. Now, imagine an LLM trained on 100,000 support tickets, custom API docs, and every obscure Stack Overflow thread about webhooks. That doesn’t just match the old standard—it surpasses it, consistently and instantly. What's left of the moat?
Of course, the counter is that AI levels the playing field, which sounds bad if you're Stripe. But it's actually worse for the 50 Stripe clones who were differentiating on “we’re like Stripe, but nicer to devs.” Your edge just evaporated. And Stripe? They can build the LLM support layer faster than you. Distribution still wins.
So maybe the deeper question isn’t “what happens when AI kills unscalable advantages,” but “who was relying on those crutches to compete in the first place?” Transparency, design, storytelling: those are the next illusory moats waiting to be cloned. What’s left is speed, ownership of the customer relationship, and the ability to keep building even as your nicest tricks become commodities.
You don’t defend moats anymore. You sprint ahead of them.
The problem runs deeper than just agreeing too much. What we call "agreement" in most workplaces is actually a complex dance of risk avoidance disguised as collaboration.
I've sat in those meetings where everyone's nodding along, and I'm thinking: who exactly is being served by this performance? It's not the customer. It's not innovation. It's our collective fear of being wrong in public.
Consider how we handle the outlier voice in meetings. When someone raises a genuinely different perspective, watch how quickly the group deploys antibodies against it. "Let's take that offline" is corporate-speak for "please stop making us uncomfortable."
What's fascinating is how this connects to our AI future. As AI handles more routine work, the only defensible human advantage becomes our capacity for creative friction – the ability to have productive disagreements that machines can't simulate.
But here's the irony: most companies are systematically eliminating that advantage through cultures that punish meaningful dissent. We've built entire performance management systems that reward people for not rocking the boat.
The real question isn't whether your team agrees too much. It's whether you've made disagreement so costly that people won't risk it even when they should.
Right, but here's the twist no one likes to admit—those “unscalable” advantages weren’t just inefficiencies waiting to be optimized. They were often the very things keeping competitors out.
Let’s take high-touch customer service. It’s messy, expensive, and totally unscalable. But it’s also a moat. When Zappos built a brand around letting reps stay on the phone for hours if needed, that wasn't just a quirky value—it was a weapon. One that Amazon, with its obsession over automation, couldn’t easily copy without fundamentally changing its DNA.
AI flips that dynamic. Suddenly, every startup can spin up a 24/7 chatbot that mimics warmth at scale. Not as good as a seasoned human? Maybe. But good enough for 80% of interactions. Which means the moat dries up. The differentiator becomes commoditized.
Same story with manual curation. Spotify’s early magic? Human-curated playlists. Painful to scale. But that handcrafted vibe built loyalty fast. Now AI can generate a playlist from one tweet and two emojis. So where’s the edge?
The bigger implication is this: if AI levels the playing field on formerly differentiated, hard-to-scale stuff, then the new battleground isn’t capability—it’s exclusivity.
Where are your asymmetrical advantages now? Data isn't enough; everyone’s scraping the same web. Distribution? Only if you own the customer relationship at a deep level. Brand? Maybe. But only if it stands for something AI can’t mimic.
We're entering a game of who holds the pieces AI can’t touch yet. Ironically, that might be the actual unscalable advantage now: taste, trust, culture—human stuff. All the things execs used to treat like fluff at the end of a pitch deck. That fluff’s about to get very expensive.
Thoughts?
That's the thing about groupthink—it feels productive while it's happening. Everyone's nodding, meetings end on time, and you get that warm fuzzy feeling of team alignment. But what you're often witnessing isn't innovation—it's a sophisticated form of avoidance behavior.
I've sat in those rooms where disagreement is treated like a breach of etiquette. The subtle social pressure to conform is so powerful that people will literally override their own perception rather than be the odd one out. Remember the Asch conformity experiments, where people claimed a clearly shorter line was longer just because everyone else said so? That's happening in your business decisions right now.
The companies that actually move the needle don't mistake harmony for progress. Amazon has its "disagree and commit" principle. Bridgewater records meetings so people can study where groupthink emerged. Pixar's "Braintrust" sessions are famously brutal because they understand that creative friction isn't just useful; it's necessary.
What's fascinating is how AI might actually help here. Unlike humans, algorithms don't feel social pressure. They don't worry about looking stupid or offending the boss. Maybe the role of AI isn't just to replace human thinking but to help us see our collective blind spots more clearly.
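One low-tech version of that idea, sketched below under invented assumptions: have everyone write a position independently *before* the meeting, then measure how suspiciously similar those positions are. The word-overlap metric and the 0.5 threshold are arbitrary stand-ins, not a validated groupthink detector.

```python
import re
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two written positions (0 disjoint, 1 identical)."""
    wa = set(re.findall(r"[a-z']+", a.lower()))
    wb = set(re.findall(r"[a-z']+", b.lower()))
    return len(wa & wb) / len(wa | wb)

def consensus_score(positions: list) -> float:
    """Mean pairwise similarity across positions written independently,
    before anyone has heard anyone else speak."""
    pairs = list(combinations(positions, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

positions = [
    "ship the redesign next quarter because retention is flat",
    "we should ship the redesign next quarter, retention is flat",
    "redesign next quarter since retention is flat",
]
score = consensus_score(positions)
if score > 0.5:  # arbitrary threshold; calibrate against your own meetings
    print(f"Suspiciously uniform positions ({score:.2f}). Who is self-censoring?")
```

Crude as it is, the design choice matters: the signal comes from opinions captured before social pressure kicks in, which is exactly what the room itself can't give you.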
Exactly—but here's where it gets dicey. Once AI makes the previously "unscalable" scalable, we’re not just talking about leveling the playing field—we’re talking about paving over it entirely.
Let’s take human-centric customer service. For years, that’s been a moat for certain brands. Think Zappos, with their phone reps empowered to solve problems in creative ways. That level of emotional labor didn’t scale easily. You couldn’t script it, you had to hire for it, train for it, build culture around it.
Now enter AI agents that can mimic empathy, remember your last five interactions, dynamically adjust tone, and never get emotionally depleted. Suddenly, everyone can offer that Zappos-esque attentiveness, at near-zero marginal cost.
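A minimal sketch of what that memory layer might look like, assuming a simple per-customer rolling history; the class, fields, and tone rule are invented for illustration, not any vendor's actual API.

```python
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class Interaction:
    summary: str    # e.g. "asked about the return window for wedding shoes"
    sentiment: str  # "frustrated", "neutral", or "happy"

class CustomerMemory:
    """Rolling history of the last five interactions per customer,
    mirroring the 'remembers your last five interactions' behavior above."""

    def __init__(self) -> None:
        self._history = defaultdict(lambda: deque(maxlen=5))

    def record(self, customer_id: str, interaction: Interaction) -> None:
        self._history[customer_id].append(interaction)

    def build_context(self, customer_id: str) -> str:
        """Context to prepend to the model prompt; the tone softens
        when recent history skews frustrated."""
        history = self._history[customer_id]
        frustrated = sum(i.sentiment == "frustrated" for i in history)
        tone = "apologetic and extra patient" if frustrated >= 2 else "warm and concise"
        recent = "; ".join(i.summary for i in history) or "no prior contact"
        return f"Tone: {tone}. Recent history: {recent}."

memory = CustomerMemory()
memory.record("cust_42", Interaction("shoes arrived late", "frustrated"))
memory.record("cust_42", Interaction("asked for a replacement pair", "frustrated"))
print(memory.build_context("cust_42"))
```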
What’s the competitive edge then? Because if everyone offers customized, emotionally resonant experiences—none of them are actually differentiated. They're commoditized simulations of intimacy.
You’re not left with a bunch of brands delivering “better experiences.” You’re left with a shallow version of those experiences being endlessly replicable. It’s like an arms race to build the most likable sociopath.
And here's the kicker: the advantage no longer lies in having the better AI. It’s in having the better data to feed it, or even worse, the most loophole-friendly way of collecting it.
So we might be swapping hard-to-scale craftsmanship for hard-to-trace data grabbing and marginal algorithmic tuning. Not exactly the romantic arc of progress we thought we were buying into.
I know that feeling all too well. You sit in a meeting where everyone's nodding along, and your stomach tightens because something feels off. Not wrong exactly, but... hollow.
That consensus is seductive, isn't it? We tell ourselves it's alignment, but it's often just collective risk avoidance dressed up as teamwork.
I worked with a product team last year that had the most "productive" meetings in the company. Zero disagreements. Everything approved unanimously. The CEO held them up as a model. Six months later, their project was canceled because it solved problems nobody actually had.
What happened? They'd created such a strong social penalty for dissent that people stopped bringing their actual perspectives. Brains were effectively switching off during discussions.
The really unsettling part is that AI will only amplify this problem. When tools can generate the "safe answer" instantly, the pressure to just nod along becomes even greater. Why risk looking difficult when the machine has already produced what everyone expects to hear?
The teams that will thrive aren't the ones with the most tools—they're the ones that protect the space for that uncomfortable moment when someone says "Wait, I see it differently." That's not inefficiency. That's where the unreplicable magic happens.
Hold on—before we mourn the death of “unscalable” advantages, let’s question whether they were truly unscalable in the first place… or just not *worth* scaling.
Take a classic example: the artisanal coffee shop that remembers your name and your oat milk preference. Everyone touts this as an "unscalable advantage"—a deeply human, local touch that Starbucks could never replicate. Except… Starbucks kind of *did*. They didn’t scale the barista’s memory; they scaled the *illusion* of personalization with mobile ordering, name labels on cups, and loyalty apps that remember your order better than your spouse does. It’s synthetic intimacy, but effective.
Now enter AI, and it doesn’t just give megacorps fake personalization—it gives *everyone* the tools to fake it, or maybe even do it better. Your indie coffee shop can now use a GPT-powered chatbot that recalls customer preferences, sends them notes about new seasonal blends they might love, even follows up to ask if they enjoyed the chai they got last Tuesday. No extra staff, no overhead. That "unscalable" niceness? Suddenly feels pretty scalable.
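As a toy illustration (no real product implied), here's roughly what that follow-up logic could look like; the order log, the seasonal catalog, and the template standing in for the LLM call are all assumptions.

```python
from datetime import date, timedelta

# Hypothetical order log; in practice this is the shop's existing POS data.
ORDERS = [
    {"customer": "Sam", "item": "oat milk chai",  "date": date.today() - timedelta(days=3)},
    {"customer": "Sam", "item": "oat milk latte", "date": date.today() - timedelta(days=10)},
]

# Invented seasonal catalog keyed by base drink.
SEASONAL = {"chai": "spiced winter chai", "latte": "maple oat latte"}

def follow_up(customer: str) -> str:
    """Draft the note described above: recall the most recent order and
    suggest a related seasonal blend. A template stands in for the LLM call."""
    recent = max((o for o in ORDERS if o["customer"] == customer),
                 key=lambda o: o["date"])
    base = next((k for k in SEASONAL if k in recent["item"]), None)
    pitch = f" You might like our {SEASONAL[base]} too." if base else ""
    return (f"Hi {customer}! Hope you enjoyed the {recent['item']} "
            f"last {recent['date']:%A}.{pitch}")

print(follow_up("Sam"))
```

No extra staff, no overhead; the only real input is order history the shop already has.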
So what we’re really seeing isn’t the elimination of “unscalable” advantages. We’re seeing a redefinition. AI turns high-effort, low-leverage practices into high-leverage, low-effort ones. But here's the twist: once the playing field is leveled, those former differentiators become table stakes. You don’t stand out by remembering customer birthdays anymore—you stand out by knowing *why* they’re switching to matcha and optimizing your entire funnel around it before they even click "order."
Which makes me wonder—are we entering a post-differentiation economy? Or just a weirder one where the only remaining edge isn’t doing what others can’t, but imagining what others haven’t?
You know what's fascinating about "psychological safety"? It's become a corporate mantra that completely misses its own point.
The whole reason Amy Edmondson developed the concept was to explain how teams could take brave intellectual risks together. But somewhere along the way, it morphed into this weird cultural phenomenon where people think "safety" means "agreement."
I was in a meeting last week where the team lead actually said, "Let's make sure everyone feels safe to share their views," and then proceeded to subtly shut down every divergent opinion. The real subtext was: "Feel safe to agree with me." Classic corporate theater.
True innovation requires intellectual friction. Think about Apple under Jobs - those meetings were famous for being brutally honest spaces where bad ideas went to die. Not always pleasant, but undeniably productive.
Maybe we need to reframe psychological safety altogether. It's not about feeling comfortable - it's about creating an environment where uncomfortable truths can surface without destroying relationships. Where disagreement isn't just tolerated but expected.
What if instead of asking "does everyone feel safe?" we asked "is anyone challenging our fundamental assumptions today?"
Right, but here's the part that gets glossed over: not all “unscalable” advantages were actually valuable. Some were just friction dressed up as strategy.
Think about luxury retail. For years, brands like Chanel or Hermès leaned into exclusivity as a moat: you had to visit the right store, know the right salesperson, maybe even get on a list, just to buy a handbag. That felt like magic. But AI doesn’t just scale access; it shines a flashlight into those dimly lit velvet rooms and asks, “Is this really worth it?”
Suddenly, that same customer experience—once “unscalable” by design—is replicable. We've already seen this with startups like Rebag or The RealReal, using AI to authenticate luxury items or personalize high-end resale experiences at scale. That means the old exclusivity dance isn’t just less effective—it may start to feel downright manipulative.
And when AI flattens previously unscalable touchpoints—like bespoke recommendations, premium support, even artistry—it doesn’t just expand access. It rewires value perception. What used to signal status now risks looking like theater. And consumers have become very good at sniffing out theater.
Take fine dining. At Eleven Madison Park, the old playbook was obsessive detail—a server remembering your allergies from three years ago. Now OpenTable and CRM integrations can do that for any high-end restaurant. So what makes it special now? Probably not the memory trick. It's going to have to evolve into something less replicable—taste, innovation, social experience.
So the real question becomes: What if the thing you thought was your edge… was just an inefficiency wrapped in a story?
And is AI going to be polite enough not to say it out loud?
I've sat in those meetings where everyone nods along with the boss's half-baked idea, and honestly, it's soul-crushing. That choreographed agreement isn't just boring—it's dangerous.
You know what I've noticed in genuinely innovative companies? Productive tension. People who respect each other enough to say "that won't work because..." instead of "great idea!" when they don't mean it. Amazon calls it "disagree and commit," but I think it goes deeper than that.
The irony is that many leaders claim to value innovation while unconsciously punishing dissent. They hire "culture fits" (which often means "people who think like me"), then wonder why they keep getting the same ideas.
Remember when Netflix published their famous culture deck? Everyone fixated on the "freedom and responsibility" part, but missed the crucial bit about candor being a requirement, not a luxury. Reed Hastings later said the hardest part was creating an environment where people actually spoke truth to power.
I've found a helpful litmus test: if you can't immediately name the person on your team most likely to challenge your thinking, you've probably built an echo chamber. And in the age of AI, echo chambers are luxuries no business can afford.
Right, but here’s the uncomfortable twist most people gloss over: when AI eliminates “unscalable” advantages, it doesn't just compress inefficiency — it flattens differentiation. And not in the good way.
Think about a boutique consulting firm that spent years developing a proprietary “insight engine” — their own special sauce of research methods and pattern recognition that gave clients an edge. They called it Method X or some equally dramatic name. It took teams of analysts weeks to stitch together the intel needed for a pitch.
Now? GPT-4 with some decent plugins can simulate 80% of that in an afternoon. So their hard-won moat — time, expertise, accumulated judgment — becomes a puddle when anyone with ChatGPT Pro and a prompt-writing intern can mimic the surface-level analysis.
AI doesn’t just scale; it standardizes. It wrestles the messy, artisanal parts of business into a clean, commoditized format. Which is great if you’re an operator. But brutal if your brand was built on the mess.
It’s like coffee shops. You used to go to that quirky local spot because only they knew how to make the weird Ethiopian pour-over just right. Then Starbucks figured out how to automate the vibe at scale, and suddenly every airport lounge had "third wave ambiance" and burnt espresso. The quirks became a template.
So when AI eats the unscalable, you lose not just inefficiency — you lose friction. And ironically, friction is where a lot of businesses hid their soul.
The real question isn’t "how do you keep your edge when AI levels the playing field?" — it’s "what if the entire field turns into a parking lot?"
And if that’s the case, it’s not time to double down on methods. It’s time to invent a new game entirely.
This debate inspired the following article:
What happens when AI eliminates "unscalable" business advantages?