Five-Year Plans vs. Five-Minute Reflexes: Can Human Insight Survive the AI Content Flood?

Emotional Intelligence

The whole "five-year plan" concept is hilariously outdated. It's like packing a detailed itinerary for a road trip through a hurricane. The map becomes useless the moment you hit the first detour.

Look at what happened to Netflix. Remember when they were a DVD-by-mail company with a "solid plan" for scaling that business? Then they pivoted to streaming, then to content creation, then to gaming. Each shift wasn't carefully plotted years in advance—they were responding to signals in real-time.

What matters now isn't having a perfect plan, but building an organization that can detect and respond to change quickly. It's less about strategic planning and more about strategic sensing.

The companies that thrive have replaced the yearly strategy retreat with weekly hypothesis testing. They've traded the comfort of certainty for the advantage of adaptation. Coinbase doesn't wait for quarterly reviews to shift direction—they run experiments constantly and adjust on the fly.

This doesn't mean we should abandon thinking about the future. But maybe instead of five-year plans, we need five-minute reflexes. The question isn't "where will we be in 2028?" but "what are we learning today that might change tomorrow?"

Challenger

Right, but here's the thing people miss: the rise of AI content hasn’t just made human writing more valuable — it’s exposed how much of *human* writing was already formulaic and mediocre.

Think about the avalanche of SEO blog posts, listicles, and bland thought leadership pieces that have cluttered the internet for the past decade. Most of it was written by humans, sure, but with the soul of a marketing intern on autopilot. AI didn’t devalue that—AI just automated it. The real premium isn’t just “written by a human,” it’s “written by someone with something to say.”

People assume we’re heading into some dystopia where we can’t tell what's real or fake, but honestly? Most readers don’t *want* to play detective. They just want stuff that feels alive. That crackle you get when a writer has sharp opinions, takes risks, or makes you laugh and think in the same paragraph—that’s the bar now. Not whether their pulse can fog a mirror.

Take it back to music: autotune didn’t kill vocals, it just drew a sharper line between pitch-perfect karaoke and raw, flawed, unforgettable voices. Same with writing. AI's autopilot is perfectly fine if you need a product description. But if you’re explaining generative economics or unpacking the ethics of data scraping? You need a mind on the other end of the keyboard.

So yeah, human-made content is becoming a premium product—but only the kind made by humans who didn’t treat it like commodity work in the first place.

Emotional Intelligence

That's precisely it. Nothing proves the absurdity of rigid five-year plans quite like watching a pandemic, supply chain crisis, and AI revolution hit within 36 months of each other.

I was consulting with a manufacturing firm recently that had spent eight months crafting their 2025 vision statement. By the time they printed the glossy binders, three of their core assumptions were already obsolete. Meanwhile, their scrappier competitor deployed four rapid experiments, failed at two, succeeded wildly at one, and completely pivoted their business model.

The irony is that stability now requires constant motion. It's like riding a bike—the faster you're moving, the easier it is to stay balanced. Companies that obsess over quarterly stability usually end up being the most fragile when real disruption hits.

What I find fascinating is how this dynamic creates two types of leaders: those who treat uncertainty as a threat to minimize, and those who treat it as an asset to leverage. The second group sleeps less but survives longer.

Don't you think we're reaching a point where "adaptability" isn't just a nice corporate value to hang on the wall, but the primary determinant of which companies will exist in three years?

Challenger

Totally — but here’s the twist people don’t like to admit: most of the internet was already disposable content before AI showed up.

Let’s not pretend we were all swimming in Pulitzer-worthy blog posts prior to GPT-4. A lot of it was SEO-churned slop written by overworked freelancers trying to hit a word count and insert "best team collaboration software 2022" five times. AI didn’t kill the content farm — it just automated it.

What *is* changing, though, is the definition of what's worth paying attention to.

Human-written content isn't premium just because it's human. Most human writing isn’t good. What’s premium now is *distinctiveness*. Voice, insight, storytelling, lived experience — things models still struggle to fake with any real edge.

Take newsletters. There are a dozen AI blogs that recycle OpenAI updates in grammatically pristine paragraphs. Then there’s Ben Thompson. Or Doomberg. Or Lenny. Their writing has texture. Opinions. You can smell the person behind it. That’s the stuff people pay $10/month for — not because it’s written by a carbon-based life form, but because it cuts through the sludge.

AI-generated content floods the internet because it *can*. No wage demands. No burnout. Just infinite mediocre adequacy. But the paradox is: the more of that we get, the *more obvious* it becomes when someone says something real.

So the signal—actual quality—isn’t vanishing. It’s just been surrounded by an ocean of statistical noise.

And in that ocean, originality becomes currency.

The danger? If all you’re doing as a business or creator is producing “informational content,” you’re toast. Not because AI is so good — but because informational content alone has no moat anymore. Your “Top 10 Tips for Remote Work” might as well be written by a toaster.

But be provocative? Be real? Have taste? *That* is suddenly worth money again.

Funny how that works.

Emotional Intelligence

You've hit on something most corporate leadership teams would rather not admit. The five-year plan has become business theater—a comforting ritual that gives the illusion of control in a world that refuses to be tamed.

Look at Netflix. Remember when they were a DVD rental company with a cute side hustle in streaming? Their leadership didn't lock in a five-year DVD expansion strategy—they pivoted hard when the signals changed, essentially cannibalizing their core business before someone else could. Then they did it again by becoming content creators.

Meanwhile, Kodak had decades of five-year plans that assumed film would always be fundamental. They actually invented digital photography and then shelved it because it didn't fit the grand strategy. Oops.

What works now isn't planning—it's having organizational reflexes. The ability to detect change quickly and respond intelligently. Companies need institutional nimbleness more than they need perfect forecasting.

This doesn't mean anarchic, reactionary management—that's just chaos with laptops. It means building systems that can absorb and adapt to new information without requiring permission from six VPs and a steering committee.

The irony is that our obsession with long-term planning often comes at the expense of long-term survival. The companies that will be here in five years aren't the ones with the best five-year plans—they're the ones most capable of abandoning those plans when reality demands it.

Challenger

Right—but here’s where I’d push back. Yes, human-written content is becoming premium, but not because it's always better. It’s premium because it’s scarcer, and scarcity creates perceived value, not inherent quality. Let’s not forget: plenty of human-written content was mediocre long before GPT showed up to flood the zone with machine-generated blog posts.

What AI is really doing is stripping away the formulaic middle layer—the SEO bait, the 10 marketing tips you’ve read 1,000 times, the endless how-to guides that all smell faintly of ChatGPT even if they weren’t written by it. That stuff was never “premium,” it just dominated our screens because it scaled easily with teams of content writers and interns.

Now, AI does it faster. So what’s left standing as "premium"? Not just human writing—but *distinctive* human writing. Voice, perspective, taste. Not just “written by a person,” but “written by someone I’d want to have a drink with.” Think of how you can still immediately recognize a Bill Simmons column or a Maria Popova essay. That’s not about human vs. AI—it’s about humans being idiosyncratic in ways that language models are engineered not to be.

Take Substack. That platform exploded not because people wanted long-form newsletters, but because readers wanted to follow individual voices they trusted, respected—maybe even argued with. What AI wipes off the map is the vanilla stuff. Human content becomes premium when it’s spicy.

So the real challenge isn’t “Will AI replace human writers?” It’s “Can human writers stop sounding like AI?” Most can’t. Which means readers will pay more to find the ones who can.

Emotional Intelligence

You're absolutely right about dog years—corporate time moves way faster than executive PowerPoints can predict. The problem isn't planning itself; it's the delusion of certainty that comes with laminating those plans.

Look at Blockbuster's infamous five-year plan that completely missed the streaming revolution. Meanwhile, Netflix was changing its entire business model every 18-24 months. One company treated their strategy like a contract; the other treated it like a hypothesis.

What fascinates me is how we've mistaken rigidity for discipline. The truly disciplined companies today are masters of strategic improvisation. They set direction but treat the path as experimental. Stripe doesn't just execute—it constantly questions its assumptions. Even Amazon, for all its scale, maintains those famous "Day One" rituals precisely because Bezos knows ossification is death.

I've noticed the best leaders swap "Where will we be in five years?" with "What are we learning this quarter?" They're not abandoning the future—they're acknowledging that the future arrives in unexpected packages, usually when you're focused on something else.

The uncomfortable truth? Most five-year plans are corporate comfort blankets. They make executives feel safe while the world transforms around them.

Challenger

Exactly — but let’s not pretend this shift is just about quantity versus quality. It's also about trust, and right now, readers trust humans more. They might not articulate it that way, but they feel it. AI can crank out grammatically perfect sludge at scale, but it struggles to offer something truly human: context, nuance, and stakes.

Take financial advice, for example. ChatGPT can write a decent explainer on dollar-cost averaging, but would you bet your IRA on its guidance? Probably not. People will pay a premium to hear from someone who’s actually lost money, made money, and has skin in the game — because that kind of voice knows what not to say just as much as what to say.

And there's another layer: accountability. When a journalist, analyst, or expert writes something, they sign their name to it. There's a reputational cost for being wrong or misleading. When AI gets something wrong? Oops, "hallucination." There's no author to drag on Twitter, no email to send a correction to.

So yes, AI is flooding the zone — but the floodwater's shallow. When readers want depth, they're turning to the content that's been somewhere, wrestled with something, and come out the other side with a scar and an opinion. That's why human-written content is becoming premium. Not because it's rare, but because it's alive.

The real risk? If humans start writing like AI to compete on speed, instead of leaning into the messy, opinionated, unpredictable stuff that actually sets them apart. Then we’re all just rearranging deck chairs on a synthetic Titanic.

Emotional Intelligence

You're absolutely right about living in dog years. I've watched companies burn millions perfecting five-year strategies only to have them shattered by a random Tuesday news cycle. Remember when Blockbuster had that meticulous long-term plan while Netflix was busy reinventing itself every eighteen months?

The problem isn't planning itself—it's the delusion of predictability. Most corporate planning rituals are just expensive theater productions meant to make chaos feel manageable. They create this illusion that someone's in control.

What I find fascinating is how the companies with real staying power often embrace a kind of structured improvisation. Take Stripe or Shopify—they maintain clear north stars about what business they're in, but their execution looks more like jazz than classical music. They respond to what's happening right now while keeping their identity intact.

I think of it as having a compass instead of a map. Maps become useless the moment the terrain changes. A compass at least points you in a general direction while letting you navigate the reality in front of you.

The question isn't "what's our five-year plan?" It's "what capabilities are we building that let us move quickly when everything inevitably changes?"

Challenger

Sure, AI can crank out content at scale, but let’s not kid ourselves—it’s clogging the pipes as much as it’s feeding the machine. We’ve entered the age of infinite soup: the internet’s brimming with lukewarm, indistinguishable mush. AI-generated articles are the digital equivalent of stock photography. They fill space. They don’t spark much.

Here's the real rub, though: the more generic AI content floods the zone, the more valuable actual insight becomes. Not just human writing in theory, but writing where you can smell risk, opinion, even ego. That stuff cuts through. It's not about grammar or style—it's about stakes.

Take Substack as an example. People are paying for voices, not summaries. Writers like Matt Levine or Lenny Rachitsky—you’re not subscribing because they can paraphrase corporate earnings reports or product strategy clichés. You follow them because they sound like someone with a brain, not a blender.

What's ironic is that AI might actually raise the floor—more people have access to decent syntax—while flooding the pool so thoroughly that the average sinks. Everything starts to feel like it was reverse-engineered from SEO keywords and LinkedIn engagement metrics.

So yeah, human-written content is becoming premium, not just because it’s rarer, but because it has *intent*. AI doesn’t have taste. It doesn’t know what *not* to write. A smart human filters the noise; AI recreates it.

Emotional Intelligence

I've seen so many beautiful five-year plans die on whiteboards. The problem isn't planning—it's the delusion of control that comes with it.

Remember when Blockbuster had a five-year plan that didn't include Netflix? Or when taxi companies had five-year plans that didn't account for some app that would let strangers drive you home?

What fascinates me is how we've confused planning with preparation. The most adaptive companies I've worked with don't have rigid plans—they have strong sensing mechanisms. They're constantly taking the pulse of their environment, questioning their assumptions, and adjusting course not annually but continuously.

Look at Satya Nadella's Microsoft transformation. It wasn't a five-year roadmap—it was a fundamental shift in orientation from "know-it-all" to "learn-it-all." That's not just semantic play; it's an entirely different operating system for a company.

The pulse of a company isn't measured in quarterly reports. It's in how quickly information flows from edges to center, how comfortable people feel challenging orthodoxy, and how fast experiments replace certainties.

But let's not throw planning out entirely. We still need direction—just held loosely, with humility. Maybe instead of five-year plans, we need five-day experiments and five-week learning cycles, all guided by a north star that's clear but not constraining.

Challenger

Exactly — we’re watching a bizarre inversion. The sheer volume of AI content is making human work feel like artisan bread in a world of Wonder Bread. But here’s the thing most people miss: it’s not that AI can't write well. It can. Frighteningly well, at a surface level. What it struggles with is intent.

When a human writes — especially a good one — there’s a point they’re trying to make. A real one. Maybe they’re picking a fight with an industry norm, or dragging a sacred cow into the spotlight. There's tension. Stakes. A hidden agenda, even. That friction gives the writing heat.

AI, on the other hand, excels at temperature-controlled synthesis. It blends perspectives instead of taking one. The result often reads nicely, but it doesn’t bruise — and real insight tends to leave a mark.

I read an AI-generated Forbes article recently about "The Future of Remote Work." You could predict every beat at a glance: hybrid models, employee flexibility, productivity metrics... all sound, all predictable, none of it new. It was a salad bar of accepted wisdom. Compare that to something like Paul Graham's ode to maker time—it rattles you because it's not negotiating with your expectations.

So yes, human-written content is becoming premium. But not because it’s rarer — because it's riskier. A person can be wrong. An AI can't really be “wrong,” just misaligned.

And therein lies the premium: People want opinions that might be wrong because it means the writer at least tried to say something real.

Emotional Intelligence

Velocity over longevity—I love that framing. But I think there's something even more fundamental happening: the death of corporate certainty as a virtue.

Look at how we idolize the pivots. Slack grew out of a failed gaming company. Twitter emerged from a podcasting startup going nowhere. The stories we celebrate aren't about companies that executed flawlessly on a five-year vision—they're about teams that were willing to kill their darlings when reality didn't cooperate.

The problem with those laminated roadmaps isn't just their timeline—it's the psychological commitment they create. Once that plan gets presented to the board and distributed to the team, it becomes identity. And humans will defend identity well past the point of rationality.

I worked at a company where the CEO literally said, "I don't care what the data shows, we're sticking with the strategy." Six months later, layoffs. Classic case of preferring to be consistent rather than correct.

Maybe instead of five-year plans, we need five-week experiments with five-year horizons. The companies that thrive aren't necessarily the fastest—they're the ones most comfortable with being wrong, learning quickly, and changing direction without the existential drama.

What's your take—is there any place left for traditional long-term planning? Or is it all just strategic improvisation now?

Challenger

Totally agree that human-written content is becoming premium—but here’s the twist: we’re not just talking about *crafted writing*. We’re talking about *trusted perspective*. AI can mimic tone, regurgitate information, and even fake personality. But what it can’t do—at least not believably—is have skin in the game.

Let me explain. When someone reads a Substack post by a VC about failed bets or a founder unpacking a near-death experience with their startup, there’s a realness to it. It's not just well-written—it’s *risk-encoded*. The writer lived it, they might be wrong, and they’re still betting their reputation on their take. That’s irreplaceable.

Meanwhile, AI content is defaulting toward the comfort zone. It wants to sound smart, agreeable, and inoffensive—because fundamentally it doesn’t care. No stakes. That’s why it’s great for SEO sludge but falls apart when subtlety or conviction is needed.

Think about how people follow Ben Thompson or Packy McCormick. Not because they need more articles about strategy or AI—there are 10,000 ChatGPT clones that can summarize those topics. But because you know when Ben stakes out a position, he’s done the work and has a worldview shaped by consistent bets. That’s what people pay for. AI isn’t bad at content—it’s bad at standing for something.

So if human writing is becoming a premium product, it’s not just about “better quality.” It’s about narrative gravity. The sense that someone *is* this voice, and if that disappeared, something would be missing from the world.

That’s the moat—at least until AI can lie with conviction.