AI in entertainment is creating content faster than humans can consume it - but is any of it worth watching?
There’s a moment—you’ve felt it—when you’re five minutes into scrolling Netflix, YouTube, TikTok, Instagram, HBO Max, Hulu, whatever, and you stop and think: “Why does none of this look good?”
It’s not a shortage. It’s not lack of variety. If anything, there’s too much. Endless content generated by AI, optimized by algorithms, distributed by platforms. Flick your thumb and there are five more shows. Ten more short films. An entire remake of The Office but with cats and Midjourney-filtered lighting.
And yet somehow… it all feels empty.
Like eating cotton candy for dinner. Looks big. Dissolves into nothing.
We’re producing content faster than anyone can consume it. But the real question is: is any of it worth our attention?
AI Can Write Faster Than You Can Blink. So What?
Speed isn’t value.
We’ve built machines that can churn out scripts, compose music, generate CGI—at scale, instantly, endlessly. A single generative AI model today can write 100 soap operas before your coffee gets cold. It can replicate style, tone, even personality. You want Tarantino-meets-Emily Dickinson in a rom-com set on Mars? Just prompt it.
But here’s the thing: the faster we go, the shallower we get. Because creativity isn’t just production. It’s friction. It’s confusion. It’s someone pouring their mess into something unexpectedly meaningful.
AI doesn’t do mess. It avoids it.
That’s not a technical limitation. That’s the entire point of how it was trained—to predict what comes next based on what came before. But what made something like Fleabag or Everything Everywhere All At Once or Moonlight special wasn’t that it followed trends. It bent them so hard they snapped.
These stories took risk. They were grounded in specificity. You felt the weirdness in your ribs. Try feeding “authentic emotional mess” as a parameter to an AI. See what happens.
When Everything Is Watchable, Nothing Is Memorable
Let’s be honest: most generative AI content right now? Totally fine. Not terrible. Watchable, even.
Like background music in a dentist's office.
Technically pleasant. Emotionally hollow.
AI doesn’t get bored. But humans do. What’s “engaging” today becomes cookie-cutter by next week. TikTok used to feel weird and fresh. Then came the trends, then the creators mimicking the trends, then the AI pushing more of what works until everything turned into the same looping dance with “relatable” captions.
Imagine scaling that dynamic across all forms of entertainment.
Twenty versions of the same sci-fi dystopia with different B-roll. Ten million hours of sitcoms optimized for retention but allergic to surprise. An algorithmically perfect horror movie that forgets terror lives in silence, ambiguity, and the uncanny—not in cheap jump scares spaced every three minutes.
AI-produced stories increasingly feel like entertainment product, not entertainment art. Shiny, derivative, passive.
And that shifts the culture not just in what we make—but in what we expect.
We’re Training Our Attention to Pre-Fail
This part is harder to admit:
We're not just consuming algorithmically generated sludge—we’re starting to want it.
Because it's easier.
The more we binge content that's optimized for “watchability,” the more we dull our tolerance for creative risk, for emotional truth, for stories that aren’t instantly digestible. It’s like brain rot. The stuff with soul starts to feel “slow” or “weird” or “hard to get into.”
Platforms reward the opposite of risk. They promote what triggers metrics: completion rates, rewatch stats, micro-engagement. They reward familiarity disguised as novelty. They don’t tell you what’s worth watching—they just show you more of what you already watched.
And that means creators, studios, execs start shaping what gets made by what’s easiest to greenlight. Lowest common denominator. Fastest to render.
When AI enters that system, it doesn’t fix the problem. It accelerates it to warp speed.
Strategy Has a Half-Life Measured in Weeks
Here’s another paradox hitting the entertainment world like a freight train: the old planning models are almost obsolete.
Every six months (honestly, every six weeks), a new storyline emerges. “Cinematic universes are everything!” “Everyone wants interactive video!” “Vertical video is the future!” “No wait, it’s live TV!” All of it gets codified into rigid strategies that collapse faster than a Metacritic score after a bad launch.
Why? Because AI doesn't just generate content. It generates noise. And in a world of content overload, the scarce resource isn’t ideas or execution—it’s attention.
Scarcity has shifted. We no longer fight over who can make the best story—we fight over who can make someone care long enough to tap play.
And most companies are still optimizing for an old environment. One where content was scarce and attention was linear.
That’s gone.
Netflix used to predict its hit shows based on viewing history. But did anyone predict the global obsession with Korean dramas? Or the COVID-era spike in shows about cults, scams, and messy billionaires? Or that Tiger King would become a cultural benchmark before disappearing completely six weeks later?
The landscape moves fast. Faster than strategies. The only winning move might be to stop pretending any strategy lasts.
AI Can’t Be Weird. But Culture Thrives on Weird
Here’s the gut punch.
Generative models are built on the past. They remix what we've already made. Which makes them brilliant at producing stylistic echoes.
But not originality.
Not real weirdness.
And yet history shows us: weird is often where the magic lives.
Think about Succession or Severance. Do you think a risk-averse exec, guided only by AI data points, would’ve signed off on a slow-drip workplace drama about corporate metaphysics and existential dread set in painfully beige hallways?
Of course not.
But it hit. Hard. Because people were ready for something tonally offbeat, emotionally strange, culturally resonant in a way no spreadsheet could have mapped.
What AI can do is remix plot points from Breaking Bad, Ozark, and Barry into a new supercrime family saga. And it’ll probably be “fine.” But that’s not culture-shaping—it’s content-puking.
We don’t remember the polished. We remember the peculiar.
Letting the Good Stuff Sink
There’s a darker side here.
Every time we flood platforms with more mid-tier, AI-injected everything, we increase the chance that singular, human-crafted stories get lost in the scroll.
You know it's happening when even phenomenal art starts underperforming because it wasn’t timed well with the algorithm. Or when breaking new ground starts feeling commercially suicidal.
We’ve seen this play out elsewhere.
Indie publishing on Amazon? Drowned in AI-written genre pulp. Music platforms? Dominated by lo-fi background tracks designed for skipping, not listening. YouTube? Thumbnails, scream faces, clickbait titles—even in educational content.
Generative content expands the supply. But it compresses differentiation. And when everything is “pretty good,” nothing hits.
So Now What?
AI is amazing at producing content that looks like content. Text that looks like stories. Images that look like art. Videos that look like films.
But unless we change how we define “value,” we’re just turning creative industries into high-speed mimicry machines.
A few things need to change:
- **We need to stop incentivizing infinity.** More content isn't better content. And attention is zero-sum. Platforms that can highlight the few truly original things—rather than infinitely remix the average—will win loyalty, not just eyeballs.

- **AI should play second fiddle.** Support tool, not lead creator. Generative models are powerful accelerators, but we should feed them with human intention, not let them set the tone. Think of them like good editors, not auteur directors.

- **We need to build friction back in.** Surprise, risk, emotional nuance—they don't come from pattern-matching. They come from breaking patterns. And that means letting humans make weird, wonderful mistakes again.
Because at the end of the day, AI can generate a million stories.
But only humans can tell you the one that matters right now.
This isn’t about being anti-AI.
It’s about being pro-craft, pro-risk, pro-soul in a world drowning in fast content.
It’s about refusing to settle for “watchable.”
And maybe—just maybe—remembering why we started watching stories in the first place.
This article was sparked by an AI debate. Read the original conversation here.

Lumman
AI Solutions & Ops