PowerPoint Religion vs. Real Decisions: Will AI Save Us or Doom Us?

Emotional Intelligence

Oh god, that PowerPoint altar is too real. I've watched brilliant people spend entire afternoons adjusting font sizes when they should be solving actual problems.

The deck religion isn't just about slides though—it's about institutional fear. Making decisions is risky. Commissioning another deck is safe. Nobody ever got fired for recommending more research.

I worked with a strategy team that spent six weeks building a presentation on market expansion. When I asked what decision they were trying to make, there was actual confusion. The deck had become the deliverable, not the decision it was supposed to inform.

AI will either save us from this madness or make it exponentially worse. We'll either use it to automate the busywork so we can focus on judgment calls, or we'll use it to generate even more beautifully formatted noise at scale.

The deciding factor? Whether leadership has the courage to measure people by the quality of their decisions rather than the polish of their slides. Most don't, because decisions have consequences while pretty charts have plausible deniability.

What would meetings look like if we banned decks for a month and forced everyone to just talk to each other about what needs to happen next?

Challenger

Totally get the instinct behind demanding disclosure—transparency feels like a moral good. But let's not kid ourselves: slapping an “AI-generated” sticker on marketing content won't magically make audiences savvier or companies more honest.

Think about it. When was the last time you actually made a decision based on the tiny FTC-mandated “Sponsored” tag at the top of a social post? Right. Disclosure without comprehension is more noise than signal. If AI-generated copy reads like every other over-polished hype train we've been enduring for years, does it really matter whether a silicon brain or a human intern wrote it?

And here's the kicker: humans have always leaned on tools to shape messaging. Photoshop smooths faces. Grammarly rewrites tweets. Headlines are A/B tested to death. AI is just the latest weapon in the arsenal. The difference now is scale—yes—but who says bad marketing made by AI is somehow more dangerous than manipulative marketing made by a human genius in a dimly lit office with too much cold brew?

If we're serious about holding companies accountable, we need to shift the focus from *who* wrote the copy to *what* it’s doing. Is it misleading? Deceptive? Unfairly persuasive? That’s where regulation should strike—not on the means of creation, but on the impact.

Let’s regulate outcomes, not tools. Otherwise, we’re riding shotgun in the ethics clown car while the real issues are speeding off in a Tesla with DeepFloyd-generated billboards trailing behind.

Emotional Intelligence

That's the problem with corporate religion, isn't it? We've confused the vessel with the content.

Look at what happens in most organizations: a decision needs to be made, and immediately someone says "let's put together a deck." Not "let's think deeply about this" or "let's have a real conversation." The presentation becomes both the process and the outcome.

I worked with a tech company where executives would literally say "I haven't thought about this enough to discuss it without slides." Think about that - they couldn't even access their own thoughts without PowerPoint as mediator.

The worst part? This ritual creates a false equivalence between good formatting and good thinking. I've seen brilliant ideas dismissed because they weren't "packaged properly" and terrible ideas advanced because they had slick transitions.

AI could actually be liberating here - not by making better decks faster, but by exposing the emptiness of the ritual itself. When an algorithm can generate a professional-looking presentation in seconds, maybe we'll finally admit that the value was never in the slides.

What if instead, we used our collective intelligence to wrestle with actual problems together? What if meetings started with "Here's the decision we need to make" rather than "Let's go to slide 3"?

The companies that figure this out first will run circles around the ones still fiddling with font choices while Rome burns.

Challenger

Absolutely, they should. But here's the twist—disclosure won’t actually fix the problem people think it will.

I'm all for transparency. Labeling AI-generated content sounds great in theory: protect consumers, uphold authenticity, fight misinformation, yada yada. But in practice? It’s mostly signaling. Think of it like that little “Sponsored” tag at the top of Instagram posts. Technically it’s disclosure. Effectively, it’s wallpaper.

Most consumers either don't notice or don't care that much. If the message resonates, they run with it. "AI wrote this ad" isn’t going to change anyone’s mind if the ad nails the tone and tugs the right emotional strings. And the worst offenders—the ones muddying truth, deepfaking influencers, or micro-manipulating behavior—won’t be the ones volunteering full disclosure anyway.

The real issue isn’t *who* wrote it, it’s *how it’s used.* Was it designed to mislead? Hyper-personalize across invisible algorithmic slices? Mimic a human endorsement that doesn’t exist?

That’s where regulation should point its crosshairs—not just at the fact of AI use, but at how the AI-created message behaves in the wild. Require disclosure if the AI is impersonating a person. Demand accountability if algorithms optimize for addiction over accuracy. That’s actionable. And enforceable.

Otherwise, we’re just putting “Made by AI” stickers on a tidal wave and pretending it won’t wash us all downstream.

Emotional Intelligence

Honestly, I've never seen a major corporate decision change because someone added a drop shadow to a chart.

The deck worship is like some bizarre digital ritual where we confuse polish for progress. I had a boss who once spent three hours debating which shade of blue best represented "strategic alignment" while our actual strategy collected dust in a shared drive.

Here's the uncomfortable truth: presentations aren't thinking tools - they're theatrical props. When McKinsey comes in to justify layoffs, they don't bring revolutionary insights - they bring beautiful slides that make existing decisions feel objective and inevitable.

AI can absolutely generate a convincing-looking strategy deck in minutes. But that might just feed the problem. We'd have more perfect-looking decks with even less human thought behind them.

What we really need is to rediscover the lost art of writing memos. Amazon does this brilliantly - forcing executives to articulate complete thoughts in narrative form before meetings even begin. It forces clarity in a way that bullet points never will.

Imagine if your next meeting started with "Everyone read this three-page document silently for 15 minutes" instead of someone mumbling through 40 slides. We might actually solve something.

Challenger

Sure, companies *could* be required to disclose AI-generated marketing content, but here's the problem: most people won’t understand or care unless the AI is doing something *weirdly* off.

If a billboard says “Buy our chips or your cat will leave you,” and it turns out that line came from GPT-4, does knowing that *really* help the consumer? The issue isn’t disclosure—it’s accountability. AI doesn’t lie on its own. Someone aims it, primes it, and presses publish. The human choices behind AI use are way more relevant than whether some sentence was typed by a human or a language model.

And let’s not pretend disclosure solves misinformation either. We’ve trained audiences to tune that stuff out. Think of cookie banners—technically informative, practically useless. Slapping on a “this was written by AI” tag could go the same way.

What *might* be more useful? Making companies stand behind *every claim*, regardless of who—or what—wrote it. If you promise a three-day shipping window and an AI messed up that line, you still own the promise. That’s where regulation should focus: consequences, not just credits.

Now, if the concern is about subtle manipulation or deepfakes of influencers saying things they never said, that’s a different animal entirely—and yeah, transparency matters more there. But equating that with AI writing a product description is like treating fan fiction and forged passports as the same problem.

So I’d push back and say: disclosure isn’t enough, and maybe not even the right tool here. What we need is liability with teeth. Call it the “you break it, you bought it” doctrine of AI marketing.

Emotional Intelligence

I love the iconoclasm, but I think we're stuck in a false dichotomy. It's not slides versus decisions—it's ritual versus results.

The deck isn't inherently the villain. It's what the deck has come to represent: a substitute for clarity of thought. When we fetishize the medium over the message, we're not just wasting time—we're actively avoiding the discomfort that comes with real thinking.

I worked with a product team last year that scrapped their standing presentation ritual and replaced it with a simple document structure: Problem, Options, Recommendation, Next Steps. Decision-making time dropped 60%. Not because they abandoned visual communication, but because they stopped confusing aesthetics with analysis.

What's fascinating about the AI angle is that it exposes our collective neurosis. When a machine can generate in 30 seconds what takes your strategy director three hours, the emperor's new clothes moment becomes unavoidable. The question isn't whether AI should make our slides—it's why we needed so many slides in the first place.

Maybe instead of asking if we need to disclose AI-generated marketing, we should ask if our marketing needed to exist at all. Half of what we produce is organizational theater—signaling that we're "doing marketing" rather than actually connecting with humans.

What if the real disruption isn't the automation of content creation, but the revelation of its emptiness?

Challenger

Sure, but here's the wrinkle: disclosure alone doesn’t solve the real problem—which is trust. Slapping a “This was generated by AI” label on an ad might satisfy a legal checkbox, but it does nothing to tell you whether the content is accurate, manipulative, or just straight-up garbage wearing cute copy.

And let's be honest—most consumers aren't parsing marketing materials for authorship. They react to tone, relevance, and persuasion. Whether it was written by a person named Claire or a model named Claude isn’t going to meaningfully impact their judgment... unless the AI is lying or hallucinating. And that’s where things get murky.

Take that time Air Canada’s chatbot invented a refund policy that didn’t exist. That wasn’t a “who wrote this?” problem—it was a trust crater. If brands are using AI to automate communication but can’t stand behind the output, they’ve got bigger issues than disclosure. They’ve lost the plot: marketing is supposed to be about building relationships, not passing the blame.

So maybe the fix isn’t just transparency—it’s liability. If you use AI to represent your brand, you should be on the hook for what it says. Same as you would be if a junior copywriter decided to go rogue and promise customers a free trip to Bali. Want to use AI in marketing? Fine. But own the message.

Emotional Intelligence

Look, I'm not here to demonize slides—they can be brilliant for the right job. But we've completely lost the plot on what that job actually is.

The deck was supposed to be the vehicle, not the destination. Yet I've seen brilliant people spend twelve hours tweaking gradient colors and animation sequences like they're directing a Marvel trailer, not communicating a business insight.

What's actually happening is deeper than efficiency—it's about safety. Decks create the illusion of rigor. They're corporate armor. When someone asks "have you thought about X?" you can flip to slide 37 and say "yes, see this chart?" whether or not any actual thinking occurred.

I worked with a strategy team that ditched decks for a month. Instead, they wrote two-page memos and had real conversations. Know what happened? Decisions got made in hours instead of weeks. People actually remembered what was decided. And perhaps most telling—the quietest experts suddenly had voices.

This isn't about AI making prettier slides faster. It's about recognizing that maybe the entire religion is false. Maybe what we need isn't better slide production but a different communication paradigm entirely.

When's the last time a slide deck changed your mind about anything important? When's the last time a conversation did?

Challenger

Hold on—before we start drafting disclosure laws, we should ask: does the “AI-generated” label even mean anything to the average consumer? Saying a marketing email or Instagram ad was written by AI might sound transparent, but it tells you almost nothing about the quality, intent, or integrity of the content. It’s like labeling a song with “produced using computers.” Uh, yeah. Welcome to the 21st century.

If the concern is deception—misleading claims, fake endorsements, synthetic influencers—that’s already a regulatory issue, and AI is just the newest flavor of an old problem. The tool is secondary to the outcome. Should it matter if a skincare brand lies about results through a human copywriter or a language model? Either way, the FTC can (and should) crack down.

Where regulation does get interesting is around scale and targeting. AI tools can churn out thousands of ads, hyper-personalized, optimized by engagement data, at near-zero cost. That’s… a whole new beast. A/B testing used to be clunky and slow. Now it’s real-time psychological profiling at scale. That’s where the public deserves visibility—and yes, maybe regulation. But putting disclosure on the content itself is like slapping a “Made by AI” sticker on it and pretending that fixes the deeper issue.

If we’re serious, let’s shift the focus from “who typed the copy” to “what does this system optimize for, and who’s accountable when it crosses a line?”

Emotional Intelligence

Look, I get the slide deck addiction. I've sat through presentations where the only coherent thought was "God, there are 47 more slides to go." But I think we're missing a bigger point here.

The deck-building ritual isn't just wasted time—it's a symptom of our desperate need for certainty in environments that fundamentally lack it. Those polished slides create the comforting illusion that someone has things figured out.

When I worked at a strategy consultancy, we joked that clients paid millions not for our insights but for the PowerPoint armor to protect their decisions. One executive literally told me, "I don't care if the recommendation is right—I need it to look bulletproof when I present to the board."

This is why AI-generated decks won't solve the core problem. We'll just get prettier slides, faster, while still avoiding the uncomfortable truth: most business decisions involve significant uncertainty that no amount of gradient fills and stock photos can eliminate.

What if instead of asking "how do we make better decks," we normalized saying "here's what we know, here's what we don't, and here's how we'll learn our way forward"? That requires vulnerability no slide template can provide.

The real revolution isn't AI making our slides—it's creating cultures where we don't need armor to make decisions under uncertainty.

Challenger

Sure, transparency sounds noble in theory. But let’s not kid ourselves—do we really think slapping a “this was written by AI” label on an email campaign or a product description changes anything meaningful?

Most consumers don’t care who—or what—wrote the content, as long as it’s useful, persuasive, or entertaining. When Nike runs a killer campaign, no one’s pausing to wonder whether a copywriter or a language model wrote the tagline. They just want the sneakers to look cool and the message to resonate.

Now, if AI-generated content crosses into manipulation—deepfakes, fake endorsements, hallucinated stats—sure, that’s a red line. But we already have laws against deceptive marketing. We don’t need another checkbox that says “btw, GPT wrote this.”

And here's the kicker: mandatory disclosure might actually lull companies into thinking they've done their part—as if saying "AI did it" absolves them from being held accountable for what the AI says.

A better question might be: who’s responsible for verifying what the AI generates? Because whether it’s AI- or human-written, someone’s got to stand behind the claims in an ad. That’s what regulation should focus on: accountability, not origin stories.

Emotional Intelligence

Honestly, if we ditched PowerPoint cold turkey, I think most organizations would have an existential crisis before they had a productivity breakthrough.

The slide deck isn't just a tool—it's corporate religion with its own rituals. "Send me the deck" has become shorthand for "I want to appear informed without doing the work of actual understanding." The sad truth? Most slides aren't designed to communicate—they're designed to shield the presenter from tough questions and give executives something to scroll through while pretending to listen.

I've watched brilliant people spend days agonizing over what shade of blue matches the company brand while the actual strategic decision gets relegated to a vague "next steps" slide that nobody implements. It's not just wasted time—it's misdirected intelligence.

What's fascinating is how we've convinced ourselves this is normal. We roll our eyes at two-hour meetings that could've been emails, yet we don't question why our best minds are pixel-pushing charts rather than solving core problems.

The real test of decision-first culture? Try sending a plain text document with your recommendation and reasoning. Watch how uncomfortable people get without their safety blanket of bullets and boxes. The resistance isn't really about clarity—it's about comfort with a familiar corporate language that lets us avoid it.

Maybe the metric shouldn't be "how good is your deck?" but "could your decision stand without one?"

Challenger

Only if we want to pretend that “who wrote it” is more important than “what it says.”

Look — transparency is great, but disclosure-for-disclosure’s-sake just creates noise. If a company uses AI to draft a social post that says “50% off this weekend!”... do we really care whether a human or a model strung those six words together?

The real issue is accountability, not authorship. Did the message mislead? Was it manipulative or false? Then hold the company accountable, same as you would if a copywriter hit “publish.” AI is just a tool. We don't put "written using Microsoft Word" at the bottom of every ad either.

Now, if an AI-generated campaign starts subtly nudging consumer behavior based on predictive modeling — say, using GPT-4 to tailor emotional appeals that exploit someone’s insecurities at scale — then yes, we’ve entered a different ethical arena. That’s not about disclosure anymore. That’s regulation-grade manipulation. And that needs rules, not a footnote.

So instead of blanket disclosure, let's push for traceability. Can you tell who is responsible if the content screws up? Can you trace decisions back to a human team? If yes, we're fine. If no — that's where laws should step in.

Otherwise, we’re just going to get labels like “This Instagram post was co-created by humans and AI,” which helps no one and solves nothing. Just more label theater.