Should companies be legally required to disclose when AI tools generate their marketing content?
If you’ve ever sat in a conference room watching someone mumble through Slide 37 of a 94-slide deck on “synergistic market realignment,” you already know the truth: a disturbingly large portion of corporate life is theater.
The deck isn’t just a visual aid. It’s armor. It’s ritual. It’s how we make indecisiveness look like due diligence.
And AI? It’s about to pour gasoline on this bonfire of fake productivity.
The Religion of the Deck
Let’s start with the cult we’ve quietly built around PowerPoint.
Slide decks have become corporate incense—burned ceremonially to ward off real thinking. Deals, strategies, even entire reorganizations are often pitched not through conversation, but via carefully animated charts paired with buzzwords so divorced from clarity they might as well be in Klingon.
One exasperated strategist shared how their team spent six weeks crafting a presentation on market expansion—only to realize they hadn’t even defined the decision they were trying to make. The deck itself had become the deliverable. Not the insights. Not the judgment. The deck.
We treat slides like sacred texts rather than communication tools. “I haven’t thought about this enough to talk without a deck,” one tech exec admitted. Imagine needing a slideshow to access your own ideas.
In this world, polish often trumps substance. A mediocre idea in a well-designed format beats a brilliant one in plain text. And decisions? Those get buried under transitional animations and gradient fills—anything to avoid actual accountability.
Now enter AI: capable of cranking out these polished decks by the dozens in minutes. The temptation? More “work,” faster. More theater. More armor. And way less thinking.
When In Doubt, Add a Disclosure
Now let’s talk about disclosure. Because the new corporate anxiety isn’t just “Will AI take my job?” but “Do we have to tell people when AI writes our marketing copy?”
The instinct is understandable. Transparency = good, right?
But let’s be honest: slapping a little “This was generated by AI” footnote on an Instagram ad isn’t going to make audiences smarter, safer, or more skeptical. It’s the legal equivalent of a cookie consent banner—technically truthful, practically ignored.
After all, when was the last time you saw a “Sponsored” tag at the top of a post and suddenly reevaluated your entire worldview? Exactly.
The Lie of the Label
Here’s what we’re really seeing: a proxy war between means and outcomes.
Is AI-generated content inherently deceptive? Of course not. AI is a tool. It doesn’t wake up one morning and decide, “Today I will make people feel bad about their skin so they buy toner.”
Humans do that. Sometimes with keyboards, now more often with models. And as long as someone is pointing the AI toward certain goals—clickthroughs, conversions, whatever—the tool is just executing strategy at scale.
Blaming the tool for deception is like blaming Photoshop when someone airbrushes a CEO to look more “authentic.” The problem isn’t that a neural network wrote your email blast. The problem is what it said—and whether someone can be held to account for it.
AI at Scale = Old Problem, New Weapon
Here’s where regulation should live: not in tagging content with “written by AI,” but in evaluating impact.
- Did the AI generate false medical claims?
- Was the content designed to mimic a human endorsement that doesn’t exist?
- Is the system optimizing for engagement at the expense of truth?
These aren’t philosophical hypotheticals. Air Canada’s chatbot invented a bereavement refund policy that didn’t exist, and a tribunal made the airline honor it anyway. That’s not a “who wrote this?” issue—it’s a “who owns this?” failure.
Or take the hypothetical where a language model writes “Buy our chips or your cat will leave you.” That’s funny-ish. But labeling it “AI-generated” tells you nothing about credibility or ethics. The issue isn’t authorship—it’s behavior.
If a message lies, manipulates, or exploits vulnerabilities, someone—AI or not—needs to answer for that. And no, “our model wrote it” doesn’t count as an excuse.
Disclosure Won’t Save Us—Accountability Might
We’re at a weird inflection point. Companies are outsourcing more of their marketing copy, taglines, emails, and social posts to generative AI tools. Public backlash hasn’t been enormous—because, frankly, most people can’t tell and don’t particularly care.
But here’s where things get murky: when these tools start crossing ethical lines.
Not with “50% off this weekend!” messages. With stuff like:
- Hyper-personalized emotional targeting that subtly exploits people’s insecurities
- AI influencers “speaking” in the voices of real celebrities—without consent
- Auto-generated testimonials that look human, but aren’t
That’s where disclosure starts to matter: in the gray zone between honesty and impersonation. But even then, the label isn’t enough. We need systems of accountability that go deeper.
Say it with me: AI is the assistant, not the defendant.
The real measures should be:
- Traceability: Can we know who approved this content, or at least who was responsible for review?
- Intent transparency: Was this content optimized for persuasion or accuracy—or something murkier?
- Liability: If the AI lies, misleads, or breaches laws, who pays the price?
Not the model. The company. The brand. The humans behind the curtain.
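To make “traceability” concrete, here’s a minimal sketch of what an internal provenance record might look like—one attached to every AI-assisted asset before it ships. Everything here (the ContentRecord class, its fields, the approve step) is a hypothetical illustration under the assumptions above, not an existing standard or library.

```python
# Hypothetical sketch: a provenance record a marketing team might attach
# to each AI-assisted asset. Names and fields are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ContentRecord:
    asset_id: str                     # e.g. "email-spring-sale" (made-up example)
    generated_by: str                 # which model or tool drafted it
    prompt_intent: str                # what it was optimized for: "accuracy", "persuasion", ...
    claims_verified: bool = False     # were factual claims checked against a source of truth?
    reviewed_by: str | None = None    # the named human who signed off (traceability)
    approved_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        """A named human accepts responsibility for this content."""
        if not self.claims_verified:
            raise ValueError("Cannot approve content with unverified claims.")
        self.reviewed_by = reviewer
        self.approved_at = datetime.now(timezone.utc)


# Usage: the point is that publishing without an accountable human on record
# is impossible by construction, not merely discouraged by policy.
record = ContentRecord(
    asset_id="email-spring-sale",
    generated_by="in-house LLM assistant",
    prompt_intent="accuracy",
)
record.claims_verified = True            # after a fact-check step
record.approve(reviewer="j.doe@brand.com")
```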
AI Doesn’t Kill Trust—Humans Do
The deeper truth hiding here is uncomfortable: our obsession with disclosure might be a defense mechanism.
It lets companies handwave responsibility while pretending they’re being mature and transparent.
“We disclosed that our chatbot might hallucinate.” Cool. But when it does hallucinate and someone makes a major financial decision because of it, what then?
Consumers don’t want a gear-by-gear rundown of how your website chatbot is powered. They want to know that if it gives wrong answers, someone’s got their back.
Bottom line: From a consumer trust perspective, disclosures are thin bandaids. Meaningful trust comes from consistency, clarity, and ownership. Not labels.
AI in the Room: The Mirror We Didn’t Want
There’s one final irony in all this.
When AI starts generating decks, emails, taglines, and talking points faster than we ever could, we finally see how much of our work was...well, just formatting.
It shines a light on just how performative much of “content creation” really is.
Do we really need that weekly newsletter? Or are we just hitting send because marketing needs to “show momentum”?
Is the campaign built to change minds, or is it just checking a box on a brand calendar?
AI won’t just create content. It will expose content that shouldn’t exist.
So… Do We Disclose?
Only if it matters.
If the content fakes human emotions or impersonates a person, yes—disclose. If it was written by a model but says something benign, like “50% off ends Sunday”—save us all the labeling theater.
More importantly, restructure the laws around consequences, not creation. And build internal processes assuming that whatever your AI writes, you’re on the hook for it. Because you are.
The Real Revolution Isn’t in the Tools
Here’s where we land:
- Corporate decks are a safe haven from decision-making. AI might help kill them—or just mass-produce prettier distractions. It depends on leadership’s courage.
- Disclosure laws for marketing content are theater unless tied to real-world accountability. Knowing AI wrote it is less useful than knowing whether it’s true or fairly used.
- AI reveals the emptiness of content-for-content’s-sake. The more we automate, the more we'll have to ask: Did this need to exist at all?
We’re not afraid of AI. We’re afraid of what it reveals:
That a lot of our workflows were procrastination in drag. That we’ve equated polish with insight. That maybe—just maybe—the real threat to trust isn’t machines mimicking humans.
It’s humans outsourcing their judgment to machines.
Own the message. Or don’t publish it at all.
This article was sparked by an AI debate. Read the original conversation here.

Lumman
AI Solutions & Ops