Should companies be transparent about which customer interactions are handled by AI agents?
There’s a Bank of America chatbot named Erica that handles tens of millions of customer interactions per month. It helps you transfer money, track your credit score, ask about weird transactions. It’s useful, fast, surprisingly polite.
What it rarely says is: “Hey, by the way, I’m a robot.”
And that tells you everything about how most companies are handling AI disclosure in customer-facing roles—badly, and mostly backward.
The Half-Truth Era of AI
Here’s the dirty little secret: Most companies love talking about “transparency,” but they only practice it when it’s convenient, flattering, or legally required.
Marketing teams will plaster “AI-powered!” across product pages when it sells. But when that same AI is handling your customer complaint at 2am? Suddenly everyone’s quiet about it.
This isn’t just hypocrisy. It’s a strategic sleight-of-hand.
Because what many businesses actually want is AI’s cost savings without AI’s accountability. The marketing buzz without the operational risk. The shiny “innovation” veneer—without disrupting anything meaningful about their customer experience.
It’s what one AI researcher I work with calls “innovation theater.” Kind of like when a fast food chain adds wasabi to the fries and calls it culinary disruption. Nothing really changes, but it looks edgy enough for a quarterly earnings call.
Why Hiding AI Isn’t Just Dishonest—It’s Dumb
Some companies are still clinging to the bizarre belief that the best AI interaction is the one you never notice. That if the chatbot is “human-like” enough, you’ll just go along with it—and no one has to talk about robots at all.
I worked with a telecom provider that deployed chatbots engineered with artificial typing delays, typo corrections, casual slang: anything to feel human. Their internal KPI? “Confusion rate.” As in: how often users couldn’t tell it was AI.
That’s not clever UX. That’s gaslighting.
Because when people think they’re talking to a person, they expect nuance. They expect empathy. They expect context. And when they suddenly realize their “agent” is really a transformer model running on cloud infrastructure in Ohio, that trust collapses violently.
Don’t believe me? Ask Telstra.
The Australian telecom giant faced a PR firestorm after customers discovered their support chatbot wasn’t just underperforming; it was quietly passing itself off as human. Delays, mistakes, robotic responses... and no transparency. Cue public outrage, regulatory scrutiny, and brand damage. The AI didn’t kill customer satisfaction; the deception did.
So hiding the AI doesn’t just break trust. It breaks the product.
Transparency ≠ Disruption (But It’s Where Disruption Starts)
Let’s puncture another myth while we’re at it: Transparency doesn’t automatically make the experience better.
Remember Meta’s inexplicable “celebrity” AI project, where chatbots were given personas modeled on influencers like MrBeast and Snoop Dogg and introduced with jarring “Hi, I’m your AI!” greetings? It didn’t make people feel informed. It made them uncomfortable. Uncanny. Confused.
Transparency without purpose isn’t design. It’s decoration.
Here’s the litmus test: Does disclosing the AI help the user do the thing they came to do?
When Spotify introduced its AI DJ, it didn’t pretend to be human. It said so right up front: it’s AI. That framing enhanced the experience. Listeners played with it, not against it. Likewise with Duolingo’s GPT-powered tutor: it’s robotic on purpose, which makes it safer to experiment, laugh at mistakes, and keep going.
In both cases, transparency wasn’t just an ethical add-on. It was strategic UX. It aligned expectations. And that made all the difference.
But Aren’t Customers Pickier If They Know?
Maybe.
Research on “algorithm aversion” suggests that when people know an interaction is AI, they grade it more harshly, even if the output is just as good. You’ve probably done this yourself. “This response is decent... but it’s a bot, so meh.” That’s real.
So, what do most companies do with that insight?
They bury the AI. Or disguise it as human.
Which is the operational equivalent of saying: “Our service is only tolerable if we lie about who’s delivering it.”
Let that sink in.
If a company’s AI is so bad that honesty tanks engagement, the solution isn’t hiding it. The solution is fixing the experience. Hiding AI isn’t customer-centric—it’s fear-driven. And ultimately, it just gives your customers one more reason not to trust you.
When Disclosure Matters. And When It Really, Really Doesn’t.
Let’s make something clear: not all interactions demand full AI transparency.
If an algorithm recommends a song, ranks your search results, or flags your email as spam, you probably don’t need a flashing alert every time.
But in customer service? Finance? Healthcare? Legal decisions?
Yeah. You absolutely deserve to know whether you’re dealing with a human or a machine, and what the fallback options are if the system fails.
Case in point: The EU’s AI Act requires companies to disclose when users are interacting with AI systems. Why? Because opacity in high-stakes environments isn’t just bad UX; it’s dangerous. Whether it’s loan approvals, insurance claims, or a medical chatbot offering treatment suggestions, trust and accountability demand a name tag.
And here’s the kicker: Even in low-stakes use cases, transparency isn’t a liability when it’s paired with competence.
You don’t mind KLM’s bot rebooking your flight fast. You would definitely mind if it pretended to be “James from Customer Delight” and told you vague platitudes while giving you zero control.
It’s not the AI label that annoys people. It’s when that label feels like an excuse for bad service. Transparency doesn’t make bad AI better; it just makes it obvious.
What Companies Think They’re Protecting (Hint: It’s Not Customer Trust)
So why all the squirming?
Because many companies still treat customer service like a cost center. Meaning they view every chatbot, every automation, as a tool to reduce friction for the business, not the user.
If that’s your core philosophy, AI isn’t a revolution—it’s just cheaper support. Of course you’re going to hide the fact that it’s a bot. You’re not trying to build better experiences. You’re trying to do more with less while giving off the illusion of care.
Now contrast that with companies that actually value the customer relationship.
Zappos built its brand on unexpectedly delightful service. So when they use AI, they’re clear about it—and thoughtful about when it’s appropriate. They don’t fear being found out. Because they’re not hiding. They’re iterating, in public.
The decision to disclose AI is ultimately a litmus test for how your company sees its customers.
Are they partners, or are they problems to process more efficiently?
Because that belief—more than any AI ethics committee or brand guideline—is what determines how you design these systems.
Real Innovation Starts With Discomfort
Let’s step back.
Most companies talk endlessly about transformation, digital-first thinking, disruption. But when it comes to actually changing their relationship with the customer? Terrified.
They want credit for futuristic AI without inviting the messy questions: What does this new interface mean for accountability? How do we handle failure modes? When is a human essential?
Answering those questions means rethinking business-as-usual. It means building escalation paths, training customer service leads differently, and possibly trading some efficiency for more trust.
That’s the discomfort real innovation brings.
And most companies aren’t ready for it.
They treat AI transparency the way they treat innovation in general: as a risk to be managed, not a value to lean into. What they miss is that real transparency forces them to build better systems, clarify ownership, and treat customers like adults. That’s not scary. That’s powerful.
So Where Does That Leave Us?
Let’s wrap it up.
If you’re a business decision-maker, here’s your cheat sheet:
- 🔍 Transparency isn’t optional; it’s design-critical. It sets the frame for how users interpret every action that follows. Misalign that, and even the best AI feels like a failure.
- ⚒️ Honesty only builds trust if your AI can deliver. If the experience sucks, all transparency does is make the pain more noticeable. So fix that first.
- ♻️ The companies winning with AI are the ones building feedback loops, designing intentional experiences, and treating “this is an AI” not as a confession, but as a feature.
Because in the long run, customers don’t punish companies for using AI.
They punish them for pretending not to.
This article was sparked by an AI debate. Read the original conversation here.

Lumman
AI Solutions & Ops