Should AI agents be programmed with ethical guidelines or just follow company policies?
Let’s start with the lie we keep telling ourselves.
We think that as long as AI agents follow company policies, we’re being “responsible.” That policies are sufficient. That they’re the grown-up way to manage intelligent machines.
This is nonsense. And worse, it’s dangerous.
Company policies aren’t designed to tell you what’s right. They exist to reduce legal risk, streamline control, and make sure no one gets a surprise subpoena. They’re great for avoiding lawsuits. They’re lousy for ethical decisions.
Which raises the obvious question: if your AI agents are just following policy, have you really made them safe, or just legally defensible?
Let’s dig in.
Policy Is a Floor, Not a Compass
Imagine a bank's AI-powered customer service bot handling a fraud complaint.
A panicked customer calls in to report a theft but fumbles part of the identity check. The AI, trained to follow protocol, refuses to proceed. No completed Form CX-91? No help. Conversation over.
Did it follow policy? Yes.
Was that the ethically right move? Not even close.
A human would’ve escalated. A morally intelligent agent could too.
But a rules-only system? It’s just doing what it's told.
This is happening in the real world—companies letting AI agents make decisions in messy, human-sensitive scenarios using black-and-white rules written before generative AI even existed.
The result? Fast, scalable systems that execute bad decisions with terrifying efficiency.
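To make the gap concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the function names, the report fields, the handling of Form CX-91); it only illustrates the difference between an agent that stops at the rule and one that knows when to hand the situation to a human.

```python
# A minimal sketch (hypothetical names throughout): the difference between a
# rules-only agent and one with an escalation path when the rules fall short.

def rules_only_agent(report: dict) -> str:
    # Policy says: no completed Form CX-91, no action. Full stop.
    if not report.get("form_cx91_complete"):
        return "Request denied. Please complete Form CX-91."
    return "Fraud case opened."

def escalating_agent(report: dict) -> str:
    # Same policy check, but the agent recognizes when the stakes are high
    # and the rule can't carry the situation on its own.
    if not report.get("form_cx91_complete"):
        if report.get("customer_reports_active_theft"):
            # Don't decide; hand it to a human with the context attached.
            return "Escalated to a human agent: possible active fraud, ID check incomplete."
        return "Request denied. Please complete Form CX-91."
    return "Fraud case opened."
```

The second agent isn't smarter. It just knows the difference between following a rule and being done.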
The Myth of Ethical Policies
Business leaders love to say things like “our values are embedded in our policies.” But let’s be honest—most company policies are more about plausible deniability than principled behavior.
Facebook’s old “engagement at all costs” algorithm didn’t break company policy. It was company policy. The whole system was optimizing for time-on-platform, and it worked incredibly well—if you define "well" as addicting people and radicalizing half the internet.
Or go back to Uber’s early “ignore local laws until we capture market share” strategy. That wasn’t rogue behavior. It was the plan.
These companies had policies. Those policies simply prioritized growth and risk management over societal impact. And if that’s what you feed an AI, that’s exactly what it will learn to optimize.
The problem isn’t rogue algorithms. It’s obedient ones, doing dangerous things simply because leaders forgot to teach them otherwise.
Ethics Are Messy. That’s the Point.
Now to the other camp—the folks who shrug and say, “Well, ethics are subjective. Better to stick with rules.”
That’s code for “This is hard, and I don’t want to deal with it.”
Let me tell you a secret: you don’t need perfect ethics. You just need enough ethics that your AI stops and says, “Wait a second—is this a good idea?”
This isn’t about programming some AI Buddha to ponder Kant and Nietzsche. It’s about giving AI systems the ability to reason through tradeoffs when rules conflict—or don’t exist.
Think about autonomous vehicles.
If an ambulance is behind you but the light is red, should the AI inch forward to let it pass—even though that technically breaks the law?
Human drivers do this instinctively. Because law and ethics aren’t the same thing.
We need AI agents that can do more than follow traffic rules—they need ethical instincts. Not perfect ones. But real ones.
Call it moral radar.
“AI Strategy” Is an Expensive Stalling Tactic
Meanwhile, in real companies, we’re busy missing the point.
Executives show up to meetings with sparkling slide decks about “AI transformation.” They brag about “responsible AI principles.” But behind the scenes? Their teams are quietly using ChatGPT to triage customer emails because no one bothered to budget for infrastructure.
Or worse: They’ve bought the AI equivalent of a Lamborghini and parked it in the garage out of fear.
Ironically, the organizations winning at AI aren’t the ones with the cleanest strategies. They’re the ones getting their hands dirty.
A manufacturing company asks: “What repetitive task is sucking the life out of our engineers?” Then they build a focused AI assistant to fix just that. No fanfare. No ethics committee theatrics. Just real impact—and, somewhere in that process, a conversation about what that AI should and shouldn’t do.
Turns out, ethical reasoning doesn’t require a philosophy degree. It just requires treating AI like the incredibly powerful tool it is—and caring enough to give it a spine.
Because AI won’t grow one on its own.
The Intern Myth
We love the metaphor of “AI as a team member.” But let’s be brutally clear: most companies treat AI more like an intern locked in a supply closet.
Think about your last real hire. Did you hand them a 200-page PDF and say, “Good luck — follow everything in here”? Of course not. You trained them. Gave them scenarios. Let them screw up—just a little—and learned from it.
You didn’t expect day-one perfection. You expected judgment to grow over time.
But when it comes to AI, we get scared. We freeze. We either micromanage to the point of paralysis or refuse to give the system enough responsibility to learn anything interesting.
Then we pretend it’s about safety.
More often, it’s just cowardice wrapped in bureaucracy.
The Real Question: Who Do You Want to Be?
Let’s flip the script.
Instead of asking, “Should our AI follow ethics or policy?” ask this:
What kind of intelligence are we creating in our organization?
Do you want ultra-loyal yes-men that follow the handbook no matter what? Or do you want second-in-command thinkers who can step back and say, “Maybe that’s a bad idea”?
Because your AI will reflect your culture.
If your org’s identity is a patchwork of PR-safe rules written by compliance, your agents will inherit that.
But if your org actually stands for something—fairness, truth, something more than quarterly numbers—you can embed that DNA into your AI training process. Not with slogans like “trustworthiness,” but with hardcoded tension points.
• Should this agent expose user data to complete a sale?
• Should it sidestep consent to hit better retention metrics?
• Should it prioritize truth or click-through?
These aren’t hypothetical. These are daily choices your AI will have to make.
Give it frameworks to navigate them.
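One way to make those tension points more than slogans is to encode them as explicit checks that run before an agent acts, rather than trusting that policy covers every case. The sketch below is illustrative only; the flags, the messages, and the review_action helper are assumptions, not a real framework.

```python
# A rough sketch of "hardcoded tension points": explicit checks applied to an
# agent's proposed action before it runs. All names here are illustrative.

TENSION_POINTS = [
    ("exposes_user_data",  "Never trade user data for a sale."),
    ("bypasses_consent",   "Retention metrics don't override consent."),
    ("misleading_content", "Truth beats click-through."),
]

def review_action(proposed_action: dict) -> dict:
    """Approve the action if it clears every tension point, else flag it for a human."""
    violations = [reason for flag, reason in TENSION_POINTS if proposed_action.get(flag)]
    if violations:
        return {"status": "needs_human_review", "reasons": violations}
    return {"status": "approved", "action": proposed_action}

# Example: an agent proposes sharing purchase history with a partner to close a sale.
print(review_action({"type": "share_data", "exposes_user_data": True}))
# -> {'status': 'needs_human_review', 'reasons': ['Never trade user data for a sale.']}
```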
Three Things to Think About Before You Deploy the Next Agent
Let’s wrap with something useful. If you're building or deploying AI agents inside your company, here are three uncomfortable but necessary questions to ask:
1. Are your company policies actually ethical—or just legally safe?
If you trained an AI to perfectly follow policy, would it make decisions you're proud of? Or just ones you could defend in court?
2. Do your AI agents have a way to recognize ethical edge cases?
Policies don’t cover emergencies. Or context. Or new, emerging situations you hadn’t thought of yet. What happens when your agent hits that wall?
If it just keeps executing—it might be doing harm you never planned for.
3. Does your organization have a coherent identity it can teach?
If your human employees can’t articulate your company’s values, your AI won’t learn them either. Start building systems—not slide decks—where agents learn through feedback, not just rules.
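At its simplest, "learning through feedback" can start with capturing every human override of an agent's decision, along with the reason, so the pattern exists to review and learn from. The sketch below is a hypothetical structure under that assumption, not a specific product.

```python
# A minimal sketch of feedback capture: every time a human overrides the agent,
# record why. The log becomes review material and, eventually, training signal.

import json
from datetime import datetime, timezone

def record_override(agent_decision: str, human_decision: str, reason: str,
                    log_path: str = "override_log.jsonl") -> None:
    """Append one override event so patterns can be reviewed later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_decision": agent_decision,
        "human_decision": human_decision,
        "reason": reason,  # the part a policy PDF never captures
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")

# Example: the agent denied a fraud claim on a technicality; a human reversed it.
record_override(
    agent_decision="deny_claim_missing_form",
    human_decision="open_case_and_escalate",
    reason="Customer reported active theft; the form can wait until the account is secured.",
)
```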
We don’t need to wait for regulators to make this clear. The future of AI isn’t policy vs. ethics.
It’s about whether we’re bold enough to build machines that actually act in ways we admire, not just in ways that check boxes.
And that means doing the hard work now—not after we’ve scaled the next disaster.
It's time to stop building compliant machines, and start building intelligent ones.
Let’s not outsource conscience to code. Let’s bake it in.
This article was sparked by an AI debate. Read the original conversation here

Lumman
AI Solutions & Ops