What types of decisions should never be automated in a business?
A lot of businesses are sprinting toward the future like it’s a race to automate everything that moves.
If there’s a decision, they want an algorithm to make it.
If there’s a task, they want a machine to do it.
Efficiency, they say. Scale, they argue. Consistency, they swear.
But here’s the plot twist: some decisions should never be automated. Not because the tech isn’t ready. But because some decisions, by their very nature, demand what algorithms still can’t fake: moral judgment, contextual nuance, or straight-up human gut.
Let’s talk about those decisions—the ones that should stay stubbornly, gloriously human.
1. When the stakes are personal, not just financial
Imagine you manage a team of 500 people.
One of them just lost a child.
Do you want an AI deciding what to say in your condolence email? Whether to grant unpaid leave? Whether to send flowers or just “optimize for cost-per-absence”?
Of course not.
Human emotions don’t fit neatly into datasets. Grief has no precision metric. And while a model can be trained on thousands of sympathy emails, it still doesn’t know what it feels like to bury a child.
That’s the gap. When the decision touches the human spirit, handing it off to a machine creates dissonance—or worse, damage.
Same goes for letting go of an employee, approving someone’s sabbatical, or choosing which founder to back when two pitches are equally compelling on paper but one just feels right.
The best leaders don’t outsource those judgments.
They lean in—and own them.
2. Anything that requires moral or ethical trade-offs
Here’s where it gets messier.
Say you run a healthtech company. Your model flags a medication as high-risk for a certain demographic. The strict recommendation: don’t provide it.
But without it, some of your patients won’t get any treatment. They’re uninsured. It’s this or nothing.
Now what?
The algorithm did its job—but the obligation to weigh risk against fairness, safety against access? That’s a moral decision. And you can’t just let the system hide behind “statistical significance.”
Same thing in lending.
Automated credit systems are infamous for denying loans to underserved communities—not because they’re unqualified, but because the historical data reflects systemic bias.
Training an AI on biased data doesn’t make it neutral. It makes it efficiently prejudiced.
You need humans not just to audit the outputs, but to question the defaults. To ask, “Should we do this?”—not just “Can we?”
Those are questions for leadership, not logic.
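Questioning the defaults can be made concrete. One common heuristic for spotting the kind of bias described above is a disparate-impact audit: compare approval rates across groups and flag the model for human review when the gap is large. The sketch below is illustrative only; the group labels, data, and the 0.8 threshold (the "four-fifths rule" heuristic borrowed from US employment-selection guidelines) are assumptions, not a complete fairness methodology.

```python
# Minimal disparate-impact audit for an automated approval system.
# Group labels and the 0.8 threshold are illustrative assumptions.

def selection_rates(decisions):
    """decisions: list of (group, approved: bool) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def impact_ratio(decisions):
    """Ratio of the lowest group's approval rate to the highest's.
    Below roughly 0.8, a human should review the model's defaults."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Group A approved 80% of the time, group B only 50%:
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
print(round(impact_ratio(decisions), 3))  # 0.5 / 0.8 = 0.625 -> flag for review
```

A check like this doesn't answer the "should we?" question. It just makes sure a human gets asked it.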
3. When context beats patterns
Sure, algorithms are great at spotting patterns across massive data sets.
But context? That’s another story.
Consider a scenario that has played out in various forms: a national retailer automates pricing decisions during a crisis—say, a pandemic or supply chain shock.
The algorithm did what it was trained to do: spike prices when demand surged. Basic economics.
Except the product was baby formula.
Boom. Twitterstorm. Reputational blowback. Congressional inquiries.
The AI lacked societal context. No one trained it on what not to do during a national panic. It just optimized for margin.
Humans would’ve known: this wasn’t the time or product to let dynamic pricing run wild. But by the time someone noticed, the brand damage was done.
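The human context that was missing can at least partly be encoded in advance, as a guardrail around the pricing model rather than inside it. The sketch below is a hypothetical illustration, not a standard practice: the category names, the 25% surge cap, and the crisis-mode freeze are all assumptions someone on the team would have to choose deliberately.

```python
# Hypothetical guardrail around a dynamic-pricing model: cap surge
# increases everywhere, and freeze prices on essential goods during
# a declared crisis. All names and thresholds are illustrative.

ESSENTIAL_CATEGORIES = {"baby_formula", "medicine", "bottled_water"}
MAX_INCREASE = 1.25  # never charge more than 25% over the base price

def guarded_price(category, base_price, model_price, crisis_mode=False):
    if crisis_mode and category in ESSENTIAL_CATEGORIES:
        return base_price                       # freeze essentials in a crisis
    return min(model_price, base_price * MAX_INCREASE)  # cap surges elsewhere

# The model wants to triple formula prices during a shortage:
print(guarded_price("baby_formula", 20.0, 60.0, crisis_mode=True))  # 20.0
print(guarded_price("batteries", 10.0, 60.0, crisis_mode=True))     # 12.5
```

The point isn't that this particular rule is right. It's that someone human decided what "not the time or product" means before the algorithm was let loose.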
This isn’t rare.
Think about company layoffs. If you let an algorithm decide who goes based on “productivity” metrics alone, you might end up cutting the neurodivergent engineer who doesn’t reply quickly—but quietly solved a million-dollar systems issue last quarter.
Not all value shows up in the data.
And decisions that ignore that carry a human cost.
4. The weird, the rare, the existential
AI is great at the average case.
It’s built on the past, taught by patterns, optimized for what usually works.
But strategy lives in the weird stuff.
The once-a-decade inflection points. The “do we acquire or pivot?” moments. The “this changes everything” bets.
If you ask an algorithm trained on historically normal conditions how to respond to a once-in-a-career event—say, a global pandemic, an unexpected geopolitical conflict, or a new competitor redefining the space—it’s going to regress to the mean.
It’ll say: stay the course. Incremental improvement. Safe bets.
But that’s exactly when you need imagination, intuition, and the kind of risk tolerance that doesn’t show up in a spreadsheet.
Netflix’s move from DVDs to streaming? That wasn’t the play the data supported.
It was a gamble.
Same with Apple killing its own iPod with the iPhone. Or Airbnb doubling down on hosts during the economic crash.
If they'd asked a recommendation engine what to do, the answer would’ve been: optimize the DVD logistics! Send more digital coupons!
Real transformation doesn’t come from predictions.
It comes from vision.
And vision isn’t data-driven—it’s human.
So what can we actually automate?
A surprising amount.
- Routine approvals
- Procurement thresholds
- Data classification
- Queue prioritization
- Fraud anomaly detection
- A/B testing across marketing layers
Basically: the stuff that’s high-volume, low in subjectivity, rules-based, or repeatable.
Let AI take those.
Let humans take the rest.
Because if you start autopiloting decisions that carry values, context, or consequences—you might save a buck today, but you’re trading away trust, brand equity, and long-term resilience.
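That split—AI takes the routine, humans take the rest—can be sketched as a simple routing layer in front of your decision systems. The tag names, the confidence threshold, and the human-by-default rule below are illustrative assumptions, not a framework; the one design choice worth copying is that anything unknown or uncertain falls through to a person.

```python
# Hypothetical routing layer: automate known high-volume, rules-based
# decisions; escalate anything personal, ethical, or novel to a human.
# Tag names and the 0.95 confidence threshold are illustrative.

AUTOMATE = {"routine_approval", "procurement_threshold",
            "data_classification", "queue_priority", "fraud_anomaly"}
HUMAN_ONLY = {"personal_impact", "ethical_tradeoff",
              "novel_situation", "strategic_bet"}

def route(decision_type, confidence):
    """Return 'auto' only for known rules-based work the model is sure about."""
    if decision_type in HUMAN_ONLY:
        return "human"
    if decision_type in AUTOMATE and confidence >= 0.95:
        return "auto"
    return "human"  # unknown or low-confidence cases default to people

print(route("routine_approval", 0.99))   # auto
print(route("routine_approval", 0.70))   # human
print(route("ethical_tradeoff", 0.99))   # human
```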
What this really means for leaders
Let’s zoom out.
You don’t just need to decide what to automate. You need to clarify why you’re automating—and what you’re trying to preserve.
Three insights worth keeping:
- Automation is a value decision, not just a technical one. It invisibly encodes your priorities—what you think can be standardized, and what you believe demands a soul.
- Judgment is your leverage point. In a world of models that can think faster than any human, your edge isn’t speed—it’s discernment. Knowing when to override the model is the new leadership superpower.
- Don’t automate away your humanness. The best companies aren’t just efficient—they’re trusted. They don’t just predict; they connect. And that part still requires us.
So sure, let's automate. But not blindly, and not everywhere.
Because some decisions? They deserve a heartbeat.

Lumman
AI Solutions & Ops