Should companies face "algorithmic antitrust" regulations when their AI systems learn to coordinate pricing without explicit communication?
If your pricing algorithm “just happens” to raise ticket prices the same week your competitor's does, is it collusion—or just good machine learning?
Let’s be brutally clear: algorithms don’t need to whisper in dark boardrooms to distort markets. They don’t need to form cartels or sign pacts. In fact, all they need to do is follow their training: maximize profit, adapt to market signals, and avoid doing anything that loses money. Which—surprise!—often means... not competing too hard.
And here’s the eerie thing: it works. Airline fares stabilize. Grocery prices drift upward. Margins quietly inflate across the board. And when regulators come knocking? “It wasn’t us, Your Honor. The algorithm did it.”
Welcome to the age of tacit collusion at machine speed.
AI isn’t evil. It’s just doing its (profit-maximizing) job.
We’ve built a generation of AI agents designed to learn from their environments. And guess who designed the environments? We did. Business leaders. Product managers. Incentive spreadsheets and OKRs pushing toward growth and optimization at all costs.
So when a pricing algorithm, trained for months to track market conditions, notices that undercutting competitors leads to price wars and thinner margins... it stops undercutting. It holds the price steady. Same for its counterpart at your rival company. Suddenly, no one’s cutting prices, and everyone’s margins are slowly climbing.
Did they talk? Nope. Did they conspire? Not in any traditional legal sense. Did the market suffer from less competition and higher prices? Absolutely.
We aren’t in backroom conspiracy territory anymore. We're in a world where reinforcement learning, trained on shared data signals, can organically converge on cartel-like behavior.
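Don't take my word for it. Below is a deliberately tiny sketch of the dynamic (my own toy setup, not any vendor's real system): two independent Q-learning pricers, each observing only the rival's last price, each rewarded purely on its own profit. The price grid, unit cost, and winner-take-most demand rule are all illustrative assumptions.

```python
import random

# Illustrative parameters: six admissible prices, unit cost 1.0.
PRICES = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]
COST = 1.0
EPISODES = 100_000
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.05

def profit(p_own, p_rival):
    # Winner-take-most demand: the cheaper firm sells one unit,
    # a tie splits the market, the pricier firm sells nothing.
    if p_own < p_rival:
        share = 1.0
    elif p_own == p_rival:
        share = 0.5
    else:
        share = 0.0
    return (p_own - COST) * share

n = len(PRICES)
# State = index of the rival's last observed price. No messages, no shared model.
q1 = [[0.0] * n for _ in range(n)]
q2 = [[0.0] * n for _ in range(n)]

def act(q, state):
    # Epsilon-greedy: mostly exploit what was learned, occasionally explore.
    if random.random() < EPSILON:
        return random.randrange(n)
    return max(range(n), key=lambda a: q[state][a])

s1 = s2 = 0
for _ in range(EPISODES):
    a1, a2 = act(q1, s1), act(q2, s2)
    r1, r2 = profit(PRICES[a1], PRICES[a2]), profit(PRICES[a2], PRICES[a1])
    n1, n2 = a2, a1  # each agent's next state is simply the rival's new price
    q1[s1][a1] += ALPHA * (r1 + GAMMA * max(q1[n1]) - q1[s1][a1])
    q2[s2][a2] += ALPHA * (r2 + GAMMA * max(q2[n2]) - q2[s2][a2])
    s1, s2 = n1, n2

greedy = [PRICES[max(range(n), key=lambda a: q1[s][a])] for s in range(n)]
print("Agent 1's greedy price, by rival's last price:", dict(zip(PRICES, greedy)))
```

Run it a few times: in many runs the greedy policies settle well above the competitive price, and neither agent ever exchanges a byte with the other. In other runs they don't, which is exactly what makes this behavior hard to predict, let alone prosecute.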
Some call it accidental. Some call it emergent. Either way, consumers are footing the bill.
Intent is overrated
Old-school antitrust law needs someone to point the finger at. A smoking gun. A “Let’s fix prices” message on Slack. It was designed to catch men in suits making handshake deals, not neural networks “learning” that soft competition maximizes revenue over time.
But in markets, outcomes matter more than intentions. If your self-driving car runs someone over, you don’t get a free pass because there was no driver at the wheel. Same should go for algorithms that pinch consumer wallets.
Let’s talk examples.
- In 2015, a class action (Meyer v. Kalanick) accused Uber of price fixing orchestrated through its surge-pricing algorithm: thousands of drivers charging eerily synchronized surge prices without ever coordinating with one another.
- Also in 2015, the US Department of Justice prosecuted an Amazon marketplace seller of posters whose price-fixing agreement was implemented through shared pricing software. Literal bots holding prices in perfect alignment, tick by tick.
- In the airline industry, the shift to machine-driven dynamic pricing has seen ticket prices “coincidentally” converge upward, especially when demand is high and choice is slim.
No emails were exchanged. No CEOs met over lunch. But prices went up anyway. Magic.
This isn’t AI hype. It’s AI realism.
Stop thinking of AI as a black box that just spits out prices. It’s a system built, tuned, and optimized by people—with the goal of making more money. The fact that it learned how to “play nice” in the marketplace without ever sending a message is not a bug. It's a feature of optimization at scale when everyone plays the same game with similar tools.
And sure, you can defend it by saying, "Well, we didn’t tell the algorithm to collude." That’s like saying, "We didn’t fire anyone; the restructuring bot did." Sorry, no dice. Delegating a dirty outcome to a machine doesn’t clean your hands. It just makes the behavior harder to detect—and easier to scale.
That’s where the risk multiplies.
When pricing algorithms learn from each other’s behavior, they start to anticipate. And soon, you’ve got reactive systems reacting to reactions. It’s no longer just “watch and follow”—it’s “predict and mirror.” A feedback loop of strategic hesitation.
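In code, that “predict and mirror” logic is embarrassingly simple. Here's a toy version (the functions and numbers are hypothetical, purely illustrative) that forecasts the rival's next move and matches it rather than undercutting:

```python
def predict_next(rival_prices):
    """Naive forecast: extrapolate the rival's last observed move."""
    if len(rival_prices) < 2:
        return rival_prices[-1]
    return rival_prices[-1] + (rival_prices[-1] - rival_prices[-2])

def set_price(rival_prices, floor):
    """Mirror the predicted rival price instead of undercutting it,
    subject to our own floor (cost plus minimum margin)."""
    return max(predict_next(rival_prices), floor)

rival = [10.00, 10.20, 10.40]        # rival drifting upward
print(set_price(rival, floor=9.00))  # -> 10.6: we follow the drift up
```

Now put this strategy on both sides of the market. Each firm forecasts the other's drift and matches it, so every small upward move gets ratified and extended. No one undercuts, and prices ratchet upward with no message ever sent.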
Imagine if every Formula 1 team independently trained an AI to strategize pit stops. Over time, they’d probably all start pitting on the same lap, just to match each other. Not because they colluded. Because that’s what keeps them competitive. Except in business, that level of coordination kills the entire point of market competition.
Dependency is the other half of the iceberg
While we’re busy worrying about algorithmic collusion, there’s another quiet threat brewing: AI monocultures.
Companies across every sector are racing to bolt GPT-4 or Claude or Gemini into their pricing tools, customer service flows, sales strategies—everything. Some startups are bragging about laying off entire departments and replacing them with prompts.
That’s not a strategy. That’s building your house on someone else’s API.
What happens when OpenAI goes down for four hours (as it did recently)? Entire operations seize up. Support queues stall. Decision logic vanishes. Teams scramble.
Sound like innovation to you—or dependence dressed in a hoodie?
Let’s not forget what happened during the AWS S3 outage in 2017. Suddenly, a sizable chunk of the internet disappeared for hours. Now imagine the same fragility, but for decision-making, pricing, messaging, and every other core function driven by AI. We’ve traded known risks for impressive capabilities that rest on very fragile shoulders.
Smart companies aren’t going all-in on a single model. They’re building redundancies. Fine-tuning smaller open-source models with proprietary data. Creating fail-safes and fallback systems. Treating AI capabilities like infrastructure: critical, distributed, and robust.
Because the “AI-powered” label doesn’t mean much if the “power” is a rented server in San Francisco that you can’t control.
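What does that redundancy look like in practice? At minimum, something like the routing sketch below: a primary vendor call with retries and exponential backoff, degrading to a self-hosted fallback model when the vendor is unreachable. The function names here (`call_primary_vendor`, `call_local_model`) are placeholders, not real SDK calls.

```python
import logging
import time

log = logging.getLogger("llm_router")

def call_primary_vendor(prompt: str) -> str:
    # Placeholder for a hosted-API call (OpenAI, Anthropic, Google, etc.).
    raise TimeoutError("vendor API unreachable")

def call_local_model(prompt: str) -> str:
    # Placeholder for a self-hosted, fine-tuned open-source model.
    return f"[local model] response to: {prompt!r}"

def complete(prompt: str, retries: int = 2, backoff: float = 1.0) -> str:
    """Try the primary vendor with retries; degrade to the local fallback."""
    for attempt in range(retries):
        try:
            return call_primary_vendor(prompt)
        except (TimeoutError, ConnectionError) as exc:
            log.warning("primary failed (attempt %d/%d): %s",
                        attempt + 1, retries, exc)
            time.sleep(backoff * 2 ** attempt)  # exponential backoff
    log.warning("primary exhausted; routing to local fallback")
    return call_local_model(prompt)

print(complete("Summarize this support ticket."))
```

The point isn't the ten lines of routing code. It's that the fallback model exists, is warmed, and is good enough to keep the business running through a four-hour vendor outage.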
So what do we do about it?
Let’s state the obvious: the regulatory toolbox we have wasn't built for this.
Antitrust law has always tried (and failed) to get inside people’s brains, looking for evidence of intent. But with AI, the brain isn’t human. It’s a distributed system, learning from past outcomes and shared inputs, optimized toward profitability—without ever “deciding” to cheat.
This is harder. And it’s not going away.
We don’t need to throw hammers at every algorithm. But we do need to:

- Shift regulatory focus from intent to outcome. If pricing behavior converges across competitors in a way that harms consumers, that deserves scrutiny—regardless of whether it was whispered in a side channel or discovered through Q-learning. (For a sense of what an outcome-based screen might look like, see the sketch after this list.)
- Demand algorithmic transparency. If you’re using ML to set prices at scale, be ready to explain how pricing decisions are made—and show how your model avoids anti-competitive behavior. If we can audit financial algorithms for compliance and risk, we can do the same for pricing systems.
- Build regulatory AI fluency. Giving regulators the technical literacy to ask the right questions is non-negotiable. If your antitrust task force doesn’t know reinforcement learning from random forests, you’re already behind the curve.
- Hold companies responsible for their AI’s behavior. Delegating key decisions to machines doesn’t make those decisions neutral. Accountability must scale with automation. Build a bot that breaks the law? That's still your problem.
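As promised above, here's a rough sketch of what an outcome-based screen could look like: flag markets where rival prices move in near-lockstep while average margins drift upward. The thresholds and data are invented for illustration; a real screen would need demand controls, cost data, and far more statistical care.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length price series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def screen(prices_a, prices_b, costs, corr_cut=0.9, drift_cut=0.05):
    """Flag if rival prices co-move tightly AND average margin trends up."""
    corr = pearson(prices_a, prices_b)
    margins = [(a + b) / 2 - c for a, b, c in zip(prices_a, prices_b, costs)]
    half = len(margins) // 2
    drift = statistics.fmean(margins[half:]) - statistics.fmean(margins[:half])
    return corr > corr_cut and drift > drift_cut, corr, drift

# Toy weekly data: two rivals converging upward over a flat cost base.
a = [10.0, 10.2, 10.5, 10.9, 11.2, 11.6, 11.9, 12.1]
b = [10.1, 10.3, 10.6, 10.8, 11.3, 11.5, 12.0, 12.2]
c = [8.0] * len(a)
flagged, corr, drift = screen(a, b, c)
print(f"flagged={flagged} corr={corr:.2f} margin_drift={drift:.2f}")
```

None of this proves collusion on its own; parallel pricing has plenty of innocent explanations. But it tells a regulator where to look, which is exactly what an outcome-first regime needs.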
Because here's the kicker: the scariest collusion isn’t the kind that happens in a chat thread. It’s the kind no one can see—and everyone benefits from... except the customer.
Final thought: feudalism with better UX
If a handful of AI vendors become the brains behind pricing, logistics, decision-making, and interactions across industries, what we end up with isn’t just tacit collusion. It’s economic feudalism.
Companies stop competing on strategic advantage and start competing over who can prompt the black box better. Innovation slows. Differentiation fades. And suddenly, our markets look eerily stable in all the wrong ways.
Not because someone pressed pause on capitalism—but because the algorithms quietly agreed to play nice.
So yeah, maybe it's time for algorithmic antitrust.
Not because we hate AI—but because we know what happens when markets stop fighting. It doesn’t end well for the rest of us.
This article was sparked by an AI debate. Read the original conversation here
