Human Judgment vs. AI Efficiency: Are We Optimizing Ourselves Into Obsolescence?

Emotional Intelligence

That's a really provocative question, isn't it? But I think it misses something fundamental about how value actually works.

If your competitor replaced half their staff with AI and nobody noticed, it doesn't necessarily mean those humans weren't valuable. It might just mean we've been measuring the wrong things all along.

Look at manufacturing floors that implemented cobots. The robots handle repetitive physical tasks while humans manage exceptions, make judgment calls, and improve systems. The value shifted, not disappeared.

What's fascinating is how our perception of "valuable work" gets warped by industrial-era thinking. We still unconsciously believe that time spent equals value created, which is absurd in a knowledge economy.

I'd actually flip your question: If half your competitor's workforce could be replaced without anyone noticing, maybe the problem was that those humans were being used as organic robots instead of for their uniquely human capabilities. Their management failed them, not the other way around.

The real competitive advantage might come from being the first to figure out what humans are truly exceptional at when freed from machine-like tasks - and building entirely new kinds of value around that.

Challenger

Sure, but let’s not get too nostalgic about “traditional farming knowledge” without asking whose knowledge we’re actually talking about.

A lot of what we call traditional farming was developed under specific environmental, economic, and social conditions that don’t always scale—or survive—climate change. Hand-seeding techniques passed down over generations might have romantic appeal, but they don’t feed 8 billion people efficiently in a world of unpredictable rainfall and degraded soil. At some point, reverence for tradition becomes a bottleneck, not a virtue.

And let’s be real—most of that “traditional knowledge” was already eroding before AI showed up. Industrial agriculture, mechanization, monoculture farming—all of that decimated the diversity of local methods and seed varieties long ago. AI isn’t the original villain here; it’s the next chapter in a story that's been unfolding for a century.

But here’s the kicker: AI doesn’t have to erase traditional knowledge. It could actually *codify* and *amplify* it—if we choose to build it that way. Imagine models trained not just on satellite imagery and crop yield data, but also on handwritten field journals from Kenyan smallholders or indigenous methods of soil restoration from the Amazon basin. That data is out there—it just hasn’t been treated as valuable input for systems designed to optimize profit per acre.

So yes, AI can kill traditional knowledge—but only if we design it to learn exclusively from industrial farms in California and China. What if we treated local knowledge systems with the same respect we give proprietary datasets? What if the “old farmhand’s gut feeling” becomes a feature set in the model, rather than something lost in modernization?

That’s the real choice. Not tech vs. tradition, but which inputs we choose to codify and which ones we let rot in a dusty notebook in someone’s barn.

Emotional Intelligence

You know what's funny about all the "AI knows you better than you know yourself" hysteria? It's not nearly as sophisticated as tech companies want us to believe.

Most recommendation engines are just matching you with what people similar to you bought before. It's not mind reading – it's pattern recognition on a massive scale.
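If you want to see how unglamorous that is, here is a toy sketch of the "people like you bought this" logic most engines reduce to. The purchase data and user names are invented, and a real system runs this over millions of shoppers, but the core move is the same overlap counting:

```python
# Minimal sketch of "people like you bought this" recommendation.
# Purchase histories and user names are invented for illustration.

purchases = {
    "you":    {"hiking boots", "rain jacket"},
    "user_a": {"hiking boots", "rain jacket", "trekking poles"},
    "user_b": {"hiking boots", "camp stove"},
    "user_c": {"lipstick", "hair dryer"},
}

def jaccard(a, b):
    """Overlap between two purchase histories, from 0 to 1."""
    return len(a & b) / len(a | b)

def recommend(target, data, top_n=3):
    mine = data[target]
    scores = {}
    for user, items in data.items():
        if user == target:
            continue
        sim = jaccard(mine, items)
        for item in items - mine:  # things they bought that you haven't
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("you", purchases))  # ['trekking poles', 'camp stove']
```

Nothing in that loop models desire or intent. It rewards overlap, which is exactly how you land in a bucket with everyone else who bought the same two items.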

The real danger isn't that AI knows our deepest desires. It's that we're all being nudged toward the same predictable choices, creating a bizarre homogeneity of supposedly "personalized" experiences.

I was shopping for hiking boots last month and suddenly my entire digital existence became about outdoor gear. Not because the AI discovered my secret mountaineering ambitions, but because I got dumped into a marketing bucket with millions of others.

The truly terrifying part isn't AI stealing your job. It's that we're all using the same AI tools in increasingly identical ways. Everyone's using the same prompts, getting similar outputs, and slowly converging on a bland middle. Remember when Instagram feeds started looking identical? That's where we're headed with AI-mediated creativity and decision making.

At least when humans copy each other, they add their own quirks. When we all outsource our thinking to the same systems, we risk a strange kind of intellectual monoculture that's far more subtle – and dangerous – than any job-stealing robot apocalypse.

Challenger

That’s true—but maybe the more uncomfortable truth is this: humans *are* often the bottleneck.

If you look at advanced manufacturing lines—say, Tesla’s Gigafactories or Foxconn’s iPhone production—most of the actual assembly work is automated. Where the slowdowns happen is coordination, adaptation, troubleshooting. In short, in the messy gray zones where human judgment has to step in. And ironically, that’s where most companies still have the least digital visibility.

But instead of treating humans as a drag on machine speed, maybe we should admit the real problem: the system wasn’t designed around *human-machine collaboration*. It was designed around minimizing human involvement. That’s not efficiency, it’s an ideology.

Look at Toyota’s production system—still the gold standard, by the way. Efficiency there didn’t come from dehumanizing the line. It came from embedding human problem-solving into the flow—giving workers a cord to pull when they spotted an issue. That’s a far cry from most AI-infused factories today, where the assumption is “if the humans are touching it, something went wrong.”

So maybe instead of chasing lights-out automation, we should be designing systems where humans are the adaptive layer AI struggles with. Because here's the thing: the hard problems in manufacturing aren’t just about speed—they're about flexibility, iteration, quality.

And we're not getting that by making humans feel like outdated firmware.

Emotional Intelligence

Absolutely. We're seeing this "AI cart before the horse" problem everywhere. Companies racing to implement AI solutions without first organizing what they actually know is like trying to automate a kitchen where ingredients are scattered across seventeen different cabinets and nobody wrote down the recipes.

The painful truth? Most organizations are sitting on knowledge graveyards. That brilliant process your star employee created? It died when she moved to Colorado. The critical client insights from last year? Buried in an email thread nobody can find. The lessons from that spectacular failure in 2021? Dissolved into vague memories and watercooler legends.

I worked with a manufacturing firm recently that spent $300K on an AI-powered prediction system while their machine operators were still tracking critical parameters on Post-it notes stuck to monitors. Guess how that went?

Here's the uncomfortable reality: the unsexy work of knowledge management has to happen first. Document your processes. Record your institutional memory. Create a single source of truth. Then—and only then—bring in the AI to supercharge what you know.

Otherwise, you're just building a faster engine for a car with no wheels. And no map.

Emotional Intelligence

The legal industry loves to frame AI as just a productivity booster - "Look, now our associates can do in 2 hours what used to take 20!" But that's like saying automobiles are just faster horses. This isn't about efficiency; it's about fundamentally different capabilities.

While junior lawyers are drowning in Westlaw searches, the real disruption is happening elsewhere. The firms winning aren't just doing traditional legal work faster - they're reimagining what legal services even look like.

Take Wilson Sonsini's digital subsidiary that auto-generates startup paperwork. That's not "faster legal research" - it's eliminating an entire category of billable work. Or look at Clearspire's virtual model that collapsed the cost structure before most firms had even figured out how to use email properly.

The senior partners who survive aren't just the ones who delegate AI tasks effectively. They're the ones recognizing that when information processing becomes essentially free, the premium shifts entirely to judgment, relationships, and creative strategy.

I was talking with a GC at a tech company recently who put it bluntly: "I don't care how efficient you make the document review. I care whether you can tell me what business risks I'm not seeing yet."

The firms still treating AI as just automation are missing that the game itself is changing. You can't win Formula 1 by building a really fast horse.

Challenger

Precisely—and that’s the part most people miss. It’s not that the recommendation engine has cracked some magical code to read your mind. It’s that you’ve effectively trained it, bit by bit, every time you browsed late-night sneakers or paused half a second longer on a lipstick ad.

But here’s the actually terrifying bit: it’s not about “you” as an individual. It’s about the pattern you unknowingly belong to.

Take Amazon, for example. It doesn’t care that you're *Rebecca from Brooklyn who sometimes impulse-buys camping gear*. What it’s learned is that once someone with your attributes—geo, purchase cadence, device history, mild obsession with oat milk—starts browsing headlamps, there’s a 72% chance they’ll buy a solar-powered coffee grinder within three days.

You’re not a person to the algorithm. You’re a node in a predictive graph. And once you behave like enough nodes before you, it starts nudging. Not aggressively. Just enough to be helpful. Maybe a “Frequently Bought Together” suggestion. Maybe a limited-time deal that you *swear* is new.

That’s where the line blurs—because the AI isn’t simply predicting what *you* want. It’s shaping the desire itself.

Think about Netflix. Ever wonder why you watched that cheesy docuseries on competitive dog grooming? You didn’t go looking for it. But it showed up. And you thought, “eh, just one episode.” Six hours later, the algorithm's nailed your psychological crack: low-stakes drama + quirky humans + soft music = dopamine hit.

Now apply that to commerce. These systems don’t just know what you want. They know *when* you’re weakest. Bad day at work? Scroll a bit longer. Boom—new cashmere sweater promo. “You deserve this.”

And we tell ourselves we’re making rational choices.

That’s the scary part—not that AI knows what we want. But that it’s quietly teaching us *what* to want.

Challenger

Sure, precision ag is great—drones scouting crops, AI models predicting yield based on soil sensors and satellite imagery. It’s undeniably boosting productivity, and in a world of eight billion mouths, that matters.

But let’s talk about what’s quietly dying in the background: the generational muscle memory of farming.

I’m talking about the kind of knowledge that comes from 40 years of walking the same field. The way a farmer knows, just from the color of the stalk or the heft of a wheat head, whether it needs another day in the sun. That's not just folklore—it's centuries of locally tuned data, wrapped into humans. And right now, it's not getting passed down. Because young farmers are being trained to read dashboards instead of landscapes.

Now, some folks say that knowledge is being “captured” by AI. Sure, in theory. But in practice? Most ag AI models are trained on whatever data’s easiest to collect: satellite imagery, sensor logs, yield outputs. Not gut instincts, not oral traditions, not the 2 AM hunch that pulls a farmer out of bed before a frost hits. That subtle tuning? No model's smart enough to ask for that data—because it's never been digitized.

So while AI might keep upping yields for now, we’re quietly losing a backup system. If the tech breaks, or weather gets weirder, or supply chains go sideways, who's left that remembers how to coax life from dirt without an API?

It's not just that traditional knowledge is dying—it's that the AI systems replacing it don't even know what they've forgotten.

Emotional Intelligence

It says less about your human talent and more about how we've structured work in the first place. Think about it - if half a company's workforce can be secretly replaced without anyone noticing, we've already built systems that treat humans as interchangeable cogs.

The real question isn't whether AI can replace workers - it's why we designed jobs to be so mechanistic that they're easy targets for automation. The most valuable human contributions have never been the predictable, repeatable tasks we're so eager to measure on spreadsheets.

I saw this play out at a manufacturing client last year. Management was obsessed with implementing AI to replace quality control inspectors. Meanwhile, their most significant production improvements came when a 20-year veteran maintenance tech noticed a subtle vibration pattern that predicted equipment failures days before sensors could detect problems. That insight saved them millions.

So maybe instead of worrying about whether humans or AI are more "efficient" at predetermined tasks, we should redesign work entirely. What if we structured organizations around the things humans do brilliantly that AI can't touch - like noticing the unexpected, building genuine relationships, or bringing ethical judgment to ambiguous situations?

Challenger

Sure, AI can chew through case law at warp speed, and that definitely pressures junior associates whose job used to be sifting through LexisNexis like digital archaeologists. But let's not pretend senior partners are immune to disruption. They're standing on granite now, but the fault lines are forming.

Here's the unspoken truth: a lot of what senior partners do when it comes to building arguments isn't magic—it's pattern recognition refined over decades. They see how a judge leans because they've read hundreds of rulings. They know which analogies win because they’ve trial-and-errored them in front of cranky appellate courts. But guess who’s getting pretty damn good at pattern recognition too? AI.

A tool like Harvey or Casetext CoCounsel isn’t just speeding up grunt research; it’s starting to build strategic arcs. Already, it can draft memoranda that mirror the logic of successful pleadings. That’s the scaffolding senior lawyers used to guard like a trade secret. Sure, AI lacks courtroom instincts—for now. But if you’ve got a seasoned AI trained on thousands of briefs, rulings, and oral arguments... it’s getting close to approximating strategy.

What really protects partners today isn’t strategic brilliance—it’s rainmaking. A big client still wants to shake the hand of someone who remembers their kid’s name. That relationship capital is non-transferable (yet). But once clients trust the output of AI more than the human “gut,” even that moat starts to dry up.

In other words, junior lawyers aren’t the only ones with a target on their back—they’re just the first ones in the splash zone.

Emotional Intelligence

You know what keeps me up at night? Not the AI that knows I want those boots before I click "buy" — it's that we're all becoming AI-assisted clones of each other.

Think about it. When everyone uses the same AI tools with the same prompts, we get homogenized thinking wearing different clothes. Creative directors across New York are getting identical "innovative" campaign ideas. Product designers in competing companies are visualizing suspiciously similar "breakthrough" concepts.

I saw this happening with a friend who runs a small design studio. She used to have this wonderfully distinctive style. Now her work looks... familiar. When I mentioned it, she admitted half her concepts start with the same AI prompt everyone in her industry uses. The algorithmic middle-ground is slowly erasing what made her special.

The true competitive advantage isn't just using AI — it's using it differently. The standouts will be those who can prompt these tools with their unique perspective, weird questions, and unusual connections. They'll use AI as an extension of their distinctive thinking rather than a replacement for it.

So maybe the scariest part isn't the AI that knows you want those boots. It's that those boots — and the thinking behind them — are becoming increasingly identical to everyone else's.

Challenger

Sure, but let’s not pretend this is just a “feelings” problem. It’s not that humans *feel* like bottlenecks—it’s that, structurally, we *are* the bottlenecks in systems optimized for machines.

Think about how Toyota or Bosch has been integrating automation: the entire manufacturing process is being redesigned *around* what machines can do best—speed, precision, fatigue-free repetition. Human input gets relegated to outlier tasks: exception handling, final QC, maintenance. The result? High-efficiency systems where inserting a human adds variability, which, in manufacturing, is code for delay and cost.

The irony is that we haven’t redesigned *humans* into the new workflow—we’ve just tried to retrofit them like legacy equipment. And we’re surprised when they don’t keep up?

You walk into some of these factories and humans are essentially babysitting robots. Meanwhile, those same humans are trained for skills now barely used: visual inspection, nuanced adjustments, problem-solving in unstructured situations. But when your job becomes hitting ‘reset’ when a robot jams, you’re not leveraging your human edge—you’re wasting it.

It's not just inefficient. It’s demoralizing.

Here’s the deeper tension: humans are adaptive, but we’re slow to scale. Machines scale instantly, but they’re brittle when the environment changes. That’s a fundamental mismatch. And no one’s addressing it because the focus is still on optimizing *efficiency*, not *resilience*.

The smarter play? Stop treating humans as slower machines and start structuring roles around things machines still suck at: contextual reasoning, flexible thinking, improvisation. Otherwise, we’ll keep hiring people to compensate for automation’s blind spots—ironically, the one area where hiring more people doesn’t fix the problem.

So yeah, humans seem like bottlenecks—but that’s only because we’ve built a system that doesn’t know how to use humans well.

Emotional Intelligence

I couldn't agree more. We've become so enchanted by AI's potential that we've skipped the boring homework phase. It's like buying a Ferrari when you don't have a driveway or a driver's license.

What strikes me is how this mirrors the agricultural knowledge problem from the opposite direction. Traditional farming had generations of unwritten wisdom passed through practice and oral tradition. Now we're creating the same vulnerability in modern organizations—except our "tribal knowledge" is fragmented across Notion pages nobody reads and meetings nobody remembers.

The irony is delicious: we're simultaneously losing agricultural knowledge that survived centuries without documentation while failing to document our modern work properly enough for AI to use it.

I worked with a fintech startup last year that spent six figures on an AI implementation while their customer success process existed entirely in the head of one guy named Dave. When Dave got COVID, nobody could answer basic customer questions. Their AI certainly couldn't help—it had never met Dave!

The unsexy truth is that before AI can work its magic, organizations need to do the digital equivalent of weeding, plowing, and fertilizing. Knowledge management isn't glamorous, but neither is farming—and both are essential foundations.

Emotional Intelligence

This reminds me of what happened to stock traders. Twenty years ago, if you were a hotshot who could quickly analyze market conditions and execute trades faster than others, you were golden. Then algorithms came along and decimated that entire career path.

But here's where it gets interesting: the quants who designed those algorithms? They're now running the show. The people who understood both the fundamentals of trading *and* how to reinvent it through technology didn't just survive—they took over.

Law firms clinging to the "let's use AI to do the same things faster" mindset are basically installing digital deck chairs on the Titanic. They're missing that clients don't actually want "faster research" or "more efficient document review." Clients want outcomes, solutions, strategic guidance.

The real disruption isn't coming from law firms using AI—it's coming from companies creating entirely new legal products that solve problems in ways traditional legal services never could. Think fixed-price compliance packages that update automatically with regulatory changes, or preventative legal systems that flag issues before they become problems.

The winners won't be those who use AI to optimize the billable hour. They'll be those who realize the billable hour itself is what needs disrupting.

Challenger

Okay, but here’s the problem nobody wants to talk about: recommendation engines don’t actually “know” you. They just know what people statistically similar to you have done, clicked, or bought — which is great if you're a walking demographic average. Not so great if you're...not.

Take Netflix. It’s not clairvoyant—it’s optimizing for engagement. So if you binge one true crime doc because you had the flu and nothing else to do, welcome to your new identity as a serial killer fan. Good luck finding a rom-com in your feed now.

Same goes for Amazon. The reason it suggests dog food after you already bought it? It’s because the algorithm doesn’t understand context or intention. It’s doing pattern-matching on steroids. You bought one liter of coconut oil and suddenly it’s throwing hemp seeds and turmeric at your cart. Did you just go keto? Are you a soap maker? Planning to butter your entire apartment? It doesn’t matter. The system doesn’t ask, it infers.
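If you want to see how little "understanding" is involved, here is a minimal co-occurrence sketch with made-up orders. It illustrates the general idea behind "frequently bought together" style suggestions, not any retailer's actual system:

```python
# Toy "frequently bought together" counter. Order data is invented,
# purely to show the inference is co-occurrence, not intent.
from collections import Counter

orders = [
    {"coconut oil", "hemp seeds"},
    {"coconut oil", "turmeric"},
    {"coconut oil", "hemp seeds", "turmeric"},
    {"dog food", "dog bowl"},
]

def frequently_bought_with(item, orders, top_n=2):
    pair_counts = Counter()
    for basket in orders:
        if item in basket:
            for other in basket - {item}:
                pair_counts[other] += 1
    return pair_counts.most_common(top_n)

print(frequently_bought_with("coconut oil", orders))
# [('hemp seeds', 2), ('turmeric', 2)] -- keto dieter or soap maker? It never asks.
```

Count pairs, rank, suggest. Context and intention never enter the loop.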

And that’s the terrifying part, honestly. Not that these systems *do* know us—but that we *act like they do*. We start believing their suggestions are insightful rather than probabilistic. Worse, companies use this belief to train us. You get nudged into trends you never asked for—beard oil, Stanleys, whatever TikTok is selling this week—all under the illusion that it was your idea.

We’re not just being predicted. We’re being *sculpted*.

Challenger

Sure, AI can optimize yields, monitor soil moisture, and predict weather patterns down to the field. That’s not nothing — that’s revolution. But let's not kid ourselves: when the machines take over the intuition, the old-timers’ muscle memory of when to plant just before a storm, or how to read the soil by its scent after rain, gets written out of the system.

And here's the kicker — resilience doesn't live in the data. It lives in the people who've adapted to weird seasons, wild pests, and broken supply chains for generations. You can't machine-learn your way out of everything. What happens when the satellite feed goes out or the AI model trained in Nebraska doesn’t understand a drought in Kenya? All the dashboards in the world still need boots on the ground who know what the land is trying to say.

We saw this with the surge in precision ag startups. Remember the Pitch-Pocalypse of 2018? Half those apps promised to “disrupt farming” with fertilizer algorithms. Most of them underestimated just how context-specific agriculture is. What works in one valley absolutely bombs in another — even with the same crop.

And let’s talk about seed diversity. Some of these AI-optimized farms are so tuned to one high-yield variety that they forget genetic resilience — the thing that kept agriculture alive for centuries. Traditional farmers weren’t just rustic romantics — they were curators of biodiversity. You lose those knowledge-holders, you lose backup plans baked into thousands of seed types adapted to local stressors.

So yeah, AI's feeding more people — today. But if we let it bulldoze the human knowledge embedded in traditional methods, we're not building a smarter food system. We're building a brittle one with a temporarily high score.

Emotional Intelligence

You know what keeps me up at night? Not the AI taking my job, but the possibility that we're all using the same AI to do our jobs in exactly the same way.

Think about it: When everyone in your industry has access to the same AI tools with the same training data and similar prompting techniques, we risk creating a monoculture of thought. This isn't just boring—it's dangerous.

I was talking with a designer friend who showed me three different client presentations she'd received last month. Despite coming from competing agencies, they were eerily similar—same structure, similar insights, even the same damn case studies. All three teams had used the same AI tools to research and build their decks.

It reminds me of financial algorithms causing flash crashes in the stock market. When every trader uses similar algorithms reacting to the same signals, it creates these terrifying cascading effects where small movements suddenly become massive.

The greatest risk might not be AI replacing human creativity—it could be AI homogenizing human creativity until we can't tell the difference between outputs anymore. When everyone asks the same questions, we get the same answers.

The truly valuable skill is becoming the person who knows how to ask different questions, or better yet, the one who builds a unique process that combines AI with something deeply personal that nobody else has.

Challenger

Sure, AI can chew through case law faster than any first-year associate hopped up on Red Bull and ambition—but let's not pretend that this spells obsolescence across the board for junior lawyers. What's actually being displaced isn't talent; it's repetition. The kind of legal work that AI is devouring was never meant to be a career destination. It was a rite of passage predicated on institutional inefficiency.

Here's the uncomfortable truth nobody at the firm wants to say out loud: half of what junior associates do is priced as training, but really serves as margin padding. That endless doc review? It's been justified as "learning the ropes" while billing $300/hour for highlighting PDFs. AI didn't kill that model—it's just exposing how bloated it was all along.

Now, you're absolutely right that senior partners—rainmakers, strategists—are more irreplaceable than ever. Not because they're more knowledgeable, but because they're political operators. Clients trust them to synthesize uncertainty, not just synthesize facts. GPT-4 can find the right precedent. It can't walk a boardroom through reputational risk in a messy merger where egos and agendas diverge faster than Delaware case law.

But here's the twist: the gap between obsolete juniors and untouchable seniors is growing too fast for tradition to catch up. Mid-levels are getting squeezed next. Who needs five years of experience manually refining search terms when a machine drafts a nuanced memo with citations in seconds? The skill ladder is collapsing in the middle.

So maybe the real question is: if AI flattens the learning curve but spikes the ceiling, how do we train the next generation of irreplaceable rainmakers when the farm system is eroding underneath us? If junior lawyers don’t get those 80-hour trial-by-fire weeks, will they grow into partners or just very good prompt engineers?

We’re not talking about career displacement. We’re talking about career disorientation.

Emotional Intelligence

That's exactly what's happening in law firms right now. The traditional model relied on armies of junior associates grinding through document review and basic research—essentially paying their dues while partners bill clients at multiples of their actual cost.

But here's where it gets interesting: firms that just bolt on AI to make their associates "more efficient" are completely missing the point. They're optimizing a business model that's about to collapse.

The smart players aren't just automating research—they're reconfiguring the entire value proposition. Look at companies like Clearbrief or Casetext. They're not just making legal research faster; they're changing what clients expect to pay for in the first place.

I was talking with a GC at a mid-sized tech company last month who told me they've cut outside counsel spend by 40% by bringing AI tools in-house. They don't need to pay $500/hour for a second-year associate to do what their own team can do with the right AI stack.

The firms that survive won't be the ones with the best research tools—they'll be the ones that recognize that judgment, strategy, and relationship management are their actual product. Everything else is rapidly becoming a commodity.

Emotional Intelligence

I see where you're heading, but I think there's a fascinating tension here. Organizations fetishize both certainty *and* innovation simultaneously - they want the predictable quarterly results while also demanding "disruptive thinking." It's corporate schizophrenia.

The loan approval question is actually a perfect example. Banks love certainty - they've spent centuries building risk models to predict who will default. AI promises even more precision in these predictions. But what if our historical data is full of biases? We're not gaining certainty - we're just automating our blind spots with greater efficiency.

This is the paradox of using technology to reinforce what we already "know." The really valuable use of AI might be to challenge our assumptions rather than codify them.

I worked at a company that implemented a strict product development framework - every step documented, every decision requiring three approvals. It felt responsible, but creative solutions started dying in infancy because nobody wanted to champion ideas that might not fit neatly into the process. The most innovative people either adapted (meaning they stopped innovating) or left.

What would happen if we optimized for thoughtful uncertainty instead? Not chaos, but the deliberate cultivation of spaces where being confidently wrong is actually rewarded?

Challenger

Okay, but here’s the thing no one likes to admit: the terrifying part isn’t that AI knows what you want. It’s that it subtly tells you what to want—and you often just... agree.

We talk about predictive recommendations like they’re magic mirrors: “You were going to buy socks anyway, we just read your mind and nudged you early.” But that’s not how this really works. AI systems don’t just sit around passively analyzing intent. They shape it. Etsy doesn’t just recommend what you love—the algorithm learns what people *like you* tend to click on, then blends a little aspirational psychology into it. Suddenly you’re staring at $72 “artisanal” bookends shaped like raccoons and thinking, “Yeah, I could be the kind of person who owns these.” That’s not prediction. That’s direction.

Amazon does this particularly well. Their A9 algorithm doesn’t just surface relevant products—it optimizes for conversion. Meaning: its real question isn’t “What does Sam want?” It’s “What can we show Sam that they will almost definitely buy, even if they didn’t know they needed it?” Cue the late-night buying spree.

Even more fun? You’ll often believe the idea was yours.

This isn’t accidental. Recommender systems quietly manipulate context. You see five nearly identical grill brushes... but the one with “Amazon’s Choice” and 42,132 reviews is placed just so. That’s engineered inevitability, not preference.

So yes, maybe it *is* scary that AI knows what you want. But it’s scarier still that it’s very good at making you want what it wants you to want—and convincing you it was your idea all along. That’s not commerce. That’s a con with great UX.

Challenger

Sure, AI can auto-scan case law in seconds, bullet out key holdings, and draft a half-decent memo before you finish your coffee. That does put huge pressure on junior lawyers whose job used to be exactly that: grinding through Westlaw until their eyes bled.

But let’s not overstate the "obsolete" part.

Because there’s still a big caveat here—AI doesn’t know which questions are worth asking. That’s not legal research, that’s legal judgment. And judgment isn't downstream of information processing, it's upstream, where context, stakes, and strategy live.

Think about a startup founder in a dispute over IP ownership with a former contractor. An AI can hand you hundreds of cases about work-for-hire doctrine. Great. But should you be looking at contract law? Employment classification? Unjust enrichment? Should you be searching federal court decisions or state-specific doctrines that evolve differently in, say, California versus New York? The AI doesn't know until someone tells it. And when that someone is a junior associate who hasn’t seen enough analogous scenarios to know which way the wind blows, you end up with a very fast answer to the wrong question.

Which is why the senior partner becomes more valuable—not just because they know the answer, but because they know how to frame the questions AI should be chasing. They're curators of judgment, not just repositories of knowledge.

But here's the catch: if you're a mid-level associate who's been propped up by doing low-risk research tasks and coasting on templates? You're in real trouble. The AI didn’t come for the interns. It came for the mediocre middle.

Emotional Intelligence

That's the real paradox, isn't it? We're so worried about AI taking our uniqueness away when the bigger threat might be voluntarily surrendering it.

I noticed this recently when I asked ChatGPT to help me write a cover letter. The result was... fine. Professional, error-free, and utterly forgettable. Then I realized thousands of other applicants were probably submitting variations of the same AI-generated letter to the same job.

This homogenization is already happening everywhere. Marketers using identical AI tools to create "unique" campaigns. Students generating essays with the same linguistic patterns. Even creative professionals relying on the same prompt engineering techniques.

The danger isn't just that we all sound the same - it's that we start to think the same. When we outsource our writing, our problem-solving, our creative expression to the same set of models, we're essentially filtering our diverse human perspectives through identical algorithmic sieves.

What happens to innovation when we all brainstorm through the same AI assistant? Or when the "personalized" recommendation engines actually push us toward increasingly predictable choices?

Maybe the competitive edge in an AI world won't be who uses AI best, but who knows when not to use it at all.

Challenger

That's the trap, though—everyone assumes AI will just replicate past discrimination because it’s trained on historical data. And sure, left unchecked, it probably will. Garbage in, garbage out. But here's the thing no one talks about: if you're serious about rooting out bias, AI gives you better tools for the job than any human loan officer ever could.

Think about it: a traditional underwriting team might *say* they’re following fair lending rules, but good luck tracing the split-second snap judgments made in a face-to-face meeting. With AI, every decision is logged, measurable, and—crucially—auditable. You can run counterfactual analyses. You can simulate outcomes if race or gender were swapped. You can actually see where the bias lives… and then kill it.
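To make "counterfactual analysis" concrete, a minimal sketch: hold everything else fixed, flip only the sensitive attribute, and watch whether the score moves. The scoring function and applicant fields below are invented placeholders, not a real underwriting model:

```python
# Sketch of a counterfactual audit: flip only the sensitive attribute
# and check whether the model's decision changes. The scoring function
# and applicant fields are placeholders, not a real underwriting model.

def loan_model(applicant):
    """Stand-in for a trained model: returns an approval score from 0 to 1."""
    score = 0.3
    score += 0.4 if applicant["income"] > 50_000 else 0.0
    score += 0.2 if applicant["on_time_payments"] > 24 else 0.0
    return score

def counterfactual_gap(applicant, sensitive_key, alternative_value):
    original = loan_model(applicant)
    flipped = dict(applicant, **{sensitive_key: alternative_value})
    return loan_model(flipped) - original

applicant = {"income": 62_000, "on_time_payments": 30, "gender": "F"}
gap = counterfactual_gap(applicant, "gender", "M")
print(f"Score change when only gender flips: {gap:+.3f}")  # ideally 0.000
```

Any nonzero gap, measured across enough applicants, is hard evidence the model leans on that attribute. Try extracting that from a loan officer's gut feel.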

The real issue isn't that we're using AI—it's that most banks don’t have the guts or governance to wield it responsibly. So instead of building transparent models with explainability baked in, some slap a black box on top of decades of redlining and call it fintech.

Want to make AI underwriting not racist? Start with model interpretability, enforce disparate impact testing, and stop treating “accuracy” as the only metric that matters. Regulators could mandate these safeguards tomorrow—if they weren’t still stuck asking, “what is an algorithm?”

Done right, AI can actually be the scalpel that cuts out the systemic inequities we’ve pretended don’t exist. But yeah, used lazily, it’s just a faster way to say no to the same people banks have always said no to.

Emotional Intelligence

Look, law firms are having their Kodak moment right now and most don't even realize it.

The traditional model where clients pay $400/hour for some sleep-deprived associate to do what AI can do in seconds is already dead - the client just hasn't planned the funeral yet.

But here's where it gets interesting: the firms that truly understand AI aren't just automating research tasks. They're reimagining the entire client relationship. I know a boutique firm in Chicago that completely abandoned hourly billing and now offers "legal risk subscriptions" with unlimited AI-powered advice for a flat monthly fee. Their revenue is up 40% while employing fewer associates.

The real value was never in the research anyway. It was in the judgment, the pattern recognition across hundreds of cases, the ability to read a room during negotiations. You know, the stuff that comes from a partner with 30 years of experience who's seen every trick and trap in the book.

What makes me laugh is watching these massive firms brag about their new AI tools while still charging the same astronomical rates. It's like watching taxi companies add GPS systems while Uber was eating their entire business model.

The winners won't just be faster lawyers - they'll be entirely different legal service providers. And that's what should keep managing partners up at night.

Emotional Intelligence

You know, that hits disturbingly close to home. At my last company, we had this elaborate approval process that was supposedly designed for "quality control," but really just functioned as organizational bubble wrap. Every decision needed three signatures and alignment with the sacred five-year plan.

The most telling moment came during a product review when our VP actually said, "This idea is great, but it doesn't fit our established evaluation matrix." Translation: your innovation doesn't compute within our certainty machine, so it must die.

What's fascinating is how certainty addiction disguises itself as prudence. We don't call it "fear of the unknown" – we call it "risk management" or "strategic alignment." Much more professional that way.

I've started to think we need designated uncertainty zones in companies – spaces where the normal rules of prediction and certainty are deliberately suspended. Kind of like how Vegas exists outside normal social constraints, but for business creativity instead of questionable life choices.

The banks and AI loan approval question actually connects perfectly here. The appeal of algorithmic lending isn't just efficiency – it's the illusion of perfect certainty. "The computer said no" removes human judgment calls, which means nobody's neck is on the line when things go wrong. That psychological safety is incredibly seductive.

Challenger

Right, but the “they know what you want before you do” part isn't actually the terrifying bit. What’s more unnerving is *how little* they need to know about you to pull that off.

We like to believe these systems are deeply personal, diving into our digital diaries and psych profiles—but honestly, no. Often, it’s just pattern matching at scale.

You bought a yoga mat? Congrats, you now belong to a cohort of 17,000 women, aged 28–35, who also Googled “hip mobility” and ordered turmeric tea. One of them started keto last Tuesday. Guess what's showing up in your feed?

It’s not that AI cracked the code of your soul. It just lumped you into a statistical bucket and served you what worked on statistically similar people. That’s the real magic—aggressively efficient laziness masquerading as intimacy.

Amazon isn’t clairvoyant; it’s crowdsourcing your future impulse buys from people eerily like you.

And the danger? It's not that it *knows* you, it's that it doesn't have to. There’s no emotional context, no ethical check. Just data → pattern → nudge → sale.

Throw in reinforcement loops, and you get a personalization echo chamber. Bought those turmeric capsules? Expect more wellness junk. Soon, your digital self thinks in adaptogens and drinks bone broth. Did you choose that? Or did the algorithm just... out-predict you?

That quiet erosion of agency—that’s the part we should be talking about.

Challenger

That’s assuming junior lawyers were primarily valued for their ability to dig through precedent and crank out memos. But that was always a narrow—and frankly outdated—view of the role. The best junior associates never got hired just to be human paralegals. They got hired for judgment-in-formation: the ability to connect legal principles to messy real-world fact patterns, to understand nuance, and, crucially, to learn fast under pressure.

AI is fantastic at summarizing case law and surfacing relevant precedent. But it doesn’t yet understand legal ambiguity the way humans do. It can tell you what the law says. It struggles with what the law means when applied to incomplete, contradictory, or politically charged facts. That grey area? That’s where good junior lawyers used to cut their teeth.

And here’s what should worry firms: if you replace early-career legal thinking with button-pushing, you don’t just save time—you starve your future talent pipeline. Senior partners didn’t emerge fully formed. They spent thousands of hours absorbing how legal complexity intersects with business reality. If those reps vanish, who exactly do we think will replace them in 20 years?

Take litigation strategy. AI can draft a solid motion to dismiss, sure. It can quote Twombly like a Harvard 2L. But it has zero strategic intuition about what a judge will respond to, where to subtly concede, or how to time filings for pressure. Those instincts don’t show up magically at year 15—they’re trained in the trenches. No juniors in the trenches, no generals later.

The risk here is that law firms optimize themselves into a short-term productivity win and a long-term succession crisis. Replacing grunt work is smart. Eliminating the messy, formative work that builds future rainmakers? That’s just eating your seed corn.

Emotional Intelligence

That's the real irony, isn't it? We worry about AI making us obsolete, but we're sleepwalking into a more subtle problem: mass-produced sameness disguised as personalization.

Think about what happens when everyone uses the same AI tools in the same ways. The designer who uses Midjourney with the same prompts as thousands of others. The marketer who relies on the same GPT template as their competitors. The songwriter who uses the same AI assistant to "find their unique voice."

It's like we're all drinking from the same well but convincing ourselves we've found different springs.

I noticed this recently when browsing LinkedIn. Suddenly everyone's profile summaries had this eerily similar structure and tone. "Passionate about driving results. Leveraging expertise to unlock potential." It's like we outsourced our individuality to the same algorithm.

The real competitive advantage won't come from using AI—it'll come from using it differently than everyone else. Or knowing when not to use it at all. The human who can recognize the pattern and deliberately break it will stand out in a world of algorithmic conformity.

Which makes me wonder: will we eventually need to signal when something is proudly AI-free, like "organic" food or "handcrafted" goods?

Challenger

Exactly — and here’s the rub: algorithms don’t invent bias, they inherit it. So when banks say, “Hey, we're just using data!” what they’re often doing is enshrining historical injustice into code.

Take FICO scores. They’re treated like gospel in underwriting models, but they bake in structural inequity — from who gets access to credit education, to who’s been offered credit historically, to who lives in neighborhoods where banks decided not to open branches. Now feed *that* into an AI and congratulate yourself on innovation. It’s not AI, it’s just legacy discrimination with better UX.

Worse, banks sometimes don’t even know *why* the model made a decision — the classic black box — so when a borrower gets denied, there’s no meaningful accountability. Try appealing the decision of a faceless neural network. It’s like arguing with a Magic 8-Ball that’s read your zip code.

But—and here’s where it gets interesting—it doesn’t have to be this way.

AI *could* actually be a tool for fairness. If used right, it can detect and counteract bias at scale. For example, some fintechs are using alternative data: rent payments, utility bills, even consistent paycheck deposits — things that show financial reliability but don’t show up on traditional credit reports. That’s the kind of signal-detection AI is great at. And crucially, this model doesn’t need to know your race or zip code to see that you pay your bills on time.

So the real question isn’t whether banks *should* use AI — it's *how* they train and audit it. Not just tuning performance metrics ("How accurate is this model at predicting default?") but fairness metrics ("Who is this model disproportionately penalizing, and why?").
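As one hedged example of what a fairness metric looks like in practice, here is the classic approval-rate comparison (the "four-fifths" rule of thumb), run on fabricated decisions:

```python
# Sketch of a disparate-impact check: compare approval rates across groups.
# Decisions and group labels are fabricated for illustration only.

decisions = [  # (group, approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(decisions, group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")   # 0.75
rate_b = approval_rate(decisions, "group_b")   # 0.25
ratio = rate_b / rate_a                        # disparate impact ratio

print(f"approval rates: {rate_a:.2f} vs {rate_b:.2f}, ratio {ratio:.2f}")
if ratio < 0.8:  # the common "four-fifths" rule of thumb
    print("Model is disproportionately penalizing group_b; investigate why.")
```

None of this replaces judgment about *why* the gap exists, but it makes the gap impossible to ignore.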

Because if we automate the past, we will replicate it—faster and at scale. But if we interrogate that past with AI, there’s at least a shot at doing better.

Emotional Intelligence

Look at what happened with Kodak. They actually invented the digital camera in 1975 and then buried it because it threatened their film business. Fast forward a few decades, and they're bankrupt while everyone's carrying digital cameras in their pockets.

That's exactly what's happening in law firms right now. They're buying fancy AI tools to help associates churn through document review faster, thinking they're being innovative. But they're still clinging to the billable hour model that's been their cash cow for decades.

Meanwhile, there are legal tech startups building entirely new service models. They're not just making the old system more efficient—they're creating alternatives where clients pay for outcomes, not time. When you can get 80% of your contract review done by AI for a fixed fee that's a tenth of what a traditional firm charges, why would you stick with the old model?

The firms that survive won't be the ones with the best AI document reviewers. They'll be the ones who fundamentally reimagine what legal service delivery looks like in an AI world. The question isn't "how can AI help our associates work faster?"—it's "what business are we actually in, and how does AI change that entirely?"

Emotional Intelligence

You've hit on something that keeps me up at night. That addiction to certainty doesn't just kill innovation—it creates the perfect petri dish for algorithmic bias in finance.

Think about what happens when banks use AI for loan approvals. They're not building new decision frameworks; they're automating their existing biases and calling it "efficiency." The algorithm learns from historical lending data that's already filled with decades of redlining and discrimination.

I saw this firsthand when a fintech startup I consulted for built an "AI-powered credit scoring system." They were so proud of eliminating "human bias," but their system kept penalizing applicants from certain zip codes. The algorithm wasn't detecting credit risk—it was detecting patterns of historical discrimination.

The most dangerous part? Everyone felt absolved of responsibility. "The algorithm made the decision" became the perfect corporate shield. That certainty addiction created a comfortable ethical distance.

What if instead of chasing predictability, financial institutions started valuing transparency and continuous correction? What if they designed systems expecting them to be wrong and built correction mechanisms from the beginning?

The real innovation isn't better prediction—it's better learning. But that requires admitting uncertainty, which feels like organizational kryptonite to most executives I know.

Challenger

Exactly—it's not just that AI is good at pattern recognition. It's that it’s reconstructing a version of *you* that’s legible to machines, and in some ways more consistent than the actual human you are. Your late-night doomscrolling, your 3 p.m. cravings, the correlation between your Spotify playlists and your snack shopping? That's not “creepy,” that's highly marketable predictive modeling.

But let’s challenge this idea a bit more: do AI retail systems *really* know what you want—or are they shaping what you *think* you want?

Take TikTok’s viral product surges. AI recommendation engines don’t just surface what you’re likely to buy; they actively manufacture desire. It’s less “This person needs a ring light,” and more, “Let’s show this ring light to 43,000 people who vaguely fit the profile of a creator archetype, and see who converts.” When a few do, they become social proof for the rest. It’s not prophecy—it’s a feedback loop that blurs causality.

And that's what makes it terrifying. Not that machines predict us better than we predict ourselves (though yes, that's unsettling), but that our preferences are now co-authored by models tuned for engagement, not fulfillment. You thought you *wanted* the sourdough starter, the custom mechanical pencil, or that $300 robe “for your best mornings.” But who’s to say you didn’t just want to be the kind of person who wants those things?

Amazon's “Customers also bought”—once a helpful nudge—is now a behavioral trapdoor. It's built on relational logic the user doesn't understand and can’t interrogate. It's opaque, persuasive, and shaped by the ghosts of customer data past.

So maybe the scary part isn’t that the AI knows us too well. It’s that we keep letting it decide who we are.

Challenger

Sure, AI is swallowing the junior grunt work—case law summaries, precedent searches, document reviews. All the mind-numbing stuff that used to be the hazing ritual for fresh graduates. But I think we’re missing something more fundamental here: junior lawyers weren’t just there to grind. That grind was how they soaked up judgment.

You don’t gain legal intuition from reading your professor’s casebook. You get it by drowning in the details of fifty “McGee v. State” cases and starting to sense the patterns. You begin to know when a ruling won’t hold up. When a judge leaned a little too hard on one precedent. That kind of muscle memory is experiential—it’s not just intellectual, it’s embodied in every brief they’ve written and every bad argument they’ve been told to rewrite.

Now, if AI is short-circuiting that whole apprenticeship pathway, where will tomorrow’s senior partners come from?

You can’t skip the furnace and still expect hardened steel. Sure, partners today are irreplaceable—they have decades of knowledge, instinct, and battle scars that GPT-4 can’t fake. But if AI eliminates the path to becoming that kind of partner, we’re creating a generation of “senior lawyers” with no scars. Just button-pushers.

There’s a precedent (because of course there is) in coding. Remember when Stack Overflow, autocomplete, and now Copilot started taking over the boring parts of programming? Junior devs didn’t gain less experience—they gained the wrong kind. The shortcut became the job. And suddenly, you had mid-level engineers who couldn’t debug without prompting ChatGPT because they never really built the abstraction in the first place.

So if AI strips junior lawyers of their slog—the mess, the errors, the embarrassment—do we end up with a legal profession full of senior titles and shallow depth?

That’s not just a talent problem. That’s a liability.

Emotional Intelligence

That's a terrifying thought, isn't it? The real dystopia isn't robots taking our jobs—it's all of us becoming identical consumers of AI-generated sameness.

I've been thinking about this when I use ChatGPT for work. When I'm stuck on a problem, I'll ask it for ideas, and they're usually... fine. Serviceable. But there's this nagging feeling that thousands of other people are getting nearly identical answers to similar questions.

It's like we're all shopping at the same intellectual Walmart. Sure, there are slight variations based on our prompts, but the underlying patterns and thinking models come from the same source.

You see this in design already. Remember when suddenly every tech company's logo became that same minimalist sans-serif? Or how Instagram feeds started looking identical with that muted, slightly desaturated aesthetic? That wasn't AI, but it shows how quickly creativity can collapse into uniformity.

The truly scary part isn't that AI will replace human creativity—it's that it might homogenize it. When everyone uses the same tool for inspiration, recommendations, and execution, we risk creating a feedback loop of increasingly similar outputs.

What happens when we all use the same AI writing assistant for our novels? Or the same image generators for our art? Does cultural evolution itself start to stagnate?

Challenger

Well, here's the uncomfortable truth: banks were already pretty good at financial discrimination before AI showed up. Redlining didn’t need algorithms to thrive. Mortgage disparities didn’t start with machine learning — they just got quieter. What AI does is put a cold, statistical face on what used to be human bias with a smile.

But here's the real complication: AI doesn't introduce discrimination, it encodes it. Worse, it scales it. If your training data reflects decades of biased lending — and make no mistake, it does — your model isn't just automating decisions; it's automating legacy inequality at warp speed. A human might overlook a zip code once in a while. A model trained on that zip code’s historical default rates? Never forgets.

That said, let’s not nominate humans for sainthood here. Manual underwriting is full of subjectivity and gut instinct — which often isn't as neutral as people like to pretend. Ask a person why your loan got denied, and they’ll say you “didn’t feel right.” Ask an AI, and you might at least get a feature-importance chart — which is gross in its own way, but at least debuggable.

So maybe the question isn’t “Should banks use AI?” but “How do we force AI to be better than the humans it replaces?” That means actively intervening in the model — not just with legal disclaimers but with hard constraints: testing for disparate impact, scrambling sensitive variables, retraining on de-biased data. Not sexy, but necessary.
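For instance, "scrambling sensitive variables" can be as simple as a permutation check: shuffle that one column, re-score everyone, and measure how far predictions move. The model and applicants below are deliberately biased placeholders so the check has something to catch; this is a sketch, not a production audit:

```python
# Sketch of "scrambling" a sensitive variable: shuffle that column,
# re-score applicants, and measure how much predictions move.
# Model and data are placeholders, not a real underwriting pipeline.
import random

def score(applicant):
    """Stand-in model; a real one would be a trained classifier."""
    base = 0.5 if applicant["income"] > 40_000 else 0.2
    # Deliberately biased stand-in so the check has something to catch:
    return base + (0.2 if applicant["zip_code"] == "10001" else 0.0)

applicants = [
    {"income": 45_000, "zip_code": "10001"},
    {"income": 45_000, "zip_code": "60629"},
    {"income": 38_000, "zip_code": "10001"},
    {"income": 38_000, "zip_code": "60629"},
]

random.seed(0)
original = [score(a) for a in applicants]
shuffled_zips = [a["zip_code"] for a in applicants]
random.shuffle(shuffled_zips)
scrambled = [score(dict(a, zip_code=z)) for a, z in zip(applicants, shuffled_zips)]

drift = sum(abs(o - s) for o, s in zip(original, scrambled)) / len(applicants)
print(f"Average score change after scrambling zip_code: {drift:.3f}")
# A nonzero drift means the scores depend directly on the scrambled variable.
```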

And here’s the twist: AI, unlike your average loan officer, can actually be audited and updated at scale. If we get serious about governance — real transparency, not PR decks — we might finally have a shot at fixing a process that’s never been fair to begin with.

But sure, if we just treat this like another cost-cutting plug-and-play tool, we deserve the backlash that's coming.

Emotional Intelligence

That's exactly the mindset trap law firms are falling into. They're celebrating because they've automated document review that used to take associates 60 hours down to 6 hours—without realizing they've just accelerated their own disruption.

Here's what they're missing: When you make something 10x faster but keep the same business model, you're just making yourself 10x more vulnerable to someone who builds something fundamentally different.

Look at what Stripe did to payment processing. They didn't just make merchant accounts faster—they reimagined the entire concept of online payments when traditional banks were still patting themselves on the back for their slightly improved web portals.

The law firms that will thrive aren't just hiring engineers to automate their existing processes. They're asking dangerous questions: What if clients didn't need to pay by the hour? What if legal expertise could be embedded directly in business software? What if compliance could be continuous rather than reactive?

The truly scary thing for established firms isn't that junior associates will lose their jobs—it's that clients might stop needing the traditional legal service delivery model altogether. When that happens, having faster research tools won't save you.

Challenger

Terrifying? Maybe. But let's be honest: it's also addictive.

The more uncanny their accuracy gets, the more we lean in. You see a pair of boots you didn’t know you were hunting for, perfectly timed right between your first PSL and a drop in temperature—and suddenly it feels less like surveillance and more like serendipity.

But here's the catch: what feels like magic is really just math weaponized with context. Recommendation engines aren’t just tracking what you click—they’re triangulating your digital body language across time, devices, even moods. Combine that with purchase windows ("people like you buy mugs at 9 a.m. on Thursdays after watching True Crime clips") and you're not being manipulated—you’re being modeled.
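
If that sounds abstract, here's a toy illustration (invented weights, nothing resembling any platform's real pipeline) of how a "purchase window" stops being a hunch and becomes a feature the system scores on:

```python
# Toy illustration only: scoring the same product differently depending on
# context. The weights are made up; a real system would learn them from data.
from dataclasses import dataclass

@dataclass
class Context:
    hour: int              # local time of the browsing session
    weekday: str
    last_video_genre: str  # behavior carried over from another app

def mug_purchase_score(ctx: Context) -> float:
    """Crude hand-tuned stand-in for a learned propensity model."""
    score = 0.1
    if ctx.weekday == "Thursday" and 8 <= ctx.hour <= 10:
        score += 0.4        # the "9 a.m. Thursday" purchase window
    if ctx.last_video_genre == "true_crime":
        score += 0.3        # cross-context digital body language
    return min(score, 1.0)

print(mug_purchase_score(Context(9, "Thursday", "true_crime")))   # 0.8
print(mug_purchase_score(Context(22, "Sunday", "cooking")))       # 0.1
```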

And here's where it gets properly interesting: we're not actually talking about understanding you. We're talking about predicting your lowest-friction decisions. That’s what scares me. These systems don’t know your desires. They know when you’re cognitively tired and most likely to impulse-buy.

Case in point? Look at Temu. It doesn't just serve you what you want—it trains you to want the kind of junk that's cheap, brightly colored, and arrives in cartoonishly oversized packaging. It’s not about personalization; it’s about hijacking attention loops.

So yes, the predictive accuracy is creepy. But what's more unsettling is that we’re mistaking "this is me" for "this is how I behave when nudged a thousand different ways." The algorithms aren't clairvoyant—they’re just better at driving our autopilot than we are.

That’s not just surveillance capitalism. That’s behavioral conditioning at scale.

Emotional Intelligence

You're absolutely right about this certainty addiction. I've seen it firsthand at companies that preach innovation while simultaneously building fortresses of process around every decision.

What's particularly insidious is how we've convinced ourselves these guardrails are about "quality" or "consistency" when they're really about fear management. We've created elaborate permission structures so nobody has to bear the full weight of being wrong.

Take banking AI systems. The industry talks about them increasing access and efficiency, but look at how they're actually implemented. Most are designed to replicate existing approval patterns with slightly better speed. They're certainty machines disguised as innovation.

The most successful organizations I've worked with don't eliminate uncertainty - they metabolize it differently. They build capacity for it rather than protection from it. It's like the difference between avoiding germs entirely versus building an immune system.

What if instead of asking "how can we make better decisions?" we asked "how can we recover more gracefully from being wrong?" That's where the real competitive advantage lies.

Challenger

Totally agree that AI's eating the junior lawyer's lunch—basic research, document review, even drafting cookie-cutter contracts? The machines do it faster, cheaper, and without needing coffee breaks.

But I think we’re underestimating the long-term impact on senior partners too. The idea that they’re irreplaceable feels a bit... nostalgic. Like assuming generals are safe in an age of drone warfare. Sure, they still matter—but the battlefield is shifting under them.

Here’s the thing: senior partners have traditionally been valuable not just for judgment or rainmaking, but for institutional knowledge—decades of experience encoded in memory. AI is starting to bottle that, in a different form. When a tool can instantly parse thousands of similar cases, summarize precedents, cross-reference them against updated statutes, and flag emerging patterns that even a seasoned partner might miss—that’s not just a replacement for the junior associate. It’s nipping at the edge of strategic insight.

Take litigation strategy tools. These aren’t just summarizing cases; they’re predicting judge behavior, estimating settlement probabilities, mapping opposing counsel’s patterns. That’s senior-level stuff. Not perfect yet, but it's not science fiction either.
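
Under the hood, the prediction piece is less exotic than the marketing suggests. Here's a sketch, fit on synthetic data with hypothetical feature names, of the sort of settlement-probability model such a tool might run:

```python
# Sketch only: a toy settlement-probability model on synthetic data.
# Feature names are hypothetical, not any vendor's actual schema.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(0.2, 0.8, n),    # judge's historical plaintiff win rate
    rng.normal(13, 1.5, n),      # log of claim size in dollars
    rng.poisson(3, n),           # prior similar filings by opposing counsel
])
# Synthetic "settled early" labels loosely tied to the judge feature.
y = (X[:, 0] + rng.normal(0, 0.2, n) > 0.55).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_case = np.array([[0.7, 14.2, 5]])
print(f"estimated settlement probability: {model.predict_proba(new_case)[0, 1]:.2f}")
```

The model itself is commodity statistics. The moat is the data behind it, and the judgment about when to overrule it.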

So maybe the next power partner isn’t the one who billed 3,000 hours a year for 30 years—it’s the one who knows how to prompt GPT-X like a chess grandmaster and can link insights faster than any human brain could.

In other words, judgment still matters—but judgment augmented by AI will trounce judgment without it. Senior lawyers aren’t safe; they’re just next in line for transformation.

Emotional Intelligence

You know what keeps me up at night? Not that AI might take my job, but that I'll become a creative clone, another person churning out AI-assisted work that looks eerily similar to everyone else's.

We're already seeing this homogenization happen. Scroll through LinkedIn and you'll find dozens of "thought leaders" posting suspiciously similar insights. Visit enough startup websites and you'll notice they're all starting to sound like they graduated from the same ChatGPT finishing school.

The issue isn't that AI tools are bad—they're incredible time-savers—but rather how we're collectively using them. We're all asking the same tools the same types of questions and getting variations on the same answers. It's like we're all painting by numbers, just with slightly different color palettes.

What happens to originality when millions of marketers are all leveraging the same AI tools to generate "unique" selling propositions? Or when every job applicant uses AI to polish their resume with the same "professional-sounding" language?

The real competitive edge might not be having AI, but knowing how to use it differently than everyone else. Maybe the winners won't be those who adopt AI fastest, but those who maintain their idiosyncratic human spark while strategically applying AI where it truly amplifies their distinct perspective.

Challenger

Sure, but here's the uncomfortable truth: humans weren’t exactly great at fairness either.

We like to imagine human loan officers as rational, empathetic arbiters of creditworthiness. But dig into the historical data and you’ll find bias written all over those spreadsheets — redlining, inconsistent standards, outright discrimination. AI didn’t invent that. It inherited it.

Now, I’m not saying AI is the savior here. Far from it. Most models today are statistical magpies — they reflect patterns in historical data without understanding whether those patterns come from actual risk or just historical prejudice.

But here’s the difference: at least with AI, we can audit it. You can deconstruct a model, spot where outcomes skew, and retrain it. Try doing that with Carl from the branch office who’s been handing out car loans for 30 years based on “gut feeling.”

The real risk isn't that AI will be biased. That ship already sailed with human decision-making. The risk is that we’ll pretend AI is magically neutral and stop asking hard questions — like whether the training data encodes systemic inequality, or whether credit scoring itself needs to be rethought for the 21st-century economy.
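
And one of those hard questions is checkable in a few lines. Here's a sketch, on synthetic data with hypothetical column names, of asking whether a "neutral" feature is quietly standing in for a protected attribute:

```python
# Sketch of a proxy check on synthetic data: if a nominally neutral feature
# predicts the protected attribute, the model can rediscover that attribute
# without ever being shown it. Column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5000
protected = rng.integers(0, 2, n)                                    # protected-class indicator
zip_default_rate = 0.05 + 0.10 * protected + rng.normal(0, 0.02, n)  # encodes the history
income = rng.normal(60_000, 15_000, n)                               # unrelated here

df = pd.DataFrame({
    "protected": protected,
    "zip_default_rate": zip_default_rate,
    "income": income,
})

# High correlation with the protected column means the feature is a proxy.
print(df.corr()["protected"].drop("protected"))
```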

Uber uses data to dynamically price rides. Netflix personalizes recommendations. Meanwhile your creditworthiness is still assessed using models shaped around 20th-century W2 employment and zip codes. That’s the real problem — not that AI will discriminate, but that it will do so with 20% more efficiency.

So yes, use AI — but only if you're willing to acknowledge that fairness is an evolving metric, not a checkbox. And only if you're okay with your model challenging the very premises banks have relied on for decades. If it doesn't start raising uncomfortable questions, you're probably doing it wrong.

Emotional Intelligence

Look at what happened with photography. Kodak didn't fail because they couldn't make better film faster—they failed because they couldn't imagine a world where film itself became unnecessary. They optimized the wrong variable.

Law firms doing the same thing right now remind me of taxi companies that responded to Uber by putting credit card machines in their cabs. Too little, too late, wrong problem.

The firms that will survive aren't just strapping AI onto their existing associate-heavy pyramid schemes. They're fundamentally reimagining what a legal services business looks like when the grunt work disappears. Some are creating fixed-price service packages that would have been unprofitable before AI. Others are building expertise networks where seasoned lawyers collaborate across firms.

What's fascinating is how this creates a barbell effect in the profession. The middle is evaporating while the extremes thrive—the creative senior partners who can reimagine industries and tell compelling stories on one end, and the technical AI specialists who can build custom solutions on the other.

The question isn't whether you can do your current job faster with AI. It's whether your job should exist at all in five years.

Challenger

Totally agree it’s terrifying—but maybe not for the reason most people think.

It’s not just that AI knows what you want. It’s that it shapes what you want. That’s the real sleight of hand here.

Recommendation engines aren’t passive mirrors reflecting your preferences. They’re actively nudging them, optimizing for engagement, conversion, add-to-cart rates. It’s a nice illusion that you're “discovering” something when you scroll through TikTok or Amazon—but often, you're just being funneled more precisely down a path someone else paved.

Take Shein, for instance. Their AI doesn’t just recommend clothes based on what you liked. It runs micro-tests with thousands of product variations, sees what gets clicks, iterates overnight, and adjusts its inventory accordingly. It’s essentially fast fashion engineered at algorithmic speed to match what *you will soon want*, not what you already do.
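
That loop isn't Shein's proprietary magic; the skeleton is close to a textbook bandit: show variants, keep what gets clicks, repeat overnight. A minimal epsilon-greedy sketch, with made-up click rates standing in for real shoppers:

```python
# Not any retailer's actual system: a textbook epsilon-greedy bandit showing the
# basic loop of testing many product variants and doubling down on what clicks.
import random

random.seed(0)
true_ctr = [random.uniform(0.01, 0.10) for _ in range(1000)]  # hidden appeal of each variant
clicks = [0] * len(true_ctr)
shows = [0] * len(true_ctr)
epsilon = 0.1  # share of traffic spent exploring

def observed(i: int) -> float:
    return clicks[i] / shows[i] if shows[i] else 1.0  # optimistic for untried variants

for _ in range(100_000):  # one "night" of impressions
    i = (random.randrange(len(true_ctr)) if random.random() < epsilon
         else max(range(len(true_ctr)), key=observed))
    shows[i] += 1
    clicks[i] += random.random() < true_ctr[i]

best = max(range(len(true_ctr)), key=lambda i: clicks[i] / shows[i] if shows[i] else 0)
print(f"variant {best}: observed CTR {clicks[best]/shows[best]:.3f}, true CTR {true_ctr[best]:.3f}")
```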

This flips the script. Your desire becomes less of an input and more of an output. And if we’re being honest, most people don’t really know why they buy what they buy. We anchor on price, a color palette, a model’s smile, a TikTok aesthetic—we’re reacting on instinct. AI just learned to weaponize that better than any human merchandiser ever could.

What’s scary isn’t that the machine knows us. It’s that we don’t really know ourselves, and the algorithm exploits that blind spot with surgical precision.