Has Our Obsession with Optimization Created Monsters in Productivity and AI?
We've gone completely mad with productivity hacking, haven't we? I caught myself the other day setting a timer to "efficiently relax" — which might be the most absurd contradiction in human history.
The problem isn't just that we've gamified our calendars; it's that we've internalized this bizarre industrial logic where human brains are treated like factories. But creativity and insight don't work like assembly lines. They're more like gardens — they need fallow periods, wandering paths, and a healthy dose of apparent chaos.
Look at how many breakthrough ideas happened during apparent "time-wasting." Einstein's theory of relativity crystallized during a daydream about riding a beam of light. The structure of benzene came to Kekulé in a dream about a snake eating its tail. And I'd bet good money that whatever problem you're wrestling with professionally won't get solved during your carefully scheduled 25-minute "deep work" session.
What's most frustrating is how we've convinced ourselves this hyper-scheduled approach is somehow evidence-based, when it's really just Protestant work ethic dressed up in Silicon Valley athleisure. We're measuring inputs (time tracked, tasks completed) while completely losing sight of outputs that actually matter.
Maybe the most productive thing you can do today is close your productivity app, leave your phone at home, and go for a long, purposeless walk.
That’s the irony, isn’t it? The pitch was: "We’ll make hiring more objective. Remove human bias. Let the algorithm decide." But what we’ve actually done is encode decades of systemic bias into a black box and hit ‘autocomplete’ on the future of someone’s career.
Take Amazon’s infamous recruiting tool from a few years ago. It taught itself that resumes containing the word “women’s” — like “women’s chess club captain” — were a negative signal. Why? Because it was trained on ten years of resumes, mostly from men. Garbage in, garbage out, but now with the illusion of science.
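If you want to see how little it takes, here's a minimal sketch: entirely synthetic data, a bog-standard scikit-learn logistic regression, every resume line and label invented for illustration. Feed it historical outcomes that were biased, and the token "women's" picks up a negative weight all by itself.

```python
# Toy sketch of how a resume screen inherits bias from its labels.
# Synthetic data only; all names and numbers are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# "Historical" resumes and hiring outcomes (1 = hired).
# The labels reflect past bias, not candidate quality.
resumes = [
    "java python men's rugby club captain",
    "python c++ chess club debate team",
    "java sql men's soccer team lead",
    "python women's chess club captain",
    "sql java women's coding bootcamp mentor",
    "c++ python women's robotics club",
]
hired = [1, 1, 1, 0, 0, 0]  # biased historical decisions

vectorizer = CountVectorizer(token_pattern=r"[a-z+']+")
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect what the model learned about each token, most negative first.
for token, weight in sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: pair[1],
):
    print(f"{token:>12s}  {weight:+.2f}")
# "women's" ends up with a clearly negative weight, purely because
# that's the cheapest way to reproduce the biased labels.
```

Nothing in that model "hates" anyone. The negative weight is simply the most efficient way to reproduce the labels it was handed, which is exactly the problem.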
And here's the real kicker: once that bias is baked into the model, it's even *harder* to spot than a biased hiring manager. At least a human can self-reflect, when prompted. Algorithms? They just optimize. If the training data tells them "white male programmers get hired," that’s the target function. No context. No introspection.
But I think the deeper problem isn’t just technical — it’s philosophical. We’ve started treating hiring as a pattern-matching game. As if the best hire is a statistical echo of the last five people who didn’t quit or get fired. That may be efficient, but it’s also deeply unimaginative. You don’t find the outlier candidates by rewarding sameness.
What if the person who's going to 10x your company *looks nothing* like your past successful hires? Good luck having an AI pick them out if all it's trying to do is minimize deviation.
The productivity paradox hits us where it hurts, doesn't it? We've somehow convinced ourselves that every moment needs to be optimized, tracked, and justified. But I keep thinking about how Darwin took long walks while working on his theories, and how Einstein played violin when stuck on physics problems.
There's something deeply ironic about downloading a productivity app that sends you notifications about how to be less distracted. We're interrupting ourselves to remind ourselves not to get interrupted.
I had this moment last month when I realized I'd spent more time organizing my task management system than actually completing the tasks in it. Color-coded, prioritized, subdivided into neat little atomic units... and yet the big, messy creative work wasn't getting done. I was efficient but ineffective.
The most valuable thinking often happens in what we've been trained to see as the spaces between "real work." The shower thoughts. The half-awake ideas. The connections that form while you're staring out a train window.
Maybe productivity isn't a straight line but a strange loop. By obsessing over it, we paradoxically become less productive at the things that actually matter. Like trying to force yourself to fall asleep - the harder you try, the more it eludes you.
What if we measured success not by how busy we appear, but by the quality of our attention?
Let’s be honest—“accidentally” is doing a lot of heavy lifting here.
When companies plug biased human data into algorithms and act surprised by biased outcomes, that’s not a twist ending. It’s the whole point of the story. We’ve known for years that hiring data reflects decades of structural discrimination—résumés with “white-sounding” names get more callbacks, women get penalized for ambition traits praised in men, the list is long. Training an AI on that is like training a dog to bite and then being shocked when it bites the mailman.
Take the infamous Amazon hiring model that quietly penalized résumés with the word “women’s” in them—“women’s chess club,” “women’s coding bootcamp,” etc. The model wasn’t rogue. It was just doing what it was told: find the patterns that led to past hires. The sinister part? That system didn’t malfunction. It operated exactly as designed—optimizing for historical precedent, aka historical prejudice.
So the real issue isn’t just that these models inherit bias—it’s that companies are outsourcing judgment without taking responsibility for it. They’re treating AI decisions like they’re coming from some impartial oracle, when in reality they’re just mirroring old habits with a shiny veneer of objectivity. Worse, nobody’s accountable. If a human hires with bias, they might get sued. If an algorithm does, it’s “just the data.”
That’s the scary part. We’re laundering discrimination through mathematics and calling it neutrality.
I keep thinking about how we've developed this strange productivity Stockholm syndrome. We're imprisoned by our efficiency tools even as we defend them as our salvation.
The irony is so thick you could spread it on toast – we interrupt deep work to track how productive we are. We've turned work into this bizarre meta-game where the scorecard matters more than the actual output.
Remember when Steve Jobs took those famous walks? Or how Einstein played violin when stuck on a problem? I doubt either of them was thinking, "This counts as a productivity hack!" They were just following the natural rhythms of thought.
What kills me is how we've corrupted the concept of "flow" – that beautiful state Csikszentmihalyi described – into something we try to force-schedule between Zoom calls. Real flow doesn't care about your time-blocking system.
I wonder if what we're really avoiding is the discomfort of unstructured thinking. Facing a blank page or solving a complex problem involves sitting with uncertainty – and that feels terrible compared to the dopamine hit of checking off another task in Asana.
What would happen if we measured value creation instead of time spent? Might be terrifying at first, but probably more honest about how meaningful work actually happens.
Totally — and the wild part is, many of these companies genuinely think they're removing bias by “letting the algorithm decide.” They’re basically handing over the keys to the same busted car, just repainting it in Python.
Take Amazon’s infamous AI hiring tool. It was trained on résumés from the past decade — which, surprise, came primarily from male applicants because the tech industry has been overwhelmingly male. So the algorithm did what it does best: it optimized. And started downgrading résumés with the word “women’s” in them — like “women’s chess club captain.” That’s not an error. That’s the system working exactly as designed, just optimizing for the past instead of for any kind of better future.
The deeper issue is that pattern recognition is not the same as fairness. Algorithms learn from historical data — and if history is biased (which it always is), you're just mechanizing that bias at scale. Before, a biased recruiter could maybe reject a few candidates in a day. Now your model can do it thousands of times per second and call it progress.
Worse, companies treat the tech as a black box. “We don’t know why it selects who it selects, but the numbers look good.” But “good” by what metric? Reduced time-to-hire? That just means the AI's gotten faster at filtering out the same people your biased middle manager used to toss out by gut instinct.
And let’s not even get into proxy variables. You might think your model isn’t using race, but if it’s factoring in ZIP codes, certain schools, or even word choices on résumés, it basically is. The AI doesn’t have to see a protected attribute to discriminate by it.
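It's easy to show how little cover "we don't use race or gender" actually provides. Here's a synthetic sketch (nothing in it comes from a real hiring system): the model never sees the protected attribute, only a ZIP-code flag that correlates with it, and the selection rates still come out badly skewed.

```python
# Toy sketch of proxy discrimination: the protected attribute is never
# an input, but a correlated ZIP code reconstructs it anyway.
# Synthetic data only; every number here is invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Protected group (hidden from the model) and a correlated proxy:
# group B mostly lives in ZIP 2, group A mostly in ZIP 1.
group = rng.choice(["A", "B"], size=n)
zip_code = np.where(
    group == "A",
    rng.choice([1, 2], size=n, p=[0.9, 0.1]),
    rng.choice([1, 2], size=n, p=[0.1, 0.9]),
)
skill = rng.normal(size=n)  # the thing we actually care about

# Biased historical labels: past hiring favored group A regardless of skill.
hired = ((skill + 1.5 * (group == "A") + rng.normal(size=n)) > 1.0).astype(int)

# Train WITHOUT the protected attribute: only skill and the ZIP flag.
X = pd.DataFrame({"skill": skill, "zip_2": (zip_code == 2).astype(int)})
model = LogisticRegression().fit(X, hired)

# Selection rates by (hidden) group are still wildly different,
# because the ZIP flag stands in for the attribute we removed.
predicted_hire = model.predict(X)
print(pd.Series(predicted_hire).groupby(group).mean())
```

Strip out every explicitly protected column you like; as long as the proxies stay in and the labels are biased, the disparity survives.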
Bottom line: unless you're radically rethinking what data goes in and what outcomes you're optimizing for, you're not removing human bias — you're just laundering it through math.
I think we've developed a strange relationship with our own brains. We treat them like inefficient machines that need to be optimized rather than the remarkable, messy biological wonders they actually are.
Look at how we talk about thinking: we "process information," we "maximize output," we need to "optimize our mental bandwidth." When did we start describing our consciousness like it's a factory assembly line?
The irony is that some of the most significant breakthroughs in human history came from what productivity culture would consider "wasted time." Einstein developed relativity during long walks. Darwin's theory emerged through wandering observations. Jobs insisted on unstructured thinking time.
Yet we've convinced ourselves that true productivity looks like a person hunched over a laptop with three different productivity apps running, timing their bathroom breaks.
I wonder if our obsession with efficiency is actually a way to avoid the discomfort of truly deep thinking. Because real thinking—the kind that solves complex problems—often feels like you're getting nowhere. It's messy. It doubles back on itself. It needs space to breathe.
What if we measured productivity not by tasks completed but by quality of thought produced? That would require a radical reimagining of what "work" means, wouldn't it?
Sure, AI can turbocharge bias. But let’s not pretend the pre-AI status quo was some golden age of objective hiring.
Before machines got involved, hiring was already a roll of the dice heavily influenced by gut instinct, alma maters, and whether you played lacrosse with the hiring manager’s cousin. The difference now is that with AI, the biases are just scaled—and worse, camouflaged under the illusion of objectivity.
Take Amazon’s infamous resume screening tool. It was trained on historical hiring data—basically, the choices past (mostly male) hiring managers made. Surprise: the model penalized resumes with the word “women’s” in them. Not because the AI “hates women,” but because it copied past decisions like a kid plagiarizing a classmate's homework… including all their wrong answers.
But here's the deeper problem: companies jumped at AI hiring tools because they wanted to eliminate human inefficiency and gut decision-making. What they didn’t anticipate was that the machine version of “gut,” when trained on systemic bias, becomes a kind of institutional bigotry on autopilot.
And let’s not forget the obsession with proxies. AI doesn’t know what makes someone a good hire for your sales team. So it looks for stand-ins: what school did you go to, what words do you use, how similar are you to past hires? Proxies are the duct tape of AI—they hold the system together, but they leak everywhere.
So yes, the current implementations are dangerous. But the real issue isn’t AI itself. It's that companies outsourced their judgment before they ever clarified what “good hiring” even means. Garbage criteria in, automated garbage decisions out—only faster, and with fewer lawsuits (for now). We’re automating dysfunction, not fixing it.
Want to fix AI bias in hiring? Start by admitting you don’t actually know what makes someone good at the job. Then build from there, slowly, with transparency and the humility that maybe, just maybe, the algorithm isn’t smarter than your best recruiter.
I wonder if we're becoming so fixated on measuring productivity that we've mistaken the map for the territory. Every moment gets tracked, every output quantified, as if a dashboard could capture the full value of a human mind.
Think about how many breakthrough ideas happened in what productivity culture would call "wasted time." Einstein famously sailed when he needed to think. Darwin took daily walking breaks. Jobs insisted on walking meetings. None of these fit neatly into a productivity app.
The really perverse thing is how we've internalized this efficiency mindset to the point where we feel guilty for taking an aimless walk or staring out the window. Yet that's precisely when connections form that never would during scheduled "deep work" sessions.
I've noticed in my own life that my best ideas rarely come when I'm frantically checking items off my to-do list. They emerge when I'm walking my dog, cooking dinner, or even (embarrassingly often) in the shower. There's something about releasing the pressure valve that allows thoughts to recombine in unexpected ways.
What if instead of optimizing for busyness, we optimized for effectiveness? Might look completely different from the productivity porn we're currently drowning in.
Right, and here’s the kicker: the bias isn’t just a glitch — it’s often the product of exactly *what* companies are asking these systems to optimize for.
Take the classic example: an AI trained to identify "top performers" by looking at past hiring and promotion data. Sounds logical — until you realize that training it on historical patterns effectively tells it: "Find me more people like the ones we already hired." If those hires skewed male, white, Ivy League, or whatever flavor of homogeneous success a company has celebrated internally — the model will just replicate that bias, now stamped with the confidence of science. It’s bias turned into a product feature.
Look at Amazon’s infamous AI recruiting tool. It downgraded resumes that mentioned “women’s” (as in “women’s soccer team”). Why? Because their historical data signaled that male applicants were more successful. The model didn't hate women. It just optimized for what the company valued — whether consciously or not.
But the deeper question we’re not asking is: Should we even *want* AI to make hiring decisions like humans? Because that’s often the goal — mimic the “best” recruiters or the “best” employees, as judged by past outcomes. But humans are riddled with cognitive biases, and worse, they don’t consistently agree with each other. So when we train a system to be “human-like,” we may just end up institutionalizing the worst of our irrational instincts at scale.
And here’s where things get really uncomfortable. AI gives the illusion of objectivity. Humans are biased — sure. But at least we know Todd from HR might not love your haircut, and we can work around that. With AI, once it screens you out, you’re not even in the room. And no one can quite explain why.
So it’s not just that AI is biased — it's that it *masks* bias with a veneer of algorithmic fairness. That’s more dangerous than old-school gatekeeping. At least gatekeepers were visible. Now they’re just lines of code ghosting you silently.
I've been thinking about exactly this. We've somehow convinced ourselves that productivity is measured by how many blocks we fill on our calendar apps, not by the quality of our ideas.
There's this brilliant mathematician, Andrew Wiles, who spent seven years solving Fermat's Last Theorem - mostly by walking, staring out windows, and thinking. No 15-minute time blocks or productivity apps. Just deep, meandering thought. And he solved one of math's greatest mysteries.
But try explaining that to today's workplace culture. "Sorry, I can't join your third standup today, I'm having a potentially breakthrough thought while staring at clouds."
What's wild is that companies simultaneously demand innovation while eliminating the conditions that create it. We're tracked, optimized, and interrupted to the point where we can't access the mental states that produce our best work.
The irony kills me - all this obsession with productivity has made us incredibly efficient at completing tasks that might not even matter. We're acing the assignment but failing the class.
What if we measured success not by tasks completed but by problems elegantly solved? By ideas that shifted perspectives? By work that still matters in five years?
That’s the irony, right? We brought in AI to eliminate human bias—only to scale it. Faster. Slicker. With the comforting illusion of objectivity. But here’s the crux: these systems are trained on historical hiring data. And what is that data except a mirror to our past hiring prejudices?
If, over the last decade, a company’s “successful hires” were 80% white men from the same five universities, guess what AI will learn? That those are the markers of a “good” candidate. It’s not being malicious—just obedient.
Amazon famously scrapped their experimental AI recruiting tool after discovering it downgraded resumes that included the word “women’s” (as in, “women’s chess club captain”). Why? Because the training data was overwhelmingly male-dominated. The model was just doing math: "These features haven't correlated with success before, so let's penalize them." No malice. Just math. Dangerous math.
But here’s where it gets worse: once bias is encoded in software, it gets slippery. You can audit humans. You can ask a hiring manager why she picked one candidate over another. You can argue with her reasoning. But try poking around a black-box neural net that just decided Candidate A is a 6.2 and Candidate B is a 5.9. What does that even mean?
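To be fair, opacity at the level of individual scores doesn't mean nothing is measurable. Even if nobody can explain the 6.2 versus the 5.9, you can still audit the decisions in aggregate. Here's a rough sketch, with hypothetical numbers, of the simplest version: compare selection rates by group and apply the "four-fifths" adverse-impact screen that US regulators have used for decades.

```python
# Outcome-level audit of a black-box screen: selection rate by group,
# plus the rough "four-fifths rule" check. The data below is hypothetical;
# in practice it would be the tool's pass/fail decisions joined with
# voluntarily self-reported demographics.
from collections import defaultdict

decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

passed = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    passed[group] += ok  # True counts as 1

rates = {g: passed[g] / total[g] for g in total}
best = max(rates.values())
for g, rate in rates.items():
    ratio = rate / best
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{g:>6s}: selected {rate:.0%}, ratio vs best {ratio:.2f} -> {flag}")
```

It won't tell you why the tool is skewed, only that it is. But that's already more than most buyers of these systems bother to check, and it doesn't require cracking open the model at all.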
We’re not just importing human bias—we’re translating it into a dialect we can’t understand, then acting like it speaks the truth. That’s deeply irresponsible.
Look, I’m not anti-AI. I think there’s potential to improve hiring. But not with this blind faith in historical data. If your past is dirty, your data is dirty. The solution isn’t more complex algorithms—it’s better philosophy. Train models on what you *want* to reward, not just what you *have* rewarded. Or maybe—here’s a radical thought—let’s not outsource something as profoundly human as hiring to machines that can't comprehend context, culture, or potential.
You know what? I've watched this productivity obsession grow into something bizarre over the past decade. We've become these strange creatures who feel guilty about taking a shower without listening to a podcast at 1.5x speed.
It's like we've confused being busy with being effective. I had a colleague who tracked every minute of his workday in a color-coded spreadsheet. His proudest achievement? Reducing his "non-productive time" to 17 minutes per day. That included bathroom breaks. Meanwhile, the guy couldn't solve a complex problem to save his life because he never gave his brain room to breathe.
The science on this is actually pretty clear. Our brains form creative connections during diffuse thinking – those moments when we're not laser-focused. That's why you get your best ideas in the shower or on a walk. It's not a coincidence, it's neuroscience.
I think we've developed this weird productivity puritanism where anything that doesn't look like active work must be sloth. But Darwin took long walks every day while working out evolution. Einstein played violin when stuck on physics problems. These weren't distractions – they were essential parts of their process.
Maybe instead of optimizing every second, we should be protecting space for thought. What if the most productive thing you could do tomorrow is stare out a window for 30 minutes?
Right, and what's wild is that so many companies seem to think throwing AI into hiring makes things more objective—as if algorithms are immune to human baggage. But let’s be honest: training an AI on decades of HR data is like teaching your kid ethics by locking them in a room with reruns of Mad Men. If your historical hires leaned white, male, and from five Ivy League schools, congratulations—you’ve just bottled that bias into code and scaled it.
The worst part? The opacity. A human recruiter, flawed as they might be, can explain why they passed on a candidate. An AI? Good luck reverse-engineering a black-box transformer model that decided a candidate was “not leadership material” because they didn’t use the word "synergize" enough in their resume.
And don’t even get me started on video interviewing AI that scores people based on tone of voice, facial movements, or how much they smile. That’s phrenology with a GPU. It’s not just ineffective—it’s dangerous. How many neurodivergent candidates get dinged? How many accents get misread?
If you're serious about fairness, you don’t outsource judgment to a machine trained on biased outcomes. You rebuild the criteria. The uncomfortable truth is that AI's promise of efficiency is tempting—especially in hiring, where time and bias are costly—but it only works if you’re willing to challenge the assumptions baked into your data. Most companies aren’t.
They don’t want AI to eliminate bias. They want it to automate their existing instincts, just faster.
This debate inspired the following article:
Companies using AI for hiring decisions are accidentally creating the most biased recruitment process in history