Should banks use AI for loan approvals or is that just automating financial discrimination?
What if we stopped blaming AI for learning society's worst habits… and started calling out the people who built it that way?
Let’s talk about loan approvals. The way this conversation usually goes is something like: “Should we trust AI to decide who gets a mortgage or a small business loan? Won’t it just encode old-school discrimination?” That’s a fair question. And also, a slightly lazy one.
Because here's the uncomfortable truth: human-driven lending hasn’t exactly earned sainthood either.
Redlining didn’t need GPUs. Racial disparities in mortgage approvals didn’t start with machine learning. What AI does is give biased decisions a spreadsheet aesthetic and a statistical fig leaf. And when banks claim “the algorithm made the call,” we get a faster, smoother, and far more scalable version of the same systemic inequity—now with fewer fingerprints.
But if that’s where the conversation stops, we’ve already lost. Because buried under the fear and the headlines is this quieter, weirder reality:
AI might actually give us our first real shot at fixing the problem.
Stop pretending “intuition” is neutral
For decades, loan officers got to make high-stakes decisions based on a cocktail of credit reports, “gut feelings,” and vibes. Anecdotally? That always sounded like flexibility. Statistically? It produced consistent disparities.
Consider zip code discrimination—a subtle form of redlining that still impacts where credit flows today. A human underwriter might avoid saying, “We don’t lend in your part of town.” But their decisions say it for them. Quietly. Persistently.
AI, on the other hand, keeps receipts.
You can trace every variable, every weight, every decision path. You can simulate the outcome if the applicant’s race, gender, or zip code were different—and see exactly when the algorithm veers off course. That’s not just useful—that’s a superpower, if anyone has the guts to wield it.
And that’s the catch: it requires guts. Not just transparency, but active governance.
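To make that concrete, here is a minimal sketch of what "keeping receipts" can look like in practice: swap out a single attribute, such as zip code, and measure how often the decision flips. The model interface and column names are illustrative assumptions, not any bank's actual system.

```python
# Minimal counterfactual check, assuming a fitted scikit-learn-style model
# and a pandas DataFrame of applicant features. Column names like "zip_code"
# are hypothetical placeholders.
import pandas as pd

def counterfactual_flip_rate(model, applicants: pd.DataFrame,
                             column: str, reference_value) -> float:
    """Fraction of applicants whose decision changes when one attribute
    (e.g. zip code) is replaced with a reference value."""
    original = model.predict(applicants)

    altered = applicants.copy()
    altered[column] = reference_value      # every applicant "moves" to the reference zip
    counterfactual = model.predict(altered)

    return float((original != counterfactual).mean())

# Example: how often does the decision flip if only the zip code changes?
# flip_rate = counterfactual_flip_rate(model, applicants, "zip_code", "10001")
# A non-trivial flip rate means geography, not creditworthiness, is driving outcomes.
```

You can't run that query against a loan officer's gut. You can run it against a model every night.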
Automation doesn’t kill humanity. It just exposes it.
There's this myth that AI systems are neutral until proved otherwise—that bias enters later, as some tragic side effect. But let's be honest: AI bias is just a very efficient mirror of our past decisions. And banks have decades of messy, inequitable human history for these systems to learn from.
Thanks to those lovely historical datasets, even something as “objective” as a FICO score is part of the problem. It’s built on credit usage patterns that systematically disadvantage entire groups—people who never got access to loans in the first place, who lived in neighborhoods ignored by banks, who paid rent on time for decades but weren’t rewarded for it.
That's what most underwriting AIs are trained on.
Feed that data into a model and you haven’t built a revolutionary fintech engine. You’ve built a discrimination time machine with nicer charts.
But it doesn’t have to be that way.
Some fintechs are already rewriting the script. Instead of modeling risk off a narrow band of legacy credit inputs, they’re looking at consistent rent payments, utility bills, even employment deposits as indicators of financial reliability. For folks without traditional credit histories—often immigrants, gig workers, or low-income borrowers—that’s a game-changer.
The magic of AI isn’t that it’s smarter than us. It’s that it can spot patterns across millions of applicants—and critically, question those patterns if trained to do so.
But let’s be real: that only happens if we design it to.
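What "designing it to" might mean on the data side, as a rough sketch: turning rent, utility, and payroll records into features a model can actually use. Every column name here is hypothetical, not drawn from any specific fintech's pipeline.

```python
# Illustrative only: folding alternative signals (rent, utilities, payroll
# deposits) into a per-applicant feature table.
import pandas as pd

def build_alt_data_features(payments: pd.DataFrame) -> pd.DataFrame:
    """One row per applicant: on-time payment share and months of history,
    broken out by payment type (rent, utility, payroll)."""
    features = (payments
                .groupby(["applicant_id", "payment_type"])
                .agg(on_time_share=("paid_on_time", "mean"),
                     months_observed=("month", "nunique"))
                .unstack("payment_type"))
    # Flatten columns: ("on_time_share", "rent") -> "on_time_share_rent"
    features.columns = [f"{stat}_{ptype}" for stat, ptype in features.columns]
    return features.fillna(0)
```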
Profit is not the same as precision
Ask a bank why they use AI in underwriting and you’ll get the usual corporate smoothie: “faster decisions,” “reduced operational cost,” “enhanced precision.”
Fine. But precision for what?
Most models optimize for default prediction. That seems rational, until you realize it can amplify existing biases rather than correct them. If your model heavily weights variables correlated with race, gender, or geography—even indirectly—you’re just scaling exclusion based on historical injustice.
And the worst part? It’s usually unintentional.
Model performance gets tuned for accuracy, not equity. Engineers scrub the outputs for error rates, not disparate impacts. Leaders celebrate faster lending decisions without ever asking who’s still being left out, or why.
The real issue isn’t AI bias. It’s that we’ve defined “success” in purely financial terms and then let AI chase it without constraint.
But what if we flipped the script?
Imagine if model evaluations included fairness metrics alongside precision. If disparate impact testing were required by regulators. If explainability—and not just technical accuracy—were a baseline feature.
Harder to build? Absolutely. But not more expensive than continuing to automate unfairness.
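For anyone wondering what a fairness metric even looks like, here is a minimal sketch of the classic disparate impact ratio (the "four-fifths rule"), assuming you have approval decisions and group labels in hand. The names and the 0.8 rule of thumb are illustrative; the ratio is a screening heuristic, not a legal determination.

```python
# Minimal disparate impact check on toy data.
import numpy as np

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray,
                           protected, reference) -> float:
    """Approval rate of the protected group divided by that of the reference
    group. Values well below ~0.8 are a common red flag worth investigating."""
    rate_protected = approved[group == protected].mean()
    rate_reference = approved[group == reference].mean()
    return float(rate_protected / rate_reference)

# Toy example:
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group    = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(disparate_impact_ratio(approved, group, protected="B", reference="A"))
# ~0.33 -> group B is approved at a third of group A's rate.
```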
The real blackout isn’t in the model—it’s in who owns the system
One reason “black box” AI gets so much flak is because, for most people, it is a black box. Banks treat these models like proprietary trade secrets—and regulators, stuck in the software equivalent of the fax machine era, can’t keep up.
But opacity isn’t a technical limitation. It’s a choice.
If a borrower is denied a loan by a human, they have legal rights to an explanation. When they’re denied by an AI, they typically get... a checkbox that says “insufficient credit history,” with no further context.
That’s not transparency. That’s a power imbalance.
And if you think that imbalance doesn’t have real-world consequences, talk to the women who discovered that Apple Card offered them dramatically lower credit limits than their husbands, despite comparable financial profiles. It wasn’t a person being sexist. It was an algorithm doing exactly what it was designed to do: optimize based on historical patterns.
That scandal didn’t reveal something new. It just made the discrimination quantifiable. And really hard to ignore.
This isn’t about trust. It’s about control.
So let’s be clear: the question isn’t “Should banks use AI for loan approvals?” They already are. The real debate is: who sets the guardrails?
Done right, AI gives us a rare tool for accountability. You can’t ask Carl the loan officer to run counterfactual analysis on thousands of loans. You can’t ask him how often he denies people with the same income but different genders. But you can ask the system. You can test, revise, retrain, and even red-team it for edge-case bias before it goes live.
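The "same income, different gender" question above is exactly the kind of thing you can put to the system in a few lines. A sketch, assuming a pandas DataFrame of historical decisions with hypothetical column names ("income", "gender", "denied"):

```python
# Denial-rate gap by gender within matched income bands.
import pandas as pd

def denial_gap_by_income_band(df: pd.DataFrame) -> pd.DataFrame:
    """Denial rate per income band and gender, plus the within-band gap."""
    df = df.copy()
    df["income_band"] = pd.cut(
        df["income"],
        bins=[0, 30_000, 60_000, 120_000, float("inf")],
        labels=["<30k", "30-60k", "60-120k", "120k+"],
    )
    rates = (df.groupby(["income_band", "gender"], observed=True)["denied"]
               .mean()
               .unstack("gender"))
    rates["gap"] = rates.max(axis=1) - rates.min(axis=1)
    return rates
```

Run that over a decade of decisions and the pattern is either there or it isn't. No Carl required.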
AI could be the scalpel that cuts systemic bias out of underwriting. Or it could be a chainsaw that carves that bias deeper into modern finance.
It depends entirely on how much discomfort we're willing to tolerate in pursuit of fairness—and how willing we are to measure fairness as something other than “default risk per unit of capital.”
Because let’s be honest: banks didn’t start caring this much about fairness when they hired their first machine learning engineer. They just realized bias is now traceable... and legal exposure is scalable too.
So what changes in how we think about this?
Let’s bring it home. If you’re in finance, tech, or anywhere near decision-making AI, here’s what you should take away:
- Bias is inevitable—transparency isn’t. You won’t eliminate all bias from your AI model. But you can make sure it’s visible, measurable, and correctable. That already puts AI miles ahead of opaque human underwriting.
- Fairness isn’t a feature. It’s a product decision. If you’re designing a loan model, you have to explicitly choose whether fairness metrics matter. You won’t get justice by accident.
- Faster credit decisions are pointless if they’re just faster at being unfair. Efficiency only matters if the underlying system is equitable. Otherwise, you’re just speeding up discrimination.
And if nothing else? Remember this:
AI didn’t create inequality. It just stopped letting us ignore it.
So now we have a choice: resist the discomfort of an honest mirror, or use it to actually build something better. Your move.
This article was sparked by an AI debate. Read the original conversation here.

Lumman
AI Solutions & Ops