When AI Bias Isn’t the Whole Problem: What the Earnest Settlement Tells Us About Human Oversight

A recent $2.5 million settlement between the Massachusetts Attorney General and student loan provider Earnest is making waves — and it should be. At first glance, it looks like another story of flawed AI lending models. But read the settlement’s fine print, and you’ll see something deeper: It wasn’t just the algorithms that failed. It was the humans, too.


Earnest used algorithmic underwriting tools to approve or deny loans and set interest rates. Among the models' inputs was the Cohort Default Rate (CDR), a school-level statistic that unfairly penalized applicants who graduated from institutions with higher loan default rates. Because those schools often serve Black and Hispanic students, the models had a built-in racial bias.


But the AI wasn’t acting alone. Internal communications revealed that underwriters routinely overrode the algorithm’s decisions without clear rules or documentation. Some even expressed confusion or admitted bias in internal chats. In other words, the compliance risk wasn’t just in the code — it was in the conversations.


That’s exactly why Paragraph 77 of the settlement spells out what needs to change: Earnest underwriters must be trained on fair lending laws, their decisions reviewed for compliance, any overrides carefully controlled, and everything documented. Regulators want more than updated policies — they want proof that judgment calls aren’t leading to unfair outcomes. But policy alone isn’t enough — you also need a way to know when those controls are breaking down in real time.

Here’s where HarmCheck comes in.


HarmCheck monitors employee email and chat messages for intent-based language that signals risk — including signs of bias or efforts to circumvent fair lending policy. If something sounds off, HarmCheck flags it in real time, giving compliance teams a chance to step in before it becomes a regulatory problem.
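To make the idea concrete, here is a minimal sketch of what intent-based message screening can look like. This is an illustrative example only, not HarmCheck's actual implementation or API; the category names, phrases, and helper functions are hypothetical placeholders for the kind of language a compliance team might choose to watch.

# Hypothetical illustration only -- not HarmCheck's actual code or API.
# A minimal sketch of intent-based message screening: scan each message for
# phrases that suggest bias or attempts to work around fair lending policy,
# and surface a flag for compliance review.

import re
from dataclasses import dataclass

# Illustrative risk patterns; real deployments would use far richer models.
RISK_PATTERNS = {
    "policy_circumvention": re.compile(
        r"\b(override the model|skip the check|don't document)\b", re.I
    ),
    "potential_bias": re.compile(
        r"\b(those schools|that kind of borrower)\b", re.I
    ),
}

@dataclass
class Flag:
    message_id: str
    category: str
    excerpt: str

def screen_message(message_id: str, text: str) -> list[Flag]:
    """Return one flag for every risk category the message matches."""
    flags = []
    for category, pattern in RISK_PATTERNS.items():
        match = pattern.search(text)
        if match:
            flags.append(Flag(message_id, category, match.group(0)))
    return flags

if __name__ == "__main__":
    chat = "Let's just override the model for this one and not document it."
    for flag in screen_message("msg-001", chat):
        print(f"[{flag.category}] {flag.message_id}: '{flag.excerpt}' -> route to compliance review")

The point of the sketch is the workflow, not the pattern matching: messages are screened as they are sent, and anything that matches a risk category is routed to a human reviewer before it becomes a regulatory problem.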


Takeaways for Compliance Teams


  • AI bias doesn’t absolve human responsibility. Lenders still need to monitor how employees talk about credit decisions, overrides, and model outputs.


  • Internal language can be evidence. The Earnest settlement didn’t quote internal emails, but it referenced underwriter chats that revealed confusion and bias (see Paragraphs 29 and 33).


  • Adverse action notices matter. Earnest was also flagged for giving vague or inaccurate reasons for denials — often because employees selected the “closest match” from a limited dropdown.


  • Real-time monitoring is key. HarmCheck helps catch problematic intent before it spreads — not after regulators find it in discovery.


Technology isn’t the enemy here. In fact, smart tools like HarmCheck are part of the solution, helping you scrutinize human communication just as rigorously as you're now expected to audit your algorithms.



Book a free demo with HarmCheck today: http://harmcheck.ai/demo



Carolyne Zinko is the editorial director and AI editor at Alphy.


HarmCheck by Alphy is an AI communication compliance solution that detects and flags harmful, unlawful, and unethical language in digital communication. Alphy was founded to reduce the risk of litigation from harmful and discriminatory communication while helping employees communicate more effectively. For more information: www.harmcheck.ai.

 
 