The problem? That’s mostly rubbish. Not only is it rubbish, but it’s dangerous rubbish. Handing over your recruitment to a poorly designed algorithm is like giving a toddler a flamethrower and hoping they’ll only toast marshmallows. More often than not, these systems amplify the very problems they were meant to solve, creating a perfect storm of legal risk and corporate embarrassment. But what if there was another way? A way to use AI not to read words on a page, but to understand the human being behind them?
So, What’s Gone Wrong with AI in Recruitment?
Before we get to the potential fix, let’s get our hands dirty with the problem itself: AI hiring biases. These aren’t sci-fi robots with a grudge. They are mathematical models that have been fed a diet of our own historical hiring data. And if that data is tainted with decades of conscious and unconscious bias—which, let’s be honest, it is—the AI will simply learn to be a faster, more efficient version of a biased human recruiter.
The most infamous example, of course, is Amazon’s ill-fated recruiting tool. Back in 2018, the company had to scrap an AI project after discovering it was penalising applicants for things like attending all-women’s colleges or having the word “women’s” on their CV. The algorithm had taught itself that successful candidates were overwhelmingly male, so it systematically downgraded anyone who didn’t fit that pattern. It wasn’t malicious; it was just maths, reflecting a flawed reality.
This is where the promise of data-driven hiring crashes headfirst into the wall of terrible diversity metrics. Companies invest in these tools hoping to broaden their talent pool, but they end up fishing in an even smaller, more homogenous pond. The algorithm, in its cold logic, reinforces the status quo, making it nearly impossible for underrepresented groups to get a fair shake. It’s a vicious cycle that leaves businesses less innovative, less representative, and wide open to accusations of discrimination.
Can We Hear a Better Way? The Rise of Vocal Analytics
If scanning CVs is a dead end, what’s the alternative? One of the more intriguing—and admittedly, slightly spooky—ideas gaining traction is vocal analytics. No, this isn’t about using a lie detector or judging someone on their accent. The idea is to analyse the how of communication, not the what. Think of it like a master musician listening to an orchestra. They aren’t just hearing the notes; they’re sensing the rhythm, the tempo, the harmony, and the overall emotional texture of the performance.
This technology focuses on patterns of speech—pace, tone, intonation, and energy—to build a picture of a candidate’s behavioural traits. Are they a collaborative communicator? An analytical thinker? Do they show empathy? These are things a piece of paper, or even a keyword-scanning algorithm, will never tell you.
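To make that a little less abstract, here is a minimal sketch of the sort of raw signal such systems work from, using the open-source librosa library. This is not Mappa's method, and every feature name and threshold here is an assumption for illustration only; the point is simply that the inputs are pitch, energy, and pace rather than the words themselves.

```python
# Illustrative sketch only: pulling simple prosodic features from an
# interview recording with the open-source librosa library. Not Mappa's
# actual pipeline; it just shows the kind of signal "vocal analytics"
# works with: pitch, energy, and speaking pace, not vocabulary.
import librosa
import numpy as np

def prosodic_features(audio_path: str) -> dict:
    # Load the recording at a standard speech sample rate.
    y, sr = librosa.load(audio_path, sr=16000)

    # Fundamental frequency (pitch) contour: a rough proxy for intonation.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    f0_voiced = f0[voiced_flag]

    # Short-term energy: a rough proxy for vocal energy and emphasis.
    rms = librosa.feature.rms(y=y)[0]

    # Onset density per second: a very crude proxy for speaking pace.
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    duration = len(y) / sr

    return {
        "pitch_mean_hz": float(np.nanmean(f0_voiced)) if f0_voiced.size else 0.0,
        "pitch_variability_hz": float(np.nanstd(f0_voiced)) if f0_voiced.size else 0.0,
        "energy_mean": float(rms.mean()),
        "energy_variability": float(rms.std()),
        "onsets_per_second": len(onsets) / duration if duration > 0 else 0.0,
    }
```

How a vendor maps features like these onto behavioural traits is the proprietary (and contentious) part; the extraction itself is fairly mundane signal processing.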
One company pushing this boundary is Mappa, a behavioural intelligence platform that has been making some serious waves. As reported by TechCrunch, Mappa has developed a system that uses voice pattern analysis during interviews to assess a candidate’s compatibility with a role and a team. Founded by Sarah Lucena, the startup has already attracted $3.4 million in seed funding and serves over 130 enterprise customers. As Lucena puts it, the goal isn’t to label people as ‘good’ or ‘bad’. “We understand traits as compatible or not,” she explains. It’s about fit, not judgment.
And the results? They speak for themselves. The platform has facilitated over 3,000 hires, and a staggering 60% of them have come from underrepresented backgrounds, including women, members of the LGBTQ+ community, and immigrants. By ignoring demographic data and focusing purely on behavioural markers, Mappa appears to be sidestepping the biases that plague traditional recruitment AI.
Staying on the Right Side of the Law
Now for the part that makes the lawyers and executives sit up and pay attention: EEOC compliance. In the United States, the Equal Employment Opportunity Commission has strict rules designed to prevent discrimination in hiring. Using a biased AI tool isn’t just unethical; it’s a one-way ticket to a courtroom and a public relations nightmare.
The legal landscape is scrambling to catch up. New York City’s Local Law 144, for instance, now requires companies using automated employment decision tools to conduct independent bias audits. This is just the beginning. Regulators are circling, and any company using a “black box” algorithm—one where you can’t explain why it made a certain decision—is taking a massive gamble.
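What does a bias audit actually measure? At its core, it compares selection rates across demographic groups, and the EEOC's traditional "four-fifths rule" flags any group whose rate falls below 80% of the best-performing group's. The sketch below shows that calculation on made-up numbers; the groups, figures, and 0.8 threshold are illustrative, not a substitute for an independent audit.

```python
# Minimal sketch of the core calculation behind a bias audit:
# selection rates per group and impact ratios against the highest-rate
# group, with the EEOC's four-fifths (0.8) rule as a red flag.
# The groups and numbers are hypothetical, for illustration only.
from collections import namedtuple

GroupOutcome = namedtuple("GroupOutcome", ["applicants", "selected"])

def impact_ratios(outcomes: dict, threshold: float = 0.8) -> dict:
    # Selection rate = selected / applicants for each group.
    rates = {g: o.selected / o.applicants for g, o in outcomes.items()}
    best = max(rates.values())

    # Impact ratio = group rate / highest group rate; below threshold is a flag.
    return {
        g: {"selection_rate": round(r, 3),
            "impact_ratio": round(r / best, 3),
            "flagged": (r / best) < threshold}
        for g, r in rates.items()
    }

# Hypothetical audit data: group_b's impact ratio of 0.6 would be flagged.
audit = impact_ratios({
    "group_a": GroupOutcome(applicants=400, selected=80),  # 20% selected
    "group_b": GroupOutcome(applicants=350, selected=42),  # 12% selected
})
print(audit)
```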
This is where a tool that focuses on observable behaviours, rather than inferring traits from demographic proxies, could offer a crucial advantage. If you can demonstrate that your hiring criteria are based on job-relevant skills and communication styles—and that the process is applied consistently to everyone—you are on much firmer ground legally. Mappa’s approach, which quantifies behavioural traits, aims to provide exactly this kind of auditable, defensible framework. It shifts the conversation from “Why didn’t you hire this person?” to “Here are the specific, non-discriminatory behavioural attributes where the selected candidate showed stronger alignment with the role.” It’s a bold claim, but one that could redefine what a compliant hiring process looks like.
The Billion-Dollar Problem of People Walking Out the Door
Let’s talk about money. The cost of AI hiring biases isn’t just measured in potential lawsuits. It’s measured in employee turnover. When you hire someone based on a flawed process, you’re not just risking a bad fit; you’re practically guaranteeing it. Someone hired for the wrong reasons is likely to be disengaged, unhappy, and already updating their LinkedIn profile by the end of their first month.
The cost of replacing an employee is enormous—estimates range from 50% to 200% of their annual salary. This includes recruitment costs, training for the new hire, and lost productivity whilst the role is empty. When a company’s hiring process consistently produces bad matches, these costs spiral out of control, eating directly into the bottom line.
A rough, worked version of those numbers follows the next paragraph.
This is perhaps Mappa’s most compelling metric. According to the company, their clients see an employee turnover rate of just 2%. Let that sink in. The average turnover rate in many industries hovers around 30% or higher. Cutting that down to 2% is transformative. It’s the difference between a business that is constantly bleeding talent and one that is building a stable, experienced, and motivated workforce. By matching people based on genuine compatibility and behavioural fit, you’re not just filling a seat—you’re making a long-term investment that pays dividends in productivity and culture.
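To put those percentages in pounds and pence, here is a back-of-the-envelope calculation. The headcount, salary, and currency are made-up assumptions; only the 50%-200% replacement-cost range and the 30% versus 2% turnover rates come from the figures above.

```python
# Back-of-the-envelope illustration of what the turnover figures above imply.
# Headcount and salary are assumed; the replacement-cost multiple sits inside
# the 50%-200% range cited in the text.
def annual_turnover_cost(headcount: int, avg_salary: float,
                         turnover_rate: float, replacement_multiple: float) -> float:
    # Leavers per year, times the cost of replacing each one.
    return headcount * turnover_rate * avg_salary * replacement_multiple

HEADCOUNT = 500            # assumed
AVG_SALARY = 60_000        # assumed, in GBP
REPLACEMENT_MULTIPLE = 1.0 # assume one year's salary per replacement (within 50%-200%)

typical = annual_turnover_cost(HEADCOUNT, AVG_SALARY, 0.30, REPLACEMENT_MULTIPLE)
low = annual_turnover_cost(HEADCOUNT, AVG_SALARY, 0.02, REPLACEMENT_MULTIPLE)

print(f"At 30% turnover: £{typical:,.0f} per year")  # £9,000,000
print(f"At  2% turnover: £{low:,.0f} per year")      # £600,000
```

Even with deliberately conservative assumptions, the gap runs into millions a year for a mid-sized firm, which is why retention, not recruitment speed, is the metric worth watching.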
A Glimpse into the Future: Is Your Voice Your New Credit Score?
The promise of this technology extends far beyond the HR department. Mappa is already developing an API for other industries. Imagine venture capitalists using vocal analytics to assess a founder’s resilience and coachability during a pitch. Or educational institutions using it to identify students who might need extra support based on their communication patterns.
But here’s where the cautionary tone must get a little louder. As mentioned in the original TechCrunch article, there’s talk of applying this behavioural analysis to financial services, like loan approvals. On the one hand, this could create a fairer system, one where your character and reliability matter more than your postcode. On the other, it opens a Pandora’s box of ethical dilemmas.
Could you be denied a mortgage because an algorithm decided your voice lacked “financial prudence”? What happens when this data is breached? We are on the cusp of creating powerful new ways to judge people, and without robust regulation, transparency, and ethical guardrails, this technology could easily become a tool for a new, more insidious form of discrimination. The leap from assessing job compatibility to assessing creditworthiness is a huge one, and we need to tread very, very carefully.
The challenge is to harness the power of tools like vocal analytics to undo old biases without inadvertently creating new ones. Companies like Mappa are showing that it’s possible to build AI that enhances fairness, boosts diversity metrics, and delivers incredible business results. But the work is far from over. The technology is here. The real test is whether we have the wisdom and foresight to use it responsibly.
So, as a leader, the question you should be asking isn’t if you’ll use AI to hire, but how. Will you choose a black box that perpetuates the past, or will you invest in a system that tries to build a fairer future?