Your TA Tech Stack Is Lying to You (And You Already Know It)

There's a conversation happening in boardrooms right now that nobody in TA is being invited to.

It goes something like this. "We invested heavily in AI-powered recruitment technology. We reduced headcount. Costs came down. And yet the quality of hire hasn't improved, the business is still frustrated with TA, and we can't quite articulate what we're getting for the spend." The people in that room are looking for someone to explain it. The head of TA is usually not in the room when it happens.

That is not a coincidence.

The AI hype cycle in recruitment has been extraordinary, even by the standards of an industry that has never been shy about a good story. Vendors promised transformation. Analysts published breathless predictions. Conference stages filled up with case studies that somehow always featured the same handful of early adopters. And TA leaders, under pressure to modernise, to demonstrate strategic thinking, to show their boards they weren't asleep at the wheel, bought in.

Not because they were naive. Because the pressure to act was real and the tools looked plausible.

Now the bill is arriving.

The metrics were always the wrong ones

Here is the core problem. Most organisations adopted AI in TA and then measured success using the metrics they already had. Time-to-fill. Cost-per-hire. Candidate volume. Application-to-interview ratios. These are operational metrics. They tell you how busy the machine is. They do not tell you whether the machine is producing anything valuable.

Time-to-fill dropped. Of course it did. Automated screening processes applications in seconds rather than days. That is not a business outcome. That is a processing speed improvement. If the candidates being screened in rapidly are no better matched to the role than the ones being screened in slowly, you haven't improved anything. You've just moved faster toward the same destination.

Candidate volume increased. Again, predictable. AI-assisted sourcing casts a wider net. More people in the funnel. But volume is not quality, and in many organisations the increase in candidate volume has actually made the human parts of the process worse, not better. Recruiters are triaging more, assessing less. Hiring managers are drowning in CVs and blaming TA for the noise. The AI solved a problem that wasn't the real problem and created two new ones in the process.

The dashboard stayed green. The business stayed frustrated. And the gap between those two things is where TA credibility goes to die.

The automation of the wrong thing

There's a principle in operations that should be tattooed on every technology purchasing decision in TA: automating a broken process doesn't fix it. It makes it break faster and at greater scale.

Most TA functions were not broken in their use of technology before AI arrived. They were broken in their relationship with the business. In their ability to influence hiring decisions rather than just execute them. In their capacity to have a conversation about workforce strategy rather than just headcount plans. In the gap between what TA knew about the talent market and what it was being asked to do with that knowledge.

None of those problems are solved by a better screening algorithm. But that is precisely what got bought. And when it didn't fix the underlying dynamic, the technology took the blame rather than the strategy.

The tools that genuinely work are embedded in a redesigned workflow. They change what work gets done, not just how quickly the old work happens. That distinction matters enormously, and it is the distinction that most implementation roadmaps never address.

The bias sitting in plain sight

There is a more uncomfortable conversation that needs to happen alongside the ROI one, and it involves the data that most AI screening tools were trained on.

Those tools learned from past successful hires. They identified patterns in the people who got offers, who passed probation, who were rated highly in performance reviews. And then they used those patterns to score future candidates. The logic sounds reasonable until you ask one question: what if the past successful hires all look the same?

In most organisations, they do. Not intentionally. Not because anyone sat down and decided to build a homogeneous workforce. But because hiring is a human activity conducted under cognitive load, and humans under cognitive load default to familiarity. The candidate who looks like the last person who did the job well. The background that signals the right cultural fit. The university that reliably produces a certain type of candidate.

Feed that historical data into an AI, and you have built a very efficient engine for reproducing those patterns at scale. You've taken an unconscious bias that used to operate at the speed of a recruiter's gut feeling and given it the processing power of industrial machinery.
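To make the mechanism concrete, here is a deliberately simplified sketch of that feedback loop. This is not any vendor's actual model; the data, field names, and scoring logic are all hypothetical. The point is only that a scorer trained on a homogeneous set of past hires will, by construction, rank familiarity highest.

```python
from collections import Counter

# Hypothetical historical hires. The "successful" pattern is homogeneous;
# every record and value here is illustrative, not from any real system.
past_hires = [
    {"university": "Redbrick", "background": "consulting"},
    {"university": "Redbrick", "background": "consulting"},
    {"university": "Redbrick", "background": "finance"},
    {"university": "Redbrick", "background": "consulting"},
]

def train_scorer(hires):
    """Learn how often each attribute value appears among past hires."""
    freqs = {}
    for field in hires[0]:
        counts = Counter(h[field] for h in hires)
        freqs[field] = {v: c / len(hires) for v, c in counts.items()}
    return freqs

def score(candidate, freqs):
    """Score = how closely the candidate matches the historical pattern."""
    return sum(freqs[f].get(candidate[f], 0.0) for f in freqs) / len(freqs)

freqs = train_scorer(past_hires)

familiar = {"university": "Redbrick", "background": "consulting"}
outsider = {"university": "StatePoly", "background": "engineering"}

print(score(familiar, freqs))  # 0.875: rewarded for matching the past
print(score(outsider, freqs))  # 0.0: penalised purely for being different
```

Nothing in that code knows anything about quality. It only knows resemblance, which is exactly the problem when the past it resembles was shaped by unexamined habit.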

That is a legal risk. It is a reputational risk. And it is sitting in plain sight in the tech stack of a significant proportion of TA functions that are publicly committed to diversity and inclusion.

The audit starts here.

What an honest audit actually looks like

Most technology reviews in TA are procurement exercises disguised as evaluations. The vendor provides case studies. The internal team reviews utilisation rates. Someone checks whether the contract renewal is better value than switching. And the decision gets made without ever asking whether the tool is producing what the business actually needs.

An honest audit looks different. It starts with outcomes, not activity.

Not how many CVs the AI screened, but how many hires from that pipeline are still with the business at twelve months. Not how quickly it shortlisted, but whether the shortlist reflected the brief or just the historical pattern encoded in the training data. Not how many stages were automated, but whether the humans in the remaining stages are making better decisions or just faster ones.

Then it looks at where humans have reinstated themselves. Because they always do. In every organisation I've worked with, there's a gap between what the AI is supposed to do and what actually happens. Recruiters who double-check the tool's output because they don't trust it. Hiring managers who reject shortlists and ask for a conversation instead. Coordinators who manage by exception so routinely that the exceptions have become the process. If your team is manually overriding AI recommendations more than 30% of the time, that is a signal. Either the tool is not calibrated for your operational reality, or your team doesn't trust it, or both. Any of those is a problem worth solving before you sign the next renewal.
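If you want those two signals on paper rather than in anecdote, they can be computed from data most ATSs already hold. The sketch below assumes hypothetical record shapes (a start date and leaving date per hire, an override flag per shortlist); adapt the field names to whatever your own system exports.

```python
from datetime import date

# Hypothetical hire records from an AI-screened pipeline.
hires = [
    {"start": date(2023, 1, 9),  "left": date(2023, 7, 1)},  # left inside 12 months
    {"start": date(2023, 2, 6),  "left": None},              # still employed
    {"start": date(2023, 3, 13), "left": None},
]

# Shortlists where a recruiter manually overrode the AI's recommendation.
shortlists = [{"overridden": True}, {"overridden": False}, {"overridden": True},
              {"overridden": False}, {"overridden": False}]

def retention_at_12_months(hires, as_of):
    """Share of hires who reached 12 months' service.
    Only counts hires old enough to judge as of the audit date."""
    eligible = [h for h in hires if (as_of - h["start"]).days >= 365]
    if not eligible:
        return None
    retained = [h for h in eligible
                if h["left"] is None or (h["left"] - h["start"]).days >= 365]
    return len(retained) / len(eligible)

def override_rate(shortlists):
    """Share of AI shortlists the team manually overrode.
    Above roughly 30%, treat it as a calibration or trust signal."""
    return sum(s["overridden"] for s in shortlists) / len(shortlists)

print(retention_at_12_months(hires, date(2024, 6, 1)))  # 2 of 3 eligible retained
print(override_rate(shortlists))                        # 0.4, above the 30% line
```

Neither number is hard to produce. What is hard is agreeing, before the audit, that these are the numbers that count.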

Then comes the hardest part: the conversation with leadership. Not the one where you defend the spend. The one where you say, here's what's working, here's what isn't, and here's what we're changing. That conversation is the difference between a TA leader who manages perceptions and one who manages outcomes.

The real argument for keeping humans at the centre

Here is the irony that sits at the heart of all of this. The argument for AI in TA has always been that it frees humans to do the higher-value work. The relationship building. The strategic advising. The quality-of-hire conversation. The workforce planning. And that argument is correct.

But in most organisations, the AI has freed humans from the transactional work and into a vacuum, because nobody designed what the higher-value work actually looks like, or gave TA the authority to do it.

The AI reality check isn't about admitting failure. It isn't a case against technology. It is an argument for clarity. About what problem you are actually trying to solve. About what success actually looks like. About what a TA function is for when it is operating at its best.

The organisations that answer those questions before the next technology purchase are the ones that will actually get value from AI. The ones that don't will spend the next three years explaining dashboards that nobody believes to boardrooms they're still not invited into.

You don't need a smarter stack. You need a clearer question.

Start there.
