Imagine this: an applicant submits a résumé so polished it could have been written by a seasoned recruitment consultant. Meanwhile, on the other side of the table, a hiring system scans it – not a person, but an algorithm trained on thousands of past hires. The system flags keywords, patterns and styles it “likes”, and the applicant moves to the next stage.
Only here’s the twist: the résumé itself was generated by an AI tool. And the hiring algorithm was trained on human-written CVs. So now we have AI evaluating AI. What sounds like the future of recruiting may hide unexpected risks.
The echo chamber of style
When a job-seeker uses an AI to craft their CV, they’re tapping into current patterns of what “good” looks like: strong verbs, tidy structure, keyword-rich phrasing. Simultaneously, the HR vetting AI is picking up those same signals.
Instead of assessing what you can do, the system leans on how you present it. The “best” CV becomes the one that looks most like the model CVs the algorithm has seen – rather than the one that reflects genuine skill, creativity, or unconventional experience.
In effect: the machine rewards what the machine has rewarded. The result? Originality and difference get flattened into sameness.
Authenticity vs optimisation
Once upon a time, a CV revealed more than credentials. It carried tone, personality, even a hint of vulnerability. Now, with AI-generated résumés, many look eerily similar: crisp, confident, syntactically flawless.
That’s not inherently bad – clarity is useful – but it risks turning personal stories into templated artefacts. The HR AI doesn’t know (and perhaps doesn’t care) which sentences were born from lived experience and which were assembled from a prompt.
The risk is that our hiring process is becoming all fragments – disconnected phrases that sound right but lack the thread of genuine connection.
HR without the “H”
It’s worth pausing on the phrase Human Resources. The first word matters. If we remove the Human from HR, we are left with only Resources.
And resources can be sorted, scored and filtered by machines, but successful organisations are built by humans, not algorithms. It is human judgment, empathy, curiosity and intuition that turn data points into meaningful hires.
The moment we forget that, we risk building teams of technical fits rather than cultural contributors.
Bias amplified, not mitigated
AI promises objectivity. But if a vetting model learns from historical hiring data – data already shaped by human bias – that bias becomes encoded and automated.
Now add AI-written CVs to the mix: candidates with access to the best tools or the right prompts may appear more “aligned” with the algorithm’s preferences. Those without such access may be unfairly penalised.
Instead of breaking bias, we risk building a new bias – an algorithmic one – hidden behind a veil of efficiency.
The skills signal gets distorted
AI résumé tools can generate achievements or certifications that look impressive but lack substance. If the screening AI can’t distinguish between authentic skill and linguistic polish, inflated CVs slip through while genuine talent is overlooked.
In this sense, AI can make it harder – not easier – to identify the people who can truly deliver. The hiring funnel fills with perfect words but uncertain truths.
Accountability and the black box problem
When both the résumé and the reader are machines, accountability evaporates. Who is responsible when a great candidate is rejected? Who checks the logic of the algorithm’s decision?
Transparency becomes not just a technical issue, but an ethical one. And as AI-governance frameworks evolve, companies will need to rediscover an old truth: you can’t outsource judgment.
An arms race of optimisation
As AI on both sides gets smarter, job seekers start optimising for the algorithm instead of for clarity. Employers update their AI to detect AI writing. The result is an arms race of automation, where success depends less on human skill and more on prompt mastery.
The irony: in trying to make hiring more human-friendly, we risk squeezing the human out of it entirely.
So what can be done?
- Keep humans in the loop. Let AI assist, not decide.
- Value story as much as structure. Look for authenticity, not just alignment.
- Audit your pipeline. Watch who gets filtered out, and why.
- Invite real conversations. People reveal more in a few minutes of dialogue than in 500 optimised words.
- Redefine what “fit” means. Algorithms can match patterns; only humans can sense potential.
As organisational psychologist Adam Grant said recently,
“Algorithms can read résumés, but only humans can read people.”
And that’s the heart of it. AI can help us hire faster, but it can’t tell us who we’ll trust, collaborate with, or grow alongside.
And finally …
We can automate the sifting. We can optimise the wording. But we can’t automate understanding.
When machines read machines, we gain efficiency but risk losing empathy. The organisations that thrive will be those that remember: it is the Human in Human Resources that builds success.
This was in the news earlier this month …
Richard Stott, from Beverley, East Yorkshire, said he turned down a job interview after learning the questions would come from an AI rather than a person.