Kwame had seven years of experience, a degree from a well-regarded university, and a CV that read like someone had engineered it specifically to get hired. Every keyword from the job description appeared somewhere on that document. The hiring manager loved him in the interview. The offer went out within the week.
Four months later, the team was quietly falling apart.
Not because Kwame was a bad person. Not because he was lazy or disengaged. But because the version of Kwame that existed on paper and the version that showed up to work every day were two genuinely different people — and nobody in the hiring process had built anything capable of telling them apart.
This story has no single villain. Not the candidate, not the hiring manager, not even the resume itself. The villain, if there is one, is the assumption that a document can do a job that only evidence can do.
It's worth understanding how we got here.
The resume made sense once. In a slower world — where skills changed gradually, where the gap between what you studied and what you'd need to do was manageable, where a hiring manager could reasonably infer capability from a career history — a tidy summary of where you'd been was a reasonable proxy for what you could do. Companies built entire hiring processes around it. Industries emerged to help people write better ones. For decades, it worked well enough that nobody asked too hard whether it was actually working.
Then the world sped up. Technology started changing faster than any university curriculum could track. AI tools got good enough that 78% of job applications now contain AI-generated content, and 65% of hiring managers say they can no longer verify the skills listed on the resumes they receive. The document that was already an imperfect signal became something closer to noise — optimised not for honesty but for the algorithm deciding whether a human ever sees it at all.
And then there's what the data shows about the people sending those resumes. 36% of job seekers now openly admit to listing skills they don't yet have. The most commonly faked skill is AI, the single capability that more companies are urgently hiring for than anything else right now. The tool everyone is racing to find talent in is also the most frequently fabricated skill on the documents they're using to find it.
There's something almost poetic about that, if you're not the one paying for the bad hires.
The companies that moved fastest away from credential-based hiring didn't do it because they were feeling progressive. They did it because the data made it impossible to keep pretending the old way was working.
Google's internal research produced a finding that quietly embarrassed the industry when it surfaced: after analysing thousands of hires across multiple years, they found that university prestige had almost zero correlation with long-term job performance. They dropped degree requirements, introduced structured competency assessments, and saw 20% more diverse engineering hires alongside measurably faster product cycles. Not as a side effect — as a direct result of the better signal their new process was generating.
IBM went further and built an entire talent philosophy around it. Its New Collar programme removed degree requirements for technical roles and replaced them with standardised assessments, structured interviews, and hiring manager training. The common thread between every company that genuinely transformed its hiring — IBM, Google, Apple, Accenture — wasn't a policy announcement. It was infrastructure investment. They didn't just say they were hiring differently. They built the tools to actually do it.
McKinsey found that hiring for skills is five times more predictive of job performance than hiring based on education, and more than twice as effective as hiring based on work experience alone. Five times. That's not a marginal improvement in hiring quality — that's a structural competitive advantage that compounds across every single hire a growing team makes.
Siemens discovered this in a way that's hard to forget once you hear it. When it needed teams for new smart manufacturing facilities, it didn't post job descriptions and wait. It had already built a skills intelligence system tracking demonstrated capabilities — not formal qualifications — across 300,000 employees globally. The system surfaced electrical engineers who coded as a hobby, quality control specialists with deep data analysis skills, people whose CVs would never have appeared in a traditional search. These became the core teams for the new facilities. Siemens didn't stumble onto hidden talent. It built a system specifically designed to see what traditional hiring had trained itself to look past.
Here's the part the industry is slower to admit: technical skill alone has never been enough.
There's a category of hiring failure that no assessment of coding ability will catch, and it's the kind that tends to be most expensive when it arrives. The new hire whose technical work is genuinely good, but who goes quiet when they're stuck rather than surfacing the problem. Who makes the standup tense without anyone being able to say exactly why. Who can't explain their own work to a non-technical stakeholder without making the room feel stupid for asking. The work is fine. Everything around the work is quietly broken.
89% of hiring failures come down to soft skills, not technical incompetence. Nearly nine in ten times something goes wrong in a hire, the code was never the problem.
The talent who can build something technically excellent and bring people with them through the complexity of building it is a different category of hire from the one who can only do the first part. Most hiring processes aren't designed to find that distinction. The best ones treat it as the whole point.
This is where assessment design starts to matter enormously. Not a timed whiteboard challenge. Not a multiple-choice test on programming syntax. But structured scenarios — the kind that reveal how someone actually thinks under realistic conditions, how they communicate when they don't immediately know the answer, whether they can hold a complex problem in their head and walk someone else through it at the same time. The closer the assessment mirrors the real work, the more honest the signal it returns.
Pre-hire assessments can reduce time-to-hire by up to 50% — not by cutting corners but by eliminating noise earlier, so the conversations that actually matter happen faster with the candidates who genuinely belong in them.
Take Evalia. It's worth talking about specifically because it was built for exactly the gap that most hiring processes fall into, the space between what a candidate presents and what they can actually do.
Evalia gives hiring managers structured scorecards, calibrated interview panels, and AI-guided insights so that decisions are fast and defensible — built on evidence, not impressions. Every candidate moves through the same structured process. Every evaluation is scored against the same framework. The bias that lives inside traditional interviewing — the likability bias, the familiarity bias, the tendency to favour candidates who remind the interviewer of themselves — gets methodically removed from the equation.
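The mechanics of a structured scorecard are simple enough to sketch. The snippet below is a hypothetical illustration, not Evalia's actual implementation — the criteria, weights, and function names are invented for the example. The point it demonstrates is the one above: every candidate is rated on the same criteria, and the same weights combine those ratings, so two evaluators looking at the same evidence land in the same place.

```python
# Hypothetical structured scorecard: every candidate is scored against the
# same weighted criteria, so the decision rests on evidence rather than an
# interviewer's overall impression. Criteria and weights are illustrative.

CRITERIA = {
    "problem_solving": 0.35,
    "communication": 0.25,
    "domain_knowledge": 0.25,
    "collaboration": 0.15,
}

def scorecard_total(ratings: dict[str, int]) -> float:
    """Combine per-criterion ratings (1-5) into one weighted score on [1, 5].

    Refuses to score an incomplete card, so no criterion can be quietly
    skipped for a candidate the panel happens to like.
    """
    missing = set(CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    return sum(weight * ratings[name] for name, weight in CRITERIA.items())

# Two candidates, identical framework, directly comparable numbers.
candidate_a = scorecard_total({"problem_solving": 4, "communication": 5,
                               "domain_knowledge": 3, "collaboration": 4})
candidate_b = scorecard_total({"problem_solving": 5, "communication": 3,
                               "domain_knowledge": 4, "collaboration": 3})
```

The design choice that matters is the error on a missing criterion: a traditional interview lets an evaluator silently weight whatever impressed them; a scorecard forces every dimension to be scored before a total exists.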
What makes it particularly relevant right now is that it isn't built exclusively for engineering roles. Skills-based assessment matters just as much when you're hiring a product manager, a designer, an operations lead, a finance hire — any role where the gap between a polished CV and actual capability is wide enough to cost you. The structured scorecard adapts to whatever the role demands. The principle stays the same across all of them: what can you actually show us, rather than what have you told us about yourself?
For companies hiring across borders — and increasingly, the best companies are — that consistency matters even more. A talent in Accra being evaluated through the same structured framework as a talent in Amsterdam isn't just fairer. It's more accurate. And accuracy, in hiring, is the whole game.
The resume had a good run. For a long time, it was the best available tool for a genuinely difficult problem — how do you evaluate a stranger's capability at scale, quickly, before you've seen them work?
In 2026, better answers exist. They've been validated by enough research, and enough expensive mistakes, that the companies still defaulting to the old approach are doing so by choice rather than necessity.
Kwame's story doesn't have to keep repeating itself. The hiring process that produced it wasn't inevitable; it was just the one nobody had gotten around to replacing yet.
The replacement is here. The only question left is how long it takes the rest of the industry to use it.
