
This blog post is based on what we learned after building and launching a conversational AI recruiter in just 43 hours.
You schedule a screening call. The candidate seems perfect on paper, with a strong resume, relevant experience, and good communication in email.
Then they don't show up. Or they show up but can't answer basic questions about their own resume. Or, worst case, you hire them and discover weeks later that they're not who they claimed to be.
Fake candidates aren't a minor annoyance anymore. They're a systematic problem that's costing companies real money.
The Problem Is Getting Worse
Remote work changed hiring. You can recruit globally now. That's great for finding talent, but it also made verification harder.
You used to meet candidates in person. You could verify identity naturally during an office interview. Now you're hiring people you've never met face-to-face.
The economics changed, too. For someone running a fake candidate operation, the barrier to entry is low. Create a LinkedIn profile, copy a real person's resume, and use their photo. Apply to hundreds of jobs and get a few offers. Collect paychecks for a few weeks before getting caught.
The math works for them because they're playing a numbers game.
What It Actually Costs You
Let's break down what one fake candidate costs:
- Recruiter time on initial screen: 30-60 minutes to review application, schedule call, conduct phone screen, and write notes
- Hiring manager time on technical interview: 60-90 minutes, including prep and evaluation
- Team time if they get hired: Days or weeks of onboarding, training, code review, and meetings before you realize something is wrong
- Opportunity cost: The real candidate you didn't hire because you gave the offer to someone fake
- Replacement cost: Starting the entire process over
One fake candidate that gets through initial screening costs 3-5 hours of your team's time. If they get hired, it's 40-80 hours before you catch them.
At loaded costs of $75-150/hour for your team, that's $3,000-$12,000 per fake hire. Now multiply by how many times this happens.
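The math above can be sketched as a quick cost model. The hour counts and loaded rates are the estimates from this post, not measured data:

```python
# Rough cost model for a single fake candidate, using the ranges above.
# All figures are estimates from this post, not measured data.

def fake_candidate_cost(hours_wasted, loaded_rate_per_hour):
    """Cost of one fake candidate: wasted team hours times loaded hourly rate."""
    return hours_wasted * loaded_rate_per_hour

# Caught at initial screening: 3-5 hours of team time
screening_low = fake_candidate_cost(3, 75)
screening_high = fake_candidate_cost(5, 150)

# Hired and caught weeks later: 40-80 hours
hired_low = fake_candidate_cost(40, 75)
hired_high = fake_candidate_cost(80, 150)

print(f"Screening miss: ${screening_low}-${screening_high}")   # $225-$750
print(f"Bad hire: ${hired_low}-${hired_high}")                 # $3000-$12000
```

The bad-hire range reproduces the $3,000-$12,000 figure above; plug in your own team's rates to get a local estimate.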
Why This Happens
The traditional hiring process has gaps:
- Resume screening: You're verifying documents, not people. A fake resume looks the same as a real one.
- Phone screens: You're talking to someone, but you don't know if they match the application. Voice doesn't verify identity.
- Video interviews: Better, but you're still relying on manual comparison. Did the person on the call match the LinkedIn photo? Most interviewers don't check carefully.
- Reference checks: Easy to fake. Provide phone numbers for accomplices who confirm the false background.
The verification step that would catch fakes (comparing the person to their application photo and checking for signs of impersonation) doesn't happen until it's too late. By the time you have doubts, you've already invested hours.
What We Did About It
At 10Clouds, we built an AI Recruiter system that conducts initial screening interviews, with two goals: efficiency and verification.
Every interview starts with identity verification:
- The system asks candidates to turn on their camera
- It captures a frame and compares it to their application photo using facial recognition
- It runs anti-spoofing checks to detect if someone is holding up a photo or playing a video instead of appearing live
The verification happens in the first 60 seconds, before any interview time is invested.
If the match confidence is high and anti-spoofing passes, the interview continues. If not, the system flags it for manual review.
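The decision step can be sketched in a few lines. This is a simplified illustration, not our exact implementation: the thresholds are hypothetical, and the face-embedding distance and liveness score are assumed to come from upstream models (a library such as face_recognition produces distances in this range):

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would be tuned on labeled data.
MATCH_DISTANCE_THRESHOLD = 0.6   # common default tolerance for face-embedding distance
LIVENESS_THRESHOLD = 0.8         # anti-spoofing confidence required to pass

@dataclass
class VerificationResult:
    verified: bool
    reason: str

def verify_candidate(face_distance: float, liveness_score: float) -> VerificationResult:
    """Decide whether the interview proceeds or gets flagged for manual review.

    face_distance: distance between the live frame's face embedding and the
        application photo's embedding (lower means more similar).
    liveness_score: anti-spoofing model confidence that a live person is on
        camera (higher means more likely live).
    """
    if liveness_score < LIVENESS_THRESHOLD:
        return VerificationResult(False, "possible photo or screen replay; flag for manual review")
    if face_distance > MATCH_DISTANCE_THRESHOLD:
        return VerificationResult(False, "face does not match application photo; flag for manual review")
    return VerificationResult(True, "identity verified; continue interview")
```

Checking liveness first matters: a replayed photo of the right person would pass the face match, so the anti-spoofing check has to gate it.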
What this catches:
- Someone using another person's identity gets flagged when faces don't match
- Professional impersonation services get detected by anti-spoofing that catches photos, screens, and video playback
- AI-generated profile photos fail when there's no real person to show on camera
- Stock photo profiles get caught when the person on camera doesn't match the application
The system processes everything locally, so face data never leaves the machine. That avoids cloud storage and sidesteps most data privacy concerns. Real candidates experience minimal friction: the verification takes about 30 seconds.
The Results
We've been testing this internally. Here's what changed:
Before:
- Fake candidates made it to phone screens regularly
- Recruiters spent time on calls that went nowhere
- No systematic way to verify identity early
After:
- Identity verification happens before any recruiter time is spent
- Fake candidates get flagged automatically
- Recruiters only talk to verified candidates
The time savings: 30-60 minutes per fake candidate that would have gotten a phone screen.
The cost savings: $75-150 in recruiter time per flagged candidate, plus avoided downstream costs if they would have progressed further.
The psychological impact: Recruiters trust the pipeline more. Less time wasted means less frustration.
What This Means for Your Hiring
Even if you don't need AI to conduct interviews, you might need better verification.
Ask yourself:
- How often do candidates not show up for scheduled calls?
- How often do people perform much worse in interviews than their resume suggested?
- Have you hired someone who turned out not to be who they claimed?
- How much recruiter time do you spend on screens that go nowhere?
If these happen regularly, you have a verification problem.
Solutions at different scales
Small volume (under 20 hires/year):
Manual verification during video interviews. Take 30 seconds to compare the person on camera to their LinkedIn photo. Ask them to show ID if you have doubts.
Cost: Free; just a process change.
Medium volume (20-100 hires/year):
Add a verification step before phone screens. Use a simple video check where candidates record a 30-second intro and confirm identity visually.
Cost: Minimal, can use free tools.
High volume (100+ hires/year):
Automated verification becomes worth it. Build or buy a system that checks identity at scale before investing recruiter time.
Cost: Development time or vendor fees, but saves multiples of that in recruiter time.
The key principle: verify early, before you invest time.
The Broader Issue: Trust in Remote Hiring
Fake candidates are a symptom of a bigger problem. Remote hiring removed the natural verification that happened when you met people in person.
Companies adapted by adding more interview rounds, more reference checks, and more verification after hiring. But this slows everything down and creates friction for real candidates.
The better approach: verify identity early and thoroughly. Then streamline everything else.
Real candidates don't mind quick verification. They want you to catch fakes too, as it means less competition from fraudulent applications. Fake candidates can't pass proper verification. They rely on gaps in your process.
Close the gap early and you solve the problem before it costs you money.
What To Do Next
If you're dealing with fake candidates:
- Start tracking the problem. Count how many candidates don't show up, can't answer basic questions about their resume, or raise red flags during interviews. Get a baseline.
- Calculate the cost. Multiply the number of suspicious candidates by the time your team spent on them. Use loaded hourly rates. See what this actually costs.
- Decide on verification. Based on volume and cost, figure out what level of verification makes sense. Manual process change, simple video verification, or automated system.
- Verify before you invest time. Whatever solution you choose, make sure verification happens before phone screens, not after.
- Test and iterate. Start with one role or one recruiter. Measure time saved and false positives. Adjust based on what you learn.
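Steps one and two above can be a spreadsheet, or a few lines of code. This sketch (the record fields, sample data, and rate are illustrative placeholders, not real numbers) computes a baseline count and cost from screening records:

```python
# Minimal baseline tracker for steps 1-2 above. Sample records and the
# loaded rate are illustrative placeholders, not real data.
from dataclasses import dataclass

@dataclass
class ScreenRecord:
    candidate: str
    no_show: bool
    red_flags: bool
    minutes_spent: int

def suspicious_cost(records, loaded_rate_per_hour):
    """Count suspicious candidates and total the time and cost spent on them."""
    suspicious = [r for r in records if r.no_show or r.red_flags]
    hours = sum(r.minutes_spent for r in suspicious) / 60
    return len(suspicious), hours, hours * loaded_rate_per_hour

records = [
    ScreenRecord("A", no_show=True, red_flags=False, minutes_spent=30),
    ScreenRecord("B", no_show=False, red_flags=True, minutes_spent=60),
    ScreenRecord("C", no_show=False, red_flags=False, minutes_spent=45),
]
count, hours, cost = suspicious_cost(records, loaded_rate_per_hour=100)
print(count, hours, cost)  # 2 suspicious candidates, 1.5 hours, $150.0
```

Run it over a month of screens and you have the baseline that step one asks for, priced at your own loaded rate.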
The fake candidate problem isn't going away. Remote hiring is permanent. The economics that make fraud profitable haven't changed. But you can protect your team's time by verifying identity before you invest in evaluating candidates.




