
“What’s funny is, the low-fi approach works,” says Daniel Goldman, a blockchain software engineer and former startup founder. Goldman says he began changing his own behavior after he heard a prominent figure in the crypto world had been convincingly deepfaked on a video call. “It put the fear of god in me,” he says. Afterwards, he warned his family and friends that even if they hear what they believe is his voice or see him on a video call asking for something concrete—like money or an internet password—they should hang up and email him first before doing anything.
Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city listed on their resume, such as their favorite coffee shops and places to hang out. If the applicant actually lives in that area, Schumacher says, they should be able to respond quickly with accurate details.
Another verification tactic some people use, Schumacher says, is what he calls the “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to their laptop. The idea is to verify whether the individual is running deepfake technology on their computer to obscure their true identity or surroundings. But it’s safe to say this approach can also be off-putting: Honest job candidates may be hesitant to show the inside of their homes or offices, or may worry that a hiring manager is trying to learn details about their personal lives.
“Everyone is on edge and wary of each other now,” Schumacher says.
While turning yourself into a human captcha may be a fairly effective approach to operational security, even the most paranoid admit these checks create an atmosphere of distrust before two parties have even had the chance to really connect. They can also be a huge time suck. “I feel like something’s gotta give,” Yelland says. “I’m wasting so much time at work just trying to figure out if people are real.”
Jessica Eise, an assistant professor studying climate change and social behavior at Indiana University-Bloomington, says that her research team has essentially been forced to become digital forensics experts, due to the number of fraudsters who respond to ads for paid virtual surveys. (Scammers aren’t as interested in the unpaid surveys, unsurprisingly.) If the research project is federally funded, all of the online participants have to be over the age of 18 and living in the US.
“My team would check time stamps for when participants answered emails, and if the timing was suspicious, we could guess they might be in a different time zone,” Eise says. “Then we’d look for other clues we came to recognize, like certain formats of email address or incoherent demographic data.”
Eise says the amount of time her team spent screening people was “exorbitant,” and that they’ve now shrunk the size of the cohort for each study and turned to “snowball sampling,” or recruiting people they know personally to join their studies. The researchers are also handing out more physical flyers to solicit participants in person. “We care a lot about making sure that our data has integrity, that we’re studying who we say we’re trying to study,” she says. “I don’t think there’s an easy solution to this.”
Barring any widespread technical solution, a little common sense can go a long way in spotting bad actors. Yelland shared with me the slide deck that she received as part of the fake job pitch. At first glance, it seemed like a legitimate pitch, but when she looked at it again, a few details stood out. The job promised to pay substantially more than the average salary for a similar role in her location, and offered unlimited vacation time, generous paid parental leave, and fully covered health care benefits. In today’s job environment, that might have been the biggest tipoff of all that it was a scam.