Meta is preparing to harness facial recognition once again, but this time, the goal is to combat scammers without breaking privacy laws.
The company aims to help Facebook and Instagram detect and crack down on so-called “celeb-bait ads,” which exploit a celebrity’s image to trick users into visiting a scam website.
Meta’s advertising system already performs a routine check before it publishes an ad. The problem is that numerous legitimate ads rely on celebrity images, making it hard for the company’s machine learning models to spot fraudulent ads, Monika Bickert, Meta’s VP for content policy, said during a press briefing.
To fix this, the company wants to tap facial-recognition technology to compare faces in the celeb-bait ads against the Facebook and Instagram profile pictures of public figures.
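Meta hasn’t described the underlying pipeline, but a check like this typically boils down to face embeddings: detect a face in the ad creative, convert it to a numeric vector, and measure how close that vector sits to ones computed from the public figure’s profile photos. The sketch below is purely illustrative, built on the open-source face_recognition library rather than anything Meta has disclosed:

```python
# Illustrative only: a generic face-embedding comparison, not Meta's system.
# Uses the open-source face_recognition library (pip install face_recognition).
import face_recognition

def ad_matches_public_figure(ad_image_path, profile_image_paths, tolerance=0.6):
    """Return True if a face in the ad closely matches any known profile photo."""
    ad_image = face_recognition.load_image_file(ad_image_path)
    ad_encodings = face_recognition.face_encodings(ad_image)
    if not ad_encodings:
        return False  # no detectable face in the ad creative

    # Build embeddings from the public figure's known profile pictures.
    known_encodings = []
    for path in profile_image_paths:
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if encodings:
            known_encodings.append(encodings[0])

    # Compare each face found in the ad against the known embeddings;
    # a distance at or below the tolerance counts as a match.
    for ad_encoding in ad_encodings:
        distances = face_recognition.face_distance(known_encodings, ad_encoding)
        if any(d <= tolerance for d in distances):
            return True
    return False
```

In the flow Bickert describes, a face match alone wouldn’t be enough; the ad would also have to be judged a scam before it gets blocked.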
“If we confirm a match and that the ad is a scam, we’ll block it,” Bickert says. “Early testing with a small group of celebrities and public figures shows promising results in increasing the speed and efficacy with which we can detect and enforce against this type of scam.”
Meta will also use facial-recognition technology to help people recover their accounts. This can occur when a hacker breaks in and changes the password and recovery options or when the user loses their phone and can’t receive a recovery code.
If an account takeover does occur, Meta can require the user to upload an official ID to help them regain access. The company is now testing another option that involves the user uploading a “video selfie” to recover their account. Meta will then compare the selfie to the user’s profile pictures; if everything is legit, then access will be restored.
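Meta hasn’t published details of the selfie check either, but conceptually it is the same comparison in reverse: sample frames from the uploaded video, extract face embeddings, and see whether they match embeddings from the account’s existing profile photos. Here is a rough sketch under those assumptions, again using the open-source face_recognition library plus OpenCV, not Meta’s actual pipeline:

```python
# Illustrative only: a generic video-selfie check, not Meta's recovery system.
import cv2                 # pip install opencv-python
import face_recognition    # pip install face_recognition

def selfie_matches_profile(video_path, profile_image_path,
                           frames_to_sample=10, tolerance=0.6):
    """Return True if faces sampled from the video match the profile photo."""
    profile_image = face_recognition.load_image_file(profile_image_path)
    profile_encodings = face_recognition.face_encodings(profile_image)
    if not profile_encodings:
        return False  # profile photo has no usable face

    capture = cv2.VideoCapture(video_path)
    total_frames = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total_frames // frames_to_sample, 1)

    sampled = 0
    matches = 0
    for frame_index in range(0, total_frames, step):
        capture.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
        ok, frame = capture.read()
        if not ok:
            continue
        sampled += 1
        # OpenCV returns BGR frames; face_recognition expects RGB.
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for encoding in face_recognition.face_encodings(rgb_frame):
            distance = face_recognition.face_distance(profile_encodings, encoding)[0]
            if distance <= tolerance:
                matches += 1
                break
    capture.release()

    # Require a match in at least half of the frames we could sample.
    return sampled > 0 and matches / sampled >= 0.5
```

A production system would also need liveness detection layered on top of the matching step, which is presumably part of what Meta is weighing against the deepfake concern raised below.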
But we’ll be interested to see if scammers can exploit the system using deepfake technologies. For now, a Meta spokesperson noted: “We don’t have data yet from this test to fully understand how well it stands up against synthetic videos. This is something we’re going to monitor as we learn and iterate.”
The news is a bit surprising since Meta shut down its facial-recognition system in 2021, citing “growing concerns about the use of this technology as a whole.” Back then, the company was harnessing facial recognition to help users automatically tag photos of themselves. However, Meta was later forced to pay huge settlements in Illinois and Texas because the tech also amounted to collecting users’ biometric information without consent.
Despite that setback, Bickert notes the company always saw potential in using facial-recognition tech for privacy and security-related purposes. This time, Meta will avoid testing its systems in the EU, the UK, Illinois, and Texas. The company is also promising to only use facial-recognition technology for security purposes. “We immediately delete any facial data generated after this comparison regardless of whether there’s a match or not,” Bickert says.
Meta says it has already thoroughly vetted the facial-recognition technology through a “robust privacy and risk review process,” which involved consulting with privacy lawyers, engineers, and relevant policy experts. In December, the company plans to broadly test the facial-recognition system against celeb-bait ads. Celebrities will also be able to opt out. The video selfie recovery option will roll out to more users in the coming months.