The technology has been the subject of growing scrutiny in recent years from regulators, criminal justice and privacy advocates, and consumers. Facial recognition has been especially controversial when used by law enforcement agencies because of questions about its accuracy and numerous studies that demonstrate the technology is less accurate when identifying women and people of color.
The news from Facebook comes in the middle of a public relations blitz. The company faces intense scrutiny from regulators after thousands of documents leaked by former employee Frances Haugen showed that it had buried evidence of numerous problems, including misinformation, abuse by drug cartels and human traffickers, and harm to teenagers’ mental health from its Instagram platform.
Facebook changed the name of its parent company to Meta last week, and said it will expand its work on augmented reality and virtual reality environments.
Many Facebook users may have enjoyed a little algorithmic help tagging friends in photos, and facial recognition has also been a key part of Facebook’s celebrated Automatic Alt Text feature for people who are visually impaired.
However, a 2019 Consumer Reports investigation found that many users didn’t have access to the setting that turns facial recognition off, a problem that affected hundreds of millions of people. Facebook ultimately corrected the problem and disabled the technology unless users opted in. Facebook’s facial recognition feature also figured in a Federal Trade Commission complaint that led to a $5 billion fine against the company, as well as a class-action lawsuit that was ultimately settled for $650 million.
“It’s good to see Facebook demonstrating a modicum of humility in pulling this feature back,” says Justin Brookman, director of privacy and technology policy at Consumer Reports. “I’m sure there are instances when it’s been helpful for some users, but at the end of the day, the use of facial recognition is invasive and often unwanted. The benefits just don’t outweigh the downsides.”
Facebook used facial recognition only on its own apps and websites, and did not let outside companies or organizations leverage the technology. By contrast, other businesses sell access to facial recognition systems. In 2020, Amazon and Microsoft said they would temporarily stop providing the technology to law enforcement, and IBM said it would stop working on facial recognition altogether. Several municipalities have barred their police forces and other local agencies from using the technology.
If you’re the type of person who skips over pop-ups when you’re trying to log in to a service, you might have agreed to let Facebook create a template of your face without even realizing it—and for years Facebook deployed facial recognition on every user without asking permission anywhere outside of the privacy policy.
You don’t have to do anything to have your facial recognition data deleted. Facebook will do that automatically. You’ll still be able to tag people in photos, but you’ll have to do it manually. Facebook’s Automatic Alt Text technology will still be able to recognize how many people are in a photo, but it will stop attempting to identify who each person is.
But the end of those features might not mean the end of facial recognition for Facebook, Instagram, or other services owned by their parent company.
Consumer Reports asked Facebook whether it would delete or otherwise change the facial recognition algorithm it uses to create the face templates that store information about an individual’s features. The company didn’t directly answer that question.
“We are deleting all of the face templates that were calculated and turning off the underlying service that was generating them,” company spokesperson Jason Grosse wrote in an email.
That means Facebook or its parent company, Meta, could use that algorithm for other facial recognition projects at some point down the line.
“Looking ahead, we still see facial recognition technology as a powerful tool, for example, for people needing to verify their identity, or to prevent fraud and impersonation,” Meta’s Jerome Pesenti wrote in a blog post announcing the change. “While we will continue working on use cases like these, we will ensure people have transparency and control over whether they are automatically recognized.”