Deepfakes reveal vulnerabilities in certain facial recognition technology

UNIVERSITY PARK, Pa. — Mobile devices use facial recognition technology to help users quickly and securely unlock their phones, make financial transactions or access medical records. But according to a new study from the Penn State College of Information Sciences and Technology, facial recognition technologies that rely on a particular method of identifying users are highly vulnerable to deepfake-based attacks that could lead to serious security issues for users and applications.

Researchers found that most application programming interfaces (APIs) that use facial liveness verification — a feature of facial recognition technology that uses computer vision to confirm the presence of a live user — do not always detect digitally altered photos or videos of individuals made to look like a live version of someone else, also known as deepfakes. Applications that use these detection measures are also significantly less effective at detecting deepfakes than what the app's vendor claims.

“In recent years, we have seen significant development in facial authentication and verification technologies, which have been deployed in many security-critical applications,” said Ting Wang, associate professor of information sciences and technology and one of the project’s principal investigators. “Meanwhile, we have also seen significant progress in deepfake technology, which makes it quite easy to synthesize live facial images and videos at little cost. We therefore asked an interesting question: Is it possible for attackers to abuse deepfakes to fool facial recognition systems?”

The study, presented this week at the USENIX Security Symposium, is the first systematic analysis of the security of facial liveness verification in real-world settings.

Wang and his colleagues developed a new deepfake-powered attack framework, called LiveBugger, which enables customizable, automated security evaluation of facial liveness verification. They evaluated six leading commercial facial liveness verification APIs. According to the researchers, any vulnerabilities in these products could be inherited by other applications that use them, potentially putting millions of users at risk.
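The researchers' own tooling is not reproduced here, but the following minimal Python sketch illustrates the kind of automated evaluation such a framework performs: feeding synthesized samples to each verification mode of each API and recording how often they slip through. The endpoint names, the `probe_api` and `submit` helpers, and the accept/reject logic are illustrative assumptions, not the project's actual code or any vendor's interface.

```python
# Hypothetical evaluation harness: loop deepfake samples over verification
# modes and tally bypass rates. All names and logic here are placeholders.
from dataclasses import dataclass


@dataclass
class ProbeResult:
    api_name: str
    mode: str          # e.g. "image", "silence", "voice", "action"
    attempts: int
    bypasses: int

    @property
    def bypass_rate(self) -> float:
        return self.bypasses / self.attempts if self.attempts else 0.0


def submit(api_name: str, mode: str, sample) -> bool:
    # Stand-in for a real network call to a vendor's liveness endpoint.
    # Here we simply simulate an accept/reject decision for demonstration.
    return sample.get("quality", 0.0) > 0.5


def probe_api(api_name: str, mode: str, deepfake_samples) -> ProbeResult:
    """Submit each synthesized sample to one verification mode and count passes."""
    bypasses = sum(1 for s in deepfake_samples if submit(api_name, mode, s))
    return ProbeResult(api_name, mode, len(deepfake_samples), bypasses)


if __name__ == "__main__":
    samples = [{"quality": q} for q in (0.3, 0.6, 0.9, 0.7)]
    for api in ("vendor_a", "vendor_b"):
        for mode in ("image", "silence", "voice", "action"):
            result = probe_api(api, mode, samples)
            print(f"{result.api_name:8s} {result.mode:8s} bypass rate {result.bypass_rate:.0%}")
```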

Using deepfake images and videos drawn from two separate datasets, LiveBugger attempted to fool the APIs’ facial liveness verification methods, which aim to verify a user’s identity by analyzing static or video images of their face, listening to their voice, or measuring their response to performing an action on command.
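To make the verification modes concrete, the sketch below models the four common checks — a static image, a short "silence" video, a voice recording, and an on-command action — with toy decision rules. The class and function names and the placeholder logic are assumptions for illustration only; real systems rely on trained detectors rather than these simple checks.

```python
# Illustrative sketch only: this does not reproduce any vendor's API.
# It shows why each mode, taken alone, can be satisfied by synthesized media.
from dataclasses import dataclass, field
from enum import Enum, auto


class Mode(Enum):
    IMAGE = auto()    # single static selfie
    SILENCE = auto()  # short video, no user action required
    VOICE = auto()    # user reads a prompt aloud
    ACTION = auto()   # user blinks / nods / turns head on command


@dataclass
class Submission:
    mode: Mode
    frames: list = field(default_factory=list)   # video frames or one image
    audio: bytes = b""                            # recorded speech, if any
    requested_actions: list = field(default_factory=list)
    observed_actions: list = field(default_factory=list)


def verify_liveness(sub: Submission) -> bool:
    """Toy decision logic for each mode; real systems use trained detectors."""
    if sub.mode is Mode.IMAGE:
        # Weakest mode: a single photo (or deepfake still) is all that is checked.
        return len(sub.frames) == 1
    if sub.mode is Mode.SILENCE:
        # Looks for motion across frames; a deepfake video also has motion.
        return len(sub.frames) > 1
    if sub.mode is Mode.VOICE:
        # Confirms speech was captured; says nothing about who is speaking.
        return len(sub.audio) > 0
    if sub.mode is Mode.ACTION:
        # Confirms the requested gestures appear; a synthesized face can mimic them.
        return sub.observed_actions == sub.requested_actions
    return False


if __name__ == "__main__":
    # A deepfake video that mimics the requested actions passes the naive check.
    attack = Submission(
        mode=Mode.ACTION,
        frames=["frame"] * 30,
        requested_actions=["blink", "turn_left"],
        observed_actions=["blink", "turn_left"],
    )
    print("accepted:", verify_liveness(attack))
```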

The researchers found that all four of the most common verification methods could be easily bypassed. In addition to describing how their framework circumvented these methods, they offer suggestions for improving the technology's security, including eliminating verification methods that analyze only a static image of the user’s face, and matching lip movements to the user’s voice in methods that analyze both audio and video from the user.
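The second suggestion — checking that lip movement is consistent with the recorded voice — can be pictured with the small sketch below. The signal names, the correlation-based rule and the 0.6 threshold are illustrative assumptions, not the paper's method or any vendor's implementation.

```python
# Hedged sketch of the lip-sync consistency idea: accept only if mouth motion
# and speech energy rise and fall together over time.
from math import sqrt


def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length signals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0


def lips_match_audio(mouth_openness, audio_energy, threshold=0.6):
    """Per-frame mouth openness should track the audio's loudness envelope."""
    return pearson(mouth_openness, audio_energy) >= threshold


if __name__ == "__main__":
    # Genuine speaker: the mouth opens exactly when the audio gets louder.
    live_mouth = [0.1, 0.8, 0.9, 0.2, 0.1, 0.7, 0.8, 0.2]
    live_audio = [0.2, 0.9, 1.0, 0.3, 0.1, 0.8, 0.9, 0.3]
    # Deepfake video paired with a separately recorded voice: timing drifts apart.
    fake_mouth = [0.8, 0.1, 0.2, 0.9, 0.8, 0.1, 0.2, 0.9]
    fake_audio = [0.2, 0.9, 1.0, 0.3, 0.1, 0.8, 0.9, 0.3]
    print("live accepted:", lips_match_audio(live_mouth, live_audio))
    print("fake accepted:", lips_match_audio(fake_mouth, fake_audio))
```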

“While facial liveness verification can protect against many attacks, the development of deepfake technology creates a new threat to it that is still little known,” said Changjiang Li, a doctoral student in information sciences and technology and one of the paper’s authors. “Our findings help vendors address the vulnerabilities in their systems.”

The researchers reported their findings to the vendors whose APIs were used in the study, and one has since announced plans to launch a deepfake detection project to counter the new threat.

“Facial liveness verification has been used in many critical scenarios, such as online payments, online banking and government services,” Wang said. “Additionally, a growing number of cloud platforms have begun to provide facial liveness verification as a platform-as-a-service, which significantly reduces the cost and lowers the barrier for companies to deploy the technology in their products. Therefore, the security of facial liveness verification is of great concern.”

Wang and Li collaborated with Zhaohan Xi, a doctoral student in information sciences and technology at Penn State; Li Wang and Shangqing Guo of Shandong University; and Shouling Ji and Xuhong Zhang of Zhejiang University. The contributions from Penn State were supported in part by the National Science Foundation.
