Deepfake detectors can be defeated, computer scientists show for the first time
Systems designed to detect deepfakes (videos that manipulate real-life footage via artificial intelligence) can be deceived, computer scientists showed for the first time at the WACV 2021 conference, held online Jan. 5 to 9, 2021.
Researchers showed that detectors can be defeated by inserting inputs called adversarial examples into every video frame. Adversarial examples are slightly perturbed inputs that cause artificial intelligence systems, such as machine learning models, to make mistakes. The team also showed that the attack still works after the videos are compressed.
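The idea behind a per-frame adversarial example can be sketched with a toy example. The release does not describe the researchers' actual attack or detector, so everything below is an assumption for illustration: a hypothetical linear "detector" scoring each frame, attacked with a single fast-gradient-sign (FGSM) step that nudges every frame toward a "real" classification.

```python
import numpy as np

# Hypothetical toy "detector": logistic regression on a flattened frame.
# A score above 0.5 means the detector calls the frame "fake".
# (Assumed stand-in -- not the detector or attack from the paper.)
rng = np.random.default_rng(0)
W = rng.normal(size=64)  # assumed fixed detector weights
b = 0.0

def detector(frame):
    """Probability that a single frame is fake, per the toy model."""
    return 1.0 / (1.0 + np.exp(-(frame @ W + b)))

def fgsm_toward_real(frame, eps=0.05):
    """One fast-gradient-sign step that lowers the frame's 'fake' score.

    For this toy model the gradient of the fake-probability p with
    respect to the frame is p * (1 - p) * W; only its sign is used.
    """
    p = detector(frame)
    grad = p * (1.0 - p) * W
    # Step against the gradient: a small, barely visible perturbation.
    return frame - eps * np.sign(grad)

# Perturb every frame of a synthetic "video", as the attack requires.
video = rng.normal(size=(10, 64))  # 10 frames, 64 "pixels" each
adv_video = np.array([fgsm_toward_real(f) for f in video])

before = np.mean([detector(f) for f in video])
after = np.mean([detector(f) for f in adv_video])
```

After the step, the mean "fake" score drops even though each pixel moved by at most `eps`; the real attack applies this kind of perturbation to every frame of an actual video, strongly enough to survive compression.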
Here they show that XceptionNet, a deepfake detector, labels the adversarial video they created as real.
This shows that even a low-quality deepfake can be recognized as real by detectors once it is modified with the proper adversarial inputs.