Detect Fakes is an MIT research project that explores the differences between how human beings and machine learning models spot AI-manipulated media. Are humans better at spotting fakes than machines, or is it the other way around? Better yet, are there some things humans excel at and others that machines excel at? If so, what exactly are they? Our goal is to communicate the technical details of DeepFakes through experience. We hope these DeepFake videos from the recent Kaggle DeepFake Detection Challenge (DFDC) give you a better sense of how algorithms can manipulate videos and what to look for when you suspect a video may be altered.
The data for this project comes from Kaggle's Deepfake Detection Challenge (DFDC). This website is not associated with Kaggle or the sponsors of the DFDC. The Kaggle competition website describes the challenge as follows: "AWS, Facebook, Microsoft, the Partnership on AI’s Media Integrity Steering Committee, and academics have come together to build the Deepfake Detection Challenge (DFDC). The goal of the challenge is to spur researchers around the world to build innovative new technologies that can help detect deepfakes and manipulated media." The winners of the challenge were awarded $1,000,000. Here is a link to the GitHub repository for the winning model. The challenge dataset contains 100,000 DeepFake videos and 19,154 real videos, which together make up just over 470 GB of video data. While this Detect Fakes project does not offer a $1,000,000 award, it provides an opportunity to learn more about DeepFakes and to see how well you can discern real from fake when it comes to AI-manipulated media. For more information, email email@example.com.
You can read more about the project on the MIT Media Lab project page.
Detect Political Fakes has COUHES approval. However, some of the videos you may see in our project do not have captions, because we are interested in understanding how the presence or absence of captions influences the discernment of deepfakes. We acknowledge that this excludes certain individuals from our research, and we apologize for that limitation.