Deepfakes are a growing concern for society. This year, a deepfaked voice of a CEO was used to steal $250,000 from a company.
The technology can produce uncanny, lifelike clips of politicians saying things they never said, and it isn't hard to imagine how that could be used to unsettle the public.
That's why Google has released a trove of deepfake videos to help researchers come up with detection methods.
Harmful to society
As Google points out in a new blog post, "while many [deepfake videos] are likely intended to be humorous, others could be harmful to individuals and society."
"Google considers these issues seriously. As we published in our AI Principles last year, we are committed to developing AI best practices to mitigate the potential for harm and abuse," the post continues.
Last year, the company released a dataset of synthetic speech for an international challenge that asked developers to devise detectors capable of catching deepfake audio clips.
This time, in collaboration with Jigsaw, Google has announced the release of a large dataset of deepfake videos. The company says in its blog post that the dataset has been incorporated into the new FaceForensics benchmark from the Technical University of Munich and the University Federico II of Naples, an initiative that Google co-sponsors.
Which one's real?
The new data was created with the help of consenting actors. Over the past year, Google paid the actors to record hundreds of videos. As Google says, "the resulting videos, real and fake, comprise our contribution, which we created to directly support deepfake detection efforts."
A few side-by-side examples of the videos, in which it's genuinely difficult to tell which one is real, can be seen below.
The incorporation of the data into the FaceForensics video benchmark was carried out with the help of leading researchers, including Prof. Matthias Niessner, Prof. Luisa Verdoliva, and the FaceForensics team.
The deepfake video data is free to download on the FaceForensics GitHub page.