Japan Institute Researchers Devise New Way to Fight Deepfakes

Ukrainian President Volodymyr Zelenskyy, right, speaks in a screenshot captured from the YouTube channel of U.S. news program “Inside Edition.” The image on the left is a deepfake.

A team at the National Institute of Informatics (NII) has developed a method to restore deepfake videos to their original content.

The misuse of these convincingly realistic fake videos, created with artificial intelligence software, has become a societal problem. For example, a person’s face can be replaced with that of a celebrity or politician, and the voice and facial expressions can be manipulated to make it seem as though the celebrity or politician said or did something they never did.

Ahead of next year’s U.S. presidential election, controversy has arisen after the campaigns of former U.S. President Donald Trump and current Florida Gov. Ron DeSantis posted fake videos on social media in an attempt to manipulate voters’ impressions. Both are vying to win the Republican Party’s nomination for the White House.

In recent years, it has become increasingly easy to generate deepfakes using apps and other computer programs. As technology for creating deepfakes becomes more sophisticated, it becomes more difficult to detect what is being faked.

Given the situation, the Japanese government has used subsidies and other assistance to support efforts by universities and research institutes to develop technology for detecting fake videos.

The team at NII, which is under the jurisdiction of the Education, Culture, Sports, Science and Technology Ministry, will look to use its method on deepfakes of public institutions and celebrities.

The method involves embedding identifying data about a person’s facial features into a video before it is published. Even if the face is later merged with that of a different person, the embedded information serves as clues for restoring the original face.
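The idea above can be illustrated with a minimal sketch. The NII team has not published implementation details in this article, so the following uses simple least-significant-bit steganography as a stand-in for their embedding scheme, and the payload string and function names are hypothetical:

```python
# Toy illustration of the "cyber vaccine" idea: embed recoverable data
# about the original content into a picture's least-significant bits
# before publishing, so it survives later tampering elsewhere in the image.
# This is NOT NII's actual algorithm; it is a simple stand-in.

def embed(pixels, payload):
    """Hide each payload bit in the least-significant bit of one pixel."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit only
    return out

def extract(pixels, n_bytes):
    """Recover the hidden payload from the pixel LSBs."""
    payload = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        payload.append(byte)
    return bytes(payload)

# Publish an image carrying an identifier of the original face
# (a hypothetical stand-in for NII's facial-feature data).
original = list(range(64, 192))      # stand-in 8-bit pixel values
payload = b"face:alice:v1"
protected = embed(original, payload)

# A forger alters pixels outside the embedded region ("swaps the face"),
# but the identifier survives and points back to the original content.
n = len(payload) * 8
tampered = protected[:n] + [0] * (len(protected) - n)
recovered = extract(tampered, len(payload))
```

In practice, a robust scheme would spread the data redundantly across the whole frame and survive compression, which plain LSB embedding does not; this sketch only shows the embed-then-recover workflow.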

The team said it tested the method on about 200 deepfakes and restored most of them with high accuracy.

“Like a vaccine administered in advance to fight off an illness, our method can be used as a ‘cyber vaccine’ to prevent the misuse of fake videos,” said team leader Isao Echizen, the director of NII’s Information and Society Research Division.

Issues remain, however. For example, the method cannot restore a video if the person is not facing forward or if the image is too dark.

As the team continues to improve the method’s accuracy, it is considering offering the technology to enterprises and other outside entities upon request.

Toshihiko Yamasaki, a University of Tokyo professor and expert on deepfake research, called the institute’s method “an achievement that can serve to deter the misuse of fake videos.”

“If more complex information is embedded in a video, security is likely to be further enhanced,” he said.