Technically Challenged / AI-powered ‘deepfakes’ manipulate reality in Japan
14:15 JST, July 4, 2021
“Fake news” has taken flight on the wings of doctored photos and videos, many of which would deceive even the most eagle-eyed viewer. As ever-evolving artificial intelligence (AI) algorithms are used to make even more sophisticated fakes, developers of detection tools have been struggling to stay one step ahead of the forgers, in a never-ending cat-and-mouse game between reality and deception.
Prime minister targeted
In October 2019, then-Prime Minister Shinzo Abe donned a blue emergency jumpsuit and visited an evacuation shelter in Koriyama, Fukushima Prefecture, in the wake of Typhoon Hagibis, where he consoled an elderly woman, hands clasped over hers. The moment was captured by a photographer from the Kyodo news agency, which distributed the image across its network of major news outlets. Two days later, a doctored version that made it look like the prime minister’s visit had been staged in a studio began circulating on Facebook.
The viral image was allegedly the handiwork of a Tokyo man in his 60s. In an interview with The Yomiuri Shimbun, the man said that he whipped the image together in half an hour using editing software, motivated by a “pent-up rage” over what he perceived as the government’s insufficient support for the disaster-hit areas.
The man reposted the doctored image in December with the disclaimer: “This image is homemade. It’s not real.” But the fake had already taken on a life of its own. Even after a fact-checking organization debunked the image as a forgery in March 2020, it continued to be widely propagated online, mainly by critics of the Abe administration. One post was shared nearly 1,100 times, collecting comments decrying Abe for “staging [a photo op].”
The man said he regretted the post: “At first, I was just flabbergasted that anyone would believe the image was real. But now I’m terrified to think how far this will spread.”
Easy to fake
The advent of editing software has made it easier than ever to digitally alter images. Nowadays, anyone can create convincing fabrications on their smartphones, with no technical expertise required.
As technology advances, experts have begun sounding the alarm on “deepfakes,” a new frontier in video manipulation that harnesses the unbridled computational power of AI.
In the United States, video from a campaign rally was manipulated to give the impression that then-presidential candidate Joe Biden had greeted a cheering crowd with the wrong state’s name.
Deepfakes have also begun making headlines in Japan. A profusion of pornographic videos superimposed with the likenesses of celebrities has been investigated as a flagrant infringement of the “moral rights” protected under Japanese law. In September last year, several men were arrested on suspicion of defamation and other charges after allegedly creating deepfaked content and posting it to a members-only adult video website.
By the end of 2020, about 85,000 fake videos had been detected online, 10 times more than in 2018, according to a Dutch information security firm.
Not just nefarious
Deepfake technology, in the right hands, may also be tapped as a force for social good in new services that add convenience to users’ lives.
A subsidiary of online fashion retailer ZOZO, Inc. turned to AI to generate a lineup of fake fashion models that may someday strut in the virtual changing rooms of the future, allowing shoppers to see garments on computer-generated avatars closer to their own age and body type.
“By helping customers better visualize how [a garment] will look before they make a purchase, we also expect to be able to cut down on the amount of product that ends up being returned,” a company representative said.
Since April, seven golf courses in eastern Japan have been offering another unique service featuring deepfaked footage of pro golfer Ryo Ishikawa. In the personalized videos, Ishikawa greets visitors by name and provides messages of encouragement, saying, “Have a good time on the course.”
As deepfakes threaten to blur the line between reality and illusion, researchers have been hard at work refining technology capable of detecting digital manipulation, on a mission to preserve the social order.
Last year, U.S. tech giant Microsoft developed a deepfake detection tool that helps identify which parts of a video have been artificially manipulated. Microsoft has provided the tool to a consortium of institutional partners for testing.
Still, deepfake technology grows more sophisticated by the day. The fight against misinformation is an uphill battle.
“Artful deepfakes have reached a level where humans are unable to discern whether the video they’re seeing or the voice they’re hearing is the real thing,” said Junichi Yamagishi, professor of information processing at the National Institute of Informatics. “Combatting misinformation is going to be a team effort, requiring the cooperation of not only AI experts, but specialists in each of the plethora of social factors that allow [fakes] to spread in the first place.”