Deepfake detection algorithms will never be enough
Spotting fakes is just the start of a much bigger battle.
Illustration by Alex Castro / The Verge
You may have seen news stories last week about researchers developing tools that can detect deepfakes with greater than 90 percent accuracy. It’s comforting to think that with research like this, the harm caused by AI-generated fakes will be limited. Simply run your content through a deepfake detector and bang, the misinformation is gone!
But software that can spot AI-manipulated videos will only ever provide a partial fix to this problem, say experts. As with computer viruses or biological weapons, the threat from deepfakes is now a permanent feature on the landscape. And although it’s arguable whether deepfakes pose a huge danger from a political perspective, they’re certainly damaging the lives of women here and now through the spread of fake nudes and pornography.
Hao Li, an expert in computer vision and associate professor at the University of Southern California, tells The Verge that any deepfake detector is only going to work for a short while. In fact, he says, “at some point it’s likely that it’s not going to be possible to detect [AI fakes] at all. So a different type of approach is going to need to be put in place to resolve this.”
Li should know — he’s part of the team that helped design one of those recent deepfake detectors. He and his colleagues built an algorithm capable of spotting AI edits of videos of famous politicians like Donald Trump and Elizabeth Warren by tracking small facial movements unique to each individual.
These markers, known as “soft biometrics,” are currently too subtle for AI to mimic: think of how Trump purses his lips before answering a question, or how Warren raises her eyebrows to emphasize a point. The algorithm learns to spot these movements by studying past footage of individuals, and the result is a tool that’s at least 92 percent accurate at spotting several different types of deepfakes.
Li, though, says it won’t be long until the work is useless. As he and his colleagues outlined in their paper, deepfake technology is developing with a virus / anti-virus dynamic.
One deepfake detection algorithm works by tracking subtle movements in the target’s face.
Take blinking. Back in June 2018, researchers found that because deepfake systems weren’t trained on footage of people with their eyes closed, the videos they produced featured unnatural blinking patterns. AI clones didn’t blink frequently enough or, sometimes, didn’t blink at all — characteristics that could be easily spotted with a simple algorithm.
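To give a sense of how simple that first-generation check was, here’s a toy sketch of a blink-rate heuristic. It assumes we already have a per-frame eye-aspect-ratio (EAR) signal from a facial-landmark tracker; the threshold values are illustrative, not the researchers’ actual numbers.

```python
# Toy blink-rate heuristic: real people blink several times a minute,
# so footage with almost no blinks is suspicious. EAR values and
# thresholds here are made up for illustration.

def count_blinks(ear_series, closed_threshold=0.2):
    """Count blinks as open-to-closed transitions in the EAR signal."""
    blinks = 0
    eyes_closed = False
    for ear in ear_series:
        if ear < closed_threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= closed_threshold:
            eyes_closed = False
    return blinks

def looks_synthetic(ear_series, fps=30, min_blinks_per_minute=2):
    """Flag footage whose blink rate is implausibly low for a real person."""
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) / minutes < min_blinks_per_minute
```

A check this crude is exactly why the defense was so short-lived: once generators were trained on footage that included closed eyes, the signal vanished.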
But what happened next was somewhat predictable. “Shortly after this forensic technique was made public, the next generation of synthesis techniques incorporated blinking into their systems,” wrote Li and his colleagues. In other words: bye bye, deepfake detector.
Ironically, this back and forth mimics the technology at the heart of deepfakes: the generative adversarial network, or GAN. This is a type of machine learning system made up of two neural networks operating in concert. One network generates the fake and the other tries to detect it, with the content bouncing back and forth, and improving with each volley. This dynamic is replicated in the wider research landscape, where each new deepfake detection paper gives the deepfake makers a new challenge to overcome.
Delip Rao, VP of research at the AI Foundation, agrees that the challenge is far greater than simple detection, and says that these papers need to be put in perspective.
One deepfake detection algorithm unveiled last week boasted 97 percent accuracy, for example, but as Rao notes, that 3 percent could still be damaging when thinking at the scale of internet platforms. “Say Facebook deploys that [algorithm] and assuming Facebook gets around 350 million images a day, that’s a LOT of misidentified images,” says Rao. “With every false positive from the model, you are compromising the trust of the users.”
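Rao’s point is easy to check with back-of-the-envelope math. The 350 million figure comes from his quote; the assumption that every error is equally harmful is a simplification.

```python
# Scale makes small error rates enormous in absolute terms.
DAILY_IMAGES = 350_000_000  # Rao's estimate of Facebook's daily image volume
ACCURACY = 0.97             # the detector's claimed accuracy

misidentified_per_day = DAILY_IMAGES * (1 - ACCURACY)
print(f"{misidentified_per_day:,.0f} images misidentified per day")
# → 10,500,000 images misidentified per day
```

Over ten million misjudged images a day — whether fakes waved through or real photos wrongly flagged — is why accuracy alone doesn’t settle the question of deployment.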
It’s incredibly important we develop technology that can spot fakes, says Rao, but the bigger challenge is making these methods useful. Social platforms still haven’t clearly defined their policies on deepfakes, as Facebook’s tussle with a fake Mark Zuckerberg video recently showed, and an outright ban would be unwise.
“At the minimum, the videos should be labeled if something is detected as being manipulated, based on automated systems,” says Li. He says it’s only a matter of time, though, before the fakes are undetectable. “Videos are just pixels, ultimately.”
Rao and his colleagues at the AI Foundation are working on approaches that incorporate human judgement, but others argue that verifying real videos and images should be the starting point, rather than spotting fakes. To that end, they’ve developed programs that can automatically watermark and identify images taken on cameras, while others have suggested using blockchain technology to verify content from trusted sources.
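To make the verification idea concrete, here’s a minimal sketch of how a provenance scheme of this kind can work — not the actual systems the researchers built. The idea: the camera signs a digest of the image at capture time, and anyone can later check that the pixels still match. An HMAC with a shared key stands in here for a real digital signature; the key name is hypothetical.

```python
# Provenance sketch: sign images at capture, verify them later.
# A real system would use public-key signatures, not a shared HMAC key.

import hashlib
import hmac

CAMERA_KEY = b"secret-key-baked-into-the-camera"  # hypothetical

def sign_at_capture(image_bytes: bytes) -> str:
    """Produce a keyed digest of the image at the moment it's taken."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, signature: str) -> bool:
    """Check that the image hasn't been altered since capture."""
    return hmac.compare_digest(sign_at_capture(image_bytes), signature)
```

Any edit to the pixels — a deepfake swap included — breaks the signature, which is why this approach verifies authentic content rather than trying to spot fakes.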
None of these techniques will “solve” the problem, though; not while the internet exists in its current form. As we’ve seen with fake news, just because a piece of content can be easily debunked doesn’t mean it won’t be clicked and read and shared online.
More than anything else, the dynamics that define the web — frictionless sharing and the monetization of attention — mean that deepfakes will always find an audience.
Take the recent news of a developer creating an app that lets anyone make fake nudes from photographs of clothed women. The resulting images are obviously fabrications — look at them closely and the skin and flesh are blurred and indistinct. But they’re convincing enough at a glance, and sometimes that’s all that’s needed.
The resulting fakes could be used to shame, harass, and intimidate their targets. The app, called DeepNude, was first spotted by Motherboard’s Samantha Cole; it stamps its output with a “fake” watermark, but that watermark is easy to remove.