New deepfake tech turns a single photo and audio file into a singing video portrait
Finally, technology that can make Rasputin sing like Beyoncé.
Artist: William Joel
Another day, another deepfake: this time, one that can sing.
New research from Imperial College in London and Samsung’s AI research center in the UK shows how a single photo and audio file can be used to generate a singing or talking video portrait. Like previous deepfake programs we’ve seen, the researchers use machine learning to generate their output. And although the fakes are far from 100 percent realistic, the results are impressive considering how little data is needed.
By combining this real clip of Albert Einstein speaking, for example, with a photo of the famous physicist, you can quickly create a never-before-seen lecture:
Getting a bit wackier, why not have everyone’s favorite mad monk, Grigori Yefimovich Rasputin, belting out the Beyoncé classic ‘Halo’? What a karaoke night that would be.
Or how about a more practical example: generating video that not only matches the input audio but is also tweaked to communicate a specific emotion. Remember, all that was needed to create these clips was a single picture and an audio file. The algorithms did the rest.
As mentioned above, this work isn’t completely realistic, but it’s the latest illustration of how quickly this technology is moving. Techniques for generating deepfakes are becoming easier every day, and although research like this is not available commercially, it didn’t take long for the original deepfakers to bundle their techniques into easy-to-use software. The same will surely happen with these new approaches.
Research like this is understandably making people worried about how it will be used for misinformation and propaganda — a question that is currently vexing US legislators. And although you can make a good argument that such fears in the political realm are overblown, deepfakes have already caused real harm, particularly for women, who have been targeted with non-consensual pornography designed to embarrass and shame them.
Getting Rasputin to sing Beyoncé is just a bit of light relief at this point, but we don’t know how weird and terrible things might get in the future.
Source: James Vincent (The Verge)