Carnegie Mellon Researchers Develop New Deepfake Method

Deepfakes, ultrarealistic fake videos manipulated using machine learning, are getting pretty convincing. And researchers continue to develop new methods to create these types of videos, for better or, more likely, for worse. The most recent method comes from researchers at Carnegie Mellon University, who have figured out a way to automatically transfer the “style” of one person to another.

The first example shown, which maps John Oliver onto Stephen Colbert, is far from the most realistic manipulated video out there. It looks low-res, with facial features blurring at points, almost as if you’re trying to stream an interview over an incredibly weak Wi-Fi connection. The other examples (excluding the frog) are more convincing, showing the deepfake mirroring the facial expressions and mouth movements of the original subject. The researchers describe the process in their paper as an “unsupervised data-driven approach.”

Like other methods of creating deepfakes, this one uses artificial intelligence. The paper doesn’t deal exclusively with transferring talking style and facial movements from one human to another; it also includes examples with blooming flowers, sunrises and sunsets, and clouds and wind. For the person-to-person deepfakes, the researchers cite examples of how certain mannerisms can be transferred, including “John Oliver’s dimple while smiling, the shape of mouth characteristic of Donald Trump, and the facial mouth lines and smile of Stephen Colbert.”
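
To give a flavor of what “unsupervised” means here: systems in this family learn to translate between two video sources without any paired, frame-aligned footage of the two people. One common ingredient in such methods is a cycle-consistency penalty, where a clip translated from one person to the other and back should return close to the original. The sketch below is a toy illustration of that idea under that assumption only; it is not the CMU team’s actual architecture or code, and the TinyGenerator networks and random frame tensors are hypothetical stand-ins.

```python
# Toy sketch of a cycle-consistency loss for unpaired video translation
# (assumption: an adversarial, CycleGAN-style setup; not the authors' code).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in generator mapping frames from one speaker to the other.
    Real systems use far deeper convolutional networks."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

G_ab = TinyGenerator()  # domain A -> domain B (e.g., Oliver -> Colbert)
G_ba = TinyGenerator()  # domain B -> domain A

l1 = nn.L1Loss()

def cycle_consistency_loss(frames_a, frames_b):
    """Translate A->B->A and B->A->B, then penalize drift from the originals.
    This round-trip constraint is what removes the need for paired footage."""
    recon_a = G_ba(G_ab(frames_a))
    recon_b = G_ab(G_ba(frames_b))
    return l1(recon_a, frames_a) + l1(recon_b, frames_b)

# Dummy batches standing in for unpaired video frames from each source.
frames_a = torch.rand(4, 3, 64, 64) * 2 - 1
frames_b = torch.rand(4, 3, 64, 64) * 2 - 1
print(cycle_consistency_loss(frames_a, frames_b).item())
```

In practice this kind of loss is combined with adversarial terms (and, for video, temporal constraints) so the translated frames also look like plausible footage of the target person rather than merely round-tripping cleanly.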

The team used publicly available videos to develop these deepfakes.

Source: gizmodo.com