Audio Deepfake A.I. Narrates Reddit As David Attenborough

Sir David Attenborough, the wildlife documentary broadcaster and natural historian, is an international treasure who must be protected at all costs. Now 94, Attenborough is still finding new dark recesses of planet Earth to explore – including the r/relationships and r/AskReddit boards on Reddit. Sort of.

In a series of videos posted to YouTube this week and spotted by Motherboard, Attenborough’s sonorous voice is put to dubious use, assigned to an A.I. that reads Reddit threads aloud. The result lends Reddit browsing a little extra erudition, with far more gravitas than you would normally expect.

The audio was created by deepfake video software developer Garrett McGowan, who explains his process in a dedicated “making of” video. McGowan used Google’s text-to-speech software, but succeeded in giving it a suitably human sound by employing a software-generated voice model trained on Attenborough’s actual speech. The voice model was not created by McGowan himself, but by fellow YouTuber YouMeBangBang.

The results don’t sound entirely convincing (although I’m not sure how convincing David Attenborough reading Reddit threads was ever going to be). The faux Attenborough mispronounces some words, and his delivery lacks the drama you’d expect from stories of Redditors’ relationship woes. Still, it’s another compelling piece of evidence of just how good audio deepfakes are getting.

This is not an entirely new area. Earlier this year, Jay-Z’s record label complained about audio deepfakes of the famous rapper that popped up online. There’s no shortage of other celebrity audio deepfakes, either. Perhaps the most impressive, however, is a deepfake created by the Massachusetts Institute of Technology – mixing both video and audio – in which President Richard Nixon reads an alternate address, written in the event that the 1969 Apollo moon landing went horribly wrong.

Not only are these technologies advancing all the time, but, as plenty of YouTube videos show, they are now accessible to anyone who wants to create an audio deepfake. Fortunately, most of the use cases so far have been attempts at humor rather than anything more malicious. That’s not to say nothing could change in the future.
