
Producing a deepfake is usually a secretive undertaking, with the creator hiding behind a screen or an online pseudonym. That was not the case when producers from UK TV network Channel 4 released a behind-the-scenes video showing how they had developed a deepfake version of the Queen doing a TikTok dance and joking about her favourite hobbies of “Phil and Netflix”.

The Queen was played by English actress Debra Stephenson in a film intended as a stark warning about the sophisticated technology fuelling the spread of misinformation and fake news in the digital age.

Despite their capacity for transformative digital effects, deepfakes have seen limited use in Hollywood. Most deepfakes are experiments, produced for shock value or to raise awareness of how little regulation governs the misinformation circulating online. A report from DeepTrace Labs found more than 14,000 deepfake videos in circulation as of September 2019, a dominant 96% of which were, unsurprisingly, pornographic.

Image-to-image translation in film

Deepfakes are images, videos, or audio tracks that have been altered using deep learning to mimic another person’s likeness or voice. This form of image-to-image translation can swap faces, alter facial expressions, and even synthesise speech and faces. The most basic form is the face swap, as seen in Snapchat’s filters, where one person’s likeness is superimposed onto another person or an animal for comic effect. Given a large enough dataset, however, an unsupervised machine learning algorithm can produce a deepfake that does a far more realistic job of synthesising images to mimic a person’s mannerisms. At present, the real targets are people such as prominent politicians and the CEOs of major companies – a Tom Cruise, or an Elon Musk.
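The crude "superimpose one face onto another" idea behind a basic filter-style swap can be sketched in a few lines of numpy. This is an illustrative toy, not any real app's method: the 8×8 arrays stand in for photographs, and every value in it is an arbitrary choice for the demo.

```python
import numpy as np

# Naive "face swap": paste a source patch onto a target image with a
# feathered (soft-edged) mask so the seam is less visible. Real deep-learning
# swaps replace this crude alpha-blend with a learned model.
tgt = np.full((8, 8), 0.2)          # synthetic stand-in for the target photo
src = np.full((8, 8), 0.9)          # synthetic stand-in for the source face

# Feathered mask: 1.0 at the centre of the face region, fading to 0 outside.
yy, xx = np.mgrid[0:8, 0:8]
dist = np.sqrt((yy - 4) ** 2 + (xx - 4) ** 2)
mask = np.clip(1.5 - dist / 2.5, 0.0, 1.0)

# Alpha-blend: output = mask*source + (1-mask)*target.
out = mask * src + (1 - mask) * tgt
print(out[4, 4])  # centre pixel comes entirely from the source face
print(out[0, 0])  # corner pixel is untouched target
```

The feathering is what separates a passable paste from an obvious one; a hard-edged mask would leave a visible rectangle around the swapped face.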

Image-to-image translation takes images from one domain and converts them so they take on the style, or traits, of images from another domain – a photograph of the Eiffel Tower rendered in the style of Van Gogh’s Starry Night, say. This is accomplished with a Generative Adversarial Network (GAN), which pits two machine learning models against each other: a generator and a discriminator. The generator tries to produce new instances or variations of the input data that could pass as real data, while the discriminator classifies these outputs as real or fake. The goal is for the generator to produce an image so convincing that it fools the discriminator.
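The generator/discriminator tug-of-war can be shown at toy scale without any deep learning framework. In this sketch the "real data" is just a 1-D Gaussian, the generator is a linear map, and the discriminator is logistic regression with hand-derived gradients – a minimal illustration of the adversarial training loop, not a production GAN; all names and hyperparameters here are arbitrary choices.

```python
import numpy as np

# Toy 1-D GAN: real data ~ N(4, 1.25); the generator learns to imitate it.
rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0    # generator: x = a*z + b, with noise z ~ N(0, 1)
w, c = 0.1, 0.0    # discriminator: D(x) = sigmoid(w*x + c)

lr, batch = 0.02, 64
for step in range(3000):
    real = rng.normal(4.0, 1.25, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator.
    p_fake = sigmoid(w * fake + c)
    dx = (1 - p_fake) * w          # d log D / dx at each fake sample
    a += lr * np.mean(dx * z)
    b += lr * np.mean(dx)

print(f"generated samples now centre near {b:.2f}")  # should drift towards 4
```

The same alternating-update structure, scaled up to convolutional networks over images, is what produces a deepfake's photorealistic output.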

While these transformative visual effects might seem a goldmine for Hollywood productions, which pay visual effects studios millions of dollars to composite convincing dragons, spacecraft, and aliens into live-action footage using labour-intensive 3D layering, deepfakes have barely penetrated Hollywood. Computer-generated imagery (CGI) remains the visual effect of choice, even though deepfakes can be produced by anyone with sufficient computing power and access to free tools such as Deepfake App or DeepFaceLab, which is open source on GitHub.

Hollywood films have even begun using “full-body” CGI to cast deceased stars, as with the return of Carrie Fisher’s Princess Leia: Rogue One recreated the character digitally, while The Rise of Skywalker used unused footage of Fisher from The Force Awakens to complete the character’s storyline. Finding Jack, a Vietnam-era action drama slated for 2020, was even set to feature James Dean, who died in 1955. Will Smith’s younger digital double in 2019’s Gemini Man marked a major step forward in VFX technology – “the industry’s most believable digital human”, in IndieWire’s estimation. VFX artists said that because the actor had aged so well, they could not simply smooth a sagging cheek or drooping jowl, or systematically soften the look of crow’s feet, to create the illusion of youth.

A deeper understanding of what youth actually looks like was needed, Guy Williams, a VFX supervisor at Weta Digital, told the publication.

In these cases, film producers need a wide variety of footage of the actor to recreate them realistically. Deepfake models, by contrast, merely need to be trained on a large enough image dataset, with a human performer acting as a stand-in, making it possible to recreate a deceased actor’s likeness with far fewer constraints.

Will deepfake technology break into Hollywood?

So far, deepfakes remain too low-resolution for the cinema screen. Open-source deepfake software typically tops out at 256×256 pixels – paltry compared with the 2K digital projection used in most theatres, a format with a resolution of 2048×1080 pixels. But researchers at Disney Research Studios are working on a model that generates video at 1024×1024, a sign the technology could be headed for the silver screen.
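The scale of that resolution gap is easy to quantify from the figures above:

```python
# Pixel counts: typical open-source deepfake output, Disney Research's
# 1024x1024 model, and a 2K cinema container (2048x1080).
deepfake = 256 * 256        # 65,536 pixels
disney = 1024 * 1024        # 1,048,576 pixels
cinema2k = 2048 * 1080      # 2,211,840 pixels

print(cinema2k / deepfake)  # 2K holds ~33.8x the pixels of a 256x256 clip
print(cinema2k / disney)    # ...but only ~2.1x those of the Disney model
```

In other words, the Disney work closes most of the gap to theatrical projection in one step.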

For these reasons, deepfakes in the entertainment industry have so far stayed mostly confined to fan-made videos – often released in response to badly executed CGI. Martin Scorsese’s The Irishman saw Robert De Niro digitally de-aged to look three decades younger, but the face replacement could not hide the fact that De Niro, in his seventies at the time of filming, could not move with the nimbleness of the young man he was digitally impersonating. Shortly after The Irishman’s release, a fan-made deepfake popped up on a YouTube channel run by a deepfake creator known as Shamook, who had run footage from the film through deepfake software. Shamook’s version shows a considerably fresher-faced De Niro than the Netflix CGI, in which the actor’s lined face looks like present-day De Niro with darker hair.

Deepfake hobbyists have a knack for releasing remakes that look far better than their Hollywood counterparts – the original CGI-altered footage – and were made at a fraction of the cost. CGI uses motion-tracking to follow the face from multiple angles as a performer talks, weeps, or yells, recreates those movements in 3D, and overlays them on live-action footage. A deepfake, by contrast, is trained on datasets of thousands of photos and generates a complete fabrication on its own. The more images it is fed, the more convincing the results.
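Face-swap tools of the kind hobbyists use commonly train a shared encoder with one decoder per identity: the encoder learns features common to both faces, each decoder reconstructs its own face, and the swap comes from decoding face A's features with face B's decoder. The sketch below shows that training structure using linear maps on 16-dimensional vectors – an assumption-laden toy to illustrate the architecture, not any specific tool's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, batch, lr = 16, 4, 32, 0.01

base_A = rng.normal(0, 1, d)            # stand-in for "identity A" faces
base_B = rng.normal(0, 1, d)            # stand-in for "identity B" faces
E = rng.normal(0, 0.1, (k, d))          # shared encoder
dec = {"A": rng.normal(0, 0.1, (d, k)), # one decoder per identity
       "B": rng.normal(0, 0.1, (d, k))}

def sample(base):
    """Draw a batch of noisy 'faces' around one identity."""
    return base + rng.normal(0, 0.1, (batch, d))

def loss_and_grads(x, D):
    """Reconstruction loss ||D(E(x)) - x||^2 with manual gradients."""
    h = E @ x.T                         # latent codes, shape (k, batch)
    err = (D @ h).T - x                 # reconstruction error
    gD = 2 * err.T @ h.T / len(x)
    gE = 2 * D.T @ err.T @ x / len(x)
    return np.mean(err ** 2), gD, gE

first = None
for step in range(2000):
    for name, base in (("A", base_A), ("B", base_B)):
        x = sample(base)
        L, gD, gE = loss_and_grads(x, dec[name])
        if first is None:
            first = L                   # loss before any training
        dec[name] -= lr * gD            # each decoder sees only its identity
        E -= lr * gE                    # the encoder is trained on both

final = loss_and_grads(sample(base_A), dec["A"])[0]
print(first, "->", final)               # reconstruction error should drop

# The swap itself: encode an A face, decode it with B's decoder.
swap = (dec["B"] @ (E @ sample(base_A).T)).T
```

Because the encoder is shared, the latent code captures pose and expression common to both identities, which is what lets the other decoder repaint those movements with the second face.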

Even simpler cosmetic fixes to cinematic footage often turn out better with deepfakes than with CGI. Take the viral clip in which actor Henry Cavill’s moustache was removed from Warner Bros’ Justice League using just a $500 computer and an AI algorithm – which some said did a better job than the studio’s CGI department.

Fans have even released deepfakes that swap the original performer for another: Will Smith replacing Keanu Reeves in The Matrix, Jim Carrey as Jack Nicholson in The Shining, or Tom Cruise as Christian Bale in American Psycho.

One especially dystopian take on what could happen if deepfakes are widely adopted in Hollywood is that performers could, in theory, be replaced entirely by digital recreations of their likeness.

If an actor demands too high a wage, you could simply swap in a deepfake version of them and no one would know the difference. A film’s producers would no longer need to pay an actor millions of dollars to star in their next blockbuster.

Rather than spending their time filming scenes, actors could become millionaires without moving a muscle. They could earn licensing fees on their likeness by selling the rights to the thousands of images needed to train a GAN, potentially saving studios millions in the process.

Audio deepfakes, a form of voice synthesis that has been likened to “Photoshop for audio”, can be used for dubbing – replacing the dubbing actors while preserving the original performer’s voice for a more authentic viewing experience. Video dialogue replacement involves using one performer’s mouth movements to manipulate another person’s mouth in existing footage.

But given that AI can generate entirely new human faces, not just modify real ones, it is theoretically possible to produce a film with an all-digital cast, in which human actors essentially serve as body doubles and are never seen in the finished product. Deepfake acting and casting are still nascent, but as deepfakes have grown exponentially more realistic, filmmakers have begun using them in everything from TV adverts to broadcast-quality productions.

Deepfakes in the mainstream

Creative agency Mischief USA released a pair of adverts for a voting rights campaign featuring deepfake versions of North Korean dictator Kim Jong-un and Russian president Vladimir Putin. Here, the casting process was driven by what training data was available. Most of Kim’s filmed speeches show him wearing glasses, which obscured his face and caused the algorithm to fail, so finding an actor who resembled him was more important. For Putin, plenty of footage of him giving speeches from multiple angles already existed online, so the producers had more room to work with.

To find the right actor, the team processed their casting tapes through DeepFaceLab to see which performer produced the most convincing result. They were basically functioning as a human shield, Ryan Laney, a VFX artist who worked on the project, told Technology Review.

While deepfake production was once the preserve of programmers versed in Python and unsupervised machine learning, deepfake software went viral in 2019 with the spread of Zao, a smartphone app that lets users insert themselves into popular films in under eight seconds using a single photograph. But users could only choose from a set of preselected clips a few minutes long, to avoid copyright infringement. The app’s creator had probably trained its algorithms on each of these clips in advance, so a user’s face simply had to be re-mapped onto them.

While deepfake detection tools are not yet considered part of an organisation’s core cybersecurity infrastructure, Adams says they could become more of a concern as the technology grows more advanced – which is happening daily.

Currently, deepfake detection is never a factor in penetration testing, vulnerability testing, or anything of the like, he said. But it would be critical from a physical security, public relations, or marketing perspective.

Deepfakes are mainly used for entertainment – people inserting themselves into scenes from their favourite films, or their favourite actor into an iconic scene – but they can also serve as propaganda. Three years ago, comedian Jordan Peele released a deepfake of former U.S. president Barack Obama, substituting his own voice for Obama’s and calling then-president Trump “a complete and total dipshit”. Not long after, two artists released a deepfake of Facebook founder Mark Zuckerberg boasting about controlling billions of people’s private data. In both cases, the creators made it overtly clear that the footage was a deepfake of their own making.

If you look at the science of propaganda, you’re not trying to convince everybody that what you’re saying is real and accurate – you only need to convince enough people, said Adams. Then it becomes the tyranny of the majority.