Deepfake Technology Loses Its Stigma Amid Socially Redeeming Uses

The News: Deepfake refers to the use of AI to generate synthetic video, voice, audio, and text for the purpose of impersonating someone known to the intended audience. New NYU research by Robert Volkert and Henry Ajder found that deepfake technology is becoming increasingly accessible and that the threats posed by criminal exploitation of that technology are growing. They also looked into the efficacy and suitability of new deepfake-specific laws designed to mitigate these risks. The researchers investigated how deepfakes are being created, shared, and sold online, identifying three main approaches to creating and selling them: open source tools, service platforms, and marketplace sellers. They analyzed hundreds of deepfake service portals, marketplaces, forums, and chat rooms that have emerged in the past two years.

Analyst Take: Deepfake technology has been stigmatized extensively in popular discussions. Likewise, deepfake creators have been tarred with a brush of criminality, defamation, deception, and other socially unacceptable intents.

Sensationalistic discussions in the mass media have created the misperception that deepfaked video is on the verge of hijacking the political process, and that deepfaked revenge porn has the potential to ruin the reputations of innocent people everywhere.

Deepfake is Dual-use Technology

Though these risks are not entirely far-fetched, their salience in today’s mass culture obscures the fact that a fair amount of deepfake activity is benign, perhaps creative, and even socially beneficial in intent. In the life sciences, for example, deepfake technology can serve a prosthetic-like function by fabricating digital voices for people who have lost theirs.

But perhaps the splashiest showcases for deepfake technology are in media, entertainment, and the arts. Check out this hilarious “deepfake roundtable” that was used recently to promote a new online streaming service. Essentially, this is a new form of animation, only different in degree from the animation/live-action synthesis that cinematic innovators such as the Walt Disney Studios achieved decades ago. You have to remind yourself constantly while watching this video that the real-life celebrities are being entirely simulated and did not participate in its development in any way.

No doubt, these same celebrities might benefit from deepfake technology when used by directors to “reshoot” footage in post-production to correct a bobbled line, distracting anachronism, or continuity failure. In fact, major Hollywood directors are already employing deepfake technology with impressive results, such as when Martin Scorsese recently used it to render a younger Robert De Niro in “The Irishman.”

So, clearly, AI-generated fakery is working its magic more deeply into our lives. In the broadest sense, we can consider deepfakes to be a “dual-use” technology, just as likely to be used for good ends as for evil schemes.

The positive spin on all this is that deepfakes are an astonishingly powerful new tool for live-action animation and interactive simulation. In addition to strikingly lifelike video such as that presented above, AI-generated audio has crossed the “uncanny valley” that once made it distinguishable from what comes out of humans’ actual mouths, as was demonstrated publicly two years ago with Google’s Duplex digital-assistant technology.

AI-generated video and audio came into the popular consciousness around that same time, as people began to realize that it’s now possible to impersonate practically anybody with astonishing verisimilitude. Deepfake technology, powered by what’s often referred to as “generative AI”, continues to improve.

Deepfake Technology is Complicit in Popular Anxiety Over “Fake News”

The advance of deepfake technology has exacerbated jitters everywhere in the popular culture, especially with respect to its potential use in generating “fake news” for nefarious political purposes. As it intensifies, this trend will inflame more political discussions and give Hollywood’s science-fiction screenwriters more material to process in their imagination mills. My colleague Daniel Newman summarized those popular anxieties last August with the not-entirely-rhetorical question “what do we do about deepfakes?”

My perspective is that we need to acknowledge the dual-use nature of deepfake technology—good and otherwise—and begin to think of it as an ecosystem into which regulations can be applied without unnecessarily stifling the positive uses. The recent NYU study on the deepfake revolution takes such a perspective. From it we can take away the following key points.

Open Source Software is Accelerating the Deepfake Revolution

The NYU researchers found that free, downloadable open-source software is the primary factor in the commoditization of deepfake technology. Most of these projects are hosted on GitHub, and many project creators request user donations via Patreon, PayPal, or Bitcoin.

The principal use cases for this software, in descending order of deployment frequency, are face swapping and synthetic voice generation. Most of these tools require some programming experience and a powerful GPU, putting them in the province of professional rather than amateur developers. However, the availability of detailed tutorials and discussion groups on some popular open-source deepfake platforms is helping amateur developers get up to speed on using the tools.

Online Service Platforms are Delivering Deepfakes Globally

Increasingly, deepfake service platforms and marketplace sellers rely on these tools to help subscribers generate remarkably true-to-life video, audio, and other media for a fee.

The NYU researchers found a wide range of deepfake service platforms. These are websites that use GUIs to help users accelerate the process of creating deepfakes. They typically require users to upload training data in the form of media objects pertaining to the subjects they intend to deepfake. They also provide tools for receiving the AI-generated deepfake media object once it has been created. These service platforms may handle all of these back-end functions in an entirely automated fashion, or through partially manual processing by the service’s employees or contractors.
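The submit-process-retrieve workflow these platforms implement can be pictured with a small, purely hypothetical sketch. None of the class or function names below come from any real service; the generation step is a placeholder, where a real platform would run model inference on GPU back ends (or hand the job to human operators):

```python
import enum
import uuid


class JobState(enum.Enum):
    QUEUED = "queued"
    PROCESSING = "processing"
    COMPLETE = "complete"


class DeepfakeServiceStub:
    """Toy, in-memory stand-in for a deepfake service platform's back end."""

    def __init__(self):
        self._jobs = {}

    def submit(self, training_media):
        """User uploads media of the intended subject; the service queues a job."""
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = {
            "state": JobState.QUEUED,
            "media": training_media,
            "output": None,
        }
        return job_id

    def process_next(self):
        """Back end (automated or manual) runs the generation step on one queued job."""
        for job in self._jobs.values():
            if job["state"] is JobState.QUEUED:
                job["state"] = JobState.PROCESSING
                # Placeholder for the actual model inference / manual processing.
                job["output"] = b"synthetic-media-bytes"
                job["state"] = JobState.COMPLETE
                return

    def retrieve(self, job_id):
        """User polls for and downloads the finished deepfake, if ready."""
        job = self._jobs[job_id]
        return job["output"] if job["state"] is JobState.COMPLETE else None


# Typical round trip: upload training data, wait for processing, fetch the result.
service = DeepfakeServiceStub()
job_id = service.submit([b"frame1", b"frame2"])
service.process_next()
result = service.retrieve(job_id)
```

The point of the sketch is simply that the user never touches the model: the GUI collects training data up front and hands back a finished media object, which is exactly what lowers the skill barrier the researchers describe.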

The NYU researchers found that most of these service platforms are either research-focused outlets such as GitHub, Discord, and Reddit or “underground” (i.e., porn-focused) outlets such as Voat and Telegram. They also found that most online deepfake activity still focuses on face swapping in pornographic videos, though several communities are creating “safe for work” deepfakes for research and entertainment purposes.

The researchers found that several service platforms are explicitly advertised as facilitators of deepfake pornography or such adjacent use cases as synthetically removing clothes from pictures of women. Other service platforms publish user terms that specifically prohibit this sort of content, as well as such uses as impersonating or harassing others.

The researchers found that these platforms, both benign and unsavory, are owned and hosted in many countries, including Japan, Russia, and China. Some operate transparently on the open web, while others operate on the dark web under a cloak of identity obfuscation. Some of these platforms are only showcasing deepfake technology, while others are primarily focused on generating business revenue from services that deploy this technology.

Marketplace Sellers are Monetizing Custom-made Deepfakes

The NYU researchers found a growing community of deepfake marketplace sellers. These individuals use online channels to advertise custom-made deepfakes, including both those that are “Safe For Work” (SFW) and those pornographic and other deepfakes that are “Not Safe For Work” (NSFW).

For both SFW and NSFW sellers, the researchers found that the pricing of marketplace services varied greatly, with negotiations often conducted over a platform’s private messaging system.

They found that SFW sellers are mostly YouTubers and hobbyists who sell deepfakes on SFW-friendly forums and online marketplaces such as Fiverr, and that most of them clearly state that they will not make pornographic or otherwise unsavory content. They found that NSFW sellers are typically located on message board websites such as Voat and 4Chan; on messaging apps such as Telegram; and on deepfake pornography websites.

NSFW sellers openly advertise their services for creating deepfake pornography, with some sellers sharing their own NSFW videos to attract new customers. Indeed, the researchers report that most deepfake activity on the “surface and dark web” involves producing pornographic videos.

Dystopian Scenarios of Deepfake Technology’s Abuse are Wildly Exaggerated

The most reassuring finding from the NYU research is that only one seller on the so-called “dark web” is advertising deepfakes for a fee or deepfaking “nudes” onto clothed images (with or without their subjects’ consent). The researchers stated that there was a “significant lack of sellers on these underground [NSFW deepfake] sites overall.”

Just as important, the researchers found a low likelihood of new NSFW deepfake sellers entering the market because, as they state, “demand for video creation on the dark web is currently very low.”

Just so we don’t grow too complacent, they call attention to the limits of their research. “It is possible that deepfake video creation services are being sold entirely on private and encrypted channels, but this would not be conducive for large and recurring profits, with few options for marketing to a wider audience.”

The Takeaway – Deepfake Technology is Here to Stay, and has Plenty of Good Uses

Faces are a currency of any society. There must be regulatory oversight of how AI is used to render otherwise true-to-life video representations of the identifiable faces of specific human beings. Likewise, regulations must govern when, why, and to what extent AI can be used to replicate their voices, gestures, and other identifying attributes with pinpoint precision.

As governments everywhere grapple with issues surrounding use of AI-driven facial recognition, they’ll have to consider how to factor facial deepfaking into their regulatory frameworks. Though it’s relatively easy to ascertain whether a deepfaked subject consented to an impersonation, it’s another thing entirely for the subject to be sure that every video/audio segment in a recording is authentically them and not a deepfaked replica.

Deepfake technology will be used to doctor video and audio recordings, for example to smooth out the “ummms” and “ahhhs” of normal speech. Most of us will simply trust that this post-facto editing has not been hijacked by unscrupulous parties armed with deepfake tooling. Someone may give a three-hour off-the-cuff talk in a public arena, into which a single fabricated sentence is later surreptitiously inserted midway through. Unless that person has a precise memory of every word they uttered on that occasion, and lacking a tamperproof recording of the actual speech, they may later accept the fabrication as a true record of what they said. To the extent that the deepfake is used to defame them or subject them to civil or criminal prosecution at a later date, this technology could thoroughly unravel the trust that society places in audio-visual materials as unimpeachable historical records.

Even if there were a surefire way to identify deepfakes, banning them would run afoul of free-speech guarantees in democratic nations. After all, pornography of any sort is protected speech in many jurisdictions, and using deepfake technology to produce it should not necessarily dilute these protections. And it would be difficult to define a clear demarcation point beyond which deepfake uses in satire are to be prohibited, considering how amazingly well some human impersonators can embody their subjects while remaining on the right side of the law.

Also, there is no clear line between deepfakes and retouching, remixing, and other established techniques for post-processing and refining video, audio, image, and other media. Forbidding the use of AI to automate such functions would run up against the fact that this same technology is now embedded in many cameras and is used to improve picture quality in many use cases with which society has no desire to interfere.

Deepfake technology has so many positive uses that any regulatory regime must protect them while also mitigating the darker risks that come with the technology’s adoption. That will be a fiendishly difficult balance for society to strike, especially as deepfake tools evolve to eliminate any trace of how they’ve altered the video, images, audio, and other media at the center of our lives in the 21st century.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Author Information

James has held analyst and consulting positions at SiliconANGLE/Wikibon, Forrester Research, Current Analysis and the Burton Group. He is an industry veteran, having held marketing and product management positions at IBM, Exostar, and LCC. He is a widely published business technology author, has published several books on enterprise technology, and contributes regularly to InformationWeek, InfoWorld, Datanami, Dataversity, and other publications.

