
Synthetic media: Deepfakes

“A deepfake is meant to cause an incongruence between expectation and perceived reality.” – Dobber et al.

Overview

This chapter continues the previous chapter’s exploration of purely AI-generated content. Deepfakes, however, combine existing real content with AI-generated media to create new content that appears to represent a real person’s communication.

Here is a Bloomberg QuickTake video that demonstrates what this looks like.

Naturally, there are concerns beyond using deepfakes as a form of entertainment. If deepfake technologies were used exclusively by Hollywood film-effects specialists, perhaps these clips wouldn’t be much cause for concern. (Do you recall how impressive it was in 1994 when Forrest Gump met President Kennedy and Kennedy’s speech was synthesized?)

However, deepfake software is freely available today and nearly anyone with a desire to make them can do so at minimal cost.

What does it mean when a single person can produce fake communication and inject asymmetric chaos into our social and political well-being?

Below: A deepfake video experiment produced by Ethan Mollick, a leading researcher and writer on AI and LLMs. Note that the clip includes AI-generated voice and imagery in multiple languages.

Key Terms

The Liar’s Dividend: In the article “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” Robert Chesney and Danielle Keats Citron argue that the chief risk of deepfake media permeating society lies not in the deceptions created by actual deepfake content but in the suspicion that even legitimate media could be synthetic. The “dividend” refers to the benefit gained by power structures or individuals when no one knows for certain whether any media is real, particularly when an individual denies culpability for behavior recorded on video by claiming that the video was a fake.

Read more about the research conducted to measure the impact of deepfake media on human perception.

Two examples are worth exploring because they demonstrate how deepfake videos undermine trust in actual, non-fake video content:

What should you be focusing on?

Your goal this week is to assess the influence of deepfake media on the human effort to form reliable mental models of the real world. As you review the readings and media, consider what kind of problem this is from the perspective of your chosen framework.

Readings & Media

Thematic narrative in this chapter

The readings and media below present three themes:

  1. Deepfake media can synthesize motion video, audio, and natural phenomena. Current applications produce novelty-quality results, but they are improving.
  2. Deepfake media has the potential to disrupt perceptions of truth in social, political, and legal situations.
  3. Deepfake media can be deployed in combination with targeted online publication strategies to intensify its effect, though it is unclear how persuasive deepfake videos are to targeted audiences.

Experiment: Test yourself!

Go to Detect Fakes on the MIT website and test your ability to detect a deepfake video.

Required: “Deepfakes, explained” by Meredith Somers, MIT Sloan School of Management, July 21, 2020 (9 pages)

This article provides a foundational explanation of deepfake videos with links to some prominent examples. Be sure to view the deepfakes of Mark Zuckerberg and Kim Kardashian.

Somers, M. (2020, July 21). Deepfakes, explained. MIT Sloan School of Management.

Required: “Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling” by Zhe Li, Zerong Zheng, Lizhen Wang, and Yebin Liu, November 2023.

Scroll to the bottom to review examples of extrapolated movement mapped to lifelike avatars. The quality of these examples is outstanding, though there are still discernibly synthetic features. Consider the potential to generate lifelike avatars of targeted individuals performing actions they never performed. Pay attention to the realistic movement of the clothing as the avatars move, a feature you might not even notice unless prompted.
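To make the paper’s title a little more concrete, below is a minimal, purely illustrative sketch of what a “pose-dependent Gaussian” representation means: the body is modeled as many small 3D Gaussian primitives, and a learned function maps a body pose to each Gaussian’s placement and appearance. The function, array shapes, and parameter names here are invented stand-ins for the authors’ neural network, not their actual method or code.

```python
# A minimal, illustrative sketch (NOT the authors' code) of the core idea behind
# pose-dependent Gaussian avatars: the body is represented by many small 3D
# Gaussian primitives, and a learned function maps a body pose to each
# Gaussian's placement and appearance. The linear "model" below is an invented
# stand-in for a real neural network.
import numpy as np

NUM_GAUSSIANS = 10_000    # Gaussian primitives covering the body surface
POSE_DIM = 72             # e.g., SMPL-style pose: 24 joints x 3 rotation axes
PARAMS_PER_GAUSSIAN = 10  # 3 position + 3 scale + 3 color + 1 opacity

def pose_to_gaussians(pose: np.ndarray, weights: np.ndarray) -> dict:
    """Toy stand-in for the learned pose -> Gaussian-map network."""
    flat = weights @ pose                     # predict all parameters at once
    p = flat.reshape(NUM_GAUSSIANS, PARAMS_PER_GAUSSIAN)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    return {
        "positions": p[:, 0:3],               # 3D center of each Gaussian
        "scales": np.exp(p[:, 3:6]),          # anisotropic extent (kept positive)
        "colors": sigmoid(p[:, 6:9]),         # RGB in [0, 1]
        "opacities": sigmoid(p[:, 9]),        # alpha in [0, 1]
    }

# Usage: animating the avatar means feeding a new pose every frame. Because
# every Gaussian's parameters depend on the pose, secondary effects such as
# moving clothing fall out of the same mapping (the detail the chapter asks
# you to notice).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(NUM_GAUSSIANS * PARAMS_PER_GAUSSIAN, POSE_DIM))
frame = pose_to_gaussians(rng.normal(size=POSE_DIM), W)
print(frame["positions"].shape)  # (10000, 3): one center per Gaussian, per pose
```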

Below is a variation of this technology demonstrated by taking a still reference image and mapping it onto existing biometric-based human movement. (This video has no sound).

Required: “Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?” by Dobber et al., The International Journal of Press/Politics, 2021

This article describes the results of a research study measuring the effects of deepfake media. In particular, review the sections that describe how deepfake media can be deployed in combination with political microtargeting (PMT), an algorithmically optimized system for feeding media to targeted audiences on social media.
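To illustrate what “algorithmically optimized feeding” means mechanically, here is a deliberately simplified, hypothetical sketch of a microtargeting selector: each user profile is scored against a target segment, and the tailored clip is served only where it is predicted to resonate. Every field name, weight, and threshold below is invented for illustration; this is not the study’s procedure or any platform’s actual system.

```python
# A hypothetical sketch of political microtargeting (PMT) as a mechanism:
# score each user profile against a target segment, then serve the tailored
# clip only to high-scoring profiles. All names and numbers are invented.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    region: str
    interests: set = field(default_factory=set)

TARGET_REGION = "region_a"                        # assumed campaign segment
TARGET_INTERESTS = {"religion", "local_politics"}

def targeting_score(user: UserProfile) -> float:
    """Toy relevance score: overlap between a profile and the target segment."""
    score = 0.5 if user.region == TARGET_REGION else 0.0
    score += 0.25 * len(user.interests & TARGET_INTERESTS)
    return score

def select_content(user: UserProfile) -> str:
    """Deliver the tailored clip only where it is predicted to resonate."""
    return "tailored_clip" if targeting_score(user) >= 0.5 else "generic_ad"

# Usage: the same clip reaches only the audience most likely to be receptive,
# which is the combination of deepfakes and targeted delivery the study examines.
u = UserProfile(region="region_a", interests={"religion", "gardening"})
print(select_content(u))  # -> tailored_clip
```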

You are free to skim this research study in its entirety, but please read carefully the following sub-sections:

  • Abstract/Introduction
  • Theoretical Background
  • Discussion

Related: Fake Biden robocall to New Hampshire voters highlights how easy it is to make deepfakes − and how hard it is to defend against AI-generated disinformation (2 pages)

Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69–91.

Required: “Deepfakes 2.0: The New Era of ‘Truth Decay’” by Brig. Gen. R. Patrick Huston and Lt. Col. M. Eric Bahm, Just Security, April 14, 2020 (4 pages)

This article describes the scope of the misinformation problem, outlines the potential harms, and offers some solutions.

Related: Nikon, Sony and Canon fight AI fakes with new camera tech (1 page)

Related: Deepfake deluge expected from AI image generation breakthrough (so long, LoRA?) (1 page)

Huston, R. P., & Bahm, M. E. (2020, April 14). Deepfakes 2.0: The new era of “truth decay.” Just Security.

Required: “Is seeing still believing? The deepfake challenge to truth in politics” by William A. Galston, Brookings Institution, January 8, 2020 (8 pages)

This article touches upon several of the topics we have covered in this course: engagement with mediated communication, epistemological differences, and subjectively optimized information bubbles.

Look for the author’s framing of the problem and how he approaches combating it. Think about how you would approach the problem according to your chosen framework.

Galston, W. A. (2020, January 8). Is seeing still believing? The deepfake challenge to truth in politics. The Brookings Institution.

For further study: “Deepfakes: The Coming Infocalypse” by Nina Schick.

Optional: “Nvidia researchers debut GauGAN, AI that creates fake landscapes that look real”

Optional: Examples of AI-generated audio samples imitating known speakers.

License


Synthetic Media and the Construction of Reality Copyright © 2021 by University of New Hampshire College of Professional Studies (USNH) is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.