Main Body
Synthetic media: AI-generated content
Overview
In the second half of this book, we step inside the mediated communication ecology to examine synthetic media:
- Content composed of objects/people that do not actually exist in real life.
- Content that is not created by actual people.
- Artificial content that is derived from the extraction and computational extrapolation of real objects or people.
The purpose of introducing synthetic media into our narrative of study is to say, in so many words, “It is already difficult to know what to believe as objective truth within mediated communication content created by humans. Now we have to contend with media created by artificial intelligence (AI)!”
While AI-generated media has moved beyond its earliest stages, it remains imperfect. Much AI-generated image, audio, and video content is still easy to detect. Yet as the pace of AI development continues, AI-generated synthetic media will become difficult to distinguish from human-generated media. AI-generated human faces have already achieved a level of quality high enough for commercial use.
In this chapter, we examine AI-generated content that produces new material “out of thin air,” and then we consider its implications for the human construction of reality, given the factors we have already studied.
Key Terms
Datasets: All AI systems must be “fed” data from which they can learn the patterns of information in everyday objects. For example, if an AI system needs to recognize the characteristics of a dog, it is provided with millions of images of dogs to learn the range of representations of “dog.” Once seeded with this information, an AI system can be employed to recognize or, as in the examples below, generate imagery that does not exist in real life. Review the article “LAION-5B: A New Era of Open Large-scale Multi-modal Datasets.”
Liar’s Dividend: This expression describes a pathological media environment in which public perception becomes so skeptical of fake content that liars and con artists can plausibly persuade people that real content is fake. The dynamic is exemplified by a quote from Steve Bannon, former chief strategist for Donald Trump: “The Democrats don’t matter. The real opposition is the media. And the way to deal with them is to flood the zone with shit.”
This “flooding the zone” tactic refers to intentionally contaminating an information ecosystem with misinformation so that audiences become confused and distrustful; they cannot distinguish what is real from what is not. The long-term effect is public weariness with the process of finding the truth at all, which leads people to abandon the idea that the truth is knowable. Review the article “Deepfakes, Elections, and Shrinking the Liar’s Dividend.”
What should you be focusing on?
Your goal this week is to investigate the various developments of AI-generated content to determine who is using it and for what purpose. As you review each resource:
- Think about the patterns of use that have occurred in prior technology advancements: What was the developers’ original vision for their products? How were they actually used? How did bad actors leverage these innovations to achieve their goals? What do those patterns suggest about the future use of current innovations in AI-generated content?
- Think about what kind of ethical concerns should be discussed, by whom, and under what conditions. What should we be discussing about these systems before they reach a point of ubiquity, like the way YouTube, Facebook, and other globally accessible communication platforms have permeated societies?
Readings & Media
Thematic narrative in this chapter
In the following readings and media, the authors present the following themes:
- Artificial intelligence can generate coherent text and realistic images.
- The open availability of synthetic media applications inspires many creative uses.
Required: “The Digital ‘Real’ Is Probably Now Unknowable to the Human Eye” by Stefan Baushard, Education Disrupted: Teaching and Learning in an AI World, August 12, 2024. (9 pages, mostly images)
This article curates several examples of AI-generated images and videos so realistic that they are hardly discernible as fake. As the recent controversy over the photo of Vice President Kamala Harris’s airport crowd suggests, the inability to discern real from fake imagery creates a weary and ambivalent public that cannot be certain of the validity of anything.
A quote from the article states, “I wonder if this will kill the internet. The online world will become a place not of fake news, but of fake reality.”
Required: “Designed to Deceive: Do These People Look Real to You?” by Kashmir Hill and Jeremy White, The New York Times, November 21, 2020 (about 10 pages with interactive elements). https://www.nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html
This extraordinary article describes how AI is used to generate realistic images of people and animals. Scroll down through the initial visual content to read the article, and use the interactive features at the bottom, which show how it is possible to edit the subjects’ features, gender, race, mood, and physical perspective.
Note: The New York Times requires a paid subscription, but visitors can access a certain number of articles at no charge each month. If you cannot access the article, please contact the instructor.
Supplemental: “Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling” by Zhe Li, Zerong Zheng, Lizhen Wang, Yebin Liu.
Scroll to the bottom to view the experimental applications of this tool.
Required: “The People Onscreen Are Fake. The Disinformation Is Real.” by Adam Satariano and Paul Mozur, The New York Times, February 7, 2023.
This article describes how Synthesia, an AI video-generation system for creating business training, corporate communication, and product demo videos, was used to produce propaganda.
Note: The New York Times access note above applies to this article as well.
Supplemental: “UK startup’s generative AI tools used for Venezuelan propaganda” by Oscar Hornstein, UKTN, March 17, 2023. An example of actual AI-generated propaganda deployed in Venezuela.
Required: “These AI-generated news anchors are freaking me out” by Kyle Orland, Ars Technica, December 15, 2023. (5 pages)
This article describes examples of AI-generated news content from a service provider called Channel 1 (not an actual TV channel). In the showcase example, an original French video recording is translated and regenerated by an AI application so that the subject appears to speak English in the same voice as the original. While the company’s business model is aimed at serving regular newscasts, consider the potential for misuse.
Optional: “Almost an Agent: What GPTs can do” by Ethan Mollick, One Useful Thing blog, November 7, 2023. This article describes how an AI agent that produces text and images can be built without writing code or having specialized knowledge.
Optional: Review “The AI Index Report – Measuring Trends in Artificial Intelligence.” Produced every year by Stanford University, this is a comprehensive compilation and analysis of the state of AI: how it is being used, ethical issues, legislative trends, and international implications.