Synthetic media: AI-generated content

Overview

In the second half of this book, we step inside the mediated communication ecology to examine synthetic media:

  • Content composed of objects/people that do not actually exist in real life.
  • Content that is not created by actual people.
  • Artificial content that is derived from the extraction and computational extrapolation of real objects or people.

The purpose of introducing synthetic media into our narrative of study is to say, in so many words, “It is already difficult to know what to believe as objective truth within mediated communication content created by humans. Now we have to contend with media created by artificial intelligence (AI)!”

While AI-generated media has moved beyond its earliest stages, it remains imperfect: fake audio and video content is still fairly easy to detect. Yet if the pace of improvement continues, AI-generated synthetic media will become difficult to discern from human-generated media featuring real human subjects. AI-generated human faces have already achieved a level of quality high enough for commercial use.

In this chapter, we examine AI systems that generate new content “out of thin air,” and then we consider the implications for the human construction of reality, given the factors we have already studied.

Key Terms

Datasets: All AI systems must be “fed” data from which they can learn the patterns of information in everyday objects. For example, if an AI system needs to recognize the characteristics of a dog, it will be provided with millions of images of dogs to ascertain the range of representations of “dog.” Once seeded with this information, an AI system can be employed to recognize or, as in the case examples below, generate examples of imagery that do not exist in real life. Review the article “LAION-5B: A New Era of Open Large-scale Multi-modal Datasets.”
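To make the idea of a dataset-trained generator concrete, here is a minimal sketch, not drawn from this chapter’s readings, of how a text-to-image model trained on a LAION-style dataset can be asked to produce a photograph that was never taken. It assumes the open-source Hugging Face diffusers library is installed; the model identifier is an illustrative example.

    # Minimal sketch: generate a synthetic image with a model trained on a
    # large image-text dataset. Assumes the "diffusers" library is installed;
    # the model identifier below is an illustrative example.
    from diffusers import StableDiffusionPipeline

    # Load a text-to-image model whose weights encode patterns learned from
    # millions of captioned images (datasets such as LAION-5B).
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

    # Ask the model to synthesize a photograph of a dog that does not exist.
    result = pipe("a photograph of a golden retriever playing in a park")
    result.images[0].save("synthetic_dog.png")

The point of the sketch is not the specific tool but the workflow: a model distilled from millions of real images can fabricate a convincing new image from a single line of text.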

What should you be focusing on?

Your goal this week is to investigate the various developments of AI-generated content to determine who is using it and for what purpose. As you review each resource:

  • Think about the patterns of use that have occurred in prior technology advancements: What was the developers’ original vision for their products? How were they actually used? How did bad actors leverage these innovations to achieve their goals? What do those patterns suggest about the future use of current innovations in AI-generated content?
  • Think about what kind of ethical concerns should be discussed, by whom, and under what conditions. What should we be discussing about these systems before they reach a point of ubiquity, like the way YouTube, Facebook, and other globally accessible communication platforms have permeated societies?

Readings & Media

Thematic narrative in this chapter

In the following readings and media, the authors will present the following themes:

  1. Artificial intelligence can generate coherent text and realistic images.
  2. The open availability of synthetic media applications inspires many creative uses.

    Required    “Almost an Agent: What GPTs can do” by Ethan Mollick, One Useful Thing blog, November 7, 2023

Ethan Mollick is a leading voice in the advancement and implementation of LLMs such as ChatGPT into everyday use, and education in particular. This article describes Mollick’s experiments creating an “agent,” which is a pre-programmed AI applet that can perform a function such as generating text and images. The article contains many illustrative images, which can be viewed at a larger scale by clicking on them.

There is a saying permeating the conversation about LLMs: “The AI you are using today is the worst it will ever be,” which is to suggest that the mind-blowing capabilities of today’s LLMs are only the beginning. With that in mind, use your imagination to project the capabilities of GPT agents to produce information and media at scale.

    Required    “Designed to Deceive: Do These People Look Real to You?” by Kashmir Hill and Jeremy White, The New York Times, November 21, 2020 (about 10 pages with interactive elements).

This extraordinary article describes how AI is used to generate realistic images of people and animals. Scroll down through the initial visual content to read the article. Use the interactive features at the bottom, which show how it is possible to edit a generated face’s features, gender, race, mood, and physical perspective.

Note: The New York Times requires a paid subscription, but visitors can access a certain number of articles at no charge each month. If you cannot access the article, please contact the instructor.

Supplemental: “Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling” by Zhe Li, Zerong Zheng, Lizhen Wang, and Yebin Liu.

Scroll to the bottom to view the experimental applications of this tool.

Hill, K., & White, J. (2020, November 21). Designed to deceive: Do these people look real to you? The New York Times. https://www.nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html

    Required    “The People Onscreen Are Fake. The Disinformation Is Real.” by Adam Satariano and Paul Mozur, The New York Times, Feb. 7, 2023

This article describes how Synthesia, an AI video-generation system intended for business uses such as training, corporate communication, and product demo videos, was used to produce propaganda.

Note: The New York Times requires a paid subscription, but visitors can access a certain number of articles at no charge each month. If you cannot access the article, please contact the instructor.

Supplemental: “UK startup’s generative AI tools used for Venezuelan propaganda” by Oscar Hornstein, UKTN, March 17, 2023. An example of actual propaganda created in Venezuela.

Satariano, A., & Mozur, P. (2023, February 7). The people onscreen are fake. The disinformation is real. The New York Times. https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html

    Required    “These AI-generated news anchors are freaking me out” by Kyle Orland, Ars Technica, December 15, 2023. (5 pages)

This article describes examples of AI-generated news content from a service provider called Channel 1 (not an actual TV channel). The showcase example presents an original French video recording that has been translated and regenerated by an AI application so that the subject appears to speak English in the same voice as the original. While the company’s business model is intended to serve regular newscasts, consider the potential for misuse.


Optional: Follow “Trends in AI” by Sergi Castella i Sapé on Medium.com

This author publishes a monthly roundup of research, trends, and other points of interest related to AI.

Optional: Review “The AI Index Report – Measuring Trends in Artificial Intelligence”

This is a comprehensive compilation and analysis of the state of AI, how it is being used, ethical issues, legislative trends, and international implications. It is produced every year by Stanford University.

License

Synthetic Media and the Construction of Reality Copyright © 2021 by University of New Hampshire College of Professional Studies (USNH) is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.