Main Body
The Attention Economy & Algorithmic Search
“As users engage with technologies such as search engines, they dynamically co-construct content and the technology itself.” – Safiya Umoja Noble
Overview
In the prior readings, you explored the theoretical construct of a mediatized communication ecology and its relationship to the human construction of reality.
In this chapter, we focus on the digital systems within our mediatized ecology and the algorithms that determine what you see as you engage with them. If you have ever studied information literacy in a prior course, you likely focused on the content of information, evaluating its objective validity and reliability. In this area of study, by contrast, we are interested in how that information appeared in front of you as an output of your engagement with a system.
Why does this matter? It is because popular search engines like Google, along with AI tools, are where many users go to confirm “the real truth” instead of relying on other media institutions or experts. To many, a Google or AI search is an operationalized method of “Doing your own research.”
The problem with reliance on search engines and AI is that there is a presumption that the top search results, most popular selections, or AI output affirm the validity of a given answer. From the user’s perspective, digital systems are presumed to be objective arbiters of fact checking (Sundin & Carlsson, 2016). However, this is not the case. The results of search queries reflect a combination of factors, including what other people have selected in similar search queries.
“Knowledge is not simply conveyed to users, but co-produced by the search engine’s ranking systems and profiling systems, none of which are open to the rules of transparency, relevance and privacy in a manner known from library scholarship in the public domain” (van Dijck, 2010, p. 575).
With AI systems trained on Internet data, there is a risk that the output of AI published on the Internet will be circulated back into its own source content, which results in a recursive spiral of “junk data.”
“We find that indiscriminate use of [AI] model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs…” (Shumailov et al., 2024).
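To build an intuition for this finding, here is a toy simulation (not the authors’ method, just a hedged sketch): “training on model-generated data” is reduced to resampling a dataset with replacement, generation after generation. Rare values in the tails get dropped early and can never return, so the diversity of the data collapses.

```python
import random

# Toy "model collapse" sketch: each generation is "trained" only on the
# previous generation's output. Resampling with replacement stands in for
# training-and-generating. Values lost in one generation never come back,
# so the tails of the original distribution disappear over time.

random.seed(42)
data = [random.gauss(0, 1) for _ in range(500)]  # stand-in for human-made data

original_variety = len(set(data))       # distinct values in the original data
original_range = max(data) - min(data)  # spread including the tails

for generation in range(30):
    # The next "model" only ever sees what the previous one produced.
    data = [random.choice(data) for _ in range(len(data))]

print("distinct values:", original_variety, "->", len(set(data)))
print("value range:", round(original_range, 2), "->", round(max(data) - min(data), 2))
```

The count of distinct values shrinks sharply, and the extremes of the distribution can only narrow, never widen: a crude analogue of the “irreversible defects” the quote describes.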
When you trace the flow of information in algorithmically optimized systems, you find that “truth” in digital network systems isn’t about the validity of information; it is about what the majority of people believe is true. As more people accept top search results and AI output as objective proof of a given body of knowledge, a feedback loop is created that elevates popular content to a higher state of relevance regardless of its quality. In fact, search results are designed to be subjective and are, to a degree, determined by who you are, your interests, where you have recently navigated on the Internet, and where you are physically located (Kozyreva et al., 2021).
Our study of algorithmic systems suggests that there is more to the human experience of “knowing” than the mere evaluation of information for its credibility. We must also examine how the information we find is the output of complex calculations that, by design, differ for every user according to for-profit business models.
The Attention Economy: We will also explore the algorithmically driven strategies found in social media systems that serve the Attention Economy. The Attention Economy is (among several definitions) a model of commercial activity designed to sustain the user’s attention for as long as possible in order to capture user data for use in targeted advertisements and personalized content. One strategy for achieving platform “stickiness” is to use algorithms that make the user feel as though they belong to a community of like-minded people, which influences the user’s desire to connect, engage, share, and “stick” on the system for as long as possible.
While the idea of a personalized Internet or app experience is a reasonable value proposition for users (especially since they can use these apps at no cost), there is a potential for users to experience an echo chamber effect that affirms and intensifies a singular worldview. When users co-construct online communities based on uniform worldviews, there is a greater risk of tribalistic divisiveness.
Key Terms
Algorithm – A set of mathematical rules that calculate an outcome according to the logic designed in the code. Algorithms are created by humans. There are many variations of this definition according to the context of its use. This article by the MIT Technology Review offers an expanded analysis.
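To make the “created by humans” point concrete, here is a hypothetical ranking algorithm. The signals and weights below are invented for illustration; the point is that a designer’s choice of weights, not any neutral law, decides which result comes first.

```python
# Hypothetical search-ranking algorithm. The signals ("popularity",
# "recency") and the weights are invented for illustration: a human
# chooses them, and that choice decides which result ranks first.

def score(page, w_popularity=0.8, w_recency=0.2):
    """Combine signals into one number; the weights are a design decision."""
    return w_popularity * page["popularity"] + w_recency * page["recency"]

pages = [
    {"title": "Well-sourced article", "popularity": 0.3, "recency": 0.9},
    {"title": "Viral post",           "popularity": 0.9, "recency": 0.2},
]

# With popularity weighted heavily, the viral post ranks first.
top = max(pages, key=score)
print(top["title"])  # Viral post

# A different human choice of weights reverses the outcome.
top = max(pages, key=lambda p: score(p, w_popularity=0.2, w_recency=0.8))
print(top["title"])  # Well-sourced article
```

Same data, same code structure, opposite answers: the “objective” ranking is wholly a product of values someone typed into the program.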
Homophily Principle – A term used to describe the psychological tendency of people to gravitate towards other people with whom they share a similar worldview, values, interests, etc. The diagram below shows how social media systems recirculate user engagement data to inform both the construct of affinity groups as well as the algorithms that predict which content would be most interesting to them (to foster further engagement and data collection).
The effects of homophily-based design are mixed: there is variation in the design of social media systems (such as reciprocal/non-reciprocal connections) as well as the degree to which any set of partisan content causes an individual to migrate their worldview towards one direction or another.
The more important takeaway from this research is to acknowledge that the content one encounters in all forms of social media is co-constructed according to the Homophily Principle, not drawn from a purely objective landscape of content.
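As a minimal sketch of homophily-based design (hypothetical users and topics, not any platform’s actual code), a recommender can simply find the user whose past engagement most resembles yours and surface what they engaged with, which, by construction, reinforces the worldview you already hold.

```python
# Minimal homophily-driven recommender (hypothetical users and topics).
# Step 1: measure how similar other users' engagement is to yours.
# Step 2: recommend whatever your most similar "neighbor" engaged with.

likes = {
    "you":   {"gardening", "hiking"},
    "alice": {"gardening", "hiking", "birdwatching"},
    "bob":   {"crypto", "gaming"},
}

def similarity(a, b):
    """Jaccard overlap between two users' liked topics (0.0 to 1.0)."""
    return len(likes[a] & likes[b]) / len(likes[a] | likes[b])

# Rank the other users by how much their likes overlap with yours.
neighbors = sorted((u for u in likes if u != "you"),
                   key=lambda u: similarity("you", u), reverse=True)

# Recommend your nearest neighbor's topics that you haven't seen yet.
recommendations = likes[neighbors[0]] - likes["you"]
print(recommendations)  # {'birdwatching'}: more of the worldview you already hold
```

Note what never gets recommended: anything from dissimilar users. The design itself, not a conspiracy, narrows what you encounter.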
Personalized – In the study of algorithmically optimized systems, personalization refers to the response of digital systems to the characteristics of the user and their past navigation behavior. For example, if you search for an item on Amazon, a digital cookie is saved in your browser; that cookie is retrieved by a tracker embedded on the Web pages you visit and then used as a basis for serving you ads in the sidebar. You will often see ads for items you just searched for because the ads are personalized to your (apparent) interests. While no personally identifiable information is transferred in this process, a person who uses the same computer or device for all of their online navigation will accumulate a body of metadata (data about a “person,” not specifically you) that serves as a profile signature to inform how an algorithmically optimized system should respond.
If you like, read more about metadata and tracking in this open e-book chapter, Metadata, Tracking and the User’s Experience.
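The cookie-and-tracker process described in the Personalized entry can be sketched as follows. All identifiers and profile data here are invented, and no real ad-network API is shown; the sketch only illustrates the flow from pseudonymous cookie to metadata profile to targeted ad.

```python
# Sketch of cookie-based ad personalization (all identifiers and data are
# invented; no real ad-network API). The tracker id is pseudonymous
# metadata, not PII; deleting the cookie severs the link to the profile.

cookie = {"tracker_id": "abc123"}  # stored in the browser; no name or email

# Browsing-signal counts accumulated under each tracker id over time.
profiles = {
    "abc123": {"running shoes": 7, "coffee makers": 2, "laptops": 1},
}

def pick_ad(cookie, profiles):
    """Serve the ad category with the strongest signal in the profile."""
    profile = profiles.get(cookie.get("tracker_id"), {})
    if not profile:
        return "generic ad"  # cleared cookies: no profile, no targeting
    return "ad for " + max(profile, key=profile.get)

print(pick_ad(cookie, profiles))                   # ad for running shoes
print(pick_ad({"tracker_id": "fresh"}, profiles))  # generic ad
```

The second call shows why clearing cookies or using incognito mode defeats this kind of targeting: a fresh identifier has no accumulated profile to match against.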
Note: There are many other Key Terms to review in the S.1896 – Algorithmic Justice and Online Platform Transparency Act resource listed below.
What should you be focusing on?
The purpose of this week’s study is to comprehend how personalized content appears in an individual’s channels of communication according to the purposeful designs of commercially driven interests.
Your goal is to be able to explain how it is possible for individuals to construct a certain perception of reality according to the design of various channels of mediated communication they encounter.
Readings & Media
Thematic narrative in this chapter
In the following readings and media, the authors will present the following themes:
- Search algorithms are created by humans who bring their cultural biases to their design. As such, algorithms may serve as a codification of bias.
- Search algorithms do not produce objective results; Internet content is personalized according to profit-driven strategies.
- Algorithmic systems can limit the breadth of information people may encounter.
- Transparency in algorithmic design is a double-edged sword, though there are solutions to consider if we determine that social media should operate in the public good.
Required S.1896 – Algorithmic Justice and Online Platform Transparency Act (2021) (5 minute read)
This congressional bill describes a set of Findings that justify legislative consideration to address the impact of algorithmic systems on the public. While its focus is primarily on responding to discriminatory harm, the Findings section summarizes the evidence that we will be studying as part of this course. Read:
- Section 2. Findings
- Section 3. Definitions
Required “FTC investigating ‘surveillance pricing’” (4:05) CBS News, August 2024
This video describes how the Federal Trade Commission (FTC) is investigating how retailers and “middlemen” (data analytics firms that develop customized algorithms) change prices for their online products according to the metadata associated with the consumer’s browsing history and other online habits. It emphasizes that the “economic reality” individuals experience online is a reflection of the flow of data rather than an objective representation of (in this case) the price of a product or service.
This segment is based on the FTC’s updated report, “FTC Surveillance Pricing Study Indicates Wide Range of Personal Data Used to Set Individualized Consumer Prices,” January 17, 2025.
Last, it is important to recognize that the moniker associated with this practice suggests that companies are conducting surveillance on their consumers. This is inaccurate. “Surveillance pricing” is being used as a form of rhetoric to inflate the perception of what is actually occurring here. Surveillance refers to the intentional monitoring of a known individual for the purpose of gathering data – usually for law enforcement purposes. The individual under surveillance is identified by name and other attributes of their Personally Identifiable Information (PII). However, the form of personalization referred to in this article is not surveillance – it is metadata tracking, which contains no PII; the individual subject to the flexible pricing policy is not known. Metadata tracking is used for many other purposes, such as personalizing Internet content, preferences for things like streaming services, and Google search results. It is relatively benign and can be negated easily by clearing one’s browser cookies or using an incognito browsing mode.
Required “Wall Street Journal: How TikTok’s Algorithm Figures You Out | WSJ” (13:02)
This video focuses exclusively on TikTok’s algorithm, but the concept of personalizing content is the same for all other social media platforms, which operate on the same premise of the Attention Economy for their profitability.
One of the comments at the end by a data scientist sums up the theme of this video: “Whether it’s on TikTok, on Facebook, on YouTube, we’re interacting with algorithms in our everyday life, more and more. We are training them, and they’re training us.”
As you watch this video, consider what it is that the algorithms are “training” us to do. There are multiple levels to consider beyond just swiping to the next video.
Required Algorithms of Oppression: How Search Engines Reinforce Racism. Chapter 5 (excerpt) – Safiya Umoja Noble (2018). (20 minute read)
Read the sub-chapter Search as a Source of Reality. You may need to scroll up or down to locate the beginning of this sub-chapter within the e-book.
Noble’s research in this area is centered on the premise that “…information provided to a user is deeply contextualized and stands within a frame of reference” (Noble, 2018, p. 108), which points to the need to deconstruct the historical philosophy that informs how information should be classified (and thus, retrieved).
Noble goes a step further than merely stating that contemporary search engines are biased towards a commercial interest. They are biased towards reflecting a cultural frame of reference. Thus, algorithmic search is more than just personalized for us; it is recursively “culturalized” as well. (For further reading to support this proposition, skim through the chapter “Searching for Black Girls”).
Required “Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative News Practices” – Francesca Tripodi (2018). Excerpt: “Googling for Truth” (15 minute read)
This is an excerpt from a larger report which you can access directly from the Data & Society Research Institute website. While the larger theme of the paper focuses on the news engagement habits of self-identified Conservatives, the section we are interested in is generalizable to all users of search engines to seek information.
This chapter describes how people employ Google search as a method of “Doing your own research” as a way to circumvent the perceived biases of news media and other institutions.
Pay particular attention to the way an initial search query is phrased. As you will see in later studies, if it is possible to influence how a search query is phrased, it is possible to control the search results.
Required “Behind The Filter Bubble” Politicology Podcast, May 25, 2022. (Look on the upper right corner for the red “Play Episode” button). (45 minute listen)
This podcast focuses on the expertise of Stanford Professors Mehran Sahami and Rob Reich. Among other topics, they ask: What does it mean when opaque algorithms operate without transparency as if they are presumed to be serving a public good? What would happen if there was total transparency of the algorithms? Would it make things better or worse? Below is a list of the topics covered.
The first two topics in the podcast repeat subject matter already covered in prior chapter resources, but they are articulated so well that it may help to solidify your understanding of the issues. Note that the player controls on this webpage include speed control if you prefer to play back the content faster than 1X speed.
- 02:21 – What do social media algorithms do and how do they impact what you see on platforms?
- 05:01 – The embedded values on social media platforms.
- 11:07 – How algorithmic transparency could help bad actors.
- 12:23 – What algorithms look for and what they’re trained to do.
- 15:05 – Content moderation and algorithmic amplification.
- 23:53 – Transparency without going open source.
- 28:01 – Algorithmic choice through “middleware.”
- 32:27 – Consumer Choice, misinformation, and filter bubbles.
- 48:43 – Whether Democracy can withstand this.
Optional: “New York Seeks to Ban Algorithmic Feeds for Teens” – By Andrew Hutchinson, Social Media Today, June 20, 2024 (2 minute read). This article describes several regulatory efforts to control the effects of algorithms on teens based on claims that algorithmically optimized content is harmful. There are conflicting reports about this claim. In contrast, “Yet Another Massive Study Says There’s No Evidence That Social Media Is Inherently Harmful To Teens” outlines the case against causal effects of social media. Note that TechDirt is not a scholarly resource. While their irreverent blogging is noteworthy and well-cited, readers should be cautious of the author’s bias.
Optional: “Code-Dependent: Pros and Cons of the Algorithm Age” – By Lee Rainie and Janna Anderson, Pew Research Center, 2017 (5 minute read). This article outlines seven themes that have emerged as more algorithmically optimized systems are implemented throughout the Internet. Skim through this article to capture the major concerns expressed by the survey participants.
Optional: “Many Tech Experts Say Digital Disruption Will Hurt Democracy” – By Janna Anderson and Lee Rainie, Pew Research Center. February 21, 2020. The Pew Research Center conducts ongoing research on public opinion and expert commentary on contemporary issues, many of which center on technology, the Internet, and social media. This article presents survey results about questions related to public trust and civil divisiveness.
“About half predict that humans’ use of technology will weaken democracy between now and 2030 due to the speed and scope of reality distortion, the decline of journalism and the impact of surveillance capitalism. A third expect technology to strengthen democracy as reformers find ways to fight back against info-warriors and chaos” (Anderson and Rainie, 2020, p. 1).
Optional: The Attention Economy – By Lexie Kane on June 30, 2019, Nielsen Norman Group. This article describes the Attention Economy from the perspective of website and app design: How do advertisers force users to pay attention to ads or content? This may interest you if your studies center around marketing and business.
Optional: Understanding Echo Chambers and Filter Bubbles: The Impact of Social Media on Diversification and Partisan Shifts in News Consumption – By Kitchens, B., Johnson, S. L., & Gray, P. (2020). This is the full research study referred to in the Homophily Principle section above. We recommend reading the Literature Review and the Practical Implications sections to gather a sense of the research questions and how the results were interpreted. It is critical to note that the findings do not suggest direct causal relationships between the use of social media systems as a whole and the formation of certain psychological dispositions, i.e., liberal, conservative, etc. There is too much diversity in the design of social media systems, and the data in this research is limited to user behavior (not user beliefs).
References
Anderson, J., & Rainie, L. (2020). Many tech experts say digital disruption will hurt democracy. Pew Research Center: Internet & Technology, February 21.
Kitchens, B., Johnson, S. L., & Gray, P. (2020). Understanding echo chambers and filter bubbles: The impact of social media on diversification and partisan shifts in news consumption. MIS Quarterly, 44(4).
Kozyreva, A., Lorenz-Spreen, P., Hertwig, R., et al. (2021). Public attitudes towards algorithmic personalization and use of personal data online: Evidence from Germany, Great Britain, and the United States. Humanities and Social Sciences Communications, 8, 117. https://doi.org/10.1057/s41599-021-00787-w
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Shumailov, I., Shumaylov, Z., Zhao, Y., et al. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755–759. https://doi.org/10.1038/s41586-024-07566-y
Sundin, O., & Carlsson, H. (2016). Outsourcing trust to the information infrastructure in schools: How search engines order knowledge in education practices. Journal of Documentation.
van Dijck, J. (2010). Search engines and the production of academic knowledge. International Journal of Cultural Studies, 13(6), 574–592.