DIGITAL LIFE

'News will find me' mindset makes people trust algorithms and online networks
One in three people believe they don't have to seek out news from traditional outlets like newspapers and television. Instead, they assume that the "news will find me" (NFM), relying on algorithms and social networks to get their information. A research team led by Penn State scholars recently found that these individuals often consider their online networks to be as trustworthy as professional editors and journalists.
This mindset may make people more vulnerable to believing and sharing misinformation, according to the researchers, who published their findings in the journal Social Media + Society.
To understand news consumption behavior, the researchers designed an experiment that allowed them to observe how individuals with different levels of NFM engage with news. They found that users with higher NFM considered news recommended by algorithms or shared by others in their social network to be just as credible as news recommended by editors and reporters.
However, mid- and low-NFM individuals evaluated news sources more critically and placed higher value on stories from editors and reporters.
"The good news is that overall, professionals are still valued," said corresponding author S. Shyam Sundar, Evan Pugh University Professor and James P. Jimirro Professor of Media Effects at Penn State. "But people with this tendency to rely on news coming to them—which is becoming more and more people—are trusting algorithms and social media friends to be their news sources."
When readers grant algorithms and social networks the same authority as journalists, it becomes easier for bad actors to manipulate that digital space by imitating a trusted news source, the researchers said.
"The underlying psychological mechanism was not parsed out in previous studies," said first author Mengqi Liao, assistant professor at the University of Georgia who completed her doctoral studies with Sundar at Penn State. "We did this experiment to understand and explain why respondents evaluate the recommended news the way they do."
The web-based experiment included 244 participants. Each participant completed a pre-questionnaire that measured their NFM level using a standardized survey scale. Participants were then randomly assigned to one of three simulated news feeds, in which content was recommended by a news editor, social media friends or an algorithm.
The content stayed the same across the news feeds; only the source of the recommendation (algorithm, friends or editor) changed. This allowed the researchers to examine how each source prompted participants to rely on different heuristics: "mental shortcuts," or rules of thumb that people use to make quick judgments.
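To make the design concrete, here is a minimal Python sketch of such a between-subjects setup. The condition labels, placeholder articles and function names are illustrative assumptions, not the study's actual materials.

```python
import random

# Between-subjects design: every participant sees the same articles;
# only the stated source of the recommendation varies by condition.
CONDITIONS = ["editor", "friends", "algorithm"]
ARTICLES = ["story_1", "story_2", "story_3"]  # placeholders, identical in all conditions

def assign_conditions(participant_ids, seed=42):
    """Randomly assign each participant to one recommendation-source condition."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

def build_feed(condition):
    """Attach the condition's source cue to the same underlying content."""
    return [{"article": a, "recommended_by": condition} for a in ARTICLES]

assignments = assign_conditions(range(244))  # 244 participants, as in the study
print(assignments[0], build_feed(assignments[0])[0])
```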
For example, when a news article is recommended by an editor, this activates the authority heuristic, prompting readers to trust the information because it comes from professional journalists.
When content is recommended by an algorithm, it triggers the machine heuristic: the assumption that machines are objective and free of human bias. Articles recommended by social media friends activate the homophily heuristic, meaning people are more likely to trust information shared by individuals they see as similar to themselves.
"For some people, the algorithm now carries the same weight as a journalist," said co-author Homero Gil De Zúñiga, distinguished professor of media studies at Penn State. "We're seeing a flattening of authority so that algorithms and social media feeds are being trusted like professional journalism."
Sundar said this leaves people with high NFM more vulnerable to misinformation and less informed overall, a problem that grows as more people adopt an NFM approach to news and information.
Liao added that it "would be a really big problem" if social media friends and algorithms recommended very biased or even false information.
"Subscriptions are going down; people are not actually seeking news," Sundar said. "Machine as a source is now becoming predominant, undermining the more traditional professional sources, and that's worrisome."
Sundar suggested possible strategies for combating the phenomenon, such as targeting high-NFM individuals with customized media literacy interventions. These interventions could show readers where information originated, as well as the steps journalists took to uncover it.
Trusting algorithms and online networks poses significant dangers, primarily because these systems are not neutral, objective, or transparent. They can amplify harmful biases, manipulate behavior through echo chambers, and facilitate the rapid spread of misinformation, ultimately threatening personal safety, social cohesion, and democratic processes.
1. Amplification of bias and discrimination:
- Biased training data: Algorithms are trained on historical data, which often contains societal prejudices. As a result, they can perpetuate, or even amplify, discrimination regarding race, gender, and socioeconomic status (the first sketch after these lists illustrates this feedback loop).
- Unfair outcomes: Automated systems used in hiring, lending, or criminal justice can lead to unfair decisions, punishing or marginalizing individuals based on flawed, biased data.
- Facial recognition errors: Facial recognition algorithms in particular have been shown to work less accurately for people of color, causing disproportionate harm to certain demographics.
2. Information manipulation and echo chambers:
- Filter bubbles: To maximize engagement, algorithms prioritize content that matches a user's previous preferences. This restricts exposure to diverse perspectives, reinforcing existing beliefs and creating "echo chambers" (the second sketch after these lists shows this dynamic in miniature).
- Prioritizing polarization: Platforms often highlight divisive or sensationalist content because it generates more engagement (clicks, views), which can radicalize users and increase social polarization.
- Spread of misinformation: Algorithms can act as a catalyst for misinformation, promoting false or harmful content simply because it triggers strong emotional reactions, which can destabilize democracies.
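As a rough illustration of the first point, the following toy Python sketch shows how a naive model trained on skewed historical hiring data simply reproduces that skew. The groups, records and decision threshold are invented for demonstration, not drawn from any real system.

```python
# Toy illustration of bias amplification: a naive model that learns
# from skewed historical hiring data turns the skew into a rule.
# All groups, records and thresholds below are invented.
HISTORICAL_HIRES = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": True},  {"group": "A", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
    {"group": "B", "hired": True},  {"group": "B", "hired": False},
]

def hire_rate(records, group):
    """Fraction of historical applicants from `group` who were hired."""
    group_records = [r for r in records if r["group"] == group]
    return sum(r["hired"] for r in group_records) / len(group_records)

def naive_model(applicant):
    """Recommend hiring whenever the applicant's group was historically
    hired more often than not: past bias becomes the decision rule."""
    return hire_rate(HISTORICAL_HIRES, applicant["group"]) > 0.5

print(naive_model({"group": "A"}))  # True  (75% historical hire rate)
print(naive_model({"group": "B"}))  # False (25% historical hire rate)
```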
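For the second point, here is a toy sketch of engagement-driven ranking. The articles, topics and weights are invented, but the loop is the mechanism described above: content similar to past clicks gets boosted, so the feed narrows.

```python
from collections import Counter

# Toy engagement-driven ranker: articles on topics the user already
# clicked get a similarity bonus, so the feed narrows over time.
# Articles, topics and weights are invented for illustration.
ARTICLES = [
    {"id": 1, "topic": "politics_left",  "engagement": 0.9},
    {"id": 2, "topic": "politics_right", "engagement": 0.8},
    {"id": 3, "topic": "science",        "engagement": 0.4},
]

def rank_feed(articles, click_history, similarity_weight=2.0):
    """Score each article by raw engagement plus a bonus proportional to
    how often the user clicked that topic before; highest score first."""
    clicked_topics = Counter(a["topic"] for a in click_history)
    def score(article):
        return article["engagement"] + similarity_weight * clicked_topics[article["topic"]]
    return sorted(articles, key=score, reverse=True)

# Two clicks on one topic are enough to lock it onto the top of the feed.
history = [ARTICLES[0], ARTICLES[0]]
print([a["topic"] for a in rank_feed(ARTICLES, history)])
# -> ['politics_left', 'politics_right', 'science']
```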
Provided by Pennsylvania State University