Monday, February 16, 2026

DIGITAL LIFE


Study maps seven roles for generative AI in fighting disinformation

Generative AI can be used to combat misinformation. However, it can also exacerbate the problem by producing convincing manipulations that are difficult to detect and can quickly be copied and disseminated on a wide scale. In a new study, researchers have defined seven distinct roles that AI can play in the information environment and analyzed each role in terms of its strengths, weaknesses, opportunities and risks.

"One important point is that generative AI has not just one but several functions in combating misinformation. The technology can be anything from information support and educational resources to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and we need to create more effective policies," says Thomas Nygren, Professor at Uppsala University, who conducted the study together with colleagues at the University of Cambridge, U.K., and the University of Western Australia.

The study, published in Behavioral Science & Policy, is an overview in which researchers from a range of scholarly disciplines have reviewed the latest research on how generative AI can be used in various parts of the information environment. These uses range from providing information and supporting fact-checking to influencing opinion and designing educational interventions, and the study considers the strengths, weaknesses, opportunities and risks associated with each use.

The researchers chose to work with a SWOT framework because it provides a more practical basis for decisions than blanket assertions that "AI is good" or "AI is dangerous." A system can be both helpful and harmful in the very same role. Analyzing each role using SWOT can help decision-makers, schools and platforms match the right measures to the right risk.

"The roles emerged from a process of analysis where we started out from the perception that generative AI is not a simple 'solution' but a technology that can serve several functions at the same time. We identified recurrent patterns in the way AI is used to obtain information, to detect and manage problems, to influence people, to support collaboration and learning, and to design interactive training environments. These functions were summarized in seven roles," Nygren explains.

The seven roles:

1. Informer: Strengths/opportunities: Can make complex information easier to understand, translate and adapt language, can offer a quick overview of large quantities of information.

Problems/risks: Can give incorrect answers ('hallucinations'), oversimplify and reproduce training data biases without clearly disclosing sources.

2. Guardian: Strengths/opportunities: Can detect and flag suspect content on a large scale, identify coordinated campaigns and contribute to a swifter response to misinformation waves.

Problems/risks: Risk of false positives/negatives (irony, context, legitimate controversies), distortions in moderation, and lack of clarity concerning responsibility and rule of law.

3. Persuader: Strengths/opportunities: Can support correction of misconceptions through dialogue, refutation and personalized explanations; can be used in pro-social campaigns and in educational interventions.

Problems/risks: The same capacity can be used for manipulation, microtargeted influence and large-scale production of persuasive yet misleading messages—often quickly and cheaply.

4. Integrator: Strengths/opportunities: Can structure discussions, summarize arguments, clarify distinctions, and support deliberation and joint problem-solving.

Problems/risks: Can create false balance, normalize errors through 'neutral synthesis,' or indirectly control problem formulation and interpretation.

5. Collaborator: Strengths/opportunities: Can assist in analysis, writing, information processing and idea development; can support critical review by generating alternatives, counterarguments and questions.

Problems/risks: Risk of overconfidence and cognitive outsourcing; users can fail to realize that the answer is based on uncertain assumptions and that the system lacks real understanding.

6. Teacher: Strengths/opportunities: Can give swift, personalized feedback and create training tasks at scale; can foster progression in source criticism and digital skills.

Problems/risks: Incorrect or biased answers can be disseminated as 'study resources'; risk that teaching becomes less investigative if students/teachers uncritically accept AI-generated content.

7. Playmaker: Strengths/opportunities: Can support design of interactive, gamified teaching environments and simulations that train resilience to manipulation and misinformation.

Problems/risks: Risk of simplifying stereotypes, ethical and copyright problems, and that gaming mechanisms can reward the wrong type of behavior if the design is not well considered.

AI must be implemented responsibly

The point of the roles is that they can serve as a checklist: each one shows how generative AI can help strengthen society's resilience to misinformation, but also what specific vulnerabilities and risks it entails. The researchers therefore analyzed each role using a SWOT approach: what strengths and opportunities it embodies, and what weaknesses and threats need to be managed.

"We show how generative AI can produce dubious content yet can also detect and counteract misinformation on a large scale. However, risks such as hallucinations, in other words, that AI comes out with 'facts' that are wrong, reinforcement of prejudices and misunderstandings, and deliberate manipulation mean that the technology has to be implemented responsibly. Clear policies are therefore needed on the permissible use of AI."

The researchers particularly underline the need for:

- Regulations and clear frameworks for the permissible use of AI in sensitive information environments;

- Transparency about AI-generated content and systemic limitations;

- Human oversight where AI is used for decisions, moderation or advice;

- AI literacy to strengthen the ability of users to evaluate and question AI answers.

"The analysis shows that generative AI can be valuable for promoting important knowledge in school that is needed to uphold democracy and protect us from misinformation, but having said that, there is a risk that excessive use could be detrimental to the development of knowledge and make us lazy and ignorant and therefore more easily fooled. Consequently, with the rapid pace of developments, it's important to constantly scrutinize the roles of AI as 'teacher' and 'collaborator,' like the other five roles, with a critical and constructive eye," Nygren says.

Provided by Uppsala University
