DIGITAL LIFE

How will chatbots and AI-generated content influence the 2026 elections in Brazil?
ChatGPT's response to the question of who to vote for in Brazil's 2026 presidential election is, in essence, "I can't comment." Even so, the chatbot profiles President Luiz Inácio Lula da Silva (PT) and Senator Flávio Bolsonaro (PL), summarizes electoral polls, lists strengths and weaknesses, and suggests what to consider when making a decision.
The consultation leads to the following conclusion: for those who prioritize "administrative experience and broad social policies," Lula would be the strongest candidate. If the focus is on a "conservative agenda, security, and less state intervention in certain areas," Flávio would be the most aligned.
Other chatbots, such as Gemini and Claude, follow a similar approach. The tools don't suggest a name, but they provide assessments of each candidate, list pros and cons, and propose how the user can reach a conclusion.
While the Superior Electoral Court (TSE) finalizes the rules on the use of artificial intelligence in the 2026 elections, the debate is growing about how the spread of AI could influence the election — whether through chatbots or the advancement of AI-generated content.
The influence of chatbots

Two recent studies show that talking to an AI can be more persuasive than traditional election propaganda.
A University of Oxford study, published in December, analyzed this effect in 76,977 people. Before conversing with the AIs, participants' opinions on political topics were measured on a scale of 0 to 100. After the dialogue with the systems, which were programmed to defend certain positions, opinions were measured again. In the most persuasive scenario, the average response shifted by 15.9 points.
Research published in Nature, also in December, with six thousand voters in the USA, Canada, and Poland, showed a similar effect.
In the American experiment, pro-Donald Trump and pro-Kamala Harris voters interacted with systems that advocated for the rival candidate. The conversations shifted participants' preferences by up to 3.9 points on a scale of 0 to 100, about four times the average effect of electoral advertising recorded in the country over the two previous elections. In the Canadian and Polish experiments, the shift reached about 10 points.
The results do not indicate that conventional AI tools act as campaign workers. But they reveal how persuasive language models can be when configured to defend a point of view.
In the 2024 elections, the TSE restricted the use of bots posing as candidates to contact voters. It also prohibited deepfakes and required warnings about the use of AI in electoral advertising.
This year, organizations such as NetLab UFRJ are also advocating that chatbots be prohibited from endorsing candidates and that electoral advertisements within AI systems be banned.
Trust in AI

In Brazil, one concern is the level of trust voters place in these tools, which operate under an aura of neutrality and authority even though they can make mistakes and reproduce biases.
A recent study by Aláfia Lab shows that 9.7% of Brazilians see AI systems, such as ChatGPT and Gemini, as a source of information. Matheus Soares, coordinator of the laboratory and co-coordinator of the AI in Elections Observatory, says that "confusing and inaccurate" answers can end up being interpreted as real in the electoral context.
Beyond errors, there are doubts about bias, says anthropologist David Nemer, a professor at the University of Virginia. He cites another Oxford study that identified regional distortions: in an analysis of millions of interactions, ChatGPT attributed less intelligence to Brazilians from the North and Northeast. The risk is that this type of bias will surface in an election with thousands of candidates, spanning not only the Executive but also the Legislative branch.
"This is a space where people trust that what is produced is true. But this 'truth' is based on a system whose origin and foundations are opaque," says the researcher. He adds that, unlike the disputes on social media, chatbots are usually seen as "neutral."
Fernando Ferreira, a researcher at UFRJ's NetLab, adds that AI's presence on the internet has expanded beyond chatbots. Search engines such as Google present answers generated by artificial intelligence, while tools like Grok, from X, are used for fact-checking. And the answers are treated as "a source of truth."
In 2024, Google went so far as to restrict Gemini's answers about elections. The filter, however, failed: the AI answered questions about some candidates in the municipal race but not about others. In OpenAI's case, the models may address politics but must remain neutral. In an October publication, the company stated that internal tests found signs of political bias in less than 0.01% of ChatGPT's answers.
Gender violence and electoral integrity

Another concern is disinformation in videos, audio, or images, amplified by artificial intelligence. The researchers expect the technology's impact on this electoral process to be greater than it was two years ago, partly because AI-generated content has become more accessible, faster to produce, and more believable.
Nemer says that, beyond misinformation about candidates, he is concerned about the spread of deepfakes (realistic content generated by artificial intelligence) that question the integrity of the electoral system, such as manipulated videos simulating failures in electronic voting machines, which could undermine voter confidence.
For Soares, a point of concern is deepnudes (hyperrealistic images that simulate nudity), which were already exploited in 2024 and could intensify gender-based political violence this year. Both expect candidates and supporters to use more AI tools to produce political material.
Two weeks ago, Agência Lupa showed that the share of fake content produced with AI among the organization's fact-checks jumped from 4.65% to 25.77% in one year. Almost 45% of the cases had a political slant.
For Laura Schertel, a professor at IDP and UnB, the main challenge for the TSE will be enforcing the existing rules. Among the proposals submitted to the Court, the researcher cites the creation of a mandatory compliance plan, in which companies explain in advance how they will apply and monitor electoral rules.
— The TSE is not a digital regulator. So there is a great challenge, which is how to ensure that this court, which issues new rules, has the capacity to implement them — says the lawyer.
mundophone