Saturday, February 21, 2026


TECH


3D vision technology powers factory automation

One night in 2010, Mohit Gupta decided to try something before leaving the lab. Then a Ph.D. student at Carnegie Mellon University, Gupta was in the final days of an internship at a manufacturing company in Boston. He'd spent months developing a system that used cameras and light sources to create 3D images of small objects. "I wanted to stress test it, just for fun," said Gupta, who would begin his postdoctoral research at Columbia Engineering a few months later.

He folded a piece of paper into a cone and placed it under the lights and cameras. When the cone was pointed toward the system, it captured the conical shape perfectly. But when light shone into the cone, the system failed. The problem, Gupta determined, was light bouncing off the cone's walls. Those "interreflections" added sources of light that the system wasn't built to understand.

He was far from the first researcher to encounter this problem. Five years earlier, Shree Nayar, now the T. C. Chang Professor of Computer Science at Columbia Engineering, had written a paper exploring a possible solution.

"That paper had planted a seed in my head," said Gupta, who is now a professor of computer sciences at the University of Wisconsin–Madison. "It made me think that there might be a way to solve the interreflection problem at scale."

After completing his Ph.D., Gupta came to Columbia Engineering and joined Nayar's research group, the Columbia Imaging and Vision Laboratory (CAVE). In just a few months, the engineers developed a breakthrough solution that has transformed the field of visual inspection. Today, technology based on their work is deployed in factories across the world.

"In a university research lab, we tend to approach problems from our academic viewpoint rather than the perspective of an industry insider," Nayar said. "It is therefore truly gratifying to see one of our results directly impact the manufacturing of a variety of products that are widely used in society."

Solving the reflection problem...Manufacturers produce more than a trillion electronic components every year. Many of these products, like chips and circuit boards, are small and complex, with extremely little tolerance for error. A mistake as small as a micron could render an entire board unusable. If problems aren't caught early in the manufacturing process, they can prove costly for businesses and their customers.

By the early 2000s, electronics had become so small and tightly packed that manufacturers began using computerized systems that used cameras and light projectors to measure the 3D structure of components. This new industry—called automated visual inspection—developed sophisticated imaging systems capable of inspecting the highly complex chips and boards found in consumer products such as phones, computers, and cars.

Now worth billions of dollars, the automated visual inspection industry has typically relied on a technique called structured light to create high-resolution 3D images of components. However, that technique has become less effective as electronics have become smaller. One reason is that tightly packed components tend to reflect light onto each other. Since automated systems are designed to measure light from projectors, spurious reflections can cause significant errors in 3D images.

In 2012, Nayar and Gupta developed a new imaging method that is resilient to these optical effects. They showed that by projecting light patterns in a narrow frequency band, structured light methods can be made robust to phenomena such as interreflections.

Their method, Micro Phase Shifting (MPS), published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), a premier computer vision venue, demonstrated unprecedented accuracy in the 3D images it produces. For printed circuit boards, the method can reconstruct depth maps with micron-level accuracy, opening the door to precise, high-speed 3D imaging that is valuable across many aspects of manufacturing.
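The MPS paper itself should be consulted for the actual algorithm; purely as an illustrative sketch of the phase-shifting family of structured-light methods that MPS builds on, the snippet below recovers a per-pixel phase map from N sinusoidally shifted projector patterns, the phase that a triangulation step would then convert into depth. The pattern model, the four-step example, and all names are assumptions for illustration, not the published method.

```python
import numpy as np

def decode_phase(images):
    """Recover a per-pixel wrapped phase map from N phase-shifted pattern images.

    images: array of shape (N, H, W), camera frames captured while the projector
    displays sinusoidal patterns shifted by 2*pi*k/N, k = 0..N-1. The returned
    phase (in (-pi, pi]) encodes projector position, which triangulation would
    then convert into depth.
    """
    n = images.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    # These weighted sums isolate the sine and cosine components of the phase.
    s = np.tensordot(np.sin(shifts), images, axes=(0, 0))
    c = np.tensordot(np.cos(shifts), images, axes=(0, 0))
    return np.arctan2(s, c)

# Toy check: encode a known phase ramp with 4 shifted patterns and decode it.
h, w, n = 64, 64, 4
true_phase = np.tile(np.linspace(-np.pi, np.pi, w, endpoint=False), (h, 1))
shifts = 2 * np.pi * np.arange(n) / n
captured = 0.5 + 0.4 * np.cos(true_phase[None, :, :] - shifts[:, None, None])
decoded = decode_phase(captured)
err = np.abs(np.angle(np.exp(1j * (decoded - true_phase))))   # wrap-safe error
print("max phase error:", err.max())
```

MPS itself differs by restricting the projected patterns to a narrow band of frequencies, which, per the researchers, is what makes the recovered phase robust to interreflections.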

Scaling the technology...In 2018, Omron, a leading company in the industrial automation and machine vision market, licensed the MPS technology from Columbia and developed an inspection system that launched in 2020. The product is now used by several leading automotive and consumer electronics manufacturers.

Dr. Masaki Suwa, the head of corporate research and development at Omron, and the president and CEO of OMRON SINIC X Corporation, said, "Our Automated Optical Inspection (AOI) solutions play a central role in ensuring the quality of printed circuit boards. Because Columbia's MPS technology is robust to spurious reflections when inspecting mirror-like surfaces such as solder joints, dies, and chip surfaces, it has proved essential to reliable 3D inspection. As electronic components continue to miniaturize, a technology like MPS that can capture 3D shapes with high precision will become increasingly important to printed circuit board manufacturing."

It is rare for a technology developed in a university laboratory to achieve large-scale adoption in a fast-moving and highly demanding field such as factory automation.

"The successful commercialization of Micro Phase Shifting underscores both the strength of Columbia's creative fundamental research and the value of close collaboration between academia and industry to bring breakthrough innovations into real-world manufacturing environments," said Ofra Weinberger, director of Columbia Technology Ventures at Columbia University.

"When we began this research project, we were motivated by a fundamental question: How do you recover accurate 3D information when light behaves in complex and non-ideal ways?" said Gupta. "We showed that by coding light smartly, one could separate the true 3D signal from the noise due to interreflections—a long-standing open problem in 3D imaging. Seeing that idea evolve into a method deployed at scale to help ensure the reliability of critical technologies has been a career highlight."

By creating an approach adopted by industry, the researchers demonstrated the value of academic research in bringing fresh ideas and rigorous thinking to business.

"Academic researchers explore a wide spectrum of problems, ranging from theoretical questions that seek to advance the knowledge base of the field to novel solutions to known practical problems," Nayar said. "It is exciting to see one of our innovations solving a critical problem in the manufacturing of products we use on a daily basis."

Provided by Columbia University  

Friday, February 20, 2026


TECH


New chip-fabrication method creates 'twin' fingerprints for direct authentication

Just like each person has unique fingerprints, every CMOS chip has a distinctive "fingerprint" caused by tiny, random manufacturing variations. Engineers can leverage this unforgeable ID for authentication, to safeguard a device from attackers trying to steal private data.

But these cryptographic schemes typically require secret information about a chip's fingerprint to be stored on a third-party server. This creates security vulnerabilities and requires additional memory and computation.

To overcome this limitation, MIT engineers developed a manufacturing method that enables secure, fingerprint-based authentication, without the need to store secret information outside the chip.

They split a specially designed chip during fabrication in such a way that each half has an identical, shared fingerprint that is unique to these two chips. Each chip can be used to directly authenticate the other. This low-cost fingerprint fabrication method is compatible with standard CMOS foundry processes and requires no special materials.

The technique could be useful in power-constrained electronic systems with non-interchangeable device pairs, like an ingestible sensor pill and its paired wearable patch that monitor gastrointestinal health conditions. Using a shared fingerprint, the pill and patch can authenticate each other without a device in between to mediate.

"The biggest advantage of this security method is that we don't need to store any information. All the secrets will always remain safe inside the silicon. This can give a higher level of security. As long as you have this digital key, you can always unlock the door," says Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this security method.

Lee is joined on the paper by EECS graduate students Jaehong Jung and Maitreyi Ashok, as well as co-senior authors Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor of Electrical Engineering and Computer Science, and Ruonan Han, a professor of EECS and a member of the MIT Research Laboratory of Electronics. The research was recently presented at the IEEE International Solid-State Circuits Conference.

"Creation of shared encryption keys in trusted semiconductor foundries could help break the tradeoffs between being more secure and more convenient to use for protection of data transmission," Han says. "This work, which is digital-based, is still a preliminary trial in this direction; we are exploring how more complex, analog-based secrecy can be duplicated—and only duplicated once."


Leveraging variations...Even though they are intended to be identical, each CMOS chip is slightly different due to unavoidable microscopic variations during fabrication. These randomizations give each chip a unique identifier, known as a physical unclonable function (PUF), that is nearly impossible to replicate.

A chip's PUF can be used to provide security just like the human fingerprint identification system on a laptop or door panel.

For authentication, a server sends a request to the device, which responds with a secret key based on its unique physical structure. If the key matches an expected value, the server authenticates the device.

But the PUF authentication data must be registered and stored in a server for access later, creating a potential security vulnerability.

"If we don't need to store information on these unique randomizations, then the PUF becomes even more secure," Lee says.

The researchers wanted to accomplish this by developing a matched PUF pair on two chips. One could authenticate the other directly, without the need to store PUF data on third-party servers.
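The article does not spell out the protocol, but once two chips hold an identical PUF-derived key, a standard symmetric challenge-response is enough for them to authenticate each other directly. The sketch below is a minimal illustration of that idea using HMAC; key derivation and error correction from the raw PUF bits are omitted, and the class and variable names are hypothetical.

```python
import hmac, hashlib, os

class Device:
    """Toy device holding a key derived on-chip from its PUF; the key never leaves it."""

    def __init__(self, puf_key: bytes):
        self._key = puf_key   # in hardware this would be regenerated from the PUF on demand
        self._nonce = b""

    def challenge(self) -> bytes:
        self._nonce = os.urandom(16)          # fresh random challenge
        return self._nonce

    def respond(self, nonce: bytes) -> bytes:
        return hmac.new(self._key, nonce, hashlib.sha256).digest()

    def verify(self, response: bytes) -> bool:
        expected = hmac.new(self._key, self._nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

# The twin chips share one PUF-derived key; any other chip holds a different one.
shared_key = os.urandom(32)                   # stands in for the key both twins derive
pill, patch = Device(shared_key), Device(shared_key)
impostor = Device(os.urandom(32))

nonce = patch.challenge()
print(patch.verify(pill.respond(nonce)))      # True: the genuine twin answers correctly
print(patch.verify(impostor.respond(nonce)))  # False: key mismatch
```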

As an analogy, consider a sheet of paper torn in half. The torn edges are random and unique, but the pieces have a shared randomness because they fit back together perfectly along the torn edge.

While CMOS chips aren't torn in half like paper, many are fabricated at once on a silicon wafer which is diced to separate the individual chips.

By incorporating shared randomness at the edge of two chips before they are diced to separate them, the researchers could create a twin PUF that is unique to these two chips.

"We needed to find a way to do this before the chip leaves the foundry, for added security. Once the fabricated chip enters the supply chain, we won't know what might happen to it," Lee explains.

Sharing randomness...To create the twin PUF, the researchers change the properties of a set of transistors fabricated along the edge of two chips, using a process called gate oxide breakdown.

Essentially, they pump high voltage into a pair of transistors by shining light with a low-cost LED until the first transistor breaks down. Because of tiny manufacturing variations, each transistor has a slightly different breakdown time. The researchers can use this unique breakdown state as the basis for a PUF.

To enable a twin PUF, the MIT researchers fabricate two pairs of transistors along the edge of two chips before they are diced to separate them. By connecting the transistors with metal layers, they create paired structures that have correlated breakdown states. In this way, they enable a unique PUF to be shared by each pair of transistors.

After shining LED light to create the PUF, they dice the chips between the transistors so there is one pair on each device, giving each separate chip a shared PUF.
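For intuition only, here is a small numerical model of why connecting paired transistor structures before dicing yields matching PUF bits on both chips: the breakdown behavior is dominated by randomness the two halves share, with only small independent measurement noise on each side. The noise level, cell count, and bit-extraction rule (which of a cell's two structures breaks down first) are assumptions, not values from the ISSCC paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 256     # paired breakdown structures per chip (illustrative count)
noise = 0.05      # small, independent per-chip measurement noise (assumed)

# Breakdown times of the two structures in each cell are dominated by randomness
# the twin chips share, because the structures were connected before dicing.
shared_randomness = rng.normal(size=(n_cells, 2))

def read_bits(shared_part, chip_noise):
    """One PUF bit per cell: which of the cell's two structures breaks down first."""
    times = shared_part + chip_noise
    return (times[:, 0] < times[:, 1]).astype(int)

chip_a = read_bits(shared_randomness, rng.normal(scale=noise, size=(n_cells, 2)))
chip_b = read_bits(shared_randomness, rng.normal(scale=noise, size=(n_cells, 2)))
stranger = read_bits(rng.normal(size=(n_cells, 2)),          # no shared randomness
                     rng.normal(scale=noise, size=(n_cells, 2)))

print("twin agreement:    ", (chip_a == chip_b).mean())     # close to 1.0
print("stranger agreement:", (chip_a == stranger).mean())   # close to 0.5
```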

"In our case, transistor breakdown has not been modeled well in many of the simulations we had, so there was a lot of uncertainty about how the process would work. Figuring out all the steps, and the order they needed to happen, to generate this shared randomness is the novelty of this work," Lee says.

After fine-tuning their PUF generation process, the researchers developed a prototype pair of twin PUF chips in which the randomization was matched with more than 98 percent reliability. This would ensure the generated PUF key matches consistently, enabling secure authentication.

Because they generated this twin PUF using circuit techniques and low-cost LEDs, the process would be easier to implement at scale than other methods that are more complicated or not compatible with standard CMOS fabrication.

"In the current design, shared randomness generated by transistor breakdown is immediately converted into digital data. Future versions could preserve this shared randomness directly within the transistors, strengthening security at the most fundamental physical level of the chip," Lee says.

"There is a rapidly increasing demand for physical-layer security for edge devices, such as between medical sensors and devices on a body, which often operate under strict energy constraints. A twin-paired PUF approach enables secure communication between nodes without the burden of heavy protocol overhead, thereby delivering both energy efficiency and strong security. This initial demonstration paves the way for innovative advancements in secure hardware design," Chandrakasan adds.

Provided by Massachusetts Institute of Technology 


SAMSUNG


Samsung tightens security to block leaks of already leaked Galaxy S27

The smartphone industry lives in a constant cycle of anticipation, where leaks have become almost as certain as the launch of the devices themselves. If you're a tech enthusiast, you've certainly gotten used to knowing almost every detail of a new phone months before it's officially unveiled. However, Samsung seems to have reached its limit of tolerance. According to recent information, the tech giant is implementing a much more rigorous internal security system, with the clear objective of silencing rumors about the future Galaxy S27 line.

The current landscape is almost entirely dominated by "leakers" (informants), who manage to obtain schematics, photographs of prototypes, and specification lists with an ease that embarrasses major brands. To counteract this trend, Samsung Electronics, along with its subsidiary companies, has introduced a new "secure conversation mode" on its internal communication platforms.

This measure is not just a recommendation of good practices; it is a systematic technical block. With this new protocol, company employees are prevented from copying, pasting, or forwarding work messages. More than that, the system blocks the ability to take screenshots or save conversation history on personal computers. The idea is to create a closed ecosystem where confidential information dies within the messaging application, preventing private conversations about the development of the Galaxy S27 from ending up posted on forums or social networks.

A system shielded against the sharing of secrets...Unlike previous attempts, where the company relied on the ethics or confidentiality agreements of its employees, this new method operates at the software level. If you try to share a technical detail about the new processor or the design of the next flagship through these tools, you will be blocked by the system itself.

This stance reflects growing frustration at the company's headquarters in Seoul. Leaks have become so accurate that "Galaxy Unpacked" events have lost much of their surprise factor. When the presenter takes the stage to show off the new screen or camera capabilities, the audience already knows exactly what to expect, which ends up taking the shine off the launch and the impact of the marketing investment.

Despite Samsung's Herculean effort, there's a question in the air: is it possible to stop the Internet? Ironically, news about these new security measures reached the public through a leak reported by The Korea Herald. This demonstrates that, however protected official chat channels may be, the human factor remains the weakest link in the cybersecurity chain.

Furthermore, the calendar is working against the company. With the launch of the Galaxy S26 line just around the corner, attention naturally begins to turn to the 2027 successor. In fact, rumors are already circulating about the improvements that the Galaxy S27 Ultra may bring, particularly regarding camera hardware and the efficiency of the new processor. If the information has already started circulating among suppliers and component partners, closing the doors now may be the equivalent of locking the barn door after the horse has bolted.

Another critical point is that Samsung does not control the entire production chain. Many of the most detailed leaks do not come from direct Samsung employees, but from component factories, protective case manufacturers, or even local distributors who receive the material weeks before launch. Controlling this global ecosystem is a Herculean task that goes far beyond a simple software update in internal messages.

The impact for technology consumers...For you, as a user and follower of technological news, this change may mean a period of greater mystery. If Samsung succeeds, we may return to the days when presentation events were truly exciting and full of unexpected news. On the other hand, many consumers use these leaks to decide whether to buy the current model or wait for the next one, based on the expected improvements to the screen or battery life.

The truth is that the "cat and mouse" game between tech brands and insiders has just entered a new chapter. Samsung is clearly willing to fight to regain control of its own narrative, but only time will tell if the Galaxy S27 will be able to reach your hands as a true surprise.

The upcoming Galaxy S27 Ultra is starting to paint a worrying picture for consumers' wallets.

According to recent information, the next generation of Qualcomm processors could make the device significantly more expensive.

Commenting on the matter, the well-known leaker Digital Chat Station revealed that we will have two 2-nanometer chips in 2027:

SM8950 - Snapdragon 8 Elite Gen 6

SM8975 - Snapdragon 8 Elite Gen 6 Pro.

For the Galaxy S27 Ultra, Samsung could use the Snapdragon 8 Elite Gen 6 Pro and even partner with Qualcomm for its own production, but this should not "eliminate" the final costs of the chipset.

This is because, if it chooses to maintain the global use of the Snapdragon chip, the company will have to decide between reducing its profit margins or passing this additional cost on to the end consumer, something that could raise the suggested price of the device beyond current levels.

mundophone

Thursday, February 19, 2026

 

TECH


Hot cities, safer buildings: A cooling coating that can also reduce fire risk

An international research team has demonstrated how conventional radiative cooling coatings can be optimized to further reduce building surface temperatures, cutting energy consumption, while also improving fire safety.

Radiative cooling coatings passively lower surface temperatures by reflecting most incoming sunlight, while at the same time emitting heat as infrared radiation (IR) back through the atmosphere. Because more heat leaves than enters, the surface becomes cooler than the surrounding air, helping to reduce indoor temperatures.

Such coatings rely on microscopic silicon dioxide (SiO₂) particles, the same material found in sand and glass, to scatter sunlight and emit heat efficiently. The particles are typically added to a polyurethane (PU) polymer resin to create coatings used on roofs and façades, lowering energy consumption and improving interior comfort.

New research appearing in Nano Materials Science, however, has taken this technology one step further. By engineering the microscopic structure of the particles into a dendritic, or tree-like shape, the team was able to create an improved multifunctional coating.

"Both experimental and simulation results show that the reflectivity of dendritic SiO₂ is much higher than those of solid, hollow or mesoporous SiO2," says Dr. Wei Cai, one of the publication's authors and a Marie Sklodowska Curie Actions postdoctoral fellow at IMDEA Materials Institute.

Specifically, the PU/dendritic SiO₂ composites achieved solar reflectivity of 95.5% and IR emissivity of 94.5%. This, in turn, resulted in daytime temperature reductions of 2°C compared to existing PU coatings, and 7.3°C compared to ambient temperatures.
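As a rough back-of-the-envelope illustration of why those two figures matter, the sketch below plugs the reported 95.5% solar reflectivity and 94.5% IR emissivity into a simplified surface energy balance. The solar irradiance, the gray-body sky model, and the neglect of conduction and convection are assumptions, so the result is only an order-of-magnitude check, not a reproduction of the study's measurements.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(t_surface_k, reflectivity=0.955, emissivity=0.945,
                      solar=1000.0, t_sky_k=285.0):
    """Simplified surface energy balance for a radiative cooling coating.

    Gains: absorbed sunlight plus absorbed atmospheric IR (gray-body sky assumed).
    Losses: thermal emission from the coating.
    Conduction and convection are ignored; a positive result means net heat leaving.
    """
    absorbed_solar = (1.0 - reflectivity) * solar           # only 4.5% of sunlight absorbed
    absorbed_sky = emissivity * SIGMA * t_sky_k ** 4        # downwelling atmospheric IR
    emitted = emissivity * SIGMA * t_surface_k ** 4         # IR emitted by the coating
    return emitted - absorbed_solar - absorbed_sky

# Even at a 30 C surface temperature in full sun, the balance stays positive,
# which is why the coating can settle a few degrees below ambient.
print(f"{net_cooling_power(303.15):.0f} W/m^2")
```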


The coating's improved performance comes from the dendritic structure creating more interfaces, which scatter sunlight more effectively and increase reflectivity. Additionally, the Si-O bonds exhibit high infrared emissivity within the atmospheric window.

These two characteristics endow the material with both high reflectivity and high infrared emissivity, further enhancing the radiative cooling performance and contributing to a reduction in the material's temperature.

"This temperature decrease strongly confirms its potential in decreasing the energy consumption required for building cooling," adds Dr. Cai.

At the same time, incorporating the dendritic SiO₂ spheres into the polymer coating was also demonstrated to significantly increase its fire-safety performance.

Most notably, the material's Peak Heat Release Rate (PHRR) was reduced by 48.4%, lowering its maximum fire intensity by almost half. In a real-world scenario, this could slow fire spread and improve evacuation conditions.

Effectively, the engineered particles serve to increase the viscosity of the polymer as it heats, trapping combustible gases, and forming a protective barrier that slows flame growth and reduces heat release.

This dual behavior addresses a long-standing limitation of radiative cooling materials, which typically neglect fire safety in building applications.

"The results provide a new design strategy for building materials that combine energy efficiency and safety," says Dr. Cai. "This potentially enables passive cooling coatings to be deployed in real buildings, particularly in hot urban environments where both overheating and fire risk are critical concerns."

Provided by IMDEA Materials 


SONY


Sony's bet for 2026 and the challenge of survival

There was a time when carrying a Sony cell phone was a symbol of innovation. From the iconic Sony Ericsson phones to the first Xperia models, the brand helped shape the history of mobile telephony. Two decades later, the scenario is different: fierce competition, modest sales, and little visibility. Still, while many believed it was the end of the line, the company is preparing a new offensive that could redefine its future in the sector.

The trajectory of Sony's mobile division begins in the early 2000s, when Sony Ericsson devices became a benchmark in design and multimedia features. Models like the W800, from the Walkman line, marked an era by transforming the cell phone into a high-quality portable music player.

With the advancement of smartphones and the arrival of the first iPhone, the company reacted by launching the Sony Xperia X1, trying to position itself in the new era of smart devices. The Xperia family has survived over the years, going through transformations in the Android system, changes in strategy, and market repositioning.

But its popularity was never the same again. While Asian rivals aggressively expanded their catalogs with competitive prices and accelerated innovation, Sony adopted a more conservative stance. Its devices maintained niche features — such as a headphone jack and a focus on high-fidelity audio — but ceased to surprise the general public.

The result was an increasingly discreet presence on the shelves and in conversations about the best smartphones on the market.

A complicated present and controversial decisions... The year 2025 was especially difficult for the Xperia line. Unadventurous launches and prices considered high drove consumers away. The Sony Xperia 1 VII, for example, maintained classic elements of the brand's identity, such as inspiration from the Walkman division and the traditional headphone jack.

However, the package did not convince the market. With a starting price of 1,499 euros, the model faced criticism regarding its cost-benefit ratio. To make matters worse, there were reports of problems with some units and sales interruptions in certain countries.

Sales fell short of expectations, reinforcing the perception that Sony had lost relevance in the mobile segment. In a market dominated by giants that invest heavily in marketing and annual innovation, maintaining a catalog without great appeal has become a constant risk.

Given this context, many analysts began to speculate about a possible definitive exit of the company from the smartphone sector. After all, maintaining a division with modest performance may not seem strategic.

But Sony does not seem willing to abandon the game.

For 2026, information is already circulating about two new models: the Sony Xperia 1 VIII, aimed at the premium segment, and the Sony Xperia 10 VIII, intended for a more accessible range. The expectation is that both will maintain the aesthetic and philosophical line of the brand, without radical changes.

To date, technical specifications have not been officially released. However, records in the IMEI database indicate that the devices should be launched not only in the Asian market, but also in Europe.

The big question remains: will there be room for another attempt? The current market is extremely competitive, with manufacturers offering advanced cameras, embedded artificial intelligence, and aggressive pricing.

Sony, in turn, is betting on consistency and fidelity to its own identity. Even without its former prominence, the company believes it still has something to say in the smartphone universe.

It remains to be seen if the public agrees. In a scenario where brands rise and fall rapidly, persisting can be an act of courage—or strategic resistance. 2026 may not represent a revolution for the Xperia line, but it will certainly be a decisive test to see if Sony can still regain relevance in a market that no longer puts it in the spotlight.

Sony's main bet for 2026 is the Xperia 1 VIII, positioned as a super-flagship focused on creators, with a Snapdragon 8 Elite Gen 5 chipset, maintaining the tall design, 4K OLED screen, 3.5mm headphone jack, and focus on cameras with advanced sensors. A powerful compact model, the Xperia 5 V, and the mid-range Xperia 10 VIII are also expected.

Main bets and rumors for the Sony Xperia of 2026:

Sony Xperia 1 VIII (Flagship): Expected with the fifth generation of Snapdragon 8 Elite, seeking to compete in performance with Samsung and Apple. Likely continuation of the partnership with the Alpha line for photography, focusing on improving image processing.

Design and features: Sony should maintain the "tall" format (21:9 aspect ratio), premium build, 3.5mm headphone jack, and SD card slot, making them unique in the high-end market.

Xperia 5 V/VI (Compact): Focused on those seeking a top-of-the-line device in a smaller size, maintaining high-quality camera features in a smaller body.

Mid-range line (10 VIII): Focus on long battery life and ergonomic design, maintaining the niche of mid-sized users.

Availability: Launches follow the mid-2026 schedule, with high prices comparable to the previous generation (€1,499). Availability in North America is expected to remain limited.

mundophone

Wednesday, February 18, 2026

 

TECH


Study finds 'dosed' nonlinearity can beat linear and fully nonlinear AI

Umbrella or sun cap? Buy or sell stocks? When it comes to questions like these, many people today rely on AI-supported recommendations. Chatbots such as ChatGPT, AI-driven weather forecasts, and financial market predictions are based on machine learning-driven sequence models. The quality of these applications therefore depends crucially on the type of sequence model used and how such models can be further optimized.

The linearity and non-linearity of the models play a central role here. Linear sequence models process information according to the principle of proportionality: The response to an input is always directly proportional to its strength, similar to the principle "as the wind, so the wave."

Non-linear models, on the other hand, can map more complex, context-dependent relationships: They can process the same information in completely different ways depending on the situation. A simple example: whether the word "bank" is interpreted as a financial institution or as the side of a river depends on the context, and such context-dependent distinctions cannot be captured by linear models.

Training efficiency plays a decisive role...This ability to process context-dependent information makes nonlinear models so powerful for complex tasks such as language comprehension or pattern recognition. In addition to the quality of the results, training efficiency also plays a decisive role. Both linear models and transformers (the architecture behind the "T" in ChatGPT) allow parallel training, in which large amounts of information can be processed simultaneously, which has made scaling to huge amounts of data possible in the first place.

However, while linear models can be trained economically, training large transformer models is extremely costly and energy-intensive: huge server farms are being built around the world for AI training, resulting in enormous energy consumption. The optimum would be a smart middle ground: a model that takes advantage of parallel training without the enormous costs of fully nonlinear architectures.

How much nonlinearity is effective? The key question is therefore how nonlinearity can be used effectively within sequence models. Scientists at the Ernst Strüngmann Institute in Frankfurt and the Interdisciplinary Center for Scientific Computing at Heidelberg University have found the answer. The key finding of the research is that it is worthwhile to find a sensible balance.

To investigate this systematically, the researchers tested their models on a wide range of tasks, from text classification and image recognition to cognitive benchmarks from computer-assisted neuroscience. This diversity made it possible to distinguish which tasks really require nonlinearity to function and which can already be solved by largely linear processes.

The surprising result: Models with dosed nonlinearity, in which only part of the model (the "neurons" in the neural network) works nonlinearly, outperformed both purely linear and completely nonlinear models in many scenarios. This advantage was particularly evident with limited amounts of data, where the sparse nonlinear models were clearly superior. But they also remained competitive with larger amounts of data. The reason is that the nonlinear units act as flexible switches that switch between different linear processing modes depending on the context.
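The paper's exact architecture is not given here, so the toy recurrent cell below is only a minimal sketch of the "dosed nonlinearity" idea: a chosen fraction of the hidden units passes through a tanh while the rest update purely linearly, letting the dose be swept from fully linear (0.0) to fully nonlinear (1.0). The dimensions, the tanh choice, and the class name are assumptions for illustration.

```python
import numpy as np

class DosedNonlinearRNN:
    """Toy recurrent cell in which only a chosen fraction of hidden units is nonlinear."""

    def __init__(self, n_in, n_hidden, frac_nonlinear, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_hidden, n_in))
        # Recurrent weights scaled below 1 so the linear units give slow, stable memory.
        self.w_rec = rng.normal(scale=0.9 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))
        n_nl = int(round(frac_nonlinear * n_hidden))
        self.nonlinear = np.arange(n_hidden) < n_nl   # first n_nl units get the tanh

    def run(self, inputs):
        h = np.zeros(self.w_rec.shape[0])
        states = []
        for x in inputs:                              # inputs: sequence of input vectors
            pre = self.w_rec @ h + self.w_in @ x
            h = np.where(self.nonlinear, np.tanh(pre), pre)   # dose the nonlinearity
            states.append(h.copy())
        return np.array(states)

# Sweep the "dose" from fully linear to fully nonlinear on the same input sequence.
inputs = np.random.default_rng(1).normal(size=(20, 8))
for frac in (0.0, 0.25, 1.0):
    final = DosedNonlinearRNN(n_in=8, n_hidden=32, frac_nonlinear=frac).run(inputs)[-1]
    print(f"fraction nonlinear {frac:.2f} -> final hidden-state norm {np.linalg.norm(final):.2f}")
```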

Interpretability of dosed nonlinear models...A key advantage of dosed nonlinear models is their interpretability. Because the nonlinearity is limited to a few units, the researchers were able to understand where and how the model uses them. This makes the architecture particularly valuable for neuroscience: when analyzing neural recordings, the models can not only predict behavior, but also reveal the computational principles underlying the brain. In this context, the results show a consistent pattern: memory is often implemented via slow linear dynamics, while computational operations are realized through targeted nonlinear mechanisms.

This means that the researchers are presenting an approach to explaining neuroscientific measurements. In addition, they suggest that when optimizing sequence models in the context of machine learning, the integration of dosed nonlinearity should be considered a generally useful design principle for modern, data-efficient sequence models.

Provided by Max Planck Society


DIGITAL LIFE


How will chatbots and AI-generated content influence the 2026 elections in Brazil?

ChatGPT's response to the question of who to vote for in the 2026 presidential elections is something like "I can't comment." However, the artificial intelligence profiles President Luiz Inácio Lula da Silva (PT) and Senator Flávio Bolsonaro (PL), summarizes electoral polls, lists strengths and weaknesses, and suggests what to consider when making a decision.

The consultation leads to the following conclusion: for those who prioritize "administrative experience and broad social policies," Lula would be the strongest candidate. If the focus is on a "conservative agenda, security, and less state intervention in certain areas," Flávio would be the most aligned.

Other chatbots, such as Gemini and Claude, follow a similar approach. The tools don't suggest a name, but they provide assessments of each candidate, list pros and cons, and propose how the user can reach a conclusion.

While the Superior Electoral Court (TSE) finalizes the rules on the use of artificial intelligence in the 2026 elections, the debate is growing about how the spread of AI could influence the election — whether through chatbots or the advancement of AI-generated content.

The influence of chatbots...Two recent studies show that talking to an AI can be more persuasive than traditional election propaganda.

A study from the University of Oxford, published in December, analyzed this effect in 76,977 people. Before the conversation with the AIs, participants' opinions on politics were measured on a scale of 0 to 100. After the dialogue with the systems, which were programmed to defend certain points, the response was measured again. In the most persuasive scenario, the average of the responses changed by 15.9 points.

Research published in Nature, also in December, with six thousand voters, showed a similar effect. The focus was on the USA, Canada, and Poland.

In the American experiment, pro-Donald Trump and pro-Kamala Harris voters had to interact with systems that advocated for the rival candidate. The conversations altered the participants' preferences by up to 3.9 points on a scale of 0 to 100—about four times the average effect of electoral advertising recorded in the country in the two previous elections. In experiments in Canada and Poland, the change reached about 10 points.

The results do not indicate that conventional AI tools act as campaign workers. But they reveal how persuasive language models can be when they are configured to defend a point of view.

In the 2024 elections, the TSE (Superior Electoral Court) restricted the use of bots posing as candidates to contact voters. It also prohibited deepfakes and mandated warnings about the use of AI in electoral advertising.

This year, organizations such as NetLab UFRJ are also advocating that chatbots be prohibited from endorsing candidates and that electoral advertisements within AI systems be banned.

Trust in AI...In Brazil, one of the concerns is the level of trust that voters have placed in these tools, which operate under an aura of neutrality and authority, although they can make mistakes and reproduce biases.

A recent study by Aláfia Lab shows that 9.7% of Brazilians see AI systems, such as ChatGPT and Gemini, as a source of information. Matheus Soares, coordinator of the laboratory and co-coordinator of the AI in Elections Observatory, says that "confusing and inaccurate" answers can end up being interpreted as real in the electoral context.

In addition to errors, there are doubts about bias, says anthropologist David Nemer, a professor at the University of Virginia. He cites another Oxford study that identified regional distortions, for example. In the analysis, which took into account millions of interactions, ChatGPT attributes less intelligence to Brazilians from the North and Northeast. The risk is that this type of bias will appear in a contest with thousands of candidates that, in addition to the Executive branch, also involve the Legislative branch.

"This is a space where people trust that what is produced is true. But this 'truth' is based on a system whose origin and foundations are opaque," says the researcher. He adds that, unlike the disputes on social media, chatbots are usually seen as "neutral."

Fernando Ferreira, a researcher at UFRJ's Netlab, adds that the presence of AI has expanded on the internet beyond chatbots. Search engines, such as Google, present answers generated by artificial intelligence, while tools like Grok, from X, are used for fact-checking. And the answers are seen as "a source of truth."

In 2024, Google even restricted answers about elections in Gemini. The filter, however, failed -- the AI answered about some candidates in the municipal race, but not about others. In the case of OpenAI, the models can deal with politics, but must be neutral. In an October publication, the company stated that internal tests identified that less than 0.01% of ChatGPT's answers showed signs of political bias.

Gender violence and electoral integrity...Another concern is disinformation, in videos, audio or images, amplified by artificial intelligence. The researchers' interpretation is that the impact of technology on this electoral process should be greater than it was two years ago. One of the reasons is the spread of AI content, which has become more accessible, faster, and more believable.

Nemer says that, in addition to misinformation about candidates, he is concerned about the spread of deepfakes (realistic content generated by artificial intelligence) that question the integrity of the electoral system. He cites, as an example, manipulated videos that simulate failures in electronic voting machines, which could undermine voter confidence.

For Soares, a point of concern is deepnudes (hyperrealistic images that simulate nudity), which were already exploited in 2024 and could intensify gender-based political violence this year. Both expect candidates and supporters to use more AI tools to produce political material.

Two weeks ago, Agência Lupa showed that the share of fake content produced with AI, among the organization's fact-checks, jumped from 4.65% to 25.77% in one year. Almost 45% of the cases had a political bias.

For Laura Schertel, a professor at IDP and UnB, the main challenge for the TSE (Superior Electoral Court) will be the implementation of existing rules. Among the proposals submitted to the Court, the researcher cites the creation of a mandatory compliance plan, in which companies explain in advance how they will apply and monitor electoral rules.

"The TSE is not a digital regulator. So there is a great challenge, which is how to ensure that this court, which issues new rules, has the capacity to implement them," says the lawyer.

mundophone
