Wednesday, December 17, 2025


TECH


Latin American countries tighten rules against Shein and Temu

The Latin American textile industry is waging a regulatory battle against Chinese ultrafast fashion giants like Shein and Temu, which have flooded the regional market with low-priced, rapidly produced clothing. According to a report by the Rest of World website, legislators in several countries are implementing or proposing new regulations to protect local manufacturers from unfair competition.

In Argentina, the impact is especially visible. The Galfione family's textile factory in Buenos Aires, which employs more than 100 people and was once a symbol of Argentina's industrial base, now operates at only 40% of its capacity. "We grew for decades and invested in world-class machinery," Luciano Galfione, president of the ProTejer trade association, told the publication. "Today these plants are museums, with six out of ten machines out of service."

Argentina leads the movement with a dedicated bill...The South American country is at the forefront of the region's anti-Shein push. The local textile industry is proposing legislation that would impose import controls and a flat 30% customs duty on e-commerce packages, seeking to level the playing field. The initiative has gained multi-party support in Congress.

"We are not afraid to compete, but it has to be on equal terms," ​​said Galfione. "When I sell a T-shirt online from my factory, I pay every imaginable tax. Shein sells the same way and pays none."

Senator Miguel Ángel Pichetto, who introduced the bill in December, wrote on social media in August that "we have to put an end to these indiscriminate opening policies that will destroy the national industry and leave thousands of Argentinians unemployed."

Explosion of ultrafast fashion in the region...The numbers illustrate the scale of the transformation. Between the end of 2022 and the end of 2023, Shein launched 1.5 million new products, compared to approximately 40,000 by Zara and 23,000 by H&M, according to Rest of World. In the first half of 2025, Temu's monthly active users in Latin America skyrocketed 143% compared to the previous year, reaching 105 million, according to market intelligence firm Sensor Tower.

In Argentina, the phenomenon gained momentum after changes implemented by President Javier Milei in 2024. The government reduced import restrictions, cut tariffs to 20%, eliminated licenses, and raised the tax exemption limit for door-to-door imports from US$50 to US$400 per package. The measure triggered an avalanche of online deliveries.

Argentine textile production plummeted by more than 20% last year, while cheap imports increased. The country's textile and apparel industry employs nearly 300,000 people.

Regulatory wave spreads across the continent...The Argentine movement is part of a coordinated regional response. Textile trade associations in Brazil and Mexico are coordinating similar efforts. Mexico recently raised tariffs on small packages from China to 33.5%, while Chile is moving toward applying a 19% value-added tax on low-cost imports. Ecuador began implementing a $20 tax on small packages in June.

The Argentine proposal mirrors the new French ultrafast fashion law, which adds a progressive "eco tax" and requires labels to disclose important environmental information. The bill stipulates that imports from Shein and Temu will undergo inspections verifying that the fabrics are non-toxic and environmentally safe, in addition to being subject to standard import tariffs and taxes.

Global regulatory context...Latin America joins a global movement against ultrafast fashion. In October, the French Senate approved a bill that will sanction Asian fast fashion companies based on their environmental impact. Last year, Indonesia lowered the threshold below which goods are exempt from import tariffs from $100 to $3, while South Africa began taxing small packages below $27. In August, the US eliminated its $800 tariff exemption, meaning that even the smallest imports now face tariffs.

Studies show that many items from ultrafast fashion brands like Shein are worn only a few times before being discarded. Investigations have revealed grueling working conditions in supplier factories and risks of significant environmental damage linked to ultra-cheap production.

Shein has disputed allegations of labor and environmental abuses, arguing that the reports often rely on "small and unrepresentative samples" that "do not convey the reality of Shein as an organization." The company said its regular supplier audits have shown "consistent improvement" in compliance and stated that factory workers at its Chinese suppliers earn, on average, more than double the local minimum wage. When the French Senate moved forward with its fast fashion legislation, Shein countered that it is not "a fast fashion company," but a technology-driven retailer that is "part of the solution."

by Diogo Rodriguez—Journalist and social scientist, with experience in economics, finance, science, technology, and other subjects. He is a fellow of the Tow-Knight Center for Entrepreneurial Journalism (CUNY).


DIGITAL LIFE


Grok spews misinformation about deadly Australia shooting

Elon Musk's AI chatbot Grok churned out misinformation about Australia's Bondi Beach mass shooting, misidentifying a key figure who saved lives and falsely claiming that a victim staged his injuries, researchers said Tuesday. The episode highlights how chatbots often deliver confident yet false responses during fast-developing news events, fueling information chaos as online platforms scale back human fact-checking and content moderation.

The attack during a Jewish festival on Sunday in the Sydney beach suburb of Bondi was one of Australia's worst mass shootings, leaving 15 people dead and dozens wounded. Among the falsehoods Grok circulated was its repeated misidentification of Ahmed al Ahmed, who was widely hailed as a Bondi Beach hero after he risked his life to wrest a gun from one of the attackers.

In one post reviewed by AFP, Grok claimed the verified clip of the confrontation was "an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it," suggesting it "may be staged."

Citing credible media sources such as CNN, Grok separately misidentified an image of Ahmed as that of an Israeli hostage held by the Palestinian militant group Hamas for more than 700 days.

When asked about another scene from the attack, Grok incorrectly claimed it was footage from tropical "Cyclone Alfred," which brought heavy weather to the Australian coast earlier this year.

Only after another user pressed the chatbot to reevaluate its answer did Grok backpedal and acknowledge the footage was from the Bondi Beach shooting.

When reached for comment by AFP, Grok developer xAI responded only with an auto-generated reply: "Legacy Media Lies."

'Crisis actor'...The misinformation underscores what researchers say is the unreliability of AI chatbots as a fact-checking tool. Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities.

In the aftermath of the Sydney attack, online users circulated an authentic image of one of the survivors, falsely claiming he was a "crisis actor," disinformation watchdog NewsGuard reported.

Crisis actor is a derogatory label used by conspiracy theorists to allege that someone is deceiving the public—feigning injuries or death—while posing as a victim of a tragic event.

Online users questioned the authenticity of a photo of the survivor with blood on his face, sharing a response from Grok that falsely labeled the image as "staged" or "fake."

NewsGuard also reported that some users circulated an AI image—created with Google's Nano Banana Pro model—depicting red paint being applied on the survivor's face to pass off as blood, seemingly to bolster the false claim that he was a crisis actor.

Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues to establish authenticity.

But they caution that they cannot replace the work of trained human fact-checkers.

In polarized societies, however, professional fact-checkers often face accusations of liberal bias from conservatives, a charge they reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

© 2025 AFP

Tuesday, December 16, 2025


TECH


A step forward in the practical application of non-terrestrial networks for beyond 5G/6G

The National Institute of Information and Communications Technology (NICT) has successfully demonstrated 2 Tbit/s Free-Space Optical (FSO) communication using small optical communication terminals that can be mounted on satellites and high-altitude platform stations (HAPS), marking a world first for this technology.

This experiment involved horizontal free-space optical communication between two types of small portable optical terminals developed by NICT: a high-performance FX (Full Transceiver) installed at NICT Headquarters (Koganei, Tokyo) and a simplified ST (Simple Transponder) installed at an experimental site 7.4 km away (Chofu, Tokyo).

NICT's 7.4 km, 2 Tbit/s horizontal propagation experiment (April 2025). The ST terminal was used as the transmitter and the FX terminal as the receiver. The terminals exchanged pseudo-random binary sequences (PRBS) to evaluate line quality. A transmission speed of 2 Tbit/s is equivalent to sending approximately 10 full-size 4K UHD movies per second. Credit: National Institute of Information and Communications Technology (NICT)

Despite the difficult conditions of an urban environment with atmospheric turbulence that disrupts laser beams, the system maintained a stable total communication speed of 2 Tbit/s via Wavelength Division Multiplexing (WDM) transmission of 5 channels (400 Gbit/s each).
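As a quick back-of-the-envelope check of those figures (the channel count and per-channel rate come from the experiment described above; the roughly 25 GB assumed for a "full-size 4K UHD movie" is purely an illustrative assumption):

    channels = 5
    per_channel_gbps = 400                     # Gbit/s per WDM channel (from the article)
    aggregate_gbps = channels * per_channel_gbps
    print(f"Aggregate rate: {aggregate_gbps} Gbit/s = {aggregate_gbps / 1000:.0f} Tbit/s")

    movie_gbytes = 25                          # assumed size of one 4K UHD movie (illustrative)
    movies_per_second = aggregate_gbps / (movie_gbytes * 8)
    print(f"Roughly {movies_per_second:.0f} such movies could be sent per second")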

This is the first time in the world that terabit-class communication has been realized using terminals miniaturized enough to be mounted on satellites or HAPS.

Moving forward, NICT plans to further miniaturize the terminals for implementation onboard a 6U CubeSat. NICT aims to conduct free-space optical communication demonstrations at speeds of up to 10 Gbit/s between a low Earth orbit (LEO) satellite (altitude approx. 600 km) and the ground in 2026, and between a satellite and HAPS in 2027.

Through these experiments, NICT will demonstrate compact, ultra-high-speed data communication capabilities and pave the way for the realization of Beyond 5G/6G Non-Terrestrial Networks (NTN).

Free-Space Optical (FSO) communication, which transmits laser light through space without optical fibers, is attracting attention as a fundamental technology supporting high-capacity communication between the ground, the sky, and space.

ST and FX free-space optical communication terminals developed by NICT. Credit: National Institute of Information and Communications Technology (NICT)

While demonstrations of FSO exceeding Tbit/s speeds have been advancing, primarily in Europe, previous experiments utilized large, stationary equipment in laboratory-style configurations.

These configurations face challenges in meeting size and weight constraints for mounting on mobile platforms such as satellites or HAPS and in maintaining stable communication in fluctuating environments. Furthermore, in Asia, there have been no reports of FSO demonstrations exceeding the terabit level, with speeds reported to reach at most around 100 Gbit/s.

Achievements...The terminals used in this demonstration were designed for integration into microsatellites, including CubeSats. They meet size and weight constraints, distinguishing them from conventional laboratory-style configurations that use large stationary equipment.

To achieve miniaturization, NICT strictly adhered to a design policy that fits within the severe Size, Weight, and Power (SWaP) constraints of CubeSats. NICT implemented three approaches:

Development of custom-designed components (e.g., a 9 cm-class telescope meeting optical quality requirements for the space environment).

Redesign and modification of commercial components (e.g., a miniaturized fine steering mirror improved to handle high-power laser beams in a vacuum).

Active utilization of existing components (e.g., repurposing high-speed optical transceivers for data centers and incorporating them into modems).

2 Tbit/s compact modem prototype developed by NICT. Credit: National Institute of Information and Communications Technology (NICT)

By implementing these approaches, NICT was able to reduce the size, weight, and power consumption of the entire device while maintaining all the required functions and minimizing the burden on the platform.

Additionally, to handle dynamic environments assumed for mobile operation, NICT implemented high-precision alignment using coarse acquisition and fine tracking. NICT also implemented its proprietary Beam Divergence Control (BDC) technology, which dynamically adjusts the laser beam divergence according to link conditions.

This design, enabling stable communication in mobile environments, is a key feature of these terminals and distinguishes them from conventional fixed-station experimental equipment.

Furthermore, the developed terminals allow for flexible selection of configuration (ST or FX) and modem type (10 Gbit/s type or 100 Gbit/s type) based on communication requirements, as well as adaptive operation according to link conditions via internal adjustment functions.

This demonstration was achieved by overcoming technical challenges for mounting on mobile platforms—such as miniaturization of optical systems and high-precision, flexible beam control—through the development of novel functions such as variable transmission speeds and variable beam widths tailored to the communication environment. It represents a significant step toward the practical realization of Beyond 5G/6G Non-Terrestrial Networks.

Future prospects...As the next step, NICT is preparing for a new experimental campaign in 2026 in which the small optical communication terminals (ST and FX) will be mounted on mobile platforms to simulate realistic links involving satellites and HAPS.

In these experiments, NICT plans to verify the performance of the coarse acquisition and tracking system and the fine tracking system while both communicating terminals are in motion, demonstrating the feasibility of a multi-terabit optical backbone under dynamic conditions for Non-Terrestrial 6G Networks.

Simultaneously, NICT is working on a CubeSat mission scheduled for launch in 2026, aiming to verify a gimbal-less FX terminal (called CubeSOTA) combined with a 10 Gbit/s modem in orbit.

While the CubeSat form factor cannot yet accommodate the power and volume of a 2 Tbit/s modem, NICT is proceeding with the miniaturization and environmental hardening of multi-Tbit/s modems for future on-orbit demonstrations. NICT aims to realize optical communication links in the multi-Tbit/s range between satellites, HAPS, and ground stations within the next 10 years.

Provided by National Institute of Information and Communications Technology (NICT)


DIGITAL LIFE


When AI learns the why, it becomes smarter—and more responsible

Which headline are you more likely to click on? 

Headline A: "Stocks Plunge Amid Global Fears."

Headline B: "Markets Decline Today."

Online publications frequently test headline options like this in what's called an A/B test. In this case, a publication shows headline A to half of its readers, headline B to the other half, then measures which receives more clicks.
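In code, the comparison behind such a test can be sketched as follows (a minimal illustration with made-up impression and click counts, not data from any real experiment): the publication estimates each headline's click-through rate and checks whether the gap is larger than chance alone would explain.

    import math

    # Hypothetical A/B test counts (illustrative only, not real data).
    impressions_a, clicks_a = 10_000, 520   # headline A
    impressions_b, clicks_b = 10_000, 455   # headline B

    ctr_a = clicks_a / impressions_a
    ctr_b = clicks_b / impressions_b

    # Two-proportion z-test: is the difference in CTR larger than chance would explain?
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (ctr_a - ctr_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided

    print(f"CTR A = {ctr_a:.2%}, CTR B = {ctr_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")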

Marketers have long used A/B tests to determine what drives engagement. Generative AI is now positioned to accelerate the process, automating the tests and iterating rapidly on headlines—or any other content—to optimize outcomes like click-through rates. But sometimes, according to Yale SOM's Tong Wang and K. Sudhir, simply knowing what works and shaping content accordingly leads to bad outcomes.

"After fine-tuning an LLM—such as GPT-5—on A/B test data, it may conclude that the winning strategy is simply to use words like 'shocking' as often as possible, essentially producing clickbait," Sudhir says.

"The model is exploiting superficial correlations in the data. Our idea was: if the AI can develop a deeper understanding of why things work—not just what works—would that knowledge help it avoid these shallow patterns and instead generate content that is more robust and meaningful?"

Wang and Sudhir, working with pre-doctoral research associate Hengguang Zhou, used an LLM designed to generate competing hypotheses about why one headline is more engaging than another. The model then tested these hypotheses against the full dataset to see which ones generalized broadly. Through repeated rounds of this process, the LLM converged on a small set of validated hypotheses grounded not in superficial correlations but in deeper behavioral principles.

This method mirrors how researchers develop knowledge: starting with abduction, where a small set of observations sparks potential explanations, and then moving to induction, where those explanations are tested on a broader sample to see which ones hold. The team believed that this knowledge-guided approach would allow the LLM to boost engagement without tricking readers—teaching it to write headlines people click on because they are genuinely interesting and relevant, not because they rely on superficial clickbait cues.

For their new study, they set out to test and refine this approach. They started with 23,000 headlines, describing 4,500 articles, from the online media brand Upworthy, which is focused on positive stories. The publication had already run A/B tests on all of these headlines, so the researchers knew which headlines would induce more readers to click through.

The team began by giving the LLM various subsets of articles and their associated headlines, along with their click-through rates. Using this information, the model generated a set of hypotheses about why one headline might be more compelling than another. After forming these hypotheses, the researchers asked the LLM to generate new headlines for a larger sample of articles, systematically varying the hypotheses used. They then evaluated the quality of each generated headline with a pre-trained scoring model built on Upworthy's A/B-test results.

This process allowed the team to identify the combination of hypotheses—or the "knowledge"—that consistently improved headline quality. Once this knowledge was extracted, they fine-tuned the LLM to write headlines that maximize click-through rates while being guided by the validated hypotheses. In other words, the model learned not only to optimize for engagement, but to do so for the right underlying reasons.
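A rough schematic of that loop is sketched below. The interfaces generate_hypotheses, write_headline, and scorer are hypothetical stand-ins for the LLM prompts and the pre-trained Upworthy scoring model described above, not the authors' actual code.

    import random
    from itertools import combinations

    def extract_validated_knowledge(articles, llm, scorer, n_hypotheses=5):
        """Schematic sketch of the abduction-then-induction loop described above.
        llm.generate_hypotheses, llm.write_headline, and scorer are hypothetical
        interfaces, not the authors' actual implementation."""
        # Abduction: propose explanations from a small subset of A/B-tested headlines.
        seed_sample = random.sample(articles, 50)
        hypotheses = llm.generate_hypotheses(seed_sample, n=n_hypotheses)

        # Induction: systematically vary which hypotheses guide generation and
        # keep the combination that scores best on a larger sample of articles.
        eval_sample = random.sample(articles, 500)
        best_combo, best_score = (), float("-inf")
        for k in range(1, n_hypotheses + 1):
            for combo in combinations(hypotheses, k):
                headlines = [llm.write_headline(a, guidance=combo) for a in eval_sample]
                mean_score = sum(scorer(h) for h in headlines) / len(headlines)
                if mean_score > best_score:
                    best_combo, best_score = combo, mean_score

        # The validated hypotheses then guide fine-tuning of the headline model.
        return list(best_combo)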

"A headline should be interesting enough for people to be curious, but they should be interesting for the right reasons—something deeper than just using clickbait words to trick users to click," Wang says.

"The problem with the standard approach of fine-tuning an AI model is that it focuses narrowly on improving a metric, which can lead to deceptive headlines that ultimately disappoint or even annoy readers. Our point is that when an LLM understands why certain content is more engaging, it becomes much more likely to generate headlines that are genuinely better, not just superficially optimized."

The researchers tested the results of their model with about 150 people recruited to judge the quality of headlines from three different sources: the original Upworthy headlines (written by people), headlines generated by standard AI, and then headlines generated by the new framework. They found that human-generated and standard AI headlines performed about equally well, chosen as the best one roughly 30% of the time. The new model ranked best 44% of the time.

When participants were asked about their choices, many noted that the traditional AI model created "catchy" headlines that evoked curiosity but resembled clickbait, which made them wary. An analysis of the language used in the headlines—comparing word choice from the traditional AI with the new model—corroborated this skepticism, revealing that the standard AI model did, in fact, rely much more heavily on sensational language.

"Importantly, the potential for this work is not simply about realizing better content generation," Wang says; what makes it even more consequential is how the content generation was improved: by teaching an LLM to generate its own hypotheses. "The fact that this can propose hypotheses from a small set of data allows it to generate new theories and, ideally, improve our understanding of the world."

Sudhir points to ongoing work with a company on developing personalized AI coaching for customer service agents. If some interactions lead to better outcomes than others, the new framework could review scripts from customer interactions and generate hypotheses about why one approach is superior to others; after validation, that knowledge could be used to offer agents personalized advice on how to do better.

"In many social science problems, there is not a well-defined body of knowledge," Sudhir says. "We now have an approach that can help discover it."

The input data needn't be textual, either; it could be audio or visual. "In a larger sense, this is not just about better headlines—it's about accelerating knowledge generation. As it turns out, knowledge-guided AI is also more responsible and trustworthy."

Provided by Yale University

Monday, December 15, 2025

 

TECH


Perfect atomic layers pave the way for the next generation of quantum chips

For decades, progress in electronics has been linked to the miniaturization of components. Increasingly smaller transistors have enabled faster, more efficient, and cheaper chips. However, this strategy is reaching a delicate physical limit. When devices reach the atomic scale, almost invisible imperfections begin to seriously compromise performance. In technologies such as quantum computing, these defects can be simply fatal.

It is in this context that the recent advance by a group of researchers from South Korea gains relevance. For the first time, it was possible to manufacture atomic layers of a semiconductor continuously, virtually without flaws, and in a size compatible with industrial production.

The center of the discovery is molybdenum disulfide, known as MoS₂. It is a two-dimensional material, with a thickness equivalent to a single atom — more than a hundred times thinner than a human hair.

For years, MoS₂ has sparked interest because, unlike graphene, it is a "complete" semiconductor: it allows for controlled switching of electrical current on and off, something essential for transistors. The problem has always been manufacturing. Producing large areas of this material, uniform and without structural defects, seemed unfeasible outside the laboratory.

Microscopic defects, giant impacts...On an atomic scale, small flaws make a huge difference. In MoS₂, defects usually arise at the boundaries between crystalline domains. Although invisible to the naked eye, these imperfections interrupt the movement of electrons and destroy fundamental quantum properties.

For quantum chips, this means noise, loss of coherence, and processing errors. Eliminating these defects required something beyond point adjustments: it was necessary to control the positioning of atoms during the growth of the material.

The solution came from improving the so-called van der Waals epitaxy, applied to a special type of slightly inclined sapphire, known as a vicinal substrate. At the atomic level, this surface exhibits natural “steps” that act as invisible guides.

These steps orient the MoS₂ atoms during growth, forcing a more ordered organization. With precise control of temperature, pressure, and deposition, the researchers were able to form continuous, uniform, and virtually perfect monolayers in areas the size of a silicon wafer.

When the material proves its worth...Definitive validation came from electronic tests. The produced layers exhibited coherent quantum transport, with signs of phenomena such as weak localization and early indications of the quantum Hall effect. This indicates that electrons can move without losing their quantum phase—something essential for stable quantum chips.

In addition, the material exhibited high electron mobility. To demonstrate practical viability, the researchers fabricated complete arrays of transistors, which functioned efficiently at room temperature, close to the material's theoretical limits.

Why this matters for the future...Quantum computing requires extremely stable materials, and every defect is a potential source of error. A two-dimensional semiconductor, free of imperfections and capable of being manufactured on a large scale, removes one of the biggest bottlenecks in the sector.

More than a one-off breakthrough, the method can be adapted to other two-dimensional materials, expanding its impact on sensors, advanced memories, and low-power electronics. It doesn't mean perfect quantum chips tomorrow, but it shows that precise atomic manufacturing is already an industrial reality—and no longer just a scientific promise.

Key Atomic Layer Technologies:

Perfect Semiconductors: Researchers are producing continuous layers of semiconductors with the thickness of a single atom, with minimal defects, increasing the stability of qubits.

Artificial Atoms (Quantum Dots): Use electrons in silicon chips to create "atoms" that act as qubits, improving reliability compared to single-electron qubits.

Superconducting Qubits: Circuits made of materials such as aluminum, niobium, or tantalum, deposited on substrates (silicon/sapphire), which become superconductors at cryogenic temperatures, forming qubits in resonators.

Majorana Qubits: Nanowires formed by indium arsenide and aluminum that, at very low temperatures, generate quasiparticles (Majoranas) that store quantum information.

Ion Traps: Silicon chips with electrodes and waveguides (optical wiring) that use lasers to trap and manipulate individual ions, forming stable and scalable qubits.

Challenges and Requirements:

Stability: Qubits are extremely sensitive to vibrations, electromagnetic noise, and heating, requiring isolation and extreme cooling (near absolute zero).

Control: Precise manipulation of quantum states with lasers or microwaves for quantum operations.

Scalability: Industrial fabrication of perfect layers and large-scale control are crucial for practical quantum computers.

These approaches, combining the microfabrication of classical chips with new materials and precise atomic manipulation, are the basis for the next generation of quantum computing.

mundophone

 

DIGITAL LIFE


Tech-savvy users have the most digital concerns, study finds

Digital concerns around privacy, online misinformation, and work-life boundaries are highest among highly educated, Western European millennials, finds a new study from researchers at UCL and the University of British Columbia.

The research, published in Information, Communication & Society, also found individuals with higher levels of digital literacy are the most affected by these concerns.

Study methodology and data sources...For the study, the researchers used information from the European Social Survey (ESS)—a project that collects nationally representative data on public attitudes, beliefs, and behavior from thousands of people across Europe every two years.

They analyzed responses from nearly 50,000 people in 30 countries between 2020 and 2022.

For the ESS, participants were asked how much they thought digital tech infringes on privacy, helps spread misinformation, and causes work-life interruptions. Combining responses to the questions into a single index, the researchers generated a digital concern scale, ranging from 0 to 1, where a higher score indicates greater concern.
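A minimal sketch of how such an index might be constructed is shown below; the 0-10 response scale and equal weighting of the three items are illustrative assumptions, not the exact ESS coding used in the study.

    def digital_concern_index(privacy, misinformation, work_life, scale_max=10):
        """Combine three concern items into a single 0-1 index.
        The 0-10 response scale and equal weighting are illustrative
        assumptions, not the study's exact coding."""
        items = (privacy, misinformation, work_life)
        return sum(items) / (len(items) * scale_max)

    # Example: a respondent fairly worried about privacy and misinformation,
    # less so about work-life interruptions.
    print(digital_concern_index(privacy=8, misinformation=7, work_life=4))  # ~0.63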

To establish their digital literacy and digital exposure, the respondents were asked how often they use the internet and to rate their familiarity with preference settings on a computer, advanced search on the internet, and using PDFs. At the country level, digital exposure was captured through the percentage of the population using the internet in each country.

The researchers looked at the levels of concern across different countries, as well as how the concern varies across social groups. They also looked at patterns relating to people's digital literacy and their exposure to digital tech.

Key findings on digital concerns...They found millennials (those aged 25–44 in 2022) reported greater concerns, compared to younger (15–24) and older adults (75+). They found no significant differences in the level of digital concerns between men and women, nor between income groups or between urban and rural residents.

Across the board, people were more concerned about the potential harms of digital technologies than not. Bulgaria was the only country in the study that did not exceed the mid-point (0.5) on the digital concern scale (0–1). Of all the countries studied, digital concern was lowest in Bulgaria (with a score of 0.47) and highest in the Netherlands (0.74), followed by the UK (0.73).

Compared with native-born citizens, migrants reported lower levels of digital concern, and those who were in work had a lower level of digital concerns than those out of work.

People with middle/high school education and those with a university degree reported greater levels of worry compared to their peers with no education or only primary school education.

The researchers found that those with greater tech know-how are more concerned about the negative impacts of digitalization, but this association is only observed among people who use digital technology on most days or on a daily basis.

Interpretation and expert commentary...The findings suggest that individuals may perceive the potential harms of digitalization as something that is beyond their control. So, the more they know about and are exposed to the issues, the more powerless and concerned they may feel.

Lead author Dr. Yang Hu (UCL Social Research Institute) said, "Our findings call into question the assumption that greater exposure to the digital world reduces our concern about its potential harm.

"Rather than becoming desensitized, greater use of digital technology seems to heighten our concerns about it, particularly among people who have a high level of digital literacy.

"Anxieties about digitalization have become a defining feature of today's world. As our use and understanding of technology grows, concern about its potential harm can impact individuals' mental health and quality of life, as well as wider societal well-being.

"As businesses, governments, and societies embrace new technologies, tech has become ubiquitous and digital literacy is essential for most people. The rapid development of AI is undoubtedly accelerating this process, so digital concern is not an issue that can be ignored."

Co-author Dr. Yue Qian (University of British Columbia, Canada) said, "Our results reveal dual paradoxes: those who are supposedly most vulnerable to digital harms—young people, older adults, and those with a low level of digital literacy—appear least concerned about the harms, while those with advanced digital skills report the most concern.

"While mainstream efforts at improving digital literacy have focused on bolstering practical skills, authorities should not ignore people's concerns about what rapid digitalization means for the subjective well-being of individuals and societies."

Provided by University College London

Sunday, December 14, 2025


TECH


'Periodic table' for AI methods aims to drive innovation

Artificial intelligence is increasingly used to integrate and analyze multiple types of data formats, such as text, images, audio and video. One challenge slowing advances in multimodal AI, however, is the process of choosing the algorithmic method best aligned to the specific task an AI system needs to perform.

Scientists have developed a unified view of AI methods aimed at systematizing this process. The Journal of Machine Learning Research published the new framework for deriving algorithms, developed by physicists at Emory University.

"We found that many of today's most successful AI methods boil down to a single, simple idea—compress multiple kinds of data just enough to keep the pieces that truly predict what you need," says Ilya Nemenman, Emory professor of physics and senior author of the paper.

"This gives us a kind of 'periodic table' of AI methods. Different methods fall into different cells, based on which information a method's loss function retains or discards."

An AI system's loss function is a mathematical equation that measures the error rate of the model's predictions. During training of an AI model, the goal is to minimize its loss by adjusting the model's parameters, using the error rate as a guide for improvement.
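As a generic illustration of what that means in practice (a minimal sketch using an ordinary squared-error loss on a linear model and a single gradient step, not the multimodal losses discussed in the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))            # inputs
    y = X @ np.array([1.0, -2.0, 0.5])       # targets produced by a known rule

    w = np.zeros(3)                          # model parameters to be learned

    def loss(w):
        return np.mean((X @ w - y) ** 2)     # mean squared error of the predictions

    grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of the loss w.r.t. the parameters
    w = w - 0.1 * grad                       # one gradient-descent step reduces the loss
    print(f"loss before: {loss(np.zeros(3)):.3f}  after one step: {loss(w):.3f}")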

"People have devised hundreds of different loss functions for multimodal AI systems and some may be better than others, depending on context," Nemenman says. "We wondered if there was a simpler way than starting from scratch each time you confront a problem in multimodal AI."

A unifying framework...The researchers developed a unifying mathematical framework for deriving problem-specific loss functions, based on what information to keep and what information to throw away. They dubbed it the Variational Multivariate Information Bottleneck Framework.

"Our framework is essentially like a control knob," says co-author Michael Martini, who worked on the project as an Emory postdoctoral fellow and research scientist in Nemenman's group. "You can 'dial the knob' to determine the information to retain to solve a particular problem."

"Our approach is a generalized, principled one," adds Eslam Abdelaleem, first author of the paper. Abdelaleem took on the project as an Emory Ph.D. candidate in physics before graduating in May and joining Georgia Tech as a postdoctoral fellow.

"Our goal is to help people to design AI models that are tailored to the problem that they are trying to solve, while also allowing them to understand how and why each part of the model is working," he says.

AI-system developers can use the framework to propose new algorithms, to predict which ones might work, to estimate the needed data for a particular multimodal algorithm, and to anticipate when it might fail.

"Just as important," Nemenman says. "It may let us design new AI methods that are more accurate, efficient and trustworthy."

Eslam Abdelaleem led the work as an Emory graduate student. The day of the final breakthrough, the AI health tracker on his watch recorded his racing heart as three hours of cycling. “That’s how it interpreted the level of excitement I was feeling,” Abdelaleem says. Credit: Barbara Conner

A physics approach...The researchers brought a unique perspective to the problem of optimizing the design process for multimodal AI systems.

"The machine-learning community is focused on achieving accuracy in a system without necessarily understanding why a system is working," Abdelaleem explains. "As physicists, however, we want to understand how and why something works. So, we focused on finding fundamental, unifying principles to connect different AI methods together."

Abdelaleem and Martini began this quest—to distill the complexity of various AI methods to their essence—by doing math by hand.

"We spent a lot of time sitting in my office, writing on a whiteboard," Martini says. "Sometimes I'd be writing on a sheet of paper with Eslam looking over my shoulder."

The process took years, first working on mathematical foundations, discussing them with Nemenman, trying out equations on a computer, then repeating these steps after running down false trails.

"It was a lot of trial and error and going back to the whiteboard," Martini says.

Doing science with heart...They vividly recall the day of their eureka moment. They had come up with a unifying principle that described a tradeoff between compression of data and reconstruction of data.

"We tried our model on two test datasets and showed that it was automatically discovering shared, important features between them," Martini says. "That felt good."

As Abdelaleem was leaving campus after the exhausting, yet exhilarating, final push leading to the breakthrough, he happened to look at his smartwatch. It uses an AI system to track and interpret health data, such as his heart rate. The AI, however, had misunderstood the meaning of his racing heart throughout that day.

"My watch said that I had been cycling for three hours," Abdelaleem says. "That's how it interpreted the level of excitement I was feeling. I thought, 'Wow, that's really something!' Apparently, science can have that effect."

Applying the framework...The researchers applied their framework to dozens of AI methods to test its efficacy.

"We performed computer demonstrations that show that our general framework works well with test problems on benchmark datasets," Nemenman says. "We can more easily derive loss functions, which may solve the problems one cares about with smaller amounts of training data."

The framework also holds the potential to reduce the amount of computational power needed to run an AI system.

"By helping guide the best AI approach, the framework helps avoid encoding features that are not important," Nemenman says. "The less data required for a system, the less computational power required to run it, making it less environmentally harmful. That may also open the door to frontier experiments for problems that we cannot solve now because there is not enough existing data."

The researchers hope others will use the generalized framework to tailor new algorithms specific to scientific questions they want to explore.

Meanwhile, they are building on their work to explore the potential of the new framework. They are particularly interested in how the tool may help to detect patterns of biology, leading to insights into processes such as cognitive function.

"I want to understand how your brain simultaneously compresses and processes multiple sources of information," Abdelaleem says. "Can we develop a method that allows us to see the similarities between a machine-learning model and the human brain? That may help us to better understand both systems."

Provided by Emory University 
