Wednesday, April 1, 2026


DIGITAL LIFE


Who is using differential privacy? A new registry aims to make it visible

When Apple identifies trending emojis, or when Google reports traffic at a busy restaurant, they're analyzing large datasets made up of individual people's data. Those people's personal information is systematically protected thanks in large part to research by Harvard computer scientists. Now, after two decades of work on the cryptography-adjacent mathematical framework known as differential privacy, researchers in the John A. Paulson School of Engineering and Applied Sciences have reached a key milestone in moving privacy best practices from academia into real-world applications.

A team led by Salil Vadhan, the Vicky Joseph Professor of Computer Science and Mathematics at SEAS, has launched the Differential Privacy Deployments Registry, a collaborative, shared database of companies and agencies actively using the highly rigorous data-protection scheme that first entered the academic literature in 2006. The theoretical privacy-protection framework has since seen growing popularity among large companies and organizations that handle sensitive information. The new database should enable even more adoptions and refinements.

"There's real societal value that differential privacy has the potential to provide, but only if we can make it easy and effective enough for people to adopt," said Vadhan, who, in 2019, co-founded the community project OpenDP, which develops open-source tools for deploying differential privacy. OpenDP emerged from a preceding National Science Foundation-supported research initiative at Harvard called the Privacy Tools Project and is led by Vadhan and Gary King, Albert J. Weatherhead III University Professor at Harvard.

The 2006 paper that described the foundational theory behind differential privacy was first authored by Cynthia Dwork, Gordon McKay Professor of Computer Science at SEAS, in collaboration with Frank McSherry, Kobbi Nissim and Adam Smith. Dwork's research in cryptography and privacy was recently awarded the National Medal of Science.

Since that time, the theoretical framework has moved into diverse real-world applications, propelled by the U.S. government's high-profile deployment of the technology on U.S. Census Bureau data in 2020. Thanks to the protections afforded by differentially private algorithms, respondents who provided personal information to the government enjoyed an extra guarantee of privacy.

The National Institute of Standards and Technology, a government agency that plays a central role in developing guidelines for information security and privacy technology across the United States, has proposed hosting the new public registry, with a final decision pending.

A resource for the DP community...Billed as a resource hub for the differential privacy community to support broader understanding and communication across sectors, the new database should not only help create new users of differential privacy but also help legal and policy teams better understand existing uses. Current deployments in the database include large companies like Apple and Microsoft as well as government agencies like the National Statistics Office of Korea, which have self-reported their differential privacy deployments.

Key insights into how to design the registry came from a 2025 research study led by Priyanka Nanayakkara, a postdoctoral researcher in Vadhan's lab, who joined Harvard in 2024 with plans to develop the registry. The research has been accepted for publication by the IEEE Symposium on Security and Privacy (SP2026) and is available on the arXiv preprint server. Together, Nanayakkara, Ph.D. student Elena Ghazi, and Vadhan developed a research prototype of the registry and conducted a user study with practitioners to learn how they might use the registry in their work.

During the research process, they worked with collaborators on the OpenDP team and at Oblivious, an Ireland-based data privacy company, to incorporate their research into a live version of the registry initially started by Oblivious a year prior.

"We said: How can we build the registry concept out into an interactive interface so that it's usable by practitioners? Longer term, it would be great to further develop the registry to be usable by policymakers and data subjects. For example, if you are contributing your personal data for model training or analysis, wouldn't it be great to be able to use the registry to see how your data has been protected?" Nanayakkara said.

Mathematically rigorous privacy guarantee...Differential privacy is a mathematically formulated definition of privacy. Rather than a particular set of algorithms or equations, it is a benchmark for privacy protection: an analysis meets it when its published outputs are constructed so that individual information cannot be extracted from them, whether unintentionally or otherwise.

For example, if a medical database were used for a statistical analysis or to train a machine learning model, the analysis would be differentially private only if individual information were difficult to recover from the published results. This standard is met by adding carefully calibrated random statistical "noise" during computations on the data, using algorithms that draw from specific probability distributions.
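The calibrated-noise idea can be sketched in a few lines of Python. This is an illustrative, hand-rolled version of the classic Laplace mechanism, not code from OpenDP or any real deployment; all names are invented for the example:

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw one sample from Laplace(0, scale) by inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    For a counting query, adding or removing any one person changes the
    answer by at most `sensitivity`, so noise drawn with scale
    sensitivity/epsilon masks every individual's contribution.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Smaller epsilon means stronger privacy but noisier published answers.
noisy = private_count(true_count=1042, epsilon=0.5)
```

Here epsilon is the policy knob discussed later in the article; production systems use vetted libraries such as those from OpenDP rather than hand-rolled samplers.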

The idea for a public-facing deployment registry originated in a 2018 paper by Dwork and colleagues. Computer scientists call the critical parameter that must be set when using differential privacy "epsilon," so the paper first dubbed the idealized database an "epsilon registry."

Dwork, who has been giving talks on differential privacy for 20 years, said that the choice to implement the technology is always a policy decision, not a technical one. "Yet still, every time, the first question from a general audience is, 'How should we choose epsilon?'" she said.

Thus, she is "thrilled" with the establishment of the registry and "in awe" of Vadhan's leadership in building and sustaining the OpenDP community. "The collective wisdom of the community in balancing the feasible and the tolerable will aid future practice, not just in choosing epsilon but in myriad other decisions and strategies needed for the deployment of differential privacy in different settings and with different goals," Dwork said.

While it remains to be seen how the new registry will change the differential privacy landscape, initial findings from the Harvard user study are promising: For instance, many practitioners saw potential for the registry to become a needed hub for the community, helping to develop best practices and inform future deployments.

Provided by Harvard John A. Paulson School of Engineering and Applied Sciences 


DIGITAL LIFE


AI systems lack a fundamental property of human cognition: Understanding this gap may matter for safety

When a person reaches across a table to pass the salt, their brain is doing something far more complex than recognizing a request and executing a movement. It is drawing on a lifetime of bodily experience—where their hand is in space, what a saltshaker feels like, the social awareness of who asked and why. In a fraction of a second, their body and brain are working as one.

Today's most advanced artificial intelligence systems lack such bodily mechanisms, and a new study by UCLA Health argues that this has significant implications for how these models behave as well as how safe and trustworthy they can become.

In a paper published in the journal Neuron, UCLA Health postdoctoral fellow Akila Kadambi and colleagues propose that current AI systems are missing two essential ingredients that humans take for granted: a body that interacts with the physical world and an internal awareness of that body's own states, such as fatigue, uncertainty or physiological need.

The researchers call this combined property "internal embodiment," and propose that building functional analogs of it into AI represents one of the most crucial and underexplored frontiers in the field.

"While there is a current focus in world modeling on external embodiment, such as our outward interactions with the world, far less attention is given to internal dynamics, or what we term 'internal embodiment.' In humans, the body acts as our experiential regulator of the world, as a kind of built-in safety system," said Akila Kadambi, a postdoctoral fellow in the Department of Psychiatry and Biobehavioral Sciences at UCLA's David Geffen School of Medicine and the paper's first author.

"If you're uncertain, if you're depleted, if something conflicts with your survival, your body registers that. AI systems right now have no equivalent. They can sound experiential, whether they should be or not, and that's a real problem for many reasons, especially when these systems are being deployed in consequential settings."

The AI body gap...The paper focuses on multimodal large language models, the class of technology that powers tools such as ChatGPT and Google's Gemini. While these systems can process and generate text, images and video, describing a cup of water, for example, they cannot know what it feels like to be thirsty, the authors state.

That distinction is not only philosophical, the authors state, but also has measurable consequences for how these systems perform and behave.

In one illustration from the paper, researchers showed several leading AI models a simple image: a small number of dots arranged to suggest a human figure in motion, which is a well-established perceptual test known as a point-light display that even newborns can recognize as human.

Several models failed to identify the figure as a person, with one describing it instead as a constellation of stars. When the same image was rotated just 20 degrees, even the best-performing models broke down.

Humans don't fail this test because human perception is anchored in a lifetime of bodily experience gained by moving through the world as acting agents. AI systems, trained on vast libraries of text and images but with no bodily experience, are pattern-matching without that anchor, the study authors state.

Two kinds of 'embodiment'...The paper draws a distinction that has not previously been made explicit in AI research. It defines "external embodiment" as a system's ability to interact with the physical world, to perceive its environment, plan actions and respond to real-world feedback, which is an important focus in current multimodal AI models.

Internal embodiment, however, has not been implemented in these models. The authors define this as the continuous monitoring of one's own internal states, the biological equivalent of knowing you are tired, uncertain or in need.

Humans regulate these internal states constantly and automatically using the body's organs, hormones and nervous system. Humans use that information not just to maintain physical health, but to shape attention, memory, emotion and social behavior.

"By contrast, current AI systems have no equivalent mechanism. They process inputs and generate outputs without any persistent internal state that regulates how they behave over time," said Dr. Marco Iacoboni, professor in the Department of Psychiatry and Biobehavioral Sciences at the David Geffen School of Medicine and a senior author on the paper.

"This is not just a performance limitation, but also a safety limitation. Without internal costs or constraints, an AI system has no intrinsic reason to avoid overconfident errors, resist manipulation or behave consistently."

What comes next...The authors state that the paper is meant to guide future research as AI technology develops. They propose what they call a "dual-embodiment framework": a set of principles for building AI systems that model both their interactions with the external world and their own internal states.

These internal state variables would not need to replicate human biology directly but would function as persistent signals tracking things like uncertainty, processing load and confidence that could shape the system's outputs and constrain its behavior over time.
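As a toy sketch only, with every class name, signal, and threshold invented for illustration rather than taken from the paper, such a persistent internal variable might be a running uncertainty estimate that the system consults before answering:

```python
import math

class InternalState:
    """Minimal sketch of a persistent internal signal (illustrative only).

    Tracks a running 'uncertainty' estimate across queries, loosely
    analogous to the interoceptive monitoring the authors describe.
    """

    def __init__(self, decay=0.9):
        self.decay = decay
        self.uncertainty = 0.0  # persists between calls, unlike a stateless model

    def update(self, token_probs):
        # High-entropy output distributions nudge the signal upward;
        # confident, peaked distributions let it decay back down.
        entropy = -sum(p * math.log(p) for p in token_probs if p > 0)
        self.uncertainty = self.decay * self.uncertainty + (1 - self.decay) * entropy

    def should_hedge(self, threshold=0.5):
        """Behavioral constraint: above the threshold, defer or express
        doubt instead of answering outright."""
        return self.uncertainty > threshold
```

A stateless model answers every query the same way regardless of its recent track record; a persistent signal like this would give the system a rough analog of "feeling unsure" that can gate overconfident output.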

The authors also propose a new class of tests, or benchmarks, designed to measure a system's internal embodiment. Existing AI benchmarks focus almost exclusively on external performance, such as whether the system can navigate a space, identify an object, or complete a task.

The UCLA researchers argue the field needs evaluations that probe whether a system can monitor its own internal states, maintain stability when those states are disrupted and behave pro-socially in ways that emerge from shared internal representations rather than statistical mimicry.

"What this work does is bring that insight directly to bear on AI development," Iacoboni said. "If we want AI systems that are genuinely aligned with human behavior—not just superficially fluent—we may need to give them vulnerabilities and checks that function like internal self-regulators."

Provided by University of California, Los Angeles 

Tuesday, March 31, 2026


TECH


Eco-friendly plastic plates could replace steel bars in concrete

Researchers at the University of Sharjah have demonstrated that concrete can be reinforced using polymer plates instead of steel bars, with the new material showing superior strength, ductility, and energy dissipation. The details of their findings, published in the journal Construction and Building Materials, could pave the way for more sustainable and environmentally friendly construction materials.

According to the study, polymer plates significantly outperformed polymer bars, achieving nearly double the peak load capacity and absorbing five times more energy than configurations reinforced with simple PLA bars.

"Results showed that optimized wavy geometries significantly enhanced bond strength, improved post-cracking behavior, and increased energy dissipation compared to traditional straight reinforcement," they write. "The best-performing specimens reached nearly 80% of the flexural strength of steel-reinforced samples."

Rather than simply substituting steel bars with plastic versions, the research explored how the shape and geometry of the reinforcement influence structural performance. The researchers evaluated two primary reinforcement configurations.

"We tested bars versus plates, comparing standard rod-like shapes to flat, plate-like structures," said Dr. Muhammad Talha Junaid, associate professor of materials and structures at the University of Sharjah.

"We also tested traditional straight lines against innovative wavy, serrated, and triangular patterns designed to grip the concrete better and achieve better stress transfer."

The construction sector is pivoting to Additive Manufacturing (AM) to reduce waste and automate production. Scientists have successfully tested eco-friendly plastic plates as a potential replacement for the steel bars traditionally used to reinforce concrete. Credit: Construction and Building Materials (2025)

Strong alternative to steel...Concrete, the most widely used construction material on the planet, depends heavily on steel reinforcement to provide tensile strength. Globally, it is estimated that half of all steel production, approximately 900 million tons annually, is used in construction, with a substantial portion allocated specifically to reinforcing concrete.

While effective, steel comes with drawbacks: it is heavy, costly, and susceptible to corrosion, which can compromise the longevity of structures, explained Dr. Junaid. "In our study, we investigated a cutting-edge solution: reinforcing concrete with 3D-printed polylactic acid (PLA), a biodegradable thermoplastic."

One of the key findings, according to Dr. Junaid, is that plates outperform bars. "Beams reinforced with PLA plates achieved up to twice the peak load capacity and absorbed up to five times more energy (toughness) than those using simple PLA bars. The increased surface area of the plates allowed for a much stronger bond with the concrete."

The researchers also discovered that non-traditional shapes, especially triangular and wavy forms, greatly enhanced the beam's ability to handle post-cracking stress. Dr. Junaid said, "These serrated shapes acted like teeth, locking into the concrete to prevent slipping."

Overall, the most effective configuration was the triangular wavy PLA plate, which achieved "nearly 80% of the bending strength of a traditional steel-reinforced beam and matched its ductility (flexibility)," added Dr. Junaid.

Thermoplastic plates outperform PLA bars...One of the key takeaways from the study is that it provides a pathway for the mass production of innovative reinforcing shapes, demonstrating that the performance of reinforced concrete depends not only on the material itself but also on the geometry of the reinforcement.

"We found that the 'wavy' or serrated shapes (resembling teeth) grip the concrete much better than straight bars, preventing the reinforcement from slipping when the beam is loaded or stressed," said Dr. Junaid. "This increased the bond to help distribute the stress, thereby enhancing the strength performance of the elements."


The researchers also discovered that using flat plates with longitudinal reinforcing elements, rather than traditional rod-like bars alone, significantly improved performance.

"The plates provided more surface area for the concrete to bond to, resulting in beams that could handle twice the load and absorb five times more energy than those with bars only," Dr. Junaid added.

The authors emphasize that their reinforcement method "presents a viable, non-corrodible alternative to conventional steel reinforcement. While its strength is slightly lower, it has demonstrated comparable performance in certain configurations. It offers a sustainable, lightweight solution, particularly suitable for applications requiring corrosion resistance or material compatibility."

They further note that "PLA plate configurations consistently outperformed PLA bars, with their effectiveness governed by parameters such as increased bond strength, continuity of the reinforcement path, cross-sectional area, and increased bonded surface area with the surrounding concrete."

In their study, the researchers outline several practical implications, noting that the thermoplastic plates offer substantial advantages over traditional steel bars, particularly in their superior resistance to corrosion, light weight, customization on demand, and overall sustainability.

Provided by University of Sharjah 

 

DIGITAL LIFE


Supply chain attack compromises Axios and installs Trojan on Windows, macOS, and Linux

The popular Axios library, used in countless JavaScript projects to make HTTP requests, suffered a supply chain attack that compromised two versions published in the npm registry. StepSecurity investigators identified versions 1.14.1 and 0.30.4, published in the early morning of March 31, 2026, as malicious. The packages injected a fake dependency whose installation script can plant a remote access trojan on developer machines.

One of the most popular HTTP clients in the JavaScript ecosystem, Axios, was the target of a supply chain attack after two versions of the package published on npm introduced a malicious dependency capable of installing a trojan on Windows, macOS, and Linux systems.

The affected versions, 1.14.1 and 0.30.4, included the fake dependency “plain-crypto-js” in version 4.2.1. According to the security company StepSecurity, these versions were published using compromised credentials of the main maintainer of Axios.

The malicious package was designed exclusively to execute a post-installation script that acts as a “dropper” – an initial installer – of a cross-platform remote access trojan (RAT). This code connects to a command and control (C2) server, downloads additional payloads specific to each operating system, and, after execution, erases its own traces to make forensic detection difficult.

According to researchers, the attack was not opportunistic. The malicious dependency was prepared in advance, with three distinct payloads developed for different operating systems. The two compromised versions of Axios were published only 39 minutes apart, in a coordinated and planned operation to maximize reach before detection.

Axios is one of the most widely used packages in the JavaScript ecosystem, with over 83 million weekly downloads, and is widely employed in front-end applications, back-end services, and enterprise systems.

The incident exposed the vast development ecosystem that depends on the library. The attackers did not alter the core Axios code, but added a hidden dependency called plain-crypto-js@4.2.1 that activated automatically when running npm install, installing specific payloads for Windows, macOS, and Linux.

How the maintainer account was compromised...Those responsible for the attack gained access to the npm account of the project's main maintainer, identified as jasonsaayman. They changed the associated email address to ifstap@proton.me and manually published the compromised versions, bypassing the repository's automated continuous integration flows on GitHub. The first malicious version, axios@1.14.1, was released around 00:21 UTC, followed by axios@0.30.4 approximately 39 minutes later.

This approach allowed the packages to be made available without triggering signature checks or the usual CI/CD processes. The Axios maintainers reacted quickly after the discovery, and NPM removed both versions within a few hours, limiting the exposure time to about two to three hours.

The fake dependency plain-crypto-js@4.2.1 was not imported at any point in the original Axios code, serving exclusively to execute a postinstall script. The script acted as a remote access Trojan dropper, establishing contact with a command and control server to download additional payloads tailored to each operating system.

Obfuscation techniques were employed to hinder immediate analysis, with commands decoded at runtime. After successful installation, the malware removed its own traces, replacing the package.json file with a clean version to avoid detection in subsequent inspections of the node_modules folder.

Researchers suggested three quick checks for potentially affected environments:

- Verify installed versions with the command `npm list axios`, looking for 1.14.1 or 0.30.4
- Check for the presence of the `node_modules/plain-crypto-js` folder as an indicator of compromise
- Search for artifacts such as the temporary file `/tmp/ld.py` or equivalents on other systems

Recommended mitigation measures for developers... Programmers who have installed versions 1.14.1 or 0.30.4 should consider their environment compromised and act immediately. The main recommendation is to revert to previous secure versions: axios@1.14.0 in the latest branch or axios@0.30.3 in the legacy version.

It is essential to remove the fake dependency, perform a clean installation with the `--ignore-scripts` flag, and rotate all sensitive credentials, including npm tokens, SSH keys, cloud service access, and environment variables. In continuous integration pipelines, permanently ignoring post-installation scripts helps prevent unwanted automatic executions.

Axios is among the most widely used libraries in the Node.js ecosystem and in front-end applications, being a direct or indirect dependency of numerous corporate and open-source projects. The attack highlights the inherent vulnerability of individual maintainer accounts in highly popular packages, even when the core code remains intact.

Security experts note that the method employed demonstrates operational sophistication, with prior preparation of the fake dependency in a clean version before injecting the malicious payload. This strategy complicated initial automatic detections and increased the risk during the short period that the versions were available.

Guidelines for checking and cleaning affected environments...Development teams need to audit installation logs and package history to identify if malicious versions were downloaded. The presence of the plain-crypto-js folder in node_modules serves as a strong indicator that the dropper was executed, regardless of subsequent file removal.
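Such an audit can be sketched in Python. The package names and version numbers below come from the incident report; the script itself is illustrative and assumes the npm v2/v3 `package-lock.json` layout:

```python
import json
from pathlib import Path

# Versions reported as malicious in this incident.
COMPROMISED = {"axios": {"1.14.1", "0.30.4"}, "plain-crypto-js": {"4.2.1"}}

def scan_lockfile(lockfile_text):
    """Return the (name, version) pairs in an npm lockfile that match
    the compromised versions. npm v2/v3 lockfiles list every installed
    package under "packages", keyed by its node_modules path."""
    data = json.loads(lockfile_text)
    hits = set()
    for path, info in data.get("packages", {}).items():
        name = path.split("node_modules/")[-1] if path else data.get("name", "")
        if info.get("version") in COMPROMISED.get(name, ()):
            hits.add((name, info["version"]))
    return hits

def dropper_present(project_root="."):
    """The fake dependency left on disk is a strong indicator of compromise."""
    return (Path(project_root) / "node_modules" / "plain-crypto-js").exists()
```

Any hit from either check should be treated as full compromise of the machine, per the guidance above, with credential rotation to follow.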

After the cleanup, a full scan of the systems with threat detection tools and monitoring of network connections to addresses associated with the control server is recommended. Immediate updating of security policies in private repositories also helps to reduce similar risks in other packages.

Prevention of future attacks on package registries...The incident reinforces the importance of measures such as rigorous multi-factor authentication on publishing accounts, continuous monitoring of changes in package metadata, and the adoption of more robust integrity checks. Open source projects with wide adoption may consider additional review processes before new releases.

Individual developers and companies should prioritize pinning known secure versions in project configuration files, avoiding the automatic installation of updates without prior validation. These practices help limit the attack surface in software supply chains.

The security community continues to monitor the case to map potential victims and refine detection tools. To date, there are no public reports of large-scale exploitation, but the unanimous recommendation is to treat any installation of the affected versions as a total compromise of the system involved.

mundophone

Monday, March 30, 2026


TECH


Hybrid desiccant and shallow geothermal cooling can cut energy use in humid climates

A research team from Taiwan has developed a novel hybrid air-conditioning system integrating shallow geothermal energy and a desiccant wheel. Field tests confirm it reduces energy consumption by up to 34.3% in hot and humid climates, offering a promising solution for net-zero buildings.

Air conditioning is essential in hot and humid regions, but it consumes a large share of building energy. In subtropical climates such as Taiwan, cooling systems often account for up to 40% of electricity use. To address this challenge, researchers have developed a new hybrid air-conditioning system that combines shallow geothermal energy with desiccant-based dehumidification.

Unlike conventional systems that cool and dehumidify air simultaneously, the new approach separates these processes. A desiccant wheel removes moisture from ventilation air, while a ground-source heat pump and shallow geothermal energy handle temperature control. This design avoids energy-intensive condensation dehumidification and improves overall system efficiency.

The research team conducted on-site experiments in Taiwan under various seasonal conditions. Results showed that the system maintained indoor comfort across a wide range of temperatures and humidity levels. The study is published in Energy Conversion and Management.

During hot and humid summer conditions, the system reduced energy consumption by approximately 34.3% compared to a conventional air-conditioning system. Even during milder spring and autumn conditions, energy savings of 18.7% were achieved.

A hybrid air-conditioning system integrating shallow geothermal energy and desiccant dehumidification was tested under real-world conditions in Taiwan, proving highly effective in slashing energy consumption for hot and humid climates. Credit: National Taiwan University

A year-round analysis further revealed that shallow geothermal energy alone could meet indoor cooling or heating needs for nearly 40% of the time. However, due to consistently high humidity, the desiccant dehumidification system remained essential for about half of the annual operation. The study also demonstrated that the system could flexibly switch between different operating modes depending on outdoor conditions, ensuring both efficiency and comfort.

In addition to energy savings, the system offers practical advantages. It uses stable underground temperatures as a natural heat source or sink, improving performance compared to traditional air-cooled systems.

This research highlights a promising pathway toward low-energy and sustainable cooling technologies, particularly in regions facing increasing cooling demand due to climate change.

"By combining renewable shallow geothermal energy with innovative humidity control, we can significantly reduce the energy burden of air conditioning in hot and humid climates," says corresponding author Dr. Sih-Li Chen, professor of mechanical engineering at National Taiwan University.

Provided by National Taiwan University  


TECH


Dubious AI detectors drive 'pay-to-humanize' scam

Feed an Iranian news dispatch or a literary classic into some text detectors, and they return the same verdict: AI-generated. Then comes the pitch: pay to "humanize" the writing, a pattern experts say bears the hallmarks of a scam.

As AI falsehoods explode across social media, often outpacing the capacity of professional fact-checkers, bogus detectors risk adding another layer of deception to an already fractured information ecosystem.

While even reliable AI detectors can produce false results, researchers say a crop of fraudulent tools has emerged online, easily weaponized to discredit authentic content and tarnish reputations.

AFP's fact-checkers identified three such text detectors that claim to estimate what percentage of a text is AI-generated. The tools, prompted in four languages, not only misidentified authentic text as AI-generated but also attempted to monetize those errors.

One detector, JustDone AI, processed a human-written report about the US-Iran war and wrongly concluded it contained "88% AI content." It then offered to scrub any trace of AI for a fee.

"Your AI text is humanizing," the site claimed, leading to a page where "100% unique text" was locked behind a paywall charging up to $9.99.

Two other tools—TextGuard and Refinely—produced similar false positives and sought to monetize them.

Scams and the 'liar's dividend'...Illustrating how such tools can be used to discredit individuals, pro-government influencers in Hungary claimed earlier this year that a document outlining the opposition's election campaign had been entirely created by AI.

To support the unfounded allegation, they circulated screenshots on social media showing results from JustDone.

The tools tested by AFP sought to lure students and academics as clients, with two of them claiming their users came from top institutions such as Cornell University.

Cornell University told AFP it "does not have any established relations with AI detector companies."

"Generative AI does provide an increased risk that students may use it to submit work that is not their own," the university said.

"Unfortunately, it is unlikely that detection technologies will provide a workable solution to this problem. It can be very difficult to accurately detect AI-generated content."

Fact-checkers, including those from AFP, often rely on AI visual detection tools developed by experts, which typically look for hidden watermarks and other digital clues.

However, they too can sometimes produce errors, making it necessary to supplement their findings with additional evidence such as open-source data.

The stakes are high as false readings from unreliable detectors threaten to erode trust in AI verification broadly—and feed a disinformation tactic researchers have dubbed the "liar's dividend": dismissing authentic content as AI fabrications.

"We often report on misinformers and other hoaxsters using AI to fabricate false images and videos," said Waqar Rizvi from the misinformation tracker NewsGuard.

"Now, (we are) monitoring the opposite, but no less insidious phenomenon: claims that a visual was created by AI when in fact, it's authentic."

AFP presented its findings to all three detectors.

"Our system operates using modern AI models, and the results it provides are considered accurate within our technology," TextGuard's support team told AFP.

"At the same time, we cannot guarantee or compare results with other systems."

JustDone also reiterated that "no AI detector can guarantee 100% accuracy."

It acknowledged the free version of its AI detector "may provide less precise results" due to "high demand and the use of a lighter model designed for quick access."

Echoing AFP's findings, one user on a review platform complained that "even with 100% human-written material, JustDone still flags it as AI."

AFP fed the tools multiple human-written samples—in Dutch, Greek, Hungarian, and English. All were wrongly flagged as having high AI content, including passages from an acclaimed 1916 Hungarian classic.

The tools returned AI flags regardless of input—even for nonsensical text.

JustDone and Refinely appeared to operate even without an internet connection, suggesting their results may be scripted rather than genuine technical analysis.

"These are not AI detectors but scams to sell a 'humanizing' tool that will often return what we call 'tortured phrases'"—unrelated jargon or nonsensical alternatives—Debora Weber-Wulff, a Germany-based academic who has researched detection tools, told AFP.

© 2026 AFP

Sunday, March 29, 2026

 

TECH


Five things to know about rare earth elements

Aside from oil, rare earth elements may be the most buzzworthy thing coming out of the ground these days. Headlines trumpet news about new partnerships to produce rare earths, warn of potential shortages and analyze steps to curb China's role in rare earth markets.

What are rare earth elements? Where do they come from? What's the big deal? Julie Klinger, associate professor at UW–Madison's Nelson Institute for Environmental Studies, explored these questions and more in her 2018 book "Rare Earth Frontiers: From Terrestrial Subsoils to Lunar Landscapes."

Recently, she's been making the media rounds, including with an essay in the New York Times and an interview that is expected to air on CBS's "60 Minutes" this Sunday. Klinger breaks down everything you need to know about rare earth elements—and the geopolitical concerns around them.

1. Everyday devices are possible because of rare earths...The phrase "rare earth elements" generally refers to 17 chemical elements, including scandium, yttrium and a 15-member family from atomic number 57 (lanthanum) to 71 (lutetium) called the lanthanides.

Many of them share magnetic, conductive and optical properties that make them useful as coatings and additives in alloys and glass and other materials used in a wide range of modern technology. These include jet engines, LED bulbs, fiber-optic cables, lasers and a lot of military technology.

"In some of those applications, it's safe to say rare earths are irreplaceable," Klinger says. "For example, neodymium and praseodymium make super powerful magnets that have enabled the miniaturization of technologies in phones and computers. These really powerful magnets make the magic happen in high-speed trains and MRI machines, too."

Not every application feels particularly high-tech. Seat belts in cars also use rare earth magnets.

"It's not due to a particular engineering need either," Klinger says. "It turns out that when folks were developing the seat belt retracting mechanism, that was the type of magnet they had on the shelf."

2. 'Rare' is a misnomer...Rare earths are not, in fact, particularly rare. The rare earths name is a holdover from the 18th century, when yttrium was discovered by a miner in Sweden. These elements were "rare" then because nobody had seen them before. But now we know they can be found around the globe.

"Seventeen elements is actually a sizable chunk of the periodic table," Klinger says. "We're talking about a fair amount of the stuff that makes up Earth's crust, from an elemental and mineralogical standpoint. The rare earths that we use most commonly are as abundant as copper or lead."

They're just not particularly fun to dig up.

"The geological conditions that cause rare earths to come together in higher concentrations can also concentrate radioactive materials," Klinger says. "That makes them hard to mine safely, and can really increase costs."

That doesn't mean rare earths are expensive. They're actually relatively cheap, according to Klinger, trading at prices far lower than precious metals like gold or platinum. In China, which has 30% of the world's proven rare earth reserves, mines typically discard as much as half of the rare earths they dig up, because prices aren't high enough to put the effort into recovering more.

3. Rare earths may seem so scarce because 'Avatar' was so popular...In December 2009, the sci-fi film "Avatar" was released, and it remained the most popular film in U.S. theaters for months. The plot was built around humans displacing a native race on another planet to make way for mining a fabulously valuable material called "unobtanium."

In 2010, in the real world, a diplomatic dispute led China to cut off Japan's access to rare earth elements—a very temporary blow (the embargo didn't even last as long as "Avatar" did as the No. 1 film) to Japanese tech manufacturers.

"There were headlines that said something like 'China cuts off access to unobtanium,'" Klinger says. "Our popular imagination was kind of primed by the movie, and then this short-term crisis happened. The narrative—which has continued to support a lot of other politics over the years—stuck, and it's been hard to get unstuck."

4. It's unlikely one country would just turn off the rare earths tap...While China does have ample rare earths reserves, we know the elements are distributed all around the world. Aside from China's willingness to take on the environmental price of rare earths mining, the real source of the country's market dominance is the expertise and infrastructure it has developed to process what it mines.

"Where China does have an outsized share of the rare earth economy is in the crucial intermediate steps involved in transforming a rock in the ground into useful technological components," Klinger says.

5. Abandoned mines could be a rare earths gold mine—and sustainable solution—for the U.S...A recent study showed that much of the domestic demand for rare earths (and other important minerals) can be satisfied by recovering the rare earth elements from the waste piled up around old and active mines in the United States.

"A lot of these materials are already present in what was cast off by other mines," Klinger says. "Maybe we could actually get what we need by cleaning up these long-standing, problematic, abandoned mine waste sites. It could literally be trash to treasure."

That's where Klinger's research comes in. One of her areas of study is the resource requirements of technology required to transition away from fossil fuels and mitigate climate change. Will it take some deforestation to save more forests? How much of the raw materials like rare earths needed for solar arrays and wind farms are already being used in climate-harming technologies—like the equipment needed to pump oil out of the ground and refine it?

The good news, according to Klinger, is that there are ways forward for rare earths and other critical resources that support other important environmental imperatives.

"We really can shift to a circular economy paradigm while also building out the technologies that we need, while also protecting sensitive environmental areas, while also cleaning up heavily contaminated areas," she says. "That's entirely possible. In fact, doing all of those things together is the best way to get them done."

Provided by University of Wisconsin-Madison