Thursday, April 16, 2026


TECH


Turning CO₂ from urban waste into useful consumer products

EU researchers are turning carbon emissions from urban waste into everyday household products—from cleaning liquids to leather. Europe's cities emit huge amounts of greenhouse gases into the atmosphere. Two essential urban services—waste incineration and wastewater treatment—are among the biggest contributors to municipal CO2 emissions in the EU.

These systems are vital for public health and urban life, yet they produce emissions that are difficult to eliminate entirely. But what if that CO2 did not have to go to waste?

For an international group of researchers, urban carbon pollution presents an opportunity. Working together in the WaterProof initiative, they are developing a way to capture CO2 from these processes and convert it into formic acid: a simple, highly versatile chemical used across many industries.

This could allow emissions from waste incinerators and wastewater to be turned into the cleaning products under our sink, or even the leather on our shoes.

Turning a problem into a resource...Efforts to tackle climate change focus largely on renewable energy, electrification, and improved efficiency. But some sources remain stubbornly hard to eliminate.

"Some emissions are difficult to stop," said Annelie Jongerius, an electrochemist and program manager at Dutch chemical company Avantium, which coordinates the research.

One option is to capture the CO2 and store it underground. But the WaterProof team is exploring a more circular alternative: keeping carbon in use rather than locking it away.

"It would be nicer if we could use it," Jongerius said. "At the same time, we need alternatives to fossil feedstocks for producing chemicals."

That challenge is particularly visible at facilities like those operated by Dutch waste management company HVC, which runs two major waste incinerators in the Netherlands.

"We have to take in whatever waste society produces," said Jan Peter Born, HVC's waste-to-energy innovation manager. "We have no means of regulating CO2 emissions, apart from encouraging people to buy less and recycle more."

HVC already captures some CO2 and sells it to greenhouse farmers, who use it to increase the yields of crops such as tomatoes and cucumbers. But this is only a partial solution.

"Most of the CO2 administered to the plants is released again through the greenhouse roof," Born explained. "From our legal perspective, it's a delayed emission. It is the farmer who achieves the emission reduction as he avoids gas-firing to produce CO2."

The WaterProof researchers aim to go a step further by turning captured carbon into useful products that keep it out of the atmosphere for longer.

From CO2 to cleaning products...At the heart of the WaterProof innovation is an electrochemical process that converts captured CO2 into formic acid using renewable electricity.

"It's one of the simplest conversions you can make," said Jongerius.

An electrical current drives the reaction in a specialized cell, reducing CO2 into formic acid. Because the system runs on renewable electricity and uses waste-derived carbon, it reduces reliance on fossil-based raw materials.

The process may also offer additional benefits. In an electrochemical cell, two reactions take place at the same time, one at each electrode. While the WaterProof team focuses on converting CO2 into formic acid, they have also explored pairing this with a second reaction that produces hydrogen peroxide and related compounds.

These substances can help break down stubborn pollutants in wastewater, including residues from pharmaceuticals and pesticides. However, this part of the process is still at an early stage and is not being implemented in the current demonstration system.
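As a textbook sketch (these are the standard half-reactions, not process details published by WaterProof), the cell pairs CO2 reduction at the cathode with an oxidation at the anode; conventionally that oxidation is oxygen evolution, while the variant explored here would instead oxidize water to hydrogen peroxide:

```latex
\begin{aligned}
\text{Cathode:}\quad & \mathrm{CO_2 + 2\,H^+ + 2\,e^- \rightarrow HCOOH} \\
\text{Anode (conventional):}\quad & \mathrm{2\,H_2O \rightarrow O_2 + 4\,H^+ + 4\,e^-} \\
\text{Anode (peroxide variant):}\quad & \mathrm{2\,H_2O \rightarrow H_2O_2 + 2\,H^+ + 2\,e^-}
\end{aligned}
```

The two-electron reduction is why Jongerius calls it one of the simplest conversions: a single CO2 molecule picks up two protons and two electrons to become formic acid.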

The team is testing their CO2-derived formic acid in eco-friendly cleaning products such as toilet and surface cleaners.

"It performs exactly the same as conventionally produced formic acid," Jongerius said. "It's the same molecule."

Beyond cleaning, the project is exploring the use of CO2-derived formic acid in leather tanning. While the acid can be used for all types of leather, the team is currently working with Icelandic company Nordic Fish Leather to bring eco-friendly fish leather—a more sustainable alternative to traditional cattle-based leather—to market.

Scaling up for real-world impact...While the chemistry is promising, scaling up is the next challenge. Building on earlier research, the team is now working on a large-scale pilot unit in which multiple electrochemical cells are stacked together, increasing the volume of CO2 that can be processed. If successful, it will pave the way for commercial-scale plants.

The modular design allows the system to be adapted to different sites, from wastewater plants to incinerators. The aim is to demonstrate the WaterProof process in the summer of 2026, showing that a fossil fuel-free production chain can operate under real-world conditions.

Such systems could eventually be integrated into urban infrastructure, turning cities into hubs for circular chemical production rather than sources of emissions.

Recovering valuable materials from waste...The potential of the work being carried out goes beyond carbon reuse. The researchers are also exploring how formic acid can be used to recover valuable materials from waste streams.

By combining it with other compounds, they are developing deep eutectic solvents—low-toxicity liquids capable of dissolving and binding to metals in waste so that the metals can be extracted.

Many valuable materials end up in incinerator ash and wastewater sludge, including copper, lithium, cobalt, and even small amounts of gold—all critical for modern technologies and the green transition.

HVC already uses mechanical processes to recover metals, separating heavier particles from ash in a process similar to gold panning. But this produces mixed metal streams that are less valuable. The new solvents could allow more precise separation.

"These eutectic solvents can be tailored to target specific metals," Born said. "That means you could recover individual materials rather than mixtures, which increases their value."

However, economic realities remain a barrier. Gold is the only recovered metal that commands a decent price, Born explained. For many others, including rare earths, the market price is still too low to justify the cost.

This raises broader questions about policy and priorities, particularly as demand for critical materials continues to grow: how much societies are willing to subsidize recovery from waste, and whether strategic value should win out over purely market-driven decisions.

Closing the loop...This kind of "waste-to-resource" thinking is gaining traction across Europe. New EU rules planned for 2026 aim to make recycled materials more widely available—and more widely used.

If successful, they could help turn circular ideas like those behind WaterProof into everyday reality, supporting Europe's ambition to lead the world in circular production by 2030.

By linking carbon capture, chemical production, water treatment, and material recovery, the researchers are bringing together multiple elements of that vision in a single system.

For Jongerius, the concept is both practical and symbolic.

"If you take CO2 from wastewater, turn it into a product, and then use that product to clean your toilet so it flows back into the wastewater system, you create a complete loop," she said. "It is the ultimate example of the circular economy."

Provided by Horizon: The EU Research & Innovation Magazine


DIGITAL LIFE


Thousands of AI-written, edited or 'polished' books are being sold, an eerie echo of Orwell's 'novel-writing machines'

At some point in the next several months, I am hoping to receive a modest check as a member of the class covered in the class-action settlement Bartz v. Anthropic.

In 2025, the artificial intelligence company Anthropic, best known for creating the chatbot Claude, agreed to pay up to US$1.5 billion to thousands of authors after a judge ruled that the company had infringed upon their copyrights.

When I first learned about the settlement, I assumed that Anthropic was primarily interested in teaching Claude about the subject of my stolen work, former socialist activist, British Labor politician and feminist Ellen Wilkinson.

It did not initially occur to me that Claude might also be learning about how I, Laura Beers, political historian, craft my sentences and translate my voice to the page.

Yet there is increasing evidence that chatbots like Claude can be trained not only to regurgitate an author's content, but also to mimic their voice. In March 2026, journalist Julia Angwin filed a class action suit against the owners of Grammarly, alleging that the company misappropriated her and other writers' identities to build its "Expert Review" AI tool, which offers to give editorial feedback in the voices of various authors, living and dead.

That a machine might use my writing not only to learn about my subject matter, but also to analyze and ultimately mimic my authorial voice, points to a future that George Orwell envisioned with eerie prescience. In his 1949 dystopian novel "1984," Orwell imagined "novel-writing machines" capable of mass-producing literature, employing programmed mechanical "kaleidoscopes" as substitutes for individual artistic process.

'In place of a human mind, a statistical average'...To what extent, I wondered, could Claude mimic my own voice? I prompted the chatbot to write an essay "in the style of Laura Beers" about an early 20th-century politician. I struggled to recognize myself in the resulting prose. But maybe Claude, having only consumed one of my books, didn't have enough to go on to really nail my tone.

I decided to try again. This time, I would ask it to write a brief essay in the style of Orwell. The subject of my book "Orwell's Ghosts: Wisdom and Warnings for the Twenty-First Century," Orwell is one of the most distinctive literary essayists and prose stylists of the 20th century.

Most of Orwell's work is in the public domain, meaning it could have legally been used to train Claude.

I prompted Claude to write an essay "in the style of George Orwell about the threat to individual identity and freedom of thought posed by AI and particularly by the ability of 'novel writing machines' to produce literature."

Primed with ample source material, the AI did a passable job:

"Here is a type of comfort, familiar to anyone who has ever been slowly dispossessed of something, that consists not in denying the loss but in not yet noticing it. The frog does not leap from the gradually heated water. The citizen does not protest the gradual narrowing of the permitted thought. And the reader, it seems, does not mourn the novel until the novel has already been replaced by something that resembles it in every outward particular—same chapters, same characters, same approximate sequence of feeling—yet contains, in place of a human mind, a statistical average of all the human minds that came before it."

The final sentence about the statistical average rings false. But Orwell would, I suspect, have liked the image of the slowly boiling frog. "Here is a type of comfort" is also a phrase that Orwell might well have written.

I am skeptical that anyone would classify Claude's efforts as indistinguishable from Orwell's prose. But when it comes to machine-produced "literature," perhaps it doesn't really matter whether it can fully approximate original art, as long as it's good enough to function as entertainment and distraction for the masses.

Jam, bootlaces and books...This was Orwell's own dispirited suggestion in "1984. ith the help of "novel-writing machines," the employees of the Ministry of Truth—the government department responsible for controlling information and rewriting history—are able to mass-produce not only novels, but also "newspapers, films, textbooks, telescreen programs [and] plays." 

They churn out "rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes" and "films oozing with sex," along with cheap pornography intended for the "proles," as the uneducated working classes of Big Brother's Oceania were known.

The technology disgusts Orwell's protagonist, Winston Smith, who pointedly decides to purchase a diary and pen to write down his own independent thoughts. But to Julia, Winston's nymphomaniac, anti-intellectual lover who works as a mechanic servicing the machines, "Books were just a commodity that had to be produced, like jam or bootlaces."

'Full-length novels in seconds'...According to estimates, thousands of books for sale on Amazon have been written in whole or in part using AI. In other words, today's AI is also being used to mass-produce literature like jam or bootlaces.

Many of these works are not fully machine-written. Instead, they've been, as the AI writing tool Sudowrite advertises, "polished by AI." With its "Rewrite" function, the company promises to give users an opportunity to "refine your prose while staying true to your style, with multiple AI-suggested revisions to choose from." The service is akin to the "touching up" provided by the Ministry of Truth's Rewrite Squad in "1984."

Other books for sale on Amazon are, however, entirely machine-generated. The AI writing tool Squibler promises that if you give it an overarching prompt, it can produce "Full-Length Novels in Seconds."

The potential of AI-generated "literature" to turn a quick-and-easy profit ensures that readers will continue to encounter more of this content in the future, especially as AI's large language models become more refined. Already, studies have shown that readers cannot easily distinguish AI-generated forgeries from original prose.

Last year, I had lunch with a screenwriter friend in Los Angeles. He told me that his colleagues are particularly nervous about the use of AI to produce sequels. Once you have an established cast of characters for a movie franchise like, say, "Fast & Furious," audiences will likely see the next installment whether it's written by man or machine.

Yet my own brief experiments with Claude give me at least some hope for the future of literary art. A chatbot like Claude might be able to absorb and analyze "a statistical average of all the human minds that came before it," but without the input of actual human experience and sensibility, it is hard to envisage such systems ever producing true art.

Whether AI can produce the next George Orwell novel or essay remains to be seen. That it can and will churn out an increasing volume of popular fiction and screenplays like "Fast & Furious 25" seems less in doubt.

Provided by The Conversation

Wednesday, April 15, 2026


TECH


Printed neurons communicate with living brain cells

Northwestern University engineers printed artificial neurons that don't just imitate the brain—they talk to it. In a new study, the Northwestern team developed flexible, low-cost devices that generate electrical signals realistic enough to activate living brain cells. When tested on slices of tissue from mouse brains, the artificial neurons successfully triggered responses from real neurons, demonstrating a new level of biocompatibility.

The work marks a step toward electronics that can communicate directly with the nervous system, with potential applications in brain-machine interfaces and neuroprosthetics, including implants for hearing, vision and movement.

It also lays the groundwork for more efficient, brain-like computing systems. By mimicking how neurons signal—a key feature of the brain, which is the most energy-efficient computer known—futuristic systems could perform complex operations using far less power than today's data-hungry technologies.

The study is published in the journal Nature Nanotechnology.

"The world we live in today is dominated by artificial intelligence (AI)," said Northwestern's Mark C. Hersam, who led the study. "The way you make AI smarter is by training it on more and more data.

"This data-intensive training leads to a massive power-consumption problem. Therefore, we have to come up with more efficient hardware to handle big data and AI. Because the brain is five orders of magnitude more energy efficient than a digital computer, it makes sense to look to the brain for inspiration for next-generation computing."

An expert in brain-like computing, Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern's McCormick School of Engineering, professor of medicine at Northwestern University Feinberg School of Medicine and professor of chemistry at Northwestern's Weinberg College of Arts and Sciences.

He is also the chair of the Department of Materials Science and Engineering, director of the Materials Research Science and Engineering Center and a member of the International Institute for Nanotechnology. Hersam co-led the study with Vinod K. Sangwan, a research associate professor at McCormick.

From rigid silicon to dynamic brains...As computing tasks become more complex and data-intensive, computers meet these demands by adding more identical components—billions of transistors packed onto rigid, two-dimensional silicon chips. Each transistor behaves the same way. And, once fabricated, those systems remain fixed.

The brain operates in a strikingly different way. Rather than comprising uniform building blocks, the brain relies on diverse types of neurons—each performing specialized roles—organized across regions. These soft, three-dimensional networks constantly change, forming and reshaping connections over time as people learn and adapt.

"Silicon achieves complexity by having billions of identical devices," Hersam said. "Everything is the same, rigid and fixed once it's fabricated. The brain is the opposite. It's heterogeneous, dynamic and three-dimensional. To move in that direction, we need new materials and new ways to build electronics."

While other artificial neurons do exist, they fall short of biological realism. Most produce simplified signals, forcing engineers to rely on large, energy-intensive networks of devices to achieve complex behavior.

An aerosol jet printer in Hersam's laboratory deposits electronic inks onto a flexible polymer substrate. Credit: Mark Hersam/Northwestern University

Turning an imperfection into a feature...To move closer to a biological model, Hersam's team developed artificial neurons using soft, printable materials that better mimic the brain's structure and behavior. The backbone of that advance is a series of electronic inks, formulated from nanoscale flakes of molybdenum disulfide (MoS2), which acts as a semiconductor, and graphene, which serves as an electrical conductor.

Using a specialized printing technique called aerosol jet printing, the researchers deposited these inks onto flexible polymer substrates.

In the past, other researchers viewed the stabilizing polymer in the inks as a problem that interfered with electrical current flow, so they burned the polymer away after printing the electronic circuit. But Hersam leveraged this minor imperfection to add brain-like functionality to his device.

"Instead of fully removing the polymer, we partially decompose it," he said. "Then, when we pass current through the device, we drive further decomposition of the polymer. This decomposition occurs in a spatially inhomogeneous manner, leading to formation of a conductive filament, such that all the current is constricted into a narrow region in space."

This narrow region becomes a localized pathway that produces a sudden, neuron-like electrical response. The result is a new type of artificial neuron that can generate a rich range of electrical signals. Instead of generating simple, one-off pulses, the new device produces more complex signaling patterns—including single spikes, continuous firing and bursting patterns—that resemble how real neurons communicate.
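The signaling regimes named above (single spikes, continuous firing, bursting) can be made concrete with a standard computational neuron model. The sketch below uses the Izhikevich model, a common textbook abstraction; it is not the device physics of the printed MoS2/graphene neuron, just a way to see how one parameterized element can produce qualitatively different firing patterns.

```python
# Illustrative only: the Izhikevich model is a standard computational
# abstraction of neuron dynamics, not the physics of the printed device.
# Changing two parameters switches the same element between tonic
# (regular) spiking and bursting.

def izhikevich(a, b, c, d, I=10.0, T=1000.0, dt=0.25):
    """Euler-integrate dv/dt = 0.04v^2 + 5v + 140 - u + I and
    du/dt = a(bv - u); on v >= 30 mV, reset v -> c and u -> u + d.
    Returns spike times in milliseconds."""
    v, u = -65.0, b * -65.0
    spikes, t = [], 0.0
    while t < T:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # threshold crossing counts as a spike
            spikes.append(t)
            v, u = c, u + d        # after-spike reset
        t += dt
    return spikes

tonic = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0)   # regular spiking
bursts = izhikevich(a=0.02, b=0.2, c=-50.0, d=2.0)  # chattering/bursting
```

Plotting the two spike trains shows evenly spaced spikes in the first case and clustered groups in the second, the same qualitative distinction the printed neurons reproduce in hardware.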

By capturing this signaling diversity, each neuron can encode more information and perform more sophisticated functions. And that can reduce the number of components needed in a computing system, drastically improving overall efficiency.

Putting artificial neurons to the test...To test whether its artificial neurons truly could interface with biology, Hersam's team collaborated with Indira M. Raman, the Bill and Gayle Cook Professor of Neurobiology at Weinberg. Raman's team applied electrical signals from the artificial neurons to slices of mouse cerebellum.

They found the artificial voltage spikes matched key biological features, including timing and duration of living neuron voltage spikes. This reliably triggered activity in real neurons, activating neural circuits in a way similar to natural signals.

"Other labs have tried to make artificial neurons with organic materials, and they spiked too slowly," Hersam said. "Or they used metal oxides, which are too fast. We are within a temporal range that was not previously demonstrated for artificial neurons. You can see the living neurons respond to our artificial neuron. So, we've demonstrated signals that are not only the right timescale but also the right spike shape to interact directly with living neurons."

The approach comes with several environmentally friendly advantages. In addition to improving energy efficiency, the neuron's manufacturing process is simple and low-cost. Because the printing process is additive—placing material only where it's needed—it also reduces waste.

"To meet the energy demands of AI, tech companies are building gigawatt data centers powered by dedicated nuclear power plants," Hersam said.

"It is evident that this massive power consumption will limit further scaling of computing, since it's hard to imagine a next-generation data center requiring 100 nuclear power plants. The other issue is that when you're dissipating gigawatts of power, there's a lot of heat. Because data centers are cooled with water, AI is putting severe stress on the water supply. However you look at it, we need to come up with more energy-efficient hardware for AI."

Provided by Northwestern University


TECH


China tests underwater cutter at 3,500 meters: innovation or instrument of intimidation?

A Chinese deep-sea mission has successfully tested an advanced device capable of cutting through underwater structures, such as submarine cables, at depths of thousands of meters.

China has successfully tested a specialized deep-sea electro-hydrostatic actuator capable of severing undersea telecommunications cables at depths of 3,500 meters, marking a significant leap in its deep-sea intervention (i.e. military) capabilities. The trial, conducted by researchers from the Chinese Academy of Sciences and reported by state media, confirms that the device can operate at the abyssal zone where most of the world's critical internet and data infrastructure resides.

With the electro-hydrostatic design, the tool is self-contained and highly efficient, potentially allowing it to be mounted on small, unmanned underwater vehicles (UUVs). During the latest tests, the cutter successfully sliced through high-tension cables without the need for a massive surface support fleet and cumbersome umbilicals. 

Strategically, the ability to operate at 3,500 meters places almost all of the South China Sea’s seabed infrastructure within reach. While China has officially framed the technology as a tool for deep-sea maintenance, salvage, and scientific exploration, state-affiliated reports have hinted at its deployment readiness for more assertive roles. The compact nature of the actuator means it could be deployed from standard research vessels or even commercial ships, making detection of such activities significantly more difficult for foreign maritime powers.

The “Haiyang Dizhi 2” research vessel completed its first deep-sea scientific mission of 2026 on Saturday, according to the Ministry of Natural Resources.

The expedition included a cutting test of a deep-sea electro-hydrostatic actuator at a depth of 3,500 metres (11,483 feet), using technology that has drawn attention for its potential military use.

“The sea trial has bridged the ‘last mile’ from deep-sea equipment development to engineering application,” the official China Science Daily reported on Saturday, suggesting the equipment was poised for actual deployment.

According to the report, the 'Haiyang Dizhi 2' completed the first deep-sea mission of the year on April 11. The electro-hydrostatic actuator (EHA) combines hydraulics, an electric motor and a control unit in a single device, eliminating the need for lengthy and cumbersome external oil piping. The device was reportedly further strengthened against deep-sea pressure and corrosion, enabling "precise mechanical tasks" at great depths. A September report cited by the article notes that this technology has previously been touted "for cutting subsea cables and operating deep-sea grabs."

The project isn't purely destructive in nature: the technology has obvious applications in the repair and construction of underwater oil and gas pipelines. Given the global context and the timing, however, the implications for military and nefarious use are equally obvious. Several projects from China's undersea initiative have reportedly drastically improved the effectiveness of such tasks. In a 2022 offshore pipeline repair, crews took five hours "just to make a single cut" on an 18-inch section of damaged pipe. Just one year later, remotely operated homegrown vessels could cut pipes up to 38 inches in diameter at a depth of 2,000 feet, including one repair in which an eight-inch pipe was cut through in just 20 minutes. The latest testing extends these capabilities to at least 3,500 meters, almost 11,500 feet.

Most pressingly, China's test highlights a growing vulnerability in the physical layer of the digital world. International law regarding the protection of undersea cables remains murky, particularly in international waters where these new devices can operate. The quiet efficiency of the electro-hydrostatic cutter could mean that the next major disruption to global comms may not come just from a cyberattack, but also from a mechanical blade in the deep.

The Haiyang Dizhi 2 (also known as Haiyang Dizhi Shihao, IMO: 9795751) is a high-tech geological research and survey vessel operating under the Chinese flag. Built in 2017, the vessel measures approximately 75.8 meters in length and 15.4 meters in width.

Highlights and recent activities:

State-of-the-art technology: The vessel is equipped with advanced systems, including a 150-ton active offset offshore crane, a 10-kilometer fiber optic winch, and a geological winch. It has a 730-square-meter deck and a helicopter platform.

Subsea operation capability: In April 2026, the vessel successfully conducted tests of an electro-hydrostatic actuator (EHA) at depths exceeding 3,500 meters. This technology is described as capable of performing precise mechanical tasks at great depths, such as cutting submarine cables and operating grabs.

Fuel ice research: In October 2025, the vessel carried out research on combustible ice (methane hydrate) in deep water, using the ROV (remotely operated vehicle) "Haima" at a depth of 1,522 meters to collect samples.

Scientific missions: The vessel is frequently used for deep-sea research, with missions reported in the South China Sea.

The vessel plays an important role in China's deep-sea marine research activities, with capabilities ranging from geological studies to the handling of subsea infrastructure.

mundophone

Tuesday, April 14, 2026



DIGITAL LIFE




Tiny cameras in earbuds let users talk with AI about what they see

University of Washington researchers developed the first system that incorporates tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. For instance, a user might turn to a Korean food package and say, "Hey Vue, translate this for me." They'd then hear an AI voice say, "The visible text translates to 'Cold Noodles' in English."

The prototype system called VueBuds takes low-resolution, black-and-white images, which it transmits over Bluetooth to a phone or other nearby device. A small artificial intelligence model on the device then answers questions about the images within around a second. For privacy, all of the processing happens on the device, a small light turns on when the system is recording, and users can immediately delete images.

The team presented its research April 14 at the CHI 2026 conference in Barcelona. The study is published in the Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems.

"We haven't seen most people adopt smart glasses or VR headsets, in part because a lot of people don't like wearing glasses, and they often come with privacy concerns, such as recording high-resolution video and processing it in the cloud," said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "But almost everyone wears earbuds already, so we wanted to see if we could put visual intelligence into tiny, low-power earbuds, and also address privacy concerns in the process."

Cameras use far more power than the microphones already in earbuds, so using the same sort of high-resolution cameras as those in smart glasses wouldn't work. Bluetooth also can't stream large amounts of data continuously, so the system can't run continuous video.

The team found that using a low-power camera—roughly the size of a grain of rice—to shoot low-resolution, black-and-white still images limited battery drain and allowed for Bluetooth transmission while preserving performance.
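Rough numbers make the Bluetooth constraint concrete. The frame size and link rate below are illustrative assumptions, not figures from the study: assuming a 160x160, 8-bit grayscale still and a sustained link of about 1 Mbit/s, a single frame transfers in a fraction of a second, while continuous video at the same resolution would exceed the link.

```python
# Back-of-the-envelope for the Bluetooth constraint. The frame size and
# link rate are illustrative assumptions, not figures from the study.
frame_bytes = 160 * 160 * 1        # one 160x160, 8-bit grayscale still
link_bps = 1_000_000               # ~1 Mbit/s sustained Bluetooth throughput
tx_seconds = frame_bytes * 8 / link_bps   # time to send one still (~0.2 s)
video_bps = frame_bytes * 8 * 30          # 30 fps video at the same resolution
```

Under these assumptions one still moves in about 0.2 s, comfortably inside a one-second response budget, while even low-resolution video (~6 Mbit/s) would saturate the link, which is why the system sends single frames on demand.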

There was also the matter of placement.

"One big question we had was: Will your face obscure the view too much? Can earbud cameras capture the user's view of the world reliably?" said lead author Maruchi Kim, who completed this work as a UW doctoral student in the Allen School.

The team found that angling each camera 5–10 degrees outward provides a 98–108 degree field of view. While this creates a small blind spot when objects are held closer than 20 centimeters from the user, people rarely hold things that close to examine them—making it a non-issue for typical interactions.
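The reported numbers are internally consistent under simple geometry: two cameras, each with horizontal field of view f degrees and tilted outward by t degrees, jointly cover about f + 2t degrees while their fields still overlap. Working backwards from the article's 98-108 degree range at 5-10 degree tilts implies a per-camera field of view near 88 degrees (our inference, not a published spec).

```python
# Simple geometry check on the reported numbers: two cameras, each with
# horizontal field of view f degrees, tilted outward by t degrees each,
# jointly cover f + 2*t degrees while their fields still overlap.
# A per-camera FOV of 88 degrees reproduces both reported endpoints
# (our inference, not a figure from the study).

def combined_fov(per_camera_fov_deg, tilt_deg):
    return per_camera_fov_deg + 2 * tilt_deg
```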

Researchers also discovered that while the vision language model was largely able to make sense of the images from each earbud, having to process images from both earbuds slowed it down. So they had the system "stitch" the two images into one, identifying overlapping imagery and combining it. This allows the system to respond in one second—quick enough to feel like real-time for users—rather than the two seconds it takes with separate images.
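The stitching step lends itself to a short sketch: slide the two frames past each other, score each candidate overlap by mean squared difference, and merge at the best match. This is a minimal illustration of the idea described, with invented names and plain grayscale arrays; it is not the actual VueBuds pipeline, which feeds the merged frame to a vision language model.

```python
import numpy as np

def stitch_pair(left, right, min_overlap=4):
    """Join two same-height grayscale frames by testing each candidate
    horizontal overlap, scoring it with mean squared difference, and
    keeping the best-matching columns only once. A minimal sketch of
    the stitching idea, not the VueBuds implementation."""
    wl, wr = left.shape[1], right.shape[1]
    best_ov, best_err = min_overlap, float("inf")
    for ov in range(min_overlap, min(wl, wr) + 1):
        diff = left[:, wl - ov:].astype(float) - right[:, :ov].astype(float)
        err = float(np.mean(diff ** 2))
        if err < best_err:
            best_err, best_ov = err, ov
    # keep all of `left`, then the non-overlapping tail of `right`
    return np.hstack([left, right[:, best_ov:]])
```

With synthetic frames cut from one image, the function recovers the original by finding the zero-error overlap; real frames would pick the minimum-error alignment instead.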

The team then had 74 participants compare recorded outputs from VueBuds with outputs from Ray-Ban Meta Glasses in a series of tests. Despite VueBuds using low-resolution images with greater privacy controls and the Ray-Bans taking high-res images processed on the cloud, the two systems performed equivalently. Participants preferred VueBuds' translations, while the Ray-Bans did better at counting objects.

Sixteen participants also wore VueBuds and tested the system's ability to translate and answer basic questions about objects. VueBuds achieved 83%–84% accuracy when translating or identifying objects and 93% when identifying the author and title of a book.

This study was designed to gauge the feasibility of integrating cameras in wireless earbuds. Since the system only takes grayscale images, it can't answer questions that involve color in the scene.

The team wants to add color to the system—color cameras require more power—and to train specialized AI models for specific use cases, such as translation.

"This study lets us glimpse what's possible just using a general purpose language model and our wireless earbuds with cameras," Kim said. "But we'd like to study the system more rigorously for applications like reading a book—for people who have low vision or are blind, for instance—or translating text for travelers."


Provided by University of Washington


TECH


Freestanding silicon anode design improves fast charging and cycle life in lithium-ion batteries

Sejong University said Tuesday that a research team had developed a next-generation silicon anode that enables faster charging and longer battery life, a potential advance for electric vehicles and energy storage systems.

The team, led by Yang Hyeon-woo and Kim Sun-jae of the department of nanotechnology and advanced materials engineering, developed a freestanding silicon anode that maintains high performance without conventional components like current collectors, binders or conductive additives.

The findings were published online April 6 in Advanced Fiber Materials, an international journal with an impact factor of 21.3 — a measure of how often its research is cited — placing it among the more influential publications in its field, according to the university.

The researchers at Sejong University introduced a novel electrode architecture that uses carbon nanofibers as a foundational framework, a design intended to overcome the historic fragility of silicon-based components. By engineering precise hydrolysis and condensation reactions directly onto the surface of each fiber, the team achieved a uniform silicon coating.

This structural refinement not only bolsters the anode’s physical stability — preventing the degradation typical of repeated charging cycles — but also significantly enhances electrical connectivity, a crucial step toward the next generation of high-endurance energy storage.

“Silicon anodes have faced limitations due to structural damage during repeated charge and discharge cycles despite their high capacity,” Kim said. “This study presents a new design approach that could overcome those issues and be widely applied in next-generation lithium-ion batteries where fast charging and long life are critical.”

The research was supported by the Ministry of Education’s Basic Science Research Capacity Enhancement Program and the National Research Foundation of Korea.

Silicon has long been seen as a promising anode material for next-generation lithium-ion batteries because it can store much more lithium than graphite. But silicon also expands and contracts sharply during charging and discharging, which can crack the electrode, disrupt electrical pathways and shorten battery life.

Researchers at Sejong University have developed a freestanding silicon anode designed to address that problem. Their study is published in Advanced Fiber Materials under the title "CNF-Supported Si Freestanding Anode with a Conformal Granular Si/SiOx Interphase for High-Rate, Long-Life Li-Ion Batteries."

Schematic illustration of a CNF-supported Si freestanding anode fabrication process. Credit: Advanced Fiber Materials (2026)

Conventional silicon electrodes are often made by casting slurry mixtures onto current collectors, a design that can add inactive weight and introduce interfaces that become unstable during repeated cycling. By contrast, the Sejong University team designed a freestanding architecture in which carbon nanofibers, or CNFs, act as both the structural scaffold and conductive framework of the anode.

The researchers then engineered a hydrolysis-condensation reaction on the surface of each fiber so that silicon formed uniformly along the CNF network as a conformal Si/SiOx interphase. A schematic illustration outlines how this step-by-step process produced the final freestanding anode architecture.

That structure is important because it helps the electrode maintain its porous network and electrical connections even as silicon changes volume during repeated cycling.

Microscopy and spectroscopy analyses showed that the silicon-containing layer formed a thin, continuous shell around the carbon nanofiber core without excessive aggregation or overcoating. This helped preserve fiber-to-fiber junctions and open pathways for ion transport.

In electrochemical tests, the anode delivered 727.1 mAh g⁻¹ at 0.1 A g⁻¹. Under a high-rate condition of 1 A g⁻¹, it retained 79.8% of its capacity after 2,000 cycles. In full-cell tests with an NCM622 cathode, it delivered 176.5 mAh g⁻¹ and retained 91.6% of its capacity after 300 cycles.
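The retention figures above are simple ratios of later-cycle to first-cycle capacity. A trivial sketch with hypothetical numbers (the report does not give the first-cycle capacity at 1 A g⁻¹, so the values below are illustrative only):

```python
def capacity_retention(first_cycle_mAh_g: float, later_cycle_mAh_g: float) -> float:
    """Capacity retention in percent, relative to the first cycle."""
    return 100.0 * later_cycle_mAh_g / first_cycle_mAh_g

# Hypothetical example: a cell starting at 600 mAh/g that still delivers
# 478.8 mAh/g after cycling has retained 79.8% of its capacity.
print(round(capacity_retention(600.0, 478.8), 1))  # 79.8
```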

The team also reported reduced charge-transfer resistance during cycling, indicating that the structure could support faster electrochemical transport over extended operation.

According to the researchers, that combination of structural stability and rate capability could make the design useful for applications where both fast charging and long cycle life matter, including electric vehicles and energy storage systems.

"The key difference in this work is that carbon nanofibers were used not simply as a support, but as the structural and conductive backbone of a freestanding silicon anode," said Professor Hyeon-Woo Yang. "By enabling silicon to form uniformly along each fiber, we were able to improve both structural stability and electrochemical performance."

Professor Sun-Jae Kim added, "Silicon anodes have long been limited by structural degradation during repeated cycling. This study suggests a new route to overcome that problem and expand the use of high-capacity silicon anodes in next-generation lithium-ion batteries."

Provided by Sejong University

Monday, April 13, 2026


DIGITAL LIFE


Revealing the hidden logic behind AI's judgments of people

In a world where artificial intelligence is quietly shaping who gets hired, who receives loans, and even how medical decisions are made, a new question is emerging: How does AI judge us? A new study by Prof. Yaniv Dover and Valeria Lerman from Hebrew University suggests the answer is both reassuring and deeply unsettling. The study is published in the journal Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences.

How AI learns to 'trust' people

Drawing on more than 43,000 simulated decisions alongside around a thousand human participants, the research reveals that today's most advanced AI systems, including models similar to ChatGPT and Google's Gemini, do not simply process information. They make judgments about people. And in doing so, they appear to form something that looks a lot like "trust."

But that machine-made trust doesn't work quite like ours.

The study placed both humans and AI in familiar situations: deciding how much money to lend a small business owner, whether to trust a babysitter, how to rate a boss, or how much to donate to a nonprofit founder.

Across these scenarios, a clear pattern emerged.

Both humans and AI favored people who seemed competent, honest, and well-intentioned. In other words, the machines appeared to grasp the basic ingredients of trust: competence, integrity, and benevolence, much like we do.

"That's the good news," says Prof. Dover. "AI is not making random decisions. It captures something real about how humans evaluate one another."

Where machine judgment diverges from humans

But the resemblance stops there—look closer, and the differences become striking.

Faced with the question "Is this a good person?", humans tend to form a general impression, blending multiple traits into a single, intuitive and holistic judgment.

AI does something very different.

It breaks people down into components, scoring competence, integrity, and kindness almost like separate columns in a spreadsheet. The result is a more rigid, "by-the-book" style of judgment: consistent, but less human.

"People in our study are messy and holistic in how they judge others," explains Valeria Lerman. "AI is cleaner and more systematic, and that can lead to very different outcomes."

Bias gets amplified in high-stakes decisions

Alongside these differences, a troubling pattern of amplified bias emerged.

In financial scenarios, such as deciding how much money to lend or donate, AI systems showed consistent and sometimes sizable differences based solely on demographic traits.

For example: Older individuals were frequently given more favorable outcomes, though in some cases the opposite pattern appeared.

Religion also had a significant effect on the outcomes, especially the monetary ones.

Gender also influenced decisions in certain models and scenarios.

These differences appeared even when every other detail about the person was identical.

"Humans have biases, of course," says Prof. Dover. "But what surprised us is that AI's biases can be more systematic, more predictable, and sometimes stronger."

Different models, different moral compasses

Another key insight: there is no single "AI opinion."

Different models often made different judgments about the same person. In some cases, one system rewarded a trait that another penalized.

That means the choice of AI system could quietly shape real-world outcomes.

"Which model you use really matters," Lerman notes. "Two systems can look similar on the surface but behave very differently when making decisions about people."

Why understanding AI's judgment now matters

AI is already being used to screen job candidates, assess creditworthiness, recommend medical actions, and guide organizational decisions.

As these systems move from assistants to decision-makers, understanding how they "think" becomes critical.

The study suggests that while AI can mimic the structure of human judgment, it does so in a more rigid, less nuanced way and with biases that may be harder to detect.

The researchers emphasize that their findings are not a warning against AI, but rather a call for awareness.

"These systems are powerful," says Dover. "They can model aspects of human reasoning in a consistent way. But they are not human and we shouldn't assume they see people the way we do."

As AI becomes more embedded in everyday life, the question is no longer whether we trust machines. It's whether we understand how they trust us.

Provided by Hebrew University of Jerusalem 
