Friday, April 17, 2026


DIGITAL LIFE


What could your voice give away?

With AI, the voice has acquired a new significance. Behind the words lies data that can be used both to diagnose a health problem and to steal someone's identity. Speaking to machines is no longer the stuff of science fiction. Alexa (Amazon) has been present in homes for over a decade, and an increasing number of users now favor voice interactions with chatbots.

Whether users are dictating a message or asking for directions, the shift is not only technical—although AI systems are becoming ever more powerful—but also societal, reflecting how humans engage with machines. Behind the words, however, lies data.

Unlike a password, a voice cannot easily be changed. It is shaped by physiological, linguistic and personal characteristics. This "voiceprint" can identify an individual and reveal sensitive information such as origin or gender. Voice is, therefore, an especially rich form of biometric data.

"When a user interacts with a voice-based system, they not only convey content but also implicit information: emotions, physical traits or behavioral patterns," explains Andrea Cavallaro, professor and head of the Multimedia and Intelligent Sensing Laboratory at EPFL.

The voice indeed contains numerous, sometimes subtle, features like rhythm, accent, tone, speed, intonation, volume or vocabulary that can all reveal something about its owner.
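Many of these cues can be measured directly from a recording. The sketch below illustrates the kind of acoustic features such systems extract, using the open-source librosa library; the feature set and the numeric choices are illustrative assumptions, not the pipeline of any lab mentioned here.

```python
import numpy as np
import librosa

def voice_features(path: str) -> dict:
    """Extract a few privacy-relevant acoustic features from a recording."""
    y, sr = librosa.load(path, sr=16000)  # mono waveform at 16 kHz

    # Fundamental frequency (pitch): correlates with sex, age and emotion
    f0, voiced_flag, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)

    # Loudness contour: a rough proxy for volume and stress patterns
    rms = librosa.feature.rms(y=y)[0]

    # Speaking-rate proxy: syllable-like onsets per second
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    rate = len(onsets) / (len(y) / sr)

    # Timbre: MFCCs, the workhorse features of speaker recognition
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    return {
        "mean_pitch_hz": float(np.nanmean(f0)),
        "pitch_variability": float(np.nanstd(f0)),  # intonation range
        "mean_loudness": float(rms.mean()),
        "onsets_per_second": rate,
        "timbre_vector": mfcc.mean(axis=1),  # compact speaker signature
    }
```

Individually, these numbers look innocuous; combined, they form exactly the kind of "voiceprint" described above.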

Cavallaro's research shows that such information can be exploited by analytical systems, raising significant privacy concerns. Far from being a simple communication channel, voice constitutes a dataset in its own right.

A resource for health care...The potential uses of voice data are numerous, particularly in health care. The same characteristics making voice identifiable also make it highly informative. Subtle variations in speech may reveal neurological disorders, respiratory diseases or emotional states. This is the premise behind Virtuosis AI, a start-up led by EPFL alumna Lara Gervaise, which explores the use of voice as a diagnostic tool.

Voice analysis could offer a noninvasive approach to medical monitoring. However, this promise also entails greater responsibility, as health data remains among the most sensitive categories of personal information.

Legal challenges...In another context, actors and dubbing professionals have taken legal action against companies accused of using their voices to train AI models without consent. The argument is straightforward: a voice is part of a person's identity and is therefore protected under personality or image rights.

At the same time, voice cloning tools are now widely accessible, sometimes even free of charge. It is no longer only the voices of professional actors that can be replicated, but potentially anyone's.

"You can imagine the scenarios: spam phone calls, deceiving relatives, or fabricating audio evidence. The voice has long been perceived as a personal signature. With AI, it becomes a vector for identity theft at scale," warns Cavallaro.

Protecting privacy from the start...How, then, can voice data be protected? One promising avenue is voice anonymization. Cavallaro's work explores ways of transforming speech to preserve intelligibility while masking the speaker's identity or gender. The approach involves generating "ambiguous" voices, reducing the ability of systems to detect sensitive attributes.
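In the simplest possible terms, anonymization means transforming the signal so that it still sounds like speech but no longer sounds like you. The toy sketch below uses a plain pitch shift as the anonymizing step; real systems (such as those studied in the VoicePrivacy research community) rely on far stronger methods, including neural voice conversion.

```python
import librosa
import soundfile as sf

def naive_anonymize(in_path: str, out_path: str, semitones: float = 4.0):
    """Shift pitch to mask average F0, one cue to identity and gender."""
    y, sr = librosa.load(in_path, sr=None)
    # Moderate shifts largely preserve intelligibility while blurring
    # identity cues; this is a toy stand-in for real anonymizers.
    y_anon = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)
    sf.write(out_path, y_anon, sr)
```

Even this crude transformation illustrates the core tension: push the shift too far and the audio becomes unpleasant to listen to; keep it too small and speaker-recognition systems still succeed.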

The challenge lies in balancing utility and privacy. Excessive transformation degrades the quality of the signal, while insufficient modification leaves personal information exposed. This research shows that a compromise is achievable.

"We are seeing a broader shift towards 'privacy by design,' where data protection is embedded from the outset in system development," says Cavallaro.

As the voice becomes a dominant interface, it invites us to rethink the relationship between technology, identity and privacy. Speaking may feel ephemeral, words seem to vanish as soon as they are uttered. Yet with AI, they are captured, analyzed and potentially stored.

Mass adoption...The adoption of voice-enabled AI systems has accelerated rapidly in recent years. Today, nine out of ten organizations use AI in at least one of their departments, according to McKinsey. Companies are moving swiftly from experimentation to large-scale deployment, with voice technologies among the most visible interfaces between humans and machines.

On the consumer side, widespread usage is now well established. As early as 2025, Forbes reported that around 60% of smartphone users regularly used a voice assistant, highlighting a clear increase over recent years.

Globally, the number of voice assistants is estimated at 8.4 billion, more than the world's population. This is explained by the multiple devices used within a single household, including smartphones, televisions and cars.

This rapid adoption is driven not only by technological progress but also by behavioral factors. Advances in natural language processing and generative AI have enabled smoother, conversational, hands-free interactions.

The voice is no longer just about issuing commands: it represents a new form of interaction that is reshaping how we access and process information, services and artificial intelligence itself.

Your voice acts as a rich biometric dataset that can reveal significant personal, physical, and emotional information, especially when analyzed by artificial intelligence. Beyond the words spoken, it acts as a "voiceprint" that can identify you, map your physical appearance, and expose sensitive health data.

Here is what your voice can give away (below):

1. Physical characteristics and identity

-Physical appearance: AI systems can analyze voiceprints to estimate facial features, such as face shape, lip thickness, and nose structure.

-Age and gender: Vocal timbre, rhythm, and pitch can reveal a person's sex and approximate age.

-Aging processes: Changes in vocal cord thickness (often a 1% loss of muscle mass per year after age 50) and reduced respiratory capacity can also give away a speaker's age, with vocal aging showing up as a higher pitch in men and a lower, thicker voice in women after menopause.

2. Physical and mental health status

-Diseases and neurological conditions: Subtle variations in speech can indicate health issues, including Parkinson’s disease, amyotrophic lateral sclerosis (ALS), vocal cord paralysis, or cognitive changes.

-Vocal damage/abuse: Persistent hoarseness or a "gravelly" voice can reveal vocal overuse (nodules or polyps), smoking, or gastroesophageal reflux (GERD).

-Emotional state and stress: The voice acts as an "emotional valve." It can give away stress, sadness, anger, or fatigue.

-Acute illness: Respiratory infections, allergies, or chronic dehydration can be detected through voice quality.

3. Identity and privacy risks

-Personal identification: Because the voice is unique, it is used for biometric identification.

-Voice cloning/identity theft: AI can clone voices with high accuracy using short audio clips, allowing for scams, fraud, or the creation of false evidence.

-Origin/demographics: Accents, vocabulary, and speech patterns can reveal geographic origin or social background. 

4. Behavioral patterns

-Personality traits: Speech tempo, volume, and rhythm can suggest personality traits, including confidence, anxiety, or dominance.

-Emotional reactions: In conversational AI, the voice can betray user frustration or satisfaction. 

To protect this information, researchers are developing voice anonymization tools that alter the voice to mask identity, gender, or sensitive attributes while keeping speech intelligible.

Provided by École Polytechnique Fédérale de Lausanne

 

TECH


Simple robots inspired by ants collectively build and excavate

When it comes to teamwork, we could all learn something from ants. These relatively simple, small-brained animals are famous for their ability to collectively build massive, intricate, climate-controlled structures, despite having neither a blueprint nor a worksite foreman.

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Faculty of Arts and Sciences, who have long been fascinated by physical phenomena that orchestrate extraordinarily complex natural processes—from ant colonies, to brain folds, to the human gut—have developed a fleet of cooperative robots that, very much like ants, can spontaneously organize to build and dismantle structures. They are governed only by environmental cues and minimal physical rules.

Their study, published in PRX Life, demonstrates how a decentralized swarm of agents, whether robots or insects, can coordinate to complete tasks without central control, offering insights into both biological systems and future autonomous robotic systems.

The study was led by Professor L. Mahadevan, whose lab has studied social insect communities for many years, and previously showed they could mimic ants' excavation and escape prowess using robots.

"Our new study shows how simple, local rules can lead to the emergence of complex task completion that is self-organized and thus robust and adaptive," said Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, Organismic and Evolutionary Biology, and Physics at SEAS and FAS.

"We also introduce the concept of 'exbodied intelligence," where collective cognition arises not solely from individual agents, but from their ongoing interaction with an evolving environment."

The work could lead to many potential applications, from autonomous construction in hazardous environments, to planetary exploration, to experimental models for studying animal behavior.

Identifying parameters that produce collective behaviors...In their latest study, the team refined their robotic platform to show both excavation and building performance while, crucially, identifying the key parameters needed to achieve those behaviors.

Social insects like ants and termites build complex structures using a biological technique called stigmergy, in which individuals modify their environments and respond to those modifications.

Ants emit pheromones—like a chemical perfume—from their bodies that fellow ants then respond to. Inspired by this natural cascade, Mahadevan's lab uses robotic ants, a.k.a. RAnts, which respond to "photormones"—light fields as digital stand-ins for pheromone fields.

Each robot senses gradients in the photormone field and leaves behind signals as it moves, creating a feedback loop between robots and their environment. This enables coordination across the swarm.

The agents operate using only a few rules: follow gradients in the photormone signal; pick up and transport building blocks in response to these cues; and deposit materials when signal thresholds are met.

These simple rules nonetheless trigger sophisticated behaviors. Robots spontaneously cluster to form nucleation sites, where structures begin to emerge. These sites arise through a mechanism called trapping instability, in which robots become temporarily confined by the signals they generate. As more robots converge on these locations, construction accelerates, producing organized aggregates of building material.

An illustration of how the collective, decentralized behavior of ants has inspired experiments with cooperative robots that can complete tasks without central control.  Credit: Harvard John A. Paulson School of Engineering and Applied Sciences

Swarms that switch between construction and dismantling...Echoing previous work, the researchers found that the swarm's behavior can be tuned by adjusting two key parameters: cooperation strength, or how strongly robots follow the signal gradient; and deposition rate, or whether robots deposit or remove material. By changing these parameters, they showed that the swarm could switch between constructing new structures and dismantling existing ones.
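The flavor of these rules and knobs can be captured in a toy grid-world simulation. The following sketch is an illustration of stigmergy written for this article, with made-up constants; it is not the Harvard team's controller code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, AGENTS, STEPS = 50, 30, 2000
photormone = np.zeros((N, N))      # light field standing in for pheromone
material = np.zeros((N, N), int)   # deposited building blocks
pos = rng.integers(0, N, size=(AGENTS, 2))

COOPERATION = 2.0   # how strongly agents follow the signal gradient
DEPOSITION = +1     # +1 builds, -1 dismantles
THRESHOLD = 0.5     # signal level that triggers acting on material

for _ in range(STEPS):
    for i, (x, y) in enumerate(pos):
        # Sense the four neighbors; move up-gradient with a probability
        # set by cooperation strength, otherwise take a random step.
        nbrs = [((x + dx) % N, (y + dy) % N)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        if rng.random() < 1 - np.exp(-COOPERATION):
            x, y = max(nbrs, key=lambda p: photormone[p])
        else:
            x, y = nbrs[rng.integers(4)]
        pos[i] = x, y
        photormone[x, y] += 0.2            # leave a signal behind
        if photormone[x, y] > THRESHOLD:   # threshold rule for material
            material[x, y] = max(0, material[x, y] + DEPOSITION)
    photormone *= 0.99                     # signals slowly evaporate
```

Because agents amplify the very signals they follow, random encounters snowball into persistent clusters, a cartoon version of the trapping instability described above; flipping the deposition parameter to -1 turns the same swarm from builders into dismantlers.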

The experiments were accompanied by a theoretical framework that describes how agent density, communication signals, and environmental structure evolve together. The model extends classical biological aggregation theories to account for environments that change dynamically as agents act on them.

A collective of ants, known as a colony or anthill, is a society organized into castes (queen, workers, and males) that relies on efficient individual behavior, chemical communication through pheromones, and a division of tasks to ensure the group's survival. This collective intelligence allows ants to solve complex problems and optimize resources.

Main aspects of social organization (below):

Castes and functions: The queen is responsible for reproduction, while the workers (females) perform tasks such as foraging, cleaning, and caring for the offspring. Males appear during specific periods for reproduction.

Communication: They use pheromones to mark paths, indicate food sources, and warn of dangers.

Teamwork: Specialist ants analyze the environment and coordinate efforts to move large objects or overcome obstacles.

Energy efficiency: They toss grains of soil or cut leaves into smaller pieces to optimize transport and save energy.

Collective intelligence: The colony functions as a system that, even without central coordination, solves complex problems efficiently.

The social structure of ants is considered an example of evolutionary success, allowing them to survive in diverse environments through group work.

Provided by Harvard John A. Paulson School of Engineering and Applied Sciences 

Thursday, April 16, 2026


TECH


Turning CO₂ from urban waste into useful consumer products

EU researchers are turning carbon emissions from urban waste into everyday household products—from cleaning liquids to leather. Europe's cities emit huge amounts of greenhouse gases into the atmosphere. Two essential urban services—waste incineration and wastewater treatment—are among the biggest contributors to municipal CO2 emissions in the EU.

These systems are vital for public health and urban life, yet they produce emissions that are difficult to eliminate entirely. But what if that CO2 did not have to go to waste?

For an international group of researchers, urban carbon pollution presents an opportunity. Working together in the WaterProof initiative, they are developing a way to capture CO2 from these processes and convert it into formic acid: a simple, highly versatile chemical used across many industries.

This could allow emissions from waste incinerators and wastewater to be turned into the cleaning products under our sink, or even the leather on our shoes.

Turning a problem into a resource...Efforts to tackle climate change focus largely on renewable energy, electrification, and improved efficiency. But some sources remain stubbornly hard to eliminate.

"Some emissions are difficult to stop," said Annelie Jongerius, an electrochemist and program manager at Dutch chemical company Avantium, which coordinates the research.

One option is to capture the CO2 and store it underground. But the WaterProof team is exploring a more circular alternative: keeping carbon in use rather than locking it away.

"It would be nicer if we could use it," Jongerius said. "At the same time, we need alternatives to fossil feedstocks for producing chemicals."

That challenge is particularly visible at facilities like those operated by Dutch waste management company HVC, which runs two major waste incinerators in the Netherlands.

"We have to take in whatever waste society produces," said Jan Peter Born, HVC's waste-to-energy innovation manager. "We have no means of regulating CO2 emissions, apart from encouraging people to buy less and recycle more."

HVC already captures some CO2 and sells it to greenhouse farmers, who use it to increase the yields of crops such as tomatoes and cucumbers. But this is only a partial solution.

"Most of the CO2 administered to the plants is released again through the greenhouse roof," Born explained. "From our legal perspective, it's a delayed emission. It is the farmer who achieves the emission reduction as he avoids gas-firing to produce CO2."

The WaterProof researchers aim to go a step further by turning captured carbon into useful products that keep it out of the atmosphere for longer.

From CO2 to cleaning products...At the heart of the WaterProof innovation is an electrochemical process that converts captured CO2 into formic acid using renewable electricity.

"It's one of the simplest conversions you can make," said Jongerius.

An electrical current drives the reaction in a specialized cell, reducing CO2 into formic acid. Because the system runs on renewable electricity and uses waste-derived carbon, it reduces reliance on fossil-based raw materials.

The process may also offer additional benefits. In an electrochemical cell, two reactions take place at the same time, one at each electrode. While the WaterProof team focuses on converting CO2 into formic acid, they have also explored pairing this with a second reaction that produces hydrogen peroxide and related compounds.
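In textbook terms, the chemistry can be sketched as a pair of half-reactions, one at each electrode. This is a general-chemistry illustration; the article does not specify the project's actual cell chemistry, catalysts or operating conditions:

Cathode (CO2 reduction): CO2 + 2 H+ + 2 e− → HCOOH (formic acid)
Anode (conventional): 2 H2O → O2 + 4 H+ + 4 e−
Anode (paired alternative): 2 H2O → H2O2 + 2 H+ + 2 e− (hydrogen peroxide)

Running a value-producing reaction at each electrode, rather than spending one side on ordinary oxygen evolution, is the appeal of the paired approach.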

These substances can help break down stubborn pollutants in wastewater, including residues from pharmaceuticals and pesticides. However, this part of the process is still at an early stage and is not being implemented in the current demonstration system.

The team is testing their CO2-derived formic acid in eco-friendly cleaning products such as toilet and surface cleaners.

"It performs exactly the same as conventionally produced formic acid," Jongerius said. "It's the same molecule."

Beyond cleaning, the project is exploring the use of CO2-derived formic acid in leather tanning. While the acid can be used for all types of leather, the team is currently working with Icelandic company Nordic Fish Leather to bring eco-friendly fish leather—a more sustainable alternative to traditional cattle-based leather—to market.

Scaling up for real-world impact...While the chemistry is promising, scaling up is the next challenge. Building on earlier research, the team is now working on a large-scale pilot unit in which multiple electrochemical cells are stacked together, increasing the volume of CO2 that can be processed. If successful, it will pave the way for commercial‑scale plants.

The modular design allows the system to be adapted to different sites, from wastewater plants to incinerators. The aim is to demonstrate the WaterProof process in the summer of 2026, showing that a fossil fuel-free production chain can operate under real-world conditions.

Such systems could eventually be integrated into urban infrastructure, turning cities into hubs for circular chemical production rather than sources of emissions.

Recovering valuable materials from waste...The potential of the work being carried out goes beyond carbon reuse. The researchers are also exploring how formic acid can be used to recover valuable materials from waste streams.

By combining it with other compounds, they are developing deep eutectic solvents—low-toxicity liquids capable of dissolving and binding to metals in waste so that the metals can be extracted.

Many valuable materials end up in incinerator ash and wastewater sludge, including copper, lithium, cobalt, and even small amounts of gold—all critical for modern technologies and the green transition.

HVC already uses mechanical processes to recover metals, separating heavier particles from ash in a process similar to gold panning. But this produces mixed metal streams that are less valuable. The new solvents could allow more precise separation.

"These eutectic solvents can be tailored to target specific metals," Born said. "That means you could recover individual materials rather than mixtures, which increases their value."

However, economic realities remain a barrier. Gold is the only recovered metal that commands a decent price, Born explained. For many others, including rare earths, the market price is still too low to justify the cost.

This raises broader questions about policy and priorities, particularly as demand for critical materials continues to grow: how much societies are willing to subsidize recovery from waste, and whether strategic value should win out over purely market‑driven decisions.

Closing the loop...This kind of "waste-to-resource" thinking is gaining traction across Europe. New EU rules planned for 2026 aim to make recycled materials more widely available—and more widely used.

If successful, they could help turn circular ideas like those behind WaterProof into everyday reality, supporting Europe's ambition to lead the world in circular production by 2030.

By linking carbon capture, chemical production, water treatment, and material recovery, the researchers are bringing together multiple elements of that vision in a single system.

For Jongerius, the concept is both practical and symbolic.

"If you take CO2 from wastewater, turn it into a product, and then use that product to clean your toilet so it flows back into the wastewater system, you create a complete loop," she said. "It is the ultimate example of the circular economy."

Provided by Horizon: The EU Research & Innovation Magazine


DIGITAL LIFE


Thousands of AI‑written, edited or 'polished' books are being sold, an eerie echo of Orwell's 'novel‑writing machines'

At some point in the next several months, I am hoping to receive a modest check as a member of the class covered in the class-action settlement Bartz v. Anthropic.

In 2025, the artificial intelligence company Anthropic, best known for creating the chatbot Claude, agreed to pay up to US$1.5 billion to thousands of authors after a judge ruled that the company had infringed upon their copyrights.

When I first learned about the settlement, I assumed that Anthropic was primarily interested in teaching Claude about the subject of my stolen work: the former socialist activist, British Labour politician and feminist Ellen Wilkinson.

It did not initially occur to me that Claude might also be learning about how I, Laura Beers, political historian, craft my sentences and translate my voice to the page.

Yet there is increasing evidence that chatbots like Claude can be trained not only to regurgitate an author's content, but also to mimic their voice. In March 2026, journalist Julia Angwin filed a class action suit against the owners of Grammarly, alleging that the company misappropriated her and other writers' identities to build its "Expert Review" AI tool, which offers to give editorial feedback in the voices of various authors, living and dead.

That a machine might use my writing not only to learn about my subject matter, but also to analyze and ultimately mimic my authorial voice, points to a future that George Orwell envisioned with eerie prescience. In his 1949 dystopian novel "1984," Orwell imagined "novel-writing machines" capable of mass-producing literature, employing programmed mechanical "kaleidoscopes" as substitutes for individual artistic process.

'In place of a human mind, a statistical average'...To what extent, I wondered, could Claude mimic my own voice? I prompted the chatbot to write an essay "in the style of Laura Beers" about an early 20th-century politician. I struggled to recognize myself in the resulting prose. But maybe Claude, having only consumed one of my books, didn't have enough to go on to really nail my tone.

I decided to try again. This time, I would ask it to write a brief essay in the style of Orwell. The subject of my book "Orwell's Ghosts: Wisdom and Warnings for the Twenty-First Century," Orwell is one of the most distinctive literary essayists and prose stylists of the 20th century.

Most of Orwell's work is in the public domain, meaning it could have legally been used to train Claude.

I prompted Claude to write an essay "in the style of George Orwell about the threat to individual identity and freedom of thought posed by AI and particularly by the ability of 'novel writing machines' to produce literature."
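Mechanically, the experiment is trivial to reproduce. Here is a minimal sketch using Anthropic's Python SDK; the model name is a placeholder assumption, and any current Claude model would do.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; pick any available model
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": ("Write an essay in the style of George Orwell about the "
                    "threat to individual identity and freedom of thought "
                    "posed by AI, and particularly by the ability of "
                    "'novel-writing machines' to produce literature."),
    }],
)
print(response.content[0].text)
```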

Primed with ample source material, the AI did a passable job:

"Here is a type of comfort, familiar to anyone who has ever been slowly dispossessed of something, that consists not in denying the loss but in not yet noticing it. The frog does not leap from the gradually heated water. The citizen does not protest the gradual narrowing of the permitted thought. And the reader, it seems, does not mourn the novel until the novel has already been replaced by something that resembles it in every outward particular—same chapters, same characters, same approximate sequence of feeling—yet contains, in place of a human mind, a statistical average of all the human minds that came before it."

The final sentence about the statistical average rings false. But Orwell would, I suspect, have liked the image of the slowly boiling frog. "Here is a type of comfort" is also a phrase that Orwell might well have written.

I am skeptical that anyone would classify Claude's efforts as indistinguishable from Orwell's prose. But when it comes to machine-produced "literature," perhaps it doesn't really matter whether it can fully approximate original art, as long as it's good enough to function as entertainment and distraction for the masses.

Jam, bootlaces and books...This was Orwell's own dispirited suggestion in "1984." With the help of "novel-writing machines," the employees of the Ministry of Truth—the government department responsible for controlling information and rewriting history—are able to mass-produce not only novels, but also "newspapers, films, textbooks, telescreen programs [and] plays."

They churn out "rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes" and "films oozing with sex," along with cheap pornography intended for the "proles," as the uneducated working classes of Big Brother's Oceania were known.

The technology disgusts Orwell's protagonist, Winston Smith, who pointedly decides to purchase a diary and pen to write down his own independent thoughts. But to Julia, Winston's nymphomaniac, anti-intellectual lover who works as a mechanic servicing the machines, "Books were just a commodity that had to be produced, like jam or bootlaces."

'Full-length novels in seconds'...According to estimates, thousands of books for sale on Amazon have been written in whole or in part using AI. In other words, today's AI is also being used to mass-produce literature like jam or bootlaces.

Many of these works are not fully machine-written. Instead, they've been, as the AI writing tool Sudowrite advertises, "polished by AI." With its "Rewrite" function, the company promises to give users an opportunity to "refine your prose while staying true to your style, with multiple AI-suggested revisions to choose from." The service is akin to the "touching up" provided by the Ministry of Truth's Rewrite Squad in "1984."

Other books for sale on Amazon are, however, entirely machine-generated. The AI writing tool Squibler promises that if you give it an overarching prompt, it can produce "Full-Length Novels in Seconds."

The potential of AI-generated "literature" to turn a quick-and-easy profit ensures that readers will continue to encounter more of this content in the future, especially as AI's large language models become more refined. Already, studies have shown that readers cannot easily distinguish AI-generated forgeries from original prose.

Last year, I had lunch with a screenwriter friend in Los Angeles. He told me that his colleagues are particularly nervous about the use of AI to produce sequels. Once you have an established cast of characters for a movie franchise like, say, "Fast & Furious," audiences will likely see the next installment whether it's written by man or machine.

Yet my own brief experiments with Claude give me at least some hope for the future of literary art. A chatbot like Claude might be able to absorb and analyze "a statistical average of all the human minds that came before it," but without the input of actual human experience and sensibility, it is hard to envisage such systems ever producing true art.

Whether AI can produce the next George Orwell novel or essay remains to be seen. That it can and will churn out an increasing volume of popular fiction and screenplays like "Fast & Furious 25" seems less in doubt.

Provided by The Conversation

Wednesday, April 15, 2026


TECH


Printed neurons communicate with living brain cells

Northwestern University engineers printed artificial neurons that don't just imitate the brain—they talk to it. In a new study, the Northwestern team developed flexible, low-cost devices that generate electrical signals realistic enough to activate living brain cells. When tested on slices of tissue from mouse brains, the artificial neurons successfully triggered responses from real neurons, demonstrating a new level of biocompatibility.

The work marks a step toward electronics that can communicate directly with the nervous system, with potential applications in brain-machine interfaces and neuroprosthetics, including implants for hearing, vision and movement.

It also lays the groundwork for more efficient, brain-like computing systems. By mimicking how neurons signal—a key feature of the brain, which is the most energy-efficient computer known—futuristic systems could perform complex operations using far less power than today's data-hungry technologies.

The study is published in the journal Nature Nanotechnology.

"The world we live in today is dominated by artificial intelligence (AI)," said Northwestern's Mark C. Hersam, who led the study. "The way you make AI smarter is by training it on more and more data.

"This data-intensive training leads to a massive power-consumption problem. Therefore, we have to come up with more efficient hardware to handle big data and AI. Because the brain is five orders of magnitude more energy efficient than a digital computer, it makes sense to look to the brain for inspiration for next-generation computing."

An expert in brain-like computing, Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern's McCormick School of Engineering, professor of medicine at Northwestern University Feinberg School of Medicine and professor of chemistry at Northwestern's Weinberg College of Arts and Sciences.

He is also the chair of the Department of Materials Science and Engineering, director of the Materials Research Science and Engineering Center and a member of the International Institute for Nanotechnology. Hersam co-led the study with Vinod K. Sangwan, a research associate professor at McCormick.

From rigid silicon to dynamic brains...As computing tasks become more complex and data-intensive, computers meet these demands by adding more identical components—billions of transistors packed onto rigid, two-dimensional silicon chips. Each transistor behaves the same way. And, once fabricated, those systems remain fixed.

The brain operates in a strikingly different way. Rather than comprising uniform building blocks, the brain relies on diverse types of neurons—each performing specialized roles—organized across regions. These soft, three-dimensional networks constantly change, forming and reshaping connections over time as people learn and adapt.

"Silicon achieves complexity by having billions of identical devices," Hersam said. "Everything is the same, rigid and fixed once it's fabricated. The brain is the opposite. It's heterogeneous, dynamic and three-dimensional. To move in that direction, we need new materials and new ways to build electronics."

While other artificial neurons do exist, they fall short of biological realism. Most produce simplified signals, forcing engineers to rely on large, energy-intensive networks of devices to achieve complex behavior.

An aerosol jet printer in Hersam's laboratory deposits electronic inks onto a flexible polymer substrate. Credit: Mark Hersam/Northwestern University

Turning an imperfection into a feature...To move closer to a biological model, Hersam's team developed artificial neurons using soft, printable materials that better mimic the brain's structure and behavior. The backbone of that advance is a series of electronic inks, formulated from nanoscale flakes of molybdenum disulfide (MoS2), which acts as a semiconductor, and graphene, which serves as an electrical conductor.

Using a specialized printing technique called aerosol jet printing, the researchers deposited these inks onto flexible polymer substrates.

In the past, other researchers viewed the stabilizing polymer in the inks as a problem that interfered with electrical current flow, so they burned the polymer away after printing the electronic circuit. But Hersam leveraged this minor imperfection to add brain-like functionality to his device.

"Instead of fully removing the polymer, we partially decompose it," he said. "Then, when we pass current through the device, we drive further decomposition of the polymer. This decomposition occurs in a spatially inhomogeneous manner, leading to formation of a conductive filament, such that all the current is constricted into a narrow region in space."

This narrow region becomes a localized pathway that produces a sudden, neuron-like electrical response. The result is a new type of artificial neuron that can generate a rich range of electrical signals. Instead of generating simple, one-off pulses, the new device produces more complex signaling patterns—including single spikes, continuous firing and bursting patterns—that resemble how real neurons communicate.

By capturing this signaling diversity, each neuron can encode more information and perform more sophisticated functions. And that can reduce the number of components needed in a computing system, drastically improving overall efficiency.
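For readers who want a feel for what "signaling diversity" means, compact mathematical neuron models show the same phenomenon: one equation whose parameters select between firing patterns. The sketch below uses the well-known Izhikevich model as an analogy only; it describes none of the device physics reported here.

```python
import numpy as np

def izhikevich(a, b, c, d, I=10.0, T=500.0, dt=0.25):
    """Izhikevich (2003) neuron: (a, b, c, d) select the firing pattern,
    much as material parameters tune the printed neurons' behavior."""
    v, u, spikes = -65.0, b * -65.0, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30:                    # spike: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

tonic = izhikevich(a=0.02, b=0.2, c=-65, d=6)           # continuous firing
bursting = izhikevich(a=0.02, b=0.2, c=-50, d=2)        # grouped bursts
phasic = izhikevich(a=0.02, b=0.25, c=-65, d=6, I=0.5)  # a single spike
```

A device that natively produces this range of behaviors can stand in for what would otherwise require a network of simpler components.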

Putting artificial neurons to the test...To test whether its artificial neurons truly could interface with biology, Hersam's team collaborated with Indira M. Raman, the Bill and Gayle Cook Professor of Neurobiology at Weinberg. Raman's team applied electrical signals from the artificial neurons to slices of mouse cerebellum.

They found the artificial voltage spikes matched key biological features, including timing and duration of living neuron voltage spikes. This reliably triggered activity in real neurons, activating neural circuits in a way similar to natural signals.

"Other labs have tried to make artificial neurons with organic materials, and they spiked too slowly," Hersam said. "Or they used metal oxides, which are too fast. We are within a temporal range that was not previously demonstrated for artificial neurons. You can see the living neurons respond to our artificial neuron. So, we've demonstrated signals that are not only the right timescale but also the right spike shape to interact directly with living neurons."

The approach comes with several environmentally friendly advantages. In addition to improving energy efficiency, the neuron's manufacturing process is simple and low-cost. Because the printing process is additive—placing material only where it's needed—it also reduces waste.

"To meet the energy demands of AI, tech companies are building gigawatt data centers powered by dedicated nuclear power plants," Hersam said.

"It is evident that this massive power consumption will limit further scaling of computing, since it's hard to imagine a next-generation data center requiring 100 nuclear power plants. The other issue is that when you're dissipating gigawatts of power, there's a lot of heat. Because data centers are cooled with water, AI is putting severe stress on the water supply. However you look at it, we need to come up with more energy-efficient hardware for AI."

Provided by Northwestern University


TECH


China tests underwater cutter at 3,500 meters: innovation or instrument of intimidation?

A Chinese deep-sea mission has successfully tested an advanced device capable of cutting through underwater structures such as submarine cables at depths of thousands of meters.

China has successfully tested a specialized deep-sea electro-hydrostatic actuator capable of severing undersea telecommunications cables at depths of 3,500 meters, marking a significant leap in its deep-sea intervention (i.e. military) capabilities. The trial, conducted by researchers from the Chinese Academy of Sciences and reported by state media, confirms that the device can operate in the abyssal zone where most of the world's critical internet and data infrastructure resides.

With the electro-hydrostatic design, the tool is self-contained and highly efficient, potentially allowing it to be mounted on small, unmanned underwater vehicles (UUVs). During the latest tests, the cutter successfully sliced through high-tension cables without the need for a massive surface support fleet and cumbersome umbilicals. 

Strategically, the ability to operate at 3,500 meters places almost all of the South China Sea’s seabed infrastructure within reach. While China has officially framed the technology as a tool for deep-sea maintenance, salvage, and scientific exploration, state-affiliated reports have hinted at its deployment readiness for more assertive roles. The compact nature of the actuator means it could be deployed from standard research vessels or even commercial ships, making detection of such activities significantly more difficult for foreign maritime powers.

The "Haiyang Dizhi 2" research vessel completed its first deep-sea scientific mission of 2026 on Saturday, according to the Ministry of Natural Resources.

The expedition included a cutting test of a deep-sea electro-hydrostatic actuator at a depth of 3,500 meters (11,483 feet), using technology that has drawn attention for its potential military use.

"The sea trial has bridged the 'last mile' from deep-sea equipment development to engineering application," the official China Science Daily reported on Saturday, suggesting the equipment was poised for actual deployment.

According to the report, the "Haiyang Dizhi 2" completed the first deep-sea mission of the year on April 11. The electro-hydrostatic actuator (EHA) combines hydraulics, an electric motor, and a control unit in a single device, eliminating the need for lengthy and cumbersome external oil piping. The device was reportedly further hardened against deep-sea pressure and corrosion, enabling "precise mechanical tasks" at great depths. A September report cited by the article notes that this technology has previously been touted "for cutting subsea cables and operating deep-sea grabs."

The project isn't purely destructive in nature, with obvious applications in the repair and construction of underwater oil and gas pipelines. However, given the global context and the timing, the implications for military and nefarious use are obvious. Several projects from China's undersea initiative have reportedly drastically improved the effectiveness of such tasks. During a 2022 offshore pipeline repair, crews needed five hours "just to make a single cut" on an 18-inch section of damaged pipe. Just one year later, remotely operated homegrown vessels could cut pipes up to 38 inches in diameter at a depth of 2,000 feet, including one repair in which an eight-inch pipe was cut through in just 20 minutes. The latest testing extends these capabilities to at least 3,500 meters, almost 11,500 feet.

Most pressingly, China's test highlights a growing vulnerability in the physical layer of the digital world. International law regarding the protection of undersea cables remains murky, particularly in international waters where these new devices can operate. The quiet efficiency of the electro-hydrostatic cutter could mean that the next major disruption to global comms may not come just from a cyberattack, but also from a mechanical blade in the deep.

The Haiyang Dizhi 2 (also known as Haiyang Dizhi Shihao, IMO: 9795751) is a high-tech geological research and survey vessel operating under the Chinese flag. Built in 2017, the vessel measures approximately 75.8 meters in length and 15.4 meters in width.

Highlights and recent activities:

State-of-the-art technology: The vessel is equipped with advanced systems, including a 150-ton active offset offshore crane, a 10-kilometer fiber optic winch, and a geological winch. It has a 730-square-meter deck and a helicopter platform.

Subsea operation capability: In April 2026, the vessel successfully conducted tests of an electro-hydrostatic actuator (EHA) at depths exceeding 3,500 meters. This technology is described as capable of performing precise mechanical tasks at great depths, such as cutting submarine cables and operating grabs.

Fuel ice research: In October 2025, the vessel carried out research on combustible ice (methane hydrate) in deep water, using the remotely operated vehicle (ROV) "Haima" at a depth of 1,522 meters to collect samples.

Scientific missions: The vessel is frequently used for deep-sea research, with missions reported in the South China Sea.

The vessel plays an important role in China's deep-sea marine research activities, with capabilities ranging from geological studies to the handling of subsea infrastructure.

mundophone

Tuesday, April 14, 2026



DIGITAL LIFE




Tiny cameras in earbuds let users talk with AI about what they see

University of Washington researchers developed the first system that incorporates tiny cameras into off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. For instance, a user might turn to a Korean food package and say, "Hey Vue, translate this for me." They'd then hear an AI voice say, "The visible text translates to 'Cold Noodles' in English."

The prototype system, called VueBuds, takes low-resolution, black-and-white images, which it transmits over Bluetooth to a phone or other nearby device. A small artificial intelligence model on the device then answers questions about the images within around a second. For privacy, all of the processing happens on the device, a small light turns on when the system is recording, and users can immediately delete images.

The team presented its research April 14 at the CHI 2026 conference in Barcelona. The study is published in the Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems.

"We haven't seen most people adopt smart glasses or VR headsets, in part because a lot of people don't like wearing glasses, and they often come with privacy concerns, such as recording high-resolution video and processing it in the cloud," said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "But almost everyone wears earbuds already, so we wanted to see if we could put visual intelligence into tiny, low-power earbuds, and also address privacy concerns in the process."
Cameras use far more power than the microphones already in earbuds, so using the same sort of high-res cameras as those in smart glasses wouldn't work. Also, large amounts of information can't stream continuously over Bluetooth, so the system can't run continuous video.
The team found that using a low-power camera—roughly the size of a grain of rice—to shoot low-resolution, black-and-white still images limited battery drain and allowed for Bluetooth transmission while preserving performance.

There was also the matter of placement.

"One big question we had was: Will your face obscure the view too much? Can earbud cameras capture the user's view of the world reliably?" said lead author Maruchi Kim, who completed this work as a UW doctoral student in the Allen School.

The team found that angling each camera 5–10 degrees outward provides a 98–108 degree field of view. While this creates a small blind spot when objects are held closer than 20 centimeters from the user, people rarely hold things that close to examine them—making it a non-issue for typical interactions.

Researchers also discovered that while the vision language model was largely able to make sense of the images from each earbud, having to process images from both earbuds slowed it down. So they had the system "stitch" the two images into one, identifying overlapping imagery and combining it. This allows the system to respond in one second—quick enough to feel like real-time for users—rather than the two seconds it takes with separate images.
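The stitching step itself is a standard computer-vision operation. As a rough illustration, the sketch below uses OpenCV's high-level stitching API as a stand-in for whatever on-device pipeline the UW team actually built, which the article does not detail; the filenames are placeholders.

```python
import cv2

# Placeholder filenames for the two earbud frames. OpenCV's stitcher
# expects 3-channel images, so the grayscale frames are loaded as BGR.
left = cv2.imread("left_earbud.png")
right = cv2.imread("right_earbud.png")

# The stitcher detects overlapping features, estimates the geometric
# transform between the two views, and blends them into one wider frame.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, merged = stitcher.stitch([left, right])

if status == cv2.Stitcher_OK:
    cv2.imwrite("merged_view.png", merged)  # one image -> one model call
```

Feeding the model a single merged frame instead of two overlapping ones is what buys back the missing second.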

The team then had 74 participants compare recorded outputs from VueBuds with outputs from Ray-Ban Meta Glasses in a series of tests. Despite VueBuds using low-resolution images with greater privacy controls and the Ray-Bans taking high-res images processed on the cloud, the two systems performed equivalently. Participants preferred VueBuds' translations, while the Ray-Bans did better at counting objects.

Sixteen participants also wore VueBuds and tested the system's ability to translate and answer basic questions about objects. VueBuds achieved 83%–84% accuracy when translating or identifying objects and 93% when identifying the author and title of a book.

This study was designed to gauge the feasibility of integrating cameras in wireless earbuds. Since the system only takes grayscale images, it can't answer questions that involve color in the scene.

The team wants to add color to the system—color cameras require more power—and to train specialized AI models for specific use cases, such as translation.

"This study lets us glimpse what's possible just using a general purpose language model and our wireless earbuds with cameras," Kim said. "But we'd like to study the system more rigorously for applications like reading a book—for people who have low vision or are blind, for instance—or translating text for travelers."


Provided by University of Washington
