
mundophone
Smartphone and Technology
Saturday, April 18, 2026

TECH

TUM SEACLEAR 2.0: the system that is transforming ocean cleanup
An underwater robot with artificial intelligence is already operating in complex environments, identifying debris and acting with precision. What it does underwater could change the future of the seas.
For years, ocean cleanup has been associated with visible actions: nets collecting plastic on the surface, campaigns on beaches, impactful images on social media. But the real problem is often far from sight — at the bottom of the sea. That's where tons of waste remain forgotten. Now, a new technology promises to act precisely at this critical point, where humans can hardly operate.
The proposal goes far beyond a simple robot. It is an integrated system that combines different technologies to act in a coordinated manner. While an underwater vehicle performs the collection, other units provide support on the surface and in the air.
An autonomous main vessel acts as a base of operations. Near it, an auxiliary boat helps with logistics. And, above all, an aerial drone contributes with strategic vision, mapping areas and identifying potential targets.
This set allows for something essential: understanding the environment before acting. The seabed is not a simple space. There is low visibility, unpredictable currents, and a constant mix of natural and artificial elements.
In tests conducted in port environments, the system demonstrated the ability to locate and remove various objects — from abandoned fishing nets to tires and plastic fragments. All this without causing damage to the surroundings.
This approach changes the paradigm. Instead of isolated actions, a model of continuous, coordinated, and potentially scalable operation emerges.
The artificial intelligence that decides what should be removed...The great differentiator of the technology is not only in the strength or collection capacity, but in the decision-making. The system uses artificial intelligence to distinguish what is trash from what is part of the ecosystem.
It may seem simple, but it is not. On the seabed, objects may be covered by organisms, partially buried, or confused with natural formations. Differentiating a rock from debris requires more than basic sensors.
For this, the system was trained with thousands of underwater images. Based on these references, it can recognize patterns, identify objects, and even reconstruct three-dimensional models of the environment.
This analysis allows for precise planning of each movement. The robotic arm, equipped with multiple contact points, applies enough force to remove heavy objects, but delicately enough not to damage fragile items or disturb the surrounding marine life.
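The article doesn't specify the gripper's actual control law, but the balance it describes can be framed as a back-of-the-envelope friction-grip calculation: the arm must squeeze hard enough that friction holds the object's weight, yet below the force at which a fragile item would break. A minimal sketch, with all parameters hypothetical:

```python
G = 9.81  # gravitational acceleration, m/s^2

def grip_force_n(mass_kg, friction_coeff=0.6, safety_factor=1.5, damage_limit_n=None):
    """Return the normal force (newtons) a gripper must apply so that
    friction alone supports an object of the given mass, or None when
    that force would exceed the object's damage threshold."""
    required = safety_factor * mass_kg * G / friction_coeff
    if damage_limit_n is not None and required > damage_limit_n:
        return None  # too fragile to lift this way; another strategy is needed
    return required

# A 2 kg tire fragment: roughly 49 N of grip needed.
print(grip_force_n(2.0))
# A fragile 2 kg object that cracks above 30 N: the grasp is refused.
print(grip_force_n(2.0, damage_limit_n=30.0))
```

The interesting case is the second one: a controller built around such a window knows when *not* to grasp, which is exactly the "cleaning without destroying" constraint the researchers describe.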
This balance is crucial. Cleaning without destroying has always been one of the greatest challenges of interventions in natural environments.
Furthermore, the robot was designed to operate stably at depth. A special flotation system reduces abrupt movements and prevents sediment suspension, maintaining visibility and protecting the local ecosystem.
Operating at depth is one of the most relevant aspects of this advancement. Beyond certain levels, human intervention becomes complex, expensive, and risky. Traditional equipment also faces limitations.
It is in this scenario that automation gains real value. The robot can operate for extended periods, maintaining precision and efficiency, without the risks associated with direct human presence.
In addition, its connection to the surface ensures a continuous supply of energy and operational control, allowing for adjustments when necessary. This creates a balance between autonomy and supervision.
The potential impact goes beyond waste collection. By enabling regular operations in previously inaccessible areas, the technology paves the way for a new form of environmental management of the oceans.
It's not just about removing existing trash, but about continuously monitoring, identifying pollution patterns, and acting preventively.
A future where the oceans can begin to recover... Despite the progress, the researchers themselves acknowledge that this technology does not solve the problem at its source. Waste production remains the main challenge.
However, it offers something that was missing: an effective tool to deal with what is already in the environment.
In a scenario where the oceans accumulate decades of pollution, solutions like this represent a concrete step. They are not immediate or definitive, but they point to a change in approach.
The answer lies precisely there: the robot can do what humans cannot because it acts where we cannot go safely, precisely, and consistently.
And perhaps that is the most important point. For the first time, a real possibility has emerged of cleaning the seabed continuously and on a large scale.
If this expands, the impact could be profound. Not only in the removal of waste, but in how we think about the relationship between technology and nature.
Because, in the end, the question stops being whether we can clean the oceans... and becomes whether we will use these tools in time.
The SEACLEAR 2.0 project (Autonomous Underwater Technology for Cleaner Oceans) is a European initiative involving researchers from the Technical University of Munich (TUM), focused on developing autonomous underwater robots for the detection, classification, and collection of marine litter.
Here are the project highlights (below):
Autonomous technology: The diving robot is capable of operating autonomously to clean the seabed, having carried out operations in the port of Marseille, France.
Identification and collection: The system uses artificial intelligence and computer vision (developed at MIRMI - Munich Institute of Robotics and Machine Intelligence at TUM) to differentiate marine litter from native flora and fauna, collecting only the debris.
SEACLEAR 2.0 Project: This is a continuation that seeks to improve litter removal capacity, involving multiple European partners and TUM technology.
Partnership: The project is supported by the European Union and focuses on robotics solutions for a cleaner marine environment.
The robot stands out for its ability to operate in hard-to-reach areas and actively remove debris, contributing to marine preservation.
by mundophone
Friday, April 17, 2026
DIGITAL LIFE

What could your voice give away?
With AI, the voice has acquired a new significance. Behind the words lies data that can be used both to diagnose a health problem and to steal someone's identity. Speaking to machines is no longer the stuff of science fiction. Alexa (Amazon) has been present in homes for over a decade, and an increasing number of users now favor voice interactions with chatbots.
Whether dictating a message or asking for directions, this shift is not only technical—although AI systems are becoming ever more powerful—but also societal, reflecting how humans engage with machines. Behind the words, however, lies data.
Unlike a password, a voice cannot easily be changed. It is shaped by physiological, linguistic and personal characteristics. This "voiceprint" can identify an individual and reveal sensitive information such as origin or gender. Voice is, therefore, an especially rich form of biometric data.
"When a user interacts with a voice-based system, they not only convey content but also implicit information: emotions, physical traits or behavioral patterns," explains Andrea Cavallaro, professor and head of the Multimedia and Intelligent Sensing Laboratory at EPFL.
The voice indeed contains numerous, sometimes subtle, features like rhythm, accent, tone, speed, intonation, volume or vocabulary that can all reveal something about its owner.
A resource for health care...Cavallaro's research shows that such information can be exploited by analytical systems, raising significant privacy concerns. Far from being a simple communication channel, voice constitutes a dataset in its own right.
The potential uses of voice data are numerous, particularly in health care. The same characteristics making voice identifiable also make it highly informative. Subtle variations in speech may reveal neurological disorders, respiratory diseases or emotional states. This is the premise behind Virtuosis AI, a start-up led by EPFL alumna Lara Gervaise, which explores the use of voice as a diagnostic tool.
Voice analysis could offer a noninvasive approach to medical monitoring. However, this promise also entails greater responsibility, as health data remains among the most sensitive categories of personal information.
Legal challenges...In another context, actors and dubbing professionals have taken legal action against companies accused of using their voices to train AI models without consent. The argument is straightforward: a voice is part of a person's identity and is therefore protected under personality or image rights.
At the same time, voice cloning tools are now widely accessible, sometimes even free of charge. It is no longer only the voices of professional actors that can be replicated, but potentially anyone's.
"You can imagine the scenarios: spam phone calls, deceiving relatives, or fabricating audio evidence. The voice has long been perceived as a personal signature. With AI, it becomes a vector for identity theft at scale," warns Cavallaro.
Protecting privacy from the start...How, then, can voice data be protected? One promising avenue is voice anonymization. Cavallaro's work explores ways of transforming speech to preserve intelligibility while masking the speaker's identity or gender. The approach involves generating "ambiguous" voices, reducing the ability of systems to detect sensitive attributes.
The challenge lies in balancing utility and privacy. Excessive transformation degrades the quality of the signal, while insufficient modification leaves personal information exposed. This research shows that a compromise is achievable.
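Cavallaro's actual anonymization methods are more sophisticated, but the simplest transformation in this family is a pitch shift. The following numpy sketch is illustrative only (a sine tone stands in for speech, and `naive_pitch_shift` is a hypothetical helper, not a real anonymizer):

```python
import numpy as np

def naive_pitch_shift(signal, factor):
    """Resample the waveform by `factor`: played back at the original
    rate, every frequency is multiplied by `factor`. Duration shrinks
    as a side effect, which real anonymizers compensate for."""
    src = np.arange(len(signal))
    dst = np.arange(0, len(signal), factor)
    return np.interp(dst, src, signal)

sr = 8000                              # sample rate, Hz
t = np.arange(sr) / sr                 # one second of audio
voice = np.sin(2 * np.pi * 220 * t)    # a 220 Hz "speaker"

shifted = naive_pitch_shift(voice, 1.5)
spectrum = np.abs(np.fft.rfft(shifted))
peak_hz = np.fft.rfftfreq(len(shifted), 1 / sr)[spectrum.argmax()]
print(round(peak_hz))  # dominant frequency moves from 220 Hz toward 330 Hz
```

The utility-privacy tension shows up directly in `factor`: values near 1 keep the audio natural but leak the speaker's identity, while large values mask more of the voiceprint at the cost of intelligibility.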
"We are seeing a broader shift towards 'privacy by design,' where data protection is embedded from the outset in system development," says Cavallaro.
As the voice becomes a dominant interface, it invites us to rethink the relationship between technology, identity and privacy. Speaking may feel ephemeral, words seem to vanish as soon as they are uttered. Yet with AI, they are captured, analyzed and potentially stored.
Mass adoption...The adoption of voice-enabled AI systems has accelerated rapidly in recent years. Today, nine out of ten organizations use AI in at least one of their departments, according to McKinsey. Companies are moving swiftly from experimentation to large-scale deployment, with voice technologies among the most visible interfaces between humans and machines.
On the consumer side, widespread usage is now well established. As early as 2025, Forbes reported that around 60% of smartphone users regularly used a voice assistant, highlighting a clear increase over recent years.
Globally, the number of voice assistants is estimated at 8.4 billion, more than the world's population. This is explained by the multiple devices used within a single household, including smartphones, televisions and cars.
This rapid adoption is driven not only by technological progress but also by behavioral factors. Advances in natural language processing and generative AI have enabled smoother, conversational, hands-free interactions.
The voice is no longer just about issuing commands: it represents a new form of interaction that is reshaping how we access and process information, services and artificial intelligence itself.
Your voice acts as a rich, biometric dataset that can reveal significant personal, physical, and emotional information, especially when analyzed by artificial intelligence. Beyond just the words spoken, it acts as a "voiceprint" that can identify you, map your physical appearance, and expose sensitive health data.
Here is what your voice can give away (below):
1. Physical characteristics and identity
-Physical appearance: AI systems can analyze voice prints to estimate facial features, such as face shape, lip thickness, and nose structure.
-Age and gender: Voice, rhythm, and pitch can reveal a person's sex and approximate age.
-Aging processes: Changes in vocal cord thickness (often 1% loss of muscle mass per year after age 50) and reduced respiratory capacity can reveal a speaker's age and vocal aging, such as a higher pitch in men and a lower, thicker pitch in women post-menopause.
2. Physical and mental health status
-Diseases and neurological conditions: Subtle variations in speech can indicate health issues, including Parkinson’s disease, amyotrophic lateral sclerosis (ALS), vocal cord paralysis, or cognitive changes.
-Vocal damage/abuse: Persistent hoarseness or a "gravelly" voice can reveal vocal overuse (nodules or polyps), smoking, or gastroesophageal reflux (GERD).
-Emotional state and stress: The voice acts as an "emotional valve." It can give away stress, sadness, anger, or fatigue.
-Acute illness: Respiratory infections, allergies, or chronic dehydration can be detected through voice quality.
3. Identity and privacy risks
-Personal identification: Because the voice is unique, it is used for biometric identification.
-Voice cloning/identity theft: AI can clone voices with high accuracy using short audio clips, allowing for scams, fraud, or the creation of false evidence.
-Origin/demographics: Accents, vocabulary, and speech patterns can reveal geographic origin or social background.
4. Behavioral patterns
-Personality traits: Speech tempo, volume, and rhythm can suggest personality traits, including confidence, anxiety, or dominance.
-Emotional reactions: In conversational AI, the voice can betray user frustration or satisfaction.
To protect this information, researchers are developing voice anonymization tools that alter the voice to mask identity, gender, or sensitive attributes while keeping speech intelligible.
Provided by École Polytechnique Fédérale de Lausanne
TECH
Simple robots inspired by ants collectively build and excavate
When it comes to teamwork, we could all learn something from ants. These relatively simple, small-brained animals are famous for their ability to collectively build massive, intricate, climate-controlled structures, despite having neither a blueprint nor a worksite foreman.
Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Faculty of Arts and Sciences, who have long been fascinated by physical phenomena that orchestrate extraordinarily complex natural processes—from ant colonies, to brain folds, to the human gut—have developed a fleet of cooperative robots that, very much like ants, can spontaneously organize to build and dismantle structures. They are governed only by environmental cues and minimal physical rules.
Their study, published in PRX Life, demonstrates how a decentralized swarm of agents, whether robots or insects, can coordinate to complete tasks without central control, offering insights into both biological systems and future autonomous robotic systems.
The study was led by Professor L. Mahadevan, whose lab has studied social insect communities for many years, and previously showed they could mimic ants' excavation and escape prowess using robots.
"Our new study shows how simple, local rules can lead to the emergence of complex task completion that is self-organized and thus robust and adaptive," said Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, Organismic and Evolutionary Biology, and Physics at SEAS and FAS.
"We also introduce the concept of 'exbodied intelligence,' where collective cognition arises not solely from individual agents, but from their ongoing interaction with an evolving environment."
The work could lead to many potential applications, from autonomous construction in hazardous environments, to planetary exploration, to experimental models for studying animal behavior.
Identifying parameters that produce collective behaviors...In their latest study, the team refined their robotic platform to show both excavation and building performance while, crucially, identifying the key parameters needed to achieve those behaviors.
Social insects like ants and termites build complex structures using a biological technique called stigmergy, in which individuals modify their environments and respond to those modifications.
Ants emit pheromones—like a chemical perfume—from their bodies that fellow ants then respond to. Inspired by this natural cascade, Mahadevan's lab uses robotic ants, a.k.a. RAnts, which respond to "photormones"—light fields as digital stand-ins for pheromone fields.
Each robot senses gradients in the photormone field and leaves behind signals as it moves, creating a feedback loop between robots and their environment. This enables coordination across the swarm.
The agents operate using only a few rules: follow gradients in the photormone signal; pick up and transport building blocks in response to these cues; and deposit materials when signal thresholds are met.
These simple rules nonetheless trigger sophisticated behaviors. Robots spontaneously cluster to form nucleation sites, where structures begin to emerge. These sites arise through a mechanism called trapping instability, in which robots become temporarily confined by the signals they generate. As more robots converge on these locations, construction accelerates, producing organized aggregates of building material.
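The pickup-and-deposit rules above can be caricatured in a few lines. This toy 1-D simulation is illustrative only, not the SEAS platform: local material density stands in for the photormone field, agents pick up blocks where the cue is weak and deposit them where it is strong, and scattered material consolidates into piles.

```python
import random

random.seed(0)
SITES = 60
blocks = [1 if random.random() < 0.3 else 0 for _ in range(SITES)]  # scattered material
start_total = sum(blocks)

def local_density(i, radius=2):
    """The 'photormone' cue: how much material sits near site i (ring world)."""
    return sum(blocks[(i + d) % SITES] for d in range(-radius, radius + 1))

def cluster_count():
    """Number of separate piles (runs of occupied sites)."""
    return sum(1 for i in range(SITES) if blocks[i] and not blocks[i - 1])

before = cluster_count()
carrying = 0
for step in range(20000):
    i = random.randrange(SITES)
    d = local_density(i)
    if carrying == 0 and blocks[i] and random.random() < 1 / (1 + d):
        blocks[i] = 0   # pick up: likelier where the cue is weak
        carrying = 1
    elif carrying == 1 and not blocks[i] and random.random() < d / 5:
        blocks[i] = 1   # deposit: likelier where the cue is strong
        carrying = 0

after = cluster_count()
print(before, "->", after)  # scattered piles consolidate into fewer, larger ones
```

The `1/(1 + d)` pickup bias and `d / 5` deposit bias play roughly the role of the study's cooperation-strength and deposition-rate parameters; inverting them would make the same swarm disperse piles instead of building them.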
An illustration of how the collective, decentralized behavior of ants has inspired experiments with cooperative robots that can complete tasks without central control. Credit: Harvard John A. Paulson School of Engineering and Applied Sciences
Swarms that switch between construction and dismantling...Echoing previous work, the researchers found that the swarm's behavior can be tuned by adjusting two key parameters: cooperation strength, or how strongly robots follow the signal gradient; and deposition rate, or whether robots deposit or remove material. By changing these parameters, the researchers showed that their swarm could switch between constructing new structures and dismantling existing ones.
The experiments were accompanied by a theoretical framework that describes how agent density, communication signals, and environmental structure evolve together. The model extends classical biological aggregation theories to account for environments that change dynamically as agents act on them.
The collective of ants, known as a colony, is a society organized into castes (queen, workers, and males) in which each individual acts with high efficiency, communicates chemically through pheromones, and shares in a division of tasks that ensures the survival of the group. This collective intelligence allows them to solve complex problems and optimize resources.
Main aspects of social organization (below):
Castes and functions: The queen is responsible for reproduction, while the workers (females) perform tasks such as foraging, cleaning, and caring for the offspring. Males appear during specific periods for reproduction.
Communication: They use pheromones to mark paths, indicate food sources, and warn of dangers.
Teamwork: Specialist ants analyze the environment and coordinate efforts to move large objects or overcome obstacles.
Energy efficiency: They carry grains or cut leaves into manageable pieces to optimize transport and save energy.
Collective intelligence: The colony functions as a system that, even without central coordination, solves complex problems efficiently.
The social structure of ants is considered an example of evolutionary success, allowing them to survive in diverse environments through group work.
Provided by Harvard John A. Paulson School of Engineering and Applied Sciences
Thursday, April 16, 2026
TECH
Turning CO₂ from urban waste into useful consumer products
EU researchers are turning carbon emissions from urban waste into everyday household products—from cleaning liquids to leather. Europe's cities emit huge amounts of greenhouse gases into the atmosphere. Two essential urban services—waste incineration and wastewater treatment—are among the biggest contributors to municipal CO2 emissions in the EU.
These systems are vital for public health and urban life, yet they produce emissions that are difficult to eliminate entirely. But what if that CO2 did not have to go to waste?
For an international group of researchers, urban carbon pollution presents an opportunity. Working together in the WaterProof initiative, they are developing a way to capture CO2 from these processes and convert it into formic acid: a simple, highly versatile chemical used across many industries.
This could allow emissions from waste incinerators and wastewater to be turned into the cleaning products under our sink, or even the leather on our shoes.
Turning a problem into a resource...Efforts to tackle climate change focus largely on renewable energy, electrification, and improved efficiency. But some sources remain stubbornly hard to eliminate.
"Some emissions are difficult to stop," said Annelie Jongerius, an electrochemist and program manager at Dutch chemical company Avantium, which coordinates the research.
One option is to capture the CO2 and store it underground. But the WaterProof team is exploring a more circular alternative: keeping carbon in use rather than locking it away.
"It would be nicer if we could use it," Jongerius said. "At the same time, we need alternatives to fossil feedstocks for producing chemicals."
That challenge is particularly visible at facilities like those operated by Dutch waste management company HVC, which runs two major waste incinerators in the Netherlands.
"We have to take in whatever waste society produces," said Jan Peter Born, HVC's waste-to-energy innovation manager. "We have no means of regulating CO2 emissions, apart from encouraging people to buy less and recycle more."
HVC already captures some CO2 and sells it to greenhouse farmers, who use it to increase the yields of crops such as tomatoes and cucumbers. But this is only a partial solution.
"Most of the CO2 administered to the plants is released again through the greenhouse roof," Born explained. "From our legal perspective, it's a delayed emission. It is the farmer who achieves the emission reduction as he avoids gas-firing to produce CO2."
The WaterProof researchers aim to go a step further by turning captured carbon into useful products that keep it out of the atmosphere for longer.
From CO2 to cleaning products...At the heart of the WaterProof innovation is an electrochemical process that converts captured CO2 into formic acid using renewable electricity.
"It's one of the simplest conversions you can make," said Jongerius.
An electrical current drives the reaction in a specialized cell, reducing CO2 into formic acid. Because the system runs on renewable electricity and uses waste-derived carbon, it reduces reliance on fossil-based raw materials.
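The article does not give WaterProof's exact cell chemistry, but the textbook half-reactions for electrochemical CO2-to-formic-acid conversion illustrate why Jongerius calls it "one of the simplest conversions": only two electrons are transferred at the cathode. The potentials below are standard values versus the standard hydrogen electrode, stated as general chemistry rather than project specifics:

```latex
% Cathode: CO2 reduced to formic acid (two-electron transfer)
\mathrm{CO_2 + 2H^+ + 2e^- \rightarrow HCOOH} \qquad E^\circ \approx -0.2\,\mathrm{V}

% Anode: water oxidation supplies the protons and electrons
\mathrm{2H_2O \rightarrow O_2 + 4H^+ + 4e^-} \qquad E^\circ = +1.23\,\mathrm{V}

% Overall, per mole of formic acid
\mathrm{CO_2 + H_2O \rightarrow HCOOH + \tfrac{1}{2}\,O_2}
```

The hydrogen peroxide co-product mentioned below would come from substituting a different two-electron reaction at one of the electrodes, which is what makes pairing wastewater treatment with CO2 conversion attractive in a single cell.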
The process may also offer additional benefits. In an electrochemical cell, two reactions take place at the same time, one at each electrode. While the WaterProof team focuses on converting CO2 into formic acid, they have also explored pairing this with a second reaction that produces hydrogen peroxide and related compounds.
These substances can help break down stubborn pollutants in wastewater, including residues from pharmaceuticals and pesticides. However, this part of the process is still at an early stage and is not being implemented in the current demonstration system.
The team is testing their CO2-derived formic acid in eco-friendly cleaning products such as toilet and surface cleaners.
"It performs exactly the same as conventionally produced formic acid," Jongerius said. "It's the same molecule."
Beyond cleaning, the project is exploring the use of CO2-derived formic acid in leather tanning. While the acid can be used for all types of leather, the team is currently working with Icelandic company Nordic Fish Leather to bring eco-friendly fish leather—a more sustainable alternative to traditional cattle-based leather—to market.
Scaling up for real-world impact...While the chemistry is promising, scaling up is the next challenge. Building on earlier research, the team is now working on a large-scale pilot unit in which multiple electrochemical cells are stacked together, increasing the volume of CO2 that can be processed. If successful, it will pave the way for commercial‑scale plants.
The modular design allows the system to be adapted to different sites, from wastewater plants to incinerators. The aim is to demonstrate the WaterProof process in the summer of 2026, showing that a fossil fuel-free production chain can operate under real-world conditions.
Such systems could eventually be integrated into urban infrastructure, turning cities into hubs for circular chemical production rather than sources of emissions.
Recovering valuable materials from waste...The potential of the work being carried out goes beyond carbon reuse. The researchers are also exploring how formic acid can be used to recover valuable materials from waste streams.
By combining it with other compounds, they are developing deep eutectic solvents—low-toxicity liquids capable of dissolving and binding to metals in waste so that the metals can be extracted.
Many valuable materials end up in incinerator ash and wastewater sludge, including copper, lithium, cobalt, and even small amounts of gold—all critical for modern technologies and the green transition.
HVC already uses mechanical processes to recover metals, separating heavier particles from ash in a process similar to gold panning. But this produces mixed metal streams that are less valuable. The new solvents could allow more precise separation.
"These eutectic solvents can be tailored to target specific metals," Born said. "That means you could recover individual materials rather than mixtures, which increases their value."
However, economic realities remain a barrier. Gold is the only recovered metal that commands a decent price, Born explained. For many others, including rare earths, the market price is still too low to justify the cost.
This raises broader questions about policy and priorities, particularly as demand for critical materials continues to grow: how much societies are willing to subsidize recovery from waste, and whether strategic value should win out over purely market‑driven decisions.
Closing the loop...This kind of "waste-to-resource" thinking is gaining traction across Europe. New EU rules planned for 2026 aim to make recycled materials more widely available—and more widely used.
If successful, they could help turn circular ideas like those behind WaterProof into everyday reality, supporting Europe's ambition to lead the world in circular production by 2030.
By linking carbon capture, chemical production, water treatment, and material recovery, the researchers are bringing together multiple elements of that vision in a single system.
For Jongerius, the concept is both practical and symbolic.
"If you take CO2 from wastewater, turn it into a product, and then use that product to clean your toilet so it flows back into the wastewater system, you create a complete loop," she said. "It is the ultimate example of the circular economy."
Provided by Horizon: The EU Research & Innovation Magazine
DIGITAL LIFE

Thousands of AI‑written, edited or 'polished' books are being sold, an eerie echo of Orwell's 'novel‑writing machines'
At some point in the next several months, I am hoping to receive a modest check as a member of the class covered in the class-action settlement Bartz v. Anthropic.
In 2025, the artificial intelligence company Anthropic, best known for creating the chatbot Claude, agreed to pay up to US$1.5 billion to thousands of authors after a judge ruled that the company had infringed upon their copyrights.
When I first learned about the settlement, I assumed that Anthropic was primarily interested in teaching Claude about the subject of my stolen work: the former socialist activist, British Labour politician and feminist Ellen Wilkinson.
It did not initially occur to me that Claude might also be learning about how I, Laura Beers, political historian, craft my sentences and translate my voice to the page.
Yet there is increasing evidence that chatbots like Claude can be trained not only to regurgitate an author's content, but also to mimic their voice. In March 2026, journalist Julia Angwin filed a class action suit against the owners of Grammarly, alleging that the company misappropriated her and other writers' identities to build its "Expert Review" AI tool, which offers to give editorial feedback in the voices of various authors, living and dead.
That a machine might use my writing not only to learn about my subject matter, but also to analyze and ultimately mimic my authorial voice, points to a future that George Orwell envisioned with eerie prescience. In his 1949 dystopian novel "1984," Orwell imagined "novel-writing machines" capable of mass-producing literature, employing programmed mechanical "kaleidoscopes" as substitutes for individual artistic process.
'In place of a human mind, a statistical average'...To what extent, I wondered, could Claude mimic my own voice? I prompted the chatbot to write an essay "in the style of Laura Beers" about an early 20th-century politician. I struggled to recognize myself in the resulting prose. But maybe Claude, having only consumed one of my books, didn't have enough to go on to really nail my tone.
I decided to try again. This time, I would ask it to write a brief essay in the style of Orwell. The subject of my book "Orwell's Ghosts: Wisdom and Warnings for the Twenty-First Century," Orwell is one of the most distinctive literary essayists and prose stylists of the 20th century.
Most of Orwell's work is in the public domain, meaning it could have legally been used to train Claude.
I prompted Claude to write an essay "in the style of George Orwell about the threat to individual identity and freedom of thought posed by AI and particularly by the ability of 'novel writing machines' to produce literature."
Primed with ample source material, the AI did a passable job:
"Here is a type of comfort, familiar to anyone who has ever been slowly dispossessed of something, that consists not in denying the loss but in not yet noticing it. The frog does not leap from the gradually heated water. The citizen does not protest the gradual narrowing of the permitted thought. And the reader, it seems, does not mourn the novel until the novel has already been replaced by something that resembles it in every outward particular—same chapters, same characters, same approximate sequence of feeling—yet contains, in place of a human mind, a statistical average of all the human minds that came before it."
The final sentence about the statistical average rings false. But Orwell would, I suspect, have liked the image of the slowly boiling frog. "Here is a type of comfort" is also a phrase that Orwell might well have written.
I am skeptical that anyone would classify Claude's efforts as indistinguishable from Orwell's prose. But when it comes to machine-produced "literature," perhaps it doesn't really matter whether it can fully approximate original art, as long as it's good enough to function as entertainment and distraction for the masses.
Jam, bootlaces and books...This was Orwell's own dispirited suggestion in "1984." With the help of "novel-writing machines," the employees of the Ministry of Truth—the government department responsible for controlling information and rewriting history—are able to mass-produce not only novels, but also "newspapers, films, textbooks, telescreen programs [and] plays."
They churn out "rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes" and "films oozing with sex," along with cheap pornography intended for the "proles," as the uneducated working classes of Big Brother's Oceania were known.
The technology disgusts Orwell's protagonist, Winston Smith, who pointedly decides to purchase a diary and pen to write down his own independent thoughts. But to Julia, Winston's nymphomaniac, anti-intellectual lover who works as a mechanic servicing the machines, "Books were just a commodity that had to be produced, like jam or bootlaces."
'Full-length novels in seconds'...According to estimates, thousands of books for sale on Amazon have been written in whole or in part using AI. In other words, today's AI is also being used to mass-produce literature like jam or bootlaces.
Many of these works are not fully machine-written. Instead, they've been, as the AI writing tool Sudowrite advertises, "polished by AI." With its "Rewrite" function, the company promises to give users an opportunity to "refine your prose while staying true to your style, with multiple AI-suggested revisions to choose from." The service is akin to the "touching up" provided by the Ministry of Truth's Rewrite Squad in "1984."
Other books for sale on Amazon are, however, entirely machine-generated. The AI writing tool Squibler promises that if you give it an overarching prompt, it can produce "Full-Length Novels in Seconds."
The potential of AI-generated "literature" to turn a quick-and-easy profit ensures that readers will continue to encounter more of this content in the future, especially as AI's large language models become more refined. Already, studies have shown that readers cannot easily distinguish AI-generated forgeries from original prose.
Last year, I had lunch with a screenwriter friend in Los Angeles. He told me that his colleagues are particularly nervous about the use of AI to produce sequels. Once you have an established cast of characters for a movie franchise like, say, "Fast & Furious," audiences will likely see the next installment whether it's written by man or machine.
Yet my own brief experiments with Claude give me at least some hope for the future of literary art. A chatbot like Claude might be able to absorb and analyze "a statistical average of all the human minds that came before it," but without the input of actual human experience and sensibility, it is hard to envisage such systems ever producing true art.
Whether AI can produce the next George Orwell novel or essay remains to be seen. That it can and will churn out an increasing volume of popular fiction and screenplays like "Fast & Furious 25" seems less in doubt.
Provided by The Conversation
Wednesday, April 15, 2026
TECH

Printed neurons communicate with living brain cells
Northwestern University engineers printed artificial neurons that don't just imitate the brain—they talk to it. In a new study, the Northwestern team developed flexible, low-cost devices that generate electrical signals realistic enough to activate living brain cells. When tested on slices of tissue from mouse brains, the artificial neurons successfully triggered responses from real neurons, demonstrating a new level of biocompatibility.
The work marks a step toward electronics that can communicate directly with the nervous system, with potential applications in brain-machine interfaces and neuroprosthetics, including implants for hearing, vision and movement.
It also lays the groundwork for more efficient, brain-like computing systems. By mimicking how neurons signal—a key feature of the brain, which is the most energy-efficient computer known—futuristic systems could perform complex operations using far less power than today's data-hungry technologies.
The study is published in the journal Nature Nanotechnology.
"The world we live in today is dominated by artificial intelligence (AI)," said Northwestern's Mark C. Hersam, who led the study. "The way you make AI smarter is by training it on more and more data.
"This data-intensive training leads to a massive power-consumption problem. Therefore, we have to come up with more efficient hardware to handle big data and AI. Because the brain is five orders of magnitude more energy efficient than a digital computer, it makes sense to look to the brain for inspiration for next-generation computing."
An expert in brain-like computing, Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern's McCormick School of Engineering, professor of medicine at Northwestern University Feinberg School of Medicine and professor of chemistry at Northwestern's Weinberg College of Arts and Sciences.
He is also the chair of the Department of Materials Science and Engineering, director of the Materials Research Science and Engineering Center and a member of the International Institute for Nanotechnology. Hersam co-led the study with Vinod K. Sangwan, a research associate professor at McCormick.
From rigid silicon to dynamic brains...As computing tasks become more complex and data-intensive, computers meet these demands by adding more identical components—billions of transistors packed onto rigid, two-dimensional silicon chips. Each transistor behaves the same way. And, once fabricated, those systems remain fixed.
The brain operates in a strikingly different way. Rather than comprising uniform building blocks, the brain relies on diverse types of neurons—each performing specialized roles—organized across regions. These soft, three-dimensional networks constantly change, forming and reshaping connections over time as people learn and adapt.
"Silicon achieves complexity by having billions of identical devices," Hersam said. "Everything is the same, rigid and fixed once it's fabricated. The brain is the opposite. It's heterogeneous, dynamic and three-dimensional. To move in that direction, we need new materials and new ways to build electronics."
While other artificial neurons do exist, they fall short of biological realism. Most produce simplified signals, forcing engineers to rely on large, energy-intensive networks of devices to achieve complex behavior.
[Image: An aerosol jet printer in Hersam's laboratory deposits electronic inks onto a flexible polymer substrate. Credit: Mark Hersam/Northwestern University]
Turning an imperfection into a feature...To move closer to a biological model, Hersam's team developed artificial neurons using soft, printable materials that better mimic the brain's structure and behavior. The backbone of that advance is a series of electronic inks, formulated from nanoscale flakes of molybdenum disulfide (MoS2), which acts as a semiconductor, and graphene, which serves as an electrical conductor.
Using a specialized printing technique called aerosol jet printing, the researchers deposited these inks onto flexible polymer substrates.
In the past, other researchers viewed the stabilizing polymer in the inks as a problem that interfered with electrical current flow, so they burned the polymer away after printing the electronic circuit. But Hersam leveraged this minor imperfection to add brain-like functionality to his device.
"Instead of fully removing the polymer, we partially decompose it," he said. "Then, when we pass current through the device, we drive further decomposition of the polymer. This decomposition occurs in a spatially inhomogeneous manner, leading to formation of a conductive filament, such that all the current is constricted into a narrow region in space."
This narrow region becomes a localized pathway that produces a sudden, neuron-like electrical response. The result is a new type of artificial neuron that can generate a rich range of electrical signals. Instead of generating simple, one-off pulses, the new device produces more complex signaling patterns—including single spikes, continuous firing and bursting patterns—that resemble how real neurons communicate.
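The three regimes named here, single spikes, continuous firing and bursting, are the classic firing patterns studied in computational neuroscience. As an illustration only (the printed MoS2/graphene device is a physical circuit, and the study does not publish it as an equation), the standard Izhikevich spiking-neuron model shows how all three patterns can emerge from one compact system just by changing a few parameters; the parameter values below are Izhikevich's well-known textbook examples, not figures from the Northwestern paper.

```python
def izhikevich(a, b, c, d, I, v0, t_ms=300.0, dt=0.25):
    """Euler-integrate the Izhikevich neuron model; return spike times (ms).

    v is the membrane potential (mV), u a recovery variable; when v crosses
    the spike peak (~30 mV) it is reset to c and u is incremented by d.
    """
    v, u = v0, b * v0
    spikes = []
    for k in range(int(t_ms / dt)):
        if v >= 30.0:                 # spike peak reached: record and reset
            spikes.append(k * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * (a * (b * v - u))
    return spikes

# Izhikevich's published parameter sets for the three firing regimes
# mentioned in the article (values are illustrative textbook examples).
regimes = {
    "phasic (single spike)": dict(a=0.02, b=0.25, c=-65, d=6, I=0.5, v0=-64),
    "tonic (continuous)":    dict(a=0.02, b=0.20, c=-65, d=6, I=14,  v0=-70),
    "bursting (chattering)": dict(a=0.02, b=0.20, c=-50, d=2, I=10,  v0=-70),
}
for name, p in regimes.items():
    print(name, len(izhikevich(**p)), "spikes in 300 ms")
```

Plotting v over time for each parameter set shows the three distinct waveforms. The broader point mirrors the article's efficiency argument: signal diversity comes from tuning a single element, not from wiring together many simple ones.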
By capturing this signaling diversity, each neuron can encode more information and perform more sophisticated functions. And that can reduce the number of components needed in a computing system, drastically improving overall efficiency.
Putting artificial neurons to the test...To test whether its artificial neurons truly could interface with biology, Hersam's team collaborated with Indira M. Raman, the Bill and Gayle Cook Professor of Neurobiology at Weinberg. Raman's team applied electrical signals from the artificial neurons to slices of mouse cerebellum.
They found that the artificial voltage spikes matched key biological features, including the timing and duration of living neurons' voltage spikes. These signals reliably triggered activity in real neurons, activating neural circuits much as natural signals do.
"Other labs have tried to make artificial neurons with organic materials, and they spiked too slowly," Hersam said. "Or they used metal oxides, which are too fast. We are within a temporal range that was not previously demonstrated for artificial neurons. You can see the living neurons respond to our artificial neuron. So, we've demonstrated signals that are not only the right timescale but also the right spike shape to interact directly with living neurons."
The approach comes with several environmentally friendly advantages. In addition to improving energy efficiency, the neuron's manufacturing process is simple and low-cost. Because the printing process is additive—placing material only where it's needed—it also reduces waste.
"To meet the energy demands of AI, tech companies are building gigawatt data centers powered by dedicated nuclear power plants," Hersam said.
"It is evident that this massive power consumption will limit further scaling of computing, since it's hard to imagine a next-generation data center requiring 100 nuclear power plants. The other issue is that when you're dissipating gigawatts of power, there's a lot of heat. Because data centers are cooled with water, AI is putting severe stress on the water supply. However you look at it, we need to come up with more energy-efficient hardware for AI."
Provided by Northwestern University