Thursday, April 16, 2026


DIGITAL LIFE


Thousands of AI‑written, edited or 'polished' books are being sold, an eerie echo of Orwell's 'novel‑writing machines'

At some point in the next several months, I am hoping to receive a modest check as a member of the class covered in the class-action settlement Bartz v. Anthropic.

In 2025, the artificial intelligence company Anthropic, best known for creating the chatbot Claude, agreed to pay up to US$1.5 billion to thousands of authors after a judge ruled that the company had infringed upon their copyrights.

When I first learned about the settlement, I assumed that Anthropic was primarily interested in teaching Claude about the subject of my stolen work, former socialist activist, British Labor politician and feminist Ellen Wilkinson.

It did not initially occur to me that Claude might also be learning about how I, Laura Beers, political historian, craft my sentences and translate my voice to the page.

Yet there is increasing evidence that chatbots like Claude can be trained not only to regurgitate an author's content, but also to mimic their voice. In March 2026, journalist Julia Angwin filed a class action suit against the owners of Grammarly, alleging that the company misappropriated her and other writers' identities to build its "Expert Review" AI tool, which offers to give editorial feedback in the voices of various authors, living and dead.

That a machine might use my writing not only to learn about my subject matter, but also to analyze and ultimately mimic my authorial voice, points to a future that George Orwell envisioned with eerie prescience. In his 1949 dystopian novel "1984," Orwell imagined "novel-writing machines" capable of mass-producing literature, employing programmed mechanical "kaleidoscopes" as substitutes for individual artistic process.

'In place of a human mind, a statistical average'...To what extent, I wondered, could Claude mimic my own voice? I prompted the chatbot to write an essay "in the style of Laura Beers" about an early 20th-century politician. I struggled to recognize myself in the resulting prose. But maybe Claude, having only consumed one of my books, didn't have enough to go on to really nail my tone.

I decided to try again. This time, I would ask it to write a brief essay in the style of Orwell. The subject of my book "Orwell's Ghosts: Wisdom and Warnings for the Twenty-First Century," Orwell is one of the most distinctive literary essayists and prose stylists of the 20th century.

Most of Orwell's work is in the public domain, meaning it could have legally been used to train Claude.

I prompted Claude to write an essay "in the style of George Orwell about the threat to individual identity and freedom of thought posed by AI and particularly by the ability of 'novel writing machines' to produce literature."

Primed with ample source material, the AI did a passable job:

"Here is a type of comfort, familiar to anyone who has ever been slowly dispossessed of something, that consists not in denying the loss but in not yet noticing it. The frog does not leap from the gradually heated water. The citizen does not protest the gradual narrowing of the permitted thought. And the reader, it seems, does not mourn the novel until the novel has already been replaced by something that resembles it in every outward particular—same chapters, same characters, same approximate sequence of feeling—yet contains, in place of a human mind, a statistical average of all the human minds that came before it."

The final sentence about the statistical average rings false. But Orwell would, I suspect, have liked the image of the slowly boiling frog. "Here is a type of comfort" is also a phrase that Orwell might well have written.

I am skeptical that anyone would classify Claude's efforts as indistinguishable from Orwell's prose. But when it comes to machine-produced "literature," perhaps it doesn't really matter whether it can fully approximate original art, as long as it's good enough to function as entertainment and distraction for the masses.

Jam, bootlaces and books...This was Orwell's own dispirited suggestion in "1984." With the help of "novel-writing machines," the employees of the Ministry of Truth—the government department responsible for controlling information and rewriting history—are able to mass-produce not only novels, but also "newspapers, films, textbooks, telescreen programs [and] plays."

They churn out "rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes" and "films oozing with sex," along with cheap pornography intended for the "proles," as the uneducated working classes of Big Brother's Oceania were known.

The technology disgusts Orwell's protagonist, Winston Smith, who pointedly decides to purchase a diary and pen to write down his own independent thoughts. But to Julia, Winston's nymphomaniac, anti-intellectual lover who works as a mechanic servicing the machines, "Books were just a commodity that had to be produced, like jam or bootlaces."

'Full-length novels in seconds'...According to estimates, thousands of books for sale on Amazon have been written in whole or in part using AI. In other words, today's AI is also being used to mass-produce literature like jam or bootlaces.

Many of these works are not fully machine-written. Instead, they've been, as the AI writing tool Sudowrite advertises, "polished by AI." With its "Rewrite" function, the company promises to give users an opportunity to "refine your prose while staying true to your style, with multiple AI-suggested revisions to choose from." The service is akin to the "touching up" provided by the Ministry of Truth's Rewrite Squad in "1984."

Other books for sale on Amazon are, however, entirely machine-generated. The AI writing tool Squibler promises that if you give it an overarching prompt, it can produce "Full-Length Novels in Seconds."

The potential of AI-generated "literature" to turn a quick-and-easy profit ensures that readers will continue to encounter more of this content in the future, especially as AI's large language models become more refined. Already, studies have shown that readers cannot easily distinguish AI-generated forgeries from original prose.

Last year, I had lunch with a screenwriter friend in Los Angeles. He told me that his colleagues are particularly nervous about the use of AI to produce sequels. Once you have an established cast of characters for a movie franchise like, say, "Fast & Furious," audiences will likely see the next installment whether it's written by man or machine.

Yet my own brief experiments with Claude give me at least some hope for the future of literary art. A chatbot like Claude might be able to absorb and analyze "a statistical average of all the human minds that came before it," but without the input of actual human experience and sensibility, it is hard to envisage it ever producing true art.

Whether AI can produce the next George Orwell novel or essay remains to be seen. That it can and will churn out an increasing volume of popular fiction and screenplays like "Fast & Furious 25" seems less in doubt.

Provided by The Conversation

Wednesday, April 15, 2026


TECH


Printed neurons communicate with living brain cells

Northwestern University engineers printed artificial neurons that don't just imitate the brain—they talk to it. In a new study, the Northwestern team developed flexible, low-cost devices that generate electrical signals realistic enough to activate living brain cells. When tested on slices of tissue from mouse brains, the artificial neurons successfully triggered responses from real neurons, demonstrating a new level of biocompatibility.

The work marks a step toward electronics that can communicate directly with the nervous system, with potential applications in brain-machine interfaces and neuroprosthetics, including implants for hearing, vision and movement.

It also lays the groundwork for more efficient, brain-like computing systems. By mimicking how neurons signal—a key feature of the brain, which is the most energy-efficient computer known—futuristic systems could perform complex operations using far less power than today's data-hungry technologies.

The study is published in the journal Nature Nanotechnology.

"The world we live in today is dominated by artificial intelligence (AI)," said Northwestern's Mark C. Hersam, who led the study. "The way you make AI smarter is by training it on more and more data.

"This data-intensive training leads to a massive power-consumption problem. Therefore, we have to come up with more efficient hardware to handle big data and AI. Because the brain is five orders of magnitude more energy efficient than a digital computer, it makes sense to look to the brain for inspiration for next-generation computing."

An expert in brain-like computing, Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern's McCormick School of Engineering, professor of medicine at Northwestern University Feinberg School of Medicine and professor of chemistry at Northwestern's Weinberg College of Arts and Sciences.

He is also the chair of the Department of Materials Science and Engineering, director of the Materials Research Science and Engineering Center and a member of the International Institute for Nanotechnology. Hersam co-led the study with Vinod K. Sangwan, a research associate professor at McCormick.

From rigid silicon to dynamic brains...As computing tasks become more complex and data-intensive, computers meet these demands by adding more identical components—billions of transistors packed onto rigid, two-dimensional silicon chips. Each transistor behaves the same way. And, once fabricated, those systems remain fixed.

The brain operates in a strikingly different way. Rather than comprising uniform building blocks, the brain relies on diverse types of neurons—each performing specialized roles—organized across regions. These soft, three-dimensional networks constantly change, forming and reshaping connections over time as people learn and adapt.

"Silicon achieves complexity by having billions of identical devices," Hersam said. "Everything is the same, rigid and fixed once it's fabricated. The brain is the opposite. It's heterogeneous, dynamic and three-dimensional. To move in that direction, we need new materials and new ways to build electronics."

While other artificial neurons do exist, they fall short of biological realism. Most produce simplified signals, forcing engineers to rely on large, energy-intensive networks of devices to achieve complex behavior.

An aerosol jet printer in Hersam's laboratory deposits electronic inks onto a flexible polymer substrate. Credit: Mark Hersam/Northwestern University

Turning an imperfection into a feature...To move closer to a biological model, Hersam's team developed artificial neurons using soft, printable materials that better mimic the brain's structure and behavior. The backbone of that advance is a series of electronic inks, formulated from nanoscale flakes of molybdenum disulfide (MoS2), which acts as a semiconductor, and graphene, which serves as an electrical conductor.

Using a specialized printing technique called aerosol jet printing, the researchers deposited these inks onto flexible polymer substrates.

In the past, other researchers viewed the stabilizing polymer in the inks as a problem that interfered with electrical current flow, so they burned the polymer away after printing the electronic circuit. But Hersam leveraged this minor imperfection to add brain-like functionality to his device.

"Instead of fully removing the polymer, we partially decompose it," he said. "Then, when we pass current through the device, we drive further decomposition of the polymer. This decomposition occurs in a spatially inhomogeneous manner, leading to formation of a conductive filament, such that all the current is constricted into a narrow region in space."

This narrow region becomes a localized pathway that produces a sudden, neuron-like electrical response. The result is a new type of artificial neuron that can generate a rich range of electrical signals. Instead of generating simple, one-off pulses, the new device produces more complex signaling patterns—including single spikes, continuous firing and bursting patterns—that resemble how real neurons communicate.

By capturing this signaling diversity, each neuron can encode more information and perform more sophisticated functions. And that can reduce the number of components needed in a computing system, drastically improving overall efficiency.

Putting artificial neurons to the test...To test whether its artificial neurons truly could interface with biology, Hersam's team collaborated with Indira M. Raman, the Bill and Gayle Cook Professor of Neurobiology at Weinberg. Raman's team applied electrical signals from the artificial neurons to slices of mouse cerebellum.

They found that the artificial voltage spikes matched key biological features, including the timing and duration of living neurons' voltage spikes. The signals reliably triggered activity in real neurons, activating neural circuits much as natural signals do.

"Other labs have tried to make artificial neurons with organic materials, and they spiked too slowly," Hersam said. "Or they used metal oxides, which are too fast. We are within a temporal range that was not previously demonstrated for artificial neurons. You can see the living neurons respond to our artificial neuron. So, we've demonstrated signals that are not only the right timescale but also the right spike shape to interact directly with living neurons."

The approach comes with several environmentally friendly advantages. In addition to improving energy efficiency, the neuron's manufacturing process is simple and low-cost. Because the printing process is additive—placing material only where it's needed—it also reduces waste.

"To meet the energy demands of AI, tech companies are building gigawatt data centers powered by dedicated nuclear power plants," Hersam said.

"It is evident that this massive power consumption will limit further scaling of computing, since it's hard to imagine a next-generation data center requiring 100 nuclear power plants. The other issue is that when you're dissipating gigawatts of power, there's a lot of heat. Because data centers are cooled with water, AI is putting severe stress on the water supply. However you look at it, we need to come up with more energy-efficient hardware for AI."

Provided by Northwestern University


TECH


China tests underwater cutter at 4,000 meters: innovation or instrument of intimidation?

A Chinese deep-sea mission has successfully tested an advanced device capable of cutting through underwater structures such as submarine cables at depths of thousands of meters.

China has successfully tested a specialized deep-sea electro-hydrostatic actuator capable of severing undersea telecommunications cables at depths of 3,500 meters, marking a significant leap in its deep-sea intervention (i.e. military) capabilities. The trial, conducted by researchers from the Chinese Academy of Sciences and reported by state media, confirms that the device can operate in the abyssal zone, where most of the world's critical internet and data infrastructure resides.

With the electro-hydrostatic design, the tool is self-contained and highly efficient, potentially allowing it to be mounted on small, unmanned underwater vehicles (UUVs). During the latest tests, the cutter successfully sliced through high-tension cables without the need for a massive surface support fleet and cumbersome umbilicals. 

Strategically, the ability to operate at 3,500 meters places almost all of the South China Sea’s seabed infrastructure within reach. While China has officially framed the technology as a tool for deep-sea maintenance, salvage, and scientific exploration, state-affiliated reports have hinted at its deployment readiness for more assertive roles. The compact nature of the actuator means it could be deployed from standard research vessels or even commercial ships, making detection of such activities significantly more difficult for foreign maritime powers.

The “Haiyang Dizhi 2” research vessel completed its first deep-sea scientific mission of 2026 on Saturday, according to the Ministry of Natural Resources.

The expedition included a cutting test of a deep-sea electro-hydrostatic actuator at a depth of 3,500 metres (11,483 feet), using technology that has drawn attention for its potential military use.

“The sea trial has bridged the ‘last mile’ from deep-sea equipment development to engineering application,” the official China Science Daily reported on Saturday, suggesting the equipment was poised for actual deployment.

According to the report, the "Haiyang Dizhi 2" completed the first deep-sea mission of the year on April 11. The electro-hydrostatic actuator (EHA) combines hydraulics, an electric motor and a control unit into a single device, eliminating the need for lengthy and cumbersome external oil piping. The device was reportedly further strengthened against deep-sea pressure and corrosion, enabling "precise mechanical tasks" at great depths. A September report cited by the article notes that this technology has previously been touted "for cutting subsea cables and operating deep-sea grabs."

The project isn't purely destructive in nature, with obvious applications in the repair and construction of underwater oil and gas pipelines. However, given the global context and the timing, the implications for military and nefarious use are obvious. Several projects from China's undersea initiative have reportedly drastically improved the effectiveness of such tasks. In a 2022 offshore pipeline repair, crews needed five hours "just to make a single cut" on an 18-inch section of damaged pipe. Just one year later, remotely operated homegrown vessels could cut pipes up to 38 inches in diameter at a depth of 2,000 feet, including one repair in which an eight-inch pipe was cut through in just 20 minutes. The latest testing extends these capabilities to at least 3,500 meters, almost 11,500 feet.

Most pressingly, China's test highlights a growing vulnerability in the physical layer of the digital world. International law regarding the protection of undersea cables remains murky, particularly in international waters where these new devices can operate. The quiet efficiency of the electro-hydrostatic cutter could mean that the next major disruption to global communications comes not only from a cyberattack, but also from a mechanical blade in the deep.

The Haiyang Dizhi 2 (also known as Haiyang Dizhi Shihao, IMO: 9795751) is a high-tech geological research and survey vessel operating under the Chinese flag. Built in 2017, the vessel measures approximately 75.8 meters in length and 15.4 meters in width.

Highlights and recent activities:

State-of-the-art technology: The vessel is equipped with advanced systems, including a 150-ton active offset offshore crane, a 10-kilometer fiber optic winch, and a geological winch. It has a 730-square-meter deck and a helicopter platform.

Subsea operation capability: In April 2026, the vessel successfully conducted tests of an electro-hydrostatic actuator (EHA) at depths exceeding 3,500 meters. This technology is described as capable of performing precise mechanical tasks at great depths, such as cutting submarine cables and operating grabs.

Fuel ice research: In October 2025, the vessel conducted research on fuel ice (methane hydrate) in deep water, using the remotely operated vehicle "Haima" at a depth of 1,522 meters to collect samples.

Scientific missions: The vessel is frequently used for deep-sea research, with missions reported in the South China Sea.

The vessel plays an important role in China's deep-sea marine research activities, with capabilities ranging from geological studies to the handling of subsea infrastructure.

mundophone

Tuesday, April 14, 2026



DIGITAL LIFE




Tiny cameras in earbuds let users talk with AI about what they see

University of Washington researchers developed the first system that incorporates tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. For instance, a user might turn to a Korean food package and say, "Hey Vue, translate this for me." They'd then hear an AI voice say, "The visible text translates to 'Cold Noodles' in English."

The prototype system, called VueBuds, takes low-resolution, black-and-white images, which it transmits over Bluetooth to a phone or other nearby device. A small artificial intelligence model on the device then answers questions about the images within around a second. For privacy, all of the processing happens on the device, a small light turns on when the system is recording, and users can immediately delete images.

The team presented its research April 14 at the CHI 2026 conference in Barcelona. The study is published in the Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems.

"We haven't seen most people adopt smart glasses or VR headsets, in part because a lot of people don't like wearing glasses, and they often come with privacy concerns, such as recording high-resolution video and processing it in the cloud," said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "But almost everyone wears earbuds already, so we wanted to see if we could put visual intelligence into tiny, low-power earbuds, and also address privacy concerns in the process."

Cameras use far more power than the microphones already in earbuds, so using the same sort of high-res cameras as those in smart glasses wouldn't work. Also, large amounts of information can't stream continuously over Bluetooth, so the system can't run continuous video.

The team found that using a low-power camera—roughly the size of a grain of rice—to shoot low-resolution, black-and-white still images limited battery drain and allowed for Bluetooth transmission while preserving performance.
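The bandwidth constraint is easy to see with rough numbers. The figures below are illustrative assumptions for the sake of the arithmetic, not values published by the study:

```python
# Back-of-envelope sketch of why low-res grayscale stills are Bluetooth-friendly.
# All numbers here are assumed for illustration, not taken from the study.

def transfer_seconds(width, height, bits_per_pixel, link_bits_per_s):
    """Time to push one uncompressed frame over a link of the given throughput."""
    return width * height * bits_per_pixel / link_bits_per_s

# A small grayscale still: ~0.46 Mbit, under half a second on a ~1 Mbit/s BLE link.
still = transfer_seconds(240, 240, 8, 1_000_000)        # ≈ 0.46 s

# A single full-HD color frame would take ~100x longer on the same link,
# so continuous high-res video is simply not an option.
hd_frame = transfer_seconds(1920, 1080, 24, 1_000_000)  # ≈ 49.8 s

print(f"grayscale still: {still:.2f} s, HD color frame: {hd_frame:.1f} s")
```

Even with modest compression, the gap between a tiny grayscale still and high-resolution color video spans two orders of magnitude, which is why the design settles on occasional low-res snapshots rather than a video stream.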

There was also the matter of placement.

"One big question we had was: Will your face obscure the view too much? Can earbud cameras capture the user's view of the world reliably?" said lead author Maruchi Kim, who completed this work as a UW doctoral student in the Allen School.

The team found that angling each camera 5–10 degrees outward provides a 98–108 degree field of view. While this creates a small blind spot when objects are held closer than 20 centimeters from the user, people rarely hold things that close to examine them—making it a non-issue for typical interactions.

Researchers also discovered that while the vision language model was largely able to make sense of the images from each earbud, having to process images from both earbuds slowed it down. So they had the system "stitch" the two images into one, identifying overlapping imagery and combining it. This allows the system to respond in one second—quick enough to feel like real-time for users—rather than the two seconds it takes with separate images.
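The stitching step can be sketched in a few lines. This is a minimal illustration of overlap-and-blend on grayscale arrays, assuming a purely horizontal offset between the two earbud views; the function name, search range and scoring are placeholders, not the team's actual pipeline:

```python
import numpy as np

def stitch_pair(left, right, max_shift=60):
    """Merge two horizontally overlapping grayscale frames into one panorama.

    Scans candidate overlap widths, picks the one where the edge strips agree
    best (lowest mean squared difference), and averages the overlapping columns.
    """
    h, w = left.shape
    best_shift, best_score = 1, np.inf
    for s in range(1, max_shift + 1):
        a = left[:, w - s:].astype(float)   # right edge of the left frame
        b = right[:, :s].astype(float)      # left edge of the right frame
        score = np.mean((a - b) ** 2)
        if score < best_score:
            best_score, best_shift = score, s
    s = best_shift
    blended = (left[:, w - s:].astype(float) + right[:, :s].astype(float)) / 2
    return np.hstack([left[:, :w - s].astype(float),
                      blended,
                      right[:, s:].astype(float)]).astype(np.uint8)
```

Feeding the model one stitched frame rather than two separate images means a single forward pass instead of two, consistent with the speedup the team describes.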

The team then had 74 participants compare recorded outputs from VueBuds with outputs from Ray-Ban Meta Glasses in a series of tests. Despite VueBuds using low-resolution images with greater privacy controls and the Ray-Bans taking high-res images processed on the cloud, the two systems performed equivalently. Participants preferred VueBuds' translations, while the Ray-Bans did better at counting objects.

Sixteen participants also wore VueBuds and tested the system's ability to translate and answer basic questions about objects. VueBuds achieved 83%–84% accuracy when translating or identifying objects and 93% when identifying the author and title of a book.

This study was designed to gauge the feasibility of integrating cameras in wireless earbuds. Since the system only takes grayscale images, it can't answer questions that involve color in the scene.

The team wants to add color to the system—color cameras require more power—and to train specialized AI models for specific use cases, such as translation.

"This study lets us glimpse what's possible just using a general purpose language model and our wireless earbuds with cameras," Kim said. "But we'd like to study the system more rigorously for applications like reading a book—for people who have low vision or are blind, for instance—or translating text for travelers."


Provided by University of Washington


TECH


Freestanding silicon anode design improves fast charging and cycle life in lithium-ion batteries

Sejong University said Tuesday that a research team had developed a next-generation silicon anode that enables faster charging and longer battery life, a potential advance for electric vehicles and energy storage systems.

The team, led by Yang Hyeon-woo and Kim Sun-jae of the department of nanotechnology and advanced materials engineering, developed a freestanding silicon anode that maintains high performance without conventional components like current collectors, binders or conductive additives.

The findings were published online April 6 in Advanced Fiber Materials, an international journal with an impact factor of 21.3 — a measure of how often its research is cited — placing it among the more influential publications in its field, according to the university.

The researchers at Sejong University introduced a novel electrode architecture that uses carbon nanofibers as a foundational framework, a design intended to overcome the historic fragility of silicon-based components. By engineering precise hydrolysis and condensation reactions directly onto the surface of each fiber, the team achieved a uniform silicon coating.

This structural refinement not only bolsters the anode’s physical stability — preventing the degradation typical of repeated charging cycles — but also significantly enhances electrical connectivity, a crucial step toward the next generation of high-endurance energy storage.

“Silicon anodes have faced limitations due to structural damage during repeated charge and discharge cycles despite their high capacity,” Kim said. “This study presents a new design approach that could overcome those issues and be widely applied in next-generation lithium-ion batteries where fast charging and long life are critical.”

The research was supported by the Ministry of Education’s Basic Science Research Capacity Enhancement Program and the National Research Foundation of Korea.

Silicon has long been seen as a promising anode material for next-generation lithium-ion batteries because it can store much more lithium than graphite. But silicon also expands and contracts sharply during charging and discharging, which can crack the electrode, disrupt electrical pathways and shorten battery life.

Researchers at Sejong University have developed a freestanding silicon anode designed to address that problem. Their study is published in Advanced Fiber Materials under the title "CNF-Supported Si Freestanding Anode with a Conformal Granular Si/SiOx Interphase for High-Rate, Long-Life Li-Ion Batteries."

Schematic illustration of a CNF-supported Si freestanding anode fabrication process. Credit: Advanced Fiber Materials (2026)

Conventional silicon electrodes are often made by casting slurry mixtures onto current collectors, a design that can add inactive weight and introduce interfaces that become unstable during repeated cycling. By contrast, the Sejong University team designed a freestanding architecture in which carbon nanofibers, or CNFs, act as both the structural scaffold and conductive framework of the anode.

The researchers then engineered a hydrolysis-condensation reaction on the surface of each fiber so that silicon formed uniformly along the CNF network as a conformal Si/SiOx interphase. A schematic illustration outlines how this step-by-step process produced the final freestanding anode architecture.

That structure is important because it helps the electrode maintain its porous network and electrical connections even as silicon changes volume during repeated cycling.

Microscopy and spectroscopy analyses showed that the silicon-containing layer formed a thin, continuous shell around the carbon nanofiber core without excessive aggregation or overcoating. This helped preserve fiber-to-fiber junctions and open pathways for ion transport.

In electrochemical tests, the anode delivered 727.1 mAh g⁻¹ at 0.1 A g⁻¹. Under a high-rate condition of 1 A g⁻¹, it retained 79.8% of its capacity after 2,000 cycles. In full-cell tests with an NCM622 cathode, it delivered 176.5 mAh g⁻¹ and retained 91.6% of its capacity after 300 cycles.
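For context, those retention figures can be converted into an average per-cycle fade. The calculation below assumes, as a simplification, that capacity fades by a constant factor each cycle; it is an interpretation of the reported numbers, not a figure from the paper:

```python
# Convert end-of-test capacity retention into an average per-cycle retention,
# assuming (as a simplification) geometric fade: r**cycles == final_fraction.

def per_cycle_retention(final_fraction, cycles):
    """Average retention r per cycle such that r**cycles equals final_fraction."""
    return final_fraction ** (1 / cycles)

half_cell = per_cycle_retention(0.798, 2000)  # 79.8% remaining after 2,000 cycles
full_cell = per_cycle_retention(0.916, 300)   # 91.6% remaining after 300 cycles

# Roughly 0.011% of capacity lost per cycle in the half cell
# and 0.029% per cycle in the full cell.
print(f"{(1 - half_cell) * 100:.4f}% / cycle, {(1 - full_cell) * 100:.4f}% / cycle")
```

Under that simplification, the half cell loses on the order of a hundredth of a percent of its capacity per cycle, which is what makes a 2,000-cycle test end with nearly 80% of the original capacity intact.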

The team also reported reduced charge-transfer resistance during cycling, indicating that the structure could support faster electrochemical transport over extended operation.

According to the researchers, that combination of structural stability and rate capability could make the design useful for applications where both fast charging and long cycle life matter, including electric vehicles and energy storage systems.

"The key difference in this work is that carbon nanofibers were used not simply as a support, but as the structural and conductive backbone of a freestanding silicon anode," said Professor Hyeon-Woo Yang. "By enabling silicon to form uniformly along each fiber, we were able to improve both structural stability and electrochemical performance."

Professor Sun-Jae Kim added, "Silicon anodes have long been limited by structural degradation during repeated cycling. This study suggests a new route to overcome that problem and expand the use of high-capacity silicon anodes in next-generation lithium-ion batteries."

Provided by Sejong University

Monday, April 13, 2026


DIGITAL LIFE


Revealing the hidden logic behind AI's judgments of people

In a world where artificial intelligence is quietly shaping who gets hired, who receives loans, and even how medical decisions are made, a new question is emerging: How does AI judge us? A new study by Prof. Yaniv Dover and Valeria Lerman from Hebrew University suggests the answer is both reassuring and deeply unsettling. The study is published in the journal Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences.

How AI learns to 'trust' people...Drawing on more than 43,000 simulated decisions alongside around a thousand human participants, the research reveals that today's most advanced AI systems, including models similar to ChatGPT and Google's Gemini, do not simply process information. They make judgments about people. And in doing so, they appear to form something that looks a lot like "trust."

But that effective trust doesn't work quite like ours.

The study placed both humans and AI in familiar situations: deciding how much money to lend a small business owner, whether to trust a babysitter, how to rate a boss, or how much to donate to a nonprofit founder.

Across these scenarios, a clear pattern emerged.

Both humans and AI favored people who seemed competent, honest, and well-intentioned. In other words, the machines appeared to grasp the basic ingredients of trust: competence, integrity, and benevolence, much like we do.

"That's the good news," says Prof. Dover. "AI is not making random decisions. It captures something real about how humans evaluate one another."

Where machine judgment diverges from humans

But the resemblance stops there—look closer, and the differences become striking.

Is this a good person? Humans tend to form a general impression by blending multiple traits into a single, intuitive and holistic judgment.

AI does something very different.

It breaks people down into components, scoring competence, integrity, and kindness almost like separate columns in a spreadsheet. The result is a more rigid, "by-the-book" style of judgment: consistent, but less human.

"People in our study are messy and holistic in how they judge others," explains Valeria Lerman. "AI is cleaner, more systematic and that can lead to very different outcomes."

Bias gets amplified in high-stakes decisions

Beyond these stylistic differences, a troubling pattern of amplified bias emerged.

In financial scenarios, such as deciding how much money to lend or donate, AI systems showed consistent and sometimes sizable differences based solely on demographic traits.

For example, older individuals were frequently given more favorable outcomes, though in some cases the opposite pattern appeared.

Religion had a significant effect on outcomes, especially monetary ones.

Gender, too, influenced decisions in certain models and scenarios.

These differences appeared even when every other detail about the person was identical.

"Humans have biases, of course," says Prof. Dover. "But what surprised us is that AI's biases can be more systematic, more predictable, and sometimes stronger."

Different models, different moral compasses

Another key insight: there is no single "AI opinion."

Different models often made different judgments about the same person. In some cases, one system rewarded a trait that another penalized.

That means the choice of AI system could quietly shape real-world outcomes.

"Which model you use really matters," Lerman notes. "Two systems can look similar on the surface but behave very differently when making decisions about people."

Why understanding AI's judgment matters now

AI is already being used to screen job candidates, assess creditworthiness, recommend medical actions, and guide organizational decisions.

As these systems move from assistants to decision-makers, understanding how they "think" becomes critical.

The study suggests that while AI can mimic the structure of human judgment, it does so in a more rigid, less nuanced way and with biases that may be harder to detect.

The researchers emphasize that their findings are not a warning against AI, but rather a call for awareness.

"These systems are powerful," says Dover. "They can model aspects of human reasoning in a consistent way. But they are not human and we shouldn't assume they see people the way we do."

As AI becomes more embedded in everyday life, the question is no longer whether we trust machines. It's whether we understand how they trust us.

Provided by Hebrew University of Jerusalem 


DIGITAL LIFE


When AI seems to know you better than you know yourself

I was at my clinic the other day and asked an AI assistant about the differential diagnosis of a rash in a child. A routine question. The response came back clear and sensible. And then it added, "Are you asking about one of your patients, or one of your grandchildren?"

I was taken aback, because it was right: I have grandchildren. And it had remembered that.

That moment pointed to something new. Not just smarter AI, but a fundamentally different kind of relationship—one that feels, unsettlingly, like being known.

A machine that knows you...At the end of last year, ChatGPT presented me with a summary of my year: 909 chat conversations and three recurring themes it had identified unprompted, namely building AI tools for general practice, teaching and writing about planetary health, and creative time with family.

Then it went further.

It offered a visual portrait, rendered in pixel art and titled "Still Life with Stethoscope and Hang Drum": a stethoscope, a hang drum, an open MacBook, a glowing QR code, and a turquoise mug of peppermint tea.

No face, no figure. Just the objects of a life, selected and arranged by a machine that had been paying attention.

It was accurate. Uncomfortably so.

What bothered me was that I accepted it without question, as though it were a considered verdict rather than a pattern extracted from thousands of exchanges.

I had done none of the work that usually produces that kind of self-knowledge. These insights just arrived, pre-packaged and convincing.

That unease has a long history.

The ancient Greeks had a phrase for this idea of self-knowledge: gnōthi seauton, or "know thyself." Carved above the entrance to the Oracle at Delphi, it set the terms for a lifetime of inquiry.

Self-knowledge, in that tradition, was hard-won, always incomplete and deeply personal: something you pursued yourself, not something a machine offered you ready-made.

From remembering to constructing

This shift is not accidental.

Early large language models (LLMs) could hold around 1,000 to 2,000 tokens (a token is a chunk of text, roughly a word or part of a word, that an LLM processes as a single unit) at a time.

Today's systems can process up to 1 or 2 million tokens in a single context window. That is a thousandfold increase in working memory, enough to hold entire books, months of conversation and large portions of a personal history in a single pass.
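The scale of that jump is easy to check with back-of-the-envelope arithmetic. The sketch below uses a rough rule of thumb of about 1.3 tokens per English word; the exact ratio is an assumption, since real tokenizers (such as BPE variants) split text differently by model and language.

```python
# Illustrative arithmetic: how much text fits in a context window.
# TOKENS_PER_WORD is a rough rule of thumb, not a property of any
# specific model's tokenizer.

EARLY_WINDOW = 2_000        # tokens, early LLMs
MODERN_WINDOW = 2_000_000   # tokens, largest current windows

TOKENS_PER_WORD = 1.3

def words_that_fit(window_tokens: int) -> int:
    """Approximate number of English words fitting in a window."""
    return int(window_tokens / TOKENS_PER_WORD)

growth = MODERN_WINDOW // EARLY_WINDOW
print(f"Growth factor: {growth}x")                       # 1000x
print(f"Early window:  ~{words_that_fit(EARLY_WINDOW):,} words")
print(f"Modern window: ~{words_that_fit(MODERN_WINDOW):,} words")
```

At roughly 1,500 words for the early window versus about 1.5 million for the modern one, the "entire books in a single pass" claim checks out: a typical novel runs 80,000 to 100,000 words.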

Add persistent memory across sessions, now the default setting for a number of LLM products, and something important changes.

AI is no longer storing isolated details. It is building a model of you: what questions you ask, what topics you return to, what seems to matter most.

From construction to influence...Memory on its own is passive. But organized memory becomes narrative, and narrative shapes identity.

The ancient Greek philosopher Aristotle observed that character is formed and revealed not in isolated moments but in the patterns of a life, in what we repeatedly choose and repeatedly avoid.

AI systems are now positioned to observe exactly those patterns with a consistency no human could match. They don't just recall—they select, organize and reflect back.

Systems are being developed that can do exactly this. Imagine your AI says, "Over the past three months, your questions have shifted. You're asking more about stress, sleep and coping. Are you doing OK?"

That example is worth sitting with.

AI is increasingly capable of drawing inferences about emotional state from patterns in language and timing. This is not because you disclosed anything directly, but because the accumulated pattern told its own story.
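A heavily simplified sketch of the kind of pattern-tracking described above: counting how often stress-related words appear in a user's monthly queries and flagging an upward shift. Real systems use learned models over far richer signals; the keyword list, threshold and sample data here are all invented for illustration.

```python
# Toy trend detection over conversation logs.
# STRESS_TERMS, the 2x threshold, and the floor value are illustrative
# assumptions, not part of any real product.

STRESS_TERMS = {"stress", "sleep", "coping", "burnout", "anxious"}

def stress_rate(queries: list[str]) -> float:
    """Fraction of queries mentioning any stress-related term."""
    if not queries:
        return 0.0
    hits = sum(
        any(term in q.lower() for term in STRESS_TERMS) for q in queries
    )
    return hits / len(queries)

def flag_shift(monthly_queries: list[list[str]], factor: float = 2.0) -> bool:
    """Flag when the latest month's rate exceeds factor x the baseline."""
    *earlier, latest = monthly_queries
    baseline = sum(stress_rate(m) for m in earlier) / len(earlier)
    # A small floor keeps a single mention from tripping the flag
    # when the baseline is zero.
    return stress_rate(latest) > factor * max(baseline, 0.05)

months = [
    ["best rash treatment", "teaching plan ideas"],
    ["travel tips", "recipe for dinner"],
    ["can't sleep again", "coping with stress", "burnout signs"],
]
print(flag_shift(months))  # the last month's queries trip the flag
```

The point of the sketch is the mechanism, not the method: no single query discloses anything, yet the accumulated pattern across months is enough to trigger an inference.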

This has genuine clinical promise.

Early detection of mood deterioration or burnout through natural language patterns is an emerging area of real research interest.

The idea that AI might flag warning signs before a person has consciously registered them carries genuine public health potential.

As a clinician, I find that possibility genuinely exciting.

But these inferences are still interpretations. Research shows that people readily incorporate external classifications into their self-understanding, particularly when they carry an air of authority.

And when AI presents a coherent version of you, it doesn't just describe; it begins to define.

Remaining agents in our own lives...There is a significant shift underway. Not just in what is remembered, but in who decides what matters.

AI systems can detect patterns across time, synthesize them and present a distilled portrait of who you are.

That portrait may feel clearer than your own recollection—a bit more consistent, more complete. And coherence is persuasive.

If a system can tell you what defines you and what themes run through your life, the inner work of constructing that meaning begins to feel unnecessary.

But that internal work matters deeply.

Constructing meaning from experience is how identity forms and how we remain agents in our own lives.

Without it, the self risks becoming thinner, more malleable and more easily steered.

We need to return regularly to the hard questions ourselves. Who am I? What matters to me? How have I changed?

These are not questions to outsource. The Delphic Oracle did not promise that self-knowledge would be comfortable, only that it was yours to seek.

In an age when AI is increasingly willing to do that seeking for us, the most human thing left may be to insist on doing it yourself.

Provided by University of Melbourne 
