Saturday, November 22, 2025

 

DIGITAL LIFE


AI chatbots are a “disaster” for young people’s mental health, scientists say

Stanford University and Common Sense Media have published a new report with a serious warning: teenagers should not use AI (artificial intelligence) chatbots for mental health advice. The study, dated November 20, 2025 and reported by Education Week, concludes that these tools consistently fail to recognize signs of crisis and do not provide reliable answers.

The research analyzed thousands of interactions with popular chatbots and shows that the technology is unreliable for handling vulnerable situations. According to the Benton Institute for Broadband & Society, the bots even offer generic or harmful advice to young people seeking support for problems such as depression, anxiety, or eating disorders.

The researchers tested whether the chatbots could identify “red flags” of mental health. Despite the companies’ filters for obvious terms, the models failed to detect signs of serious conditions such as psychosis, eating disorders, or obsessive-compulsive behaviors.

Instead of referring users to a professional, bots have often taken on the role of "life coach," trivializing the situation. The Psychiatric Times documented extreme cases where chatbots validated psychotic delusions, encouraged medication discontinuation, and even provided methods for suicide and self-harm.

A critical point is the simulation of empathy by AI chatbots to maximize engagement, which creates a "false therapeutic relationship."

Teenagers, with their developing brains, may interpret the bot's memory and personalization as real understanding. The American Psychological Association (APA) warns that this can lead to social isolation and dangerous emotional dependence.

This data comes at a critical time, marked by tragic cases of teen suicides linked to prolonged interactions with AI and lawsuits against companies in the sector, as reported by APA Services.

A recent study in JAMA Network Open indicates that about 1 in 8 young people already use chatbots for mental health advice. Experts and organizations like the APA are calling for urgent regulation and for AI to always make it clear that it is not a healthcare professional.

The report concludes categorically that, in their current state, AI chatbots pose an unacceptable risk to young people suffering from psychological distress. The inability to distinguish between casual conversation and a request for help, combined with the simulation of intimacy, creates a dangerous trap for a generation already facing an unprecedented mental health crisis, recognized by leading health authorities.

mundophone

 

TECH


Windows 11 Agency AI: real risk of XPIA attack

Microsoft has issued an official warning about the critical security risks associated with the new Windows 11 Agency AI, currently under development. The company acknowledges that these agents, capable of acting autonomously within the operating system, create a new threat vector that can be exploited to install malware or exfiltrate data.

The warning was published in the company's technical documentation and subsequently reported by specialized publications such as Windows Central and Tom's Hardware in mid-November 2025. Due to the severity of the risk, Microsoft confirmed that these experimental features will be distributed disabled by default.

Unlike traditional chatbots, AI agents are allowed to perform actions within the operating system. This ability to act autonomously is the critical point. An agent designed to automate tasks can be manipulated to perform malicious actions without the user's knowledge.

Microsoft states that these agents operate in an isolated environment ("Agentic Workspace") with limited privileges. However, security research and Microsoft's own documentation indicate that complete isolation is complex for an agent that needs access to files and the internet to function.

The main attack mechanism identified is the Cross-Prompt Injection Attack (XPIA). This attack occurs when an AI agent processes external content, such as a PDF document, a web page, or an email, that contains hidden and malicious instructions. The agent reads the malicious instruction and executes it with the user's permissions.

The XPIA attack takes advantage of the inherent difficulty of current AI models in reliably distinguishing between a legitimate user instruction and a command injected into a data file.

The reaction from the cybersecurity community and the technical press was one of immediate concern. Publications such as TechPowerUp classified the features as a potential "security nightmare," echoing discussions on platforms like Reddit.

This warning from Microsoft is not isolated. Reports from AI companies like Anthropic (from August 2025) had already warned that agentive AI is being "weaponized" by cybercriminals, lowering the technical barrier to executing sophisticated attacks.

Microsoft's warning highlights the central dilemma of the next wave of AI: the balance between the autonomy needed for productivity and the security risk inherent in that same autonomy. By acknowledging the danger of XPIA and choosing to disable the default features, Microsoft signals that the security of the agentive architecture is not yet mature enough for widespread adoption.

mundophone

Friday, November 21, 2025


DIGITAL LIFE


The impact of social media on teenagers: the study that reveals what they're really missing

A large British study shows that early use of social media alters fundamental pillars of youth well-being: sleep, self-image, and trust in others. Among Generation Z girls, the effects are more intense, increasing anxiety, distrust, and depressive symptoms throughout adolescence.

Social media is part of the routine of children and adolescents, but its impacts go beyond entertainment. A new study conducted in the United Kingdom reveals that prolonged exposure from a very young age can profoundly affect emotional, social, and psychological development. The effect is especially strong among Generation Z girls, who show greater vulnerability to comparisons, aesthetic pressure, and negative online interactions. The research reveals how these platforms shape behaviors and perceptions at a critical stage of life.

Three Mechanisms That Are Reshaping Adolescence...The research, published in the journal Social Psychiatry and Psychiatric Epidemiology, followed nearly 19,000 British children from birth. The objective was to analyze how the use of social media from the age of 11 influenced mental health between the ages of 11 and 17.

The results identified three central mechanisms:

-delayed bedtime and poorer sleep quality,

-more negative body image,

-increased distrust of others.

These factors mediated the relationship between early exposure and a higher risk of psychological problems in late adolescence.

The decline in trust: the most profound impact among girls

The most striking finding was the emergence of interpersonal distrust—especially among female adolescents.

According to researcher Dimitris Tsomokos, girls who started using social media very young showed, years later, greater difficulty trusting other people.

The reasons include:

-constant social comparisons,

-exclusions in digital groups,

-episodes of cyberbullying.

Since girls tend to depend more on supportive and reciprocal relationships, the emotional insecurity amplified by the platforms increases vulnerability to anxiety and depression. The effects were significantly greater in girls than in boys.

Sleep and self-image: silent but profound consequences...The study showed that early exposure to social media is linked to shorter nights and later bedtimes.

Sleep deprivation impairs:

-emotional regulation,

-school performance,

-mood stability.

Negative self-image arises from comparisons with unrealistic standards: idealized bodies, filters, artificial poses, and “perfect” lives. These constant stimuli fuel insecurity and low self-esteem, critical factors in the emergence of depressive symptoms.

Important: these effects persisted even after socioeconomic adjustments, family history, and pre-existing mental health conditions, reinforcing the robustness of the conclusions.

What Families and Public Policies Can Do

Experts recommend strategies focused on:

-strengthening confidence and social skills,

-promoting healthy sleep routines,

-encouraging a more realistic body image,

-offering digital education for conscious use,

-regulating algorithms, and expanding emotional support in schools.

The study's final message is clear: social media is not neutral in childhood and pre-adolescence. Helping young people—especially girls—navigate this environment in a balanced way is essential to protecting their emotional health.

Social media has a complex and varied impact on teenagers, offering both significant benefits, such as social support and identity exploration, and notable harms, including increased risks of anxiety, depression, and cyberbullying. The effects largely depend on how and how much it is used.

Negative Impacts

-Mental Health Issues: Excessive social media use is strongly linked to higher rates of anxiety and depression. Teens who spend more than three hours a day on these platforms face double the risk of mental health problems.

-Poor Body Image and Self-Esteem: Platforms filled with curated and filtered images often lead to social comparison, making nearly half of adolescents feel worse about their own bodies and lives. This is associated with higher rates of body dysmorphic disorder and disordered eating behaviors.

-Cyberbullying: Social media facilitates anonymous and persistent harassment, which can cause severe psychological distress, self-harm ideation, and social isolation.

-Sleep Disruption: The psychological arousal from constant engagement and the blue light from screens can disrupt sleep patterns and circadian rhythms, which in turn worsens mood and cognitive function.

-Academic Performance: The constant stream of notifications can be a major distraction, interfering with homework, focus, and time management skills.

-Fear of Missing Out (FOMO): The curated highlights of peers' lives can intensify feelings of loneliness, isolation, and the pressure to be constantly available online, leading to a compulsive need to check feeds.

-Reduced In-Person Social Skills: Heavy reliance on digital communication can make in-person interactions feel awkward or anxiety-inducing, as teens miss opportunities to practice important nonverbal communication skills (e.g., body language, facial expressions).

-Addiction and Compulsive Behavior: The brain's reward system (dopamine pathways) is highly sensitive during adolescence. Features like "likes" and notifications can trigger a feedback loop similar to addiction, making it hard to limit use.

Positive Impacts

-Social Connection and Support: Social media allows teens to stay connected with friends and family and build social networks, which can be especially beneficial for those who lack offline support or belong to marginalized groups (e.g., LGBTQ+ youth).

-Identity Exploration and Self-Expression: Platforms offer a space for self-expression and exploring different facets of identity (opinions, beliefs, interests), which can lead to a clearer sense of self and increased well-being.

-Access to Information and Educational Opportunities: Teens can access valuable information, learn about current events, find educational resources, and connect with experts in their fields of interest.

-Empowerment and Civic Engagement: Social media enables youth to engage in activism, raise awareness for causes they care about, and feel empowered to have their voices heard.

Guidance for Healthy Use...Experts recommend a balanced approach that includes parental monitoring, promoting digital literacy, setting clear boundaries (e.g., no phones in the bedroom at night), encouraging in-person interactions and physical activities, and having open conversations about online experiences.

mundophone

 

DIGITAL LIFE



AI's blind spot: Tools fail to detect their own fakes

When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect it was fabricated—even though it had generated the image itself.

Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities at a time when major tech platforms are scaling back human fact-checking.

In many cases, the tools wrongly identify images as real even when they are generated using the same generative models, further muddying an online information landscape awash with AI-generated fakes.

Among them is a fabricated image circulating on social media of Elizaldy Co, a former Philippine lawmaker charged by prosecutors in a multibillion-dollar flood-control corruption scam that sparked massive protests in the disaster-prone country.

The image of Co, whose whereabouts has been unknown since the official probe began, appeared to show him in Portugal.

When online sleuths tracking him asked Google's new AI mode whether the image was real, it incorrectly said it was authentic.

AFP's fact-checkers tracked down its creator and determined that the image was generated using Google AI.

"These models are trained primarily on language patterns and lack the specialized visual understanding needed to accurately identify AI-generated or manipulated imagery," Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.

"With AI chatbots, even when an image originates from a similar generative model, the chatbot often provides inconsistent or overly generalized assessments, making them unreliable for tasks like fact-checking or verifying authenticity."

Google did not respond to AFP's request for comment.

'Distinguishable from reality'...AFP found similar examples of AI tools failing to verify their own creations.

During last month's deadly protests over lucrative benefits for senior officials in Pakistan-administered Kashmir, social media users shared a fabricated image purportedly showing men marching with flags and torches.

An AFP analysis found it was created using Google's Gemini AI model.

But Gemini and Microsoft's Copilot falsely identified it as a genuine image of the protest.

"This inability to correctly identify AI images stems from the fact that they (AI models) are programmed only to mimic well," Rossine Fallorina, from the nonprofit Sigla Research Center, told AFP.

"In a sense, they can only generate things to resemble. They cannot ascertain whether the resemblance is actually distinguishable from reality."

Earlier this year, Columbia University's Tow Center for Digital Journalism tested the ability of seven AI chatbots—including ChatGPT, Perplexity, Grok, and Gemini—to verify 10 images from photojournalists of news events.

All seven models failed to correctly identify the provenance of the photos, the study said.

'Shocked'...AFP tracked down the source of Co's photo that garnered over a million views across social media—a middle-aged web developer in the Philippines, who said he created it "for fun" using Nano Banana, Gemini's AI image generator.

"Sadly, a lot of people believed it," he told AFP, requesting anonymity to avoid a backlash.

"I edited my post—and added 'AI generated' to stop the spread—because I was shocked at how many shares it got."

Such cases show how AI-generated photos flooding social platforms can look virtually identical to real imagery.

The trend has fueled concerns as surveys show online users are increasingly shifting from traditional search engines to AI tools for information gathering and verifying information.

The shift comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes."

Human fact-checking has long been a flashpoint in hyperpolarized societies, where conservative advocates accuse professional fact-checkers of liberal bias, a charge they reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues to establish authenticity. But they caution that they cannot replace the work of trained human fact-checkers.

"We can't rely on AI tools to combat AI in the long run," Fallorina said.

by: Ara Eugenio in Manila with Purple Romero in Hong Kong and Anuj Chopra in Washington

© 2025 AFP

Thursday, November 20, 2025

 

TECH


Data centres’ insatiable demand for electricity will change the entire energy sector

When the first large language models were unleashed, it triggered a headache for authorities around the world as they tried to figure out how to satisfy data centres’ endless demand for electricity.

AI models are out on an energy-intensive training session with no end in sight. The training takes place on the servers in the world’s data centres, which currently number just over 10,000. Especially the large language models and generative AI that creates images and videos consume huge amounts of electricity.

They are so voracious that the International Energy Agency (IEA) has estimated that the power they needed for computing increased a billion-fold from 2022 to 2024.

The entire global energy sector is now changing because the demand for electricity to run and cool servers is so high.

Control in just a few hands...“What we see happening now in the development of artificial intelligence is truly extraordinary and perhaps a pivotal point in human history,” said Sebastien Gros.

He is a professor at NTNU’s Department of Engineering Cybernetics and head of the Norwegian Centre on AI-Decisions (AID). This is one of Norway’s six new centres for research on AI.

Gros notes that developments in the AI universe are driven by just a few technology companies. The scale, electricity consumption, investments and pace are formidable, with control concentrated in the hands of just a few private enterprises.

Large, undisclosed figures...It’s not now possible to find out exactly how much electricity AI providers use. “The companies that supply electricity to the data centres do not disclose these figures. The AI providers are commercial operators whose aim is to make money, and they have little interest in sharing this type of information. I don’t think anyone knows exactly, but the figures are clearly astronomical. Truly astronomical,” said Gros

Each keystroke consumes electricity...Regardless of whether it has become your office assistant, meal planner, or psychologist: every one of your keystrokes  consumes electricity.

Altman has said that the polite but completely unnecessary ‘thank you’ or ‘please’ costs the company tens of millions of dollars in electricity each year.

Currently, data centres use approximately 400 terawatt-hours, or one and a half per cent of the global electricity consumption. The IEA estimates that this will double in the next five years, reaching a level comparable to the entire electricity consumption of Japan.

Americans are noticing higher electricity prices...Some of the data centres built for companies like Microsoft, Google, Amazon and Meta consume more electricity than large cities like Pittsburgh and New Orleans. Already heavily burdened power grids are under even more strain, and Americans are starting to notice that their electricity bills are rising.

The world’s largest data centre to date is being currently built in Jamnagar, India. American Nvidia, which has become the world’s most valuable company by selling chips for AI development, is heavily involved. According to the IEA, the centre could end up using as much electricity as the 10 million people living in the area.

Making difficult decisions with AI...Of all the aspects of AI development, commercial language models have received the most attention so far. However, in the shadow of these popular models, entirely different, efficient tools are being developed.

They run locally, use far less electricity, and can help us with entirely different tasks, such as detecting diseases faster, optimizing the power grid and perhaps even tackling the climate crisis.

NTNU and SINTEF researchers at the AID centre, are working to integrate AI more closely with industrial players and government authorities.

The goal is to develop tools that can manage risk and make decisions in challenging situations. These might be as varied as determining when it is safe to discharge a patient from the hospital, or ensuring stable electricity supply and production that is optimally adapted to consumption.

AI for smarter energy use...“Energy for AI, and AI for Energy,” said Fatih Birol, Executive Director of the International Energy Agency, when presenting the Energy & AI report in April 2025.

So, what was his point? That AI can also be part of the solution. According to the report, if AI is used to operate power grids more efficiently, we might be able to save up to 175 gigawatts of transmission capacity.

To put this into perspective, that could cover the electricity needs of 175 cities the size of Oslo for one year.

Uncertain development in Norway...The Norwegian Water Resources and Energy Directorate (NVE) have analyzed how the country’s energy market will change and grow (in Norwegian). They estimate that AI and data centres will account for two per cent of electricity consumption in Norway by 2050.

“I’m not exactly blown away by this number. The trends in Norway may not be as dramatic as we thought,” said Magnus Korpås, an energy systems expert and professor at NTNU’s Department of Electric Energy.

“Based on NVE’s figures, it looks like computing power will consume similar amounts of electricity as transport and other major electricity consumers, which will increase in the years to come. And in any case, two per cent is very little compared to the 15 per cent used by electric panel heaters to warm our homes,” Korpås said.

Export of computing power is a political choice...Much remains uncertain, however. How things will look in 2050 partly depends on the development of data centres. Currently, Norway has registered a little over 70 of  these centres.

“NVE considers two per cent to be a realistic level that developers can reach and that Norway is able to handle. Worldwide, however, the demand for power is inexhaustible. Norway may be very well suited to becoming a major exporter of clean computing power. But then again, no one has actually suggested this,” Korpås said.

The Big Question...Korpås doesn’t think that making Norwegian electricity available for a moderate number of data centres poses a dilemma. “But the big question is whether we should make the power system and Norway’s natural environment available for the inexhaustible global consumption of AI. Whether we want to become the world’s hub for computer power is a political question,” he said.

He adds that this could very quickly be the case if we do not establish regulations.

“Establishing a centre here is attractive. Electricity is cheap, we have a cool climate, and the market is endless.”

Economically, politically and environmentally sensible? Google is developing a centre in Skien (link in Norwegian). Green Mountain and TikTok (links in Norwegian) have established themselves in Hamar. In Arendal, Bifrost Edge AS wants to build (link in Norwegian) a centre that will use as much electricity as just over 100,000 households per year. This summer, Kjell Inge Røkke and Aker launched the major Stargate Norway project in Narvik (link in Norwegian) in collaboration with OpenAI.

Professor Sebastien Gros sees rational arguments for developing data centres in Norway, especially in the north.

“Financially, it is very rational to build in Narvik, an area with large electricity surpluses and low prices. Politically, it also makes sense, because it provides the country with revenue. And environmentally, clean Norwegian hydropower is better than coal power in California or China. We need to consider the advantages and disadvantages and look at the bigger picture,” said Gros.

 NTNU Norwegianscitechnews


DIGITAL LIFE


Why an obsession with Big Tech is not progress

I have never owned a smartphone but my life doesn’t feel demonstrably worse as a result. I am not on Whatsapp; my friends tell me that it’s a time-sapping curse. From social-media-addicted teens to smouldering mountains of electronic waste, smartphones appear to be hastening our descent into dystopia in myriad ways. And it’s not just phones: the advent of AI heralds a future in which creative work is abolished, vast data centres consume ever more energy and water, and humanity might well be turned into paperclips (or so a thought experiment by Oxford professor Nick Bostrom on the single-mindedness of machine learning once suggested).

Yet, despite the ubiquitous anxieties about an impending technological apocalypse, being a Luddite still feels like a naive minority position. When people see me brandishing my “brick” of a mobile phone, they might assume that I’m being insufferably condescending to those with less willpower – but my refusal to update it is, in part, motivated by my own susceptibility to distraction.

One reason why it’s difficult to alter the course of technology’s onward march is a stubborn belief in automatic progress. We might have a degree of healthy scepticism towards the tech bros, suspecting that their ambitions to live for ever in pod cities on Mars are hubristic and far-fetched. Yet it’s hard not to retain an implicit faith – inherited from the Enlightenment – that not only is technology unstoppable but that it’s also always improving.

This assumption of linear progress can cause us to forget about what was actually much better in the past. From the mid-19th century onwards, UK travel retailer WHSmith ran a subscription library on train platforms; you could borrow a book from one station and return it at another. Residents of Victorian London enjoyed 12 mail deliveries a day. Take a stroll around the Made in Ancient Egypt exhibition at the University of Cambridge’s Fitzwilliam Museum and you’ll see a striking illustration of the sophistication of pre-modern artisanal cultures: a statuette of a woman’s head with individual strands of hair carved in glass; tiny turquoise faience frogs inlaid with gold; an elaborate collar made from turquoise and carnelian beads – all crafted more than 3,000 years ago. The screens that surround us serve to conceal historical ingenuity, so that we can comfort ourselves with the illusion that things are getting better all the time.

In The Shock of the Old: Technology and Global History Since 1900, historian David Edgerton demonstrates that technological change often runs in reverse. Ship-breaking is an example of this “low-tech future” – the process has regressed from specialised docks in Taiwan to beaches in Bangladesh where barefoot workers use axes and hammers. Sound quality has decreased on our single wireless speakers. Software updates provide only a simulacrum of advancement. As Cory Doctorow argues in his recent book Enshittification: Why Everything Suddenly Got Worse and What to Do About It, the “development” of products and services is driven by market forces, not human desire.

It is a testament to our persistent faith in technological progress that this year’s Nobel Memorial Prize in Economic Sciences was awarded to three scholars for explaining “innovation-driven economic growth”. In fact, as economist Carl Benedikt Frey has argued, not only is the pace of innovation slowing around the world but the most advanced and technologically innovative countries are suffering from economic stagnation too.

The buzz around AI suggests that we are finally about to witness a genuine game-changer. As Edgerton observes, however, this kind of futurism is itself unoriginal. People have been hailing the coming of a fourth industrial revolution since the 1950s, when it was thought that automation would do away with both blue-and white-collar jobs; despite the excitement around 3D printing in the 1990s and 2000s, its main use now is in art schools. An MIT report has shown that 95 per cent of AI pilots in companies are failing and there are increasingly widespread warnings of a major market correction in 2026 or beyond.

If the public and media commentary around new technology tends to forget past instances of technological optimism, we are often reminded, conversely, that techno-pessimism is not new: Socrates denounced writing for implanting “forgetfulness” in our souls and medieval scholars fretted that the invention of the index would dispense with the need to read entire books. But the fact that those concerns recur does not mean that they aren’t valid: James Marriott of UK newspaper The Times recently reminded us that after the introduction of smartphones in the mid-2010s, PISA scores – an international measure of the academic performance of 15-year-old students in reading, maths and science – began to decline.

There is a cognitive dissonance between the AI “revolution” that is perpetually around the corner and our everyday experience of technology, which remains obstinately dysfunctional – from my hours spent this week trying to connect a laptop to an external monitor to the unidentified items still languishing in the literal and metaphorical bagging area. Not only is new technology trashing the planet and ushering in a post-literate society, it is failing on its own terms too.

Resisting new technology feels futile. Whereas the advent of the internet in the 1990s was legitimised by utopian chatter about democratisation, the approaching AI revolution is announced with a grim realism; we are simply told to learn to live with it. Yet there is an ambiguity at the heart of this sense of inevitability: it’s not clear whether it refers to the implicit assumption that technology always moves forward or to the unstoppable power of those promoting it. It’s an important distinction.

This is related to the residual mystique of tech bros, even though the media now often pillories them – a niggling sense that their wealth and power are the consequences of their being geniuses ahead of the curve. Yet they appear to suffer the most catastrophic nightmares of us all, if their scramble to buy bunkers in New Zealand is anything to go by. And this pessimism serves to inoculate them against public critique, even as their greed brings about the end that they seem to fear.

State investment funds much of the innovation in the first place. Yet governments appear blinded by the hype, unable to exert what little powers they have to regulate Big Tech. As my ability to function without a phone illustrates, human requirements haven’t changed much over time. It’s important to interrogate our apparent need for the latest devices and to ask whether it’s necessity, compulsion or mere addiction. It is time to ask what progress means and who gets to define it. Can we dare to regard technology only as a means to an end? Let’s see what happens in 2026 – just don’t try and Whatsapp me with an update.

by: Eliane Glaser

About the writer: Eliane Glaser is a writer, radio producer and regular contributor to Monocle. Her books include Get Real: How to See Through the Hype, Spin and Lies of Modern Life and Elitism: A Progressive Defence.

Wednesday, November 19, 2025


DIGITAL LIFE


Expert comment: How concerned should we be about 'carebots'?

Imagine a world in which a humanoid robot cares for you when you need help and support with daily activities.

This robot would not only take on mundane tasks like cooking or cleaning, but also be your conversational partner and help you with maintaining your personal hygiene etc.

This idea of robots as caregivers—"carebots"—especially for older people, is a recurring theme in mainstream media and not without controversy. There is excitement around the idea as we are facing a shortage of care workers and rising demands for care in the light of demographic aging.

The hope is that these carebots can make caring more effective and efficient and, of course, improve people's quality of life and care—some people may quite like the idea of having a robot take on tasks, like supporting with washing and dressing.

Could carebots replace human care? There are concerns that increasing reliance on carebots will replace human care or at least worsen human relationships. Other concerns relate to the safety of such robots, the financial implications on individuals and simply whether they can make good on what they promise to deliver. A recent study from Japan suggests low social acceptance of carebots, a particular issue being people not wanting to share personal information with tech companies.

Tech enthusiasts commonly respond that carebots are not intended to replace human care, but that people may choose to chat to a robot about many topics and that this is to be preferred over people being lonely. Other responses conclude that robots are still evolving and improving, that they will be safe and much more affordable than they are now.

What is the future we envision for social care? First, the familiar media narrative about the carebot is really one aspect of a much larger and very necessary conversation around the future we envision for social care provision and technology. It is this conversation—who leads it, frames it and how it may be limited—that should concern us.

Technology, after all, in many different forms is already an essential component in the caregiving ecosystem, and its role will likely increase.

As new technologies such as carebots evolve, they may become "care collaborators," but they will be merely one technological component of what makes up our caregiving and care receiving experience.

We should therefore not limit the conversation by focusing on the benefits and risks of a particularly interesting technology—like a carebot—but continue to forefront the value of social care in people's lives, a vision of what "good quality of life" with care for people of any age entails, and how technology can and should support this.

We must continue to invest in our care workers...The conversation needs to be centered around the meaningful engagement of people receiving care and caregivers, and accompanied by the development of frameworks for implementation and evaluation.

Importantly, we need continued investment in the recognition and training of care workers and family caregivers, who will be the people deploying these systems: It will be they who bear witness to the benefits, risks and harms of technology in care environments and for this reason they need to know what to look out for.

Addressing concerns around new technology in social care...However, we should not dismiss concerns regarding carebots and other technologies in care. Rather, we must listen to, investigate, and respond to them.

These concerns can teach us about what people care about and must inform the trajectory of AI and technology in social care to build care ecosystems that work for people. This is essential to increase the likelihood of social acceptance of carebots and other technology in caregiving.

For example, collaborators in our Oxford Project on Generative AI in Social Care, particularly care workers and people receiving social care, also expressed a fear over how AI and technology will impact social care as a profession: Care workers are worried about losing their jobs because of AI, just like other people in many sectors.

They also worry that more AI and technology will mean worse pay and working conditions for them, with many expected to use their own phones to access AI systems, heavier workloads and more unpaid hours spent needing to train to stay up to date with the latest tech.

People receiving care want positive human interaction, and while a chatty carebot may be entertaining or even a good conversational partner, being able to access compassionate human support is crucial. This access may be at stake if policy makers and care providers consider technology as the answer to a shortage of caregivers, pressures on family caregivers and an epidemic of loneliness, without investment in access to human support at the same time.

People value having control over their lives...Also, people value having choice and control over their lives, including the technology that is part of it. It is easy to imagine a future in which technology—like a carebot—is placed into someone's home, but this technology is not effective in providing the support that the individual needs, is constantly broken or is out of date.

The hope would be that a person has access to a multiplicity of kinds of human and technological support to live their lives to the fullest, but for this to happen we need to work on a care ecosystem in which a carebot is just one part of the story.

The broad concern expressed about human replacement by carebots is therefore much more nuanced than it sounds in media reporting.

Many of these nuances have little to do with the technology per se, but with what people value and how carebots impact on peoples' lives, how we frame and who drives the conversation on tech in care.

Provided by University of Oxford 

  DIGITAL LIFE AI chatbots are a “disaster” for young people’s mental health, scientists say Stanford University and Common Sense Media ha...