Thursday, November 20, 2025

 

TECH


Data centres’ insatiable demand for electricity will change the entire energy sector

When the first large language models were unleashed, authorities around the world were handed a headache: how to satisfy data centres' endless demand for electricity.

AI models are on an energy-intensive training session with no end in sight. The training takes place on servers in the world's data centres, which currently number just over 10,000. Large language models and the generative AI that creates images and videos consume especially large amounts of electricity.

They are so voracious that the International Energy Agency (IEA) has estimated that the computing power they required increased a billion-fold from 2022 to 2024.

The entire global energy sector is now changing because the demand for electricity to run and cool servers is so high.

Control in just a few hands...“What we see happening now in the development of artificial intelligence is truly extraordinary and perhaps a pivotal point in human history,” said Sebastien Gros.

He is a professor at NTNU’s Department of Engineering Cybernetics and head of the Norwegian Centre on AI-Decisions (AID). This is one of Norway’s six new centres for research on AI.

Gros notes that developments in the AI universe are driven by just a few technology companies. The scale, electricity consumption, investments and pace are formidable, with control concentrated in the hands of just a few private enterprises.

Large, undisclosed figures...It is currently impossible to find out exactly how much electricity AI providers use. “The companies that supply electricity to the data centres do not disclose these figures. The AI providers are commercial operators whose aim is to make money, and they have little interest in sharing this type of information. I don’t think anyone knows exactly, but the figures are clearly astronomical. Truly astronomical,” said Gros.

Each keystroke consumes electricity...Regardless of whether it has become your office assistant, meal planner, or psychologist: every one of your keystrokes consumes electricity.

OpenAI CEO Sam Altman has said that the polite but completely unnecessary ‘thank you’ or ‘please’ costs the company tens of millions of dollars in electricity each year.

Currently, data centres use approximately 400 terawatt-hours of electricity a year, or about one and a half per cent of global electricity consumption. The IEA estimates that this will double in the next five years, reaching a level comparable to the entire electricity consumption of Japan.
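
A back-of-the-envelope check of those figures, in Python; the Japan number below is an assumption of roughly 900 TWh a year, not a figure from the article:

# Sanity-check the IEA figures quoted above.
data_centre_twh = 400       # current annual data-centre electricity use (TWh)
share_of_global = 0.015     # "one and a half per cent" of global consumption

global_twh = data_centre_twh / share_of_global
print(f"Implied global consumption: {global_twh:,.0f} TWh/year")  # ~26,700 TWh

projected_twh = 2 * data_centre_twh   # IEA: doubling within five years
japan_twh = 900                       # ASSUMPTION: Japan uses ~900 TWh/year
print(f"Projected data-centre use: {projected_twh} TWh/year, "
      f"about {projected_twh / japan_twh:.0%} of Japan's consumption")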

Americans are noticing higher electricity prices...Some of the data centres built for companies like Microsoft, Google, Amazon and Meta consume more electricity than large cities like Pittsburgh and New Orleans. Already heavily burdened power grids are under even more strain, and Americans are starting to notice that their electricity bills are rising.

The world’s largest data centre to date is currently being built in Jamnagar, India. The American chipmaker Nvidia, which has become the world’s most valuable company by selling chips for AI development, is heavily involved. According to the IEA, the centre could end up using as much electricity as the 10 million people living in the area.

Making difficult decisions with AI...Of all the aspects of AI development, commercial language models have received the most attention so far. However, in the shadow of these popular models, entirely different, efficient tools are being developed.

They run locally, use far less electricity, and can help us with entirely different tasks, such as detecting diseases faster, optimizing the power grid and perhaps even tackling the climate crisis.

NTNU and SINTEF researchers at the AID centre are working to integrate AI more closely with industrial players and government authorities.

The goal is to develop tools that can manage risk and make decisions in challenging situations. These might be as varied as determining when it is safe to discharge a patient from the hospital, or ensuring stable electricity supply and production that is optimally adapted to consumption.

AI for smarter energy use...“Energy for AI, and AI for Energy,” said Fatih Birol, Executive Director of the International Energy Agency, when presenting the Energy & AI report in April 2025.

So, what was his point? That AI can also be part of the solution. According to the report, if AI is used to operate power grids more efficiently, we might be able to save up to 175 gigawatts of transmission capacity.

To put this into perspective, that could cover the electricity needs of 175 cities the size of Oslo for one year.
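
Strictly speaking, a gigawatt measures power while a city's annual needs are energy, so the comparison implies round-the-clock use of the saved capacity. A quick conversion shows the Oslo-scale figure implied by the report's own numbers:

HOURS_PER_YEAR = 8760

saved_gw = 175                                  # transmission capacity saved, per the report
energy_twh = saved_gw * HOURS_PER_YEAR / 1000   # GWh -> TWh, if used non-stop for a year
per_city_twh = energy_twh / 175                 # spread across 175 Oslo-sized cities

print(f"Total energy: {energy_twh:,.0f} TWh/year")   # ~1,533 TWh
print(f"Per city: {per_city_twh:.2f} TWh/year")      # ~8.76 TWh, an Oslo-scale annual load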

Uncertain development in Norway...The Norwegian Water Resources and Energy Directorate (NVE) has analyzed how the country’s energy market will change and grow (in Norwegian). It estimates that AI and data centres will account for two per cent of electricity consumption in Norway by 2050.

“I’m not exactly blown away by this number. The trends in Norway may not be as dramatic as we thought,” said Magnus Korpås, an energy systems expert and professor at NTNU’s Department of Electric Energy.

“Based on NVE’s figures, it looks like computing power will consume similar amounts of electricity as transport and other major electricity consumers, which will increase in the years to come. And in any case, two per cent is very little compared to the 15 per cent used by electric panel heaters to warm our homes,” Korpås said.

Export of computing power is a political choice...Much remains uncertain, however. How things will look in 2050 partly depends on the development of data centres. Currently, Norway has registered a little over 70 of these centres.

“NVE considers two per cent to be a realistic level that developers can reach and that Norway is able to handle. Worldwide, however, the demand for power is inexhaustible. Norway may be very well suited to becoming a major exporter of clean computing power. But then again, no one has actually suggested this,” Korpås said.

The Big Question...Korpås doesn’t think that making Norwegian electricity available for a moderate number of data centres poses a dilemma. “But the big question is whether we should make the power system and Norway’s natural environment available for the inexhaustible global consumption of AI. Whether we want to become the world’s hub for computer power is a political question,” he said.

He adds that this could very quickly be the case if we do not establish regulations.

“Establishing a centre here is attractive. Electricity is cheap, we have a cool climate, and the market is endless.”

Economically, politically and environmentally sensible? Google is developing a centre in Skien (link in Norwegian). Green Mountain and TikTok (links in Norwegian) have established themselves in Hamar. In Arendal, Bifrost Edge AS wants to build (link in Norwegian) a centre that will use as much electricity as just over 100,000 households per year. This summer, Kjell Inge Røkke and Aker launched the major Stargate Norway project in Narvik (link in Norwegian) in collaboration with OpenAI.

Professor Sebastien Gros sees rational arguments for developing data centres in Norway, especially in the north.

“Financially, it is very rational to build in Narvik, an area with large electricity surpluses and low prices. Politically, it also makes sense, because it provides the country with revenue. And environmentally, clean Norwegian hydropower is better than coal power in California or China. We need to consider the advantages and disadvantages and look at the bigger picture,” said Gros.

NTNU Norwegian SciTech News


DIGITAL LIFE


Why an obsession with Big Tech is not progress

I have never owned a smartphone but my life doesn’t feel demonstrably worse as a result. I am not on WhatsApp; my friends tell me that it’s a time-sapping curse. From social-media-addicted teens to smouldering mountains of electronic waste, smartphones appear to be hastening our descent into dystopia in myriad ways. And it’s not just phones: the advent of AI heralds a future in which creative work is abolished, vast data centres consume ever more energy and water, and humanity might well be turned into paperclips (or so a thought experiment by Oxford professor Nick Bostrom on the single-mindedness of machine learning once suggested).

Yet, despite the ubiquitous anxieties about an impending technological apocalypse, being a Luddite still feels like a naive minority position. When people see me brandishing my “brick” of a mobile phone, they might assume that I’m being insufferably condescending to those with less willpower – but my refusal to update it is, in part, motivated by my own susceptibility to distraction.

One reason why it’s difficult to alter the course of technology’s onward march is a stubborn belief in automatic progress. We might have a degree of healthy scepticism towards the tech bros, suspecting that their ambitions to live for ever in pod cities on Mars are hubristic and far-fetched. Yet it’s hard not to retain an implicit faith – inherited from the Enlightenment – that not only is technology unstoppable but that it’s also always improving.

This assumption of linear progress can cause us to forget about what was actually much better in the past. From the mid-19th century onwards, UK travel retailer WHSmith ran a subscription library on train platforms; you could borrow a book from one station and return it at another. Residents of Victorian London enjoyed 12 mail deliveries a day. Take a stroll around the Made in Ancient Egypt exhibition at the University of Cambridge’s Fitzwilliam Museum and you’ll see a striking illustration of the sophistication of pre-modern artisanal cultures: a statuette of a woman’s head with individual strands of hair carved in glass; tiny turquoise faience frogs inlaid with gold; an elaborate collar made from turquoise and carnelian beads – all crafted more than 3,000 years ago. The screens that surround us serve to conceal historical ingenuity, so that we can comfort ourselves with the illusion that things are getting better all the time.

In The Shock of the Old: Technology and Global History Since 1900, historian David Edgerton demonstrates that technological change often runs in reverse. Ship-breaking is an example of this “low-tech future” – the process has regressed from specialised docks in Taiwan to beaches in Bangladesh where barefoot workers use axes and hammers. Sound quality has decreased on our single wireless speakers. Software updates provide only a simulacrum of advancement. As Cory Doctorow argues in his recent book Enshittification: Why Everything Suddenly Got Worse and What to Do About It, the “development” of products and services is driven by market forces, not human desire.

It is a testament to our persistent faith in technological progress that this year’s Nobel Memorial Prize in Economic Sciences was awarded to three scholars for explaining “innovation-driven economic growth”. In fact, as economist Carl Benedikt Frey has argued, not only is the pace of innovation slowing around the world but the most advanced and technologically innovative countries are suffering from economic stagnation too.

The buzz around AI suggests that we are finally about to witness a genuine game-changer. As Edgerton observes, however, this kind of futurism is itself unoriginal. People have been hailing the coming of a fourth industrial revolution since the 1950s, when it was thought that automation would do away with both blue- and white-collar jobs; despite the excitement around 3D printing in the 1990s and 2000s, its main use now is in art schools. An MIT report has shown that 95 per cent of AI pilots in companies are failing, and there are increasingly widespread warnings of a major market correction in 2026 or beyond.

If the public and media commentary around new technology tends to forget past instances of technological optimism, we are often reminded, conversely, that techno-pessimism is not new: Socrates denounced writing for implanting “forgetfulness” in our souls and medieval scholars fretted that the invention of the index would dispense with the need to read entire books. But the fact that those concerns recur does not mean that they aren’t valid: James Marriott of UK newspaper The Times recently reminded us that after smartphones became ubiquitous in the mid-2010s, PISA scores – an international measure of the academic performance of 15-year-old students in reading, maths and science – began to decline.

There is a cognitive dissonance between the AI “revolution” that is perpetually around the corner and our everyday experience of technology, which remains obstinately dysfunctional – from my hours spent this week trying to connect a laptop to an external monitor to the unidentified items still languishing in the literal and metaphorical bagging area. Not only is new technology trashing the planet and ushering in a post-literate society, it is failing on its own terms too.

Resisting new technology feels futile. Whereas the advent of the internet in the 1990s was legitimised by utopian chatter about democratisation, the approaching AI revolution is announced with a grim realism; we are simply told to learn to live with it. Yet there is an ambiguity at the heart of this sense of inevitability: it’s not clear whether it refers to the implicit assumption that technology always moves forward or to the unstoppable power of those promoting it. It’s an important distinction.

This is related to the residual mystique of tech bros, even though the media now often pillories them – a niggling sense that their wealth and power are the consequences of their being geniuses ahead of the curve. Yet they appear to suffer the most catastrophic nightmares of us all, if their scramble to buy bunkers in New Zealand is anything to go by. And this pessimism serves to inoculate them against public critique, even as their greed brings about the end that they seem to fear.

State investment funds much of the innovation in the first place. Yet governments appear blinded by the hype, unable to exert what little powers they have to regulate Big Tech. As my ability to function without a phone illustrates, human requirements haven’t changed much over time. It’s important to interrogate our apparent need for the latest devices and to ask whether it’s necessity, compulsion or mere addiction. It is time to ask what progress means and who gets to define it. Can we dare to regard technology only as a means to an end? Let’s see what happens in 2026 – just don’t try and WhatsApp me with an update.

by: Eliane Glaser

About the writer: Eliane Glaser is a writer, radio producer and regular contributor to Monocle. Her books include Get Real: How to See Through the Hype, Spin and Lies of Modern Life and Elitism: A Progressive Defence.

Wednesday, November 19, 2025


DIGITAL LIFE


Expert comment: How concerned should we be about 'carebots'?

Imagine a world in which a humanoid robot cares for you when you need help and support with daily activities.

This robot would not only take on mundane tasks like cooking or cleaning, but would also be your conversational partner and help you with things like maintaining your personal hygiene.

This idea of robots as caregivers—"carebots"—especially for older people, is a recurring theme in mainstream media and not without controversy. There is excitement around the idea as we are facing a shortage of care workers and rising demands for care in the light of demographic aging.

The hope is that these carebots can make caring more effective and efficient and, of course, improve people's quality of life and care—some people may quite like the idea of having a robot take on tasks like helping with washing and dressing.

Could carebots replace human care? There are concerns that increasing reliance on carebots will replace human care or at least worsen human relationships. Other concerns relate to the safety of such robots, the financial implications for individuals, and simply whether they can deliver on what they promise. A recent study from Japan suggests low social acceptance of carebots, a particular issue being people not wanting to share personal information with tech companies.

Tech enthusiasts commonly respond that carebots are not intended to replace human care, but that people may choose to chat to a robot about many topics and that this is preferable to being lonely. Others respond that robots are still evolving and improving, and that they will become safer and much more affordable than they are now.

What is the future we envision for social care? First, the familiar media narrative about the carebot is really one aspect of a much larger and very necessary conversation around the future we envision for social care provision and technology. It is this conversation—who leads it, frames it and how it may be limited—that should concern us.

Technology, after all, in many different forms is already an essential component in the caregiving ecosystem, and its role will likely increase.

As new technologies such as carebots evolve, they may become "care collaborators," but they will be merely one technological component of what makes up our caregiving and care receiving experience.

We should therefore not limit the conversation by focusing on the benefits and risks of a particularly interesting technology—like a carebot—but continue to foreground the value of social care in people's lives, a vision of what "good quality of life" with care for people of any age entails, and how technology can and should support this.

We must continue to invest in our care workers...The conversation needs to be centered around the meaningful engagement of people receiving care and caregivers, and accompanied by the development of frameworks for implementation and evaluation.

Importantly, we need continued investment in the recognition and training of care workers and family caregivers, who will be the people deploying these systems: It will be they who bear witness to the benefits, risks and harms of technology in care environments and for this reason they need to know what to look out for.

Addressing concerns around new technology in social care...However, we should not dismiss concerns regarding carebots and other technologies in care. Rather, we must listen to, investigate, and respond to them.

These concerns can teach us about what people care about and must inform the trajectory of AI and technology in social care to build care ecosystems that work for people. This is essential to increase the likelihood of social acceptance of carebots and other technology in caregiving.

For example, collaborators in our Oxford Project on Generative AI in Social Care, particularly care workers and people receiving social care, also expressed a fear over how AI and technology will impact social care as a profession: Care workers are worried about losing their jobs because of AI, just like other people in many sectors.

They also worry that more AI and technology will mean worse pay and working conditions for them: many expect to have to use their own phones to access AI systems, shoulder heavier workloads, and spend more unpaid hours training to stay up to date with the latest tech.

People receiving care want positive human interaction, and while a chatty carebot may be entertaining or even a good conversational partner, being able to access compassionate human support is crucial. This access may be at stake if policy makers and care providers consider technology as the answer to a shortage of caregivers, pressures on family caregivers and an epidemic of loneliness, without investment in access to human support at the same time.

People value having control over their lives...Also, people value having choice and control over their lives, including the technology that is part of it. It is easy to imagine a future in which technology—like a carebot—is placed into someone's home, but this technology is not effective in providing the support that the individual needs, is constantly broken or is out of date.

The hope would be that a person has access to a multiplicity of kinds of human and technological support to live their lives to the fullest, but for this to happen we need to work on a care ecosystem in which a carebot is just one part of the story.

The broad concern expressed about human replacement by carebots is therefore much more nuanced than it sounds in media reporting.

Many of these nuances have little to do with the technology per se, but with what people value, how carebots affect people's lives, and who frames and drives the conversation on tech in care.

Provided by University of Oxford 


DIGITAL LIFE


Leading US antitrust scholar calls for robust policy against big tech abuses

The latest book by American jurist Tim Wu is a plea against the monopolistic abuses of big tech and a defense of a robust antitrust policy.

In "The Age of Extraction – How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity," the Columbia University law professor shows how the excessive market concentration of big tech harms consumers, suppliers, and even democracy.

"Today's big tech platforms are impressive, fun, and make our lives easier, but they are also designed to be the most advanced tools in history for extracting wealth and resources from the economy," Wu says in the book.

The jurist uses Amazon as an example of how consumers and suppliers are harmed.

In 2014, the average cost for a merchant to sell their products on Amazon was about 19% of revenue (in traditional retail it is 50%). “Sellers were happy, Amazon’s stock was rising, and it seemed that the promise of win-win on the internet was being fulfilled,” writes Wu. To grow and keep its customers and sellers loyal, Amazon subsidized prices and shipping.

Once its market power was established and many competitors had given up or been bought out, Amazon began to “extract” value. It continuously increased the monthly fee and other charges levied on sellers, which reached 30% per sale. It began selling ads that, in practice, functioned as a mandatory fee. Without paying for the ads, the merchant saw their products disappear in searches. With the increase in fees, sellers began to raise their prices, harming consumers.

“By 2023, fees had become less predictable, but on average, they totaled more than 50% of the product's selling price.”

At that time, however, many consumers were already loyal to Amazon Prime and its convenience, and sellers didn't have other marketplaces as large to sell their products. As a result, Amazon continues to extract value at the expense of merchants' margins and buyers' income.
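
To make the squeeze concrete, here is a toy calculation applying the fee levels Wu cites to a $100 item; it is illustrative only, since real fees vary by category and seller:

# Seller economics on a $100 item at the fee levels cited in the book.
price = 100.0
for period, fee_rate in [("2014", 0.19), ("after lock-in", 0.30), ("2023", 0.50)]:
    print(f"{period:>13}: Amazon takes ${price * fee_rate:.0f}, "
          f"seller keeps ${price * (1 - fee_rate):.0f}")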

“It’s an undeniable truth that technological change creates wealth. It’s the distribution of that wealth that has always been the tricky part. Often, technological advances have been used to widen, not reduce, economic disparities.”

Wu was one of those responsible for antitrust policy in the Biden administration, as Special Advisor to the President for Competition and Technology Policy, alongside Lina Khan, who was chair of the FTC (Federal Trade Commission), and Jonathan Kanter, then Assistant Attorney General in charge of the Antitrust Division.

The three are the bulwarks of the new progressive antitrust movement, whose proponents are called "neo-Brandeisians" in reference to Supreme Court Justice Louis Brandeis (1856-1941). Brandeis believed that monopolies were inherently harmful and advocated a more assertive antitrust policy to ensure fair competition. In recent decades, a much less interventionist approach had prevailed in the US, in which antitrust measures are adopted only when there is clear evidence of harm to consumers, such as price increases.

In recent years, especially under the Biden administration, but also during Trump's two terms, a more militant stance has been adopted against the excessive market power of big tech companies. The Department of Justice has filed lawsuits against Google, Facebook, Amazon, and other large platforms, accusing them of anti-competitive behavior.

When recommending remedies against the excessive platformization of the economy, the American jurist doesn't stray far from already known solutions: strong antitrust rules, neutrality in the provision of services and non-discrimination, empowerment of countervailing powers (unions, consumers), and regulation of big tech companies in the same way as public utilities (electricity, water, highways).

He goes further by linking the concentration of power and wealth in the hands of a few companies to the recent proliferation of governments with authoritarian tendencies.

Wu quotes US Senator Estes Kefauver, who, in 1950, warned: “I am not an alarmist, but the history of other nations, where mergers and concentrations have left economic control in the hands of very few people, is too clear to be ignored.”

According to him, a point would be reached where intervention would be necessary to regain control of the economy. “This results in a fascist state or the nationalization of industries and, subsequently, a socialist or communist state.”

For Wu, the path to an authoritarian state “passes through the imbalance of economic power, and a platform economy contributes to this problem.”

According to him, after the systemic extraction of value by platforms, mass resentment arises – and this is the opportunity for autocrats to rise.

As the mentor of the new antitrust warriors, Louis Brandeis, said: “We can have democracy in this country or we can have great wealth concentrated in the hands of a few, but we cannot have both.”

mundophone

Tuesday, November 18, 2025

 

DIGITAL LIFE


WhatsApp security vulnerability discovered by researchers

IT security researchers from the University of Vienna and SBA Research identified and responsibly disclosed a large-scale privacy weakness in WhatsApp's contact discovery mechanism that allowed the enumeration of 3.5 billion accounts. In collaboration with the researchers, Meta has since addressed and mitigated the issue.

The study underscores the importance of continuous, independent security research on widely used communication platforms and highlights the risks associated with the centralization of instant messaging services. The preprint of the study has now been released on GitHub, and the results will be presented in 2026 at the Network and Distributed System Security (NDSS) Symposium.

WhatsApp's contact discovery mechanism can use a user's address book to find other WhatsApp users by their phone number. Using the same underlying mechanism, the researchers demonstrated that it was possible to query more than 100 million phone numbers per hour through WhatsApp's infrastructure, confirming more than 3.5 billion active accounts across 245 countries.
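
The scale is easy to check with the study's own numbers: at the demonstrated query rate, enumerating every confirmed account would take only about a day and a half:

rate_per_hour = 100_000_000   # >100 million phone numbers queried per hour
accounts = 3_500_000_000      # 3.5 billion active accounts confirmed
print(f"{accounts / rate_per_hour:.0f} hours to cover them all")  # ~35 hours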

"Normally, a system shouldn't respond to such a high number of requests in such a short time—particularly when originating from a single source," explains lead author Gabriel Gegenhuber from the University of Vienna. "This behavior exposed the underlying flaw, which allowed us to issue effectively unlimited requests to the server and, in doing so, map user data worldwide."

The data items accessible in the study are the same ones that are public to anyone who knows a user's phone number: the phone number itself, public keys, timestamps, and, if set to public, the "about" text and profile picture.

From these data points, the researchers were able to extract additional information, which allowed them to infer a user's operating system, account age, as well as the number of linked companion devices. The study shows that even this limited amount of data per user can reveal important information, both on macroscopic and individual levels.

The study also revealed a range of broader insights:

Millions of active WhatsApp accounts were identified in countries where the platform was officially banned, including China, Iran, and Myanmar.

Population-level insights into platform usage, such as the global distribution of Android (81%) versus iOS (19%) devices, regional differences in privacy behavior (e.g., use of public profile pictures or "about" tagline), and variations in user growth across countries.

A small number of cases showed re-use of cryptographic keys across different devices or phone numbers, pointing to potential weaknesses in non-official WhatsApp clients or fraudulent use.

Nearly half of all phone numbers that appeared in the 2021 Facebook data leak of 500 million phone numbers (caused by a scraping incident in 2018) were still active on WhatsApp. This highlights the enduring risks for leaked numbers (e.g., being targeted in scam calls) associated with such exposures.

The study did not involve access to message content, and no personal data was published or shared. All retrieved data was deleted by the researchers prior to publication. Message content on WhatsApp is 'end-to-end encrypted' and was not affected at any time.

"This end-to-end encryption protects the content of messages, but not necessarily the associated metadata," explains last author Aljosha Judmayer from the University of Vienna. "Our work shows that privacy risks can also arise when such metadata is collected and analyzed on a large scale."

"These findings remind us that even mature, widely trusted systems can contain design or implementation flaws that have real-world consequences," says Gegenhuber. "They show that security and privacy are not one-time achievements, but must be continuously re-evaluated as technology evolves."

"Building on our previous findings on delivery receipts and key management, we are contributing to a long-term understanding of how messaging systems evolve and where new risks arise," adds co-author Maximilian Günther from the University of Vienna.

"We are grateful to the University of Vienna researchers for their responsible partnership and diligence under our Bug Bounty program. This collaboration successfully identified a novel enumeration technique that surpassed our intended limits, allowing the researchers to scrape basic publicly available information. We had already been working on industry-leading anti-scraping systems, and this study was instrumental in stress-testing and confirming the immediate efficacy of these new defenses," says Nitin Gupta, VP of Engineering at WhatsApp.

"Importantly, the researchers have securely deleted the data collected as part of the study, and we have found no evidence of malicious actors abusing this vector. As a reminder, user messages remained private and secure thanks to WhatsApp's default end-to-end encryption, and no non-public data was accessible to the researchers."

Ethical handling and disclosure...The research was conducted according to strict ethical guidelines and in accordance with responsible disclosure principles. The findings were promptly reported to Meta, the operator of WhatsApp, which has since implemented countermeasures (e.g., rate-limiting, stricter profile information visibility) to close the identified vulnerability.

The authors argue that transparency, academic scrutiny, and independent testing are essential to maintaining trust in global communication services. They emphasize that proactive collaboration between researchers and industry can significantly improve user privacy and prevent abuse.

Provided by University of Vienna

 

TECH 


What does 'agentic' AI mean? Tech's newest buzzword is a mix of marketing fluff and real promise

For technology adopters looking for the next big thing, "agentic AI" is the future. At least, that's what the marketing pitches and tech industry T-shirts say.

What makes an artificial intelligence product "agentic" depends on who's selling it. But the promise is usually that it's a step beyond today's generative AI chatbots.

Chatbots, however useful, are all talk and no action. They can answer questions, retrieve and summarize information, write papers and generate images, music, video and lines of code. AI agents, by contrast, are supposed to be able to take actions on a person's behalf.

But if you're confused, you're not alone. Google searches for "agentic" have skyrocketed from near obscurity a year ago to a peak earlier this fall.

A new report Tuesday by researchers at the Massachusetts Institute of Technology and the Boston Consulting Group, who surveyed more than 2,000 business executives around the world, describes agentic AI as a "new class of systems" that "can plan, act, and learn on their own."

"They are not just tools to be operated or assistants waiting for instructions," says the MIT Sloan Management Review report. "Increasingly, they behave like autonomous teammates, capable of executing multistep processes and adapting as they go."

How to know if it's an AI agent or just a fancy chatbot...AI chatbots—such as the original ChatGPT that debuted three years ago this month—rely on systems called large language models that predict the next word in a sentence based on the huge trove of human writings they've been trained on. They can sound remarkably human, especially when given a voice, but are effectively performing a kind of word completion.
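
A toy bigram counter, far simpler than a real large language model, illustrates that word-completion objective:

from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the
# most frequent follower. Real LLMs use neural networks over tokens
# and vastly more data, but the prediction objective is analogous.
corpus = "the cat sat on the mat and the cat ate".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict(word: str) -> str:
    return followers[word].most_common(1)[0][0] if followers[word] else "?"

print(predict("the"))  # -> 'cat', the most frequent continuation of 'the'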

That's different from what AI developers—including ChatGPT's maker, OpenAI, and tech giants like Amazon, Google, IBM, Microsoft and Salesforce—have in mind for AI agents.

"A generative AI-based chatbot will say, 'Here are the great ideas' … and then be done," said Swami Sivasubramanian, vice president of Agentic AI at Amazon Web Services, in an interview this week. "It's useful, but what makes things agentic is that it goes beyond what a chatbot does."

Sivasubramanian, a longtime Amazon employee, took on his new role helping to lead work on AI agents in Amazon's cloud computing division earlier this year. He sees great promise in AI systems that can be given a "high-level goal" and break it down into a series of steps and act upon them. "I truly believe agentic AI is going to be one of the biggest transformations since the beginning of the cloud," he said.

For most consumers, the first encounters with AI agents could be in realms like online shopping. Set a budget and some preferences and AI agents can buy things or arrange travel bookings using your credit card. In the longer run, the hope is that they can do more complex tasks with access to your computer and a set of guidelines to follow.

"I'd love an agent that just looked at all my medical bills and explanations of benefits and figured out how to pay them," or another one that worked like a "personal shield" fighting off email spam and phishing attempts, said Thomas Dietterich, a professor emeritus at Oregon State University who has worked on developing AI assistants for decades.

Dietterich has some quibbles with certain companies using "agentic" to describe "any action a computer might do, including just looking things up on the web," but he has no doubt that the technology has immense possibilities as AI systems are given the "freedom and responsibility" to refine goals and respond to changing conditions as they work on people's behalf.

"We can imagine a world in which there are thousands or millions of agents operating and they can form coalitions," Dietterich said. "Can they form cartels? Would there be law enforcement (AI) agents?

"Agentic" is a trendy buzzword based on an older idea...Milind Tambe has been researching AI agents that work together for three decades, since the first International Conference on Multi-Agent Systems gathered in San Francisco in 1995. Tambe said he's been "amused" by the sudden popularity of "agentic" as an adjective. Previously, the word describing something that has agency was mostly found in other academic fields, such as psychology or chemistry.

But computer scientists have been debating what an agent is for as long as Tambe has been studying them.

In the 1990s, "people agreed that some software appeared more like an agent, and some felt less like an agent, and there was not a perfect dividing line," said Tambe, a professor at Harvard University. "Nonetheless, it seemed useful to use the word 'agent' to describe software or robotic entities acting autonomously in an environment, sensing the environment, reacting to it, planning, thinking."

The prominent AI researcher Andrew Ng, co-founder of the online learning company Coursera, helped popularize the adjective "agentic" more than a year ago to encompass a broader spectrum of AI tasks. At the time, he also appreciated that it was mainly "technical people" who were describing it that way.

"When I see an article that talks about 'agentic' workflows, I'm more likely to read it, since it's less likely to be marketing fluff and more likely to have been written by someone who understands the technology," Ng wrote in a June 2024 blog post.

Ng didn't respond to requests for comment on whether he still thinks that.

© 2025 The Associated Press. All rights reserved.

Monday, November 17, 2025


DIGITAL LIFE


The most effective online fact-checkers? Your peers

When the social media platform X (formerly Twitter) invited users to flag false or misleading posts, critics initially scoffed. How could the same public that spreads misinformation be trusted to correct it? But a recent study by researchers from the University of Rochester, the University of Illinois Urbana–Champaign, and the University of Virginia finds that "crowdchecking" (X's collaborative fact-checking experiment known as Community Notes) actually works.


The paper, published in the journal Information Systems Research, shows that when a community note about a post's potential inaccuracy appears beneath a tweet, its author is far more likely to retract that tweet.

"Trying to define objectively what misinformation is and then removing that content is controversial and may even backfire," notes co-author Huaxia Rui, the Xerox Professor of Information Systems and Technology at URochester's Simon Business School. "In the long run, I think a better way for misleading posts to disappear is for the authors themselves to remove those posts."

Using a causal inference method called regression discontinuity and a vast dataset of X posts (previously known as tweets), the researchers find that public, peer-generated corrections can do something experts and algorithms have struggled to achieve. Showing some notes or corrective content alongside potentially misleading information, Rui says, can indeed "nudge the author to remove that content."

Community Notes on X: An experiment in public correction...Community Notes operates on a threshold mechanism. For a corrective note to appear publicly, it must earn a "helpfulness" score of at least 0.4. (A proposed note is first shown to contributors for evaluation. The bridging algorithm used by Community Notes prioritizes ratings from a diverse range of users—specifically, from people who have disagreed in their past ratings—to prevent partisan group voting that could otherwise manipulate a note's visibility).

Conversely, notes that fall just below that threshold stay hidden from the public. That design allows for a natural experiment: the researchers were able to compare X posts with notes just above and below the cutoff (i.e., visible to the public versus visible only to Community Notes contributors), thereby measuring the causal effect of public exposure.
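
In code, the cutoff rule and the comparison it enables look roughly like this; only the 0.4 threshold comes from the study, while the posts and bandwidth below are invented for illustration:

# Toy regression-discontinuity comparison around the Community Notes cutoff.
THRESHOLD = 0.4    # helpfulness score at which a note becomes public (from the study)
BANDWIDTH = 0.05   # only compare notes scoring within +/- 0.05 of the cutoff

def note_is_public(helpfulness: float) -> bool:
    return helpfulness >= THRESHOLD

# Hypothetical (helpfulness_score, author_deleted_post) observations.
posts = [(0.38, False), (0.39, False), (0.41, True),
         (0.44, True), (0.37, True), (0.42, False)]

near_cutoff = [(s, d) for s, d in posts if abs(s - THRESHOLD) <= BANDWIDTH]
public = [d for s, d in near_cutoff if note_is_public(s)]
private = [d for s, d in near_cutoff if not note_is_public(s)]

def rate(xs):
    return sum(xs) / len(xs) if xs else float("nan")

print(f"Deletion rate, public notes:  {rate(public):.0%}")
print(f"Deletion rate, private notes: {rate(private):.0%}")
# The paper estimates that public notes make deletion ~32% more likely.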

In total, the researchers analyzed 264,600 posts on X that received at least one community note, across two separate time intervals: the first in the run-up to a U.S. presidential election (June–August 2024), when misinformation typically surges, and the second two months after the election (January–February 2025).

The results were striking: X posts with public correction notes were 32 percent more likely to be deleted by the authors than those with just private notes, demonstrating the power of voluntary retraction as an alternative to forcible removal of content. The effect persisted across both study periods.

The reputation effect...An author's decision to retract or delete, the team discovered, is primarily driven by social concerns. "You worry," says Rui, "that it's going to hurt your online reputation if others find your information misleading."

Publicly displayed Community Notes (highlighting factual inaccuracies) function as a signal to the online audience that "the content—and, by extension, its author—is untrustworthy," the researchers note.

In the social media ecosystem, reputation is important—especially for users with influence—and speed matters greatly, as misinformation tends to spread faster and farther than corrections.

The researchers found that public notes not only increased the likelihood of tweet deletions but also accelerated the process: among retracted X posts, the faster notes are publicly displayed, the sooner the noted posts are retracted.

Those whose posts attract substantial visibility and engagement, or who have large follower bases, face heightened reputational risks. As a result, verified X users (those marked by a blue check mark) were particularly quick to delete their posts when they garnered public Community Notes, exhibiting a greater concern for maintaining their credibility.

The overall pattern suggests that social media's own dynamics, such as status, visibility, and peer feedback, can improve online accuracy.

A democratic defense against misinformation? Crowdchecking, the team concludes, "strikes a balance between protecting First Amendment rights and the urgent need to curb misinformation." It relies not on censorship but on collective judgment and public correction. The algorithm employed by Community Notes emphasizes diversity and views that are supported by both sides.

Initially, Rui admits, he was surprised by the team's strong findings. "For people to be willing to retract, it's like admitting their mistakes or wrongdoing, which is difficult for anyone, especially in today's super polarized environment with all its echo chambers," he says.

At the outset of the study, the team had wondered if the correcting mechanisms might even backfire. In other words, could a public display note really induce people to retract their problematic posts or would it make them dig in their heels?

Now they know it works.

"Ultimately," Rui says, "the voluntary removal of misleading or false information is a more civic and possibly more sustainable way to resolve problems."

Provided by University of Rochester
