Wednesday, November 19, 2025


DIGITAL LIFE


Leading US antitrust scholar calls for robust policy against big tech abuses

The latest book by American jurist Tim Wu is a plea against the monopolistic abuses of big tech and a defense of a robust antitrust policy.

In "The Age of Extraction – How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity," the Columbia University law professor shows how the excessive market concentration of big tech harms consumers, suppliers, and even democracy.

"Today's big tech platforms are impressive, fun, and make our lives easier, but they are also designed to be the most advanced tools in history for extracting wealth and resources from the economy," Wu says in the book.

The jurist uses Amazon as an example of how consumers and suppliers are harmed.

In 2014, the average cost for a merchant to sell their products on Amazon was about 19% of revenue (in traditional retail it is 50%). “Sellers were happy, Amazon’s stock was rising, and it seemed that the promise of win-win on the internet was being fulfilled,” writes Wu. To grow and keep its customers and sellers loyal, Amazon subsidized prices and shipping.

Once its market power was established and many competitors had given up or been bought out, Amazon began to “extract” value. It steadily raised the monthly fee and other charges levied on sellers, which eventually reached about 30% of each sale. It began selling ads that, in practice, functioned as a mandatory fee: merchants who didn't pay for them saw their products sink in search results. As fees rose, sellers raised their prices, harming consumers.

“By 2023, fees had become less predictable, but on average, they totaled more than 50% of the product's selling price.”

At that time, however, many consumers were already loyal to Amazon Prime and its convenience, and sellers had no comparably large marketplace for their products. As a result, Amazon continues to extract value at the expense of merchants' margins and buyers' income.
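To put those percentages in concrete terms, here is a back-of-the-envelope sketch in Python. The 2023 fee breakdown is an assumed illustration consistent with the totals Wu cites, not Amazon's actual fee schedule.

```python
# Back-of-the-envelope comparison of a seller's take on a $100 item.
# Component sizes are illustrative assumptions, not Amazon's fee schedule.

PRICE = 100.00

fees_2014 = {"referral_and_other": 0.19}   # ~19% of revenue, per Wu
fees_2023 = {                              # assumed split of the >50% Wu cites
    "referral": 0.15,
    "fulfillment": 0.20,
    "ads_to_stay_visible": 0.15,
    "storage_and_misc": 0.02,
}

for year, fees in (("2014", fees_2014), ("2023", fees_2023)):
    take = sum(fees.values())
    print(f"{year}: Amazon keeps ${PRICE * take:.2f} of ${PRICE:.2f} "
          f"({take:.0%}); seller keeps ${PRICE * (1 - take):.2f}")
# 2014: Amazon keeps $19.00 of $100.00 (19%); seller keeps $81.00
# 2023: Amazon keeps $52.00 of $100.00 (52%); seller keeps $48.00
```

On the same $100 sale, the seller's share falls from about $81 to under $50 before the cost of the goods themselves.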

“It’s an undeniable truth that technological change creates wealth. It’s the distribution of that wealth that has always been the tricky part. Often, technological advances have been used to widen, not reduce, economic disparities.”

Wu was one of those responsible for antitrust policy in the Biden administration, serving as Special Advisor to the President for Competition and Technology Policy alongside Lina Khan, then chair of the FTC (Federal Trade Commission), and Jonathan Kanter, then Assistant Attorney General in charge of the Antitrust Division.

The three are standard-bearers of the new progressive antitrust movement, whose adherents are called "neo-Brandeisians" in reference to Supreme Court Justice Louis Brandeis (1856-1941). Brandeis believed that monopolies were inherently harmful and advocated a more assertive antitrust policy to ensure fair competition. In recent decades, a far less interventionist approach had prevailed in the US, under which antitrust measures are adopted only when there is clear evidence of harm to consumers, such as price increases.

In recent years, especially under the Biden administration, but also during Trump's two terms, a more militant stance has been adopted against the excessive market power of big tech companies. The Department of Justice and the FTC have filed lawsuits against Google, Facebook, Amazon, and other large platforms, accusing them of anti-competitive behavior.

When recommending remedies against the excessive platformization of the economy, the American jurist doesn't stray far from familiar solutions: strong antitrust rules, neutrality and non-discrimination in the provision of services, empowerment of countervailing powers (unions, consumers), and regulating big tech companies the way public utilities (electricity, water, highways) are regulated.

He goes further by linking the concentration of power and wealth in the hands of a few companies to the recent proliferation of governments with authoritarian tendencies.

Wu quotes US Senator Estes Kefauver, who, in 1950, warned: “I am not an alarmist, but the history of other nations, where mergers and concentrations have left economic control in the hands of very few people, is too clear to be ignored.”

According to him, a point would be reached where intervention would be necessary to regain control of the economy. “This results in a fascist state or the nationalization of industries and, subsequently, a socialist or communist state.”

For Wu, the path to an authoritarian state “passes through the imbalance of economic power, and a platform economy contributes to this problem.”

According to Wu, after the systemic extraction of value by platforms, mass resentment arises, and this is the opportunity for autocrats to rise.

As Louis Brandeis, the intellectual godfather of the new antitrust warriors, said: “We can have democracy in this country or we can have great wealth concentrated in the hands of a few, but we cannot have both.”

mundophone

Tuesday, November 18, 2025

 

DIGITAL LIFE


WhatsApp security vulnerability discovered by researchers

IT security researchers from the University of Vienna and SBA Research identified and responsibly disclosed a large-scale privacy weakness in WhatsApp's contact discovery mechanism that allowed the enumeration of 3.5 billion accounts. In collaboration with the researchers, Meta has since addressed and mitigated the issue.

The study underscores the importance of continuous, independent security research on widely used communication platforms and highlights the risks associated with the centralization of instant messaging services. The preprint of the study has now been released on GitHub, and the results will be presented in 2026 at the Network and Distributed System Security (NDSS) Symposium.

WhatsApp's contact discovery mechanism can use a user's address book to find other WhatsApp users by their phone number. Using the same underlying mechanism, the researchers demonstrated that it was possible to query more than 100 million phone numbers per hour through WhatsApp's infrastructure (at that rate, the entire user base could be swept in about a day and a half), confirming more than 3.5 billion active accounts across 245 countries.

"Normally, a system shouldn't respond to such a high number of requests in such a short time—particularly when originating from a single source," explains lead author Gabriel Gegenhuber from the University of Vienna. "This behavior exposed the underlying flaw, which allowed us to issue effectively unlimited requests to the server and, in doing so, map user data worldwide."

The data items accessible in the study are the same ones that are public to anyone who knows a user's phone number: the phone number itself, public keys, timestamps, and, if set to public, the "about" text and profile picture.

From these data points, the researchers were able to extract additional information, which allowed them to infer a user's operating system, account age, as well as the number of linked companion devices. The study shows that even this limited amount of data per user can reveal important information, both on macroscopic and individual levels.

The study also revealed a range of broader insights:

Millions of active WhatsApp accounts were identified in countries where the platform was officially banned, including China, Iran, and Myanmar.

Population-level insights into platform usage, such as the global distribution of Android (81%) versus iOS (19%) devices, regional differences in privacy behavior (e.g., use of public profile pictures or "about" tagline), and variations in user growth across countries.

A small number of cases showed re-use of cryptographic keys across different devices or phone numbers, pointing to potential weaknesses in non-official WhatsApp clients or fraudulent use.

Nearly half of all phone numbers that appeared in the 2021 Facebook data leak of 500 million phone numbers (caused by a scraping incident in 2018) were still active on WhatsApp. This highlights the enduring risks for leaked numbers (e.g., being targeted in scam calls) associated with such exposures.

The study did not involve access to message content, and no personal data was published or shared. All retrieved data was deleted by the researchers prior to publication. Message content on WhatsApp is 'end-to-end encrypted' and was not affected at any time.

"This end-to-end encryption protects the content of messages, but not necessarily the associated metadata," explains last author Aljosha Judmayer from the University of Vienna. "Our work shows that privacy risks can also arise when such metadata is collected and analyzed on a large scale."

"These findings remind us that even mature, widely trusted systems can contain design or implementation flaws that have real-world consequences," says Gegenhuber. "They show that security and privacy are not one-time achievements, but must be continuously re-evaluated as technology evolves."

"Building on our previous findings on delivery receipts and key management, we are contributing to a long-term understanding of how messaging systems evolve and where new risks arise," adds co-author Maximilian Günther from the University of Vienna.

"We are grateful to the University of Vienna researchers for their responsible partnership and diligence under our Bug Bounty program. This collaboration successfully identified a novel enumeration technique that surpassed our intended limits, allowing the researchers to scrape basic publicly available information. We had already been working on industry-leading anti-scraping systems, and this study was instrumental in stress-testing and confirming the immediate efficacy of these new defenses," says Nitin Gupta, VP of Engineering at WhatsApp.

"Importantly, the researchers have securely deleted the data collected as part of the study, and we have found no evidence of malicious actors abusing this vector. As a reminder, user messages remained private and secure thanks to WhatsApp's default end-to-end encryption, and no non-public data was accessible to the researchers."

Ethical handling and disclosure...The research was conducted according to strict ethical guidelines and in accordance with responsible disclosure principles. The findings were promptly reported to Meta, the operator of WhatsApp, which has since implemented countermeasures (e.g., rate-limiting, stricter profile information visibility) to close the identified vulnerability.
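Rate limiting of the kind mentioned above can be illustrated with a minimal token-bucket sketch in Python. This is a generic illustration with assumed parameters (bucket capacity, refill rate, the hypothetical check_contact_query helper), not WhatsApp's actual implementation.

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter (illustrative only).

    Each client gets a bucket; every contact-discovery query spends one
    token. Sustained bursts, like the enumeration in the study, drain the
    bucket and are rejected until tokens refill at the steady-state rate.
    """
    def __init__(self, capacity: int = 100, refill_per_sec: float = 1.0):
        self.capacity = capacity              # max burst size (assumed)
        self.refill_per_sec = refill_per_sec  # steady-state rate (assumed)
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, cost: int = 1) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# One bucket per client identifier (e.g., per account or device).
buckets: dict[str, TokenBucket] = {}

def check_contact_query(client_id: str) -> bool:
    """Hypothetical server-side gate in front of contact discovery."""
    return buckets.setdefault(client_id, TokenBucket()).allow()
```

At one token per second, a single client tops out near 86,400 lookups a day, a far cry from the 100 million per hour the researchers achieved.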

The authors argue that transparency, academic scrutiny, and independent testing are essential to maintaining trust in global communication services. They emphasize that proactive collaboration between researchers and industry can significantly improve user privacy and prevent abuse.

Provided by University of Vienna

 

TECH 


What does 'agentic' AI mean? Tech's newest buzzword is a mix of marketing fluff and real promise

For technology adopters looking for the next big thing, "agentic AI" is the future. At least, that's what the marketing pitches and tech industry T-shirts say.

What makes an artificial intelligence product "agentic" depends on who's selling it. But the promise is usually that it's a step beyond today's generative AI chatbots.

Chatbots, however useful, are all talk and no action. They can answer questions, retrieve and summarize information, write papers and generate images, music, video and lines of code. AI agents, by contrast, are supposed to be able to take actions on a person's behalf.

But if you're confused, you're not alone. Google searches for "agentic" have skyrocketed from near obscurity a year ago to a peak earlier this fall.

A new report Tuesday by researchers at the Massachusetts Institute of Technology and the Boston Consulting Group, who surveyed more than 2,000 business executives around the world, describes agentic AI as a "new class of systems" that "can plan, act, and learn on their own."

"They are not just tools to be operated or assistants waiting for instructions," says the MIT Sloan Management Review report. "Increasingly, they behave like autonomous teammates, capable of executing multistep processes and adapting as they go."

How to know if it's an AI agent or just a fancy chatbot...AI chatbots—such as the original ChatGPT that debuted three years ago this month—rely on systems called large language models that predict the next word in a sentence based on the huge trove of human writings they've been trained on. They can sound remarkably human, especially when given a voice, but are effectively performing a kind of word completion.

That's different from what AI developers—including ChatGPT's maker, OpenAI, and tech giants like Amazon, Google, IBM, Microsoft and Salesforce—have in mind for AI agents.

"A generative AI-based chatbot will say, 'Here are the great ideas' … and then be done," said Swami Sivasubramanian, vice president of Agentic AI at Amazon Web Services, in an interview this week. "It's useful, but what makes things agentic is that it goes beyond what a chatbot does."

Sivasubramanian, a longtime Amazon employee, took on his new role helping to lead work on AI agents in Amazon's cloud computing division earlier this year. He sees great promise in AI systems that can be given a "high-level goal" and break it down into a series of steps and act upon them. "I truly believe agentic AI is going to be one of the biggest transformations since the beginning of the cloud," he said.
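That distinction between answering and acting can be made concrete with a minimal sketch of the plan-act-observe loop common to agent designs. Everything here (call_llm, the TOOLS table, the tool:arg convention) is a hypothetical stand-in, not Amazon's or any other vendor's actual API.

```python
# Minimal sketch of the plan-act-observe loop that separates an "agent"
# from a chatbot. call_llm and TOOLS are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    """Stand-in for a language-model call; a real agent would query an LLM."""
    return "FINISH: (stubbed model output)"

TOOLS = {
    "search_flights": lambda query: f"results for {query!r}",  # stubbed tool
    "book_flight": lambda flight_id: f"booked {flight_id}",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    """A chatbot answers once; an agent loops: plan, act, observe, adapt."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # Plan: ask the model for the next action given everything so far.
        decision = call_llm("\n".join(history) + "\nNext action (tool:arg) or FINISH:")
        if decision.startswith("FINISH"):
            return decision
        tool_name, _, arg = decision.partition(":")
        # Act: run the chosen tool against the outside world.
        observation = TOOLS[tool_name](arg.strip())
        # Observe: feed the result back so the next plan can adapt.
        history.append(f"ACTION: {decision}\nOBSERVATION: {observation}")
    return "stopped: step budget exhausted"

print(run_agent("book the cheapest flight to Lisbon under $300"))
```

The loop, not the model, is what makes the system "agentic": the same LLM that powers a chatbot becomes an agent once its outputs are allowed to trigger tools and its next step depends on what those tools return.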

For most consumers, the first encounters with AI agents could be in realms like online shopping. Set a budget and some preferences and AI agents can buy things or arrange travel bookings using your credit card. In the longer run, the hope is that they can do more complex tasks with access to your computer and a set of guidelines to follow.

"I'd love an agent that just looked at all my medical bills and explanations of benefits and figured out how to pay them," or another one that worked like a "personal shield" fighting off email spam and phishing attempts, said Thomas Dietterich, a professor emeritus at Oregon State University who has worked on developing AI assistants for decades.

Dietterich has some quibbles with certain companies using "agentic" to describe "any action a computer might do, including just looking things up on the web," but he has no doubt that the technology has immense possibilities as AI systems are given the "freedom and responsibility" to refine goals and respond to changing conditions as they work on people's behalf.

"We can imagine a world in which there are thousands or millions of agents operating and they can form coalitions," Dietterich said. "Can they form cartels? Would there be law enforcement (AI) agents?

"Agentic" is a trendy buzzword based on an older idea...Milind Tambe has been researching AI agents that work together for three decades, since the first International Conference on Multi-Agent Systems gathered in San Francisco in 1995. Tambe said he's been "amused" by the sudden popularity of "agentic" as an adjective. Previously, the word describing something that has agency was mostly found in other academic fields, such as psychology or chemistry.

But computer scientists have been debating what an agent is for as long as Tambe has been studying them.

In the 1990s, "people agreed that some software appeared more like an agent, and some felt less like an agent, and there was not a perfect dividing line," said Tambe, a professor at Harvard University. "Nonetheless, it seemed useful to use the word 'agent' to describe software or robotic entities acting autonomously in an environment, sensing the environment, reacting to it, planning, thinking."

The prominent AI researcher Andrew Ng, co-founder of online learning company Coursera, helped advocate for popularizing the adjective "agentic" more than a year ago to encompass a broader spectrum of AI tasks. At the time, he also appreciated that mainly "technical people" were describing it that way.

"When I see an article that talks about 'agentic' workflows, I'm more likely to read it, since it's less likely to be marketing fluff and more likely to have been written by someone who understands the technology," Ng wrote in a June 2024 blog post.

Ng didn't respond to requests for comment on whether he still thinks that.

© 2025 The Associated Press. All rights reserved.

Monday, November 17, 2025


DIGITAL LIFE


The most effective online fact-checkers? Your peers

When the social media platform X (formerly Twitter) invited users to flag false or misleading posts, critics initially scoffed. How could the same public that spreads misinformation be trusted to correct it? But a recent study by researchers from the University of Rochester, the University of Illinois Urbana–Champaign, and the University of Virginia finds that "crowdchecking" (X's collaborative fact-checking experiment known as Community Notes) actually works.

X posts with public correction notes were 32 percent more likely to be deleted by the authors than those with just private notes.

The paper, published in the journal Information Systems Research, shows that when a community note about a post's potential inaccuracy appears beneath a tweet, its author is far more likely to retract that tweet.

"Trying to define objectively what misinformation is and then removing that content is controversial and may even backfire," notes co-author Huaxia Rui, the Xerox Professor of Information Systems and Technology at URochester's Simon Business School. "In the long run, I think a better way for misleading posts to disappear is for the authors themselves to remove those posts."

Using a causal inference method called regression discontinuity and a vast dataset of X posts (previously known as tweets), the researchers find that public, peer-generated corrections can do something experts and algorithms have struggled to achieve. Showing some notes or corrective content alongside potentially misleading information, Rui says, can indeed "nudge the author to remove that content."

Community Notes on X: An experiment in public correction...Community Notes operates on a threshold mechanism. For a corrective note to appear publicly, it must earn a "helpfulness" score of at least 0.4. (A proposed note is first shown to contributors for evaluation. The bridging algorithm used by Community Notes prioritizes ratings from a diverse range of users—specifically, from people who have disagreed in their past ratings—to prevent partisan group voting that could otherwise manipulate a note's visibility).
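As a rough illustration of that threshold-plus-bridging design, here is a toy sketch in Python. The 0.4 cutoff comes from the study; the two-camp scoring below is a deliberate simplification (the production algorithm infers rater "camps" via matrix factorization over rating histories), and all names are illustrative.

```python
# Toy sketch of the "bridging" idea: a note goes public only if raters
# from opposing camps both find it helpful. Camps are given explicitly
# here; the real algorithm infers them from past rating behavior.

HELPFULNESS_THRESHOLD = 0.4  # publication cutoff reported in the study

def toy_helpfulness(ratings: list[tuple[str, bool]]) -> float:
    """ratings: (camp, found_helpful) pairs, camp in {"A", "B"}."""
    by_camp: dict[str, list[float]] = {}
    for camp, helpful in ratings:
        by_camp.setdefault(camp, []).append(1.0 if helpful else 0.0)
    if len(by_camp) < 2:
        return 0.0  # one-sided support is not enough
    # Score is the *lowest* camp's approval rate, so a partisan bloc
    # cannot push a note live on its own.
    return min(sum(v) / len(v) for v in by_camp.values())

def note_is_public(ratings: list[tuple[str, bool]]) -> bool:
    return toy_helpfulness(ratings) >= HELPFULNESS_THRESHOLD

# A note only camp A likes stays hidden; cross-camp support publishes it.
print(note_is_public([("A", True), ("A", True), ("B", False)]))  # False
print(note_is_public([("A", True), ("B", True), ("B", False)]))  # True (min rate 0.5)
```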

Conversely, notes that fall just below that threshold stay hidden from the public. That design allows for a natural experiment: the researchers were able to compare X posts with notes just above and below the cutoff (i.e., visible to the public versus visible only to Community Notes contributors), thereby measuring the causal effect of public exposure.
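The natural-experiment logic can be sketched in a few lines of Python: compare deletion rates for posts whose notes scored just below versus just above the cutoff. The column names and the bandwidth are illustrative assumptions; the published study uses formal regression discontinuity estimators rather than this simple difference in means.

```python
import pandas as pd

CUTOFF, BANDWIDTH = 0.4, 0.05  # bandwidth is an illustrative choice

def rd_deletion_effect(df: pd.DataFrame) -> float:
    """df needs columns: 'score' (note helpfulness) and 'deleted' (0/1).

    Posts just below the cutoff (note stays private) act as controls for
    posts just above it (note goes public); near the threshold, which
    side a note lands on is as good as random.
    """
    window = df[(df["score"] - CUTOFF).abs() <= BANDWIDTH]
    above = window.loc[window["score"] >= CUTOFF, "deleted"].mean()
    below = window.loc[window["score"] < CUTOFF, "deleted"].mean()
    return above - below  # jump in deletion rate attributable to visibility
```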

In total, the researchers analyzed 264,600 posts on X that received at least one community note during two separate time intervals—the first before a U.S. presidential election, which is a time when misinformation typically surges (June–August 2024), and the second two months after the election (January–February 2025).

The results were striking: X posts with public correction notes were 32 percent more likely to be deleted by the authors than those with just private notes, demonstrating the power of voluntary retraction as an alternative to forcible removal of content. The effect persisted across both study periods.

The reputation effect...An author's decision to retract or delete, the team discovered, is primarily driven by social concerns. "You worry," says Rui, "that it's going to hurt your online reputation if others find your information misleading."

Publicly displayed Community Notes (highlighting factual inaccuracies) function as a signal to the online audience that "the content—and, by extension, its author—is untrustworthy," the researchers note.

In the social media ecosystem, reputation is important—especially for users with influence—and speed matters greatly, as misinformation tends to spread faster and farther than corrections.

The researchers found that public notes not only increased the likelihood of deletion but also accelerated it: among retracted X posts, the faster a note was publicly displayed, the sooner the flagged post was taken down.

Users whose posts attract substantial visibility and engagement, or who have large follower bases, face heightened reputational risks. As a result, verified X users (those marked by a blue check mark) were particularly quick to delete their posts when these garnered public Community Notes, exhibiting greater concern for maintaining their credibility.

The overall pattern suggests that social media's own dynamics, such as status, visibility, and peer feedback, can improve online accuracy.

A democratic defense against misinformation? Crowdchecking, the team concludes, "strikes a balance between protecting First Amendment rights and the urgent need to curb misinformation." It relies not on censorship but on collective judgment and public correction. The algorithm employed by Community Notes emphasizes diversity and views that are supported by both sides.

Initially, Rui admits, he was surprised by the team's strong findings. "For people to be willing to retract, it's like admitting their mistakes or wrongdoing, which is difficult for anyone, especially in today's super polarized environment with all its echo chambers," he says.

At the outset of the study, the team had wondered if the correcting mechanisms might even backfire. In other words, could a public display note really induce people to retract their problematic posts or would it make them dig in their heels?

Now they know it works.

"Ultimately," Rui says, "the voluntary removal of misleading or false information is a more civic and possibly more sustainable way to resolve problems."

Provided by University of Rochester


DIGITAL LIFE


AI, Blockchain, and NFC fraud mark financial cybercrime in 2025

Financial cybercrime in 2025 reached unprecedented levels of complexity, marked by the increasing use of artificial intelligence, blockchain, and NFC fraud that challenged banks and fintechs globally.

According to the Kaspersky Security Bulletin 2025, dedicated to financial cybersecurity, the year was dominated by more coordinated and technically sophisticated attacks, in which organized crime and cybercrime came closer together in an unprecedented way.

According to Kaspersky, malware spread more frequently through messaging applications, AI-assisted attacks became faster and harder to detect, and contactless payment fraud went from an emerging phenomenon to a consolidated trend.

The result is a systemic risk: institutions, critical infrastructures, and end users began to share the same digital battlefield.

The report summarizes the financial sector's exposure to attacks with a set of indicators that paint a worrying picture.

8.15% of users faced online threats related to the financial sector.

15.81% of users in the sector were targeted by local threats (malware already present on devices).

12.8% of B2B companies in the financial sector suffered ransomware attacks during the analyzed period.

The number of unique users in the financial sector with ransomware detections increased by 35.7% in 2025 compared to 2023.

1,338,357 banking trojan attacks were identified throughout the year.

These numbers confirm that financial cybercrime is not limited to isolated incidents, but functions as a global ecosystem with the capacity to adapt techniques and exploit new attack surfaces at scale.

Evolving tactics: supply chain, organized crime and AI...One of the most striking trends of the year was the intensification of attacks on the financial services supply chain. According to Kaspersky, several incidents exploited vulnerabilities in external vendors to reach national payment networks and central systems, demonstrating the domino effect that a single weak link can trigger.

The convergence between organized crime and cybercrime has also become more evident.

Groups traditionally associated with physical activities, such as trafficking or extortion, have begun to integrate digital capabilities, combining social engineering, insider contacts, and technical exploitation to increase the impact and profitability of attacks.

Artificial intelligence has added a layer of automation and speed.

According to Kaspersky, malware with AI components has incorporated automated propagation and evasion mechanisms, reducing the time between the development and execution of attacks, which makes it more difficult for security teams to respond.

Attackers have not abandoned classic malware, but have altered their distribution channels. Instead of relying primarily on email phishing campaigns, banking trojans have shifted to using popular messaging apps as their main vector, leveraging the trust users place in these platforms.

Kaspersky indicates that banking trojans have been rewritten to operate on top of messaging services, allowing for large-scale infection campaigns without the need for traditional spam infrastructure.

This movement shifts the risk to environments where detection is less mature and where the boundaries between personal conversation and professional communication are more blurred.

On the mobile front, the report highlights the role of Android malware with ATS (Automated Transfer System) capabilities, which automates fraudulent transactions and alters amounts and recipients in real time without the user's knowledge.

According to Kaspersky, this type of malware acts on legitimate banking apps, bypassing visual checks and confusing users who believe they are operating in a secure context.

NFC-based fraud has evolved in two directions. On one hand, in-person schemes in busy locations illegally exploit contactless payments using rogue devices or cards.

On the other hand, remote attacks resort to social engineering and fake apps that mimic genuine banks, directing the user to payment authorizations without realizing the fraud.

One of the most unsettling trends is the use of blockchain as a command and control infrastructure for financial malware.

According to Kaspersky, some groups have begun to inscribe malware commands in smart contracts, leveraging the Web3 ecosystem to orchestrate attacks.

This approach increases the resilience of malicious campaigns.

Even if traditional servers are deactivated, the control logic persists in the blockchain, making eradication significantly more difficult and raising dilemmas about the governance and oversight of decentralized networks.

Ransomware remains, some families disappear... Ransomware has remained a structural threat in the financial sector.

Globally, 12.8% of B2B financial organizations were affected in the analyzed period, with regional incidences of 12.9% in Africa, 12.6% in Latin America, and 9.4% in Russia and CIS countries.

Kaspersky also indicates that certain malware families have begun to disappear as specific groups cease operations or migrate to more modern tools.

This does not mean less risk, but rather a reorganization of the criminal ecosystem, which replaces old lineages with new malware platforms, often offered under MaaS (Malware-as-a-Service) models.

What to Expect in 2026: WhatsApp, deepfakes, and “agentic AI malware”...In the predictions chapter, Kaspersky presents a vision that projects the intensification of already ongoing trends, as well as the emergence of new threat categories.

Among the points highlighted by the company for 2026 are:

Banking Trojans rewritten for distribution via WhatsApp and other messaging apps, targeting corporate and government environments that still rely on desktop banking systems.

Expansion of deepfake services and AI tools at the service of social engineering, impacting job interviews, KYC processes, and identity fraud.

Emergence of regional infostealers, inspired by families like Lumma and Redline, focused on specific countries or blocs, in the MaaS model.

More attacks on NFC payments, with new tools and malware aimed at contactless transactions in different contexts.

Arrival of “agentic” AI malware, capable of altering its behavior during execution, analyzing the environment, and adapting to the defenses and vulnerabilities it encounters.

Persistence of classic frauds, but with new distribution methods on emerging platforms.

Continued sale of counterfeit "pre-infected" smart devices, with trojans like Triada on smartphones, televisions, and other connected equipment.

These predictions describe a scenario in which technical sophistication combines with the industrialization of cybercrime, lowering barriers to entry and expanding the potential impact of less experienced actors.

Recommendations: between best practices and commercial interest...Kaspersky presents a set of recommendations for users and organizations, combining recognized best practices with the explicit promotion of its products and services.

For individual users, the company suggests:

Downloading applications only from official stores, verifying the authenticity of the developer.

Disabling NFC whenever there is no need for its use and opting for digital wallets with mechanisms to block unauthorized communications.

Monitoring accounts and transactions regularly to quickly detect suspicious activity.

Protecting financial transactions with the Kaspersky Premium solution and the Safe Money feature, which, according to the company, validates the authenticity of banking and payment websites.

For financial organizations, Kaspersky recommends a "cybersecurity ecosystem" approach that unites people, processes, and technology.

Among the measures proposed by the company are:

Assessing the entire infrastructure, correcting vulnerabilities, and using external specialists to identify hidden risks.

Implementing integrated platforms for monitoring and controlling all attack vectors, with rapid detection and immediate response; the Kaspersky Next range is presented as an example of a solution with real-time protection and EDR/XDR capabilities.

Monitoring the threat landscape with Kaspersky threat intelligence services and promoting regular awareness training to create a "human firewall".

Although the technical guidelines align with widely accepted industry best practices, it is important to note that the report also serves as a commercial positioning piece for Kaspersky's portfolio, which requires critical reading by decision-makers and security teams.

Information from Kaspersky

Sunday, November 16, 2025

 

DIGITAL LIFE


Journalism has become addicted to Big Tech, and that's not a good thing...

Large technology companies positioned themselves as allies—sometimes even saviors—of journalism for much of the last decade. Their actions were compelling: they distributed resources through grants, created project incubators, supported fact-checking initiatives, offered training on their tools, permeated newsrooms with a culture of innovation, launched news curation tools, and closed advertising deals directly with media outlets.

From 2013 to 2022, while news organizations faced changes in business models imposed by the internet and competed for people's attention with social media, Big Tech gave them some room to operate and grow. Many successful companies and non-profit organizations were born during this time.

Certainly, money and access to tools were part of the attraction. But, mainly, the game that newsrooms became addicted to was that of distribution.

Media companies began investing heavily in search engine optimization (SEO), rewriting headlines to chase social media algorithms that reward absurdity, polluting websites with low-quality banner ads, adopting restrictive publishing formats like Instant Articles and AMP, and fragmenting good content to fit recommendation tools like Discover and news feeds. This list could get very long, very quickly, and it has begun to backfire.

News organizations (especially small and medium-sized ones) became so dependent on this dynamic that when platforms began to change their products, many newsrooms began to feel the effects.

Many news organizations that do truly impactful work face the same problems. Some recent examples may make my point clearer.

After a decade of investing in SEO, some news organizations are seeing their traffic from Google (one of the main sources) plummet because of the AI summaries attached to most searches.

Similarly, in July 2025, Meta said it would begin charging per message on its commercial WhatsApp broadcast lists, disrupting distribution flows for many news organizations that relied on this distribution (the messaging app is the standard communication tool in Latin America and many other countries).

Furthermore, when Twitter was sold to Elon Musk and became X, external links were significantly demoted in the feed, considerably affecting news organizations that had invested years building huge follower bases. Just one decision can render significant investments obsolete.

Another path for our journalism...I believe there is a way to reduce this effect that Big Tech has on us, and that involves bringing news organizations closer together in cooperation. During my 2026 John S. Knight Fellowship at Stanford University, I am developing a framework that facilitates commercial and institutional collaboration between news organizations.

This means creating practical foundations for organizations to work with each other, such as legal guidelines, partnership ideas, business cooperation methods, technical alternatives, and even trying to bring independent technology companies into the game.

Collaboration needs to require little effort and be seen as mutually beneficial for the newsrooms involved, avoiding friction with our audiences. It also has to be seen as crucial for survival, helping newsrooms share access to each other's communities and distribution networks – let's leverage each other's expertise and strengthen the ecosystem as a whole.

The Dilemma of Journalism...We are at a critical juncture where technological innovation is crucial for the future of journalism, just as it was for the emergence of the internet. There is a potential risk of over-reliance on Big Tech for immediate financial relief, but this may not be a sustainable long-term solution.

The news industry needs to develop more diversified and resilient revenue models to ensure its survival in the face of ongoing digital disruption. Relying on revenue from technology companies can create a dependency that may compromise the impartiality of news organizations. Changes in algorithms, policies, or market focus of these companies can abruptly impact the revenue sources of news outlets.

There is a risk that licensed content will be used in ways that dilute its quality or context. AI platforms may misinterpret or distort journalistic content, further undermining public trust.

There is a need for a reassessment of the relationship between technology companies and media organizations, with balanced partnerships, greater transparency in funding initiatives, and stronger regulatory frameworks to protect journalistic integrity.

Research suggests that the influence of tech giants can be more pronounced in regions where news organizations struggle with limited resources. There is a risk of creating a two-tiered global news ecosystem, in which only well-funded organizations can keep up with technological advancements. This disparity could have far-reaching implications for access to information and democratic discourse globally.

The current challenge lies in harnessing the benefits of technological advancements while preserving the essential role of independent journalism in democratic societies. The coming years will be crucial in determining the future of the news industry.

mundophone


DIGITAL LIFE


When the algorithm decides how much each of us is worth

Personalization can bring real benefits if used fairly and transparently, but when personalization stops serving the consumer and starts exploiting them, technological progress turns into digital discrimination, warns Paulo Fonseca in this opinion piece.

Digital personalization has become one of the hallmarks of the modern economy. It allows products, services, and communications to be adapted to each consumer's preferences, making their digital experience simpler, more relevant, and more efficient. But there is a boundary that cannot be crossed: the one that separates convenience from exploitation. When personalization stops serving the consumer and starts exploiting them, technological progress turns into digital discrimination.

The recent case of Delta Air Lines, accused of using artificial intelligence systems to adjust airfares according to how much each passenger would be willing to pay for a trip, is just one example of a hidden phenomenon that is becoming increasingly widespread, even though many companies vehemently claim they do not tailor pricing models to their customers' profiles. The truth is that prices, interfaces, and even the messages we receive are increasingly shaped by our digital footprint. And often, not even Hercule Poirot could figure out what's behind the personalization.

It's important to distinguish between different types of personalization. There's content personalization – which includes advertising and recommendations from online platforms. There's interface personalization – which changes how each user sees and interacts with websites and applications, influencing decisions and perceptions. And there's price personalization – the most sensitive and potentially most problematic – when the value of a product or service is determined not only by global demand, but also by the analysis of our personal data, from purchase history to our publications and searches, and even the type of device or location.

Personalization can bring real benefits if used fairly and transparently. It can simplify choices, enable relevant offers, facilitate discounts, and improve the relationship between companies and their customers. But there are serious risks when personalization becomes a weapon of discrimination. Personalization cannot be used to exploit our weaknesses or our level of need for a product or service. It cannot serve to extract the maximum amount someone is willing to pay, nor to conceal the real market price.

DECO (Portuguese Consumer Protection Association) has developed extensive work in this area. For this organization, price personalization based on behavioral data that generates discrimination is incompatible with the Charter of Fundamental Rights of the European Union itself. When the price that each person sees is different, and especially when that difference results from privileged information that companies collect about us, the balance in the market disappears. Each person becomes an isolated market, without reference or possible comparison.

Dynamic pricing is also an example of this. In theory, it should reflect overall demand – more demand, higher price; less demand, lower price. But in practice, it is no longer the number of people interested in a product that determines its price, but rather the digital profile of the person seeking it. The risk is clear: when my online behavior influences the price I am shown, the market ceases to be a space of competition and becomes a reflection of the citizen's digital vulnerability.
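The difference can be made concrete with a toy sketch: a demand-based price responds to aggregate interest and is the same for every buyer at a given moment, while a profile-based price moves with who is asking. Every number and field below is an arbitrary illustration, not any company's actual model.

```python
# Toy contrast between demand-based and profile-based pricing.
# All coefficients are arbitrary illustrations, not a real airline's model.

BASE_FARE = 100.0

def demand_based_price(seats_left: int, searches_last_hour: int) -> float:
    """Classic dynamic pricing: identical for everyone at a given moment."""
    scarcity = max(0, 50 - seats_left) * 1.5
    demand = searches_last_hour * 0.02
    return BASE_FARE + scarcity + demand

def profile_based_price(seats_left: int, searches_last_hour: int,
                        profile: dict) -> float:
    """Personalized pricing: the buyer's own data moves the number.

    Two people checking the same flight at the same moment see different
    fares, so comparison shopping stops working.
    """
    price = demand_based_price(seats_left, searches_last_hour)
    if profile.get("searched_this_route_before"):
        price *= 1.10   # inferred urgency
    if profile.get("device") == "high-end":
        price *= 1.05   # inferred willingness to pay
    return round(price, 2)

same_moment = (20, 500)  # seats left, searches in the last hour
print(demand_based_price(*same_moment))                                  # one price for all
print(profile_based_price(*same_moment, {"device": "high-end",
                                         "searched_this_route_before": True}))
```

In the first function, the buyer is invisible; in the second, the buyer's digital footprint is itself an input, which is exactly the shift the author is warning about.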

But what should the solution be? The path is not to reject personalization altogether, but to put it on the right track. Personalization can and should exist when it offers clear advantages, such as real discounts, helpful recommendations, and a more accessible experience. But it must have transparent and auditable limits. Companies should be required to demonstrate that their practices are fair, and personalization should only ever be opt-in, never the default.

The European Commission has a decisive role here. It is not enough to require companies to disclose that they practice personalized pricing – it is necessary to define red lines through a strong, European, and coherent institutional response that establishes the ethical and legal limits of personalization.

The challenge, as we have already said, will always be to balance innovation with protection. The fair price is the one that promotes trust, better products, and the right choices. It is not the one that depends on how much an algorithm thinks I can pay. The digital future cannot be a game where consumers pay the price of their own information. If we let algorithms decide how much each of us is worth, then we will cease to be free consumers and become perfect targets. And no progress justifies this.

*Paulo Fonseca is strategic and institutional relations advisor at DECO

mundophone
