Monday, November 17, 2025


DIGITAL LIFE


The most effective online fact-checkers? Your peers

When the social media platform X (formerly Twitter) invited users to flag false or misleading posts, critics initially scoffed. How could the same public that spreads misinformation be trusted to correct it? But a recent study by researchers from the University of Rochester, the University of Illinois Urbana–Champaign, and the University of Virginia finds that "crowdchecking" (X's collaborative fact-checking experiment known as Community Notes) actually works.

X posts with public correction notes were 32 percent more likely to be deleted by the authors than those with just private notes.

The paper, published in the journal Information Systems Research, shows that when a community note about a post's potential inaccuracy appears beneath a tweet, its author is far more likely to retract that tweet.

"Trying to define objectively what misinformation is and then removing that content is controversial and may even backfire," notes co-author Huaxia Rui, the Xerox Professor of Information Systems and Technology at URochester's Simon Business School. "In the long run, I think a better way for misleading posts to disappear is for the authors themselves to remove those posts."

Using a causal inference method called regression discontinuity and a vast dataset of X posts (previously known as tweets), the researchers find that public, peer-generated corrections can do something experts and algorithms have struggled to achieve. Showing some notes or corrective content alongside potentially misleading information, Rui says, can indeed "nudge the author to remove that content."

Community Notes on X: An experiment in public correction...Community Notes operates on a threshold mechanism. For a corrective note to appear publicly, it must earn a "helpfulness" score of at least 0.4. (A proposed note is first shown to contributors for evaluation. The bridging algorithm used by Community Notes prioritizes ratings from a diverse range of users—specifically, from people who have disagreed in their past ratings—to prevent partisan group voting that could otherwise manipulate a note's visibility).

Conversely, notes that fall just below that threshold remain hidden from the public. That design creates a natural experiment: the researchers could compare X posts whose notes landed just above and just below the cutoff (i.e., visible to the public versus visible only to Community Notes contributors), thereby measuring the causal effect of public exposure.
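To make that design concrete, here is a minimal Python sketch on synthetic data. It is not the researchers' code or the actual Community Notes scoring algorithm: the 0.4 threshold comes from the article, while the bandwidth, sample size, and deletion probabilities are invented purely for illustration.

```python
import random

HELPFULNESS_THRESHOLD = 0.4  # cutoff reported in the article
BANDWIDTH = 0.05             # hypothetical window around the cutoff

def is_public(helpfulness_score: float) -> bool:
    """A note becomes publicly visible once its score reaches the threshold."""
    return helpfulness_score >= HELPFULNESS_THRESHOLD

# Synthetic data: (helpfulness_score, post_was_deleted) pairs.
# A real analysis would use the researchers' dataset of noted X posts.
random.seed(0)
posts = []
for _ in range(10_000):
    score = random.uniform(0.2, 0.6)
    # Hypothetical deletion probabilities: the jump at the cutoff mimics
    # the effect of public exposure described in the study.
    p_delete = 0.12 if not is_public(score) else 0.12 * 1.32
    posts.append((score, random.random() < p_delete))

# Regression-discontinuity-style comparison: posts just below and just above
# the cutoff are assumed comparable, so the gap estimates the effect of
# the note being publicly visible.
below = [d for s, d in posts if HELPFULNESS_THRESHOLD - BANDWIDTH <= s < HELPFULNESS_THRESHOLD]
above = [d for s, d in posts if HELPFULNESS_THRESHOLD <= s < HELPFULNESS_THRESHOLD + BANDWIDTH]

rate_below = sum(below) / len(below)
rate_above = sum(above) / len(above)
print(f"deletion rate just below cutoff: {rate_below:.3f}")
print(f"deletion rate just above cutoff: {rate_above:.3f}")
print(f"relative increase: {rate_above / rate_below - 1:.1%}")
```

Run on the synthetic data above, the comparison recovers roughly the 32 percent relative increase reported in the paper, which is the kind of jump at the threshold the method is designed to detect.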

In total, the researchers analyzed 264,600 posts on X that received at least one community note during two separate time windows: the first before the 2024 U.S. presidential election, a period when misinformation typically surges (June–August 2024), and the second two months after the election (January–February 2025).

The results were striking: X posts with public correction notes were 32 percent more likely to be deleted by the authors than those with just private notes, demonstrating the power of voluntary retraction as an alternative to forcible removal of content. The effect persisted across both study periods.

The reputation effect...An author's decision to retract or delete, the team discovered, is primarily driven by social concerns. "You worry," says Rui, "that it's going to hurt your online reputation if others find your information misleading."

Publicly displayed Community Notes (highlighting factual inaccuracies) function as a signal to the online audience that "the content—and, by extension, its author—is untrustworthy," the researchers note.

In the social media ecosystem, reputation is important—especially for users with influence—and speed matters greatly, as misinformation tends to spread faster and farther than corrections.

The researchers found that public notes not only increased the likelihood of tweet deletions but also accelerated the process: among retracted X posts, the faster a note was publicly displayed, the sooner the noted post was retracted.

Users whose posts attract substantial visibility and engagement, or who have large follower bases, face heightened reputational risks. As a result, verified X users (those marked by a blue check mark) were particularly quick to delete their posts once those posts garnered public Community Notes, exhibiting a greater concern for maintaining their credibility.

The overall pattern suggests that social media's own dynamics, such as status, visibility, and peer feedback, can improve online accuracy.

A democratic defense against misinformation? Crowdchecking, the team concludes, "strikes a balance between protecting First Amendment rights and the urgent need to curb misinformation." It relies not on censorship but on collective judgment and public correction. The algorithm employed by Community Notes emphasizes diversity, prioritizing notes that earn support from raters on both sides.

Initially, Rui admits, he was surprised by the team's strong findings. "For people to be willing to retract, it's like admitting their mistakes or wrongdoing, which is difficult for anyone, especially in today's super polarized environment with all its echo chambers," he says.

At the outset of the study, the team had wondered if the correcting mechanism might even backfire. In other words, could a publicly displayed note really induce people to retract their problematic posts, or would it make them dig in their heels?

Now they know it works.

"Ultimately," Rui says, "the voluntary removal of misleading or false information is a more civic and possibly more sustainable way to resolve problems."

Provided by University of Rochester


DIGITAL LIFE


AI, Blockchain, and NFC fraud mark financial cybercrime in 2025

Financial cybercrime in 2025 reached unprecedented levels of complexity, marked by the increasing use of artificial intelligence, blockchain, and NFC fraud that challenged banks and fintechs globally.

According to the Kaspersky Security Bulletin 2025, dedicated to financial cybersecurity, the year was dominated by more coordinated and technically sophisticated attacks, in which organized crime and cybercrime came closer together in an unprecedented way.

According to Kaspersky, malware spread more frequently through messaging applications, AI-assisted attacks became faster and harder to detect, and contactless payment fraud went from an emerging phenomenon to a consolidated trend.

The result is a systemic risk: institutions, critical infrastructures, and end users began to share the same digital battlefield.

The report summarizes the financial sector's exposure to attacks with a set of indicators that paint a worrying picture.

8.15% of users faced online threats related to the financial sector.

15.81% of users in the sector were targeted by local threats (malware already present on devices).

12.8% of B2B companies in the financial sector suffered ransomware attacks during the analyzed period.

The number of unique users in the financial sector who encountered ransomware increased by 35.7% in 2025 compared to 2023.

1,338,357 banking trojan attacks were identified throughout the year.

These numbers confirm that financial cybercrime is not limited to isolated incidents, but functions as a global ecosystem with the capacity to adapt techniques and exploit new attack surfaces at scale.

Evolving tactics: supply chain, organized crime and AI...One of the most striking trends of the year was the intensification of attacks on the financial services supply chain. According to Kaspersky, several incidents exploited vulnerabilities in external vendors to target national payment networks and central systems, demonstrating the domino effect that a single weak link can trigger.

The convergence between organized crime and cybercrime has also become more evident.

Groups traditionally associated with physical activities, such as trafficking or extortion, have begun to integrate digital capabilities, combining social engineering, insider contacts, and technical exploitation to increase the impact and profitability of attacks.

Artificial intelligence has added a layer of automation and speed.

According to Kaspersky, malware with AI components has incorporated automated propagation and evasion mechanisms, reducing the time between the development and execution of attacks, which makes it more difficult for security teams to respond.

Attackers have not abandoned classic malware, but have altered their distribution channels. Instead of relying primarily on email phishing campaigns, banking trojans have shifted to using popular messaging apps as their main vector, leveraging the trust users place in these platforms.

Kaspersky indicates that banking trojans have been rewritten to operate on top of messaging services, allowing for large-scale infection campaigns without the need for traditional spam infrastructure.

This movement shifts the risk to environments where detection is less mature and where the boundaries between personal conversation and professional communication are more blurred.

On the mobile front, the report highlights the role of Android malware with ATS (Automated Transfer System) capabilities, which automates fraudulent transactions and alters values and recipients in real time without the user's knowledge.

According to Kaspersky, this type of malware acts on legitimate banking apps, bypassing visual checks and confusing users who believe they are operating in a secure context.

NFC-based fraud has evolved in two directions. On one hand, in-person schemes in busy locations illegally exploit contactless payments through physical devices or cards.

On the other hand, remote attacks resort to social engineering and fake apps that mimic genuine banks, directing the user to payment authorizations without realizing the fraud.

One of the most unsettling trends is the use of blockchain as a command and control infrastructure for financial malware.

According to Kaspersky, some groups have begun to inscribe malware commands in smart contracts, leveraging the Web3 ecosystem to orchestrate attacks.

This approach increases the resilience of malicious campaigns.

Even if traditional servers are deactivated, the control logic persists in the blockchain, making eradication significantly more difficult and raising dilemmas about the governance and oversight of decentralized networks.

Ransomware remains, some families disappear... Ransomware has remained a structural threat in the financial sector.

Globally, 12.8% of B2B financial organizations were affected in the analyzed period, with regional incidences of 12.9% in Africa, 12.6% in Latin America, and 9.4% in Russia and CIS countries.

Kaspersky also indicates that certain malware families have begun to disappear as specific groups cease operations or migrate to more modern tools.

This does not mean less risk, but a reorganization of the criminal ecosystem, which replaces old lineages with new malware platforms, often in MaaS (Malware as a Service) regimes.

What to Expect in 2026: WhatsApp, deepfakes, and “agentic AI malware”...In the predictions chapter, Kaspersky presents a vision that projects the intensification of already ongoing trends, as well as the emergence of new threat categories.

Among the points highlighted by the company for 2026 are:

Banking Trojans rewritten for distribution via WhatsApp and other messaging apps, targeting corporate and government environments that still rely on desktop banking systems.

Expansion of deepfake services and AI tools at the service of social engineering, impacting job interviews, KYC processes, and identity fraud.

Emergence of regional infostealers, inspired by families like Lumma and Redline, focused on specific countries or blocs, in the MaaS model.

More attacks on NFC payments, with new tools and malware aimed at contactless transactions in different contexts.

Arrival of “agentic” AI malware, capable of altering behavior during execution, analyzing the environment, and adapting to the defenses and vulnerabilities it encounters.

Persistence of classic frauds, but with new distribution methods on emerging platforms.

Continued sale of counterfeit "pre-infected" smart devices, with trojans like Triada on smartphones, televisions, and other connected equipment.

These predictions describe a scenario in which technical sophistication combines with the industrialization of cybercrime, lowering barriers to entry and expanding the potential impact of less experienced actors.

Recommendations: between best practices and commercial interest...Kaspersky presents a set of recommendations for users and organizations, combining recognized best practices with the explicit promotion of its products and services.

For individual users, the company suggests:

Downloading applications only from official stores, verifying the authenticity of the developer.

Disabling NFC whenever there is no need for its use and opting for digital wallets with mechanisms to block unauthorized communications.

Monitoring accounts and transactions regularly to quickly detect suspicious activity.

Protecting financial transactions with the Kaspersky Premium solution and the Safe Money feature, which, according to the company, validates the authenticity of banking and payment websites.

For financial organizations, Kaspersky recommends a "cybersecurity ecosystem" approach that unites people, processes, and technology.

Among the measures proposed by the company are:

Assessing the entire infrastructure, correcting vulnerabilities, and using external specialists to identify hidden risks.

Implementing integrated platforms for monitoring and controlling all attack vectors, with rapid detection and immediate response; the Kaspersky Next range is presented as an example of a solution with real-time protection and EDR/XDR capabilities.

Monitoring the threat landscape with Kaspersky threat intelligence services and promoting regular awareness training to create a "human firewall".

Although the technical guidelines align with widely accepted industry best practices, it is important to note that the report also serves as a commercial positioning piece for Kaspersky's portfolio, which requires critical reading by decision-makers and security teams.

Information from Kaspersky

Sunday, November 16, 2025

 

DIGITAL LIFE


Journalism has become addicted to Big Tech, and that's not a good thing...

Large technology companies positioned themselves as allies—sometimes even saviors—of journalism for much of the last decade. Their actions were compelling: they distributed resources through grants, created project incubators, supported fact-checking initiatives, offered training on their tools, permeated newsrooms with a culture of innovation, launched news curation tools, and closed advertising deals directly with media outlets.

From 2013 to 2022, while news organizations faced changes in business models imposed by the internet and competed for people's attention with social media, Big Tech gave them some room to operate and grow. Many successful companies and non-profit organizations were born during this time.

Certainly, money and access to tools were part of the attraction. But, mainly, the game that newsrooms became addicted to was that of distribution.

Media companies began investing heavily in search engine optimization (SEO), rewriting headlines to chase social media algorithms that reward absurdity, polluting websites with low-quality banner ads, adopting restrictive publishing platforms like Instant Articles and AMP, and fragmenting good content to fit it into recommendation tools like Discover and news feeds. This list could get very long, very quickly, and it has begun to backfire.

News organizations (especially small and medium-sized ones) became so dependent on this dynamic that when platforms began to change their products, many newsrooms began to feel the effects.

Many news organizations that do truly impactful work face the same problems. Some recent examples may make my point clearer.

After a decade of investing in SEO, some news organizations are seeing their traffic from Google (one of the main sources) plummet because of the AI summaries attached to most searches.

Similarly, in July 2025, Meta said it would begin charging per message on its commercial WhatsApp broadcast lists, disrupting distribution flows for many news organizations that relied on this distribution (the messaging app is the standard communication tool in Latin America and many other countries).

Furthermore, when Twitter was sold to Elon Musk and became X, external links were significantly demoted in the feed, considerably affecting news organizations that had invested years building huge follower bases. Just one decision can render significant investments obsolete.

Another path for our journalism...I believe there is a way to reduce this effect that Big Tech has on us, and that involves bringing news organizations closer together in cooperation. During my 2026 John S. Knight Fellowship at Stanford University, I am developing a framework that facilitates commercial and institutional collaboration between news organizations.

This means creating practical foundations for organizations to work with each other, such as legal guidelines, partnership ideas, business cooperation methods, technical alternatives, and even trying to bring independent technology companies into the game.

Collaboration needs to require little effort and be seen as mutually beneficial for the newsrooms involved, avoiding friction with our audiences. It also has to be seen as crucial for survival, helping newsrooms share access to each other's communities and distribution networks – let's leverage each other's expertise and strengthen the ecosystem as a whole.

The Dilemma of Journalism...We are at a critical juncture where technological innovation is crucial for the future of journalism, just as it was for the emergence of the internet. There is a potential risk of over-reliance on Big Tech for immediate financial relief, but this may not be a sustainable long-term solution.

The news industry needs to develop more diversified and resilient revenue models to ensure its survival in the face of ongoing digital disruption. Relying on revenue from technology companies can create a dependency that may compromise the impartiality of news organizations. Changes in algorithms, policies, or market focus of these companies can abruptly impact the revenue sources of news outlets.

There is a risk that licensed content will be used in ways that dilute its quality or context. AI platforms may misinterpret or distort journalistic content, further undermining public trust.

There is a need for a reassessment of the relationship between technology companies and media organizations, with balanced partnerships, greater transparency in funding initiatives, and stronger regulatory frameworks to protect journalistic integrity.

The study shows that the influence of tech giants can be more pronounced in regions where news organizations struggle with limited resources. There is a risk of creating a two-tiered global news ecosystem, where only well-funded organizations can keep up with technological advancements. This disparity could have far-reaching implications for access to information and democratic discourse globally.

The current challenge lies in harnessing the benefits of technological advancements while preserving the essential role of independent journalism in democratic societies. The coming years will be crucial in determining the future of the news industry.

mundophone


DIGITAL LIFE


When the algorithm decides how much each of us is worth

Personalization can bring real benefits if used fairly and transparently, but when personalization stops serving the consumer and starts exploiting them, technological progress turns into digital discrimination, warns Paulo Fonseca in this opinion piece.

Digital personalization has become one of the hallmarks of the modern economy. It allows products, services, and communications to be adapted to each consumer's preferences, making their digital experience simpler, more relevant, and more efficient. But there is a boundary that cannot be crossed: the one that separates convenience from exploitation. When personalization stops serving the consumer and starts exploiting them, technological progress turns into digital discrimination.

The recent case of Delta Airlines, which generated controversy after being accused of using its artificial intelligence systems to adjust airfares according to how much each passenger would be willing to pay for their trip, is just one example of a hidden phenomenon that is becoming increasingly widespread, although many companies vehemently claim that they do not use any pricing models tailored to their customers' profiles. The truth is that prices, interfaces, and even the messages we receive are increasingly shaped by our digital footprint. And often, not even Hercule Poirot can figure out what's behind the personalization.

It's important to distinguish between different types of personalization. There's content personalization – which includes advertising and recommendations from online platforms. There's interface personalization – which changes how each user sees and interacts with websites and applications, influencing decisions and perceptions. And there's price personalization – the most sensitive and potentially most problematic – when the value of a product or service is determined not only by global demand, but also by the analysis of our personal data, from purchase history to our publications and searches, and even the type of device or location.

Personalization can bring real benefits if used fairly and transparently. It can simplify choices, enable relevant offers, facilitate discounts, and improve the relationship between companies and their customers. But there are serious risks when personalization becomes a weapon of discrimination. Personalization cannot be used to exploit our weaknesses or our level of need for a product or service. It cannot serve to extract the maximum amount someone is willing to pay, nor to conceal the real market price.

DECO (Portuguese Consumer Protection Association) has developed extensive work in this area. For this organization, price personalization based on behavioral data that generates discrimination is incompatible with the Charter of Fundamental Rights of the European Union itself. When the price that each person sees is different, and especially when that difference results from privileged information that companies collect about us, the balance in the market disappears. Each person becomes an isolated market, without reference or possible comparison.

Dynamic pricing is also an example of this. In theory, it should reflect overall demand – more demand, higher price; less demand, lower price. But in practice, it is no longer the number of people interested in a product that determines its price, but rather the digital profile of the person seeking it. The risk is clear: when my online behavior influences the price I am shown, the market ceases to be a space of competition and becomes a reflection of the citizen's digital vulnerability.
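To illustrate the distinction the author draws, here is a toy Python sketch. Everything in it is hypothetical (the base price, the multipliers, the profile signals); it only contrasts pricing driven by aggregate demand with pricing driven by an individual's digital profile.

```python
BASE_PRICE = 100.0  # hypothetical list price

def dynamic_price(demand_ratio: float) -> float:
    """Classic dynamic pricing: one price for everyone at a given moment,
    driven only by aggregate demand (e.g., requests vs. available stock)."""
    return BASE_PRICE * (0.8 + 0.4 * min(demand_ratio, 2.0))

def personalized_price(demand_ratio: float, profile: dict) -> float:
    """Profile-based pricing: the price also depends on signals inferred
    about the individual buyer, which is the practice the author criticizes.
    The signals and multipliers below are invented for illustration."""
    price = dynamic_price(demand_ratio)
    if profile.get("device") == "high-end":
        price *= 1.10
    if profile.get("searched_same_route_recently"):
        price *= 1.15  # inferred urgency
    return price

# Two buyers looking at the same product at the same moment can see different prices.
print(dynamic_price(1.2))                       # same for everyone
print(personalized_price(1.2, {"device": "high-end",
                               "searched_same_route_recently": True}))
```

In the first function the buyer's identity is irrelevant; in the second, each person effectively becomes an isolated market, which is exactly the loss of reference and comparison the piece warns about.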

But what should the solution be? The path is not to reject personalization altogether, but to put it on the right track. Personalization can and should exist when it offers clear advantages, such as real discounts, helpful recommendations, and a more accessible experience. But it must have transparent and auditable limits. Companies should be required to demonstrate that their practices are fair, and personalization can only exist by choice and never by default.

The European Commission has a decisive role here. It is not enough to require companies to disclose that they practice personalized pricing – it is necessary to define red lines through a strong, European, and coherent institutional response that establishes the ethical and legal limits of personalization.

The challenge, as we have already said, will always be to balance innovation with protection. The fair price is the one that promotes trust, better products, and the right choices. It is not the one that depends on how much an algorithm thinks I can pay. The digital future cannot be a game where consumers pay the price of their own information. If we let algorithms decide how much each of us is worth, then we will cease to be free consumers and become perfect targets. And no progress justifies this.

*Paulo Fonseca is strategic and institutional relations advisor at DECO

mundophone

Saturday, November 15, 2025


DIGITAL LIFE


Return of the landline phone? Young people are betting on modern versions to take care of their mental health

Nowadays, with technological advances, it seems unthinkable to depend on a landline phone to communicate with someone. But the wired device, which fell into disuse, seems to be winning over new generations, who are resorting to ways to reduce the time spent on smartphones.

According to the Mashable website, 15 years ago 62% of Americans said that a landline phone was a necessity of life, per a study by the Pew Research Center. However, with the advent of cell phones, the use of landline phones plummeted, as expected. At the end of 2022, 72.6% of adults lived in homes without a landline phone, according to the National Health Interview Survey.

But just when the landline phone seems about to disappear, a wave of young people has discovered that it may be the simplest way to cure brain rot, which is the deterioration of mental or intellectual state caused by excessive consumption of superficial online content.

A new approach to the analog device is now emerging. Online content creator Catherine Goetze is one example. According to Mashable, she created a landline phone called the Physical Phone that connects to a smartphone, allowing users to answer calls without having to pick up their phone. The video she posted on Instagram explaining the concept has already surpassed 2 million views.

The idea behind the Physical Phone is to avoid doomscrolling (endlessly scrolling through your phone's feed, usually consuming negative content) and reduce screen time. Often, you pick up your phone to call someone or send a message, only to end up on social media, scrolling through your feed for several minutes without doing what you intended. The landline phone solves this: it maintains communication but removes the temptation to open social media apps.

Other people are finding new ways to adopt the landline phone concept. Erin Wakeland posted a video on TikTok, which now has over 90,000 views, showing a method that involves attaching a smartphone to a chain on the wall, creating a de facto "landline phone." This way, whenever she wants to use her phone, she has to sit in a chair at a specific point. "It's a physical boundary that helps my digital boundaries," she said in the video.

Another content creator went viral by showing her version of the landline phone, which consists of putting her iPhone in "landline mode"—meaning she only receives notifications for calls and messages. She claims in the viral video that she managed to reduce her screen time to 29 minutes a day and became more aware of how often she uses her cell phone.

Reasons for the return

-Mental health: The possibility of having a conversation without the need for multitasking (such as answering emails or messages at the same time) helps to "slow down" and focus on the conversation, being a form of self-care.

-Reduced screen time: Many young people seek alternatives to reduce the time spent on cell phones, such as games or social networks, which can be used as "escape valves" to avoid daily challenges.

-More direct and real communication: Modern landline phone models encourage more focused conversations without the mediation of emojis, being a way to create stronger bonds with family and friends.

-Independence for children: Kid-friendly landline phones allow children to contact friends and family without the risks and distractions of a smartphone.

How the landline phone is coming back

-Modern retro models: Companies are launching models with classic design, but with digital features such as cell phone chips or Wi-Fi, even allowing parental control through apps.

-Parental control: Some devices allow parents to control usage times and access, offering a safer alternative than a smartphone.

-Simplicity: The simplicity of the device, without the need for an app or internet connection, makes it a nostalgic object and a symbol of less stressful communication.

Thâmara Kaoru

 

DIGITAL LIFE


How lobbying sustains the power of big tech companies worldwide

Could Regulating Big Tech Lead to Censorship of the Bible in Brazil? Is Tony Blair quietly promoting Oracle's AI in the Global South? Which politicians are lobbying on behalf of Mark Zuckerberg or Jeff Bezos around the world?

We already know that the concentration of money and power in Big Tech companies, coupled with hermetically sealed algorithms, leaves a trail of victims ranging from access to reliable information to natural resources. Yet, little is said about the force that sustains the political and economic dominance of these tech giants: lobbying.

According to its financial report, Alphabet, Google's parent company, had revenues of US$350 billion in 2024, almost equivalent to Chile's GDP. Last year, Amazon amassed US$638 billion in sales, almost a third of Brazil's GDP.

With so much money, the bargaining power of Big Tech companies is significant in less developed regions. In places like Brazil, lobbying is not regulated, which makes it more difficult to track these actions and measure how much influence they have on laws passed in Congress.

“Deputies sent me messages reporting physical threats and threats on social media (…) Big Tech companies have crossed all limits of prudence,” said the then-president of the Chamber of Deputies, Arthur Lira (PP-AL), in 2023, when Congress was trying to approve Bill 2630, to regulate the tech giants. The text became known as the Fake News Bill.

Lira promised to hold Big Tech companies accountable for what he defined as “an almost horrific act in the lives of deputies in the week leading up to the vote.” Faced with the pressure, meticulously reconstructed in one of the more than 20 articles in the investigative series The Invisible Hand of Big Tech, the bill ended up being shelved.

The influence of Big Tech lobbying motivated Agência Pública, from Brazil, to join forces with the Latin American Center for Journalistic Research (CLIP) to lead an unprecedented transnational investigation. The Invisible Hand of Big Tech brought together 17 journalistic organizations from 13 countries—from Mexico to Australia—to try to understand the power of these corporations in today's world.

Databases systematize the relationship between lobbying and politics...In Europe, Larry Ellison, co-founder and CEO of Oracle, donated US$130 million between 2021 and 2023 to the Tony Blair Institute (TBI). “When it comes to technology policy, the role of the TBI is to enter developing economies and sell Ellison's technologies. Oracle and TBI are inseparable,” said a former senior advisor from the United Kingdom, one of the 29 sources interviewed for the report on the institute.

When the Labor Party, led by Prime Minister Keir Starmer and of which Blair was a key leader, returned to power in 2024, TBI employees began to occupy direct government positions while receiving TBI salaries. In the United States, Ellison was dubbed the "CEO of everything" by Donald Trump after the Republican's return to the White House in 2025.

Ellison's donations helped TBI employ nearly a thousand people in at least 45 countries. The institute's revenue in the last fiscal year was approximately US$145.3 million. A former employee said the influx of money made the internal culture "extremely toxic," while others described a blind optimism regarding AI that pushes the boundaries of lobbying in favor of Oracle.

In Ethiopia, on the brink of civil war in 2020, TBI was working on an AI public policy proposal advocating for the introduction of autonomous cars. The TBI told journalists investigating the matter that it does not represent Oracle's commercial interests. Oracle and the Larry Ellison Foundation declined to comment.

But Tony Blair is not the only former head of government to act in favor of a tech giant.

In Brazil, Google hired former president Michel Temer (2016–2019) to strengthen its lobby against regulation. In 2023, when Congress attempted to pass the Fake News Bill, executives from Google and Telegram were investigated by the Federal Police for pressuring parliamentarians to vote against the text. On behalf of Google, Temer acted as an intermediary in the negotiations throughout the process.

Meanwhile, Big Tech companies also allied themselves with the far right to block regulatory efforts. In Congress, conservative deputies spread the idea that passages from the Bible would be banned in Brazil with regulation. The document that fabricated this false connection was conceived by a lobbyist from Meta and distributed by a lobbying entity representing Amazon, Meta, Google, Kwai, and TikTok.

Critics of the project also organized demonstrations in defense of freedom of expression. One of them took place at Brasília Airport. But the investigation found strong evidence that the protest was linked to lobbying groups.

Journalism in the wake of lobby victims...Lobbying also intervenes in the use of news content by Big Tech companies, which need journalism to train algorithms, AI, and social networks. Meta and Google, in particular, have put together a kind of manual to block remuneration for journalism in Canada, Australia, and Brazil.

The strategy includes strengthening ties with the press, promoting lavish events, making private agreements with large media outlets ("divide and conquer"), and turning public opinion against the media. In any country that has tried to legislate on the relationship between news sites and digital platforms, Richard Gingras, former vice president of news at Google, appeared to argue that the laws were misguided and that Alphabet defends a free internet.

His presence in various countries reveals the extent of Google's influence. In the first half of 2023 alone, he spoke with Brazilian journalists at a closed event in São Paulo, appeared on seven monitors during testimony to a parliamentary committee in Canada, and spoke to journalists in Taipei.

“I wouldn’t call it a ‘manual.’ That’s a very structured term. But we’re not completely stupid. If you’re hit with a club multiple times, you learn to dodge,” Gingras said in an interview for the series The Invisible Hand of Big Tech.

The Conversation Brazil 

Friday, November 14, 2025


TECH


Samsung Electronics announces groundbreaking antioxidant index metric to be used in Galaxy Watch8

Samsung Electronics recently announced the Antioxidant Index as the central feature of its new Galaxy Watch8. This groundbreaking metric, according to the company, is capable of measuring carotenoid levels in the skin in five seconds, offering the user an objective indicator of their fruit and vegetable consumption.

This is the first time a wearable manufacturer has attempted to quantify a nutritional biomarker directly and non-invasively in a consumer device. Samsung positions the technology as a "portable health advisor," aiming to bridge the gap between monitoring physical activity and the real impact of diet.

Until now, the assessment of nutritional biomarkers, such as carotenoids, relied on complex and expensive laboratory tests, notably Raman spectroscopy, which uses bulky equipment.

Samsung claims to have solved this challenge after seven years of research. The “main breakthrough,” according to Jinyoung Park, Engineer on Samsung’s Digital Health Team, was the miniaturization of the technology. The new BioActive sensor uses reflectance spectroscopy based on multi-wavelength LEDs, integrated into a compact photodetector.

The measurement process requires the user to place their thumb on the sensor. In seconds, calibration algorithms interpret the light absorbed by the skin and estimate the level of carotenoids.
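Samsung has not published its calibration algorithm or the exact wavelengths it uses, but the general idea of multi-wavelength reflectance calibration can be sketched in a few lines of Python. Everything below (the wavelengths, the coefficients, the intercept, the example reading) is hypothetical; in practice such a model would be fitted against a laboratory reference method during development.

```python
import numpy as np

# Hypothetical LED wavelengths (nm); carotenoids absorb strongly around 450-500 nm.
WAVELENGTHS = [450, 480, 530, 580, 650]

# Hypothetical calibration coefficients and intercept, invented for illustration.
COEFFICIENTS = np.array([0.9, 1.1, 0.4, -0.2, -0.6])
INTERCEPT = 0.15

def carotenoid_score(reflectance: np.ndarray) -> float:
    """Estimate a relative carotenoid level from per-wavelength reflectance.

    Reflectance values lie in [0, 1]; lower reflectance in the blue-green
    band means more light absorbed, which the linear model maps to a
    higher carotenoid estimate.
    """
    # Beer-Lambert-style transform from reflectance to absorbance.
    absorbance = -np.log10(np.clip(reflectance, 1e-3, 1.0))
    return float(COEFFICIENTS @ absorbance + INTERCEPT)

# Example reading from the five LEDs (hypothetical values).
reading = np.array([0.32, 0.30, 0.45, 0.55, 0.60])
print(f"estimated carotenoid score: {carotenoid_score(reading):.2f}")
```

The sketch only shows why multiple wavelengths help: by weighting bands where carotenoids absorb against bands where they do not, the model can separate the pigment signal from other skin properties.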

Why measure carotenoids? Carotenoids are natural pigments (red, yellow, green) that the human body does not produce, obtaining them exclusively through diet, especially fruits and vegetables. Their level in tissues therefore reflects the consumption of these foods.

These compounds function as antioxidants, essential for neutralizing reactive oxygen species which, in excess, contribute to aging and increase the risk of chronic diseases.

“Antioxidant management is essential to slow down aging,” says Dr. Hyojee Joung, a specialist at Seoul National University, who collaborated on the development. The Antioxidant Index translates this complex measurement into three simple categories: Very Low, Low, or Optimal (corresponding to 100% or more of the WHO recommendation of 400g/day).
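As a rough sketch of how the three categories could be mapped in code: only the "Optimal" rule (at least 100% of the WHO 400 g/day target) is stated by Samsung, so the boundary between "Very Low" and "Low" below is a made-up placeholder.

```python
def antioxidant_category(percent_of_who_target: float) -> str:
    """Map an estimated intake (as % of the WHO 400 g/day fruit-and-vegetable
    target) to the three categories named in the article. The 50% boundary
    is a hypothetical placeholder; only the 'Optimal' rule is stated."""
    if percent_of_who_target >= 100:
        return "Optimal"
    if percent_of_who_target >= 50:  # hypothetical boundary
        return "Low"
    return "Very Low"

print(antioxidant_category(120))  # Optimal
print(antioxidant_category(70))   # Low
print(antioxidant_category(30))   # Very Low
```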

The challenge of inclusive accuracy...One of the biggest technical obstacles in optical measurement through the skin is the interference of melanin, which varies dramatically between different skin tones.

To ensure data reliability, Samsung engineers opted to perform the measurement using the fingertip. This area, according to the company, has a consistently low concentration of melanin in all individuals. Additionally, the system requires light finger pressure, which momentarily reduces blood flow and improves the accuracy of the optical signal.

Samsung states that the technology, incorporated into the Galaxy Watch8, was validated in clinical trials with hundreds of participants at the Samsung Medical Center to ensure its effectiveness in a diverse population.

Antioxidant Index: the impact on preventive health...The Antioxidant Index is not an immediate reflection of the last meal. “Carotenoids accumulate in tissues gradually,” explains Dr. Joung, noting that one to two weeks of dietary change are necessary for the index to reflect this change.

Samsung wants the metric to function as an indicator of overall well-being, also influenced by sleep, stress, and physical activity.

By transforming a complex biological data point into an actionable daily metric, the company is betting on the gamification of preventative health. “New sensors in wearable technologies can play a key role in promoting healthy eating habits,” concludes Professor Yoonho Choi of the Samsung Medical Center.

It remains to be seen how the market and the medical community will react to the accuracy of the functionality outside of a controlled environment, but, in any case, this Samsung innovation could signal a new era for wearables: that of biochemical monitoring.

mundophone
