Wednesday, May 6, 2026


DIGITAL LIFE



Google Chrome silently installs a 4 GB AI model on your device (video)

Google is the latest company to generate negative headlines over AI, this time for the Gemini integration in Chrome announced back in September of last year. In the months since, the feature has rolled out to users' PCs, and because you have to opt out, most people are now using Chrome with AI features enabled by default. Privacy advocates and AI critics have taken note and are pointing fingers at a 4 GB "weights.bin" file used by Google's local AI, Gemini Nano. Critics have also noted that the weights file restores itself upon deletion and, even on a brand-new Chrome install, is automatically downloaded to the user's device.

That said, it's surprisingly easy to disable AI features in Chrome, including permanently deleting the weights.bin file. In fact, I did so as soon as I noticed the Gemini button appear in my Chrome window: right-click it and disable it. After that, open "chrome://flags" in your address bar and find the "Enables optimization guide on device" flag. Disable it and the folder containing the weights.bin file will be deleted automatically, and it won't come back. The file's persistence after manual deletion is most likely Chrome repairing itself around this conflicting browser flag, not malicious intent.
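
For readers who want to check their own machines first, here is a minimal Python sketch that looks for the model directory under the usual Chrome user-data locations and reports its size. The paths are common defaults and may differ by Chrome channel or OS version, and remember that deleting the folder alone is not enough - Chrome restores it until the flag above is disabled.

```python
import os
from pathlib import Path

# Typical Chrome user-data roots; illustrative defaults only -- the
# exact location can vary by platform, channel and profile.
CANDIDATE_ROOTS = [
    Path.home() / "AppData/Local/Google/Chrome/User Data",      # Windows
    Path.home() / "Library/Application Support/Google/Chrome",  # macOS
    Path.home() / ".config/google-chrome",                      # Linux
]

def find_model_dirs():
    """Walk each root looking for OptGuideOnDeviceModel and print its size."""
    for root in CANDIDATE_ROOTS:
        if not root.exists():
            continue
        for dirpath, dirnames, _ in os.walk(root):
            if "OptGuideOnDeviceModel" in dirnames:
                model_dir = Path(dirpath) / "OptGuideOnDeviceModel"
                size = sum(f.stat().st_size for f in model_dir.rglob("*") if f.is_file())
                print(f"{model_dir}  ({size / 2**30:.2f} GiB)")

if __name__ == "__main__":
    find_model_dirs()
```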

Two weeks ago I wrote about Anthropic silently registering a Native Messaging bridge in seven Chromium-based browsers on every machine where Claude Desktop was installed [1]. The pattern was: on launch of product A, write configuration into the user's installs of products B, C, D, E, F, G, H without asking. Reach across vendor trust boundaries. No consent dialog. No opt-out UI. It re-installs itself every time Claude Desktop is launched if the user removes it manually.

This week I discovered the same pattern, executed by Google. Google Chrome is reaching into users' machines and writing a 4 GB on-device AI model file to disk without asking. The file is named weights.bin. It lives in OptGuideOnDeviceModel. It is the weights for Gemini Nano, Google's on-device LLM. Chrome did not ask. Chrome does not surface it. If the user deletes it, Chrome re-downloads it.

The legal analysis is the same one I gave for the Anthropic case. The environmental analysis is new. At Chrome's scale, the climate bill for one model push, paid in atmospheric CO2 by the entire planet, is between six thousand and sixty thousand tonnes of CO2-equivalent emissions, depending on how many devices receive the push. That is the environmental cost of one company unilaterally deciding that two billion people's default browser will mass-distribute a 4 GB binary they did not request.
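
The range can be reproduced with back-of-the-envelope arithmetic. In the sketch below, the per-gigabyte transfer energy and the grid carbon intensity are my illustrative assumptions, not figures from Google or any cited source; the spread in the result comes almost entirely from how many devices you assume received the push.

```python
# Back-of-the-envelope reproduction of the ~6,000-60,000 t CO2e range.
# KWH_PER_GB and KG_CO2E_PER_KWH are assumed values for illustration.
MODEL_SIZE_GB = 4.0
KWH_PER_GB = 0.03         # assumed network + device energy per GB transferred
KG_CO2E_PER_KWH = 0.44    # assumed average grid carbon intensity

for devices in (100e6, 1e9):          # assumed span of devices reached
    energy_kwh = MODEL_SIZE_GB * devices * KWH_PER_GB
    tonnes = energy_kwh * KG_CO2E_PER_KWH / 1000.0
    print(f"{devices:>13,.0f} devices -> {tonnes:>7,.0f} t CO2e")
# 100 million devices land near six thousand tonnes; one billion near sixty thousand.
```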

This is, in my professional opinion, a direct breach of Article 5(3) of Directive 2002/58/EC (the ePrivacy Directive) [2], a breach of the Article 5(1) GDPR principles of lawfulness, fairness, and transparency [3], a breach of Article 25 GDPR's data-protection-by-design obligation [3], and an environmental harm of a magnitude that would be a notifiable event under the Corporate Sustainability Reporting Directive (CSRD) for any in-scope undertaking [4].

What is on the disk and how it got there...On any machine that has Chrome installed, in the user profile, sits a directory whose name is OptGuideOnDeviceModel. Inside it is a file called weights.bin. The file is approximately 4 GB. It is the weights file for Gemini Nano. Chrome uses it to power features Google has marketed under names like "Help me write", on-device scam detection, and other AI-assisted browser functions.

The file appeared with no consent prompt. There is no checkbox in Chrome Settings labelled "download a 4 GB AI model". The download triggers when Chrome's AI features are active, and those features are active by default in recent Chrome versions. On any machine that meets the hardware requirements, Chrome treats the user's hardware as a delivery target and writes the model.

The cycle of deletion and re-download has been documented across multiple independent reports on Windows installations [5][6][7][8] - the user deletes, Chrome re-downloads, the user deletes again, Chrome re-downloads again. The only ways to make the deletion stick are to disable Chrome's AI features through chrome://flags or enterprise policy tooling that home users do not generally have, or to uninstall Chrome entirely [5]. On macOS the file lands as mode 600 owned by the user (so it is deletable in principle) but Chrome holds the install state in Local State after the bytes are written, and as soon as the variations server next tells Chrome the profile is eligible, the download fires again - the architecture is the same, only the file permissions differ.

How I verified this on a freshly created Apple Silicon profile...Most of the existing reporting on this behaviour is from Windows users who noticed their disk filling up - useful, but Google could (and probably will) try to characterise those reports as anecdotes from non-representative configurations. So I went looking for a clean witness on a different platform.

The witness I found is macOS itself. The kernel keeps a filesystem event log called .fseventsd - it records every file create, modify and delete at the OS level, independent of any application logging. Chrome cannot edit it, Google cannot remotely reach it, and the page files that record the events survive the deletion of the files they reference.

I created a Chrome user-data directory on 23 April 2026 to run an automated audit (one of the WebSentinel 100-site privacy sweeps). The audit driver speaks pure Chrome DevTools Protocol - it loads a page, dwells for five minutes with no input, captures events, closes Chrome between sites - and the profile had received zero keyboard or mouse input from a human at any point in its existence. Every "AI mode" surface in Chrome was untouched - in fact every UI surface in Chrome was untouched, since the audit driver only interacts with the document via CDP and the omnibox is never reached. By 29 April the profile contained 4 GB of OptGuideOnDeviceModel weights - and I knew it because a routine du -sh of the audit-profile directory caught it during a cleanup pass.
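
For anyone repeating the check, a first-pass triage of the fseventsd log is simple, because the page files at the volume root are gzip-compressed: decompress each one and search it for the directory name. This is a minimal sketch assuming a default volume layout; it must run as root, and it only shows that events referencing the path exist - decoding full records (event type, timestamps) needs a proper forensic parser.

```python
import gzip
from pathlib import Path

FSEVENTS_DIR = Path("/.fseventsd")   # volume root; requires root to read
NEEDLE = b"OptGuideOnDeviceModel"

def scan():
    """Grep decompressed fseventsd pages for the model directory name."""
    for page in sorted(FSEVENTS_DIR.iterdir()):
        if page.name == "fseventsd-uuid":
            continue
        try:
            data = gzip.decompress(page.read_bytes())
        except (OSError, gzip.BadGzipFile):
            continue
        if NEEDLE in data:
            print(f"hit in {page.name}: {data.count(NEEDLE)} occurrence(s)")

if __name__ == "__main__":
    scan()
```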

mundophone


TECH


Researcher explores the hidden science of pipe failure

How do aging cast iron pipes actually start leaking? The School of Mechanical, Aerospace and Civil Engineering's Edward John is highlighted in a new UK Water Industry Research (UKWIR) video about his Ph.D. research, which uncovers how pressure changes in old cast iron pipes can cause cracks and leaks over time. The findings could enable water companies to fix pipes before they burst completely.

Cast iron was the gold standard for plumbing for decades because of its balance of strength and manufacturability. However, it isn't invincible. With approximately 40% of the UK's water supply network composed of cast iron pipes installed during or before the 1960s, large parts of the existing network are reaching the end of their natural lifespan.

This aging infrastructure has led to a high failure rate, manifesting in frequent leaks and bursts, and is a major barrier for UK water companies who want to achieve their goal of halving water leakage by 2050.

The transition from a solid pipe to a leaking one is usually a slow, chemical corrosion process that happens from the outside in, followed by a terminal phase of crack initiation and growth over a few years.

Edward's Ph.D. research, seen in the UKWIR report, focuses on understanding the cracking part of the process, specifically how pressure changes, such as at night when water usage is low and pressure is often higher, force small cracks to open up, leading to the leaks we see today.

A University of Sheffield researcher at work in the lab. Credit: University of Sheffield

Using controlled lab experiments, Edward studied the mechanical fatigue behind these leaks and discovered that reducing pressure temporarily closes micro leaks by easing pipe stress, but the cyclic pressure (the constant rising and falling of water pressure) causes the cracks to keep growing and the leak eventually becomes permanent.
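
To see why cycling keeps a crack growing even when no single pressure swing is dangerous, it helps to look at the textbook fatigue model, the Paris law, in which the growth per cycle depends on the stress-intensity range at the crack tip. This is a generic illustration with invented material constants, not the model or the parameters from Edward's research.

```python
import math

# Paris-law sketch: da/dN = C * (dK)^m, with dK = Y * d_sigma * sqrt(pi * a).
# C, M, Y and DELTA_SIGMA are illustrative placeholders, not measured values.
C, M = 1e-9, 3.0          # Paris constants (m/cycle, MPa*sqrt(m))
Y = 1.12                  # geometry factor for a shallow surface crack
DELTA_SIGMA = 20.0        # cyclic hoop-stress range in the pipe wall, MPa

a = 1e-3                  # initial crack depth: 1 mm
for cycle in range(1, 400_001):
    dK = Y * DELTA_SIGMA * math.sqrt(math.pi * a)   # stress-intensity range
    a += C * dK ** M                                # incremental growth per cycle
    if cycle % 100_000 == 0:
        print(f"{cycle:>7} cycles: crack depth = {a * 1000:.2f} mm")
```

The qualitative behaviour matches the finding: growth per cycle is tiny at first, but because the stress-intensity range rises with crack length, the process accelerates until the leak becomes permanent.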

By understanding these failure mechanics, the work allows utility companies to move beyond guesswork and target the specific pipes most likely to burst, which reduces the occurrence of new leaks, saves millions of gallons of water, and in turn reduces water bills for consumers.

The work has shown real promise, with the potential to change how companies deal with underground pipes in the future.

Edward said, "We're hoping to carry out some follow-up research that will work towards having more of an implementable solution that's based on the kind of fundamental understanding from my Ph.D. The research will allow water companies to more fully understand pipe deterioration and proactively replace pipes in a targeted way.

"At the moment, I am working on two projects about pipe condition assessment—one is looking at measuring the thickness of cast iron pipes using acoustic sensor techniques to detect small leaks. The other is looking at sewer liner condition assessment, which is trying to find non-visible defects."

Provided by University of Sheffield  

Tuesday, May 5, 2026


DIGITAL LIFE


Novel approach to training AI saves energy, improves speed, and minimizes data sent across networks

In a novel attempt to improve how large language models learn and make them more capable and energy-efficient, Stevens Institute of Technology researchers have devised an algorithm that improves AI data sharing, boosts performance and reduces power consumption.

Large language models like ChatGPT are huge. Letting many people train them together without sharing users' private data—an approach called federated learning—is slow and inefficient. To collaborate, the participants must constantly share updated versions of the entire model—and that's a huge amount of information to exchange. This approach uses a lot of network bandwidth and memory and is energy intensive. As a result, models can't be synchronized as often as necessary, resulting in outdated versions.

"It's too much data to share," says Stevens Ph.D. candidate Yide Ran, who was the driving force behind the effort to improve the process. "It's like sending in an entire encyclopedia when you only need to change a few entries. But you really don't need to do that."

Working together with his advisors, Zhaozhuo Xu, Assistant Professor in the Department of Computer Science at the School of Engineering, who studies machine learning, and Denghui Zhang, Assistant Professor of Information Systems and Analytics at the School of Business, Ran sought to improve how language models share their updates.

The team built upon the previously known concept that effective learning in large language models is often driven by a surprisingly small but well-chosen subset of parameters. The result is a more agile, faster-working model that also uses less energy. The researchers named the model MEERKAT after the animal, known for its dexterity and speed.

The team outlined their findings in a paper titled "Mitigating Non-IID Drift in Zeroth-Order Federated LLM Fine-Tuning with Transferable Sparsity," which was presented at the 2026 International Conference on Learning Representations.

Instead of sharing the entire giant AI model, MEERKAT shares updates to only 0.1% of the model, which includes the most important parameters.

"So you are no longer sending the entire encyclopedia when only a few key definitions have changed," explains Zhang. That shrinks communications by over 1,000 times. "Updates that used to be gigabytes are now just a few megabytes," Zhang says.

MEERKAT's other efficiency secret is using a different error-checking approach. Standard AI training requires an intense mathematical process called backpropagation, which stands for backward propagation of errors, in which AI performs self-checks to avoid mistakes. Although it's a core algorithm used to train neural networks by minimizing the difference between predicted and actual outputs, backpropagation consumes huge amounts of memory and energy. MEERKAT simply tweaks the model slightly and checks the results, completely bypassing backpropagation.
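
The underlying technique is zeroth-order optimization: nudge the weights along a random direction, evaluate the loss twice, and use the difference as a gradient estimate, so no backward pass is ever run. Here is a minimal sketch on a toy quadratic loss; the actual estimator, model, and schedule in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(w):
    """Toy stand-in for a fine-tuning loss; minimum at w = 3."""
    return float(np.sum((w - 3.0) ** 2))

w = np.zeros(10)
mu, lr = 1e-3, 0.01            # perturbation scale and learning rate
for step in range(1, 2001):
    z = rng.standard_normal(w.shape)                           # random direction
    g = (loss(w + mu * z) - loss(w - mu * z)) / (2 * mu) * z   # two forward passes
    w -= lr * g                                                # no backpropagation
    if step % 500 == 0:
        print(f"step {step:>4}: loss = {loss(w):.6f}")
```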

Finally, small updates allow for more frequent synchronization of data, which is another breakthrough, as it keeps models up to date.

"Because updates are so tiny, data can now be sent back and forth more often," says Xu. "The result is a much better shared model."

This new approach substantially reduces computational and communication costs, helping make advanced AI adaptation more feasible for resource-constrained institutions, researchers say. Their work will also support more equitable deployment of AI in domains such as health care, education and cross-institutional collaboration, where centralized data collection is often difficult to achieve due to privacy and other issues.

Bypassing backpropagation (MEERKAT project)...An innovative approach developed by researchers at the Stevens Institute of Technology introduced the MEERKAT system, which saves energy by avoiding the mathematically intensive process of backpropagation.

Innovation: The system makes small adjustments to the model and verifies the results, bypassing the complex error checks that typically drain memory and energy.

Benefit: In addition to saving energy, it reduces the cost of data communication, facilitating training on resource-limited local devices.


by: Stevens Institute of Technology


TECH


Microsoft yields to pressure and removes controversial advice about 32 GB of RAM

If you've recently built or bought a gaming PC, you're guaranteed to encounter the age-old PC gaming dilemma: how much RAM is really needed to run modern games without stuttering and frame rate drops? The answer to this question usually varies immensely depending on who you ask, the games you prefer, and your tolerance for potential performance issues. However, Microsoft recently decided to come out and give its own official "verdict" on the matter. The only problem? The gaming community didn't find the suggestion very amusing, and the tech giant was forced to quietly back down.

In an article published (and since removed) on its official website, the owner of Windows and Xbox decided to establish what it considered the new golden rules for the ideal memory configuration in a computer dedicated to video games.

According to the company's text, 16 GB of RAM should only be seen as the "practical starting point" or the minimum baseline. Microsoft's real recommendation, which it boldly labeled the "no worries upgrade," focused on a hefty 32 GB of memory. The official justification even had some logical basis: 32 GB gives you a gigantic margin of maneuver if you're the type of gamer who likes to keep the game running while chatting with friends on Discord, keeps a browser (like the resource-hungry Google Chrome) full of tabs with guides and tutorials, or uses streaming tools running in the background.

The corporate theory seemed very sensible on paper, but Microsoft quickly forgot a crucial and unavoidable detail: the finite budget of the average gamer.

As soon as the specialized portal Windows Latest discovered this advice article and shared it with the rest of the world, the reaction on social media, subreddits, and forums dedicated to hardware was immediate and relentless. Gamers didn't hold back on criticism, and many resorted to sarcasm to ridicule the company's recommendation.

The main complaint is very easy to understand. In a market where the latest graphics cards and top-of-the-line processors already cost a fortune, demanding or attempting to normalize gamers spending tens or hundreds of euros (or dollars) more just to double their RAM is an attitude that sounds completely disconnected from economic reality. The general consensus of the community was clear: asking people to loosen their purse strings to reach the 32 GB mark doesn't send a good message, especially when many are still struggling to assemble a basic machine.

Microsoft's "ninja blackout"...The outcome of this story is a classic case of corporate crisis management and damage control, executed in the quietest way imaginable. Faced with the avalanche of negative comments and ridicule from the gaming community, Microsoft implicitly agreed that it had shot itself in the foot with that publication.

The new foundation for gaming...In a post on its Learning Center, Microsoft published an article describing what constitutes "a good gaming computer," and what caught people's attention was the company's choice to set 32 GB of RAM as the new standard for PCs.

In the text, the company explains that 32 GB of RAM is necessary to achieve "smooth gameplay." Since many current games consume around 16 GB of RAM while running, the company's thinking makes sense, even if the outlay is unrealistic for many.

Microsoft comments that users often also use applications like Discord, browsers, or even streaming programs while playing, so it's necessary to have enough RAM to run everything smoothly.

The only problem with this situation is that, thanks to investments in generative artificial intelligence, the price of RAM has skyrocketed worldwide due to scarcity.

Furthermore, the company also commented that SSDs are being used more frequently, which also makes sense given the recommended specifications of most games, which now routinely call for an SSD to run the titles.

What was the solution found? The company simply removed the article from its official website overnight, without leaving any trace, footnote, or apology. The maneuver was so quick and stealthy that not even the famous Wayback Machine (the digital archive that preserves web pages) managed to capture a copy of the original page for posterity, turning Microsoft's words into a veritable digital ghost.

The stark truth is that, regardless of this controversy, modern video games are becoming ever more demanding. Having 16 GB in your system still lets you play the vast majority of the current catalog very decently, but the inevitable transition to higher capacities is looming on the horizon. Microsoft's mistake wasn't a technical forecasting error, but a giant error of timing and of empathy with gamers' wallets.

mundophone

Monday, May 4, 2026


DIGITAL LIFE


Think online ads are harmless? They could be revealing your private life, say researchers

A new study has uncovered a significant and largely invisible privacy risk in the online advertising ecosystem: the ads you see may be enough to reveal sensitive personal information.

Researchers from the ARC Center of Excellence for Automated Decision-Making and Society (ARC ADM+S) at UNSW Sydney and QUT have demonstrated that artificial intelligence can assess personal attributes, including political preferences, education level, and employment status, based solely on the advertisements a person is shown online.

The study analyzed more than 435,000 Facebook ads seen by 891 Australian users, collected through the Australian Ad Observatory project—a signature project of the ARC ADM+S.

Using advanced large language models (LLMs), researchers found that:

-Personal traits could be inferred without access to browsing history or personal data

-Profiles could be built from short browsing sessions

-AI systems matched and sometimes exceeded human ability to infer personal characteristics

-The process was over 200 times cheaper and 50 times faster than human analysis.

The research shows LLMs can very quickly and cheaply assess online adverts being fed to individuals to predict a wide range of detailed personal information.
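
To make the attack surface concrete, here is a hedged sketch of what such a pipeline could look like. The ads are invented, and call_llm is a hypothetical placeholder for any off-the-shelf model API, not code from the study.

```python
# Turn a captured ad stream into a profiling prompt for an LLM.
ads_seen = [
    "Graduate certificate in data analytics -- apply by June",
    "Compare first-home-buyer mortgage rates",
    "Union membership for healthcare workers",
]

PROMPT = (
    "Below are advertisements shown to one user. Infer, with a confidence "
    "level for each, their likely age range, education level, employment "
    "status and political leaning.\n\n"
    + "\n".join(f"- {ad}" for ad in ads_seen)
)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any off-the-shelf LLM API."""
    raise NotImplementedError

# profile = call_llm(PROMPT)   # one cheap call per user, no tracking required
print(PROMPT)
```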

In a paper presented at the ACM Web Conference 2026, the researchers say, "Our results demonstrate that off-the-shelf LLMs can accurately reconstruct complex user private attributes.

"Critically, actionable profiling is feasible even within short observation windows, indicating that prolonged tracking is not a prerequisite for a successful attack."

Lead author Baiyu Chen, from UNSW, said the findings challenge common assumptions about online privacy.

"The key point is that the ads a person sees are not random. Advertising systems optimize delivery based on inferred profiles and behaviors, so the overall pattern of ads shown to a user can carry signals about traits such as gender, age, education, employment status, political preference, and broader socioeconomic position.

"Our study shows that LLMs can analyze those patterns and infer private attributes from ad exposure alone.

"These findings provide the first empirical evidence that ad streams serve as a high-fidelity digital footprint, enabling off-platform profiling that inherently bypasses current platform safeguards, highlighting a systemic vulnerability in the ad ecosystem and the urgent need for responsible web AI governance in the generative AI era.

"This work reveals a critical blind spot in web privacy: the latent leakage of user private attributes through passive exposure to algorithmic advertising."

A critical blind spot in privacy...By using AI to analyze ad content, the researchers—including Professor Flora Salim, Professor Daniel Angus, Dr. Benjamin Tag and Dr. Hao Xue—show that streams of ads act like highly detailed digital fingerprints, allowing private attributes to be reconstructed with surprising accuracy, often matching or even exceeding human judgment.

Crucially, the research shows this is not a theoretical risk. Profiles can be built quickly and at scale, even from short browsing sessions, and without long-term tracking. Even when predictions are not exact, they are often close enough to reveal meaningful insights about a person's life stage or financial situation.

How it could be exploited...While major platforms have restricted advertisers from targeting sensitive categories, the study shows that algorithmic ad delivery still encodes these traits indirectly and that this information can now be extracted using widely available AI tools.

This creates a new form of privacy risk where:

-Users do not actively share information.

-No hacking or platform-side access is required.

-Profiling can happen outside platform oversight.

The researchers warn that everyday tools such as browser extensions could be repurposed to quietly collect ads and build detailed user profiles—bypassing platform safeguards and leaving little trace.

In the paper, they say, "We identify browser extensions that abuse legitimate privileges as the potential primary vector for this attack. This scenario is severe due to its inherent stealth and scalability.

"Rather than distributing specialized malware, an adversary can opportunistically deploy this attack within the existing ecosystem of widely installed, benign functioning extensions, such as ad blockers, coupon finders, or page translators.

"These extensions legitimately require permissions to read web page content to function, providing a perfect cover for data harvesting."

Implications for policy and regulation...The findings suggest current privacy protections may not go far enough.

As AI tools make this kind of analysis easier and more accessible, the researchers argue that regulation must evolve to address not just data collection, but what can be inferred from the content people are exposed to.

Addressing this risk will require rethinking privacy frameworks to account for the hidden signals embedded in everyday online experiences—including the ads users passively consume.

"In terms of protection, users can reduce the risk by being cautious with browser extensions, limiting unnecessary permissions, and using available privacy and ad-personalization settings," said Chen.

"However, this is not something users can fully solve on their own, because the broader issue is systemic: people cannot easily opt out of the ad ecosystem altogether, so stronger platform safeguards are also needed."

The study draws on data from the Australian Ad Observatory, a citizen science initiative that collects ads seen by everyday users. It represents one of the largest real-world investigations into how AI can infer personal information from online advertising.

Online ads have gone from being mere background noise to a detailed mirror of your private life. Recent studies reveal that the simple stream of ads you receive — even without clicking on anything — can be used by artificial intelligence to reconstruct your personal profile with alarming accuracy.

How ads "snitch" on you... Unlike what many think, ads don't appear by chance. They are the end result of a complex system called Real-Time Bidding (RTB), an instant auction that takes place in the milliseconds it takes for a page to load.

Digital ad impressions: Researchers at UNSW Sydney have demonstrated that AI models (LLMs) can infer traits such as political preference, education level, employment status, and financial situation simply by analyzing the pattern of ads displayed to a user.

Surveillance without clicks: You don't need to interact with the ad to be exposed. "Passive exposure" to algorithmic content serves as a high-fidelity digital footprint that bypasses current platform protections.

Short sessions are enough: Months of monitoring are not necessary. Actionable profiles can be created from short browsing sessions, making the attack quick and cheap for malicious actors.

The role of "data brokers": Advertising systems fuel a billion-dollar surveillance industry:

Location sales: Precise location data is harvested through SDKs (software development kits) in common applications, such as weather or flashlight apps, and sold to marketing companies and even government agencies.

Exposure of sensitive groups: There have been confirmed cases of data brokers selling lists of people categorized as "pregnant women," "Hispanic churchgoers," or members of the LGBTQ+ community, often exposing individuals in vulnerable situations.

Provided by University of New South Wales 


TECH


Stacked intelligent surfaces could boost wireless reliability and security for 6G

Wireless communication is about to get stronger, clearer, and more secure, thanks to a new idea from UBC Okanagan researchers. Dr. Anas Chaaban and his team in the School of Engineering are exploring a method to improve the way stacked intelligent surfaces (SIS) can process electromagnetic waves more efficiently.

How stacked intelligent surfaces work...SIS is an emerging alternative to conventional wireless hardware, Dr. Chaaban says, as layers of specially engineered materials are used to directly manipulate electromagnetic waves.

"Electromagnetic waves travel through special surfaces that consist of several elements. These elements mimic neurons in a computerized neural network," Dr. Chaaban says. "As the waves move through the surface, each element changes them slightly. When the waves come out, they are captured by antennas that send the signals to digital processors for further analysis."

Unlike traditional systems that rely on complex and power-hungry circuitry, SIS technology enables fast, low-energy signal processing by controlling how signals propagate through space.

Adding nonlinear intelligence to signals...This new research, published recently in IEEE Wireless Communications, introduces a nonlinear architecture, enabling these surfaces to behave more like artificial neural networks. By incorporating nonlinear behavior into each element, the system can process signals in more complex ways—similar to how modern AI systems handle data.

Until now, most SIS designs have relied on linear operations, so they could only perform relatively simple signal transformations. As a result, these designs cannot take full advantage of advanced communication techniques.

"Nonlinearity unlocks a fundamentally new capability for intelligent surfaces, allowing them to perform tasks that linear systems simply cannot achieve," says Omran Abbas, who is the study's co-author and a UBCO doctoral student.

The idea of using an SIS in this way is not new, he adds, but the nonlinear elements give the system more intelligence to perform AI-like operations. In a simulated wireless system, the nonlinear design demonstrated improved communication reliability, reducing symbol error rates compared to conventional designs. The improvement comes from the surface's ability to create complex wave patterns that are more resilient to noise and interference.
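
A quick numerical sketch shows what nonlinearity buys. Any cascade of purely linear layers collapses to a single linear transform, so stacking adds no expressive power; one nonlinear element between layers breaks that equivalence. The random matrices and the tanh below stand in for the programmable surface elements and are not the team's actual design.

```python
import numpy as np

rng = np.random.default_rng(42)
L1, L2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))  # two "layers"
x = rng.normal(size=8)                                     # incoming signal

linear_cascade = L2 @ (L1 @ x)
collapsed = (L2 @ L1) @ x                 # same thing: one matrix suffices
print(np.allclose(linear_cascade, collapsed))      # True

nonlinear_cascade = L2 @ np.tanh(L1 @ x)  # nonlinear element between layers
print(np.allclose(nonlinear_cascade, collapsed))   # False: genuinely new behavior
```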

From prototype design to future networks...Dr. Loïc Markley, a co-investigator on the project with a background in periodic structures and metamaterials, says they are working on the physical design of a non-linear unit cell to build a prototype.

"We are very excited to design a system that incorporates non-linear responses so we can test our theoretical predictions in a real-world environment," he says.

Dr. Chaaban adds that beyond performance gains, the technology also shows promise for enhanced wireless security: the non-linear transformations are characteristically harder to predict, making it harder for unintended receivers to intercept or decode signals.

Although more research is needed to validate real-world deployments, the findings highlight the untapped potential of non-linear intelligent surfaces as a powerful new tool for next-generation communication systems.

"This innovation could play a key role in enabling future wireless technologies, including 6G communications," Dr. Chaaban says.

"We are analyzing the ideas and investigating them further, and we are also working on testing a nonlinear SIS. This technology could significantly improve reliability, efficiency, and security in next-generation networks."

The use of Stacked Intelligent Metasurfaces (SIMs) can significantly increase the reliability and security of 6G wireless networks.

Unlike conventional single-layer reconfigurable intelligent surfaces (RISs), SIMs utilize multiple layers of programmable materials arranged in a cascaded fashion. This volumetric architecture allows for much more precise control of electromagnetic waves, functioning as a kind of electromagnetic neural network processed directly in space, even before the signal reaches the digital circuits.

How SIMs improve reliability... Stacked surfaces increase network stability through:

-High-precision beamforming: Unlike simple RISs that only reflect waves, SIMs perform spatial filtering and near-field focusing, creating narrower and more directed beams that drastically reduce interference between users.

-Noise resilience: Recent research shows that non-linear SIM architectures can generate complex wave patterns that are more resistant to noise and external interference than traditional systems.

-Error reduction: In 6G system simulations, SIM models demonstrated a significant reduction in symbol error rate (SER), ensuring that information reaches its destination more clearly.

How SIMs Enhance Security: In 6G, physical layer security is enhanced by SIMs due to their inherent complexity:

-Interception Difficulty: The non-linear transformations applied to the signal by the stacked layers are extremely difficult to predict. This makes it nearly impossible for an unauthorized receiver (such as an eavesdropper) to intercept or decode the message.

-Eavesdropping Suppression: SIMs can be configured to maximize the signal at the legitimate user while creating "silent zones" of destructive interference at the locations of potential intruders, increasing the network's so-called secrecy capacity.

Provided by University of British Columbia



Sunday, May 3, 2026


SAMSUNG


Samsung will pay $392 million to ZTE over patent dispute

Whenever you pick up your smartphone to send a message, make a call, or browse the internet on 5G, you're using technologies that rely on thousands of invisible patents. These patents, known as standard-essential patents (SEPs), are the true "oxygen" of mobile networks: without them, your phone wouldn't be able to connect to any antenna. And it's precisely because of this vital access that Samsung has just suffered a multi-million dollar financial blow.

The High Court in London ordered Samsung to pay ZTE a $392 million lump sum in a global patent dispute. The court's ruling on the global patent licensing battle came this Friday.

According to MLex, the UK High Court judge Richard Meade ruled that Samsung must pay a lump sum of $392 million after the company failed to renew the previous 2021 deal with ZTE.

Samsung requested the court to cap the payout at under $200 million. ZTE sought a huge payout of $731 million from Samsung. The determined payout is higher than Samsung’s request and lower than ZTE’s demand.

ZTE sued Samsung in Brazil, China, and Germany. Samsung, on the flip side, sued ZTE in London in December 2024, seeking a determination of the fair, reasonable, and non-discriminatory – or FRAND – terms of a patent licence.

The High Court's bombshell ruling forces the South Korean giant Samsung to pay around $392 million (approximately €360 million) to the Chinese company ZTE. The reason? The licensing of the essential patents that allow Samsung devices to communicate correctly with global telecommunications networks.

To understand how we got to this point, we have to go back in time a little. This conflict is based on what the technology industry calls "FRAND" terms (Fair, Reasonable, and Non-Discriminatory). Basically, companies that hold patents vital to global connectivity are required to license them to other brands at a fair price to avoid monopolies.

Samsung and ZTE had a peaceful licensing agreement that was in effect until 2021. The problem arose when it came time to renegotiate the renewal of this contract. The two giants could not reach an understanding on the values, and the conversation soured. In response to this impasse, Samsung decided to file a lawsuit in a London court in December 2024, asking an impartial judge to set a "fair" price for these licenses.

According to reports advanced by Reuters, British judge Richard Meade was responsible for finding the balance point in this heated dispute. And the disparity between what each brand wanted was simply abysmal.

-Samsung was willing to pay a maximum absolute value of US$200 million to settle the matter.

-ZTE, on the other hand, demanded a colossal payment of $731 million for its licenses.

The judge ended up setting the final bill at US$392 million. Although this amount is well below the Chinese brand's pharaonic demands, it represents almost double the original budget that Samsung intended to spend. It's a hefty bill that directly affects the coffers of the Galaxy line manufacturer.

Why London? The unusual stage for technology...You may be wondering: why is a massive trade war between a South Korean company (Samsung) and a Chinese company (ZTE) being resolved in a UK court?

The answer lies in a landmark legal precedent set by the British Supreme Court in 2020. This decision granted English courts the authority and power to define patent licensing terms globally, not just regionally. Since then, London has become the main strategic battleground for the entire telecommunications industry. That's where the tech giants go to dictate the rules of the game worldwide.

The chess game is far from over...If you think the passing of this million-dollar check ends the matter, think again. This British decision is just one piece on a complex global chessboard.

ZTE wasted no time and launched similar lawsuits against Samsung in other crucial markets, including Brazil, Germany, and, of course, China itself, seeking to maintain maximum financial and legal pressure. At the same time, Chinese courts are also working to determine their own FRAND terms to resolve this dispute at the local level.

For now, a tactical silence has prevailed. Both companies have refused to make public comments on the London court's ruling, knowing that both parties still retain the right to appeal the decision. However, regardless of appeals, this verdict sets a costly precedent and sends a very clear message for 2026: the invisible cost of keeping our devices connected is getting higher and higher, and the behind-the-scenes wars for control of 5G are only just beginning.

mundophone
