Saturday, January 24, 2026

 

SAMSUNG


Galaxy S26 Ultra: new Gorilla Glass could kill screen protectors

There are rituals that are part of buying a new smartphone: opening the box, turning on the device and, almost religiously, applying a tempered glass screen protector to protect the $1,000 investment. Samsung may be about to end this habit. New rumors indicate that the future Galaxy S26 Ultra will come equipped with a completely new generation of Gorilla Glass, so resistant that it could make screen protectors obsolete.

The information, revealed by renowned leaker Ice Universe, points to a "high-resistance" glass that not only protects against drops, but also integrates technologies that replace the functions of the screen protectors many users buy.

Samsung's strategy is not just to make the glass harder; it's to make it smarter. The S26 Ultra seems to be the culmination of an approach that began with the S24 Ultra.

Extreme Durability: The new generation of Gorilla Glass promises impact and scratch resistance superior to anything seen so far. The goal is to give the user the confidence to use the phone "naked," without fear of keys in their pocket or accidental drops.

Native Anti-Glare: Like its predecessors, the S26 Ultra will retain the integrated anti-glare coating. This eliminates the need for matte films, ensuring visibility in sunlight without the loss of sharpness that third-party films cause.

Integrated Privacy: Echoing previous rumors about a "Privacy Display," Samsung may be embedding protection against prying eyes directly into the panel. If the screen appears black to anyone viewing it from an angle, privacy films no longer make sense.

If Samsung manages to deliver on this promise, it will be a hard blow to the accessories industry. Tempered glass screen protectors are a multi-million dollar business, living off users' fear of breaking their screens.

By addressing durability, reflection, and privacy at the material level, Samsung offers a cleaner user experience:

No bubbles: No more application stress.

Perfect Touch: No extra layers reducing sensitivity.

Pure Aesthetics: The phone's design shines without additional plastics.

Technical advancements also include a new anti-reflective coating. Compared to the current Gorilla Armor 2 on the Galaxy S25 Ultra, the new generation promises even greater visual clarity in outdoor environments, eliminating problems caused by direct sunlight.

This change represents a significant generational leap. While current glass is already resistant, Samsung seeks to address the optical limitations that tempered glass films often cause, such as loss of sensitivity in the fingerprint reader and reduced color fidelity.

If confirmed, the Galaxy S26 Ultra will be the first device to offer military-grade protection, a privacy filter, and advanced optical treatment in a single piece of glass. The line is scheduled to launch in the first quarter of 2026.

Trust vs. reality...The big question will be psychological. Will consumers trust glass, however "Gorilla" it may be, to protect such a high investment? The fear of breaking the screen is deeply ingrained. However, if Samsung aggressively markets this feature as a "screen protector replacement," it could change market behavior.

With a launch scheduled for February 2026 and the Snapdragon 8 Elite Gen 5 chipset confirmed, the Galaxy S26 Ultra is positioning itself as the most rugged and complete smartphone of the year.

mundophone


DIGITAL LIFE


Europe’s digital reliance on US big tech: does the EU have a plan?

In the digital era, almost every part of life – from communication to healthcare infrastructure and banking – functions within an intricate digital framework, led by a handful of companies operating mainly out of the United States. If the framework collapses, so do many of the essential services that allow society to function.

Transatlantic tensions have been steadily rising during US President Donald Trump’s chaotic first year back in the White House. Trump’s repeated demands for the Danish autonomous territory of Greenland and tariff threats have driven the EU to reassess its relationship with its long-time ally, which may not be as dependable as previously thought. US cooperation with Europe isn’t just key for trade and diplomacy; it’s also essential to maintaining a robust technological and digital frontier.

The bulk of European data is stored on US cloud services. Companies like Amazon, Microsoft and Google own over two-thirds of the European market, while US-based AI pioneers like OpenAI and Anthropic are leading the artificial intelligence boom. According to a European Parliament report, the EU “relies on non-EU countries for over 80 percent of digital products, services, infrastructure, and intellectual property”.

That dependency on a handful of providers has left the EU extremely vulnerable to sovereignty risks in its public and private sectors, to the point where technical issues, geopolitical disputes or malicious activity can have wide-reaching, disastrous effects.

With that fear in mind, EU lawmakers are pushing for alternatives to US Big Tech and for homegrown substitutes to Google, OpenAI, Microsoft and others.

EU lawmakers are pushing for digital sovereignty...According to Johan Linåker, senior researcher at RISE Research Institutes of Sweden and adjunct assistant professor at Lund University, Europe’s complacency has led the bloc to a point where most of Europe runs on clouds provided by US Big Tech.

“Public sector and governments have been suffering from a comfort syndrome for decades. There’s a tradition of conservative procurement culture, risk aversiveness, and a preference for the status quo," he said.

“The difference now is that the geopolitical landscape adds a new dimension of risks to the palette – beyond lack of innovation and escalating license costs.”

Lawmakers are scrambling to make up for that complacency. In 2024, the European Commission appointed its first "technology sovereignty, security and democracy" chief, Henna Virkkunen, whose job it is to reduce dependency and formulate policies that will keep the EU digitally secure. 

Lawmakers have also rallied behind the Eurostack movement, an initiative established in 2024 that aims to build an independent European digital infrastructure to limit the dependence of the European Union on foreign technology and US companies. It has lofty goals of cutting technological dependence, boosting industry competitiveness and driving innovation – all while committing to the EU’s sustainability goals.

However, an analysis by independent think-tank Bertelsmann Stiftung estimates that it will take roughly a decade and €300 billion for Eurostack to achieve its goal. US trade group Chamber of Progress (which counts several US Big Tech companies among its members) puts the full cost far higher, at over €5 trillion.

Time and money – not will or talent – are the EU’s main problem.

“We have a brilliant pool of skillful and entrepreneurial minds, but they require more than substantial investment and demand to fully leverage,” Linåker said. “Europe’s sovereignty hinges on its competitiveness and innovation.”

Finding realistic alternatives...France, Germany, the Netherlands and Italy have also begun investing in open-source platforms. Open-source means that the technology – hardware or software – is available to be modified, reviewed, and shared.

“Essentially, the open-source piece of software is free to use," Linåker said. "In a public procurement you are free to point out the software explicitly and focus on buying the services needed for use. It provides a toolset for governments to grow their digital sovereignty and resilience, and it is increasingly recognised in upcoming strategies and legislation."

Certain websites like Switch to EU and european-alternatives.eu also provide lists of European or "European-friendly" digital substitutes that can replace US Big Tech: Mastodon can be an alternative to Elon Musk’s X, Switzerland’s Proton Mail can replace Gmail, etc. But habits are hard to break and these changes can only occur through a deliberate shift in mentality.

While that shift will be “massive”, according to Linåker, it must start somewhere.

“Policy-makers and governments need to lead by example by moving the public discourse and communication beyond incumbent platforms such as X to options like Mastodon, which enables an open and federated infrastructure not dependent on any single actor," he said. "But again, this is not an easy shift – although in practice it’s not that hard."

A few pioneering projects are taking digital sovereignty seriously and leading the way to making that change concrete.

The Swedish city of Helsingborg, for example, is testing how its public services would function in the event of a digital blackout. The German region of Schleswig-Holstein has gone a step further: the regional government has replaced its Microsoft-powered computer systems with open-source alternatives, cancelling almost 80 percent of its Microsoft licenses. Although the system isn’t perfect, the Schleswig-Holstein administration hopes to phase out Big Tech almost entirely by the end of the decade.

“The regional government of Schleswig-Holstein proves the fact that one can create a sovereign digital infrastructure, while working with domestic and European vendors," Linåker said. "Myths regarding security and usability are no more. All of Europe should be pointing their eyes in their direction."

But weaning off US tech entirely will take time.

“Decoupling is unrealistic and cooperation will remain significant across the technological value chain,” an EU digital strategy report draft reviewed by POLITICO in June 2025 said – which means the EU will, for now, continue to promote collaboration with the US and other tech players including China, Japan, India and South Korea.

The draft report's admission that untangling from the dominance of US tech companies is “unrealistic” only fuels fears about the EU’s reliance on its unpredictable transatlantic ally.

“If the plug gets pulled, consequences will be catastrophic. The likelihood is another question,” Linåker said. “Either way, policy makers and governments need to realise the risk is a fact, understand the potential consequences, and start treating digital infrastructure as a critical asset.”

by: Diya Gupta---https://www.france24.com/en/author/diya-gupta/

Friday, January 23, 2026


TECH


Q&A: Ethical, legal and social issues—what does it take for new technology to be accepted?

How do cutting-edge science and technology respond to ethical and legal issues when incorporated into society? These issues are known as ethical, legal and social issues, or "ELSI" for short, and research on these issues is being carried out both within Japan and around the world.

In 2023, Kobe University launched KOBELSI, its university-wide research project on ELSI in the life sciences and natural sciences. As revolutionary technologies in fields such as medicine and artificial intelligence (AI) continue to pop up one after another, how has ELSI research advanced?

Professor Chatani Naoto of the Graduate School of Humanities, leader of the project and an expert in ancient Greek philosophy and bioethics, spoke about the current state of ELSI research and its future prospects.

An introduction to research at KOBELSI was given at an event to commemorate the launch of the project in August 2023. Credit: Kobe University

When did ELSI research start to become more widespread?

ELSI research began to really spread in the U.S. from the end of the 20th century, and there were two big reasons for this.

First of all, from the 1970s, the field of molecular biology started to see rapid growth, owing to the elucidation of genetic organization and advances in genetic engineering. There were also pushes to apply this technology to medicine. On the other hand, the scientists themselves began to raise fears of the risks of these advances, such as biohazards and abuse of technology.

It was at that point that James Watson, known for his co-discovery of the double helix structure of DNA, proposed a moratorium in 1974 for researchers to stop and consider the way research should be conducted.

Another reason was the human genome project that began in the U.S. in the 1980s. This was an international endeavor to decipher all the genetic information of humans, and during the project promotion process, Watson advocated the importance of ELSI. He proposed that when performing research, a certain percentage of the budget should be allocated to ELSI research, which the US government accepted and applied when organizing the project.

With the advancement of DNA analysis comes the possibility of all sorts of disadvantages, such as being unable to get insurance or a job because genetic information predicts a future illness. In addition, genetic information, also known as the blueprint for our bodies, was a previously unanticipated form of intellectual property, which raises the issue of how society and laws should handle it. The promotion of the human genome project suddenly brought these issues into the spotlight.

What about in countries besides the U.S.?

Perhaps due to lessons learned following the human experiments conducted by the Nazis, Europe tends to emphasize human rights issues involved in medical research on humans, and has for some time now. However, in Europe, research has developed not as ELSI, but as ELSA, in which the A stands for "aspects."

On the other hand, Japan has only really started its own ELSI research within the last few years. In the fifth edition, for the academic years 2016-2020, of the Cabinet Office's "Science and technology basic plan," which is released every five years, the concept of ELSI was included for the very first time. Following this, ELSI became eligible for public research grants, and in the past few years, centers for conducting ELSI research have begun to be established at national universities.

At Kobe University, the "Project innovative ethics" was launched at the Graduate School of Humanities in 2007 to carry out research and education regarding applied ethics. This project has been involved in activities such as conducting surveys on asbestos issues in industrial areas of Amagasaki City, Hyogo Prefecture, and in the aftermath of the Great East Japan Earthquake, as well as publishing academic journals and holding research seminars.

To make use of the activity that has accumulated in the project, KOBELSI was started as a university-wide organization in 2023. Currently, some 20 researchers from graduate schools in both liberal arts and sciences participate in KOBELSI.

What about ethical issues in research even before the term ELSI came to be?

In recent years, the span between the development of science and technology and its social implementation has become extremely short. Not only that, once a technology is implemented in society, its effects are both wide-ranging and varied. Even when there are fears that a technology will be misused, laws cannot keep up. Once that technology becomes widespread in society, it's already too late to start thinking about the issues.

Whether it's the internet or smartphones or what have you, it's impossible to go back to a society where those things didn't exist. Rapidly evolving generative AI will likely turn out much the same. Generative AI searches through massive amounts of data to provide answers to questions in an instant, but this comes with its own issues.

From biased expressions created in a world centered around Europe and North America to copyright issues associated with the original data being used, all kinds of issues have been pointed out.

One characteristic of ELSI research is that it looks to find issues not after social implementation, but from the development stage of science and technology to try to solve as many of those issues as possible. Rather than pointing out issues from the outside, ELSI research aims to think about issues together in the same circles as the researchers who are developing new science and technology.

In order to do this, researchers from ELSI fields, such as law and ethics, must have a certain level of understanding of the specialized knowledge of their target science and technology, while the researchers developing this technology must also consider the social effects of their research.

However, when these researchers do think from within the same circle, there is also the issue that they may lose their critical attitude. Even if there's a framework for researchers with ELSI-based perspectives to enter into, it's meaningless if they just approve each technology as it's developed.

The more organized this framework becomes, the more researchers need to be aware of the risks involved with the technology and emphasize communication among themselves.

Perspectives on the use of research by the military are important ELSI topics as well...I often hear the term "dual-use" being used, but this can actually mean both "military and civilian use" and "good use and misuse."

There are dual-use aspects to new science and technology. The internet we use all the time, the GPS in our car navigation systems, those were all developed for the military and expanded to civilian use. Conversely, there are also examples of technology developed for civilian use that were later adopted by the military.

I like to think based on the keywords of "intention" and "foresight" of the doctrine of double effect, which is well known in the field of ethics. Development of new science and technology for the benefit of society or for civilian use is the intention; however, such technology could also be foreseen as being used for crime or for war.

In cases where these uses are foreseen, this then requires thinking from all sorts of angles, such as cruelty, invasiveness and even potential use in weapons of mass destruction.

Only once you consider the overall situation, i.e. the scale and reliability of the benefits that this technology will provide society, can you decide whether a certain level of dual use can be tolerated.

The people of Japan deeply regret the use of university research for military purposes in wars past, making this problem a large point of discussion for ELSI. If we're being extreme, it's certainly possible that all science and technology could be used by the military, but what's most important when carrying out research is autonomy and openness.

Autonomy is when research begins from the intellectual curiosity of researchers and is not interfered with externally. Openness is when the content of research is made public through papers and other media. In other words, the lack of either autonomy or openness would be in opposition to the true nature of academic research.

What kinds of endeavors is KOBELSI engaged in?

In 2022, Kobe University constructed a framework called the Digital Bio and Life Sciences Research Park (DBLR), which includes research hubs in five areas, such as biomanufacturing, medical engineering and healthy longevity, where research is being conducted with a stronger awareness of enterprise partnerships and social implementation. Since research conducted at DBLR in particular requires ELSI perspectives, this led to the launch of ELSI research projects.

We are especially active in collaborations in the area of biomanufacturing. A research project that creates all kinds of chemical and medical products using microorganisms and materials of biological origin was even selected for the Ministry of Education, Culture, Sports, Science and Technology's "Program for forming Japan's peak research universities (J-PEAKS)."

Biomanufacturing makes use of techniques such as genetic engineering and genome editing, which means that we must not only think about how we can ensure the safety and security of society, but also about how we can get society to accept our research.

We also need to think about how this research aligns with the culture and traditions rooted in society. In addition, we need to do our best to keep things fair and make sure that people from specific countries or regions or the very wealthy aren't the only ones benefitting from new technology. That's also why we begin by finding issues that would lead to those types of problems.

ELSI is an area to which researchers from a wide variety of fields contribute, meaning that one group of researchers can't handle every problem that arises. At KOBELSI, we invite individuals from both inside and outside the university for research seminars held about once a month while also collaborating with universities overseas.

To this point, we've concluded exchange agreements with Lingnan University in Hong Kong, the University of Genoa in Italy and the University of Valencia in Spain, and moving forward, we'll be conducting joint research and other forms of exchange with these universities.

What is your personal topic of research?

One research topic of mine is informed consent. In medicine, this process involves the patient receiving an explanation regarding treatment and, once they understand the details, making decisions regarding treatment on their own, rather than leaving treatment strategy entirely up to their physician.

Even in medical research that involves people, there is a model in place in which research is conducted only after the research targets receive an explanation, give their consent and the research passes internal screening.

I think this process is necessary even in fields outside of medicine. Be it generative AI or genetic engineering, when it comes to the introduction of new technology, I think we're going to need a system in which we explain the technology to society to gain their understanding and consent.

The current focus of my research is on problems regarding the collection and use of personal data. When we use services on the web, most of the time we click the "I agree" button to consent to providing our personal information, but how many people actually read the rules and restrictions displayed on their screens?

Many people say that privacy is important, but the act of pushing that button without reading is saying the exact opposite. This is what is called the "privacy paradox."

Even if one piece of personal data can't identify an individual, when you link various pieces of data together, it can paint a fairly detailed picture. If people use these services with an understanding of these issues, then that's fine, but when they don't, a proper explanation is required.
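To make the linkage risk concrete, here is a minimal sketch in Python using entirely hypothetical data. Neither dataset carries a name next to the sensitive attribute, yet joining them on a handful of quasi-identifiers (ZIP code, birth date and gender, a combination long known to single out most individuals) re-identifies the records:

```python
# Re-identification by linking quasi-identifiers; all records are invented.

# "Anonymized" health data: no names, just quasi-identifiers plus a diagnosis.
health_records = [
    {"zip": "53715", "birth_date": "1965-02-13", "gender": "F", "diagnosis": "diabetes"},
    {"zip": "55410", "birth_date": "1971-07-25", "gender": "M", "diagnosis": "asthma"},
]

# A public roster (say, a voter list) that does carry names.
voter_list = [
    {"name": "A. Example", "zip": "53715", "birth_date": "1965-02-13", "gender": "F"},
    {"name": "B. Sample", "zip": "55410", "birth_date": "1971-07-25", "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "gender")

def key(record):
    """Build a join key from the quasi-identifier fields."""
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

# Index the named roster by quasi-identifiers, then link the health records.
names_by_key = {key(voter): voter["name"] for voter in voter_list}
for record in health_records:
    name = names_by_key.get(key(record))
    if name:
        # The "anonymous" record now has a name attached to it.
        print(f"{name} -> {record['diagnosis']}")
```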

In my research, I'd like to take into consideration the current state of personal data management to think about how to make these agreements possible in a practical sense, referencing the results of informed consent carried out in medical fields.

Where is ELSI research headed?

The goal of ELSI research is to pursue whether or not science and technology will truly be beneficial to society and make people happy, in other words, whether or not it will bring about well-being. Well-being shares traits with "eudaimonia," a state of happiness in ancient Greek philosophy, one of my fields of research, which describes a way of living that is fulfilling and has purpose. Thus, the E in ELSI (Ethical) forms a crucial foundation for the way we think about our research.

When introducing science and technology that could change the structure of the world we live in, each and every citizen becomes a stakeholder. It's extremely important, then, that they receive an explanation about the technology and have the right to choose. If they have the right to use the technology, then they ought to have the right to not use it. I'd like to think about how to make procedures to exercise that right possible.

Science communication is essential to researchers. What we need is a structure that will allow us to hold discussions with citizens, apply the content of those discussions to our research and then get even more feedback from citizens before the social implementation stage.

Recently, in addition to ELSI, the term "RRI," or responsible research and innovation, has also come into use. RRI further enhances the nuance that researchers themselves think about ELSI as they conduct research, but it's the way of thinking that's important, not so much what you call it. Even if terms like ELSI and RRI fall out of use, I think that it's a topic that we should continue to deal with for as long as science and technology evolve.

Provided by Kobe University


DIGITAL LIFE


Fortinet: the invisible risk of fragmentation in cloud security is real

A new study by Fortinet, released today, reveals that the complexity of modern digital environments constitutes a structural risk to business security. The 2026 Cloud Security Report indicates that 66% of organizations do not trust their ability to detect and respond to cloud threats in real time, an alarming figure in a scenario of accelerated digitization.

Although cloud investment now represents 34% of the total IT security budget, the effectiveness of defenses remains low. Currently, 88% of organizations operate in hybrid or multi-cloud architectures, which expands the attack surface at a rate faster than the management capacity of teams.

Fragmentation is the main obstacle: 69% of cybersecurity leaders identify the proliferation of distinct tools and visibility gaps as the biggest barriers to effective protection. This reality leaves 59% of organizations stuck at early levels of security maturity.

Alongside technology, the human factor is a critical vulnerability. The Fortinet 2026 Cloud Security Report highlights that 74% of organizations face a shortage of qualified professionals. Without specialists, teams operate reactively and rely on manual alerts, delaying the response to critical incidents.

Critical cloud security metrics (statistical data from the report):

Lack of confidence in real-time response: 66%

Hybrid or multi-cloud environments: 88%

Shortage of qualified professionals: 74%

Use of two or more cloud providers: 81%

Preference for a unified security platform: 64%

Faced with these challenges, there is a paradigm shift in defense strategy. Approximately 64% of organizations now express a preference for an approach based on a unified security platform, which eliminates fragmentation and allows for cross-functional visibility into critical workloads.

The 2026 Cloud Security Report was developed by Cybersecurity Insiders in collaboration with Fortinet, based on a global survey conducted in late 2025 with 1,163 cybersecurity professionals from various sectors, including financial services, technology, healthcare, and the public sector.

In 2026, fragmentation has emerged as a systemic risk in cloud security, driven by the rapid adoption of multi-cloud architectures, AI workloads, and decentralized identity systems. This "complexity gap" creates invisible vulnerabilities that are often exploited before organizations realize they exist.

The realities of fragmentation in 2026

Identity Sprawl (The "Dark Matter"): Organizations struggle with "orphan" accounts—active but untracked identities belonging to former employees or decommissioned services. Non-human identities (NHIs), such as bots and API agents, are frequently left ungoverned, forming a shadow layer invisible to traditional security governance.

Tool Fatigue and Silos: Security teams often manage a "mystery basket" of disconnected tools. Adding more point products to a fragmented stack has proven counterproductive, leading to inconsistent controls and operational fatigue where 58% of enterprises find vulnerability detection increasingly difficult despite higher spending.

AI-accelerated risks: The deployment of AI agents in 2026 has amplified fragmentation risks. These agents require dynamic, often over-privileged access to sensitive data across different cloud environments, creating new attack vectors that operate at "machine speed".

Impact on security operations

Operational disconnect: Most cloud security failures in 2026 stem from "coordination breakdowns" rather than technical ignorance. Fragmented stacks often report conflicting data, causing teams to prioritize the wrong risks or overlook actual threats.

Higher breach costs: Cloud-related incidents in 2026 cost nearly 20% more than standard data breaches, with average costs reaching approximately $4.7 million per incident.

Ineffective audits: Static, periodic audits are insufficient for 2026's ephemeral environments. By the time an audit report is generated, the cloud configuration has often already changed, leaving new gaps unaddressed.

Mitigation strategies for 2026:

To bridge the complexity gap, organizations are moving toward integrated, platform-driven strategies:

Platform consolidation: Shifting from siloed point tools to unified security platforms that provide a single source of truth for identity, configuration, and data across all clouds.

Continuous discovery & mapping: Implementing automated tools to constantly map the entire attack surface and identify ephemeral or shadow resources in real time (a minimal sketch follows this list).

Identity-first security: Prioritizing modern Privileged Access Management (PAM) and Zero Trust architectures to govern both human and non-human identities uniformly.

AI-powered defense: Using AI to sift through fragmented telemetry and autonomously manage micro-segmentation policies to contain threats at the speed they occur.
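As one concrete illustration of the "continuous discovery" and "identity-first" items above, here is a minimal sketch in Python. The account data, field names and 90-day dormancy threshold are hypothetical assumptions; it simply cross-references an IAM export against an HR roster to flag the two identity problems the report highlights, orphaned human accounts and dormant identities:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical IAM export: one entry per identity, human or non-human (NHI).
iam_accounts = [
    {"user": "jsmith",       "kind": "human", "last_used": "2026-01-10"},
    {"user": "old-etl-bot",  "kind": "nhi",   "last_used": "2025-06-01"},
    {"user": "departed.dev", "kind": "human", "last_used": "2025-12-30"},
]

# Hypothetical HR roster: the source of truth for human identities only.
active_employees = {"jsmith"}

DORMANCY_LIMIT = timedelta(days=90)  # assumed policy threshold
now = datetime(2026, 1, 24, tzinfo=timezone.utc)

def is_dormant(account):
    """True if the identity has not been used within the dormancy window."""
    last_used = datetime.fromisoformat(account["last_used"]).replace(tzinfo=timezone.utc)
    return now - last_used > DORMANCY_LIMIT

for account in iam_accounts:
    orphaned = account["kind"] == "human" and account["user"] not in active_employees
    dormant = is_dormant(account)
    if orphaned or dormant:
        # A real pipeline would open a ticket or disable the credential here.
        print(f"flag for review: {account['user']} (orphaned={orphaned}, dormant={dormant})")
```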

mundophone

Thursday, January 22, 2026

 

TECH


Hacking the grid: How digital sabotage turns infrastructure into a weapon

The darkness that swept over the Venezuelan capital in the predawn hours of Jan. 3, 2026, signaled a profound shift in the nature of modern conflict: the convergence of physical and cyber warfare. While U.S. special operations forces carried out the dramatic seizure of Venezuelan President Nicolás Maduro, a far quieter but equally devastating offensive was taking place in the unseen digital networks that help operate Caracas.

The blackout was not the result of bombed transmission towers or severed power lines but rather a precise and invisible manipulation of the industrial control systems that manage the flow of electricity. This synchronization of traditional military action with advanced cyber warfare represents a new chapter in international conflict, one where lines of computer code that manipulate critical infrastructure are among the most potent weapons.

To understand how a nation can turn an adversary's lights out without firing a shot, you have to look inside the controllers that regulate modern infrastructure. They are the digital brains responsible for opening valves, spinning turbines and routing power.

For decades, controller devices were considered simple and isolated. Grid modernization, however, has transformed them into sophisticated internet-connected computers. As a cybersecurity researcher, I track how advanced cyber forces exploit this modernization by using digital techniques to control the machinery's physical behavior.

Hijacked machines...My colleagues and I have demonstrated how malware can compromise a controller to create a split reality. The malware intercepts legitimate commands sent by grid operators and replaces them with malicious instructions designed to destabilize the system.

For example, malware could send commands to rapidly open and close circuit breakers, a technique known as flapping. This action can physically damage massive transformers or generators by causing them to overheat or go out of sync with the grid. These actions can cause fires or explosions that take months to repair.

Simultaneously, the malware calculates what the sensor readings should look like if the grid were operating normally and feeds these fabricated values back to the control room. The operators likely see green lights and stable voltage readings on their screens even as transformers are overloading and breakers are tripping in the physical world. This decoupling of the digital image from physical reality leaves defenders blind, unable to diagnose or respond to the failure until it is too late.
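A toy simulation makes this split reality concrete. The sketch below is purely illustrative Python, with no real industrial protocol or device involved: a simulated implant discards the operator's commands, flaps the breaker instead, and always reports fabricated, nominal-looking telemetry to the control room:

```python
import random

class Breaker:
    """Stand-in for a physical circuit breaker."""
    def __init__(self):
        self.closed = True
        self.cycles = 0  # rapid open/close cycling ("flapping") stresses hardware

    def set_closed(self, closed):
        if closed != self.closed:
            self.cycles += 1
        self.closed = closed

class CompromisedController:
    """Simulated implant sitting between operators and the breaker."""
    def __init__(self, breaker):
        self.breaker = breaker

    def operator_command(self, close):
        # Discard the legitimate command and flap the breaker instead.
        self.breaker.set_closed(not self.breaker.closed)

    def report_telemetry(self):
        # Fabricate readings consistent with a healthy, stable grid.
        return {"breaker_closed": True, "voltage_kv": round(random.gauss(230.0, 0.5), 2)}

breaker = Breaker()
controller = CompromisedController(breaker)
for _ in range(10):
    controller.operator_command(close=True)  # operators keep requesting "closed"
    print(controller.report_telemetry())     # control room sees nominal values
print(f"physical reality: closed={breaker.closed}, stress cycles={breaker.cycles}")
```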

Historical examples of this kind of attack include the Stuxnet malware that targeted Iranian nuclear enrichment plants. The malware destroyed centrifuges in 2009 by causing them to spin at dangerous speeds while feeding false "normal" data to operators.

Another example is the Industroyer attack by Russia against Ukraine's energy sector in 2016. Industroyer malware targeted Ukraine's power grid, using the grid's own industrial communication protocols to directly open circuit breakers and cut power to Kyiv.

More recently, the Volt Typhoon attack by China against the United States' critical infrastructure, exposed in 2023, was a campaign focused on pre-positioning. Unlike traditional sabotage, these hackers infiltrated networks to remain dormant and undetected, gaining the ability to disrupt the United States' communications and power systems during a future crisis.

To defend against these types of attacks, the U.S. military's Cyber Command has adopted a "defend forward" strategy, actively hunting for threats in foreign networks before they reach U.S. soil.

Domestically, the Cybersecurity and Infrastructure Security Agency promotes "secure by design" principles, urging manufacturers to eliminate default passwords and utilities to implement "zero trust" architectures that assume networks are already compromised.

Supply chain vulnerability...Nowadays, there is a vulnerability lurking within the supply chain of the controllers themselves. A dissection of firmware from major international vendors reveals a significant reliance on third-party software components to support modern features such as encryption and cloud connectivity.

This modernization comes at a cost. Many of these critical devices run on outdated software libraries, some of which are years past their end-of-life support, meaning they're no longer supported by the manufacturer. This creates a shared fragility across the industry. A vulnerability in a single, ubiquitous library like OpenSSL—an open-source software toolkit used worldwide by nearly every web server and connected device to encrypt communications—can expose controllers from multiple manufacturers to the same method of attack.
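One way defenders hunt for this shared fragility is to scan firmware images for embedded library version banners. The sketch below is a minimal, hypothetical Python example of that idea: the firmware file name and the end-of-life branch table are assumptions, and OpenSSL is chosen because binaries built against it typically embed a recognizable version string:

```python
import re
from collections import Counter

# Branches past vendor support (assumed table; check current OpenSSL policy).
EOL_OPENSSL_BRANCHES = {"0.9", "1.0", "1.1"}
VERSION_PATTERN = re.compile(rb"OpenSSL (\d+\.\d+)\.\d+[a-z]*")

# Hypothetical firmware image dumped from a controller.
with open("controller_firmware.bin", "rb") as f:
    blob = f.read()

# Count every OpenSSL version banner embedded in the binary.
branches = Counter(m.group(1).decode() for m in VERSION_PATTERN.finditer(blob))
for branch, count in branches.items():
    status = "END-OF-LIFE" if branch in EOL_OPENSSL_BRANCHES else "supported"
    print(f"OpenSSL {branch}.x found {count} time(s): {status}")
```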

Modern controllers have become web-enabled devices that often host their own administrative websites. These embedded web servers present an often overlooked point of entry for adversaries.

Attackers can infect the web application of a controller, allowing the malware to execute within the web browser of any engineer or operator who logs in to manage the plant. This execution enables malicious code to piggyback on legitimate user sessions, bypassing firewalls and issuing commands to the physical machinery without requiring the device's password to be cracked.

The scale of this vulnerability is vast, and the potential for damage extends far beyond the power grid, including transportation, manufacturing and water treatment systems.

Using automated scanning tools, my colleagues and I have discovered that the number of industrial controllers exposed to the public internet is significantly higher than industry estimates suggest. Thousands of critical devices, from hospital equipment to substation relays, are visible to anyone with the right search criteria. This exposure provides a rich hunting ground for adversaries to conduct reconnaissance and identify vulnerable targets that serve as entry points into deeper, more protected networks.
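The article does not name the researchers' scanning tools, but the public Shodan search API is one widely used option for this kind of exposure survey. Below is a minimal sketch, assuming a valid API key; it counts internet-facing hosts answering on TCP port 502, the default Modbus/TCP port spoken by many industrial controllers:

```python
import shodan  # pip install shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder
api = shodan.Shodan(API_KEY)

# Query for internet-facing Modbus endpoints (first page of results only).
results = api.search("port:502")
print(f"total exposed hosts reported: {results['total']}")
for match in results["matches"][:5]:
    org = match.get("org", "unknown org")
    country = match.get("location", {}).get("country_name", "unknown")
    print(f"{match['ip_str']}:{match['port']}  {org}  ({country})")
```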

The success of recent U.S. cyber operations forces a difficult conversation about the vulnerability of the United States. The uncomfortable truth is that the American power grid relies on the same technologies, protocols and supply chains as the systems compromised abroad.

Regulatory misalignment...The domestic risk, however, is compounded by regulatory frameworks that struggle to address the realities of the grid. A comprehensive investigation into the U.S. electric power sector my colleagues and I conducted revealed significant misalignment between compliance with regulations and actual security. Our study found that while regulations establish a baseline, they often foster a checklist mentality. Utilities are burdened with excessive documentation requirements that divert resources away from effective security measures.

This regulatory lag is particularly concerning given the rapid evolution of the technologies that connect customers to the power grid. The widespread adoption of distributed energy resources, such as residential solar inverters, has created a large, decentralized vulnerability that current regulations barely touch.

Analysis supported by the Department of Energy has shown that these devices are often insecure. By compromising a relatively small percentage of these inverters, my colleagues and I found that an attacker could manipulate their power output to cause severe instabilities across the distribution network. Unlike centralized power plants protected by guards and security systems, these devices sit in private homes and businesses.
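A back-of-envelope calculation shows why compromising even a modest share of inverters matters. The numbers below are illustrative assumptions, not figures from the study; the rate-of-change-of-frequency estimate uses the standard first-order aggregate swing-equation approximation:

```python
# Illustrative assumptions, not figures from the article.
F0 = 50.0            # nominal grid frequency, Hz (60.0 in North America)
SYSTEM_MW = 50_000   # total online generation in the region, MW
INERTIA_H = 4.0      # aggregate inertia constant, seconds (typical range 2-6)

solar_mw = 10_000          # distributed solar currently feeding the grid, MW
compromised_share = 0.20   # attacker switches off 20% of insecure inverters

lost_mw = solar_mw * compromised_share
# First-order aggregate swing equation: df/dt = (dP / S) * f0 / (2 * H)
rocof_hz_per_s = (lost_mw / SYSTEM_MW) * F0 / (2 * INERTIA_H)

print(f"instant generation loss: {lost_mw:.0f} MW")
print(f"initial frequency decline: {rocof_hz_per_s:.2f} Hz/s")
# 2,000 MW vanishing at once is comparable to losing two large power plants
# without warning; a decline of a few tenths of a Hz per second is enough to
# trigger emergency under-frequency responses on many grids.
```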

Accounting for the physical...Defending American infrastructure requires moving beyond the compliance checklists that currently dominate the industry. Defense strategies now require a level of sophistication that matches the attacks. This implies a fundamental shift toward security measures that take into account how attackers could manipulate physical machinery.

The integration of internet-connected computers into power grids, factories and transportation networks is creating a world where the line between code and physical destruction is irrevocably blurred.

Ensuring the resilience of critical infrastructure requires accepting this new reality and building defenses that verify every component, rather than unquestioningly trusting the software and hardware—or the green lights on a control panel.

Provided by The Conversation


TECH


EU’s digital networks act leaves big tech untouched, sparks net neutrality concerns

The European Commission on Wednesday unveiled its Digital Networks Act (DNA), which aims to boost the bloc’s competitiveness and investment in telecom infrastructure.

The draft legislation is framed as a strategic response to the growing role of digital infrastructure in Europe’s economic and geopolitical landscape. It seeks to harmonize rules across the bloc for fibre roll-out, 4G/5G mobile networks and satellite services. However, the proposal stops short of creating a fully unified telecoms market or mandating Big Tech contributions to network costs.

If adopted, the DNA would replace the 2018 European Electronic Communications Code, which has struggled to deliver the scale or coordination Brussels hoped for. While the new proposal introduces key reforms, it leaves national governments in control of key areas like spectrum allocation, competition, and market structure. European Commission’s Executive Vice-President Henna Virkkunen called the DNA foundational to Europe’s economic future, arguing that high-performance, resilient connectivity is a prerequisite for competitiveness, innovation, and digital sovereignty.

A more integrated, but not unified, telecoms market...Although the DNA seeks to harmonize telecoms regulation across the EU, it stops short of creating a fully unified single market. Europe’s telecoms landscape will remain divided among 27 national markets, as the Act does not mandate the consolidation of telecoms rules. It therefore leaves core elements, particularly in 4G and 5G mobile networks, largely under national control.

Another central pillar of the DNA is the creation of an EU-level “single passport” for providers. Under this system, companies would be able to register in one Member State and operate across the Union. The goal is to harmonize connectivity rules and make it easier for operators to scale their activities beyond national borders.

Similarly, the proposal does not introduce measures aimed at creating a pan-European fixed-line or Wi-Fi market, and those areas will continue to be largely subject to national bodies. As a result, Europe’s telecom sector will continue to operate under 27 national regimes, limiting economies of scale and cross-border competition, despite the Commission’s stated ambition to deepen the digital single market.

Big Tech avoids financial obligations...One of the most politically sensitive elements of the file — whether large content and application providers should be required to contribute to the cost of network infrastructure — has been set aside.

The Commission has abandoned earlier proposals to require large platforms such as Google and Meta to contribute financially to the rollout of network infrastructure. The framework includes provisions for dispute resolution but imposes no binding obligations.

Telecoms operators had long argued that Big Tech companies, which account for a significant share of internet traffic, should contribute to infrastructure costs.

Instead, the Act introduces a voluntary cooperation mechanism between connectivity providers and digital platforms, including cloud and content companies. It is designed to support commercial negotiations on interconnection, traffic efficiency, and related issues, without imposing new financial obligations – in part to avoid regulatory overlap with the Digital Markets Act.

CCIA Europe, which represents cloud, content, and internet service providers, described this as a “step backwards.” They argue that the introduction of a voluntary conciliation mechanism risks adding regulatory complexity rather than simplifying the EU’s connectivity framework.

On the other side, telecom operators argue this will be insufficient to rebalance negotiations with large digital platforms. Connect Europe has called for binding arbitration mechanisms in data traffic markets, signalling that demands for mandatory intervention have not disappeared from the debate.

Open Internet principles under pressure...The Digital Networks Act claims to fully preserve the principles of net neutrality, introducing a mechanism to clarify Open Internet rules for innovative services in order to increase legal certainty, as well as a voluntary ecosystem cooperation mechanism on IP interconnection, traffic efficiency, and other emerging areas.

However, rights groups are already sounding the alarms, saying the proposal puts those principles at risk. The digital rights group epicenter.works warns that a closer examination of the proposal “reveals a drastic abandonment of core protections and a gutting of the net neutrality framework.”

They base their concerns on the removal of 18 of 19 recitals from the OIR (Open Internet Regulation). As the organization puts it, those were “key provisions in the recitals that had a major impact on CJEU (Court of Justice of the European Union) jurisprudence and BEREC (Body of European Regulators for Electronic Communications) guidelines.” The OIR was adopted in 2015 as the legal foundation of net neutrality in Europe. Under the regulation, member states must guarantee that internet providers treat all online traffic equally. This means the internet should be a neutral infrastructure, not favoring those who can pay more.

According to epicenter.works, the removed recitals were central to how courts and regulators interpret and enforce net neutrality, and their absence from the legislation would have serious consequences. “Without these recitals, the pillars of the framework are removed, and many determinations by courts and regulators are likely to be decided differently.”

Article 206 of the DNA describes the changes applicable to the OIR. Specifically, it repeals articles 3, 4, 5 and 9 of the regulation, which establish equal treatment of internet traffic, set transparency requirements and grant powers to national regulators. As Thomas Lohninger from epicenter.works told Tech Policy Press, “the DNA is removing the provisions from the OIR that concern net neutrality and only partially reintegrating them into the new law.”

The proposal states, “The DNA replaces certain existing EU legislative instruments that govern the connectivity ecosystem,” including parts of the Open Internet Regulation. In a comparison exercise, Lohninger notes that the new framework preserves only the OIR's provisions on consumer transparency and remedies.

BEUC, the European Consumer Organisation, also raised the alarm over risks to net neutrality, stating that the DNA raises questions in relation to “the preservation of net neutrality, by weakening critical safeguards.” As Agustín Reyna, Director General of BEUC, commented in a press release, “We call on the Commission to ensure that EU telecoms rules continue to defend competition, consumer welfare and the principle of net neutrality.”

While the Commission frames the DNA as part of a broader push for digital sovereignty, some analysts argue that the proposal does little to alter Europe’s structural dependence on non-EU digital infrastructure. Konstantinos Komaitis, a senior fellow at the Atlantic Council, argues that the DNA is “more about market integration than a strategic reset.”

As he put it, “It does not reduce Europe’s reliance on US digital infrastructure, nor does it meaningfully change it.” Instead, Komaitis points to a more subtle shift. His concern is that the new framework may allow telecom operators to shift responsibility for congestion and investment costs onto content and cloud providers.

Next steps...The Commission estimates that full deployment of the DNA could add up to €400 billion to EU GDP by 2035 and contribute to modest emissions reductions through more energy-efficient infrastructure.

The proposal will now move to the European Parliament and the Council. Negotiations are expected to focus on spectrum governance and the balance of authority between Brussels and national regulators.

Reporter: Joana Soares is a Portuguese freelance journalist, based in Brussels, writing about technology policy, privacy, and digital rights. She reports on AI regulation and the political impact of emerging technologies across Europe, with a focus on how policymaking influences democratic institutions and civil liberties.

Wednesday, January 21, 2026


DIGITAL LIFE


To explain or not? Online dating experiment shows need for AI transparency depends on user expectation

Artificial intelligence (AI) is said to be a "black box," with its logic obscured from human understanding—but how much does the average user actually care to know how AI works?

It depends on the extent to which a system meets users' expectations, according to a new study by a team that includes Penn State researchers. Using a fabricated algorithm-driven dating website, the team found that whether the system met, exceeded or fell short of user expectations directly corresponded to how much the user trusted the AI and wanted to know about how it worked.

Implications for user trust and transparency...The findings have implications for companies across industries, including health care and finance, that are developing such systems to better understand what users want to know and to deliver useful information in a comprehensible way, according to co-author S. Shyam Sundar, Evan Pugh University Professor and James P. Jimirro Professor of Media Ethics in the Penn State Donald P. Bellisario College of Communications.

"AI can create all kinds of soul searching for people—especially in sensitive personal domains like online dating," said Sundar, who directs the Penn State Center for Socially Responsible Artificial Intelligence and co-directs the Media Effects Research Laboratory. "There's uncertainty in how algorithms produce what they produce. If a dating algorithm suggests fewer matches than expected, users may think something is wrong with them, but if it suggests more matches than expected, then they might think that their dating criteria are too broad and indiscriminate."

How the study was conducted...In this study, 227 participants in the United States who reported being single answered questions at smartmatch.com, a fictitious dating site created by the researchers for the study. Each participant was assigned to one of nine potential testing conditions and directed to answer typical dating site questions about their interests and traits they find desirable in others. The site then told them that it would provide 10 potential matches on their "Discover Page" and that it "normally generates five 'Top Picks' for each user."

Depending on the testing condition, the participant would see either the promised five "Top Picks" with a message confirming that five options was the norm, or a variation accompanied by a message noting that while five options was typical, this time the system had found two or 10.

"If someone expects five matches, but gets two or 10, then a user may think they've done something wrong or that something is wrong with them," said lead author Yuan Sun, assistant professor in the University of Florida's College of Journalism and Communications. Advised by Sundar, she earned her doctorate from Penn State in 2023. "If the system works fine, you just go along with it; you don't need a long explanation. But what do you need if your expectations are unmet? The broader issue here is transparency."

That may be different than how humans respond when other humans defy expectations, according to co-author Joseph B. Walther, Bertelsen Presidential Chair in Technology and Society and distinguished professor of communication at the University of California, Santa Barbara, who has long studied expectancy violations in interpersonal settings. When humans violate expectations, surprised victims tend to make judgments about the violator, increase or decrease how much they like them, and approach or avoid them thereafter.

"Being able to find out 'why the surprise?' is a luxury and source of satisfaction," he said, explaining that asking another person why they behaved as they did is intrusive and potentially awkward. "But it appears that we're unafraid to ask the intelligent machine for an explanation."

Findings on explanations and user trust...Participants in the study had the opportunity to request more information about their results and then rate their trust in the system. The researchers found that when the system met expectations—delivering the promised five top picks—participants reported trusting the system without needing an explanation of the AI's inner workings. When the system overdelivered, a simple explanation to clarify the mismatched expectations bolstered user trust in the algorithm. However, when the system underdelivered, users required a more detailed explanation.

"Many developers talk about making AI more transparent and understandable by providing specific information," Sun said. "There is far less discussion about when those explanations are necessary and how much should be presented. That's the gap we're interested in filling."

The researchers pointed to how many social media apps already provide an option for users to learn more about the systems in place, but they're relatively standardized, use technical language and are buried in the fine print of broader user agreements.

"Tons of studies show that these explanations don't work well. They're not effective in the goal of transparency to enhance user experience and trust," Sundar said, noting that many of the current explanations are treated like disclaimers. "No one really benefits. It's due diligence rather than being socially responsible."

Transparency, curiosity and future directions...Sun noted that the bulk of scientific literature reports that the better a site performs, the more people trust it. Yet, these findings suggested that wasn't the case: People still wanted to understand the reasoning, even if they were given far more top picks than promised.

"Good is good, so we thought people would be satisfied with face value, but they weren't. They were curious," Sun said. "It's not just performance; it's transparency. Higher transparency gives people more understanding of the system, leading to higher trust."

However, as more industries adopt AI, the researchers said simple transparency is not sufficient.

"We can't just say there's information in the terms and conditions, and that absolves us," Sun said. "We need more user-centered, tailored explanations to help people better understand AI systems when they want them and in a way that meets their needs. This study opens the door to more research that could help achieve that."

Mengqi "Maggie" Liao, University of Georgia, also collaborated on this project.

Provided by Pennsylvania State University
