Monday, January 26, 2026

 

MICROSOFT


Maia 200: the AI chip with which Microsoft wants to usher in the “era of reasoning AI”

Microsoft announced on Monday (26) the launch of Maia 200, its next-generation artificial intelligence accelerator, developed to meet the demands of what the company called the “era of reasoning AI.” The new chip is designed to offer high performance in inference, with significant efficiency gains and cost reductions in large-scale AI workloads.

Maia 200 is already in operation in the central region of the United States in Microsoft's data centers, with expansion planned for the Phoenix, Arizona region, as well as other locations in the future. The first systems are being used to power new Microsoft Superintelligence models, accelerate Microsoft Foundry projects, and support Microsoft Copilot.

By operating some of the world's most demanding AI workloads, the company claims to be able to precisely align silicon design, model development, and application optimization, generating consistent gains in performance, energy efficiency, and scale in Azure.

In practice, the Maia 200 chip is capable of running the largest current AI models, with room for even larger models in the future, according to Microsoft.

The accelerator features native FP8 and FP4 tensor cores, over 100 billion transistors, and a redesigned memory subsystem with 216 GB of HBM3e, bandwidth of up to 7 TB/s, and 272 MB of integrated SRAM. In practice, each chip delivers over 10 petaFLOPS at FP4 precision and about 5 petaFLOPS at FP8.
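
Using only the figures quoted above, a quick roofline-style sanity check shows how these numbers relate. The 2x ratio between FP4 and FP8 throughput is the expected effect of halving precision, and dividing peak compute by HBM bandwidth gives the arithmetic intensity a workload needs before the chip is compute-bound rather than memory-bound. The intensity thresholds are standard derived quantities, not Microsoft claims:

```python
# Roofline-style check using the Maia 200 figures quoted in the article.
fp4_flops = 10e15   # ~10 petaFLOPS at FP4
fp8_flops = 5e15    # ~5 petaFLOPS at FP8
hbm_bw    = 7e12    # 7 TB/s of HBM3e bandwidth, in bytes/s

# Halving precision roughly doubles tensor-core throughput:
assert fp4_flops / fp8_flops == 2.0

# FLOPs the chip must perform per byte fetched from HBM to stay
# compute-bound rather than bandwidth-bound:
intensity_fp4 = fp4_flops / hbm_bw
intensity_fp8 = fp8_flops / hbm_bw

print(round(intensity_fp4), round(intensity_fp8))  # 1429 714
```

The high thresholds (roughly 1,400 FLOPs per byte at FP4) help explain why the large on-chip SRAM and 7 TB/s memory subsystem matter as much as raw compute for inference workloads.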

Microsoft claims that the Maia 200 delivers three times the FP4 performance of Amazon's third-generation Trainium, outperforms Google's seventh-generation TPU in FP8, and offers 30% more performance per dollar than the most advanced hardware the company currently uses. The project's focus is on optimizing the so-called token economy, one of the main cost bottlenecks in operating generative models at scale.

The new accelerator integrates Azure's heterogeneous AI infrastructure and will be used to run multiple models, including the latest versions of OpenAI's GPT-5.2, as well as applications such as Microsoft Foundry and Microsoft 365 Copilot. Microsoft's Superintelligence team will also employ the Maia 200 in synthetic data generation and reinforcement learning pipelines, accelerating the improvement of proprietary models.


Several technology companies are now building their own chips for this same purpose; Microsoft's Maia 200 aims to compete in this market with Google's TPUs and Amazon's Trainium line.

From a systems perspective, the Maia 200 introduces a scalable, two-tier network architecture based on standard Ethernet, with an integrated NIC and proprietary communication protocols between accelerators. Each chip offers 2.8 TB/s of dedicated bidirectional bandwidth for expansion and supports clusters of up to 6,144 accelerators, which, according to Microsoft, reduces energy consumption and total cost of ownership (TCO) in high-density inference environments.

The Maia 200 is already being deployed in the Azure US Central region, with expansion planned for US West 3 in Arizona and other locations. Microsoft also announced a preview of the Maia SDK, with integration to PyTorch, the Triton compiler, and optimized kernel libraries, allowing portability between different accelerators and greater control for developers.
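
The article does not document the Maia SDK's actual API, so the sketch below is purely illustrative: the "maia" device string is a hypothetical placeholder, and the pattern shown is standard device-agnostic PyTorch, the kind of portability between accelerators that the SDK preview is described as targeting. On a machine without a special backend it simply falls back to CUDA or CPU:

```python
# Illustrative sketch only: "maia" is a hypothetical backend name, not a
# documented Maia SDK identifier. The point is the standard PyTorch
# pattern of writing device-agnostic code and selecting the backend at
# runtime, which is what cross-accelerator portability relies on.
import torch

def pick_device(preferred: str = "maia") -> torch.device:
    # Fall back gracefully when the preferred backend is unavailable.
    if hasattr(torch, preferred) and getattr(torch, preferred).is_available():
        return torch.device(preferred)
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(4, 8, device=device)
w = torch.randn(8, 2, device=device)
y = x @ w          # the same code runs on whichever backend was selected
print(y.shape)     # torch.Size([4, 2])
```

Code written this way needs no changes when a new accelerator backend becomes available, which is why SDKs for custom silicon typically plug into PyTorch and Triton rather than defining a separate programming model.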

Importance of Maia 200...AI inference has become a critical and expensive part of AI companies' operations. The Maia 200 launch therefore focuses on reducing costs, increasing energy efficiency, and decreasing dependence on NVIDIA GPUs, as well as optimizing the execution of models such as Copilot within Azure data centers.

In addition to powering Copilot operations, the Maia 200 is expected to support models from Microsoft's Superintelligence team. The company has also opened the chip's SDK to developers, academics, and AI labs.

mundophone

 

TECH


Coinbase power play sparks crypto rift as key bill gets delayed

It was a rare White House rebuke to the crypto industry: Don't take your newfound political muscle in Washington for granted. A week after Coinbase Global Inc. Chief Executive Officer Brian Armstrong helped stall sweeping cryptocurrency legislation in the Senate, White House crypto adviser Patrick Witt took to X to express his displeasure with Armstrong's assertion that no bill is better than a bad one.

"You might not love every part of the CLARITY Act, but I can guarantee you'll hate a future Dem version even more," Witt wrote.

With the bill likely to be delayed for at least several more weeks, some crypto executives are asking whether the industry overplayed its hand. A rift is opening up between those like Armstrong who want to extract the best possible terms, and those prepared to make concessions in order to get much-needed regulatory clarity while the U.S. political climate is still favorable.

Hanging over the debate is this year's midterm elections, which could usher in a Congress much less friendly to an industry that's still recovering from a crackdown under Joe Biden.

"His audacity and hubris struck me as breathtaking," Arthur Wilmarth, professor emeritus of law at George Washington University, said of Armstrong. "But if I were in his shoes, I would have to ask myself, do I really think that six months from now, or a year from now, I'm going to be in a stronger position than I am right now?"

After the markup was postponed, Bloomberg Intelligence analyst Nathan Dean lowered his odds that the bill will be passed in the first half to 60% from 70%. "If we don't see progress from the committee in February, our odds of success will likely fall even more," Dean wrote.

Armstrong appeared to strike a slightly more conciliatory tone on Tuesday, telling Bloomberg TV that while the draft contained "too many giveaways to TradFi," he still sees a path for the legislation to be passed.

Coinbase did not respond to multiple requests for comment.

In helping scuttle the Senate Banking Committee's markup of the bill, Armstrong demonstrated how far crypto's political reach has come under President Donald Trump, an industry champion whose family has amassed vast riches in digital assets.

The crypto industry was the biggest donor in the 2023–2024 election cycle, lavishing more than $133 million on political candidates seen as backing its agenda, according to OpenSecrets. Industry participants also donated to Trump's presidential campaign, his inauguration and his White House ballroom.

Betting on Trump has already paid off handsomely, with a raft of executive orders supporting crypto and landmark legislation governing stablecoins passed in July.

Wall Street wakes up...But crypto's string of wins also set off alarm bells at U.S. banks, which perceived a near-existential risk of deposit flight and decided to act. Their top priority was to ensure that offering yields or rewards on stablecoins, a type of token pegged to the dollar, would be banned.

The restrictions on stablecoin rewards from third parties like exchanges that appeared in the bill version slated for markup have become perhaps the biggest flashpoint between banks and crypto, with neither side willing to budge.

Lenders argue that allowing firms like Coinbase to effectively pay yield on clients' stablecoin holdings could suck away lower-yielding bank deposits and destabilize the financial system; crypto executives counter that they simply have consumers in mind and want to pass the yield generated by the reserves backing stablecoins on to clients.

"It's totally absurd in my view," Circle Internet Group Inc. CEO Jeremy Allaire said about banks' claims about deposit flight risk. Rewards on stablecoins are akin to those offered by credit card issuers and "help with stickiness, they help with customer traction, et cetera," he said during a panel discussion at the World Economic Forum in Davos, Switzerland.

Similar warnings that were sounded when money-market funds emerged proved to be unfounded, Allaire said. Some Coinbase customers can earn rewards of around 3.5% on their holdings of Circle's stablecoin USDC on its platform.

Armstrong has sought to portray it as a kind of David-and-Goliath battle in which his and other crypto firms are standing up for the consumer.

"The bank lobbying groups and bank associations are out there trying to ban their competition," he said in Davos. "I have zero tolerance for that, I think it's un-American and it harms consumers."

There are other elements of the market structure bill seen as unfavorable to crypto, from surveillance of decentralized-finance platforms to restrictions on tokenized stocks and potentially increased powers, in some areas, for the Securities and Exchange Commission—seen as an industry bogeyman under former Chair Gary Gensler.

Long game...To some executives, holding out for rules that embed crypto deep into the U.S. financial system on favorable terms that could outlast Trump's administration is worth the political risk. Now is the time to secure a bill so favorable that any future Democrat-led government would struggle to undo crypto's gains, the argument goes.

"The crypto industry is playing the long game here, recognizing that the proposed legislation could restrain its activities under a future administration that could be less crypto-friendly and that might favor tighter regulation," said Eswar Prasad, a senior professor of trade policy at Cornell University who closely follows the sector.

Not all crypto firms share that sentiment. Kraken and Mike Novogratz's Galaxy Digital Inc. are among companies that favor getting the market structure bill approved as quickly as possible and would have preferred the markup to proceed, according to people familiar with the firms' thinking.

"Walking away now would not preserve the status quo in practice," Kraken's CEO Arjun Sethi wrote in an X post. "It would lock in uncertainty and leave American companies operating under ambiguity while the rest of the world moves forward."

Groups supportive of the DeFi industry were also prepared for the markup and expressed disappointment at the delay.

"We were gearing up for the markup, we were prepared for it to happen," said Amanda Tuminelli, executive director and chief legal officer of DeFi Education Fund. "We had been working productively with other members of the industry and with the Senate Banking Committee, who has worked very, very hard on getting this bill to a good place."

mundophone

Sunday, January 25, 2026


SAMSUNG


Galaxy Z TriFold: first impressions of Samsung's extravagant, experimental phone

Over more than a decade of work on flexible screens and structural engineering, the Korean giant has accumulated enough knowledge to attempt something more ambitious. The Galaxy Z TriFold emerges as the boldest point in that trajectory. It represents not just an incremental evolution over current foldables, but a step change. If folding a screen once already required years of refinement, folding it twice multiplies the technical, usability, and reliability challenges.

There is also a strategic reason behind this insistence. The logic of foldables is not just aesthetics or "innovation for innovation's sake." It responds to an old quest in the mobile industry: to expand the screen area without making the device impossible to carry and use. In a scenario where more and more people work, study, and consume content on their cell phones, larger screens make sense. The problem has always been how to deliver this without sacrificing portability. The TriFold attempts to answer exactly this question.

With the new triple foldable phone, Samsung has decided to double down, or in this case, triple down. The goal is to create a device that can assume multiple formats throughout the day, adapting to the user's different needs.

When closed, it functions like a traditional premium smartphone. When fully unfolded, it reveals a 10-inch screen, approaching the experience of a compact tablet, focusing on productivity, multitasking, and large-screen media consumption.

Foldable smartphones have gone from being a distant promise or a mere engineering exercise to becoming a legitimate category within the premium market. Still restricted to a specific audience, these devices have undergone a long maturation process to gain real space in users' daily lives. For years, they were seen as fragile, overly expensive, or impractical products for everyday use.

This distrust did not arise by chance. The first generations presented clear limitations in durability, ergonomics, and software integration, which made it difficult to justify the high investment required for these devices. For many people, foldable devices seemed more like a technological showcase than a practical solution to real-world problems.

The persistence of some manufacturers, especially Samsung, was crucial in changing this scenario. Since the first Galaxy Fold, the company has taken considerable risks by betting on a still immature format. The initial problems made it clear that folding a screen involved much more than simply making it flexible. It was necessary to rethink the entire architecture of the device, from hinges and materials to how the operating system behaved in multiple formats.

There were severe criticisms, design revisions, delays, and, at times, direct distrust from the consumer market. Even so, each generation brought important lessons. Samsung refined folding mechanisms, reduced thickness and weight, increased the resistance of materials, and, most importantly, invested in adapting the software. Android, which was not originally designed for flexible screens, began to receive specific optimizations, while the One UI interface was gradually shaped to better handle multitasking, dynamic resizing, and continuity of use.

It's worth noting that the Korean company is trying to distance itself from competitors who have already bet on this format before.

Instead of adopting an "accordion-style" design, which allows for greater flexibility in opening, the TriFold requires the internal screens to be fully unfolded, with flaps that fold inward. This is evident from the warnings displayed when trying to close the folds incorrectly, something that requires adaptation on the part of the user.

During the hands-on, this behavior became clear. The TriFold doesn't give the same "open and close any old way" feeling that some users already associate with traditional foldables. There is a correct order of use, designed to protect the internal screen. In practice, this changes the user's relationship with the device. It's a feature that doesn't appear in the technical specifications, but directly influences the daily experience.

The TriFold is visibly thicker and heavier than a Galaxy S25 Ultra, for example, but it still remains usable on a daily basis. The approximate weight of 309 grams is noticeable, especially in a pocket or when holding the device with just one hand, but it is consistent with the hardware, screens, and internal structure that the device offers.

Another important point is how this weight is distributed. In closed mode, the center of gravity is more concentrated, which can be tiring during prolonged use with one hand. In open mode, the weight tends to "dilute" across the larger screen area, making it more comfortable to use. This helps explain why the TriFold works best when alternating between quick closed use sessions and longer open sessions.

The 6.5-inch external screen plays a central role in the initial experience with the device. It uses Dynamic AMOLED 2X technology, with a 21:9 aspect ratio, an adaptive refresh rate of up to 120 Hz, and a maximum brightness of 2,600 nits. In practice, this translates to excellent visibility outdoors, well-calibrated colors, high contrast, and fluid navigation. During the hands-on, we had quick access to the device's basic functions, although more in-depth testing is needed to confirm real-world performance.

One of the main problems with the first generations of foldables was the constant reliance on the internal screen for simple tasks. Here, the external screen is complete, functional, and comfortable, drastically reducing the need to open the device constantly. The user only resorts to the larger format when they really need more space, making its use more natural and less forced.

Upon opening the Galaxy Z TriFold, the design reveals its complexity. The device uses two internal hinges, creating three distinct panels. The opening happens progressively, in stages, and conveys a clear sense of mechanical precision. During TudoCelular's hands-on, no strange noises, creaks, or looseness were perceived. The movement of the hinges is firm, controlled, and provides security, something fundamental in a device that directly depends on structural integrity to justify its shape.

When fully open, the Galaxy Z TriFold reveals its main feature: the 10-inch internal screen. It is a Dynamic AMOLED 2X panel with QXGA+ resolution, adaptive refresh rate between 1 and 120 Hz, and a maximum brightness of 1,600 nits. It's a large, sharp, and fluid screen, designed for both productivity and entertainment, maintaining the expected visual standard of a Samsung Ultra line product.

From an engineering standpoint, the Galaxy Z TriFold represents the pinnacle of Samsung's foldable technology to date. The Armor FlexHinge assembly has been redesigned specifically to meet the demands of the triple form factor. There are two hinges of different sizes, with a double-rail structure, that work together to ensure a smooth, stable fold with balanced weight distribution across the three panels.

Despite its structural complexity, the TriFold impresses with how thin it is when open. At its thinnest point, it measures just 3.9 millimeters. To achieve this, Samsung used advanced materials: titanium in the hinge structure, Advanced Armor Aluminum in the chassis, and a back panel of glass-fiber-reinforced polymer and ceramic. During the hands-on, the device felt solid and well built, living up to its premium proposition.

The internal screen also received special attention. An additional shock-absorbing layer has been incorporated to better handle the double fold. The creases are still present and noticeable, especially under certain lighting angles, but this has become a characteristic of current foldable devices and represents an ongoing challenge for the entire industry.

In terms of performance, the TriFold follows the standard of a current top-of-the-line device. It comes equipped with the Snapdragon 8 Elite for Galaxy chipset, manufactured using a 3-nanometer process, accompanied by 16 GB of RAM. Internal storage options are 512 GB or 1 TB, with no microSD card support. During the hands-on, performance proved consistent even under intense multitasking and constant switching between different screen modes.

The battery is another important point. The Galaxy Z TriFold uses a three-cell system distributed across the device's three panels, totaling 5,600 mAh. This configuration helps balance energy consumption and contributes to more stable battery life throughout the day. According to the manufacturer, 45W fast charging allows for approximately 50% charge recovery in about 30 minutes, in addition to offering wireless charging and power sharing.
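
The quoted charging figures can be sanity-checked with back-of-the-envelope arithmetic. The 3.85 V nominal cell voltage below is an assumption typical of Li-ion packs, not a Samsung specification from this article; everything else comes from the quoted numbers:

```python
# Back-of-the-envelope check on the TriFold's charging figures.
capacity_mah = 5600
nominal_v    = 3.85   # assumed Li-ion nominal voltage, not a Samsung spec

energy_wh = capacity_mah / 1000 * nominal_v    # ~21.6 Wh total pack energy

half_charge_wh = energy_wh / 2                 # ~10.8 Wh for a 50% charge
minutes        = 30
avg_power_w    = half_charge_wh / (minutes / 60)

# The 45 W rating is a peak: average power over the first 50% is far
# lower, since charging current tapers as the cells fill.
print(round(energy_wh, 1), round(avg_power_w, 1))  # 21.6 21.6
```

An average of roughly 21.6 W over the first half-charge sits comfortably under the 45 W peak, so the "50% in about 30 minutes" claim is internally consistent with the pack size.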

Regarding the camera setup, Samsung focuses on versatility. The 200-megapixel main sensor is accompanied by a 12-megapixel ultrawide camera and two 10-megapixel telephoto lenses, with up to 3x optical zoom and up to 30x digital zoom. There are also front cameras on both the external and main screens, designed for video calls, selfies, and use in different device formats. The goal here is clear: to avoid the TriFold being seen as a foldable phone that sacrifices photography. 

by mundophone


TECH


All Smartphones with ultra-high-resolution cameras in 2026

If in 2025 the 200-megapixel sensor was a curiosity reserved for top-of-the-line models like the Galaxy S Ultra, in 2026 it has become the new gold standard. The mobile industry has surrendered to extreme resolution, and not just for the main camera. We are witnessing the birth of a new category of photographic "monsters" that integrate two 200 MP cameras in one device.

With new sensors from Samsung, Sony, and OmniVision flooding the market, we have prepared the definitive guide to all the cell phones — released and unreleased — that are betting on this technology to redefine mobile photography this year.

Before looking at the cell phones, it's necessary to understand the hardware. Three major manufacturers are setting the rules:

Samsung ISOCELL HP5: The most popular sensor at the moment. Released in late 2025, it found a home in everything from premium mid-range to flagships, often being used in telephoto cameras due to its balance between size and resolution.

Samsung HPB: A customized version of the HP9, exclusive to Vivo, powering its X300 series.

Sony LYT-901: Sony's answer. A massive 1/1.12-inch sensor with integrated AI, aimed at the year's most expensive top-of-the-line models.

OmniVision OV52A: The efficient option. Built on a 40nm process, it promises lower power consumption, ideal for high-performance and mid-to-high-range models.

The year has barely begun and the list of devices with 200 MP is already extensive, covering various brands and prices.

Samsung and Xiaomi:

Galaxy Z TriFold: Samsung's revolutionary triple foldable uses the HP2 sensor.

Xiaomi 17 Ultra: The brand's benchmark, with the HPE sensor (based on the HP3).

Redmi Note 15 Pro+ (Global): Bringing 200 MP to a more affordable price.

Oppo and Realme:

Oppo Find X9 Pro: Stands out for using the 200 MP sensor in its periscope camera, offering unprecedented zoom quality.

Realme GT 8 Pro: Uses the HP5 as its main camera.

Vivo and Honor:

Vivo X300 Pro: Uses a custom HPB sensor in its telephoto lens.

Honor Magic 8 Pro: Equipped with Samsung HP9.

Dual 200 MP cameras and zoom monsters

The second wave of releases promises to be even more aggressive. Rumors point to devices that double down, integrating two 200 MP sensors (one for the main camera, the other for zoom).

Oppo Find X9 Ultra: This will likely be the king of photography in the first quarter. It is expected to come equipped with two 200 MP cameras: a main one (Sony LYT-901) and a periscope telephoto lens (OmniVision OV52A). Its sibling, the Find X9s, should also feature a pair of 200 MP HP5 sensors.

Vivo X300 Ultra: Vivo's response won't be long in coming. The X300 Ultra should also boast a 200 MP dual-camera system, combining the Sony LYT-901 with its custom Samsung HPB sensor.

The Foldables: The megapixel craze reaches flexible screens. The Oppo Find N6, the Honor Magic V6 (to be launched at MWC 2026), and the future Galaxy Z Fold 8 should all integrate 200 MP sensors, proving that the foldable format no longer requires camera compromises.

2026 will go down in history as the year that resolution ceased to be a marketing metric and became the foundation of zoom versatility on smartphones.

mundophone

Saturday, January 24, 2026

 

SAMSUNG


Galaxy S26 Ultra: new Gorilla Glass could kill screen protectors

There are rituals that are part of buying a new smartphone: opening the box, turning on the device and, almost religiously, applying a tempered glass screen protector to protect the $1,000 investment. Samsung may be about to end this habit. New rumors indicate that the future Galaxy S26 Ultra will come equipped with a completely new generation of Gorilla Glass, so resistant that it could make screen protectors obsolete.

The information, revealed by renowned leaker Ice Universe, points to a "high-resistance" glass that not only protects against drops but also builds in the functions that lead many users to buy screen protectors in the first place.

Samsung's strategy is not just to make the glass harder; it's to make it smarter. The S26 Ultra seems to be the culmination of an approach that began with the S24 Ultra.

Extreme Durability: The new generation of Gorilla Glass promises impact and scratch resistance superior to anything seen so far. The goal is to give the user the confidence to use the phone "naked," without fear of keys in their pocket or accidental drops.

Native Anti-Glare: Like its predecessors, the S26 Ultra will retain the integrated anti-glare coating. This eliminates the need for matte films, ensuring visibility in sunlight without the loss of sharpness that third-party films cause.

Integrated Privacy: Joining previous rumors about a "Privacy Display," Samsung may be embedding protection against prying eyes directly into the panel. If the screen goes dark for anyone viewing it from an angle, privacy films no longer make sense.

If Samsung manages to deliver on this promise, it will be a hard blow to the accessories industry. Tempered glass screen protectors are a multi-million dollar business, living off users' fear of breaking their screens.

By addressing durability, reflection, and privacy at the material level, Samsung offers a cleaner user experience:

No bubbles: No more application stress.

Perfect Touch: No extra layers reducing sensitivity.

Pure Aesthetics: The phone's design shines without additional plastics.

Technical advancements also include a new anti-reflective coating. Compared to the current Gorilla Armor 2 on the Galaxy S25 Ultra, the new generation promises even greater visual clarity in outdoor environments, eliminating problems caused by direct sunlight.

This change represents a significant generational leap. While current glass is already resistant, Samsung seeks to address the optical limitations that tempered glass films often cause, such as loss of sensitivity in the fingerprint reader and reduced color fidelity.

If confirmed, the Galaxy S26 Ultra will be the first device to offer military-grade protection, privacy filter, and advanced optical treatment in a single piece of glass. The line is scheduled to launch in the first quarter of 2026.

Trust vs. reality...The big question will be psychological. Will consumers trust glass, however "Gorilla" it may be, to protect such a high investment? The fear of breaking the screen is deeply ingrained. However, if Samsung aggressively markets this feature as a "screen protector replacement," it could change market behavior.

With a launch scheduled for February 2026 and the Snapdragon 8 Elite Gen 5 chipset confirmed, the Galaxy S26 Ultra is positioning itself as the most rugged and complete smartphone of the year.

mundophone


DIGITAL LIFE


Europe’s digital reliance on US big tech: does the EU have a plan?

In the digital era, almost every part of life – from communication to healthcare infrastructure and banking – functions within an intricate digital framework, led by a handful of companies operating mainly out of the United States. If the framework collapses, so do many of the essential services that allow society to function.

Transatlantic tensions have been steadily rising during US President Donald Trump’s chaotic first year back in the White House. Trump’s repeated demands for the Danish autonomous territory of Greenland and his tariff threats have driven the EU to reassess its relationship with a long-time ally that may not be as dependable as previously thought. US cooperation with Europe isn’t just key for trade and diplomacy; it’s also essential to maintaining a robust technological and digital frontier.

The bulk of European data is stored on US cloud services. Companies like Amazon, Microsoft and Google own over two-thirds of the European market, while US-based AI pioneers like OpenAI and Anthropic are leading the artificial intelligence boom. According to a European Parliament report, the EU “relies on non-EU countries for over 80 percent of digital products, services, infrastructure, and intellectual property”.

That dependency on a handful of providers has left the EU extremely vulnerable to sovereignty risks in its public and private sectors, to the point where technical issues, geopolitical disputes or malicious activity can have wide-reaching, disastrous effects.

With that fear in mind, EU lawmakers are pushing for alternatives to US Big Tech and for homegrown substitutes for Google, OpenAI, Microsoft and others.

EU lawmakers are pushing for digital sovereignty...According to Johan Linåker, senior researcher at RISE Research Institutes of Sweden and adjunct assistant professor at Lund University, Europe’s complacency has led the bloc to a point where most of Europe runs on clouds provided by US Big Tech.

“Public sector and governments have been suffering by a comfort syndrome for decades. There’s a tradition of conservative procurement culture, risk aversiveness, and preference for status quo," he said.

“The difference now is that the geopolitical landscape adds a new dimension of risks to the palate – beyond lack of innovation and escalating license costs.”

Lawmakers are scrambling to make up for that complacency. In 2024, the European Commission appointed its first "technology sovereignty, security and democracy" chief, Henna Virkkunen, whose job it is to reduce dependency and formulate policies that will keep the EU digitally secure. 

Lawmakers have also rallied behind the Eurostack movement, an initiative established in 2024 that aims to build an independent European digital infrastructure to limit the dependence of the European Union on foreign technology and US companies. It has lofty goals of cutting technological dependence, boosting industry competitiveness and driving innovation – all while committing to the EU’s sustainability goals.

However, an analysis by independent think-tank Bertelsmann Stiftung estimates that it will take roughly a decade and €300 billion for Eurostack to achieve its goal. A less conservative estimate by US trade group Chamber of Progress (which includes several US Big Tech companies) estimates that the full cost would be far higher at over €5 trillion.

Time and money – not will or talent – are the EU’s main problem.

“We have a brilliant pool of skillful and entrepreneurial minds, but they require beyond substantial investments and demand to fully leverage,” Linåker said. “Europe’s sovereignty hinges on its competitiveness and innovation.”

Finding realistic alternatives...France, Germany, the Netherlands and Italy have also begun investing in open-source platforms. Open-source means that the technology – hardware or software – is available to be modified, reviewed, and shared.

“Essentially, the open-source piece of software is free to use," Linåker said. "In a public procurement you are free to point out the software explicitly and focus on buying the services needed for use. It provides a toolset for governments in growing their digital sovereignty and resilience, and increasingly recognised through upcoming strategies and legislation."

Certain websites like Switch to EU and european-alternatives.eu also provide lists of European or "European-friendly" digital substitutes that can replace US Big Tech: Mastodon can be an alternative to Elon Musk’s X, Switzerland’s Proton Mail can replace Gmail, etc. But habits are hard to break and these changes can only occur through a deliberate shift in mentality.

While that shift will be “massive”, according to Linåker, it must start somewhere.

“Policy-makers and governments need to lead by example by moving the public discourse and communication beyond incumbent platforms as X to options like Mastodon, which enables an open and federated infrastructure, not dependent on any single actor," he said. "But again, this is not an easy shift – although in practice it’s not that hard."

A few pioneering projects are taking digital sovereignty seriously and leading the way to making that change concrete.

The Swedish city of Helsingborg, for example, is testing how its public services would function in the event of a digital blackout. The German region of Schleswig-Holstein has gone a step further: the regional government has replaced its Microsoft-powered computer systems with open-source alternatives, cancelling almost 80 percent of its Microsoft licenses. Although the system isn’t perfect, the Schleswig-Holstein administration hopes to phase out Big Tech almost entirely by the end of the decade.

“The regional government of Schleswig-Holstein proves the fact that one can create a sovereign digital infrastructure, while working with domestic and European vendors," Linåker said. "Myths regarding security and usability are no more. All of Europe should be pointing their eyes in their direction."

But weaning off US tech entirely will take time.

“Decoupling is unrealistic and cooperation will remain significant across the technological value chain,” an EU digital strategy report draft reviewed by POLITICO in June 2025 said – which means the EU will, for now, continue to promote collaboration with the US and other tech players including China, Japan, India and South Korea.

The draft report's admission that untangling from the dominance of US tech companies is “unrealistic” only fuels fears about the EU’s reliance on its unpredictable transatlantic ally.

“If the plug gets pulled, consequences will be catastrophic. The likelihood is another question,” Linåker said. “Either way, policy makers and governments need to realise the risk is a fact, understand the potential consequences, and start treating digital infrastructure as a critical asset.”

By: Diya Gupta (https://www.france24.com/en/author/diya-gupta/)

Friday, January 23, 2026


TECH


Q&A: Ethical, legal and social issues—what does it take for new technology to be accepted?

How do cutting-edge science and technology respond to ethical and legal issues when incorporated into society? These issues are known as ethical, legal and social issues, or "ELSI" for short, and research on these issues is being carried out both within Japan and around the world.

In 2023, Kobe University launched KOBELSI, its university-wide research project on ELSI in the life sciences and natural sciences. As revolutionary technologies in fields such as medicine and artificial intelligence (AI) continue to emerge one after another, how has ELSI research advanced?

Professor Chatani Naoto of the Graduate School of Humanities, leader of the project and an expert in ancient Greek philosophy and bioethics, spoke about the current state of ELSI research and its future prospects.

An introduction to research at KOBELSI was given at an event to commemorate the launch of the project in August 2023. Credit: Kobe University

When did ELSI research start to become more widespread?

ELSI research began to really spread in the U.S. from the end of the 20th century, and there were two big reasons for this.

First of all, from the 1970s, the field of molecular biology started to see rapid growth, owing to the elucidation of genetic organization and advances in genetic engineering. There were also pushes to apply this technology to medicine. At the same time, the scientists themselves began to raise fears about the risks of these advances, such as biohazards and the misuse of technology.

It was at that point that James Watson, known for his discovery of the double helix structure of DNA, proposed a moratorium in 1974 for researchers to stop and consider how research should be conducted.

Another reason was the human genome project that began in the U.S. in the 1980s. This was an international endeavor to decipher all of the genetic information of humans, and during the project's promotion, Watson advocated the importance of ELSI. He proposed that when performing research, a certain percentage of the budget should be allocated to ELSI research, which the U.S. government accepted and applied when organizing the project.

The advancement of DNA analysis brings the possibility of all sorts of disadvantages, such as being unable to get insurance or a job because a genetic analysis predicts a future illness. In addition, genetic information – the so-called blueprint for our bodies – was a previously unanticipated form of intellectual property, which raises the issue of how society and the law should handle it. The promotion of the human genome project suddenly brought these issues into the spotlight.

What about in countries besides the U.S.?

Perhaps due to lessons learned from the human experiments conducted by the Nazis, Europe has long tended to emphasize the human rights issues involved in medical research on humans. In Europe, however, this research developed not as ELSI but as ELSA, in which the A stands for "aspects."

Japan, on the other hand, has only really started its own ELSI research within the last few years. The fifth edition of the Cabinet Office's "Science and technology basic plan," which is released every five years and covered the 2016-2020 academic years, included the concept of ELSI for the very first time. Following this, ELSI became eligible for public research grants, and in the past few years, centers for conducting ELSI research have begun to be established at national universities.

At Kobe University, the "Project innovative ethics" was launched at the Graduate School of Humanities in 2007, carrying out research and education in applied ethics. The project has been involved in activities such as conducting surveys on asbestos issues in industrial areas of Amagasaki City, Hyogo Prefecture, and in the aftermath of the Great East Japan Earthquake, as well as publishing academic journals and holding research seminars.

To build on the work accumulated in that project, KOBELSI was started as a university-wide organization in 2023. Currently, some 20 researchers from graduate schools in both the liberal arts and the sciences participate in KOBELSI.

What about ethical issues in research even before the term ELSI came to be?

In recent years, the span between developing a science or technology and its social implementation has become extremely short. Not only that, once a technology is implemented in society, its effects are both wide and varied. Even when there are fears that a technology will be misused, laws cannot keep up. By the time that technology becomes widespread in society, it is already too late to start thinking about these issues.

Whether it's the internet or smartphones or what have you, it's impossible to go back to a society where those things didn't exist. Rapidly evolving generative AI will likely turn out much the same. Generative AI searches through massive amounts of data to provide answers to questions in an instant, but this comes with its own issues.

From biased expressions created in a world centered around Europe and North America to copyright issues associated with the original data being used, all kinds of issues have been pointed out.

One characteristic of ELSI research is that it looks to find issues not after social implementation, but from the development stage of science and technology to try to solve as many of those issues as possible. Rather than pointing out issues from the outside, ELSI research aims to think about issues together in the same circles as the researchers who are developing new science and technology.

In order to do this, researchers from ELSI fields, such as law and ethics, must have a certain level of understanding of the specialized knowledge of their target science and technology, while the researchers developing this technology must also consider the social effects of their research.

However, when these researchers do think from within the same circle, there is also the issue that they may lose their critical attitude. Even if there's a framework for researchers with ELSI-based perspectives to enter into, it's meaningless if they just approve each technology as it's developed.

The more organized this framework becomes, the more researchers need to be aware of the risks involved with the technology and emphasize communication among themselves.

Perspectives on the use of research by the military are important ELSI topics as well.

I often hear the term "dual-use" being used, but this can actually mean both "military and civilian use" and "good use and misuse."

There are dual-use aspects to new science and technology. The internet we use all the time, the GPS in our car navigation systems, those were all developed for the military and expanded to civilian use. Conversely, there are also examples of technology developed for civilian use that were later adopted by the military.

I like to think based on the keywords of "intention" and "foresight" of the doctrine of double effect, which is well known in the field of ethics. Development of new science and technology for the benefit of society or for civilian use is the intention; however, such technology could also be foreseen as being used for crime or for war.

In cases where these uses are foreseen, this then requires thinking from all sorts of angles, such as cruelty, invasiveness and even potential use in weapons of mass destruction.

So, once you consider the overall situation, i.e. the scale and reliability of the benefits that this technology will provide society, only then can you decide whether a certain level of dual use can be tolerated.

The people of Japan deeply regret the use of university research for military purposes in wars past, making this problem a large point of discussion for ELSI. If we're being extreme, it's certainly possible that all science and technology could be used by the military, but what's most important when carrying out research is autonomy and openness.

Autonomy is when research begins from the intellectual curiosity of researchers and is not interfered with externally. Openness is when the content of research is made public through papers and other media. In other words, the lack of either autonomy or openness would be in opposition to the true nature of academic research.

What kinds of endeavors is KOBELSI engaged in?

In 2022, Kobe University constructed a framework called the Digital Bio and Life Sciences Research Park (DBLR), which includes research hubs in five areas, such as biomanufacturing, medical engineering and healthy longevity, where research is being conducted with a stronger awareness of enterprise partnerships and social implementation. Since research conducted at DBLR in particular requires ELSI perspectives, this led to the launch of ELSI research projects.

We are especially active in collaborations in the area of biomanufacturing. A research project that creates all kinds of chemical and medical products using microorganisms and materials of biological origin was even selected for the Ministry of Education, Culture, Sports, Science and Technology's "Program for forming Japan's peak research universities (J-PEAKS)."

Biomanufacturing makes use of techniques such as genetic engineering and genome editing, which means that we must not only think about how we can ensure the safety and security of society, but also about how we can get society to accept our research.

We also need to think about how this research aligns with the culture and traditions rooted in society. In addition, we need to do our best to keep things fair and make sure that people from specific countries or regions or the very wealthy aren't the only ones benefitting from new technology. That's also why we begin by finding issues that would lead to those types of problems.

ELSI is an area to which researchers from a wide variety of fields contribute, meaning that one group of researchers can't handle every problem that arises. At KOBELSI, we invite individuals from both inside and outside the university for research seminars held about once a month while also collaborating with universities overseas.

To this point, we've concluded exchange agreements with Lingnan University in Hong Kong, the University of Genoa in Italy and the University of Valencia in Spain, and moving forward, we'll be conducting joint research and other forms of exchange with these universities.

What is your personal topic of research?

One research topic of mine is informed consent. In medicine, this process involves the patient receiving an explanation regarding treatment and, once they understand the details, making decisions regarding treatment on their own, rather than leaving treatment strategy entirely up to their physician.

Even in medical research involving people, there is a model in place in which research is conducted only after the participants receive an explanation and give their consent, and the study passes internal review.

I think this process is necessary even in fields outside of medicine. Be it generative AI or genetic engineering, when it comes to the introduction of new technology, I think we're going to need a system in which we explain the technology to society to gain their understanding and consent.

The current focus of my research is on problems regarding the collection and use of personal data. When we use services on the web, most of the time we click the "I agree" button to consent to providing our personal information, but how many people actually read the rules and restrictions displayed on their screens?

Many people say that privacy is important, but the act of pushing that button without reading is saying the exact opposite. This is what is called the "privacy paradox."

Even if one piece of personal data can't identify an individual, when you link various pieces of data together, it can paint a fairly detailed picture. If people use these services with an understanding of these issues, then that's fine, but when they don't, a proper explanation is required.

In my research, I'd like to take into consideration the current state of personal data management to think about how to make these agreements possible in a practical sense, referencing the results of informed consent carried out in medical fields.

Where is ELSI research headed?

The goal of ELSI research is to pursue whether or not science and technology will truly be beneficial to society and make people happy, in other words, whether or not it will bring about well-being. Well-being shares traits with "eudaimonia," a state of happiness in ancient Greek philosophy, one of my fields of research, which describes a way of living that is fulfilling and has purpose. Thus, the E in ELSI (Ethical) forms a crucial foundation for the way we think about our research.

When introducing science and technology that could change the structure of the world we live in, each and every citizen becomes a stakeholder. It's extremely important, then, that they receive an explanation about the technology and have the right to choose. If they have the right to use the technology, then they ought to have the right to not use it. I'd like to think about how to make procedures to exercise that right possible.

Science communication is essential to researchers. What we need is a structure that will allow us to hold discussions with citizens, apply the content of those discussions to our research and then get even more feedback from citizens before the social implementation stage.

Recently, in addition to ELSI, the term "RRI," or responsible research and innovation, has also come into use. RRI adds the nuance that researchers themselves think about ELSI as they conduct their research, but it's the way of thinking that matters, not so much what you call it. Even if terms like ELSI and RRI fall out of use, I think this is a topic we should continue to deal with for as long as science and technology evolve.

Provided by Kobe University
