Monday, April 27, 2026


SAMSUNG


Samsung's "ace in the hole" to face the future foldable iPhone

Samsung appears to be preparing a fierce counterattack in the foldable device market, and its secret weapon for 2026 is not just a new form factor, but the triumph of miniaturization. If you follow the market, you know that the South Korean brand has been pushing to make its flagship phones more elegant and less visually "heavy." The new Galaxy Z Wide Fold, which promises to be the great rival of the future foldable iPhone Ultra, will share with the Galaxy Z Fold 8 a technical innovation that draws attention: a drastically reduced front camera. This design change, which requires complex hardware engineering, suggests that Samsung is accelerating its pace to offer a completely clean and uninterrupted viewing experience.

The latest rumors indicate that Samsung wants to establish almost absolute parity between its two upcoming foldable flagships. Aside from how they open and the aspect ratio of their screens, the Galaxy Z Fold 8 and the Galaxy Z Wide Fold should be twins when it comes to internal specifications.

This technical “mirroring” strategy is curious. By all indications, the Galaxy Z Wide Fold is born with a very specific purpose: to curb Apple's entry into this segment. With Huawei gaining ground with the Pura X Max, Samsung cannot afford to have an “experimental” model with inferior cameras. Therefore, it decided to transfer the cutting-edge technology developed for the Fold 8 directly to the new wider-format model.

The physical reduction in the size of the front camera is not just a matter of aesthetics; it is a necessary intermediate step towards something much more ambitious. Users have long been asking for the end of the “punch-hole” (the small hole in the screen), and this extreme miniaturization indicates that Samsung is refining the components to eventually hide them completely under the pixel panel.

According to Ice Universe, the Galaxy Z Fold 8 Wide and Z Fold 8 will have the smallest selfie camera cutouts ever seen on Samsung phones.

The hole should have a radius of only 2.5 mm, considerably smaller than the 3.7 mm hole for the front camera on the Galaxy Z Fold 7's external screen.
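Treating both figures as the same kind of measurement (the leak does not specify whether they are radii or diameters; the ratio is the same either way), the cutout's area shrinks with the square of the linear dimension:

\[
\left(\frac{2.5}{3.7}\right)^{2} \approx 0.46
\]

In other words, the new opening would occupy roughly 46% of the panel area of the Fold 7's cutout, a reduction of more than half.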

This is a different approach from the one adopted in the Galaxy S26 Ultra, which enlarged the hole to widen the front camera's field of view. The new strategy raises some concern about image quality, although Samsung at least avoided resorting to an under-display camera, which could have been even worse.

We expect more details to be revealed in the coming weeks, as the Galaxy Z Fold 8 and Z Fold 8 Wide are expected to be announced in July 2026.

Possible specifications of the Galaxy Z Fold 8 Wide (below):

Internal screen: 7.6-inch Dynamic AMOLED 2X display with 120Hz refresh rate and QXGA+ resolution

Aspect ratio: possibly 4:3 or close to 16:10

Platform: Snapdragon 8 Elite Gen 5

12 GB or 16 GB of RAM

256 GB, 512 GB or 1 TB of internal storage

Nominal battery: 4,660 mAh (dual cell)

Estimated typical capacity: around 4,800 mAh

Cell configuration: 2,267 mAh + 2,393 mAh

5G connectivity, Wi-Fi 7, Bluetooth 5.4 and NFC

Android 16 with One UI 9.0

What this change represents (below):

Hardware miniaturization: New, denser sensors that occupy less physical space.

Sleeker design: A more discreet bezel and cutout area, increasing immersion.

Direct competition: Immediate response to Apple's plans for the iPhone 18 and 20 Pro.

Technological evolution: Continuation of the innovation work started with the Galaxy Z Fold 7.

Although the under-display camera (UDC) has existed in previous generations, image quality has been its Achilles' heel. By shrinking the traditional camera so significantly, Samsung manages to maintain the photographic quality you demand, while reducing the negative visual impact on the panel.

The race against Apple's plans for 2027... It's no secret that Apple has a "perfect" iPhone on the horizon for the brand's 20th anniversary in 2027. This model should abandon any type of notch, including the Dynamic Island, moving Face ID and the camera under the glass. However, reports from the supply chain suggest that the Cupertino giant is facing considerable technical challenges.

This is where Samsung sees its window of opportunity. By achieving a reduction in camera hardware as early as 2026 with the Fold 8 and Wide Fold, the brand is positioning itself a step ahead in practical execution. While Apple tries to solve panel transparency issues, you may have devices that, although they still carry a small cutout, frame it with bezels so slim that the screen seems to float.

The decision to seriously invest in the processor and optics of these new models shows that Samsung has realized that foldable users no longer accept compromises. If you're going to pay a premium price, you want the best technology available, not a "stretched" version of a conventional phone. The Galaxy Z Wide Fold seems to be the definitive answer for those who want maximum usable area without sacrificing the sophistication that miniaturization allows. If this trend continues, the days of obvious notches and holes in our screens are numbered.

mundophone


DOSSIER


DIGITAL LIFE


The future of cryptocurrencies may be closer to a real challenge

Over the past decade, the cryptocurrency sector has undergone a transformation so profound that it is barely recognizable today. Compared to the movement that first emerged with Bitcoin in 2009, there are virtually no common denominators left in the present state of the crypto industry.

Bitcoin, and the early cryptocurrencies that soon followed, began as a rebellion against the centralized monetary system. They served as a protest against uninvited, unnecessary third parties who forced themselves into voluntary transactions between individuals. Perhaps most important, they sought to restore privacy in financial activity.

By now, however, crypto has morphed into something that its early adopters and true believers would probably have disavowed. Today, the sector is a heavily intermediated, regulated and increasingly institutional domain. While many might see this radical departure as a betrayal of the original vision, it has also brought considerable benefits.

For years, quantum computing was treated as a distant threat to the world of cryptocurrencies — something reserved for advanced laboratories and futuristic scenarios. But a new experiment has subtly and unsettlingly changed this scenario. Without directly attacking large networks, a researcher managed to demonstrate something that, until recently, seemed improbable outside the theoretical field.

What happened was not a massive attack or a collapse of digital financial systems. On the contrary, the experiment was limited, controlled, and focused on a relatively simple cryptographic key. Still, the impact was immediate.

An independent researcher managed to break a key based on elliptic curve cryptography using quantum computing available in the cloud. It wasn't a secret or inaccessible supercomputer, but commercial infrastructure that is already beginning to become popular.

The achievement involved a key of only 15 bits — extremely small when compared to those used in networks like Bitcoin, which operate with 256-bit keys. In practical terms, this means there is no immediate risk to users or to the integrity of the network.
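To make concrete what "breaking a key" means here, the sketch below brute-forces the elliptic-curve discrete logarithm, recovering the private key k from the public point Q = kP, on a deliberately tiny textbook curve. It is a classical illustration of the problem the quantum experiment solved, not a reconstruction of the researcher's method; at 256 bits, this kind of search becomes astronomically infeasible.

```python
# Toy elliptic-curve discrete logarithm: given P and Q = k*P, recover k.
# Textbook curve y^2 = x^3 + 2x + 2 over F_17, whose base point has order 19.
# This is far smaller even than the 15-bit key in the experiment; real
# networks use ~256-bit curves (e.g. secp256k1), where neither brute force
# nor current quantum hardware comes close.

p, a, b = 17, 2, 2
P = (5, 1)                                # base point of order 19
assert (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % p == 0  # P lies on the curve

def add(A, B):
    """Add two curve points (None plays the role of the point at infinity)."""
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if A == B:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

k_secret = 7
Q = None
for _ in range(k_secret):                 # "public key" Q = k_secret * P
    Q = add(Q, P)

k, R = 0, None
while R != Q:                             # classical brute-force search for k
    R, k = add(R, P), k + 1
print("recovered private key k =", k)     # prints 7
```

Shor-style quantum algorithms attack this same problem with a number of operations that grows only polynomially in the key size, which is why scaling the hardware, rather than new mathematics, is the open question.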

But the central point is not the size of the broken key. It's the speed of the advancement.

Not long ago, similar experiments had only reached 6 bits. The leap to 15 bits represents an exponential growth in the capacity of quantum processing applied to cryptography. Instead of discussing whether this will ever be possible, experts are beginning to wonder when this could scale to truly critical levels.

The distance between theory and reality is beginning to decrease... Another factor that caught the community's attention was the context in which the experiment took place. It was not an isolated test in a restricted academic environment. The execution took place using quantum resources offered as a service in the cloud, which indicates an advance in the democratization of this technology.

This changes the perception of risk. Quantum computing ceases to be a distant concept and becomes an evolving tool, with increasingly wider access.

Furthermore, recent studies indicate that the technical requirements to break real cryptographic keys may be lower than previously estimated. Although thousands—or even tens of thousands—of stable qubits are still needed, the trend is toward a reduction in these barriers as quantum architectures advance.

In the case of cryptocurrencies, the potential impact is significant. Systems like Bitcoin and other modern blockchains rely heavily on elliptic curve cryptography to ensure the security of transactions and digital wallets.

There is also an important detail: a portion of the assets is associated with addresses whose public keys are already visible on the blockchain. In a future scenario with advanced quantum capabilities, these funds could become more vulnerable.

An early warning—and the challenge of adapting in time... Despite the alarm generated, experts agree on one essential point: there is no reason for immediate panic. Current networks remain secure within existing technological capabilities.

However, the experiment serves as an early warning. The industry is already working on post-quantum cryptography solutions, designed to withstand this type of attack. The challenge lies not in creating these alternatives, but in the transition.
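In practice, the first stage of that transition is usually "hybrid" schemes, which combine a classical key exchange with a post-quantum one so that an attacker would have to break both at once. A minimal sketch of the idea, using stand-in random bytes instead of real X25519 or ML-KEM outputs:

```python
import hashlib, os

# Toy illustration of the "hybrid" construction behind post-quantum
# migration: derive the session key from BOTH a classical and a
# post-quantum shared secret. The secrets below are placeholders
# (os.urandom), not outputs of real key-exchange protocols.

classical_secret = os.urandom(32)   # e.g. from an X25519 exchange
pq_secret = os.urandom(32)          # e.g. from an ML-KEM-768 encapsulation

# Breaking the session key now requires breaking both inputs.
session_key = hashlib.sha256(classical_secret + pq_secret).digest()
print(session_key.hex())
```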

Migrating global systems that handle billions—or even trillions—of dollars is no simple task. It involves compatibility, consensus among network participants, and, above all, user trust.

What this episode makes clear is that the clock has started ticking. The threat has not yet materialized, but it is no longer purely theoretical.

And, when it comes to technology, ignoring early signs is usually the most costly mistake.

Leaving the original vision behind... The original idea of Bitcoin was simple: a peer-to-peer digital transaction network resistant to surveillance, censorship, arbitrary monetary expansion and other external interventions. To achieve this, three core conditions had to be met: decentralization, anonymity (or at least pseudonymity that is difficult to pierce) and the removal of intermediaries or third parties. In the first few years, it worked exactly as advertised. Transactions were borderless and non-custodial, exchanges were lightly (if at all) regulated and blockchain technology attracted those skeptical of state authority, centralized banking and fiat money.

The architecture that made it all possible has been progressively dismantled. Regulatory measures such as compulsory, extensive know-your-customer (KYC) and anti-money laundering (AML) requirements, licensing rules for exchanges, disclosure requirements and tax burdens have forced the majority of crypto activity into identifiable, heavily monitored channels. As a result, the landscape has shifted dramatically from the crypto Wild West of years past. Nowadays, most crypto holders cannot transact meaningfully without submitting government-verified proof of identity, consenting to the tracking of their wallets and filling out disclosure forms.

The change has been transformative on the infrastructural level, too. Crypto mining, once a core element of cryptocurrency’s decentralized nature, is now concentrated among a handful of industrial operators. It no longer makes financial sense for an individual to compete against the immense computational power of these mining farms and their massive electricity requirements.

Another essential element in guaranteeing decentralization and anonymity was crypto owners holding their own keys in self-custody wallets, embodied in the community principle “not your keys, not your coins.” For various reasons, mainly convenience, fear of loss and a lack of technical skills among investors who joined the crypto space later, holders now increasingly rely on custodial platforms that replicate the very third-party dependence Bitcoin was designed to eliminate. Even stablecoins, the most widely used crypto instruments today, are explicitly reliant on centralized issuers, commercial bank accounts and state-regulated custodians.

The future of cryptocurrencies points to the consolidation of Bitcoin as a digital store of value and greater institutional integration. The market is expected to mature with lower volatility, new price highs by 2026, and the tokenization of real assets, despite geopolitical uncertainties and downturns.

Trends for the future of cryptocurrencies (below):

Bitcoin as "digital gold": The view that Bitcoin is an emerging digital store of value tends to consolidate, with the possibility of being adopted in central bank reserves.

Institutional adoption: Institutional interest remains strong, with Bitcoin being seen as safe due to its mining power, while Ethereum (ETH) and Solana (SOL) lead in DeFi and practical applications.

New highs (2026): Projections suggest that, after periods of decline, Bitcoin may surpass historical records by 2026, ending traditional four-year cycles.

Regulation and maturity: Regulation is becoming clearer, with central banks debating fees and capital requirements, which brings more security to the market.

Technology and usability: The focus is shifting to usability and scalability (Solana) and tokenization of real assets, making the market less focused solely on speculation.

Risks and challenges:

-Persistent volatility: Despite the long-term trend, the market remains volatile, with risk correlated to other assets, especially technology stocks (Nasdaq).

-Geopolitical uncertainty: Global events can generate extreme scenarios, from Bitcoin reaching very high values to facing significant drops.

In short, the crypto future tends to be a mix of financial maturity with blockchain technology, moving from a purely speculative environment to integration into the traditional financial system.

mundophone

Sunday, April 26, 2026


TECH


The EU is preparing to open Android to rival AI assistants

The opening of Android to rival AI assistants is at the heart of one of the European Commission’s most ambitious regulatory offensives. Under the Digital Markets Act, Brussels is preparing to impose binding obligations on Google that could transform how two billion devices interact with artificial intelligence.

The European Commission opened two specification processes on January 27, 2026, each focusing on a separate obligation under the Digital Markets Act. The first, under Article 6(7), requires Google to ensure third parties “free and effective interoperability” with the Android hardware and software features that Gemini uses exclusively. The second, under Article 6(11), obliges the company to share anonymized search data with competing search engines and artificial intelligence vendors, on fair, reasonable and non-discriminatory terms.

On April 16, the Commission published preliminary findings on data sharing, in a 29-page document detailing what data must be transmitted, how it should be anonymized and what audit regime governs it. The public consultation runs until May 1. The process related to Android interoperability follows a parallel timeline, with conclusions imminent, according to Bloomberg.

In practice, opening Android to competing AI assistants could mean that any user could set ChatGPT or Claude as the system's default assistant, with the same privileges that Gemini holds. This includes voice activation, access to always-on features, and integration with apps like Gmail or Google Calendar, something rivals currently cannot match in depth.

Brussels' position is clear. A company that controls about 65 percent of the mobile operating system market in Europe cannot be the sole arbiter of which AI speaks to the phone.

The race between Brussels and Gemini... The timing of this dispute is not accidental. Google completed the transition from Google Assistant to Gemini on Android devices in March 2026, just as the specification processes were gaining momentum. Each software update deepens Gemini's integration into the ecosystem, which makes the regulatory task more complex as the binding decision approaches.

This decision must be adopted by July 27, 2026. If Google fails to comply, the Commission may open a formal investigation that could result in fines of up to 10 percent of global annual revenue.

Google does not accept demands passively. The company claims that the measures could "compromise the privacy, security, and innovation" of users, further arguing that the data-sharing proposals "impose ineffective anonymization to increase data volume," putting privacy at risk to satisfy what it describes as "unlimited demands from competitors."

The skepticism is partially justified. Opening access to features such as wake-word detection and on-screen content reading to any artificial intelligence vendor creates a wider attack surface. The question of how to audit, in practice, compliance with the anonymization obligations remains, for now, without a clear answer from Brussels.

There is also legitimate market tension. OpenAI and Anthropic, the main beneficiaries of these measures, are for-profit companies with commercial interests as pronounced as those of Google itself.

A precedent with global reach... The July decision is not confined to the borders of the European Union. The UK's Competition and Markets Authority is closely monitoring developments, and regulatory pressure on digital markets in the United States, although less structured, is growing in Congress.

If Brussels confirms that the Digital Markets Act can effectively, and not just formally, impose the opening of Android to AI assistants, it creates a model that other regulators can adapt. The real test is not the decision itself. It's the implementation.

The European Union (EU) is stepping up measures to compel Google to open the Android operating system to third-party artificial intelligence (AI) assistants. The goal is to ensure that competitors have the same access to device features as Gemini. The European Commission's decision, based on the Digital Markets Act (DMA), is scheduled for July 2026.

Key points of the EU intervention (below):

Interoperability access: The European Commission has initiated procedures to ensure that assistants such as ChatGPT and Claude can use Android features, such as voice activation and on-screen content monitoring. These features are currently reserved for Gemini.

Data sharing: Google has been instructed to share search data (ranking, query, clicks) with rival search engines on fair, reasonable and non-discriminatory terms (FRAND).

Deadline and penalties: The EU has set a six-month deadline from January 2026 (ending in July) for Google to implement these changes. Non-compliance may result in fines of up to 10% of global annual revenue.

Antitrust investigation: In addition to the DMA, the EU is also investigating whether Google violated competition rules by using YouTube content to train its AI without compensation.

Context of the action...The move is part of the strict enforcement of the DMA. This law classifies tech giants as "gatekeepers," requiring greater competition and options for European consumers. The focus is not only on voice assistants but also on ensuring that search-based AI has an equal chance to compete.

Simultaneously, the EU is also pressing Meta to reverse the blocking of third-party AI assistants on WhatsApp. The goal is to prevent the exclusion of Meta AI competitors from the European market.

mundophone


DIGITAL LIFE


US warns allies about China distilling AI models

The distillation of AI models has reignited tensions between Washington and Beijing, in a new chapter of a years-long technological dispute, after the US State Department sent a diplomatic cable to consular and diplomatic posts worldwide on April 25, 2026, according to a Reuters exclusive. The document instructs missions to warn allied governments about what Washington describes as systematic efforts by Chinese companies to extract American artificial intelligence technology. Companies named include DeepSeek, Moonshot AI, and MiniMax, and a separate request was sent directly to Beijing.

The diplomatic cable was sent a day after DeepSeek launched the V4 model on April 24. The Hangzhou-based startup has unveiled two variants: the V4-Pro, with 1.6 trillion total parameters and 49 billion active parameters per token, and the V4-Flash, a lighter version with 284 billion total parameters and 13 billion active parameters. Both models support a context window of one million tokens and were released under the MIT open-source license, according to technical documentation available on the Hugging Face platform.

The company claims that the V4-Pro rivals the best closed-source systems from OpenAI and Anthropic, namely GPT-5.4 and Claude Opus 4.6. According to independent analysis by Artificial Analysis, the V4-Pro leads open-source models in programming (LiveCodeBench: 93.5%), mathematics (IMOAnswerBench: 89.8%), and autonomous agent tasks (SWE-bench Verified: 80.6%). The open-license launch creates a contradiction that Washington has not directly addressed: any entity can legally study and adapt V4, raising questions about the effectiveness of a purely diplomatic response.

The White House memo and the legislative response...On April 23, a day before the diplomatic cable, the director of the White House Office of Science and Technology Policy (OSTP), Michael Kratsios, issued a memo accusing entities “primarily based in China” of conducting “deliberate, industrial-scale campaigns” to extract American frontier models, according to the Financial Times and confirmed by Reuters. The document commits the Trump administration to sharing information on extraction tactics with American AI companies and exploring accountability measures.

In Congress, the Deterring American AI Model Theft Act bill, introduced on April 15 by Representative Bill Huizenga and registered on the official GovInfo portal under reference H.R. 8283, proposes the creation of a public list of entities that carry out model extraction attacks, making them eligible for sanctions and inclusion on restricted entity lists. The bill also creates a mechanism for the State Department to collaborate with private industry in sharing best practices and analyzing attacks. On April 16, the chairman of the House Special Committee on China, John Moolenaar, accused Chinese laboratories of resorting to "unauthorized distillation attacks" because they lack sufficient chips to develop models independently [unverified information, requires editorial confirmation].

OpenAI warned Congress in February 2026 about the use of obfuscated proxy accounts created to extract responses from ChatGPT. Anthropic published a report, covered by VentureBeat, that identified approximately 24,000 fraudulent accounts associated with three Chinese companies, but with very different volumes: MiniMax generated more than 13 million interactions with the chatbot Claude, Moonshot AI 3.4 million, and DeepSeek around 150,000. This asymmetry, which the report does not explain, weakens the narrative of a coordinated operation between the three entities.

Both companies have a direct commercial and reputational interest in discrediting Chinese competitors, which does not invalidate the evidence but requires independent scrutiny. To date, this scrutiny has not been carried out by any verifiable external body.

The spokesperson for the Chinese Ministry of Foreign Affairs, Guo Jiakun, described the accusations as "totally unfounded" and a "slanderous smear campaign against the successes of the Chinese artificial intelligence industry," in statements quoted by Notícias ao Minuto. The Chinese embassy in Washington urged Washington to “respect the facts, abandon its prejudices, and cease its policy of technological containment.” No international judicial or regulatory body has analyzed or confirmed the allegations made by the American companies.

The dispute comes less than three weeks before the summit between Donald Trump and Xi Jinping, scheduled for May 14 and 15 in Beijing, according to the BBC and the South China Morning Post. The distillation of AI models thus enters the agenda of a bilateral relationship already marked by restrictions on chip exports, tariffs, and disputes over intellectual property. The question the technology sector is asking is straightforward: will the administration impose sanctions, or will it use this dossier as a bargaining chip in Beijing?

AI model distillation, a process where a smaller "student" model is trained to replicate the behavior of a larger, more complex "teacher" model, presents significant risks, ranging from ethical and security issues to threats to intellectual property. While useful for optimization, the technique has been exploited for malicious purposes.
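For reference, the benign version of the technique trains the student to match the teacher's output distribution. The sketch below shows that objective, following Hinton et al.'s classic temperature-softened formulation, in a few lines. In the API-based attacks described above, the extractor sees only sampled answers rather than raw logits, but the principle, training the student to imitate the teacher, is the same.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T exposes more of the
    teacher's 'dark knowledge' about relative class similarities."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    as in the original knowledge-distillation formulation."""
    p = softmax(teacher_logits, T)    # soft targets from the teacher
    q = softmax(student_logits, T)    # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

# Toy check: a student that copies the teacher incurs ~zero loss.
t = [2.0, 0.5, -1.0]
print(distillation_loss(t, t))                 # ~0.0
print(distillation_loss(t, [0.1, 0.1, 0.1]))   # > 0, student must improve
```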

Key dangers of AI model distillation (below):

-Intellectual property theft (distillation attacks): Distillation can be used to "steal" advanced models, bypassing the high cost of research and development. This undermines the competitive advantage of companies that have invested significant resources.

-National security risks: Illegally distilled models can be used to bypass security protections. This can enable the development of offensive cyberattacks or facilitate the use of AI to create biological weapons.

-Replication of biases and errors: If the "teacher" model has biases or contains errors, the distilled "student" model will inherit and often amplify these flaws, generating unsafe or discriminatory results.

-Loss of control and reliability: Industrial-scale distilled models may lack the safeguards of the original models, resulting in unstable and unreliable systems.

-Geopolitical issues and unfair competition: There are reports of Chinese companies using intermediary accounts to distill US models (such as Claude and Gemini), generating tensions and investigations into technological espionage.

Threat context: It is difficult to completely prevent distillation, as the basic function of a large language model (LLM) is to answer questions. Companies are focusing on layered security measures, such as API rate limiting and detection of suspicious query patterns.
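Rate limiting of the kind mentioned above is commonly built on a token bucket, which permits short bursts while capping sustained throughput. A minimal sketch with assumed, illustrative limits; real providers layer this with per-account quotas and behavioral detection of query patterns:

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills `rate` tokens/sec, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, cost: int = 1) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # assumed: 5 req/s, bursts of 10
for i in range(12):
    print(i, bucket.allow())                # typically 10 True, then False
```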

by mundophone

Saturday, April 25, 2026


TAG HEUER


TAG Heuer Formula 1 Solargraph arrives in five pastel shades

TAG Heuer unveiled the pastel collection of the TAG Heuer Formula 1 Solargraph 38mm on April 21, 2026 in La Chaux-de-Fonds, Switzerland. The five new references expand the Formula 1 line with a distinctive color palette, combining the Calibre TH50-00 solar movement with cases in composite materials and sandblasted steel. Exclusive online availability begins on April 28, with general sales starting on May 1, 2026.

2025 was indeed a major year for TAG Heuer: not only did the brand make a triumphant return as the official timekeeper of Formula 1, it also rolled back the clock and reintroduced the Formula 1 Solargraph, an instant hit with TAG Heuer enthusiasts and collectors, and a watch that works as a springboard for new fans to enter the fold. A year on, the watch world had just about gotten used to the colourful F1 being back in the catalogue, so it was the perfect opportunity for TAG Heuer to stir things up once more. After plenty of stirring and playing with the Polylight colour options at the factory, the Formula 1 Solargraph is back for 2026, this time letting a pastel palette set the tone without softening the attitude.

The welcome, slightly beefier proportions of last year's model carry over to this new pastel collection, measuring 38mm in diameter and 9.9mm thick, with two variations available. Two models feature a sandblasted stainless steel case, and we'll get onto why in a moment, while the other three feature cases made from TAG Heuer's proprietary bio-polyamide plastic, Polylight.

These three are finished in pastel blue, beige, and pink, and feature case-matching rubber straps and bidirectional-rotating Polylight bezels. For the sake of durability, all of the F1 Solargraphs come equipped with screw-down crowns and casebacks, ensuring a 100-metre water resistance.

The TAG Heuer Formula 1 Solargraph is powered by the Calibre TH50-00, the brand's proprietary solar movement. Two minutes of exposure to direct sunlight is enough to power the watch for an entire day. A full charge, achieved after less than 40 hours of light exposure, ensures up to ten months of autonomy in total darkness.

After a complete stop, the watch resumes operation with just ten seconds of light exposure. The battery has an estimated lifespan of 15 years. TAG Heuer does not detail in the press release the technical specifications that differentiate the Calibre TH50-00 from solar movements from manufacturers such as Citizen or Seiko, which have been on the market for decades at significantly lower prices.

Launched in 1986, the Formula 1 line was the first to bear the TAG Heuer name, and it introduced composite material cases and bold colors in a sector dominated by conventional metals. The new pastel collection recovers this original chromatic spirit, reinterpreting it in soft tones: beige, pastel pink, pastel blue, pastel green, and lavender blue.

The choice of color palette is not merely aesthetic. By opting for colors that appeal to a younger audience and a female clientele, TAG Heuer is deliberately broadening the spectrum of potential buyers for the Formula 1 line, without abandoning the visual references that define it: the Mercedes pointer, the applied shields, and the bidirectional rotating bezel.

Five references, two price segments... The collection is divided into two distinct groups, with different materials, finishes, and prices.

The three references in TH Polylight cases, available in pastel blue, beige, and pastel pink with a matching rubber strap, are positioned in the entry-level segment of the collection. They are the most accessible and best evoke the original spirit of the 1980s Formula 1 line.

The two references in sandblasted steel, with a three-row bracelet and hour markers set with eight VS diamonds, elevate the design to a more formal level. The pastel green model and the lavender blue model with a pastel pink minute track combine luminosity and understated sophistication. 

The indicated production runs are global, according to the official TAG Heuer statement. The document does not specify distribution by market.

The exclusive online sale begins on April 28, 2026, on the official TAG Heuer website. General sales, in boutiques and authorized points of sale, begin on May 1, 2026. The reference prices by market are as follows: 1,850 to 2,650 CHF in Switzerland, 1,650 to 2,350 GBP in the United Kingdom and 1,950 to 2,800 USD in the United States.

The TAG Heuer Formula 1 Solargraph occupies a precise position in the brand's catalog: above the accessibility of the large-volume watchmaking groups, but below the barrier of the Carrera and Monaco segments. With prices between 1,950 and 2,800 dollars and limited production runs, TAG Heuer creates the conditions for controlled demand without compromising the perception of exclusivity. The integration of the Solargraph movement into a collection historically associated with color and youth culture is consistent with the brand's strategy. It remains to be seen whether this technology will be extended to other lines in the catalog in the medium term, or whether it will remain a distinctive element of the Formula 1 line.




LINUX


Ubuntu 26.04 LTS: Canonical sets a new security bar

Canonical released Ubuntu 26.04 LTS, codenamed “Resolute Raccoon”, on April 23, 2026, marking the 11th extended support edition of the most widely deployed Linux distribution in enterprise infrastructures and cloud services. The release establishes full disk encryption anchored in TPM, post-quantum cryptography by default, and system tools rewritten in Rust as base configurations, with security becoming the default system state, without the need for manual administrator intervention.

For years, Linux distributions treated security as a layer to be activated during installation and rarely revisited. With Ubuntu 26.04 LTS, Canonical changes this logic: the new Security Center application transforms system protections into an inspectable and manageable surface after deployment, allowing administrators to review the TPM encryption status, Secure Boot configuration, and recovery mechanisms without the need for re-imaging.

Jon Seager, Vice President of Ubuntu Engineering at Canonical, defined resilience and memory security as the structuring priorities of this version. The technical argument is straightforward: most critical vulnerabilities in system software originate from memory management errors, and migrating to Rust eliminates this category of flaws by design of the language itself.

As for GNOME features implemented specifically for Ubuntu 26.04, the left sidebar remains. The Ubuntu Dock, as it's called, now has an opaque background, dropping the previous translucent effect.

As always, there's also a new wallpaper package, which includes the default image referencing the codename of the new operating system version.

Another notable change is Showtime taking over as the system's default media player. The player has a minimalist look that makes it easier to use and tends to contribute to stability. Showtime was introduced in GNOME 49, but only now does it become the default in Ubuntu.

There are no other major visual changes; however, those with a more attentive eye will notice that folders are no longer predominantly gray and have returned to orange. Anyone who dislikes this change can alter the color scheme in the system settings.

Another new feature—or "not new"—is the removal of the Software & Updates tool, which allowed users to update software and resources like drivers, but is now considered obsolete and insecure by Ubuntu developers. It's still possible to use this utility, but only if you install it manually.

The App Center, the distribution's official software manager, now officially handles Debian packages (.deb), not just Snaps.

This may please users who prefer to work directly with .deb packages, following the traditional approach. Snaps, it's worth noting, are a Canonical implementation that has the advantage of including each app's dependencies, but can be heavier or have slower startup times, among other potential disadvantages.

Base tools rewritten in Rust... The most visible replacement falls on utilities present in virtually all Linux systems. sudo-rs and uutils coreutils, Rust implementations of tools like sudo, ls, cp, and mv, become the default options. The original implementations in GNU coreutils and classic sudo remain available as a compatibility fallback, preserving operational continuity in environments with established dependencies.

Rust ensures memory safety by design, which eliminates vulnerability classes such as buffer overflows and use-after-free bugs. For system administrators, the practical result is a reduced attack surface in the base utilities themselves, without altering the workflow. A relevant caveat: these implementations have a shorter production history than their classic C equivalents, justifying close monitoring during the initial production deployment cycles. [unverified data – requires editorial confirmation: comparative number of CVEs registered in sudo-rs versus classic sudo]

Post-quantum cryptography and hardware encryption...OpenSSH 10.2, included in this release, enables post-quantum hybrid key exchange mlkem768x25519-sha256 by default on all SSH connections, requiring no additional configuration. DSA support has been entirely removed, including DSA host key generation. Apache 2.4.66 disables TLS 1.0 and TLS 1.1 by default; Nginx 1.28.2 now only accepts TLS 1.2 and TLS 1.3, in accordance with RFC 8996, which deprecates older protocols.
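On the client side, the same TLS floor can be enforced explicitly. A minimal Python sketch using the standard library's ssl module against a placeholder host, refusing anything below TLS 1.2:

```python
import socket
import ssl

# Client context that rejects TLS 1.0/1.1, mirroring the server-side
# defaults described above (RFC 8996 deprecates both protocols).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# example.com is a placeholder; substitute the host you need to test.
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())   # e.g. 'TLSv1.3'
```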

Full disk encryption with TPM support has moved from experimental to general availability. The mechanism links encryption keys to the hardware's TPM chip and the Secure Boot state, making data extraction impossible without physical access to the original equipment. Canonical explicitly documents known incompatibilities, such as Absolute/Computrace, and kernel module requirements for certain storage configurations.

Ubuntu 26.04 LTS integrates full support, both in the host and guest systems, for AMD SEV-SNP and Intel TDX. These technologies allow running virtual machines with encrypted memory and processor-level integrity protection, being particularly relevant for public cloud providers, regulated industries, and artificial intelligence workloads with data sovereignty requirements.

In identity management, SSSD now runs as a dedicated user without privileges, abandoning execution as root. OpenLDAP operates in AppArmor application mode, with configurable PBKDF2 iteration control for password derivation. The version also introduces authd, an open-source authentication service that allows integrating Ubuntu systems with cloud identity providers, including Microsoft Entra ID and Google IAM, using OpenID Connect and supporting multifactor authentication.

Linux 7.0, GNOME 50, and the end of Xorg... This version includes the Linux 7.0 kernel and GNOME 50, completing the transition to Wayland as the only supported graphical environment in the LTS release. This is the first Ubuntu LTS without an Xorg session as an alternative, with support for per-monitor scaling, native gestures, and the elimination of screen tearing. For most users, the transition is seamless; in enterprise environments with legacy software or hardware without full Wayland support, compatibility should be evaluated before any migration.

Canonical Livepatch, the service for applying kernel patches without system restarts, extends to Arm64 servers for the first time. For organizations running Ubuntu on Arm64 hardware, critical kernel updates will now be applied without service interruption. The official repositories will also include NVIDIA CUDA and AMD ROCm, the two dominant ecosystems in artificial intelligence and machine learning computing.

A platform for the next ten years... Standard support for this LTS release extends until April 2031, with extended coverage until 2036 for Ubuntu Pro subscribers. LTS cycles are the foundation upon which enterprises, governments, and cloud providers build infrastructure for consecutive years, amplifying the impact of every design decision made in this release.

The combination of post-quantum cryptography, hardware-bound cryptography, and system utilities with native memory security positions Ubuntu 26.04 LTS as a platform built for today's threat landscape, from ransomware to state espionage. Large-scale production over the next five years will tell if the maturity of sudo-rs and uutils coreutils in a full LTS cycle lives up to the confidence Canonical has placed in them.

FAQ:

-What is Ubuntu 26.04 LTS “Resolute Raccoon”?

Ubuntu 26.04 LTS, codenamed “Resolute Raccoon”, is the 11th extended support version of Canonical's Linux distribution, released on April 23, 2026. It includes standard security support until 2031 and extended coverage until 2036 with Ubuntu Pro. It is distinguished by its focus on security by default, with TPM encryption, post-quantum cryptography, and system tools rewritten in Rust.

-How does full disk encryption with TPM work in Ubuntu 26.04 LTS?

The encryption links cryptographic keys to the hardware's TPM chip and the Secure Boot state. Data is only accessible on the original equipment, with the correct boot configuration. Management is done through the Security Center, allowing you to add PINs or verify Secure Boot without reinstalling the system, even after deployment.

-What is the impact of removing Xorg in Ubuntu 26.04 LTS?

With this release, Ubuntu no longer includes Xorg sessions as an alternative to Wayland. For most users, the transition is seamless. Enterprise environments with applications or hardware dependent on Xorg should assess compatibility before migrating, as there is no native way back within this LTS release.

mundophone

Friday, April 24, 2026

 

TECH


JEDEC LPDDR6 roadmap signals major shift to memory-centric computing

Remember when LPDDR memory was strictly meant for thin-and-light laptops and smartphones? Those days are officially over. While we originally saw JEDEC unleash the foundational LPDDR6 standard in July of last year to fuel faster mobile and AI devices, the standards group is already looking ahead to the next evolution. Today, JEDEC previewed a roadmap that completely reshapes LPDDR6, extending the standard heavily into datacenters and high-performance accelerated computing.

We've been tracking this memory's blistering potential for a while, from our initial look at the massive speed boosts revealed for next-gen DDR6 and LPDDR6 back in 2024, to Innosilicon shipping the first commercial LPDDR6 IP at an insane 14.4Gbps per pin back in January. This latest update isn't just about raw speed, though; instead, it's about fundamentally changing how your PC's memory handles data.

The newly planned features JEDEC announced include massive capacities up to 512GB density, a new narrower x6 interface, support for processing-in-memory (PIM) and the SOCAMM2 form factor, and a new flexible metadata carve-out.

Starting from the top, JEDEC expects to unlock staggering densities beyond the current maximums of LPDDR5 and LPDDR5X, targeting up to 512 GB. This massive scale-up is designed specifically to feed the ever-growing memory capacity requirements of AI training and inference workloads, of course. Considering those requirements are why you can't buy RAM at a reasonable price right now, that's a very good thing. To actually pull off those higher capacities, JEDEC is introducing a narrower per-die interface. Moving to a non-binary interface width (adding a new x6 sub-channel mode alongside x12, and the move from x16 to x24) allows manufacturers to cram more dies into a single package. This means higher memory capacities per component and per channel.

PIM is where the "memory-centric" part of our headline comes from. JEDEC is nearing the completion of an LPDDR6 Processing-in-Memory (PIM) standard. Essentially, by baking processing capabilities directly into the memory itself, this tech reduces the need to constantly shuttle data back and forth between the RAM and the CPU. The result is higher inference performance and much lower power consumption. This is pretty bleeding-edge stuff, but it's not completely novel; companies like Samsung and SK hynix have been talking about PIM for years now.
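A back-of-the-envelope model shows why this matters. The numbers below are assumed for illustration only, not JEDEC or vendor figures, but they capture the shape of the win: off-chip transfers dominate the cost, so shipping one result instead of a whole operand saves orders of magnitude.

```python
# Toy model of why processing-in-memory (PIM) saves bandwidth and energy.
# All constants are assumed, illustrative values.

VECTOR_BYTES = 64 * 1024**2          # 64 MiB operand living in DRAM
RESULT_BYTES = 8                     # a single 64-bit reduction result
PJ_PER_BYTE_OFF_CHIP = 10.0          # energy to move one byte over the bus
PJ_PER_BYTE_ON_DIE = 0.5             # energy to touch one byte inside the die

# Conventional path: ship the whole operand to the CPU, reduce it there.
conventional = VECTOR_BYTES * PJ_PER_BYTE_OFF_CHIP

# PIM path: reduce next to the memory cells, ship only the result.
pim = VECTOR_BYTES * PJ_PER_BYTE_ON_DIE + RESULT_BYTES * PJ_PER_BYTE_OFF_CHIP

print(f"conventional: {conventional / 1e6:.1f} uJ")
print(f"pim:          {pim / 1e6:.1f} uJ  ({conventional / pim:.0f}x less)")
```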

Also, JEDEC is actively developing an LPDDR6 SOCAMM2 module standard. This ensures the compact, serviceable module form factor has a clear upgrade path from today's LPDDR5X SOCAMM2 modules, which are currently used exclusively in massive datacenters and GPU clusters like NVIDIA's NVL72 racks. Hopefully it means that this form factor comes to the desktop as well, so we can keep getting socketed, upgradable memory without sacrificing LPDDR performance. Finally, another feature largely aimed at server farms: JEDEC is giving datacenters the option to balance their user capacity and metadata needs based on specific reliability requirements. The goal here is to implement these stability features while minimizing any hit to peak data throughput.

With rumors swirling that chips as early as AMD's upcoming Medusa Halo (expected to launch early next year) might leverage LPDDR6 for huge bandwidth gains, this JEDEC roadmap makes perfect sense. LPDDR is no longer just "low power" memory; rather, it is becoming a foundational building block for the next generation of high-capacity, insanely fast PCs and servers.

In the announcement, JEDEC highlights the new processing-in-memory (PIM) architecture, which allows the RAM chip itself to perform some calculations instead of constantly assigning the task to the processor. This avoids shuttling information back and forth across the board, increasing speed and significantly reducing energy consumption.

To achieve this, significant physical modifications were made to the hardware: the channel interface widens from x16 to x24, while the new, narrower x6 per-die option lets manufacturers fit more dies within the same package. The end result is memory with much greater capacity in a smaller space.

Furthermore, the design introduces the new, more compact SOCAMM2 module format, which facilitates maintenance and allows for quick replacement in machines already using the older-generation technology.

Plenty of space for heavy tasks... With all these adjustments, the new memory standard can reach an impressive 512 GB of capacity, a huge leap entirely focused on handling the mountain of data required by complex tasks today. Electronics manufacturers will also have more freedom to configure memory and find the perfect balance between speed and information security for each product.

JEDEC's Board of Directors Chairman, Mian Quddus, noted that the organization is still working to finalize the last technical details before publishing the official standard. The market now awaits the start of factory testing to see how all this evolution behaves in the real world, and the expectation is that the technology will arrive in the next generation of AI servers.

Other companies have already released news about the LPDDR6 standard, such as SK Hynix, which promises 33% more speed in cell phones. Samsung and Qualcomm are already working on the next generation, LPDDR6X with up to 1 TB for AI chips, and some rumors speak of 14.4 Gbps RAM.

Quddus also urged readers to “stay tuned for more details” as the subcommittee evaluates these features for final publication. We'll be keeping a close eye on this as LPDDR6 gets ready to take over the hardware space!

by mundophone
