mundophone
Smartphone and Technology
Thursday, April 30, 2026
TECH
Is there a way for Windows to regain users' trust in an increasingly competitive market?
Microsoft seems to have woken up to a reality that many of us already felt at our fingertips when opening the task manager: Windows has become a resource hog. If 2025 was the year that Artificial Intelligence (AI) was pushed into every corner of the operating system, 2026 is proving to be the year of the "general cleanup." Satya Nadella, the man at the helm of the Redmond giant, finally admitted that it is necessary to regain the trust of the average user, putting performance and stability ahead of the futuristic promises of Copilot.
It's no secret that the last year has been turbulent for those using Windows 11. Between updates that caused system errors and the forced integration of AI tools that not everyone asked for, the user experience has degraded. The "running before you can walk" strategy with Copilot has left the system heavy and, at times, confusing.
Now, the order coming from the top is clear: prioritize quality. Nadella acknowledged, during the presentation of the company's financial results, that the fundamental work to "win back the fans" has already begun. This implies a paradigm shift, where the focus is no longer just on what AI can do for you, but on how fast and fluid your computer can perform basic day-to-day tasks.
One of the most interesting promises of this new guideline is the direct focus on RAM consumption. You know that moment when your laptop starts to overheat and the fan sounds like a jet engine just because you have your browser and a document open? Microsoft wants to put an end to that, especially on devices with fewer resources.
- Core optimization: Reducing the memory footprint of services running in the background.
- Efficiency on modest devices: Improving performance on machines with 8 GB of RAM or less, which suffered considerably with the latest versions of Windows 11.
- Strategic retreat: Pausing the release of experimental AI features to ensure that core functions do not break.
- Ecosystem improvement: Beyond Windows, Bing, Edge, and Xbox are under the same microscope for performance optimization.
The K2 project and the restructuring of fundamentals... Beyond the words of a CEO to investors, there is technical evidence that this change is happening. Internal reports point to an effort dubbed "K2," which serves as a kind of foundation to rectify the structural problems of Windows 11. This plan focuses on the "fundamentals"—a technical term for boot speed, interface latency, and update reliability.
By admitting that the system needs work, Microsoft is validating the complaints of millions of people who felt that Windows had become less user-focused and more of a vehicle for advertising and subscription services. The idea now is to simplify, cleaning up code that doesn't add value and ensuring that the processor isn't being occupied with useless processes.
Real impact on your daily work...What can you expect from this change of direction in the coming months? If Nadella's promise materializes, your desktop will no longer be a battleground of sudden slowdowns. Optimizing RAM consumption means that heavy applications will have more room to maneuver and that multitasking will feel smooth again.
The message from this recent conference is that Microsoft realized that having the world's smartest assistant is useless if the user feels like turning off the computer out of frustration with the system's slowness. Returning to basics may not be as flashy as a new generative language model, but it's exactly what the vast majority of us need to maintain smooth productivity. Let's hope this intention to better serve core users doesn't just remain on paper and translates into efficient code.
Windows is known for being a memory hog, but it has several native tools and simple tweaks that help free up RAM for what really matters.
Here are the most effective ways to reduce RAM consumption:
1. Disable startup applications...Many programs (like Spotify, Steam, or Teams) start running as soon as you turn on your PC, even if you're not going to use them.
How to do it: Press Ctrl + Shift + Esc to open Task Manager, go to the Startup apps tab, and disable everything that is not essential.
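If you prefer to audit these entries programmatically, here is a minimal sketch that reads the per-user startup list from the registry using Python's standard winreg module (Windows only; system-wide entries under HKEY_LOCAL_MACHINE and the Startup folder are not shown):

import winreg  # Windows-only module from Python's standard library

# Per-user startup entries live under this key; anything listed here is
# launched automatically when you sign in.
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    value_count = winreg.QueryInfoKey(key)[1]  # number of values in the key
    for i in range(value_count):
        name, command, _type = winreg.EnumValue(key, i)
        print(f"{name}: {command}")

Task Manager remains the safest place to actually disable an entry; the script only lists what is registered.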
2. Control browser consumption...Chrome and Edge are the biggest RAM culprits.
Use efficiency mode: In Edge, turn on sleeping tabs, which suspends tabs you are not using. In Chrome, enable "Memory Saver" under Performance settings.
Extensions: Each installed extension consumes some RAM. Remove the ones you don't use.
3. Adjust visual effects...Windows uses RAM to process animations, shadows, and transparencies.
How to: Search for "Adjust the appearance and performance of Windows" in the Start Menu. Select "Adjust for best performance" or manually uncheck items such as "Animate windows when minimizing and maximizing".
4. Close background processes...Some native Windows apps continue running unnecessarily.
How to: Go to Settings > Apps > Installed Apps. Click the three dots next to an app (such as Weather or Calculator), go to Advanced Options, and under "Background app permissions", select Never.
5. Manage the paging file (virtual memory)...Windows uses a portion of the hard drive/SSD as if it were RAM when physical memory runs out. If this file is misconfigured, the system may become slow.
Generally, letting Windows manage it automatically is ideal, but restarting the PC clears this cache and often resolves memory hiccups.
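To check how much of the page file is actually being used, a short sketch with the third-party psutil package (installed with pip install psutil) reports both physical RAM and page-file usage:

import psutil  # third-party package: pip install psutil

ram = psutil.virtual_memory()   # physical RAM statistics
swap = psutil.swap_memory()     # on Windows this reflects the page file

print(f"RAM: {ram.percent}% used "
      f"({ram.used / 2**30:.1f} of {ram.total / 2**30:.1f} GiB)")
print(f"Page file: {swap.percent}% used "
      f"({swap.used / 2**30:.1f} of {swap.total / 2**30:.1f} GiB)")

If the page file sits near 100% while plenty of RAM is free, the manual size settings are probably misconfigured.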
6. Check for malware...Some viruses mine cryptocurrencies or perform hidden processes that drain RAM. Run a full scan with Windows Defender.
Extra tip: If your computer has 4 GB or 8 GB of RAM, these tips help, but the system will always be at its limit by current standards. If RAM usage is high even with everything closed, it might be time to consider a physical upgrade.
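And if you want a quick, scriptable version of Task Manager's memory column to see what is actually eating RAM, here is a minimal sketch, again using the third-party psutil package:

import psutil  # third-party package: pip install psutil

# Collect (resident memory, PID, name) for every process we can inspect.
procs = []
for p in psutil.process_iter(["name", "memory_info"]):
    mem = p.info["memory_info"]
    if mem is None:
        continue  # access denied for this process; skip it
    procs.append((mem.rss, p.pid, p.info["name"] or "?"))

# Print the ten largest consumers, Task Manager style.
for rss, pid, name in sorted(procs, reverse=True)[:10]:
    print(f"{rss / 2**20:8.1f} MiB  PID {pid:>6}  {name}")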
mundophone
Wednesday, April 29, 2026
TECH

Why has the battle for control of AI left the planet, and who is already leading it?
While the world still looks to rockets and manned missions, a much quieter dispute is advancing in space — and could redefine who controls the most powerful technology of the century.
For decades, the space race was synonymous with exploration, flags, and historic achievements. Today, the scenario has changed almost invisibly. What is at stake is no longer going further, but processing faster. Instead of astronauts, servers take center stage. Instead of lunar bases, orbital data centers. And in the midst of this transformation, a recent movement is beginning to indicate that someone may have taken the lead.
The new space race doesn't happen in front of the cameras. It unfolds silently, driven by chips, algorithms, and digital infrastructures. The goal is no longer just to explore space — now it's to use it as a platform for something much more strategic: advanced computing.
In recent years, artificial intelligence has become one of the most resource-intensive technologies. Increasingly larger models demand energy, cooling, and processing capacity on a massive scale. This has put pressure on data centers on Earth, which already face physical, environmental, and economic limitations.
It is in this context that an idea that once seemed distant emerges: taking computing off the planet.
And while many countries are still discussing possibilities, one has already begun to implement it.
A game-changing step forward...Without much fanfare, a set of satellites with processing capacity has been placed in orbit with a clear objective: not only to transmit data, but to analyze it directly in space.
These systems function as true computing nodes, capable of running artificial intelligence models without depending on terrestrial infrastructure. This represents a profound change. Instead of sending information to Earth and waiting for processing, data can be analyzed almost instantly, in the very environment where it is captured.
More than a test, this is an operational base under development. And that makes a difference.
While Western projects still explore prototypes, point tests, and experimental initiatives, this approach bets on direct implementation. In practice, this means gaining real-world experience, solving problems before others, and, most importantly, occupying space—literally.
The motivation isn't just technological. It's also about energy.
Data centers consume enormous amounts of electricity and water for cooling. As AI grows, this consumption increases exponentially. In space, the scenario changes completely: there is abundant and virtually continuous solar energy, in addition to a naturally cold environment that facilitates heat dissipation.
This reduces costs, increases efficiency, and eliminates competition for terrestrial resources.
But there is another even more relevant factor: autonomy.
Processing data directly in orbit allows for faster responses and reduces dependence on terrestrial infrastructure. This is especially important for strategic applications, such as environmental monitoring, communications, and defense systems.
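A back-of-the-envelope sketch shows why on-board processing matters more for data volume than for raw signal delay. All the numbers below are illustrative assumptions, not figures from any real mission:

# Assumed figures for an imaging satellite (illustrative only): it captures
# far more raw data per day than its radio link can downlink during brief
# ground-station passes.
RAW_PER_DAY_GB = 1000        # assumed raw sensor output per day
DOWNLINK_MBPS = 500          # assumed downlink rate
PASS_MINUTES_PER_DAY = 40    # assumed total ground-station contact per day

downlink_gb_per_day = DOWNLINK_MBPS / 8 / 1000 * PASS_MINUTES_PER_DAY * 60
print(f"Raw data produced:   {RAW_PER_DAY_GB} GB/day")
print(f"Downlink capacity:   {downlink_gb_per_day:.0f} GB/day")   # ~150 GB
print(f"Backlog if sent raw: {RAW_PER_DAY_GB - downlink_gb_per_day:.0f} GB/day")
# Running the AI model in orbit sidesteps the backlog: only compact results
# (detections, alerts, summaries) have to fit through the radio link.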
The real prize isn't technology...Ultimately, this race isn't just about innovation. It's about control.
Whoever masters the ability to process data in space will have an advantage in critical areas: surveillance, real-time analysis, strategic decision-making, and even military operations. It's not just about efficiency—it's about power. And that's why governments and large companies are investing heavily in this type of technology.
The digital infrastructure of the future may not be in servers scattered around the planet, but orbiting around it.
Building a supercomputer in space is not simple. The challenges are extreme: constant radiation, lack of maintenance, severe thermal variations, and the need for perfect operation for years.
Even so, experts believe that this reality is closer than it seems.
Perhaps the most impressive thing is that this new race has already begun—and most people haven't realized it yet.
While we continue to associate artificial intelligence with apps and screens, something much bigger is beginning to take shape above our heads.
It's not just machines.
It's systems that can redefine the technological balance of the planet.
The battle for control over AI has literally "gone off-planet" because orbital space offers the ideal environment for the massive physical requirements of next-generation artificial intelligence—specifically unlimited energy, natural cooling, and regulatory freedom.
As AI models grow, they face "Earth-bound" bottlenecks like power grid strain, water scarcity for cooling, and local opposition to massive data centers. Moving these operations to space addresses these issues while placing them beyond the reach of traditional national laws.
Why the battle shifted to space...The shift is driven by three primary factors that make Earth increasingly inhospitable for "frontier" AI development:
Unlimited solar energy: Earth-based data centers already push local power grids to their limits, causing rising electricity costs for residents. In orbit, data centers have access to constant, high-intensity solar power 24/7 without atmospheric interference.
Thermal management (free cooling): AI hardware generates extreme heat. On Earth, dissipating it requires millions of gallons of water for cooling, a growing environmental concern. In space there is no air or water to carry heat away, but the cold background lets radiators dump heat directly (see the rough numbers after this list), reducing the water-hungry infrastructure needed to keep systems from melting.
The "legal void": There is a growing "crisis of control" as governments struggle to regulate AI's safety risks. Off-planet data centers operate in a "no-man's-land" where companies can bypass "NIMBY" (Not In My Backyard) protests and strict national safety or privacy laws.
The geopolitical & corporate "space race"...The "battle" is no longer just about who has the best code, but who controls the extraterrestrial infrastructure supporting it:
- Corporate sovereignty: Leaders at OpenAI and Google are reportedly exploring "orbital data farms" to decouple their most advanced models from terrestrial constraints.
- Global dominance: Geopolitical rivals like the U.S. and China view AI as a "winner-take-all" race. Dominating the space-based compute layer ensures that their AI systems can operate at a scale, and with a level of autonomy, that Earth-bound rivals cannot match.
- The "alien mind" perspective: Some experts and futurists suggest that a sufficiently advanced AI would naturally prefer space. Unlike biological life, AI is not tethered to Earth's biosphere; its only "food" is data and energy, both of which are more abundant and accessible in the cosmos. In this view, the "battle" is the beginning of a civilizational split where the most powerful intelligence eventually outgrows its "cradle" on Earth.
mundophone
TECH
Google Pixel 11: Why is Google still betting on an outdated GPU for the Tensor G6?
Google's silicon continues to be a rollercoaster of emotions for those who follow the smartphone market. While the Mountain View giant seems to have finally hit the "heart" of the processor, the news coming out about the graphics component is a real bucket of cold water. The Pixel 11, which should reach our hands in 2026, promises to be an impressive productivity machine, but it risks being a "race car" with bicycle tires when it comes to visual processing. The Tensor G6 is shaping up to be a giant with feet of clay, and I'll explain why.
Let's start with the good news, because it exists and is substantial. According to the most recent leaks, Google will finally stop playing defensively when it comes to the CPU. The Tensor G6 should adopt the new Arm architectures, known as C1 Ultra. We're talking about a high-performance core capable of reaching an impressive 4.11 GHz. To give you an idea, this is the same kind of "muscle" you expect to find in MediaTek's Dimensity 9500, a processor that usually doesn't mess around.
Despite the positive evolution regarding the potential final processing power, the leak throws a "bucket of cold water" on those expecting a more powerful GPU.
This is because the listing reveals that the Google Tensor G6 will use the PowerVR C-Series CXTP-48-1536, which could repeat the poor performance in demanding games.
Although Google may implement an updated variant of this GPU, analysts point out that the Pixel 11 line will likely not be positioned as a high-performance family for demanding games, maintaining the brand's tradition of focusing on software optimization and intelligent features.
The configuration seems to shift to a 7-core architecture, focusing on thermal efficiency and raw power when it's really needed. It's a step forward that puts the Pixel 11 at a level of competitiveness that we've rarely seen in the Tensor line. If the G5 already promised improvements with the transition to TSMC manufacturing, the G6 wants to consolidate Google as a semiconductor manufacturer that doesn't just adapt old designs.
The shadow of PowerVR and the ghost of 2021...Now, the moment when the conversation gets uncomfortable: the GPU. While Apple and Qualcomm invest billions in graphics architectures that enable Ray Tracing games and console performance in your pocket, Google seems to want to recycle the past. Data indicates that the Tensor G6 will once again use the PowerVR CXTP-48-1536 GPU.
If that name means nothing to you, let me translate: it's an architecture that originally saw the light of day in 2021. Yes, you read that right. In a world where technology becomes obsolete in six months, Google plans to launch a flagship phone in 2026 with graphics technology from five years ago. This isn't just conservative; it's a decision that could condemn the phone's performance in heavy tasks, such as high-bitrate 4K video editing or next-generation games.
Outdated drivers and the technological bottleneck...The problem isn't just the physical hardware, but how it communicates with the software. The current Tensor G5 already suffers from drivers that seem to have been forgotten, lacking support for Vulkan 1.4. The scenario could be identical with the Tensor G6. Even if Google tries to "push" clock speeds (the so-called overclock), the technological base is old and inefficient.
- Graphics architecture: Based on Imagination Technologies IP from 2021.
- Software limitations: Lack of native support for newer graphics APIs.
- Consequences: Less fluidity in demanding games and greater heating when trying to compensate for the age of the hardware with raw power.
- Positive point: The change to a non-Samsung modem may finally solve network problems.
A smartphone for non-gamers? This strategy from Google leaves us with a clear, but somewhat bitter message. The Pixel 11 will most likely be the best smartphone on the market for artificial intelligence, computational photography, and daily productivity tasks, thanks to the new Arm C1 Ultra cores. However, if you are a mobile gaming enthusiast or expect your thousand-euro investment to last for many years with top-tier graphics performance, the news is not good.
Despite all the cutting-edge AI features that Pixel phones offer, Google's internal Tensor chips hold them back. The Tensor G5 marked a major leap by switching to TSMC's more advanced process, reducing power consumption. Still, its performance lags behind the competition, with the GPU standing out especially as a weak point. Unfortunately, it seems Google will do little to improve the Tensor G6's GPU performance in the Pixel 11.
Leaker Mystic Leaks, known for its accurate Pixel leaks, today released some information about the Pixel 11's Tensor G6. The chip will use PowerVR's CXTP-48-1536 GPU, released in 2021.
To make matters worse, the Tensor G5's GPU performance is hampered by outdated drivers. It lacks Vulkan 1.4 support, limiting its performance in games.
With the PowerVR GPU inside the Pixel 10 offering below-average performance, it's disappointing to see that Google may not do much to improve the situation this time. While higher clock speeds and other driver optimizations should help, they alone won't be enough to compensate for the older architecture.
There is at least some good news, however. The leak suggests that the Tensor G6 will use one Arm C1 Ultra core clocked at 4.11 GHz, four Arm C1 Pro cores running at 3.38 GHz, and two Arm C1 Pro cores running at 2.65 GHz.
These are Arm's latest CPU cores and should offer a significant leap in performance and efficiency over the Cortex-X4, Cortex-A725, and Cortex-A520 cores of the Tensor G5. As the name suggests, the C1 Ultra is Arm's most powerful CPU core; the Pixel will use it for intensive tasks that require short bursts of peak power. The MediaTek Dimensity 9500 uses the same Arm C1 cores.
It's not clear from the screenshot, but Google may switch to a 7-core CPU layout for the Tensor G6, using only one C1 Ultra core. For comparison, the Tensor G5 comes with an 8-core CPU.
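As a crude way to visualize the rumored layout, the sketch below tallies the leaked G6 clusters. The cores-times-clock "aggregate" is a naive capacity proxy for comparing configurations, not a performance prediction:

# Rumored Tensor G6 CPU layout from the leak: (core, count, clock in GHz).
clusters = [
    ("Arm C1 Ultra", 1, 4.11),
    ("Arm C1 Pro",   4, 3.38),
    ("Arm C1 Pro",   2, 2.65),
]

total_cores = sum(count for _, count, _ in clusters)
aggregate = sum(count * ghz for _, count, ghz in clusters)

for name, count, ghz in clusters:
    print(f"{count} x {name:<12} @ {ghz:.2f} GHz")
print(f"Total: {total_cores} cores, {aggregate:.2f} GHz naive aggregate")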
At least from a CPU standpoint, the Pixel 11's Tensor G6 looks promising. It may not yet rival the Snapdragon 8 Elite Gen 5, but it should offer respectable performance for a 2026 flagship. GPU performance may be another story.
Google seems to believe its software can work miracles, but there are limits to what optimization can do when the silicon doesn't keep up. The Tensor G6 may be the brand's most balanced processor to date, but this insistence on an outdated GPU is a recurring mistake that leaves us wondering if Google will ever take graphics hardware as seriously as it takes its camera algorithms. In the end, you'll have an incredibly smart phone, but one that might stutter where its rivals glide effortlessly.
by mundophone
Tuesday, April 28, 2026
DIGITAL LIFE

Agentic AI threatens research funding system
In a new analysis, two UCL researchers argue that the present system used to allocate billions in research funding was designed for a world without AI agents and may no longer be fit for purpose.
In their Comment published in Nature, Professors Geraint Rees and James Wilsdon highlight how this new breed of AI tools could fundamentally upend how research is funded and provide recommendations as to how funders can adapt.
AI agents, also referred to as agentic AI, are advanced capabilities built on large language models that don't just respond to a single prompt but pursue goals across multiple steps. They can search the web, read documents, write and execute code, call external services, and more to deliver a specified goal.
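In code terms, what separates an agent from a single-prompt chatbot is the control loop: the model repeatedly picks a tool, observes the result, and continues until the goal is met or a budget runs out. The sketch below is a deliberately toy illustration of that loop; the planner is a stub standing in for a real LLM call, and the tools are fakes:

def search_web(query: str) -> str:
    """Stub tool: a real agent would call a search API here."""
    return f"(pretend search results for {query!r})"

def write_code(spec: str) -> str:
    """Stub tool: a real agent would generate and execute code here."""
    return f"# (pretend generated code for {spec!r})"

TOOLS = {"search_web": search_web, "write_code": write_code}

def fake_model(goal: str, history: list) -> tuple:
    """Stub planner standing in for an LLM: pick the next action."""
    if not history:
        return ("search_web", goal)
    if len(history) == 1:
        return ("write_code", goal)
    return ("finish", f"Draft produced for goal: {goal}")

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        action, arg = fake_model(goal, history)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)  # act, then feed the result back in
        history.append((action, observation))
    return "Stopped: step budget exhausted."

print(run_agent("summarize prior work on topic X and draft a proposal outline"))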
When it comes to writing a grant proposal, AI agents can be trained on a researcher's publicly available body of work, the grant criteria, and previously funded applications to generate ideas and even fully formed proposals. Because of this, a seemingly high-quality grant proposal can be created in a tiny fraction of the time it once took, with minimal effort.
This risks overwhelming funding agencies with huge volumes of high-quality submissions competing for a limited number of awards, forcing panels to make largely arbitrary choices about what or whom to fund.
Lead author Professor Geraint Rees, UCL Vice-Provost of Research, Innovation & Global Engagement, said, "Funding panels have always faced hard choices, but they could at least claim to be distinguishing excellent ideas from merely good ones. Agentic AI is making that claim increasingly hollow. Funders aren't facing a distant threat—the data suggest the system is already under strain. The good news is that better approaches exist, but the window to act is narrowing."
Additionally, new research carried out by Professors Rees and Wilsdon found that the number of grant applications has been increasing in recent years.
In a survey of hundreds of thousands of grant applications from 12 multidisciplinary research funders in six countries that are partners in the Research on Research Institute (RoRI), the funders reported an increase of 17% in application numbers between 2022 and 2024, growing to a 57% increase between 2022 and 2025.
This growth ranged from 14% for postdoctoral fellowship applications at the British Academy to 142% for EU Marie Skłodowska-Curie fellowships. There could be several explanations for some of these changes, but the researchers think that AI has played a significant part.
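A couple of lines of arithmetic make the acceleration behind those figures explicit. The yearly rates below are derived from the reported totals, not reported by RoRI themselves:

# Reported by the RoRI partner funders: +17% from 2022 to 2024,
# +57% from 2022 to 2025 (application volumes).
growth_2022_2024 = 1.17
growth_2022_2025 = 1.57

# Implied compound annual growth rates over two and three years.
cagr_2yr = growth_2022_2024 ** (1 / 2) - 1
cagr_3yr = growth_2022_2025 ** (1 / 3) - 1
print(f"2022-2024: {cagr_2yr:.1%} per year")   # ~8.2% per year
print(f"2022-2025: {cagr_3yr:.1%} per year")   # ~16.2% per year

# The implied jump in the final year alone:
print(f"2024-2025: {growth_2022_2025 / growth_2022_2024 - 1:.1%}")  # ~34%

In other words, the implied growth in the most recent year alone dwarfs the average of the two years before it, which is consistent with the timing argument below.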
Co-author Professor Wilsdon (UCL Science, Technology, Engineering and Public Policy and Executive Director of the Research on Research Institute), said, "These sharp increases in the volume of funding applications begin soon after the launch of ChatGPT, so it's likely that a significant portion of this increase is linked to the use of generative AI. This is just the product of earlier versions of large language models: the capabilities of newer agentic systems will drive volumes even higher in 2026.
"Meanwhile, peer reviewers will be using the same agentic tools to assess proposals—so we quickly reach a point where systems of grant funding and review will collapse, unless funders adopt new strategies for managing volume and demand, and for assessing quality."
However, the researchers caution against clamping down on the use of generative AI by applicants, which would likely be impossible to enforce and inadequate to the challenge at hand. Instead, they urge funders to deploy the power of agentic AI systems to reinvent the funding system, rather than to suppress their use.
This could include using AI to profile applicants from multiple perspectives, allowing funders to identify and compare candidates more completely than a funding panel. It could also include prioritizing and shortlisting applications by identifying candidates whose record is consistent with the claims in their application, or by using predictive heuristics that look for novelty and potential impact.
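As a deliberately simplistic illustration of the "record consistent with the claims" idea, the sketch below scores an application against an applicant's past abstracts using TF-IDF cosine similarity from scikit-learn. The texts are toy data, and a real funder-side system would use far richer models and many more signals; this only shows the shape of the check:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical past abstracts and a new application (toy data).
past_work = [
    "We study protein folding dynamics with coarse-grained simulation.",
    "Machine learning potentials for molecular dynamics of proteins.",
]
application = "We propose simulating protein folding with learned potentials."

# Fit a shared vocabulary, then compare the application to each past abstract.
vectorizer = TfidfVectorizer().fit(past_work + [application])
past_vectors = vectorizer.transform(past_work)
app_vector = vectorizer.transform([application])

score = cosine_similarity(app_vector, past_vectors).max()
print(f"Best match against track record: {score:.2f}")  # closer to 1 = more consistent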
Researchers conclude that when developing these kinds of systems, care is required to avoid reinforcing many of the pitfalls that current funding systems face, such as concentrating resources on those who have already been successful. Transparency would be key to avoid exacerbating biases against early-career researchers, under-represented groups, less established or prestigious institutions, or interdisciplinary and emerging fields.
Agentic AI, or AI agents capable of planning and executing tasks autonomously, poses a significant, near-term threat to the traditional research funding system. An April 2026 Comment in Nature by researchers at UCL and the Research on Research Institute (RoRI) argues that the current system of grants and peer review, designed for a world without such technology, risks collapse due to an unsustainable influx of AI-assisted, high-quality proposals.
Overwhelming application volumes: Research agencies are being flooded with proposals; RoRI found a 57% increase in applications between 2022 and 2025 across 12 funders.
Degradation of peer review: As both applicants and reviewers start using agentic AI to write and assess proposals, the system risks becoming a closed loop that evaluates how well agents mimic previously successful proposals rather than genuine scientific merit.
"Garbage In, garbage out" risks: If an agent's foundational assumptions are incorrect, entire research proposals could be flawed, yet disguised in high-quality, persuasive writing.
Systemic bias: Existing biases might be reinforced, with resources disproportionately concentrated on established researchers, while early-career researchers and novel research fields potentially miss out.
Replacement of scientific training: The rigorous process of learning scientific reasoning could be replaced by prompting, turning future researchers into "prompt engineers" rather than independent thinkers.
Impact on the funding landscape...The rapid rise of AI-generated grant writing threatens to make traditional funding panels, which were designed to differentiate good ideas from excellent ones, redundant or unable to identify truly transformative research.
Massive rise in applications: Prestigious grants, such as the EU Marie Skłodowska-Curie fellowships, have seen increases of over 140% in applications in recent years.
Increased costs: The use of advanced agentic AI can also lead to higher operational costs for researchers, further complicating the funding landscape.
Need for new strategies: Rather than banning AI—which is likely impossible—researchers suggest funders must adopt AI-native methods to evaluate applications and track records.
Potential solutions...Experts argue that the solution is not to fight the technology but to harness it (below):
AI-Powered assessment: Funders should use agents to profile applicants and compare candidates, analyzing their entire body of work.
Focus on track records: Shifting from evaluating detailed, long-term plans to evaluating the past performance and reputation of research teams.
Enhanced verification: Employing AI to verify that the proposed work is consistent with a researcher's past achievements.
Provided by University College London
APPLE

Apple wants to push “Ultra” to the price limit, and the competition will love it
Get ready, because Apple seems determined to push your bank account to the limit with a new luxury strategy. The watchword in Cupertino is “Ultra,” and according to the latest rumors circulating in tech circles, this brand will not just be a pompous name for what we already know. We are talking about a radical change in the hierarchy of Apple products, starting with the long-awaited foldable iPhone and extending to a MacBook that breaks one of Steve Jobs' biggest taboos: the touchscreen.
There has been much speculation about when Apple would enter the foldable market. Well, it seems that the moment is approaching, but forget the idea of an “iPhone 18 Fold.” The new leak confirms that the device will simply be called iPhone Ultra. This choice is not innocent. By decoupling the foldable from the annual numbering line (like the iPhone 18 Pro and Pro Max), the brand gains the freedom to launch new generations at its own pace, without the pressure of a mandatory renewal every September.
Although Apple's goal is to present this luxury machine at the big September event, alongside the iPhone 18 family, the reality of production may force a wait. It is very likely that the iPhone Ultra will reach your hands just a few weeks after the official launch, marking a new era of exclusivity.
If you thought the MacBook Pro was the top of the mountain, think again. Apple is preparing the MacBook Ultra, and this notebook promises to be a game-changer for two reasons that brand purists will feel immediately:
Touchscreen: For the first time in the history of the Mac line, you will be able to interact directly with your fingers on the panel. It's the end of the barrier that separated the iPad from the MacBook.
OLED technology: After years of relying on LCD and Mini-LED, the MacBook Ultra will be the first Mac to adopt an OLED screen, guaranteeing deep blacks and a level of contrast that until now was exclusive to high-end iPhones and iPads.
Initial forecasts pointed to a launch this year, but problems with RAM supply have pushed the debut to the first half of 2027. This delay suggests that the processor and the set of specifications will be so advanced that the production chain is still trying to keep up with the engineers' ambition.
As for the foldable iPhone Ultra itself: even with its thin folding structure, the camera setup seems to move in the opposite direction. The documents highlight a rather protruding rear module, capable of increasing the total thickness to approximately 13.9 mm. The expectation is that this space will be occupied by two 48 MP sensors, likely serving as the main and ultrawide cameras, maintaining the brand's focus on premium photography.
Another detail that draws attention is the front camera on the internal screen. For the first time, there is an indication of a hole-punch cutout in the display for the front camera, a common implementation on Android devices but unprecedented for Apple.
Positioned in the left corner of the screen when open, the element also suggests the absence of Face ID. In this context, the return of Touch ID is practically confirmed, positioned on the side as a simpler solution to enable the foldable design. Although it seems like a conservative decision, it can help reduce internal complexity, optimize space, and balance the price.
Finally, the documents indicate that Apple will market the product in two colors: black and white. With an announcement expected in September, the iPhone Fold (or iPhone Ultra) should share the spotlight with the upcoming iPhone 18 Pro line.
Until then, new details are likely to keep emerging, fueling anticipation for one of Apple's most awaited releases in recent years.
The era of John Ternus and the priority given to the iPad Ultra... With Tim Cook's departure from the company's helm increasingly imminent, John Ternus, seen as the natural successor to the CEO position, already seems to be making his mark. Under his leadership, the development of a foldable iPad Ultra has become one of Apple's top priorities.
This will not just be a larger tablet. It will predictably be the most expensive iPad ever made, positioned somewhere between elite entertainment and extreme productivity. The strategy is clear: create an “Ultra” category that sits above the “Pro,” justifying prices that will make the current MacBook Pro seem like an affordable deal.
This aggressive segmentation shows that Apple is no longer satisfied with dominating the premium market; it wants to create a super-luxury market. By adopting the Ultra nomenclature, the brand follows in the footsteps of what it has already done with the Apple Watch Ultra, where durability and exclusive features came with a significantly higher price.
For you, who use these devices daily, the choice will be more complex. The “Pro” will no longer be the best you can buy, becoming the balanced option for professionals. Those who want true innovation — whether in the foldable format, the Mac's touchscreen, or the most powerful processor on the market — will have to pay the “Ultra tax.” It remains to be seen whether the set of completely new features that Apple promises will be enough to convince users to make the leap to this new price level.
mundophone
Monday, April 27, 2026
SAMSUNG

Samsung's "ace in the hole" to face the future foldable iPhone
Samsung appears to be preparing a fierce counterattack in the foldable device market, and its secret weapon for 2026 is not just a new form factor, but the triumph of miniaturization. If you follow the market, you know that the South Korean brand has been pushing to make its flagship phones more elegant and less visually "heavy." The new Galaxy Z Wide Fold, which promises to be the great rival of the future foldable iPhone Ultra, will share with the Galaxy Z Fold 8 a technical innovation that draws attention: a drastically reduced front camera. This design change, which requires complex hardware engineering, suggests that Samsung is accelerating its pace to offer a completely clean and uninterrupted viewing experience.
The latest rumors indicate that Samsung wants to establish almost absolute parity between its two upcoming foldable flagships. Aside from how they open and the aspect ratio of their screens, the Galaxy Z Fold 8 and the Galaxy Z Wide Fold should be twins when it comes to internal specifications.
This technical “mirroring” strategy is curious. By all indications, the Galaxy Z Wide Fold is born with a very specific purpose: to curb Apple's entry into this segment. With Huawei gaining ground with the Pura X Max, Samsung cannot afford to have an “experimental” model with inferior cameras. Therefore, it decided to transfer the cutting-edge technology developed for the Fold 8 directly to the new wider-format model.
The physical reduction in the size of the front camera is not just a matter of aesthetics; it is a necessary intermediate step towards something much more ambitious. Users have long been asking for the end of the “punch-hole” (the small hole in the screen), and this extreme miniaturization indicates that Samsung is refining the components to eventually hide them completely under the pixel panel.
According to Ice Universe, the Galaxy Z Fold 8 Wide and Z Fold 8 will have the smallest selfie camera cutouts ever seen on Samsung phones.
The hole should measure just 2.5 mm, considerably smaller than the 3.7 mm front-camera hole on the Galaxy Z Fold 7's external screen.
This would be a different approach from the one adopted in the Galaxy S26 Ultra, which increased the size of the hole to widen the front camera's field of view. Shrinking the cutout this much raises some concerns about image quality, although at least Samsung is not resorting to a camera hidden under the display, which could be even worse.
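Because the area of a circular cutout scales with the square of its size, the reduction is larger than the raw millimeter figures suggest. A two-line check (the ratio holds whether both leaked figures are radii or both are diameters):

# Leaked cutout sizes: 2.5 mm (Z Fold 8 / Z Fold 8 Wide) vs 3.7 mm (Z Fold 7).
area_ratio = (2.5 / 3.7) ** 2
print(f"New cutout covers {area_ratio:.0%} of the old one's area")  # ~46%

So the new hole would claim less than half the screen area of the old one.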
We expect more details to be revealed in the coming weeks, as the Galaxy Z Fold 8 and Z Fold 8 Wide are expected to be announced in July 2026.
Possible specifications of the Galaxy Z Fold 8 Wide (below):
Internal screen: 7.6-inch Dynamic AMOLED 2X display with 120Hz refresh rate and QXGA+ resolution
Aspect ratio: possibly 4:3 or close to 16:10
Platform: Snapdragon 8 Elite Gen 5
12 GB or 16 GB of RAM
256 GB, 512 GB or 1 TB of internal storage
Nominal battery: 4,660 mAh (dual cell)
Estimated typical capacity: around 4,800 mAh
Cell configuration: 2,267 mAh + 2,393 mAh
5G connectivity, Wi-Fi 7, Bluetooth 5.4 and NFC
Android 16 running under One UI 9.0
What the smaller cutout means in practice (below):
Hardware miniaturization: New, denser sensors that occupy less physical space.
Sleeker design: A more discreet bezel and cutout area, increasing immersion.
Direct competition: Immediate response to Apple's plans for the iPhone 18 and 20 Pro.
Technological evolution: Continuation of the innovation work started with the Galaxy Z Fold 7.
Although the under-display camera (UDC) has existed in previous generations, image quality has been its Achilles' heel. By shrinking the traditional camera so significantly, Samsung manages to maintain the photographic quality you demand, while reducing the negative visual impact on the panel.
The race against Apple's plans for 2027... It's no secret that Apple has a "perfect" iPhone on the horizon for the brand's 20th anniversary in 2027. This model should abandon any type of notch, including the Dynamic Island, moving Face ID and the camera under the glass. However, reports from the supply chain suggest that the Cupertino giant is facing considerable technical challenges.
This is where Samsung sees its window of opportunity. By achieving a reduction in camera hardware as early as 2026 with the Fold 8 and Wide Fold, the brand is positioning itself a step ahead in practical execution. While Apple tries to solve panel transparency issues, you may have devices that, although still with a small notch, feature such a tiny bezel that the screen seems to float.
The decision to seriously invest in the processor and optics of these new models shows that Samsung has realized that foldable users no longer accept compromises. If you're going to pay a premium price, you want the best technology available, not a "stretched" version of a conventional phone. The Galaxy Z Wide Fold seems to be the definitive answer for those who want maximum usable area without sacrificing the sophistication that miniaturization allows. If this trend continues, the days of obvious notches and holes in our screens are numbered.
mundophone