Friday, May 1, 2026


DIGITAL LIFE


Study reveals playing League of Legends may improve brain function

Recent research suggests that one of the world's most popular games may be doing more than just entertaining. The observed effects have even caught the attention of the scientific community.

For a long time, video games were seen only as a pastime — or even as a harmful distraction. But this perception has been changing as science begins to investigate their effects on the brain. Amid this debate, a recent study analyzed one of the world's most popular games and found results that may surprise even the most skeptical.

League of Legends, one of the most influential titles of the last decade, was analyzed in a study conducted by scientists at the University of Electronic Science and Technology of China.

The study, published in the journal Brain Sciences, followed 68 students over five months. The goal was to understand how different types of games impact cognitive performance.

To do this, the researchers compared League of Legends players with participants who played Sanguosha, a popular turn-based card game in China. The difference in results was clear.

What changes in the brain of someone who plays...Participants who played League of Legends showed better performance in tasks that require simultaneous attention to multiple elements and quick decision-making.

This type of skill is essential in competitive games, where the player needs to monitor several objectives at the same time, react to unexpected changes, and make decisions in fractions of a second.

But the most relevant data came from neurological analyses. Electroencephalogram exams showed that players developed more efficient brain networks, suggesting an adaptation of the brain to this type of complex stimulus.

An effect that goes beyond game time...Another point that caught attention was the duration of the effects. Even after the gaming sessions ended, the cognitive benefits continued to be observed for weeks. About ten weeks later, the participants still outperformed the comparison group. This indicates that the impact is not just momentary, but can generate more lasting changes.

This result reinforces the idea that certain types of games not only stimulate the brain in the short term, but also contribute to the development of cognitive skills over time. League of Legends belongs to the MOBA (Multiplayer Online Battle Arena) genre, known for demanding a high level of strategy, coordination, and constant adaptation.

Unlike more predictable games, it places the player in dynamic scenarios, where each match is different from the previous one. This forces the brain to process information quickly, adjust strategies, and deal with multiple variables at the same time.

This highly complex environment seems to be one of the factors that explain the gains observed in the research.

Since its launch in 2009, League of Legends has not only established itself as one of the most popular games in the world, but has also helped shape the esports landscape.

With a massive global community, high-level competitions, and a cultural impact that transcends video games—including productions like Arcane—the game has become a modern entertainment phenomenon.

Now, with scientific evidence pointing to cognitive benefits, it is also taking on a different space: that of a potential tool for mental development.

This does not mean that all video games bring the same effects, nor that screen time should be ignored. But it indicates that, in certain contexts, playing games can be more than just fun—it can be training for the brain.

Several scientific studies indicate that playing video games can improve brain function, provided it is done in moderation. The positive impact occurs because many games act as a "training" for the brain, requiring rapid information processing and decision-making under pressure.

The main functions that benefit include:

-Attention and focus: Gamers tend to show greater efficiency in areas of the brain that control sustained and selective attention.

-Cognitive ability and IQ: Research with children has shown that those who play regularly may show gains in IQ tests and better performance in problem-solving tasks.

-Memory and flexibility: Games that require switching between tasks quickly (such as strategy or RPGs) help with cognitive flexibility and working memory.

-Motor coordination: Titles that demand precision improve fine motor skills and spatial perception.

-Visual Skills: Action games can enhance visual perception and the ability to identify details in complex environments.

The Role of Game Type: Not every game stimulates the brain in the same way. See how different genres impact the mind:

-Action/shooter (FPS): Focus on attention, quick reflexes, and sensory perception.

-Strategy (RTS): Excellent for executive functions, planning, and resource management.

-Puzzle: Stimulate logical reasoning and creativity.

Attention to Limits...Despite the benefits, excessive use can cause negative effects, such as addiction and an imbalance of neurotransmitters like dopamine, which can lead to procrastination and irritability. Experts recommend moderate use so that video games act as a complement to cognitive health, and not as an escape from reality.

mundophone


APPLE


iPhone 18: the future gadget may arrive with new Samsung OLED screens

If there's one rivalry that has defined the tech world in the last decade, it's undoubtedly the constant tug-of-war between Apple and Samsung. But, as you may have noticed if you follow this market, what happens on marketing screens and social media is quite different from what happens behind the scenes in the factories. When it comes to ultra-high-quality screens, the South Korean giant continues to dictate the rules of the game. And now, a new explosive rumor suggests that Apple may have to swallow its pride and hand over the total monopoly on the production of the future iPhone 18 screens to its biggest rival.

The relationship of dependence between these two brands when it comes to supplying image components is already long-standing. Samsung Display has always been the "queen" of OLED screens and has supplied panels for all Apple smartphones that have adopted this technology, from the launch of the revolutionary iPhone X to the current and highly sought-after iPhone 17 series.

However, Tim Cook and his operations management team's usual strategy is never to rely on a single supplier. Apple hates putting all its eggs in one basket, as this takes away its bargaining power and leaves the brand vulnerable to failures in the production chain. To mitigate this risk, the Cupertino company has made a colossal effort over the years to integrate other manufacturers into the process, sharing the gigantic screen orders with LG Display and the Chinese manufacturer BOE.

However, a very recent and detailed report, published by the prestigious South Korean newspaper The Korea Herald, indicates that the wind may be about to change direction drastically. Apple may be planning to acquire the screens for the entire future iPhone 18 line absolutely exclusively from Samsung. If this information is officially confirmed, it will be a historic milestone: it will mark the first time since the debut of the iPhone X that Apple has entrusted the entire supply of such a critical component to a single manufacturing partner.

A curved screen on all four sides to celebrate 20 years... Don't forget a very important detail that is pushing Apple to take more risks than usual: the iPhone 18 series will mark the long-awaited 20th anniversary of the original iPhone. The brand wants (and needs) to shine to mark the date, and a normal, flat, boring screen simply won't be enough to leave the world speechless.

According to industry sources, Apple demanded that Samsung develop an incredibly complex OLED panel. The request focuses on a screen that is fully curved on all four sides (creating an "infinity pool" illusion in your hands, with no visible aluminum bezel when you look at it from the front) and, above all, that does not use any type of polarizing layer.

The magic of COE technology and its major obstacles...But what does removing this polarizer mean for you in practical cell phone use? Traditionally, OLED screens use a polarizing layer to drastically reduce reflections from external light and improve image contrast. The major technical problem is that this plastic layer blocks a significant portion of the light emitted by the screen itself, forcing the phone to consume much more battery power to achieve high brightness levels.

To eliminate the need for this obstructive layer, Samsung will have to resort to a cutting-edge technology it already masters, called COE (Color Filter on Encapsulation). With this innovation working in its favor, the future iPhone 18 could be physically even thinner, have a brutally brighter screen that stays visible in summer sunlight, and, most importantly, save immense energy, considerably extending battery life with each charge cycle.
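To put a rough number on that idea, here is a quick back-of-the-envelope sketch in Python (our own illustration, with assumed figures rather than anything from Samsung or Apple). It assumes a conventional circular polarizer transmits somewhere around 40-50% of the light the OLED emits, a commonly cited ballpark that varies from panel to panel.

# Rough illustration with assumed numbers, not Samsung's or Apple's figures.
# If the polarizer only lets through a fraction T of the emitted light, the panel
# must be driven 1/T times harder to reach a given brightness; remove the layer
# and the same brightness needs only about T of the old drive power.
def relative_panel_power_without_polarizer(transmittance):
    """Drive power needed for equal brightness, relative to a polarized panel."""
    return transmittance

for T in (0.40, 0.45, 0.50):
    p = relative_panel_power_without_polarizer(T)
    print(f"Assumed transmittance {T:.0%} -> roughly {p:.0%} of the old panel power")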

The major obstacle to this idyllic vision is the cruel reality of assembly lines. Industry experts and market analysts remain quite skeptical about the deadlines imposed by Apple. They firmly believe that this type of highly advanced screen, which combines extreme quad curvature and the absence of a polarizer, is a genuine engineering nightmare to mass-produce with the low defect rates that the brand demands. We can only wait with great anticipation for the coming months to see if Samsung's elite engineering team will be able to pull off this industrial "miracle" in time for the grand anniversary of Apple's most iconic product.

Recycled design with good improvements in specifications...A new rumor indicates that Apple should keep the look of the iPhone 18 practically identical to its predecessor, focusing innovations on the inside of the devices. Apparently, we would see an increase in dimensions, especially in the Pro models, which would become thicker to possibly house larger batteries, in addition to debuting the brand's long-awaited 2nd generation 5G modem.

The news was shared on the Chinese network Weibo by leaker Fixed-focus digital cameras, who suggests that the overall appearance of the line planned for 2026 will remain unchanged, but the dimensions will grow. The informant does not mention exactly what differences we would see, nor which specific models would be affected, but the details complement other recent rumors that give a better context to the comment.

As far as is known, the main adjustments would be in thickness, which would increase especially in the Pro versions. The iPhone 18 Pro Max is described as the thickest and heaviest smartphone ever made by the company, a sacrifice that would have an important advantage: a leap in battery life.

It is speculated that the batteries would increase in size, with estimates suggesting they could reach the 5,200 mAh range in the eSIM versions. That doesn't seem like such a big gain over the iPhone 17 family, but the company's known optimizations, along with other internal adjustments, could make up the difference.

The main ones would be the improved efficiency of the future A20 and A20 Pro chips, as well as the arrival of the long-awaited Apple C2 5G modem. Combined, in addition to offering features such as improved satellite connectivity, these characteristics would ensure healthy advances in phone usage time.

mundophone

Thursday, April 30, 2026



TECH



Mythos AI triggers record number of patches and divides experts

Mythos AI, Anthropic's latest model, has identified, according to the company itself, thousands of unknown vulnerabilities in just seven weeks. The tool triggered a record volume of security patches and pressured governments and central banks to coordinate emergency responses. Launched in April 2026 to a select group of organizations, it exposes a growing tension between the speed of automated detection and the human capacity for response.

Microsoft was one of the first to feel the impact. The April edition of Patch Tuesday included fixes for 167 security flaws, a number that Adam Barnett, senior software engineer at Rapid7, described as "a new record." Barnett himself acknowledged that it was tempting to link this volume to the announcement of Project Glasswing the previous week, although without establishing a direct causal relationship.

Mozilla followed suit. Firefox 150 integrated fixes for 271 vulnerabilities detected with the support of Mythos AI, although only three were formally credited to the tool in Mozilla's official security note, according to The Register. Anthropic claims, in its official System Card, that the flaws found cover all major operating systems and browsers, some decades old.

Project Glasswing: controlled access, increasing pressure...The program was launched with eleven named partners (Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks), joined by more than 40 additional organizations responsible for critical software infrastructure. The more time partners have to fix flaws before the model is made widely available, the lower the risk of malicious exploitation.

Anthropic confirmed plans to extend access to European and UK banks. The European Central Bank is preparing to warn banks under its supervision about the risks of Mythos AI, according to Reuters, cited by the Business Standard. Unlike in the US, this consultation is taking place through the usual channels of dialogue with banking staff, without any extraordinary meetings with top management scheduled for now.

Banks and governments mobilized...In the US, Treasury Secretary Scott Bessent and Federal Reserve Chairman Jerome Powell met with banking executives on April 8th to encourage them to test their own systems with Mythos AI. Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley responded to the call and began internal testing, according to Bloomberg, cited by TechCrunch. Jamie Dimon, CEO of JPMorgan Chase, warned that Mythos AI exposes more vulnerabilities to potential cyberattacks, according to CNBC.

The mobilization went beyond the US. Indian Finance Minister Nirmala Sitharaman chaired a high-level meeting with bank directors, the Reserve Bank of India, the Ministry of Electronics and Information Technology, the NPCI, and CERT-In to assess the risks associated with Mythos AI, according to the Economic Times. Sitharaman urged banks to take preventative measures to protect their systems and customer data, tasking the Banking Association of India with coordinating the institutional response.

Christian Sewing, CEO of Deutsche Bank, told Bloomberg that the German banking sector does not see Mythos AI as an existential threat, although he acknowledges that its cybersecurity capabilities warrant heightened vigilance.

The dual-use dilemma...Palo Alto Networks warned that capabilities similar to Mythos AI will eventually be available outside the controlled perimeter of American companies with built-in safeguards. The risk the company points to is specific: threat actors with access to equivalent tools could create "unprecedented autonomous attack agents in the industry," a category of risk for which current defenses are unprepared.

Anthropic's cybersecurity assessment documents the offensive capabilities of the model and the rationale behind restricted access. Artificial intelligence is finding vulnerabilities at a faster rate than teams can fix them, and defenders are facing a race for which they are not yet equipped.

Anthropic has committed up to $100 million in usage credits to partners and $4 million in donations to open-source security organizations, including Alpha-Omega, the Open Source Security Foundation, and the Apache Software Foundation, according to the official Project Glasswing page. The company guarantees that the model will not be widely available until new safeguards are operational.

The release of the Mythos model (or Claude Mythos Preview) by Anthropic in April 2026 triggered a record volume of security fixes by automating the discovery of critical flaws. The model was able to identify thousands of zero-day (unknown) vulnerabilities in just seven weeks of testing, equivalent to about 30% of the world's annual production of such discoveries before the use of AI.

The Impact of Mythos on Cybersecurity...Mythos's differentiating factor is not only the volume of flaws found, but its autonomy and speed. It can perform complex analyses, chain multiple vulnerabilities, and generate functional exploits in minutes or hours, tasks that would take weeks for experienced human researchers.

Emblematic discoveries: Mythos identified a 27-year-old flaw in OpenBSD and a 16-year-old vulnerability in the FFmpeg video software, both ignored by decades of human audits and traditional automated tools.

Patch Wave: Anthropic formed the Project Glasswing consortium — including Microsoft, Google, Apple, and the Linux Foundation — to provide early access to the model. The goal is to allow these partners to patch their systems before the model (or similar capabilities) falls into malicious hands.

Patch Bottleneck: Experts warn that the speed of AI discovery has surpassed human patching capacity. This has created a "congestion" of updates, forcing companies to prioritize exploitable flaws instead of trying to patch the entire reported volume.

Why wasn't the model released to the public? Due to its high potential for offensive use (dual-use), Anthropic decided to keep Mythos as an internal research model, with no plans for general release. The company cited its Responsible Scaling Policy (RSP), indicating that the model has reached capability levels (ASL-3) that require extreme safeguards against the development of biological weapons and large-scale cyberattacks.
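The "patch bottleneck" described above, prioritizing exploitable flaws instead of trying to patch the entire reported volume, can be made concrete with a small triage sketch. Everything below is hypothetical: the fields, weights, and CVE identifiers are invented for illustration and are not part of Project Glasswing or any vendor's real tooling.

# Illustrative triage sketch only; fields, weights and IDs are made up.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float              # severity score, 0.0-10.0
    exploit_available: bool  # a working exploit is known to circulate
    internet_facing: bool    # the affected system is reachable from outside

def triage_score(f: Finding) -> float:
    """Higher score = patch sooner; exploitability weighs more than raw severity."""
    score = f.cvss
    if f.exploit_available:
        score += 5.0
    if f.internet_facing:
        score += 3.0
    return score

backlog = [
    Finding("CVE-0000-0001", cvss=9.8, exploit_available=False, internet_facing=False),
    Finding("CVE-0000-0002", cvss=7.5, exploit_available=True, internet_facing=True),
    Finding("CVE-0000-0003", cvss=5.3, exploit_available=True, internet_facing=False),
]

for finding in sorted(backlog, key=triage_score, reverse=True):
    print(finding.cve_id, round(triage_score(finding), 1))

In this toy example, the flaws with working exploits jump ahead of the critical-but-unexploited one, which is exactly the trade-off overwhelmed security teams are now being forced to make.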

Recommendations for users and businesses...With the acceleration of discoveries, the traditional model of "waiting for a vulnerability to fix" has become insufficient.

Automatic updates: Enable automatic updates on all devices, especially browsers and operating systems.

Digital hygiene: Use password managers and multi-factor authentication (MFA) to mitigate the impact if an account is compromised by a system flaw.

Data security: Companies should focus on protecting the data itself (data-centric security) and behavioral monitoring, assuming that breaches in the external perimeter will become increasingly common.

mundophone


TECH


Is there a way for Windows to win back users by putting performance ahead of AI?

Microsoft seems to have woken up to a reality that many of us already felt at our fingertips when opening the task manager: Windows has become a resource hog. If 2025 was the year that Artificial Intelligence (AI) was pushed into every corner of the operating system, 2026 is proving to be the year of the "general cleanup." Satya Nadella, the man at the helm of the Redmond giant, finally admitted that it is necessary to regain the trust of the average user, putting performance and stability ahead of the futuristic promises of Copilot.

It's no secret that the last year has been turbulent for those using Windows 11. Between updates that caused system errors and the forced integration of AI tools that not everyone asked for, the user experience has degraded. The "running before you can walk" strategy with Copilot has left the system heavy and, at times, confusing.

Now, the order coming from the top is clear: prioritize quality. Nadella acknowledged, during the presentation of the company's financial results, that the fundamental work to "win back the fans" has already begun. This implies a paradigm shift, where the focus is no longer just on what AI can do for you, but on how fast and fluid your computer can perform basic day-to-day tasks.

One of the most interesting promises of this new guideline is the direct focus on RAM consumption. You know that moment when your laptop starts to overheat and the fan sounds like a jet engine just because you have your browser and a document open? Microsoft wants to put an end to that, especially on devices with fewer resources.

-Core optimization: reducing the memory footprint of services running in the background.

-Efficiency on modest devices: improving performance on machines with 8GB of RAM or less, which suffered considerably with the latest versions of Windows 11.

-Strategic retreat: pausing the release of experimental AI features to ensure that core functions do not break.

-Ecosystem improvement: in addition to Windows, Bing, Edge, and Xbox are under the same microscope for performance optimization.

The K2 project and the restructuring of fundamentals... Beyond the words of a CEO to investors, there is technical evidence that this change is happening. Internal reports point to an effort dubbed "K2," which serves as a kind of foundation to rectify the structural problems of Windows 11. This plan focuses on the "fundamentals"—a technical term for boot speed, interface latency, and update reliability.

By admitting that the system needs work, Microsoft is validating the complaints of millions of people who felt that Windows had become less user-focused and more of a vehicle for advertising and subscription services. The idea now is to simplify, cleaning up code that doesn't add value and ensuring that the processor isn't being occupied with useless processes.

Real impact on your daily work...What can you expect from this change of direction in the coming months? If Nadella's promise materializes, your screen will no longer be a battleground against sudden slowdowns. Optimizing RAM consumption means that heavy applications will have more room to maneuver and that multitasking will feel fluid again.

The message from this recent conference is that Microsoft realized that having the world's smartest assistant is useless if the user feels like turning off the computer out of frustration with the system's slowness. Returning to basics may not be as flashy as a new generative language model, but it's exactly what the vast majority of us need to maintain smooth productivity. Let's hope this intention to better serve core users doesn't just remain on paper and translates into efficient code.

Windows is known for being a memory hog, but it has several native tools and simple tweaks that help free up RAM for what really matters.

Here are the most effective ways to reduce RAM consumption:

1. Disable startup applications...Many programs (like Spotify, Steam, or Teams) start running as soon as you turn on your PC, even if you're not going to use them.

How to do it: Press Ctrl + Shift + Esc to open Task Manager, go to the Startup tab and disable everything that is not essential.

2. Control browser consumption...Chrome and Edge are the biggest RAM culprits.

Use efficiency mode: In Edge, enable "Sleeping tabs," which suspends tabs you are not using. In Chrome, enable "Memory Saver" in the Performance settings.

Extensions: Each installed extension consumes some RAM. Remove the ones you don't use.

3. Adjust visual effects...Windows uses RAM to process animations, shadows, and transparencies.

How to: Search for "Adjust the appearance and performance of Windows" in the Start Menu. Select "Adjust for best performance" or manually uncheck items such as "Animate windows when minimizing and maximizing".

4. Close background processes...Some native Windows apps continue running unnecessarily.

How to: Go to Settings > Apps > Installed Apps. Click the three dots next to an app (such as Weather or Calculator), go to Advanced Options, and under "Background app permissions", select Never.

5. Clear the paging file (Virtual Memory)...Windows uses a portion of the hard drive/SSD as if it were RAM when real memory runs out. If this file is misconfigured, the system may become slow.

Generally, letting Windows manage it automatically is ideal, but restarting the PC clears this cache and often resolves memory hiccups. 

6. Check for malware...Some viruses mine cryptocurrencies or perform hidden processes that drain RAM. Run a full scan with Windows Defender.

Extra tip: If your computer has 4GB or 8GB of RAM, these tips help, but the system will always be at its limit by current standards. If RAM usage is high even with everything closed, it might be time to consider a physical upgrade.
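If you want to see where your RAM is actually going before (and after) applying these tips, a small Python script can help. This is a minimal sketch that assumes the third-party psutil package is installed (pip install psutil); it simply lists the heaviest processes by resident memory.

# Minimal sketch: list the processes using the most RAM.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

def top_memory_processes(count=10):
    """Return (name, MB) pairs for the heaviest processes by resident memory."""
    procs = []
    for proc in psutil.process_iter(["name", "memory_info"]):
        mem = proc.info.get("memory_info")
        name = proc.info.get("name") or "unknown"
        if mem is None:
            continue  # access denied for this process; skip it
        procs.append((name, mem.rss / (1024 ** 2)))
    return sorted(procs, key=lambda item: item[1], reverse=True)[:count]

if __name__ == "__main__":
    vm = psutil.virtual_memory()
    print(f"RAM in use: {vm.percent}% of {vm.total / (1024 ** 3):.1f} GB")
    for name, rss_mb in top_memory_processes():
        print(f"{name:<30} {rss_mb:8.1f} MB")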


mundophone

Wednesday, April 29, 2026


TECH


Why has the battle for control of AI left the planet, and who is already leading it?

While the world still looks to rockets and manned missions, a much quieter dispute is advancing in space — and could redefine who controls the most powerful technology of the century.

For decades, the space race was synonymous with exploration, flags, and historic achievements. Today, the scenario has changed almost invisibly. What is at stake is no longer going further, but processing faster. Instead of astronauts, servers take center stage. Instead of lunar bases, orbital data centers. And in the midst of this transformation, a recent movement is beginning to indicate that someone may have taken the lead.

The new space race doesn't happen in front of the cameras. It unfolds silently, driven by chips, algorithms, and digital infrastructures. The goal is no longer just to explore space — now it's to use it as a platform for something much more strategic: advanced computing.

In recent years, artificial intelligence has become one of the most resource-intensive technologies. Increasingly larger models demand energy, cooling, and processing capacity on a massive scale. This has put pressure on data centers on Earth, which already face physical, environmental, and economic limitations.

It is in this context that an idea that once seemed distant emerges: taking computing off the planet.

And while many countries are still discussing possibilities, one has already begun to implement it.

A game-changing step forward...Without much fanfare, a set of satellites with processing capacity has been placed in orbit with a clear objective: not only to transmit data, but to analyze it directly in space.

These systems function as true computing nodes, capable of running artificial intelligence models without depending on terrestrial infrastructure. This represents a profound change. Instead of sending information to Earth and waiting for processing, data can be analyzed almost instantly, in the very environment where it is captured.

More than a test, this is an operational base under development. And that makes a difference.

While Western projects still explore prototypes, point tests, and experimental initiatives, this approach bets on direct implementation. In practice, this means gaining real-world experience, solving problems before others, and, most importantly, occupying space—literally.

The motivation isn't just technological. It's also energetic.

Data centers consume enormous amounts of electricity and water for cooling. As AI grows, this consumption increases exponentially. In space, the scenario changes completely: there is abundant and virtually continuous solar energy, in addition to a naturally cold environment that facilitates heat dissipation.

This reduces costs, increases efficiency, and eliminates competition for terrestrial resources.

But there is another even more relevant factor: autonomy.

Processing data directly in orbit allows for faster responses and reduces dependence on terrestrial infrastructure. This is especially important for strategic applications, such as environmental monitoring, communications, and defense systems.

The real prize isn't technology...Ultimately, this race isn't just about innovation. It's about control.

Whoever masters the ability to process data in space will have an advantage in critical areas: surveillance, real-time analysis, strategic decision-making, and even military operations. It's not just about efficiency—it's about power. And that's why governments and large companies are investing heavily in this type of technology.

The digital infrastructure of the future may not be in servers scattered around the planet, but orbiting around it.

Building a supercomputer in space is not simple. The challenges are extreme: constant radiation, lack of maintenance, severe thermal variations, and the need for perfect operation for years.

Even so, experts believe that this reality is closer than it seems.

Perhaps the most impressive thing is that this new race has already begun—and most people haven't realized it yet.

While we continue to associate artificial intelligence with apps and screens, something much bigger is beginning to take shape above our heads.

It's not just machines.

It's systems that can redefine the technological balance of the planet.

The battle for control over AI has literally "gone off-planet" because orbital space offers the ideal environment for the massive physical requirements of next-generation artificial intelligence—specifically unlimited energy, natural cooling, and regulatory freedom. 

As AI models grow, they face "Earth-bound" bottlenecks like power grid strain, water scarcity for cooling, and local opposition to massive data centers. Moving these operations to space addresses these issues while placing them beyond the reach of traditional national laws. 

Why the battle shifted to space...The shift is driven by three primary factors that make Earth increasingly inhospitable for "frontier" AI development: 

Unlimited solar energy: Earth-based data centers already push local power grids to their limits, causing rising electricity costs for residents. In orbit, data centers have access to constant, high-intensity solar power 24/7 without atmospheric interference.

Thermal management (free cooling): AI hardware generates extreme heat. On Earth, this requires millions of gallons of water for cooling—a growing environmental concern. In the vacuum of space, the ambient temperature provides a natural "heat sink," significantly reducing the infrastructure needed to keep systems from melting.

The "legal void": There is a growing "crisis of control" as governments struggle to regulate AI's safety risks. Off-planet data centers operate in a "no-man's-land" where companies can bypass "NIMBY" (Not In My Backyard) protests and strict national safety or privacy laws. 

The geopolitical & corporate "space race"...The "battle" is no longer just about who has the best code, but who controls the extraterrestrial infrastructure supporting it:

-Corporate sovereignty: Leaders at OpenAI and Google are reportedly exploring "orbital data farms" to decouple their most advanced models from terrestrial constraints.

-Global dominance: Geopolitical rivals like the U.S. and China view AI as a "winner-take-all" race. Dominating the space-based compute layer ensures that their AI systems can operate at a scale—and with a level of autonomy—that Earth-bound rivals cannot match. 

-The "alien mind" perspective...Some experts and futurists suggest that a sufficiently advanced AI would naturally prefer space. Unlike biological life, AI is not tethered to Earth's biosphere; its only "food" is data and energy, both of which are more abundant and accessible in the cosmos. In this view, the "battle" is the beginning of a civilizational split where the most powerful intelligence eventually outgrows its "cradle" on Earth.

mundophone


TECH


Google Pixel 11: Why is Google still betting on the insufficient Tensor G6?

Google's silicon continues to be a rollercoaster of emotions for those who follow the smartphone market. While the Mountain View giant seems to have finally gotten the processor's "heart" right, the news coming out about the graphics component is a real bucket of cold water. The Pixel 11, which should reach our hands in 2026, promises to be an impressive productivity machine, but it risks being a "race car" with bicycle tires when it comes to visual processing. The Tensor G6 is shaping up to be a giant with feet of clay, and I'll explain why.

Let's start with the good news, because it exists and is substantial. According to the most recent leaks, Google will finally stop playing defensively when it comes to the CPU. The Tensor G6 should adopt Arm's new C1 core architecture, topped by the C1 Ultra. We're talking about a high-performance core capable of reaching an impressive 4.11 GHz. To give you an idea, this is the same kind of "muscle" you expect to find in MediaTek's Dimensity 9500, a processor that usually doesn't mess around.

Despite the positive evolution regarding the potential final processing power, the leak throws a "bucket of cold water" on those expecting a more powerful GPU.

This is because the listing reveals that the Google Tensor G6 will use the PowerVR C-Series CXTP-48-1536, which could repeat the poor performance in demanding games.

Although Google may implement an updated variant of this GPU, analysts point out that the Pixel 11 line will likely not be positioned as a high-performance family for demanding games, maintaining the brand's tradition of focusing on software optimization and intelligent features.

The configuration seems to shift to a 7-core architecture, focusing on thermal efficiency and raw power when it's really needed. It's a step forward that puts the Pixel 11 at a level of competitiveness that we've rarely seen in the Tensor line. If the G5 already promised improvements with the transition to TSMC manufacturing, the G6 wants to consolidate Google as a semiconductor manufacturer that doesn't just adapt old designs.

The shadow of PowerVR and the ghost of 2021...Now, the moment when the conversation gets uncomfortable: the GPU. While Apple and Qualcomm invest billions in graphics architectures that enable Ray Tracing games and console performance in your pocket, Google seems to want to recycle the past. Data indicates that the Tensor G6 will once again use the PowerVR CXTP-48-1536 GPU.

If that name means nothing to you, let me translate: it's an architecture that originally saw the light of day in 2021. Yes, you read that right. In a world where technology becomes obsolete in six months, Google plans to launch a flagship phone in 2026 with graphics technology from five years ago. This isn't just conservative; it's a decision that could condemn the phone's performance in heavy tasks, such as high-bitrate 4K video editing or next-generation games.

Outdated drivers and the technological bottleneck...The problem isn't just the physical hardware, but how it communicates with the software. The current Tensor G5 already suffers from drivers that seem to have been forgotten, lacking support for Vulkan 1.4. The scenario could be identical with the Tensor G6. Even if Google tries to "push" clock speeds (the so-called overclock), the technological base is old and inefficient.

-Graphics architecture: Based on Imagination Technologies from 2021.

-Software limitations: Lack of native support for newer graphics APIs.

-Consequences: Less fluidity in demanding games and greater heating when trying to compensate for the age of the hardware with raw power.

-Positive point: The change to a non-Samsung modem may finally solve network problems.

A smartphone for non-gamers? This strategy from Google leaves us with a clear, but somewhat bitter message. The Pixel 11 will most likely be the best smartphone on the market for artificial intelligence, computational photography, and daily productivity tasks, thanks to the new Arm C1 Ultra cores. However, if you are a mobile gaming enthusiast or expect your thousand-euro investment to last for many years with top-tier graphics performance, the news is not good.

Despite all the cutting-edge AI features that Pixel phones offer, Google's internal Tensor chips hold them back. The Tensor G5 marked a major leap by switching to TSMC's more advanced process, reducing power consumption. Still, its performance lags behind the competition, with the GPU standing out especially as a weak point. Unfortunately, it seems Google will do little to improve the Tensor G6's GPU performance in the Pixel 11.

Leaker Mystic Leaks, known for its accurate Pixel leaks, today released some information about the Pixel 11's Tensor G6. The chip will use PowerVR's CXTP-48-1536 GPU, released in 2021.

To make matters worse, the Tensor G5's GPU performance is hampered by outdated drivers. It lacks Vulkan 1.4 support, limiting its performance in games.

With the PowerVR GPU inside the Pixel 10 offering below-average performance, it's disappointing to see that Google may not do much to improve the situation this time. While higher clock speeds and other driver optimizations should help, they alone won't be enough to compensate for the older architecture.

There is at least some good news, however. The leak suggests that the Tensor G6 will use Arm's C1 Ultra core clocked at 4.11 GHz, 4x Arm C1 Pro cores running at 3.38 GHz, and 2x Arm C1 Pro cores running at 2.65 GHz.

These are Arm's latest CPU cores and should offer a significant leap in performance and efficiency over the Cortex-X4, Cortex-A725, and Cortex-A520 cores of the Tensor G5. As the name suggests, the C1 Ultra is Arm's most powerful CPU core. The Pixel will use it for intensive tasks that require power spikes. The MediaTek Dimensity 9500 uses the same Arm C1 cores.

It's not clear from the screenshot, but Google may switch to a 7-core CPU layout for the Tensor G6, using only one C1 Ultra core. For comparison, the Tensor G5 comes with an 8-core CPU.

At least from a CPU standpoint, the Pixel 11's Tensor G6 looks promising. It may not yet rival the Snapdragon 8 Elite Gen 5, but it should offer respectable performance for a 2026 flagship. GPU performance may be another story.

Google seems to believe its software can work miracles, but there are limits to what optimization can do when the silicon doesn't keep up. The Tensor G6 may be the brand's most balanced processor to date, but this insistence on an outdated GPU is a recurring mistake that leaves us wondering if Google will ever take graphics hardware as seriously as it takes its camera algorithms. In the end, you'll have an incredibly smart phone, but one that might stutter where its rivals glide effortlessly.

by mundophone

Tuesday, April 28, 2026


DIGITAL LIFE


Agentic AI threatens research funding system

In a new analysis, two UCL researchers argue that the present system used to allocate billions in research funding was designed for a world without AI agents and may no longer be fit for purpose.

In their Comment published in Nature, Professors Geraint Rees and James Wilsdon highlight how this new breed of AI tools could fundamentally upend how research is funded and provide recommendations as to how funders can adapt.

AI agents—also referred to as agentic AI—are more advanced capabilities built on large language models that don't just respond to a single prompt but pursue goals across multiple steps. They can search the web, read documents, write and execute code, call external services and more to deliver a specified goal.
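To make the "multiple steps toward a goal" idea concrete, here is a toy caricature of an agentic loop in Python. It is only a sketch: pick_next_action and the tools dictionary are placeholders standing in for an LLM call and real integrations such as web search or code execution, not any vendor's actual API.

# Toy caricature of an agentic loop; names and tools are placeholders, not a real API.
def run_agent(goal, tools, pick_next_action, max_steps=10):
    """Repeatedly choose a tool, run it, and feed the result back until finished."""
    history = []
    for _ in range(max_steps):
        action, argument = pick_next_action(goal, history)  # in practice, an LLM decides this
        if action == "finish":
            return argument                                  # the agent's final answer
        result = tools[action](argument)                     # e.g. web search, file read, code run
        history.append((action, argument, result))
    return None  # gave up after max_steps

# Trivial demo with a stub tool and a two-step "policy":
tools = {"search": lambda query: f"results for {query!r}"}

def pick_next_action(goal, history):
    return ("search", goal) if not history else ("finish", "summary of " + history[-1][2])

print(run_agent("find the grant criteria", tools, pick_next_action))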

When it comes to writing a grant proposal, AI agents can be trained on a researcher's publicly available body of work, grant criteria, and previously successfully funded grants to generate ideas and even fully formed applications. Because of this, a seemingly high-quality grant proposal can be created in a tiny fraction of the time it once took, with minimal effort.

This runs the risk of overwhelming funding agencies with huge volumes of high-quality submissions chasing a limited number of awards, forcing panels to make largely arbitrary choices about what or whom to fund.

Lead author Professor Geraint Rees, UCL Vice-Provost of Research, Innovation & Global Engagement, said, "Funding panels have always faced hard choices, but they could at least claim to be distinguishing excellent ideas from merely good ones. Agentic AI is making that claim increasingly hollow. Funders aren't facing a distant threat—the data suggest the system is already under strain. The good news is that better approaches exist, but the window to act is narrowing."

Additionally, new research carried out by Professors Rees and Wilsdon found that the number of grant applications has been increasing in recent years.

In a survey of hundreds of thousands of grant applications from 12 multidisciplinary research funders in six countries that are partners in the Research on Research Institute (RoRI), the funders reported an increase of 17% in application numbers between 2022 and 2024, growing to a 57% increase between 2022 and 2025.

This growth ranged from 14% for postdoctoral fellowship applications at the British Academy to 142% for EU Marie Skłodowska-Curie fellowships. There could be several explanations for some of these changes, but the researchers think that AI has played a significant part.
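A quick back-of-the-envelope calculation (ours, not the authors') shows how sharply the curve bends between those two headline figures: because both are measured against the same 2022 baseline, the implied single-year jump from 2024 to 2025 works out to roughly 34%.

# Back-of-the-envelope check, not a figure from the paper: both RoRI numbers are
# relative to 2022, so the 2024 -> 2025 jump is the ratio of the two indices.
growth_2022_to_2024 = 0.17   # +17% over 2022
growth_2022_to_2025 = 0.57   # +57% over 2022

jump_2024_to_2025 = (1 + growth_2022_to_2025) / (1 + growth_2022_to_2024) - 1
print(f"Implied 2024 -> 2025 increase: {jump_2024_to_2025:.0%}")  # about 34%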

Co-author Professor Wilsdon (UCL Science, Technology, Engineering and Public Policy and Executive Director of the Research on Research Institute), said, "These sharp increases in the volume of funding applications begin soon after the launch of ChatGPT, so it's likely that a significant portion of this increase is linked to the use of generative AI. This is just the product of earlier versions of large language models: the capabilities of newer agentic systems will drive volumes even higher in 2026.

"Meanwhile, peer reviewers will be using the same agentic tools to assess proposals—so we quickly reach a point where systems of grant funding and review will collapse, unless funders adopt new strategies for managing volume and demand, and for assessing quality."

However, the researchers caution against clamping down on the use of generative AI by applicants, which would likely be impossible to enforce and inadequate to the challenge at hand. Instead, they urge funders to deploy the power of agentic AI systems to reinvent the funding system, rather than to suppress their use.

This could include using AI to profile applicants from multiple perspectives, allowing funders to identify and compare candidates more completely than a funding panel. It could also include prioritizing and shortlisting applications by identifying candidates whose record is consistent with the claims in their application, or by using predictive heuristics that look for novelty and potential impact.

Researchers conclude that when developing these kinds of systems, care is required to avoid reinforcing many of the pitfalls that current funding systems face, such as concentrating resources on those who have already been successful. Transparency would be key to avoid exacerbating biases against early-career researchers, under-represented groups, less established or prestigious institutions, or interdisciplinary and emerging fields.

Agentic AI, or AI agents capable of planning and executing tasks autonomously, is posing a significant, near-term threat to the traditional research funding system. An April 2026 report in Nature by researchers at UCL and the Research on Research Institute (RoRI) argues that the current system of grants and peer review, designed for a world without such technology, risks collapse due to an unsustainable influx of AI-assisted, high-quality proposals.

Overwhelming application volumes: Research agencies are being flooded with proposals; RoRI found a 57% increase in applications between 2022 and 2025 across 12 funders.

Degradation of peer review: As both applicants and reviewers start using agentic AI to write and assess proposals, the system risks becoming a closed loop that evaluates how well agents mimic previously successful proposals rather than genuine scientific merit.

"Garbage In, garbage out" risks: If an agent's foundational assumptions are incorrect, entire research proposals could be flawed, yet disguised in high-quality, persuasive writing.

Systemic bias: Existing biases might be reinforced, with resources disproportionately concentrated on researchers who are already established, while early-career researchers and novel research fields potentially miss out.

Replacement of scientific training: The rigorous process of learning scientific reasoning could be replaced by prompting, turning future researchers into "prompt engineers" rather than independent thinkers. 

Impact on the funding landscape...The rapid rise of AI-generated grant writing threatens to make traditional funding panels, which were designed to differentiate good ideas from excellent ones, redundant or unable to identify truly transformative research. 

Massive rise in applications: Prestigious grants, such as the EU Marie Skłodowska-Curie fellowships, have seen increases of over 140% in applications in recent years.

Increased costs: The use of advanced agentic AI can also lead to higher operational costs for researchers, further complicating the funding landscape.

Need for new strategies: Rather than banning AI—which is likely impossible—researchers suggest funders must adopt AI-native methods to evaluate applications and track records. 

Potential solutions...Experts argue that the solution is not to fight the technology but to harness it, as outlined below:

AI-Powered assessment: Funders should use agents to profile applicants and compare candidates, analyzing their entire body of work.

Focus on track records: Shifting from evaluating detailed, long-term plans to evaluating the past performance and reputation of research teams.

Enhanced verification: Employing AI to verify that the proposed work is consistent with a researcher's past achievements.

Provided by University College London
