Saturday, May 2, 2026

 

TECH


Toilet giant Toto may hold the key to ending the RAM shortage

Toto, the Japanese company best known for its heated bidet toilets, is suddenly looking less like a bathroom brand and more like an unlikely beneficiary of the global RAM shortage. In a year when memory has become scarce, expensive, and generally annoying to anyone trying to buy a computer, the company has a semiconductor materials business that is riding a surge in demand tied to memory-chip production.

We doubt that business school case studies will look back on this crisis and say, “And then the toilet people saved the day,” but then again, here we are. Apparently, Toto’s advanced ceramics division makes components used in NAND memory chips, including electrostatic chucks that help hold wafers steady during manufacturing. As AI data centers expand, demand for memory chips has tightened, and that has quietly boosted Toto’s industrial side.

Toto is mainly known for its advanced toilets and sanitary ceramics, but its expertise in ceramic production also applies to semiconductor manufacturing. Toto has been making electrostatic chucks (e-chucks), now indispensable in modern chipmaking, since the 1980s. According to Nikkei, operating profits from these products are expected to exceed $100 million this year.

In contemporary semiconductor manufacturing, electrostatic chucks (ESCs) hold a silicon wafer (or other substrate) securely in place using electrostatic forces rather than mechanical clamping or vacuum-based methods. ESCs are key components in many stages of chip production, including EUV lithography, plasma etching, chemical vapor deposition (CVD), physical vapor deposition (PVD), and other steps that require precise wafer positioning and minimal contamination.

E-chucks have traditionally been used for CVD, PVD, and plasma etching, but not for DUV lithography, because DUV steps are carried out in an ambient environment or immersion fluid, where a vacuum system under the wafer is enough to maintain wafer flatness and position. With EUV, things are different. EUV lithography operates at a very short 13.5nm wavelength and requires a high-vacuum environment to prevent absorption of EUV light, so chipmakers use e-chucks instead of vacuum chucks, which depend on a pressure differential that a high-vacuum chamber cannot provide. E-chucks also deliver a more uniform clamping force, reduce stress, minimize distortion, and improve overlay and critical dimension (CD) control.
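For rough intuition, here is a minimal sketch of the clamping pressure of an idealized Coulomb-type (parallel-plate) chuck; the numbers are illustrative assumptions, not Toto's specifications:

\[ P = \frac{\varepsilon_0 \varepsilon_r V^2}{2 d^2} \]

where \(\varepsilon_0\) is the vacuum permittivity, \(\varepsilon_r\) the relative permittivity of the ceramic dielectric, \(V\) the applied voltage, and \(d\) the dielectric thickness. Taking \(\varepsilon_r \approx 9\) (alumina), \(V \approx 2\,\text{kV}\), and \(d \approx 200\,\mu\text{m}\) gives \(P \approx 4\,\text{kPa}\): a few kilopascals of holding pressure spread uniformly across the wafer, which is why the ceramic must be flat, strong, and free of defects.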

It takes over 4,000 steps to process a wafer and make a chip. The number of EUV steps has been increasing in recent years, as has the number of steps requiring precise wafer positioning, so ESC usage has risen, driving Toto's revenues and profits. Ceramics used for e-chucks must be both strong and resistant to cracking, and Toto has developed materials with uniform properties by applying molding and firing expertise from its long history in toilet manufacturing. Competition in this field is growing, however: in e-chucks, Toto faces rivals such as Shinko Electric Industries, which has strong ties to chip-equipment makers, and Applied Materials.

To strengthen its position, Toto has invested heavily in manufacturing. In 2020, the company spent ¥11.8 billion to build a ceramics production facility in Oita, Japan, and between April 2020 and April 2024 it increased its ceramics production workforce by around 20%.

[Image: electrostatic chucks used in high-precision industrial processes]

Toto's ceramics business grew 34% year-over-year and accounted for 55% of Toto’s 53.8 billion yen, or about $343.5 million, in operating profit so far this year. The company expects the division to keep expanding, with roughly 27% growth projected next year. Moreover, the company says it will invest another 30 billion yen (about $192 million) over the next fiscal year to increase mass production and strengthen R&D.

For perspective, Toto is currently the world’s second-largest producer of electrostatic chucks (e-chucks) for NAND memory production. Stock markets have also taken notice. In January, Toto shares jumped as much as 11% after analysts highlighted the company’s chip-related prospects, calling out the potential for significant profit growth from the business. Investors who once would have lumped Toto in with slow-moving consumer and housing equipment firms are now being forced to look at it through a much more industrial, and much more semiconductor-shaped, lens.

In reality, Toto's increased contribution to memory production won't necessarily drive memory and gadget prices down, but who knows? For the company, though, the irony is almost too perfect: as the world runs short of memory, Toto remembers where the money is.

mundophone


TECH


Why delaying cell phone repair can increase the final bill

Delaying cell phone repairs while the device still works can result in a significantly higher financial burden in the medium term. The warning comes from iServices, a company that has analyzed a recurring pattern in the Brazilian market: seemingly minor failures that evolve into serious, expensive damage. Based on more than 184,000 interventions carried out in 2025, the company found that ignored wear on components such as the screen or battery ends up compromising the overall integrity of the device.

The tendency to ignore minor physical damage is based on the perception that the equipment continues to perform its basic functions. Bruno Borges, CEO of iServices, clarifies the rationale behind this consumer behavior:

When a device is still working, it is natural for the customer to choose to postpone the repair. The problem is that, in practice, this decision can be more expensive. A degraded battery, a broken screen, or an unstable charging port are problems that, when ignored, can compromise other components of the equipment.

Technical evidence, corroborated by a reliability analysis published in the Journal of Cleaner Production, highlights that the lifespan of these devices can be extended through timely repair measures.

A screen with small cracks, for example, no longer keeps the device sealed, allowing moisture or dust particles to enter and oxidize internal circuits. Delaying the repair of a phone with a degraded battery is another technical risk: a battery that no longer holds a stable charge causes overheating, which impairs processor performance and can deform other components.

The new regulatory framework of the European Union...European law now addresses the need to extend the durability of digital products. Since June 20, 2025, the European Commission has been applying new ecodesign and energy labeling rules for cell phones and tablets. These standards require manufacturers to guarantee the availability of spare parts and facilitate access to technical information. According to the European Environment Agency, monitoring product lifecycle trends is crucial for a functional circular economy.

Sustainability and the cost of the digital footprint...Timely maintenance is a decision that benefits both personal finances and environmental balance. With high-end equipment costing upwards of 1,500 euros, replacing specific components is the most rational option. A study by ADEME and Arcep on the environmental footprint of digital technology indicates that the production phase accounts for approximately 80% of a device's total impact.

The European Environmental Bureau (EEB) estimates that adding just one year to the lifespan of mobile phones in the European Union would prevent the emission of 4 million tons of carbon dioxide per year by 2030. Keeping a device longer reduces the need for premature replacement and avoids the unnecessary extraction of raw materials.

How to act in the face of specific damage...Until the device reaches technical assistance, a few precautions can keep the damage from worsening:

-Broken screens: If the glass has cracks, avoid direct exposure to sunlight or environments with high humidity (such as bathrooms), as protection against external elements is compromised.

-Swollen batteries: If you notice a deformation in the back structure or the screen lifting, turn off the phone immediately. Do not attempt to charge the device, as there is a risk of ignition or explosion due to chemical instability.

-Liquid damage: If the phone comes into contact with water, turn it off and do not attempt to charge it. The use of external heat sources (hair dryers) or home methods (rice) is usually ineffective and can accelerate internal corrosion.

The impact on warranty and resale value...With high-end phones potentially exceeding €1,500, preserving market value is a rational economic decision. Continued use of a damaged device may void the manufacturer's warranty, as user negligence in the face of visible damage is often invoked to refuse coverage for secondary faults.

The European Union has reinforced this paradigm with the ecodesign rules in effect since June 2025, which require manufacturers to ensure the availability of parts and facilitate access to technical information. Extending the lifespan of equipment has gone from an individual choice to a cornerstone of family savings and the circular economy.

Signs that require immediate diagnosis...The following table systematizes the indicators that justify a professional evaluation:

Wear Indicator -- Risk of Worsening -- Cost Consequence

Partially cracked screen -- Liquid and dust infiltration -- Total replacement of internal components

Overheating battery -- Processing instability -- Irreversible damage to the motherboard

Intermittent charging -- Accelerated battery wear -- Multiple circuit repairs

Slowness or restarts -- Critical hardware fatigue -- Total equipment replacement

Avoiding the postponement of cell phone repairs is the most effective strategy for preserving the device's functionality. The convergence of high hardware prices and European regulations protecting the right to repair creates a new context for the consumer. In the current scenario, proactive maintenance ceases to be an accessory expense and becomes an investment in technological durability and personal data security.

Delaying cell phone repair—particularly for a cracked screen or minor charging issue—often increases the final bill because minor damage creates a "domino effect," allowing moisture, dust, and pressure to degrade internal components over time. What might start as a simple $100 screen repair can escalate into a $500+ motherboard replacement if the delay causes the phone to stop functioning entirely.

Here is why delaying repairs significantly increases the final cost:

1. The snowball effect of damage:

-Screen cracks spread: Minor cracks rarely stay small. Daily use (typing, scrolling) combined with temperature changes causes cracks to branch out, often damaging the touch digitizer beneath, resulting in a full display assembly replacement rather than just a top glass fix.

-Internal exposure: A cracked screen or broken casing loses its protective seal. This allows dust, debris, and moisture to enter the phone, which can lead to corrosion on the motherboard—one of the most expensive parts to repair.

-Water damage escalation: Even if a phone is water-resistant, a cracked screen or broken port voids that protection. A quick, cheap drying/cleaning repair can become a complete, high-cost overhaul if corrosion spreads to the motherboard.

2. Component failure spreads:

-Charging port damage: A loose charging port can destroy the motherboard if not addressed early, turning a small, cheap fix into a massive, expensive repair.

-Battery degradation: A broken, exposed screen can allow glass shards to reach the battery, damaging it and leading to swelling, which can then destroy other internal parts.

3. Impact on functionality and security:

-"Ghost touches": A cracked screen can cause the phone to register phantom touches, leading to failed password attempts and potentially locking you out of your device, which may require a total factory reset and loss of data.

-Reduced resale value: A phone with a cracked screen loses 30-50% of its value immediately. Delaying repair can make the phone unusable, reducing its resale or trade-in value to zero.

4. Other hidden costs:

-Higher labor/diagnosis costs: A phone with multiple issues from neglect requires more intense diagnostic time from technicians, increasing the labor fee.

-Lost productivity: If the phone finally dies entirely, the cost of an urgent, last-minute repair or a brand-new replacement device is far higher than addressing the issue early.

by mundophone

Friday, May 1, 2026


DIGITAL LIFE


Study reveals playing League of Legends may improve brain function

Recent research suggests that one of the world's most popular games may be doing more than just entertaining. The observed effects have even caught the attention of the scientific community.

For a long time, video games were seen only as a pastime — or even as a harmful distraction. But this perception has been changing as science begins to investigate their effects on the brain. Amid this debate, a recent study analyzed one of the world's most popular games and found results that may surprise even the most skeptical.

League of Legends, one of the most influential titles of the last decade, was analyzed in a study conducted by scientists at the University of Electronic Science and Technology of China.

The study, published in the journal Brain Sciences, followed 68 students over five months. The goal was to understand how different types of games impact cognitive performance.

To do this, the researchers compared League of Legends players with participants who played Sanguosha, a popular turn-based card game in China. The difference in results was clear.

What changes in the brain of someone who plays...Participants who played League of Legends showed better performance in tasks that require simultaneous attention to multiple elements and quick decision-making.

This type of skill is essential in competitive games, where the player needs to monitor several objectives at the same time, react to unexpected changes, and make decisions in fractions of a second.

But the most relevant data came from neurological analyses. Electroencephalogram exams showed that players developed more efficient brain networks, suggesting an adaptation of the brain to this type of complex stimulus.

An effect that goes beyond game time...Another point that caught attention was the duration of the effects. Even after the end of the gaming sessions, the cognitive benefits continued to be observed for weeks. About ten weeks later, the participants still showed improvements compared to the comparison group. This indicates that the impact is not just momentary, but can generate more lasting changes.

This result reinforces the idea that certain types of games not only stimulate the brain in the short term, but also contribute to the development of cognitive skills over time. League of Legends belongs to the MOBA (Multiplayer Online Battle Arena) genre, known for demanding a high level of strategy, coordination, and constant adaptation.

Unlike more predictable games, it places the player in dynamic scenarios, where each match is different from the previous one. This forces the brain to process information quickly, adjust strategies, and deal with multiple variables at the same time.

This highly complex environment seems to be one of the factors that explain the gains observed in the research.

Since its launch in 2009, League of Legends has not only established itself as one of the most popular games in the world, but has also helped shape the esports landscape.

With a massive global community, high-level competitions, and a cultural impact that transcends video games—including productions like Arcane—the game has become a modern entertainment phenomenon.

Now, with scientific evidence pointing to cognitive benefits, it is also taking on a different space: that of a potential tool for mental development.

This does not mean that all video games bring the same effects, nor that screen time should be ignored. But it indicates that, in certain contexts, playing games can be more than just fun—it can be training for the brain.

Several scientific studies indicate that playing video games can improve brain function, provided it is done in moderation. The positive impact occurs because many games act as a "training" for the brain, requiring rapid information processing and decision-making under pressure.

The main functions that benefit include:

-Attention and focus: Gamers tend to show greater efficiency in areas of the brain that control sustained and selective attention.

-Cognitive ability and IQ: Research with children has shown that those who play regularly may show gains in IQ tests and better performance in problem-solving tasks.

-Memory and flexibility: Games that require switching between tasks quickly (such as strategy or RPGs) help with cognitive flexibility and working memory.

-Motor coordination: Titles that demand precision improve fine motor skills and spatial perception.

-Visual Skills: Action games can enhance visual perception and the ability to identify details in complex environments.

The Role of Game Type: Not every game stimulates the brain in the same way. See how different genres impact the mind:

-Action/shooter (FPS): Focus on attention, quick reflexes, and sensory perception.

-Strategy (RTS): Excellent for executive functions, planning, and resource management.

-Puzzle: Stimulate logical reasoning and creativity.

Attention to Limits...Despite the benefits, excessive use can cause negative effects, such as addiction and an imbalance of neurotransmitters like dopamine, which can lead to procrastination and irritability. Experts recommend moderate use so that video games act as a complement to cognitive health, and not as an escape from reality.

mundophone


APPLE


iPhone 18: the future gadget may arrive with new Samsung OLED screens

If there's one rivalry that has defined the tech world in the last decade, it's undoubtedly the constant tug-of-war between Apple and Samsung. But, as you may have noticed if you follow this market, what happens on marketing screens and social media is quite different from what happens behind the scenes in the factories. When it comes to ultra-high-quality screens, the South Korean giant continues to dictate the rules of the game. And now, a new explosive rumor suggests that Apple may have to swallow its pride and hand over the total monopoly on the production of the future iPhone 18 screens to its biggest rival.

The relationship of dependence between these two brands when it comes to supplying image components is already long-standing. Samsung Display has always been the "queen" of OLED screens and has supplied panels for all Apple smartphones that have adopted this technology, from the launch of the revolutionary iPhone X to the current and highly sought-after iPhone 17 series.

However, Tim Cook and his operations management team's usual strategy is never to rely on a single supplier. Apple hates putting all its eggs in one basket, as this takes away its bargaining power and leaves the brand vulnerable to failures in the production chain. To mitigate this risk, the Cupertino company has made a colossal effort over the years to integrate other manufacturers into the process, sharing the gigantic screen orders with LG Display and the Chinese manufacturer BOE.

However, a very recent and detailed report, published by the prestigious South Korean newspaper The Korea Herald, indicates that the wind may be about to change direction drastically. Apple may be planning to acquire the screens for the entire future iPhone 18 line exclusively from Samsung. If this information is confirmed, it will be a historic milestone: the first time since the debut of the iPhone X that Apple has entrusted the entire supply of such a critical component to a single manufacturing partner.

A curved screen on all four sides to celebrate 20 years... Don't forget a very important detail that is pushing Apple to take more risks than usual: the launch of the iPhone 18 series will mark the long-awaited 20th anniversary of the launch of the original iPhone. The brand wants (and needs) to shine to mark the date, and a normal, flat, boring screen simply won't be enough to leave the world speechless.

According to industry sources, Apple demanded that Samsung develop an incredibly complex OLED panel. The request focuses on a screen that is fully curved on all four sides (creating an "infinity pool" illusion in your hands, with no visible aluminum bezel when you look at it from the front) and, above all, that does not use any type of polarizing layer.

The magic of COE technology and its major obstacles...But what does removing this polarizer mean for you in practical cell phone use? Traditionally, OLED screens use a polarizing layer to drastically reduce reflections from external light and improve image contrast. The major technical problem is that this plastic layer blocks a significant portion of the light emitted by the screen itself, forcing the phone to consume much more battery power to achieve high brightness levels.
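For rough intuition (with illustrative numbers that are assumptions, not Samsung's specifications): a circular polarizer typically transmits well under half of the light the OLED emits. If the optical stack transmits a fraction \(t \approx 0.45\), the panel must emit \(L/t\) to deliver on-screen luminance \(L\), so removing the polarizer scales the required emission, and roughly the display drive power, by

\[ \frac{P_{\text{COE}}}{P_{\text{polarizer}}} \approx t \approx 0.45, \]

i.e., on the order of half the power for the same perceived brightness, provided the color-filter-on-encapsulation layer controls reflections comparably well.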

To eliminate the need for this obstructive layer, Samsung will have to resort to a cutting-edge technology it already masters, called COE (Color Filter on Encapsulation). With this innovation working in its favor, the future iPhone 18 could be physically even thinner, have a dramatically brighter screen that stays visible in summer sunlight, and, most importantly, save immense energy, considerably extending battery life with each charge cycle.

The major obstacle to this idyllic vision is the cruel reality of assembly lines. Industry experts and market analysts remain quite skeptical about the deadlines imposed by Apple. They firmly believe that this type of highly advanced screen, which combines extreme quad curvature and the absence of a polarizer, is a genuine engineering nightmare to mass-produce with the low defect rates that the brand demands. We can only wait with great anticipation for the coming months to see if Samsung's elite engineering team will be able to pull off this industrial "miracle" in time for the grand anniversary of Apple's most iconic product.

Recycled design with good improvements in specifications...A new rumor indicates that Apple should keep the look of the iPhone 18 practically identical to its predecessor, focusing innovations on the inside of the devices. Apparently, we would see an increase in dimensions, especially in the Pro models, which would become thicker to possibly house larger batteries, in addition to debuting the brand's long-awaited 2nd generation 5G modem.

The news was shared on the Chinese network Weibo by the leaker Fixed Focus Digital, who suggests that the overall appearance of the line planned for 2026 will remain unchanged, though the dimensions will grow. The informant does not specify exactly what differences we would see, nor which specific models would be affected, but the details complement other recent rumors that give better context to the comment.

As far as is known, the main adjustments would be in the thickness, which would increase especially in the Pro versions. It is described that the iPhone 18 Pro Max would be the thickest and heaviest smartphone ever made by the company, a sacrifice that would have an important advantage: a leap in battery life.

It is speculated that the batteries would increase in size, with estimates suggesting that the batteries could reach the 5,200 mAh range in the eSIM versions, which doesn't seem like such a big gain compared to the iPhone 17 family, but the giant's known optimizations could complement this aspect, along with other internal adjustments.

The main ones would be the improved efficiency of the future A20 and A20 Pro chips, as well as the arrival of the long-awaited Apple C2 5G modem. Combined, in addition to offering features such as improved satellite connectivity, these characteristics would ensure healthy advances in phone usage time.

mundophone

Thursday, April 30, 2026



TECH



Mythos AI triggers record number of patches and divides experts

Mythos AI, Anthropic's latest model, has identified, according to the company itself, thousands of unknown vulnerabilities in just seven weeks. The tool has triggered a record volume of security patches and pressured governments and central banks to coordinate emergency responses. Launched in April 2026 to a select group of organizations, it exposes a growing tension between the speed of automated detection and the human capacity to respond.

Microsoft was one of the first to feel the impact. The April edition of Patch Tuesday included fixes for 167 security flaws, a number that Adam Barnett, senior software engineer at Rapid7, described as "a new record." Barnett himself acknowledged that it was tempting to link this volume to the announcement of Project Glasswing the previous week, although without establishing a direct causal relationship.

Mozilla followed suit. Firefox 150 integrated fixes for 271 vulnerabilities detected with the support of Mythos AI, although only three were formally credited to the tool in Mozilla's official security note, according to The Register. Anthropic claims, in its official System Card, that the flaws found cover all major operating systems and browsers, some decades old.

Project Glasswing: controlled access, increasing pressure...The program was launched with eleven named partners, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, joined by more than 40 additional organizations responsible for critical software infrastructures. The more time partners have to fix flaws before the model is made widely available, the lower the risk of malicious exploitation.

Anthropic confirmed plans to extend access to European and UK banks. The European Central Bank is preparing to warn banks under its supervision about the risks of Mythos AI, according to Reuters, cited by the Business Standard. Unlike in the US, this consultation is taking place through the usual channels of dialogue with banking staff, without any extraordinary meetings with top management scheduled for now.

Banks and governments mobilized...In the US, Treasury Secretary Scott Bessent and Federal Reserve Chairman Jerome Powell met with banking executives on April 8th to encourage them to test their own systems with Mythos AI. Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley responded to the call and began internal testing, according to Bloomberg, cited by TechCrunch. Jamie Dimon, CEO of JPMorgan Chase, warned that Mythos AI exposes more vulnerabilities to potential cyberattacks, according to CNBC.

The mobilization went beyond the US. Indian Finance Minister Nirmala Sitharaman chaired a high-level meeting with bank directors, the Reserve Bank of India, the Ministry of Electronics and Information Technology, the NPCI, and CERT-In to assess the risks associated with Mythos AI, according to the Economic Times. Sitharaman urged banks to take preventative measures to protect their systems and customer data, tasking the Banking Association of India with coordinating the institutional response.

Christian Sewing, CEO of Deutsche Bank, told Bloomberg that the German banking sector does not see Mythos AI as an existential threat, although he acknowledges that its cybersecurity capabilities warrant heightened vigilance.

The dual-use dilemma...Palo Alto Networks warned that capabilities similar to Mythos AI will eventually be available outside the controlled perimeter of American companies with built-in safeguards. The risk the company points to is concrete: threat actors with access to equivalent tools could create "unprecedented autonomous attack agents in the industry," a category of risk for which current defenses are unprepared.

Anthropic's cybersecurity assessment documents the offensive capabilities of the model and the rationale behind restricted access. Artificial intelligence is finding vulnerabilities at a faster rate than teams can fix them, and defenders are facing a race for which they are not yet equipped.

Anthropic has committed up to $100 million in usage credits to partners and $4 million in donations to open-source security organizations, including Alpha-Omega, the Open Source Security Foundation, and the Apache Software Foundation, according to the official Project Glasswing page. The company guarantees that the model will not be widely available until new safeguards are operational.

The release of the Mythos model (or Claude Mythos Preview) by Anthropic in April 2026 triggered a record volume of security fixes by automating the discovery of critical flaws. The model identified thousands of zero-day (previously unknown) vulnerabilities in just seven weeks of testing, equivalent to about 30% of the annual worldwide total of such discoveries before AI tools entered the picture.

The Impact of Mythos on Cybersecurity...Mythos's differentiating factor is not only the volume of flaws found, but its autonomy and speed. It can perform complex analyses, chain multiple vulnerabilities, and generate functional exploits in minutes or hours, tasks that would take weeks for experienced human researchers.

Emblematic discoveries: Mythos identified a 27-year-old flaw in OpenBSD and a 16-year-old vulnerability in the FFmpeg video software, both ignored by decades of human audits and traditional automated tools.

Patch Wave: Anthropic formed the Project Glasswing consortium — including Microsoft, Google, Apple, and the Linux Foundation — to provide early access to the model. The goal is to allow these partners to patch their systems before the model (or similar capabilities) falls into malicious hands.

Patch Bottleneck: Experts warn that the speed of AI discovery has surpassed human patching capacity. This has created a "congestion" of updates, forcing companies to prioritize exploitable flaws instead of trying to patch the entire reported volume, as the sketch below illustrates.
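A minimal sketch of that triage idea in Python, under stated assumptions: the fields and scores are hypothetical illustrations, not Anthropic's or any vendor's actual API, and real programs would weigh many more signals (asset exposure, exploit maturity, patch risk).

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    severity: float   # e.g., a CVSS-style base score from 0 to 10
    exploited: bool   # exploitation or public proof-of-concept observed

def triage(findings, capacity):
    # Rank evidence of exploitation first, raw severity second,
    # then keep only what the team can realistically patch this cycle.
    ranked = sorted(findings, key=lambda f: (f.exploited, f.severity), reverse=True)
    return ranked[:capacity]

backlog = [
    Finding("CVE-0000-0001", 9.8, False),  # critical, but no known exploit
    Finding("CVE-0000-0002", 7.5, True),   # actively exploited
    Finding("CVE-0000-0003", 5.3, True),   # exploited, moderate severity
]

for f in triage(backlog, capacity=2):
    print(f.cve_id, f.severity, f.exploited)

Run as written, this schedules the two exploited flaws and defers the unexploited critical one: exactly the trade-off the bottleneck forces.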

Why wasn't the model released to the public? Due to its high potential for offensive use (dual use), Anthropic decided to keep Mythos as an internal research model, with no plans for general release. The company cited its Responsible Scaling Policy (RSP), indicating that the model has reached capability levels (ASL-3) that require extreme safeguards against the development of biological weapons and large-scale cyberattacks.

Recommendations for users and businesses...With the acceleration of discoveries, the traditional model of "waiting for a vulnerability to fix" has become insufficient.

Automatic updates: Enable automatic updates on all devices, especially browsers and operating systems.

Digital hygiene: Use password managers and multi-factor authentication (MFA) to mitigate the impact if an account is compromised by a system failure.

Data security: Companies should focus on protecting the data itself (data-centric security) and behavioral monitoring, assuming that breaches in the external perimeter will become increasingly common.

mundophone


TECH


Is there a way for Windows to win back users frustrated with its performance?

Microsoft seems to have woken up to a reality that many of us already felt at our fingertips when opening the task manager: Windows has become a resource hog. If 2025 was the year that Artificial Intelligence (AI) was pushed into every corner of the operating system, 2026 is proving to be the year of the "general cleanup." Satya Nadella, the man at the helm of the Redmond giant, finally admitted that it is necessary to regain the trust of the average user, putting performance and stability ahead of the futuristic promises of Copilot.

It's no secret that the last year has been turbulent for those using Windows 11. Between updates that caused system errors and the forced integration of AI tools that not everyone asked for, the user experience has degraded. The "running before you can walk" strategy with Copilot has left the system heavy and, at times, confusing.

Now, the order coming from the top is clear: prioritize quality. Nadella acknowledged, during the presentation of the company's financial results, that the fundamental work to "win back the fans" has already begun. This implies a paradigm shift, where the focus is no longer just on what AI can do for you, but on how fast and fluid your computer can perform basic day-to-day tasks.

One of the most interesting promises of this new guideline is the direct focus on RAM consumption. You know that moment when your laptop starts to overheat and the fan sounds like a jet engine just because you have your browser and a document open? Microsoft wants to put an end to that, especially on devices with fewer resources.

-Core optimization: Reduce the memory footprint of services running in the background.

-Efficiency on modest devices: Improve performance on machines with 8GB of RAM or less, which suffered considerably with the latest versions of Windows 11.

-Strategic retreat: Pause the release of experimental AI features to ensure that core functions do not break.

-Ecosystem improvement: Beyond Windows, Bing, Edge, and Xbox are under the same microscope for performance optimization.

The K2 project and the restructuring of fundamentals... Beyond the words of a CEO to investors, there is technical evidence that this change is happening. Internal reports point to an effort dubbed "K2," which serves as a kind of foundation to rectify the structural problems of Windows 11. This plan focuses on the "fundamentals"—a technical term for boot speed, interface latency, and update reliability.

By admitting that the system needs work, Microsoft is validating the complaints of millions of people who felt that Windows had become less user-focused and more of a vehicle for advertising and subscription services. The idea now is to simplify, cleaning up code that doesn't add value and ensuring that the processor isn't being occupied with useless processes.

Real impact on your daily work...What can you expect from this change of direction in the coming months? If Nadella's promise materializes, your screen will no longer be a battleground against sudden slowdowns. Optimizing RAM consumption means that heavy applications will have more room to maneuver and that multitasking will feel smooth again.

The message from this recent conference is that Microsoft realized that having the world's smartest assistant is useless if the user feels like turning off the computer out of frustration with the system's slowness. Returning to basics may not be as flashy as a new generative language model, but it's exactly what the vast majority of us need to maintain smooth productivity. Let's hope this intention to better serve core users doesn't just remain on paper and translates into efficient code.

Windows is known for being a memory hog, but it has several native tools and simple tweaks that help free up space for what really matters.

Here are the most effective ways to reduce RAM consumption:

1. Disable startup applications...Many programs (like Spotify, Steam, or Teams) start running as soon as you turn on your PC, even if you're not going to use them.

How to do it: Press Ctrl + Shift + Esc to open Task Manager, go to the Startup tab and disable everything that is not essential.

2. Control browser consumption...Chrome and Edge are the biggest RAM culprits.

Use efficiency mode: In Edge, enable sleeping tabs, which suspend tabs you are not using. In Chrome, enable "Memory Saver" in the Performance settings.

Extensions: Each installed extension consumes some RAM. Remove the ones you don't use.

3. Adjust visual effects...Windows uses RAM to process animations, shadows, and transparencies.

How to: Search for "Adjust the appearance and performance of Windows" in the Start Menu. Select "Adjust for best performance" or manually uncheck items such as "Animate windows when minimizing and maximizing".

4. Close background processes...Some native Windows apps continue running unnecessarily.

How to: Go to Settings > Apps > Installed Apps. Click the three dots next to an app (such as Weather or Calculator), go to Advanced Options, and under "Background app permissions", select Never.

5. Check the paging file (virtual memory)...Windows uses a portion of the hard drive/SSD as if it were RAM when real memory runs out. If this file is misconfigured, the system may become slow.

Generally, letting Windows manage it automatically is ideal, but restarting the PC clears this cache and often resolves memory hiccups. 

6. Check for malware...Some viruses mine cryptocurrencies or perform hidden processes that drain RAM. Run a full scan with Windows Defender.

Extra tip: If your computer has 4GB or 8GB of RAM, these tips help, but the system will always be at its limit by current standards. If RAM usage is high even with everything closed, it might be time to consider a physical upgrade.
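Before upgrading, it helps to measure. Here is a minimal sketch in Python that lists the processes using the most RAM; it assumes the third-party psutil package (pip install psutil) and is only an illustration, not a Microsoft tool:

import psutil

def top_memory_processes(n=10):
    # Collect resident memory (RSS) per process; psutil reports None
    # for processes we lack permission to inspect, so skip those.
    procs = []
    for p in psutil.process_iter(["name", "memory_info"]):
        mem = p.info.get("memory_info")
        if mem is not None:
            procs.append((mem.rss, p.info.get("name") or "unknown"))
    return sorted(procs, reverse=True)[:n]

for rss, name in top_memory_processes():
    print(f"{rss / 1024**2:8.1f} MB  {name}")

Running it before and after disabling startup apps shows whether the changes actually freed memory.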


mundophone

Wednesday, April 29, 2026


TECH


Why has the battle for control of AI left the planet, and who is leading it?

While the world still looks to rockets and manned missions, a much quieter dispute is advancing in space — and could redefine who controls the most powerful technology of the century.

For decades, the space race was synonymous with exploration, flags, and historic achievements. Today, the scenario has changed almost invisibly. What is at stake is no longer going further, but processing faster. Instead of astronauts, servers take center stage. Instead of lunar bases, orbital data centers. And in the midst of this transformation, a recent movement is beginning to indicate that someone may have taken the lead.

The new space race doesn't happen in front of the cameras. It unfolds silently, driven by chips, algorithms, and digital infrastructures. The goal is no longer just to explore space — now it's to use it as a platform for something much more strategic: advanced computing.

In recent years, artificial intelligence has become one of the most resource-intensive technologies. Increasingly larger models demand energy, cooling, and processing capacity on a massive scale. This has put pressure on data centers on Earth, which already face physical, environmental, and economic limitations.

It is in this context that an idea that once seemed distant emerges: taking computing off the planet.

And while many countries are still discussing possibilities, one has already begun to implement it.

A game-changing step forward...Without much fanfare, a set of satellites with processing capacity has been placed in orbit with a clear objective: not only to transmit data, but to analyze it directly in space.

These systems function as true computing nodes, capable of running artificial intelligence models without depending on terrestrial infrastructure. This represents a profound change. Instead of sending information to Earth and waiting for processing, data can be analyzed almost instantly, in the very environment where it is captured.

More than a test, this is an operational base under development. And that makes a difference.

While Western projects still explore prototypes, point tests, and experimental initiatives, this approach bets on direct implementation. In practice, this means gaining real-world experience, solving problems before others, and, most importantly, occupying space—literally.

The motivation isn't just technological. It's also energetic.

Data centers consume enormous amounts of electricity and water for cooling, and as AI grows, this consumption increases exponentially. In space, the scenario changes: solar energy is abundant and virtually continuous, and waste heat can be radiated into the cold of deep space, although, with no air for convection, that requires large radiator panels rather than fans or water loops.

This reduces costs, increases efficiency, and eliminates competition for terrestrial resources.

But there is another even more relevant factor: autonomy.

Processing data directly in orbit allows for faster responses and reduces dependence on terrestrial infrastructure. This is especially important for strategic applications, such as environmental monitoring, communications, and defense systems.

The real prize isn't technology...Ultimately, this race isn't just about innovation. It's about control.

Whoever masters the ability to process data in space will have an advantage in critical areas: surveillance, real-time analysis, strategic decision-making, and even military operations. It's not just about efficiency—it's about power. And that's why governments and large companies are investing heavily in this type of technology.

The digital infrastructure of the future may not be in servers scattered around the planet, but orbiting around it.

Building a supercomputer in space is not simple. The challenges are extreme: constant radiation, lack of maintenance, severe thermal variations, and the need for perfect operation for years.

Even so, experts believe that this reality is closer than it seems.

Perhaps the most impressive thing is that this new race has already begun—and most people haven't realized it yet.

While we continue to associate artificial intelligence with apps and screens, something much bigger is beginning to take shape above our heads.

It's not just machines.

It's systems that can redefine the technological balance of the planet.

The battle for control over AI has literally "gone off-planet" because orbital space offers an attractive environment for the massive physical requirements of next-generation artificial intelligence: abundant solar energy, radiative cooling, and regulatory freedom.

As AI models grow, they face "Earth-bound" bottlenecks like power grid strain, water scarcity for cooling, and local opposition to massive data centers. Moving these operations to space addresses these issues while placing them beyond the reach of traditional national laws. 

Why the battle shifted to space...The shift is driven by three primary factors that make Earth increasingly inhospitable for "frontier" AI development: 

Unlimited solar energy: Earth-based data centers already push local power grids to their limits, driving up electricity costs for residents. In orbit, data centers have access to near-constant, high-intensity solar power without atmospheric interference.

Thermal management: AI hardware generates extreme heat. On Earth, dissipating it requires millions of gallons of water for cooling, a growing environmental concern. In the vacuum of space there is no convection, so heat must be radiated away; the cold background of deep space makes radiative cooling effective, but it demands large radiator surfaces, as the sketch after this list illustrates.

The "legal void": There is a growing "crisis of control" as governments struggle to regulate AI's safety risks. Off-planet data centers operate in a "no-man's-land" where companies can bypass "NIMBY" (Not In My Backyard) protests and strict national safety or privacy laws. 

The geopolitical & corporate "space race"...The "battle" is no longer just about who has the best code, but who controls the extraterrestrial infrastructure supporting it:

-Corporate sovereignty: Leaders at OpenAI and Google are reportedly exploring "orbital data farms" to decouple their most advanced models from terrestrial constraints.

-Global dominance: Geopolitical rivals like the U.S. and China view AI as a "winner-take-all" race. Dominating the space-based compute layer ensures that their AI systems can operate at a scale—and with a level of autonomy—that Earth-bound rivals cannot match. 

-The "alien mind" perspective...Some experts and futurists suggest that a sufficiently advanced AI would naturally prefer space. Unlike biological life, AI is not tethered to Earth's biosphere; its only "food" is data and energy, both of which are more abundant and accessible in the cosmos. In this view, the "battle" is the beginning of a civilizational split where the most powerful intelligence eventually outgrows its "cradle" on Earth.

mundophone
