Thursday, January 29, 2026

 

DIGITAL LIFE


Google dismantles network in China that used 9 million Androids for cybercrimes

If you've noticed your smartphone is slow or consuming more data than usual, it might not be your fault. Google has just announced one of the largest cleanup operations in Android history, dismantling a massive network that silently used about nine million devices worldwide as "gateways" to the internet, without their owners suspecting anything.

The operation targeted the China-based company IPIDEA, which is accused of operating the world's largest "residential proxy network." In practice, IPIDEA transformed millions of cell phones, computers, and smart devices belonging to ordinary people into an infrastructure rented out to criminals.

The scheme was simple and insidious. IPIDEA convinced developers of free apps (games, utilities, and the like) to embed a covert software development kit (SDK), paying them for each installation.

When a user installed one of these “free” apps, the SDK turned the device into a proxy. This allowed hackers and cybercriminals to route their internet traffic through the victim's phone.

The result: to the outside world, it appeared that the criminal activity (such as fraud or computer attacks) was being carried out by the innocent phone owner, concealing the attackers' true identity.
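The relay mechanics themselves are trivial. As a rough illustration only (this is not IPIDEA's actual code), a few lines of Python are enough to turn any host into a TCP forwarder: whoever connects to the relay reaches the real target, and the target only ever sees the relay's own address.

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes one way until the source closes, then signal end-of-stream.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def relay(listen_port, target_host, target_port):
    """Accept one client on listen_port and forward its traffic to the
    target. From the target's point of view, the connection originates
    from this host, not from the real client."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((target_host, target_port))
    t = threading.Thread(target=pipe, args=(client, upstream))
    t.start()
    pipe(upstream, client)   # target -> client, while t does client -> target
    t.join()
```

A proxy SDK hidden in an app does essentially this at scale, on the victim's device and without consent, which is why the abuse is so hard to trace back to the real actor.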

This network was the basis for the creation of the Kimwolf botnet, which last year hijacked millions of devices to launch devastating DDoS attacks, considered the most powerful ever observed.

IPIDEA, one of the world's largest residential proxy networks...John Hultquist, chief analyst at Google Threat Intelligence Group (GTIG), highlighted the danger of these networks.

According to him, residential proxies are common tools, used for everything from sophisticated espionage to large-scale criminal schemes.

“By routing traffic through a person's home connection, attackers can hide in plain sight while invading corporate environments. By taking down IPIDEA's infrastructure, we were able to dismantle a global marketplace that sold access to millions of hijacked consumer devices.”

Google also revealed that, until recently, IPIDEA's infrastructure was used by more than 550 threat groups with motivations ranging from cybercrime and espionage to advanced persistent threat (APT) operations and disinformation campaigns, originating from countries such as China, North Korea, Iran, and Russia.

These activities included unauthorized access to SaaS environments and on-premises infrastructure, as well as password-spray attacks.
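Defensively, password spraying has a recognizable signature in authentication logs: one source tries a few passwords against many accounts, rather than many passwords against one. A minimal detector along those lines might look like the sketch below; the event format and threshold are invented for illustration.

```python
from collections import defaultdict

def detect_spray(events, account_threshold=10):
    """Flag source IPs whose failed logins touch unusually many distinct
    accounts -- the classic password-spray pattern (few guesses per
    account, spread across many accounts).

    events: iterable of (source_ip, account, success) tuples.
    Returns the set of suspicious source IPs.
    """
    accounts_per_ip = defaultdict(set)
    for ip, account, success in events:
        if not success:
            accounts_per_ip[ip].add(account)
    return {ip for ip, accts in accounts_per_ip.items()
            if len(accts) >= account_threshold}
```

Real detection pipelines add time windows and allow-lists, but the core heuristic (distinct accounts per source) is the same; residential proxies defeat it precisely by spreading the source IPs across thousands of innocent homes.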

A recent study by Synthient showed that malicious actors behind the AISURU/Kimwolf botnet exploited vulnerabilities in residential proxy services, such as IPIDEA, to send commands to vulnerable IoT devices behind firewalls, spreading malware.

This malware, capable of transforming consumer devices into proxy endpoints, is secretly installed in applications and games already pre-installed on Android TV streaming boxes from little-known brands.

In this way, these infected devices begin to relay malicious traffic and participate in distributed denial-of-service (DDoS) attacks.

In addition, IPIDEA launched independent apps, aimed directly at users interested in making "easy money," offering payment for installing the applications and allowing the use of their "idle bandwidth."

Residential proxy networks allow traffic routing through IP addresses provided by internet service providers (ISPs), but they are also ideal for criminals to disguise the origin of malicious actions.

The Google Play Protect cleanup...With a US federal court order in hand, Google took down dozens of back-end systems and websites that controlled the network. In addition, the Google Play Protect security system was updated to automatically identify and remove any app containing IPIDEA's malicious code, blocking new installations.

Although IPIDEA claimed its services were intended for “legitimate business” use, Google considered the risks unacceptable. The case is a stark reminder that "free" on the internet often comes at a hidden cost. The recommendation remains: avoid installing apps from unknown sources, and regularly review the apps on your phone and the permissions they hold.

On January 28, 2026, Google announced it had dismantled a massive, China-based residential proxy network operated by IPIDEA. The operation, supported by a U.S. federal court order, crippled a network that had hijacked approximately 9 million Android devices, along with computers, to facilitate global cybercrime and espionage.

Key details of the operation:

The culprit: IPIDEA operated at least 13 residential proxy brands, using software development kits (SDKs) embedded in legitimate-looking apps to turn consumer devices into "exit nodes".

The scale: The botnet comprised over 9 million infected Android devices and other internet-connected, non-Play-certified devices.

The impact: Google’s Threat Intelligence Group (GTIG) observed over 550 threat groups using IPIDEA’s network in a single week in January 2026 to hide their identities while performing hacking, password-spraying, and other malicious activities.

The action: Google seized dozens of domains, disabled the technical backend, and removed hundreds of associated applications, degrading the network by millions of devices.

How the network operated:

IPIDEA’s network acted as a "residential proxy" service, meaning criminals could route their traffic through regular consumers' home Android phones and devices.

Invisible infection: Users likely installed apps that seemed harmless but contained malicious code that turned their device into a proxy relay.

Cybercriminal anonymity: This enabled hackers to make their illegal traffic appear as if it was originating from legitimate residential homes, making it difficult for law enforcement to track them.

Global reach: The compromised devices were used for various malicious actions, including data theft and large-scale, automated attacks.

Broader context: 2025–2026 Android Threats

This action follows a trend of massive, China-linked botnets.

BADBOX 2.0 (July 2025): Google filed lawsuits against 25 Chinese entities for the "BADBOX" botnet, which infected over 10 million Android devices (smart TVs, projectors, tablets) with pre-installed malware.

Lighthouse (Nov 2025): Google sued another group, "Lighthouse," for a phishing-as-a-service platform that targeted over one million victims.

mundophone

 

DIGITAL LIFE


Now under pressure, digital platforms create filters against ‘AI junk’

As “artificial intelligence junk” content spreads across the internet, efforts to contain the flood of images and videos considered low quality are also growing. Productions such as cats painting pictures, celebrities in compromising situations, or cartoon characters promoting products have become ubiquitous with the use of easily accessible AI tools, such as Google's Veo and OpenAI's Sora.

— The advancement of AI has generated questions about low-quality content, also known as AI junk [a term that has become popular in English as “AI slop”] — says Neal Mohan, CEO of YouTube.

According to critics, this is material created on a large scale, with little creative effort. This type of content is “cheap, bland and mass-produced,” says Swiss engineer Yves, who spoke to AFP but preferred not to give his last name.

This assessment contrasts with the view of industry leaders. Satya Nadella, head of Microsoft, argues that the debate should be overcome and that technology should be adopted as an instrument to enhance creativity and productivity. The company is among the giants that invest heavily in artificial intelligence.

There are also those who see in the criticism of so-called "AI junk" a broader resistance to the democratization of creative tools.

— At its core, the criticism of AI junk is a criticism of individual creative expression — argues Bob Doyle, a YouTube influencer specializing in AI-generated content.

Stricter measures:

Despite the disagreements, digital platforms have begun to react. Pinterest informed AFP that it created a specific filter after receiving recurring requests from users who wanted to see fewer images of this type. TikTok introduced a similar feature at the end of last year.

YouTube, as well as Instagram and Facebook — both belonging to Meta — offer mechanisms to reduce exposure to this content, although they do not have explicit filters aimed solely at AI-generated productions.
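Mechanically, these controls tend to be simple: content gets labeled as AI-generated at upload time (by the creator or by a classifier), and the feed then honors a per-user preference. A toy sketch of that last step, using an invented item format rather than any platform's real API:

```python
def filter_feed(items, hide_ai=True):
    """Drop items whose metadata flags them as AI-generated, mimicking the
    opt-out filters platforms such as Pinterest and TikTok have introduced.

    items: list of plain dicts; the 'ai_generated' key is an invented
    convention standing in for a platform's internal label.
    """
    return [item for item in items
            if not (hide_ai and item.get("ai_generated", False))]
```

The hard part in practice is not this filtering step but the labeling: self-declaration is unreliable, and automated detection of AI-generated media remains an open problem.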

Smaller companies are also adopting stricter measures. The music platform Coda Music, which has around 2,500 users, has begun allowing content created by artificial intelligence to be reported or even completely blocked.

"So far, there has been a lot of user participation in identifying AI artists," the company's CEO and founder, Randy Fusee, told AFP.

In the visual arts segment, the social network Cara, aimed at artists and designers and with over one million users, has implemented a combination of algorithms and human moderation to filter out AI-generated content. For its founder, Jingna Zhang, the users' demand is clear: "People want human connection."

The term "AI junk" (often called AI slop in English) refers to content (texts, images, videos) generated en masse by artificial intelligence that lacks effort, quality, or meaning, polluting the digital environment with clickbait and misinformation.

mundophone answers: Which platforms generate the most AI waste?...Based on recent reports (2025-2026), the main platforms where this content proliferates are:

YouTube: Reports indicate that more than 20% of the platform's content can already be considered "AI-generated junk," including fake videos, clips featuring synthetic avatars, and material generated automatically purely to farm ad revenue.

TikTok: The platform generates a significant amount of "digital waste" and has a considerable environmental impact, mainly due to the energy consumed by its data centers and the data traffic of short videos. It is currently adopting measures to try to reduce this waste.

Facebook and Meta (Threads/Instagram): There has been a massive increase in AI-generated posts, with researchers pointing out that the algorithm itself sometimes boosts this content, which includes fake images of political figures and curiosity clicks.

Pinterest: Users report that the visual inspiration platform has been "flooded" with AI-generated images, leading the platform to create tools to "tune" (adjust/reduce) the amount of AI in feeds.

Music Platforms (e.g., Deezer): Streaming services are facing a flood of AI-generated music (tools like Suno and Udio), with reports of over 60,000 automatically uploaded tracks daily, often by fake accounts.

Amazon Kindle and E-books: A notable increase in AI-generated digital books with fictitious authors and low quality, forcing platforms to create publishing limits.

LinkedIn: The network has seen an increase in generic, low-quality posts, automatically produced by content automation tools that scrape websites and generate posts with little "human touch."

Main Tools/Sources of AI Slop:

Text and image generators: Copy.ai, Jasper.ai, Writesonic, Midjourney, DALL-E, Canva.

Video/Avatar Tools: Synthesia, InVideo.

Audio/Music Tools: Suno, Udio.

Context of the "War on AI Slop"

In 2026, platforms like YouTube intensified the fight against this material, trying to balance the use of AI for creators with combating low-quality automated content. Meanwhile, new platforms focused on human content (such as Cara and Pixelfed) are emerging as "anti-AI" alternatives.

mundophone

Wednesday, January 28, 2026


DIGITAL LIFE


In times of AI generating billions in profits for a few and being omnipresent, cell phone-free clubs become an urban refuge in Europe

In-person meetings that require the surrender of smartphones at the entrance propose hours of silence, manual activities, and screen-free conversations as a response to mental exhaustion, hyperconnectivity, and the growing difficulty of being present in the physical world.

Contemporary life is marked by screens, constant notifications, packed schedules, and hurried commutes. In response, groups are forming in major European cities to experience something increasingly rare: in-person meetings without cell phones, where silence, mindfulness, and direct interaction replace, even if only for a few hours, the logic of hyperconnectivity.

Joel Khalili, a journalist for Wired, participated in one of these meetings in London, England, and reported on the experience. “I was greeted at the door by the event host, who was wearing a T-shirt that said ‘The Offline Club.’ I handed over my cell phone, which was stored in a locker specially built for the event — a kind of miniature capsule hotel,” he said.

According to him, the Offline Club started somewhat impromptu in 2021 in the Netherlands, organized by Ilya Kneppelhout, Jordy van Bennekom, and Valentijn Klok. After an initial meeting, the trio, finding the experience instructive, began promoting sporadic offline getaways with the goal of fostering informal interaction between strangers.

Officially, the club was founded in February 2024, with events in a café in Amsterdam. Later, it expanded to 19 other cities, mainly in Europe, with each branch run as a franchise by part-time organizers.

“The events generally follow a defined format: an hour of silence, during which people are free to do whatever they want — read, assemble puzzles, color, do crafts, and so on — followed by an hour of conversation without cell phones with the other participants,” explained Khalili.

Laura Wilson, co-host of the London branch of Offline Club, told the journalist that the gatherings are designed as a remedy for the frenetic, noisy, and impersonal pace of city life, where every fraction of time is measured and controlled by alerts and reminders sent by smartphones.

“It’s like a moment of free time, where you have no responsibilities for a while,” she noted. “It’s rekindling that magic of when you were with people for no apparent reason and had no sense of time passing.”

On the night Khalili participated, he met Sangeet Narayan, a Meta programmer who works on the notification system for Facebook, Instagram, and WhatsApp. Narayan said he went to the event hoping to break free from his addiction to those same apps.

“I feel like I’m addicted to my cell phone,” he confessed. “I feel an urgent need to look at my phone—to open it, for no reason at all.”

He added that, during the first hour of the meeting, he found himself resisting the urge to look around to see what other people were doing. “It felt like I was snooping into their private lives,” he pointed out.

Khalili, for his part, admitted that it took him a while to get used to the combination of silence and collective concentration. “Twice, I found myself reaching into my pocket where my phone should be, to check how much time had passed. A flash of panic—I must have lost it somewhere!—gave way to embarrassment at this unwelcome evidence of my own pre-programming. While taking notes or playing with colored pencils, however, I managed to forget about the 40 strangers in the room,” he added.

When it was time to socialize, he started chatting with the people closest to him, talking about the quiet activities they had chosen, the books they were reading, the prospect of raising children in the smartphone age, and the recent ban on social media in Australia.

“The conversation often revolved around a hypocrisy widely shared by the group: the belief that the habit of constantly scrolling invades free time, notifications disturb the peace, and algorithms pollute discourse, combined with a simultaneous reluctance to give up any of these things,” the journalist commented.

And he concluded: “When it was time to leave, I got in line to get my phone…I put on my headphones, selected a song, and opened Google Maps to see the way home.”

The Offline Club is a Dutch movement created by Ilya Kneppelhout, Jordy van Bennekom, and Valentijn Klok that promotes digital detox gatherings. With over 530,000 followers on Instagram, the group organizes events where participants put away their cell phones and laptops, focusing on conversation, reading, and hobbies, aiming to reduce screen addiction and loneliness, expanding to cities like London and Paris.

Main characteristics and proposals:

Mission: To exchange "screen time" for "real time" and restore humanity to society.

Meetings: Include coffee, dinner, and technology-free retreats where people read, paint, play board games, or relax.

Origin and Expansion: Created in Amsterdam (Netherlands) in February 2024, the movement has grown rapidly and now holds meetings in several European cities and also in New York.

Objective: To combat the epidemic of loneliness and the rampant consumption of digital media. The club, despite using Instagram to organize events, strongly encourages physical disconnection, creating the paradox of using technology to promote "offline" activities.

mundophone


DIGITAL LIFE


Why the future of AI depends on trust, safety, and system quality

When Daniel Graham, an associate professor at the University of Virginia School of Data Science, talks about the future of intelligent systems, he does not begin with the usual vocabulary of cybersecurity or threat mitigation. Instead, he focuses on quality assurance and on how to build digital and physical systems we can trust.

"We are moving toward a world where software does not just live in the digital space," said Graham, who earned bachelor's and master's degrees in engineering at UVA. "It's embodied in cars, robots, medical devices and public infrastructure. Once systems can act in the real world, the cost of failure becomes physical. So, the question is not only 'Is it smart?' but also 'Is it safe, reliable and high quality?'"

Graham joined the School of Data Science in 2025 after teaching computer science for seven years. The move, he said, was a chance to refresh, collaborate with new colleagues and teach in smaller, more engaged classroom environments.

The intersection of security and safety...Graham's research explores secure embedded systems and networks, particularly those that directly interact with the physical world, including medical devices, water treatment systems, autonomous vehicles and other forms of operational infrastructure.

Early in his career, Graham saw firsthand how vulnerabilities in software could translate into real-world consequences. Over time, this led him to view security not as a defensive activity, but as a measure of system quality and safety.

"We already know how to build incredibly powerful smart systems," he said. "What we need now is assurance." He emphasized that as society increasingly relies on intelligent systems to manage hospitals, transportation networks, power grids and military hardware, those systems must be dependable.

He believes the model already exists. Just as a professional engineer must sign off on the safety of a bridge, he says, future data and AI systems should require comparable review, oversight and certification.

"We have strong regulatory norms for physical infrastructure," he said. "But the digital infrastructure that increasingly runs everything does not yet follow comparable accountability standards. That has to change."

A public voice on responsible system evaluation...Graham also writes and teaches widely on secure systems evaluation and penetration testing. His book, released this year, "Metasploit: The Penetration Tester's Guide (Second Edition)," introduces readers to professional methods for testing and auditing complex systems. Its reach is global, with planned translations in Mandarin, Korean, French and Russian.

"Penetration testing is the digital equivalent of financial auditing," he said. "Just as organizations require audits to ensure the integrity of their financial systems, critical digital and embedded systems should be routinely evaluated for quality and resilience."

Framing cybersecurity in this way, Graham translates a highly technical concept into terms the public can easily understand.

"People understand quality," Graham said. "They understand the difference between something that is built well and something that is built carelessly. We should expect the same quality from the systems that run our world."

Looking ahead...As data science extends further into automation, embedded intelligence and decision-making systems, Graham hopes to help shape how future practitioners view their responsibilities.

"The most important systems of the century ahead will be intelligent, networked and physical," he said. "The people building them must think carefully about safety, reliability and impact. Quality is not optional. It is the foundation of trust."

The future of Artificial Intelligence (AI) does not depend on a single factor, but on a complex combination of technical advancements, physical resources, regulation, and human acceptance. Recent research indicates that the evolution of AI in the coming years (2026 and beyond) will be shaped primarily by:

Energy and physical infrastructure: The future of AI depends on megawatts of energy, not just faster chips. The demand for sustainable energy and data centers is one of the biggest bottlenecks, often exceeding hardware production capacity.

Autonomous agents (The Next Wave): The transition from Generative AI (creating content) to autonomous agents (performing tasks and acting independently) is the central trend.

Data quality and volume: The availability of high-quality data to train models is crucial, driven by the use of Big Data and advancements in processing.

Regulation and ethics: The creation of regulatory frameworks (such as the EU AI Act) is fundamental to managing risks, ensuring privacy, and defining security standards. Ethical governance is crucial for public trust.

Human talent and skills: Organizations' ability to develop and implement AI depends on reskilling the workforce, focusing on "humans in charge" (human-in-the-loop).

Transparency (Explainable AI): The future demands overcoming the "black box" problem, making AI systems more explainable and auditable.

In short, AI is moving towards becoming a "basic work infrastructure," where deep integration, energy efficiency, and ethical accountability will determine who leads the market.

Provided by University of Virginia 

Tuesday, January 27, 2026


DIGITAL LIFE


Silent Call Scam: criminals use silent calls to harvest audio and feed it to artificial intelligence software for voice cloning

A new scam, called the "silent call scam," has become a threat to individuals and businesses. Criminals use silent calls to collect audio and feed it to artificial intelligence software for voice cloning. In this way, they can create scams like the imposter scam, in which they pressure victims into sending money and data.

At first, the "silent call scam" seems like nothing more than an odd call or a poorly executed telemarketing attempt. It takes advantage of a common reflex: answering and automatically saying "hello." That response can become the starting point for voice cloning, especially when the criminal combines the audio with other excerpts found on social media, in videos, interviews, and apps.

Adriano Warmenhoven, a cybersecurity expert at NordVPN, says that cloning tools only need a few seconds of clear audio. With ten to twenty seconds, it is already possible to create a convincing imitation. “If the victim continues talking, out of irritation, curiosity, or to create some kind of evidence, they may unintentionally give more material to the scammers,” he says.

The expert gives tips on how to identify and protect yourself from the new scam. In addition, he explains how large corporations can take steps to protect themselves from criminals.

How to identify...If the call is silent or suspicious, that is already a warning sign, and the best thing to do is hang up. A simple "hello" may be enough for a cloning tool to analyze the tone and pitch of your voice, along with other vocal characteristics.

How to protect yourself...One of the most effective protections is to avoid providing usable voice samples to scammers. In practice, this means that the person calling should have the opportunity to speak first. You should not answer the call with a "hello," and neutral responses should be given whenever possible. You should end the call if it seems suspicious, instead of prolonging it.

Beyond phone calls, people should be cautious about the amount of voice content they share online. Social media platforms are among the largest public sources of voice recordings, videos, voice messages, and interviews, and criminals can misuse them.

What are the risks? The biggest risk is criminals using your voice to create a realistic clone. Modern voice cloning software is inexpensive, widely available, and capable of producing highly convincing results from small samples. With this cloned voice, scammers can impersonate you to deceive friends or family members.

Scammers use emotional stories, such as accidents or emergencies, to trick victims into giving them money or confidential information. Even if the person realizes they are talking to a scammer and no immediate harm occurs, criminals can still use the recording later in further attacks.

What to do if you think you've fallen for a scam...If you suspect that criminals have recorded your voice, you should be alert to subsequent attempts to capture your voice through other channels. Furthermore, it's important to warn family, friends, or colleagues that a scammer may try to impersonate you using a cloned voice.

Alert contacts that any urgent or unusual request should be verified by returning the call using a known phone number or a commonly used communication method. Quick verification can prevent scammers from escalating the attack and spreading it to others.

Risk to businesses...There are significant risks for businesses that fall victim to these scams. In fact, criminals can use voice cloning to impersonate company executives, including managers or other high-ranking positions, and make requests that sound legitimate and authentic. This can lead employees to make unauthorized transactions, cause leaks of confidential information, and damage the company's reputation.

A silent call scam (or "blank call scam") is a deceptive tactic where a recipient answers a phone call only to be met with total silence before the caller hangs up. While some silent calls are caused by automated telemarketing systems (predictive dialers) failing to connect to an agent, many are calculated first steps in more advanced fraudulent schemes. 

How the scam works:

Active Number Validation: The primary goal is often to confirm that your phone number is "live" and answered by a human. Once verified, your number is marked as a high-value target and sold on the dark web or added to lists for future phishing, vishing (voice phishing), or SMS scams.

Voice Recording and AI Cloning: Scammers may stay quiet to prompt you to speak first (e.g., saying "Hello?" multiple times). As of 2026, experts warn that just a few seconds of your voice can be recorded and used by AI tools to create a convincing voice clone. This clone is then used to impersonate you in "emergency" calls to your friends or family to demand money.

Voice Authorization Fraud: Some scams specifically try to get you to say the word "Yes" (e.g., by asking "Can you hear me?"). This recording can potentially be used as a "voice signature" to authorize unauthorized charges or account changes. 

Immediate steps to take:

Hang Up Immediately: If you answer and hear no response within a second or two, disconnect the call. Do not wait or continue saying "Hello".

Stay Silent: If you must answer unknown calls, do not speak first. Wait for the caller to identify themselves.

Block and Report: Use your phone’s built-in "Block Contact" feature. You can also report the number to national agencies like the FTC (US), Action Fraud (UK), or the Chakshu portal (India).

Use Call Screening: Enable features like "Silence Unknown Callers" (iPhone) or "Spam Protection" (Android/Google) to automatically filter suspicious calls.

Set a Family "Safe Word": To counter AI voice cloning, establish a secret word with family members to verify each other's identity during urgent phone requests for money. 

mundophone


TECH


Low-cost system turns smartphones into emergency radiation detectors

Prompt, individual-based dose assessment is essential to protect people from the negative consequences of radiation exposure after large-scale nuclear or radiological incidents. However, traditional dosimetry methods often require expensive equipment or complex laboratory analysis.

Now, researchers at Hiroshima University have developed a cost-effective, portable dosimetry system that can provide immediate on-site readings using radiochromic film and a smartphone.

A low-cost, portable radiation tool...The study, published in Radiation Measurements, demonstrates a practical solution for personal preparedness in mass-casualty events. The system combines a small piece of Gafchromic EBT4 film with a foldable, battery-powered portable scanner and a smartphone camera.

Setup of the portable scanning system: a smartphone positioned above an LED-lit chamber for consistent film image capture. Credit: Bantan et al., 2026, Radiation Measurements

"To protect people in the event of a severe radiological or nuclear accident, voluntary on-site dose assessments and prompt decisions regarding medical actions must be performed immediately," says study corresponding author Hiroshi Yasuda, a professor at Hiroshima University's Research Institute for Radiation Biology and Medicine. "Simplicity, universality, and cost-effectiveness are critical factors for these emergency measures."

Graph showing the relationship between X-ray dosage and cyan color intensity as measured by different smartphone models. The results demonstrate consistent performance across devices, particularly for detecting high doses. Credit: Bantan et al., 2026, Radiation Measurements

How the smartphone system works...The EBT4 film is designed to change color instantly when exposed to radiation, a change that can be detected by the naked eye. By placing the film in a portable scanner and capturing an image with a smartphone, users can quantify relatively high radiation doses—up to 10 Gray—using mobile image-processing applications. To put this into perspective, a 10 Gray dose to the skin is high enough to cause permanent hair loss.

The research team tested the system using various smartphone models, including Samsung and iPhone devices. Their analysis showed that the cyan color channel in digital images provided the most consistent and reliable dose-response data.
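The analysis step can be approximated in a few lines. The sketch below is not the authors' algorithm, and the calibration constant is invented, but it shows the idea: measure how much the film's cyan channel has changed relative to an unexposed reference strip, then map that change to a dose through a calibration curve.

```python
def cyan_intensity(pixels):
    """Mean cyan value of an RGB image given as a list of (r, g, b) tuples.
    In additive RGB, the cyan channel is the complement of red: 255 - R.
    Radiochromic film absorbs more red light as dose increases, so its
    cyan intensity rises with exposure."""
    return sum(255 - r for r, _, _ in pixels) / len(pixels)

def estimate_dose(exposed_pixels, reference_pixels, k=0.04):
    """Estimate dose (Gy) from the cyan-intensity change between an
    exposed film piece and an unexposed reference. k is a made-up
    sensitivity constant; a real system would use a measured, typically
    nonlinear, calibration curve."""
    delta = cyan_intensity(exposed_pixels) - cyan_intensity(reference_pixels)
    return max(0.0, delta * k)
```

In the published system, the smartphone photo is taken inside an LED-lit chamber precisely so that this kind of comparison is not skewed by ambient lighting.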

While professional desktop scanners offer higher precision, this smartphone-based approach provides an adequate solution that is highly portable and costs less than US$70.

"Our goal was to design a system that works even under the worst-case accident scenarios, such as after a natural disaster where infrastructure might be damaged," Yasuda adds. The team is now working to standardize the protocols and ensure the system remains reliable under diverse environmental conditions.

Provided by Hiroshima University  

Monday, January 26, 2026

 

MICROSOFT


Maia 200: the AI chip with which Microsoft wants to usher in the “era of reasoning AI”

Microsoft announced on Monday (26) the launch of Maia 200, its next-generation artificial intelligence accelerator, developed to meet the demands of what the company called the “era of reasoning AI.” The new chip is designed to offer high performance in inference, with significant efficiency gains and cost reductions in large-scale AI workloads.

Maia 200 is already in operation in Microsoft's data centers in the central region of the United States, with expansion planned for the Phoenix, Arizona, region and other locations in the future. The first systems are being used to power new Microsoft Superintelligence models, accelerate Microsoft Foundry projects, and support Microsoft Copilot.

Because it operates some of the world's most demanding AI workloads, the company claims it can precisely align silicon design, model development, and application optimization, generating consistent gains in performance, energy efficiency, and scale on Azure.

The accelerator features native FP8 and FP4 tensor cores, over 100 billion transistors, and a redesigned memory subsystem with 216 GB of HBM3e and bandwidth up to 7 TB/s, plus 272 MB of integrated SRAM. In practice, each chip delivers over 10 petaFLOPS in FP4 precision and about 5 petaFLOPS in FP8, capable of running the largest current AI models and scaling to even more complex architectures.
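Why does the FP4 figure double the FP8 one? Four-bit values take half the memory and half the data-path width of eight-bit values, at the cost of a much coarser grid. The sketch below shows rounding to the E2M1 encoding commonly called FP4 (representable magnitudes 0, 0.5, 1, 1.5, 2, 3, 4, 6); this illustrates the general format, not Microsoft's specific tensor-core implementation.

```python
# Minimal illustration of 4-bit floating point (E2M1, the usual "FP4"
# element format): each value is snapped to the nearest representable
# magnitude. Halving bits per value is what doubles memory capacity and
# peak math throughput relative to FP8 on hardware with native support.

FP4_E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # representable magnitudes

def quantize_fp4(x):
    """Round x to the nearest E2M1-representable value, preserving sign."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(FP4_E2M1, key=lambda v: abs(v - abs(x)))
    return sign * mag

weights = [0.26, -1.4, 2.7, 5.9]
print([quantize_fp4(w) for w in weights])  # [0.5, -1.5, 3.0, 6.0]
```

In practice, deployments pair such low-precision elements with per-block scaling factors so the coarse grid tracks the dynamic range of real model weights.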

Microsoft claims that the Maia 200 outperforms Amazon's third-generation Trainium in FP4 and Google's seventh-generation TPU in FP8, in addition to offering 30% more performance per dollar compared to the most advanced hardware currently used by the company. The project's focus is on optimizing the so-called token economy, one of the main cost bottlenecks in operating generative models at scale.

The new accelerator integrates Azure's heterogeneous AI infrastructure and will be used to run multiple models, including the latest versions of OpenAI's GPT-5.2, as well as applications such as Microsoft Foundry and Microsoft 365 Copilot. Microsoft's Superintelligence team will also employ the Maia 200 in synthetic data generation and reinforcement learning pipelines, accelerating the improvement of proprietary models.

Several technology companies are now building their own chips for the same purpose, and the Maia 200 enters the market against Google's TPUs and Amazon's Trainium line. Microsoft puts its FP4 advantage over Amazon's third-generation Trainium at 3x, alongside the FP8 edge over Google's TPU v7.

From a systems perspective, the Maia 200 introduces a scalable, two-tier network architecture based on standard Ethernet, with an integrated NIC and proprietary communication protocols between accelerators. Each chip offers 2.8 TB/s of dedicated bidirectional bandwidth for expansion and supports clusters of up to 6,144 accelerators, which, according to Microsoft, reduces energy consumption and total cost of ownership (TCO) in high-density inference environments.

The Maia 200 is already being deployed in the Azure US Central region, with expansion planned for US West 3 in Arizona and other locations. Microsoft also announced a preview of the Maia SDK, with integration with PyTorch, the Triton compiler, and optimized kernel libraries, allowing portability across different accelerators and greater control for developers.

Importance of Maia 200...AI inference has become a critical and expensive part of running AI services. The Maia 200 therefore focuses on cutting costs, improving energy efficiency, and reducing dependence on NVIDIA GPUs, while optimizing the execution of models such as Copilot inside Azure data centers.
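The cost pressure is easy to see with a back-of-envelope calculation: single-stream decoding of a large language model is usually memory-bandwidth bound, because every generated token must stream the weights through the chip once. The sketch below uses the Maia 200 figures quoted above (216 GB of HBM3e, 7 TB/s); the model sizes are illustrative assumptions, not models Microsoft has named.

```python
# Back-of-envelope: upper-bound decode rate for one request stream on a
# memory-bandwidth-bound accelerator. Specs are the article's Maia 200
# figures; model sizes and byte-per-parameter choices are assumptions.

HBM_CAPACITY_GB = 216
HBM_BANDWIDTH_GBPS = 7000  # 7 TB/s

def decode_estimate(params_billion, bytes_per_param):
    """Return (fits on one chip?, rough upper-bound tokens/s per stream)."""
    weight_gb = params_billion * bytes_per_param  # 1e9 params * bytes = GB
    fits = weight_gb <= HBM_CAPACITY_GB
    tokens_per_s = HBM_BANDWIDTH_GBPS / weight_gb  # one weight pass per token
    return fits, round(tokens_per_s, 1)

# A hypothetical 400B-parameter model at FP4 (0.5 B/param) vs FP8 (1 B/param):
print(decode_estimate(400, 0.5))  # (True, 35.0)  -> 200 GB fits, ~35 tok/s
print(decode_estimate(400, 1.0))  # (False, 17.5) -> 400 GB exceeds 216 GB
```

This is why halving precision matters twice over: the same model both fits in fewer chips and decodes roughly twice as fast per chip, which is the "token economy" Microsoft says it is optimizing. Batching changes the absolute numbers but not the bandwidth-bound logic.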

In addition to powering Copilot operations, the Maia 200 is also expected to support models from Microsoft's superintelligence team, and the company has opened the chip's SDK to developers, academics, and AI labs.

mundophone
