Wednesday, March 11, 2026


TECH


How does the 'living computer' with 800,000 human neurons capable of playing video games work?

A technological demonstration released this month caught the attention of the innovation sector by showing something that, at first glance, seems like science fiction: human neurons grown in a laboratory playing video games. The experiment was presented by the Australian startup Cortical Labs, which released a video of its CL1 biological device running the classic game Doom.

A clump of human brain cells can play the classic computer game Doom. While its performance is not up to par with humans, experts say it brings biological computers a step closer to useful real-world applications, like controlling robot arms.

In 2021, the Australian company Cortical Labs used its neuron-powered computer chips to play Pong. The chips consisted of clumps of more than 800,000 living brain cells grown on top of microelectrode arrays that can both send and receive electrical signals. Researchers had to carefully train the chips to control the paddles on either side of the screen.

Now, Cortical Labs has developed an interface that makes it easier to program these chips using the popular programming language Python. An independent developer, Sean Cole, then used Python to teach the chips to play Doom, which he did in around a week.

“Unlike the Pong work that we did a few years ago, which represented years of painstaking scientific effort, this demonstration has been done in a matter of days by someone who previously had relatively little expertise working directly with biology,” says Brett Kagan of Cortical Labs. “It’s this accessibility and this flexibility that makes it truly exciting.”

The neuronal computer chip, which used about a quarter as many neurons as the Pong demonstration, played Doom better than a randomly firing player, but far below the performance of the best human players. However, it learnt much faster than traditional, silicon-based machine learning systems and should be able to improve its performance with newer learning algorithms, says Kagan.

Unlike systems based solely on algorithms, the equipment uses real human brain cells connected to a silicon chip. The neurons receive electrical stimuli corresponding to the game information and respond with signals that are interpreted as actions within the digital environment, such as moving or aiming at enemies.

A computer made of neurons...Presented during the Mobile World Congress 2025 in Barcelona, the CL1 is described by the company as the first commercially viable biological computer. At its core are approximately 800,000 human neurons derived from stem cells reprogrammed from skin and blood samples from adult donors, according to information released by IEEE Spectrum magazine.

These cells grow on an electrode array capable of sending electrical impulses and recording neural tissue responses in real time. In the Doom demonstration, approximately 200,000 neurons received game data converted into electrical signals, processed this information, and produced commands that controlled gameplay.
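The closed loop described above (game data in as electrical stimulation, neural activity out as commands) can be sketched in Python. This is a toy illustration only: the channel layout, the encoding, and the decoding rule are invented for the example and do not reflect Cortical Labs' actual interface.

```python
def encode_state(player_x, enemy_x, n_channels=8):
    """Map normalized game coordinates (0.0-1.0) onto per-channel
    stimulation amplitudes. Hypothetical scheme: the enemy position
    drives the first bank of electrodes, the player position the second."""
    stim = [0.0] * n_channels
    half = n_channels // 2
    stim[int(enemy_x * (half - 1))] = 1.0
    stim[half + int(player_x * (half - 1))] = 1.0
    return stim

def decode_action(spike_counts):
    """Interpret firing on two output regions as movement commands:
    whichever half of the electrode array fires more wins."""
    half = len(spike_counts) // 2
    left, right = sum(spike_counts[:half]), sum(spike_counts[half:])
    if left > right:
        return "move_left"
    if right > left:
        return "move_right"
    return "stay"

# One tick of the loop: stimulate, read spikes, act.
# stim = encode_state(player_x, enemy_x)   # sent to the electrode array
# action = decode_action(read_spikes())    # fed back into the game
```

Training then amounts to shaping the culture's responses, for example by delivering predictable stimulation when the decoded action was good and unpredictable stimulation when it was bad, which is broadly how the published Pong work described its feedback.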

The public demonstration was not published in a peer-reviewed study. However, the scientific basis of the project has academic precedents: in 2022, researchers affiliated with the company reported in the journal Neuron that similar neuronal cultures were able to learn to play Pong in a few minutes, spontaneously reorganizing themselves.

Energy efficiency as an advantage...The advance comes amidst the debate about the high energy consumption of artificial intelligence. While large model training centers use enormous amounts of energy, the human brain operates with about 20 watts, comparable to the consumption of an energy-saving light bulb.

According to the company's chief scientist, Brett Kagan, a rack with 30 CL1 units consumes less than one kilowatt in total. The proposal is not to compete directly with GPUs used in artificial intelligence, such as those produced by Nvidia, but to operate in areas where adaptive learning and energy efficiency are more relevant, such as robotics, drug discovery, and modeling neurological diseases.
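Those power figures are easy to sanity-check with back-of-the-envelope arithmetic, using 1 kW as an upper bound since the rack draws "less than" that:

```python
rack_watts = 1000      # upper bound: a 30-unit rack draws "less than one kilowatt"
units_per_rack = 30
per_unit_watts = rack_watts / units_per_rack   # at most ~33 W per CL1 unit

brain_watts = 20       # the figure quoted for the human brain
print(f"<= {per_unit_watts:.0f} W per unit vs {brain_watts} W for a brain")
```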

Convergence between brain and machine...The development occurs in parallel with initiatives that seek to directly integrate the human brain and technology. One of the best known is Neuralink, a company that works on implanting electrodes in the brain for communication with computers.

While projects of this type connect devices to the human brain, the Cortical Labs system follows the reverse path: it takes biological tissue into the machine. Experts point out that, in the future, these two approaches may converge in the creation of hybrid interfaces between biological intelligence and digital computing.

Neurons as a service...In addition to selling the device, whose announced price is around US$35,000 per unit, the company is betting on a remote access model called "wetware as a service". In this system, researchers can use live neuronal cultures hosted in a laboratory for approximately US$300 per week, without needing to maintain their own infrastructure.

Among the startup's investors is In-Q-Tel, a venture capital fund associated with the US intelligence community, indicating a strategic interest in the technology's development.

According to the company, the neuronal cultures used in the system do not present structures associated with consciousness. Even so, researchers acknowledge that the expansion of this type of technology raises ethical and regulatory questions that do not yet have a clear legal framework. For many experts, the discussion about the use of human tissue in commercial computing is only just beginning.

mundophone

Tuesday, March 10, 2026


DIGITAL LIFE


The AI that taught itself: How AI can learn what it never knew

For years, the guiding assumption of artificial intelligence has been simple: an AI is only as good as the data it has seen. Feed it more, train it longer, and it performs better. Feed it less, and it stumbles. A new study from the USC Viterbi School of Engineering, accepted at IEEE SoutheastCon 2026 (taking place March 12–15), suggests something far more surprising: with the right method in place, an AI model can dramatically improve its performance in territory it was barely trained on, pushing well past what its training data alone would ever allow.

The method was developed by Minda Li, a USC Viterbi undergraduate who has been pursuing research since her freshman year, working alongside her advisor Bhaskar Krishnamachari, a Faculty Fellow and Systems Professor in the Ming Hsieh Department of Electrical and Computer Engineering, with a joint appointment in the Thomas Lord Department of Computer Science at the USC Viterbi School of Engineering and the USC School of Advanced Computing. Their paper is available on the arXiv preprint server.

Together, they tested GPT-5's ability to write code in Idris, an extraordinarily obscure programming language with a fraction of the online presence of mainstream languages like Python. The results were striking: by giving the AI feedback on its errors and letting it try again, Li pushed the model's success rate from a dismal 39% all the way to 96%.

“Our AI tools are now able to transcend their initial training. Used to be, maybe a year or two ago, you would say an AI model is only as good as the data it has seen. This paper is saying something different,” said Prof. Bhaskar Krishnamachari.

A language so obscure, even the researchers didn't know it...Python, the world's most popular programming language, has over 24 million code repositories publicly available online, a vast library that AI models like GPT-5 learn from during training. Idris, the language Li and Krishnamachari chose to test, has approximately 2,000. That is roughly 12,000 times less data.

The choice of Idris was deliberate, and, as Krishnamachari describes it, a little playful. "We were hunting for a language so obscure that we hadn't heard of it," he said. "I think we were just in my office together, googling around, trying to find some crazy language that no one's ever heard of." They found Idris, a dependently typed functional programming language used by a small community of specialists, and decided it was the perfect test case.

Crucially, neither researcher could write a line of it themselves. "Neither Minda nor I had ever coded in it, and frankly, we could not tell you if the code was correct or wrong," Krishnamachari admitted. That is part of what makes the findings so striking: Li was guiding an AI to master a language that its own guides could not speak.

The breakthrough: A feedback loop that changes everything...Li started by simply asking GPT-5 to solve 56 Idris coding exercises on Exercism, a popular coding practice platform. Out of the box, the model solved only 22 of them, a 39% success rate, far below its 90% success rate in Python and 74% in Erlang.

She then tried several approaches to improve performance: providing documentation, error manuals, and reference guides. These helped somewhat, pushing the success rate to the low 60s, but never dramatically.

The breakthrough came when she implemented what they call a compiler feedback loop. A compiler is the software that translates human-written code into instructions a computer can execute. When code is wrong, the compiler says so, precisely and in technical detail. She began capturing those error messages and feeding them directly back to GPT-5, asking it to fix the specific problems identified and try again, up to 20 times per problem.
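The structure of that loop is simple enough to sketch. The version below stubs out the model call and the compiler check (in the actual experiment these were GPT-5 and the Idris compiler; the function names here are illustrative):

```python
MAX_ATTEMPTS = 20  # the cap used in the experiments: up to 20 tries per problem

def solve_with_feedback(task, generate, compile_check, max_attempts=MAX_ATTEMPTS):
    """Ask the model for code, feed compiler errors back, repeat.

    generate(prompt)    -> candidate source code (string)
    compile_check(code) -> (ok: bool, errors: str)
    Returns (code, attempts_used) on success, (None, max_attempts) on failure.
    """
    prompt = task
    for attempt in range(1, max_attempts + 1):
        code = generate(prompt)
        ok, errors = compile_check(code)
        if ok:
            return code, attempt
        # Re-prompt with the precise compiler diagnostics appended.
        prompt = (f"{task}\n\nYour last attempt failed to compile:\n"
                  f"{errors}\nFix these errors and try again.")
    return None, max_attempts
```

The key design choice is that the compiler, not a human, supplies the corrective signal: its error messages are precise, objective, and cheap to generate, which is what lets the loop run unattended.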

"I thought we'd probably get a 10% jump," said Li, who designed and ran the experiments. "I was surprised that just that alone, seemingly one simple thing, just keep recompiling, keep trying, was able to get to 96%."

Beyond code: Why this changes everything...What Li and Krishnamachari built is essentially a method for unlocking capability that was always there but inaccessible. By engineering the right kind of structured feedback, they found a way to get far more out of an AI model than its training data alone would ever produce.

Krishnamachari envisions this approach being applied far outside the world of software and niche programming languages. "Imagine you're trying to get an AI to build 3D models of buildings," he said. "You have something that gives feedback: this model is structurally unsafe, it doesn't have the right distribution of materials, it's too expensive to build. Whatever it is, it just gives feedback on everything the AI generates, every iteration.

"What I've learned from this project is that, so long as you can figure out how to provide that kind of clear and correct feedback, there's a chance we can now significantly improve the quality of AI outputs."

He also sees applications in mathematical reasoning and even legal logic, any domain with rules clear enough to generate objective feedback. "If you asked an AI agent to produce a proof of a theorem, it should be fairly easy to say this is incorrect, and here's why, and have it take another crack at it."

The research may also have meaningful implications for endangered and low-resource human languages. Krishnamachari's former Ph.D. student Jared Coleman has been working on Owens Valley Paiute, a Native American language with very limited written data, exploring whether AI can assist in translation with minimal training, mirroring the same core challenge Li tackled with Idris.

One problem down, one to go...Li is already thinking about what comes next. The current system essentially brute-forces its way to a solution, trying and failing until something works, but does not retain what it learned from problem to problem. She wants the model to get smarter with each problem rather than starting from scratch every time.

For Krishnamachari, the bigger picture is about what AI is becoming. "Part of the craziness of all of this is getting an AI tool to do a task that we cannot do ourselves," he said. "We are building tools that are, in some sense, more powerful than we are."

That doesn't worry him; it excites him. AI, he believes, will allow us to execute ideas we previously thought were out of reach, freeing us from the grunt work and putting the onus back where it belongs: on having good ideas in the first place.

It started, after all, with two people in an office, googling around, wondering what would happen if they tried something a little bit crazy.

Provided by University of Southern California 


APPLE



iPhone 17e: the great irony of having a screen manufactured by rival Samsung

You may have noticed that in the world of technology, rivalries often hide great behind-the-scenes partnerships. Looking at the eternal war between Apple and Samsung, it's easy to think they are mortal enemies who don't even exchange a "good morning." But the reality of business is very different. The recently announced iPhone 17e, Apple's new, more "accessible" offering, is the perfect example of this curious dynamic.

Did you know that a huge part of the screen you'll see every day if you buy this phone is actually manufactured by Samsung? That's right, the South Korean giant has once again secured the largest share of OLED panel supply for Apple.

According to a recent and quite detailed report from the specialized portal TheElec, Samsung Display (the independent division responsible for creating screens) is not playing around in this market. To give you an idea of the scale of this massive business, last year alone, the brand supplied around 11 million OLED panels for the predecessor iPhone 16e. This represented no less than 50% of the total screen volume for that model. The rest of the pie was divided between LG Display, with 7.5 million units, and the Chinese manufacturer BOE, which came in at a modest 3.5 million. The information revealed now indicates that Apple will maintain exactly the same structure and a very similar order volume for the brand new iPhone 17e.

You might ask: but doesn't Apple try to escape this extreme dependence on Samsung? The truth is that it tries, very hard. A few years ago, the Cupertino company made a huge effort to increase BOE's production share and, in this way, not put all its eggs in the basket of its biggest rival in the smartphone market. The big problem they ran into was that Apple's requirements are legendary, and the Chinese manufacturer BOE simply couldn't deliver the consistent quality demanded, failing rigorous quality control tests multiple times over the years. Without viable alternatives, Apple was forced to swallow its pride and increase the order again for Samsung, which remains arguably the largest and most innovative OLED panel manufacturer on the planet.


What do the iPhone 17e's screen and internals offer?...In practice, it's a 6.1-inch OLED panel that guarantees incredibly vibrant colors and absolute blacks, achieving a peak brightness of 1,200 nits. However, there's a technical detail that has generated immense controversy within the community: the refresh rate remains stubbornly stuck at the old 60Hz. At a time when almost all competing Android phones already offer 120Hz for immaculate fluidity when scrolling or playing games, Apple continues to reserve this feature (which it calls ProMotion) exclusively for its most expensive Pro line models.

To compensate for this visual limitation, the phone comes "armed" with tremendous processing power. Inside, it features the powerful A19 processor built with 3-nanometer technology, 8 GB of RAM, and an excellent photographic system that includes a 48 MP main camera equipped with optical image stabilization (OIS) and a 12 MP front camera for your selfies.

The great duel: iPhone 17e vs. Galaxy A56...If you're doing the math, it's impossible not to compare the new iPhone 17e with its main direct rival and screen "cousin": the Samsung Galaxy A56. In the United States, the basic 256GB version of the iPhone 17e has an official price of US$599, making it about US$50 more expensive than the equivalent version of the Galaxy A56. Which one should you choose? It all depends on what you value most in your daily life.

If you're looking for absurdly fast and long-lasting performance (thanks to the superior A19 chip), top-notch quality in main and face photos, vital access to the satellite emergency SOS system, and the convenience of wireless charging, the iPhone is clearly the way to go. On the other hand, if you can't do without a much smoother and slightly larger screen, if you love taking group photos with a dedicated ultra-wide-angle camera, if you need a giant battery that never lets you down, and if you prefer faster charging, the Galaxy A56 ends up delivering more value in these specific aspects.

The final decision is in your hands and your wallet, but it's somewhat ironic to know that, whichever operating system you choose, Samsung invariably ends up making a little bit of your money.

mundophone

Monday, March 9, 2026


TECH


Battle of the Titans: does the iPhone 17 Pro beat the Galaxy S26 Ultra in camera performance?

The eternal war between Apple and Samsung has just gained another exciting chapter. If you're a fan of mobile photography and were expecting the brand-new Samsung Galaxy S26 Ultra to completely crush the iPhone 17 Pro (which has been on the market since September 2025), you might have to moderate your expectations a bit. The renowned experts at DxOMark have just released their initial tests for the cameras of the new South Korean "monster," and the results are truly surprising.

Despite Samsung making gigantic and highly anticipated improvements to its hardware, Apple seems to continue holding the crown in terms of consistency and precision. Let's delve into the technical details to understand exactly what's happening.

On paper, the Galaxy S26 Ultra's spec sheet looks quite similar to that of its predecessor, the S25 Ultra. Samsung decided to play it safe in one part of the setup, keeping the exact same 50MP ultra-wide-angle camera (with the 1/2.52-inch sensor and f/1.9 aperture) and the same 10MP telephoto lens for its usual 3x zoom. But the real revolution, and where the Asian brand spent the bulk of its budget, was in the main and long-range zoom cameras.

The company's star attraction remains the 200MP main sensor, but now with a phenomenal trick up its sleeve: an incredibly bright f/1.4 aperture (replacing the old f/1.7). For you, in everyday use, this translates to something simple but vital: this lens lets in about 47% more light to the sensor. In theory and practice, this means that your night photos, or shots taken in very enclosed and dark environments, will have much more detail and much less of that annoying "grain" (digital noise) that usually ruins night photos.
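The 47% figure follows directly from how f-numbers work: the light a lens gathers scales with its aperture area, which is proportional to the square of the inverse f-number. A quick check:

```python
old_f, new_f = 1.7, 1.4
gain = (old_f / new_f) ** 2            # ratio of aperture areas
print(f"{(gain - 1) * 100:.0f}% more light")   # prints "47% more light"
```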

Another drastic change occurred in the 5x periscope telephoto camera, which now has 50 MP and an aperture of f/2.9. Samsung introduced a more compact internal design called ALoP (Adaptive Lens on Prism). If you like taking photos with an artistically blurred background, you'll notice that the light points in the background (the famous bokeh effect) now appear much rounder, smoother, and more natural, losing that square and artificial look of previous models. The downside of this new lens is that the minimum focusing distance has increased to 52 centimeters. This means you'll have a little more difficulty if you want to take macro photos or focus on objects that are very close to the phone's lens.

Why does the iPhone 17 Pro continue to win? Exhaustive DxOMark tests confirm that these Samsung tweaks make a real difference. The S26 Ultra really captures cleaner night photos and presents much more natural and faithful skin tones than last year's model. Portraits are also more balanced and pleasing. So, why does the iPhone 17 Pro keep winning the tug-of-war? The secret lies in the surgical precision of Apple's software.

The DxOMark team noted that the Galaxy S26 Ultra's autofocus still stutters and occasionally struggles to detect moving faces. In addition, in portrait mode, Samsung still creates some artificial cutouts (so-called artifacts) around the subject, occasionally blending the person's hair with the blurred background. The iPhone 17 Pro continues to deliver slightly cleaner images without incident in the most extreme and challenging low-light conditions. When you take a portrait with the iPhone, the computational separation between the person and a complex background is virtually immaculate and instantaneous.

To conclude the analysis, what is clear to consumers is that Samsung has done a remarkable technical job in evolving its photographic system, delivering an undeniably luxurious piece of equipment for any photography enthusiast. However, Apple's relentless consistency, especially in how its software processes difficult images and focuses on faces without hesitation, continues to give it a small but decisive technical advantage in the coveted DxOMark tests. The final choice will always be yours, depending purely on whether you prefer the brutal zoom versatility offered by Samsung or the consistent "point and shoot" reliability of the iPhone.

by mundophone



DIGITAL LIFE




AI fake-news detectors may look accurate but fail in real use, study finds

A dubious link from a friend. A headline too sensational to be true. A video that seems fake but you can't be sure. As online misinformation grows harder to detect, new artificial-intelligence tools promise to help us separate fact from fiction. But do they actually work?

Not really, according to Dorsaf Sallami. For her doctoral research at Université de Montréal's Department of Computer Science and Operations Research, she examined the limitations of AI systems designed to detect fake news.

Her conclusion: these tools have significant flaws that their technical performance often masks.

She detailed her findings in a paper published last fall in the proceedings of an international conference on AI, ethics and society, co-authored with her supervisor Esma Aïmeur and professor Gilles Brassard.

A mirror, not a fact-checker..."Current AI systems for detecting fake news are built on a fundamental misconception," Sallami said. "When AI flags content as false, it doesn't fact-check as a journalist would. It calculates probabilities based on its training data."

In other words, these systems don't check the facts against reality. They only reflect what they've been shown, like a mirror, complete with all the biases and gaps in their training data.

Sallami finds it paradoxical that tech giants are pouring resources into these tools. Meta is labeling content that passes existing fact-checkers, Google has launched a Gemini-based prototype, and X is using Grok to analyze information on its platform in real time.

"The arsenal is impressive, but what good is a system that boasts 95% accuracy in the lab but fails under real-life conditions, especially if it violates users' privacy, is biased against some media outlets, and can be weaponized to censor political opposition?" Sallami asked.

Effectiveness is typically measured against technical benchmarks under controlled conditions. It's a bit like judging a car by its top speed, without considering safety, affordability or emissions, she said.

Who decides what's true? Sallami points to another critical issue: the lack of consensus over what constitutes misinformation.

"To train a system to distinguish fact from fabrication, you have to feed it thousands of examples labeled true or false," she explained. "For simple tasks, like telling a cat from a dog, the labels aren't controversial. But when it comes to fake news, even experts disagree."

Sallami calls this the "ground truth problem."

"AI systems are trained using labels provided by fact-checking organizations, but their methods often lack transparency," she said. "Some are for-profit businesses, making the process even more opaque. The technological edifice is built on foundations that are shakier than they appear."

The rise of large language models—the technology behind ChatGPT and Gemini—also helps the creators of fake news mimic credible sources more easily than ever before. As a result, systems trained on misinformation strategies just a few months ago may be unable to detect the latest ruses.

Built-in bias...The biases embedded in AI fake-news detection systems are another major flaw, according to Sallami. She found that, when gendered language appears in texts, some models are more likely to consider women to be purveyors of disinformation. Others are prejudiced against non-Western sources or reproduce political and geographic biases. Sallami considers these biases particularly pernicious because they go largely unnoticed.

"While the industry fixates on improving accuracy, few researchers are examining the discrimination these systems can propagate," she said. "Equity shouldn't be an afterthought, secondary to performance; it must be an integral part of performance."

Her thesis proposes concrete methods for measuring and correcting bias, including CoALFake, a framework she developed that helps a detector trained in one area adapt to new domains—such as scientific or commercial disinformation—rather than starting from scratch.

To address all these issues, Sallami argues for a socially responsible evaluation framework.

"Instead of judging systems solely on accuracy, we must also consider equity, transparency, privacy and real-world usefulness for citizens," she said.

She also argues for giving user feedback greater weight, collaborating with journalists, social scientists and legal experts, and rejecting the false dichotomy between accuracy and social responsibility.

Aletheia: A new tool...In another paper based on her doctoral dissertation, Sallami noted that research has been focused on developing AI detection models, many of which are designed for people with technical expertise.

While these models are necessary, they aren't enough, she argues: we also need tools that are accessible to the end users.

Sallami wasn't content to simply point out the problem; she set out to solve it by designing Aletheia, a browser extension that lets users check online content themselves.

With a few clicks, users can verify the credibility of a news item, view fact-checks from trusted organizations and discuss with other users.

According to Sallami, what makes Aletheia different is its philosophy: instead of just labeling content "true" or "false," it explains why, presents evidence from available online sources, and lets users judge for themselves rather than blindly trusting the underlying model.

"The extension has three modules," Sallami explained. "VerifyIt, the core of the system, automatically consults external sources and delivers a verdict accompanied by plain-language explanations. Users can see the reasons why an item may be suspect and the sources on which the system is based."

In tests using claims verified by PolitiFact, an American non-profit operated by the Poynter Institute, VerifyIt achieved about 85% reliability, outperforming many existing tools.

Aletheia also offers a live feed of recent fact checks and a forum where users can share their analyses and comment on those contributed by others.

"What we have presented here is only the tip of the iceberg," Sallami concluded. "AI must earn public trust, not just ace technical tests. Future efforts should resist the lure of fully automated fact-checking and instead develop systems that work with and for human judgment."

Provided by University of Montreal 

Sunday, March 8, 2026

 

SAMSUNG


Samsung’s 2nm Exynos 2700 chip is rushing to production

Leaked production timelines reveal Samsung is fast-tracking its 2nm Exynos 2700 chip to aggressively challenge Qualcomm’s grip on the flagship smartphone market. Analysts project a massive 50% adoption rate for next year's Galaxy S27 lineup, but earlier benchmarks raise the question: can the hardware live up to the hype?

As early as January this year, a Geekbench listing from a prominent tipster implied that Samsung had already begun testing its new Exynos chip. Naturally, that leak was treated with skepticism, but fresh reports now seem to provide the claim with some degree of legitimacy.

According to Yonhap News Agency, the architecture for the Exynos 2700 was already fully designed by late 2025. Testing is currently underway at Samsung MX with production-ready samples expected between May and June, well ahead of the next Galaxy S series launch.

At this point, it is an open secret that Samsung intends to reclaim market share from Qualcomm's Snapdragon processors, which power a dominant 75% of the Galaxy S26 lineup.

To achieve cost savings estimated at over $7.8 billion (11 trillion won), the tech giant is betting on the second-gen Samsung Foundry 2nm process (SF2P) to deliver the sort of yield and efficiency that industry heavyweights like TSMC are known for. The Exynos 2700 is also likely to improve on the heat management technology used in its predecessor. Consequently, Kiwoom Securities analyst Park Yu-ak projects that dependence on Qualcomm chipsets will shrink to 50% in the Samsung Galaxy S27 series.

But those are just financial targets. The only physical evidence of this chip in the wild remains an ERD board on Geekbench showcasing an unusual 10-core prototype with unimpressive OpenCL scores. Granted, it may well be a spoofed listing, but until a new wave of leaks emerges showcasing the Exynos 2700 actually pushing competitive clock speeds, the burden of proof rests entirely on Samsung. For now, Qualcomm has no reason to sweat.

Yonhap also reports that the first test units of the processor have already been sent from the chip division to the smartphone unit (Samsung MX). The team working on the 2027 flagship phones can thus begin testing the Exynos 2700's full processing power, as well as adapting the body of the devices to improve cooling; sources say the Korean company should use HPB technology to better dissipate the heat generated by the CPU and RAM.

The same sources say Samsung expects to complete development of the Exynos 2700 before June, which would leave plenty of time for mass production on the SF2P node, from which the brand expects significant leaps in performance and, especially, energy efficiency.

The real chances of the Galaxy S27 Ultra being presented in 2027 with the Exynos 2700 platform are high, since the current Exynos 2600 has proven quite competent in initial tests against the Snapdragon 8 Elite Gen 5 for Galaxy.

Of course, at the moment it is still too early to say with certainty that the Galaxy S27 line will be sold exclusively with Exynos.

This is because Moon Sung-hoon (an executive in Samsung's MX Division) himself made it clear that the company will continue working with partners, namely Qualcomm, to "adopt the ideal chip when necessary".

Still, there are analysts who believe in a "mixed launch" with the S27 Ultra having Exynos in some selected markets.

by mundophone


DIGITAL LIFE


Microsoft Alert: fake AI extensions in Chrome and Edge steal ChatGPT and DeepSeek conversations

On March 5, 2026, Microsoft published a security alert about malicious browser extensions that masquerade as legitimate artificial intelligence tools to steal the chat history of ChatGPT and DeepSeek users.

The AI extensions identified by the Microsoft Defender team reached approximately 900,000 installations and were detected in more than 20,000 enterprise organizations.

Microsoft Defender has been investigating reports of malicious Chromium‑based browser extensions that impersonate legitimate AI assistant tools to harvest LLM chat histories and browsing data. Reporting indicates these extensions have reached approximately 900,000 installs. Microsoft Defender telemetry also confirms activity across more than 20,000 enterprise tenants, where users frequently interact with AI tools using sensitive inputs.

The extensions collected full URLs and AI chat content from platforms such as ChatGPT and DeepSeek, exposing organizations to potential leakage of proprietary code, internal workflows, strategic discussions, and other confidential data.

At scale, this activity turns a seemingly trusted productivity extension into a persistent data collection mechanism embedded in everyday enterprise browser usage, highlighting the growing risk browser extensions pose in corporate environments.

How malicious AI extensions work...The extensions were distributed through the Chrome Web Store with names and descriptions that mimicked legitimate AI assistant tools – including references to ChatGPT, DeepSeek, and Claude. Because Microsoft Edge supports Chrome Web Store extensions, a single listing allowed simultaneous distribution across both browsers without additional infrastructure.

After installation, the extensions collected two types of data in the background:

-Complete URLs visited by the user, including internal company websites

-Content of conversations with AI – prompts sent and responses received on platforms such as ChatGPT and DeepSeek

The data was stored locally in encrypted format and periodically sent to servers controlled by the attackers through the domains deepaichats[.]com and chatsaigpt[.]com, using HTTPS connections to blend in with normal browser traffic.
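Until the extensions are removed, one common stopgap - assuming administrator access to the hosts file (/etc/hosts on Linux and macOS, C:\Windows\System32\drivers\etc\hosts on Windows) - is to sink the reported exfiltration domains locally so the browser can no longer reach them. A minimal fragment:

```text
# Sinkhole the exfiltration domains named in the Microsoft alert
0.0.0.0 deepaichats.com
0.0.0.0 chatsaigpt.com
```

On a managed network, the same domains would normally be blocked at the DNS or proxy level instead, which covers every device at once.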

The detail that makes the attack more dangerous...Microsoft identified a deliberately deceptive consent mechanism: even if the user disabled data collection, subsequent updates to the extension automatically reactivated telemetry without clear notification.

Microsoft also recorded cases where browsers with agentic features installed the extensions automatically, without explicit user approval – a reflection of how convincing the names and descriptions presented were.

Persistence was ensured by the normal behavior of browser extensions: the extension automatically reloaded each time the browser started, without the need for elevated privileges or additional actions.

What may have been exposed...For individual users, the risk includes the exposure of private conversations with AI assistants – which may contain personal, financial, or professional information shared during work sessions.

For companies, the potential impact is more serious: proprietary code, internal workflows, strategic discussions, and confidential data shared with AI tools by employees may have been captured and exfiltrated.

What to do now...Microsoft recommends the following immediate actions:

-Review the extensions installed in Chrome and Edge and remove any unknown or unused extensions (in Chrome: chrome://extensions; in Edge: edge://extensions)

-Check whether any installed extension uses the ID fnmihdojmnkclgjpcoonokmkhjpjechg or inhcgfpbfdjbjogdfjbclgolkmhnooop and remove it immediately

-Block traffic to the domains chatsaigpt.com, deepaichats.com, chataigpt.pro and chatgptsidebar.pro

-Install only verified extensions from known publishers with a proven track record

-Enable Microsoft Defender SmartScreen in enterprise environments
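The check for the two extension IDs above can also be scripted: Chrome and Edge install each extension in a subfolder named after its 32-character ID under the profile's "Extensions" directory. The sketch below, in Python, scans such a directory for the IDs from the alert; the default profile paths in the main block are typical locations and may differ on your system.

```python
from pathlib import Path

# Extension IDs named in the Microsoft alert.
MALICIOUS_IDS = {
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop",
}


def find_malicious_extensions(extensions_dir):
    """Return the malicious extension IDs installed under a Chrome/Edge
    'Extensions' profile directory (each installed extension lives in a
    subfolder named after its ID)."""
    root = Path(extensions_dir)
    if not root.is_dir():
        return set()
    return {p.name for p in root.iterdir() if p.is_dir() and p.name in MALICIOUS_IDS}


if __name__ == "__main__":
    # Typical default profile locations (assumptions; adjust for your OS and profile).
    candidates = [
        Path.home() / ".config/google-chrome/Default/Extensions",
        Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",
        Path.home() / "AppData/Local/Microsoft/Edge/User Data/Default/Extensions",
    ]
    for candidate in candidates:
        for ext_id in find_malicious_extensions(candidate):
            print(f"WARNING: malicious extension {ext_id} found in {candidate}")
```

A hit means the profile should be treated as compromised: remove the extension, then assume any AI chat content and browsing history from that browser may have been exfiltrated.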

AI extensions: a growing attack vector...This incident underscores an emerging pattern: as users adopt AI tools in their browsers as part of their work routine, AI assistant extensions become an increasingly attractive attack vector. The trust placed in these tools – and the sensitive data routinely shared with them – makes them a high-value target for attackers willing to invest in compelling and well-distributed extensions.

The full Microsoft alert, with technical indicators of compromise and detection queries for security teams, is available on the Microsoft Security Blog.

mundophone
