Thursday, February 5, 2026

 

DOSSIER


TECH


How big tech companies learned to stop worrying and love the bombs

Until recently, many big tech companies opposed the militarization of AI, but that now seems like ancient history, as they are forging partnerships with arms companies. The prospect of generous Pentagon funding for AI is too tempting to refuse.

In any list of "known unknowns" the world will face in 2026, artificial intelligence will certainly rank near the top. Are predictions of widespread AI adoption, displacing hundreds of millions of workers, about to come true? Will the AI bubble burst? Will the United States or China win the race for "artificial general intelligence"?

Nick Srnicek's book, Silicon Empires, doesn't directly answer any of these questions, but, as the author states, it "offers a map of the terrain we must fight on." By carefully mapping the development of AI within its proper economic and geopolitical context, and encompassing analyses from both the US and China, Srnicek's AI guide can help us maintain a realistic, long-term perspective on the technology's likely trajectory.

Beyond bubbles and chatbots...It's no longer a fringe idea to say that there's an AI bubble; even prominent figures in the industry, such as Jeff Bezos and Bill Gates, have acknowledged it. OpenAI CEO Sam Altman already seems to be preparing his company for a state bailout. One estimate suggests the AI bubble is seventeen times larger than the dot-com bubble and four times larger than the subprime real estate bubble that triggered the 2008 financial crisis. A crisis is clearly brewing.


Srnicek's sober analysis encourages us to look beyond the bubble. That AI is going through a difficult initial period is neither new nor surprising: the history of technological advancement is marked by struggle before success. Furthermore, it is highly unlikely that any crisis will bring down the large technology companies primarily responsible for AI development, given the strength of their market positions and their centrality to global digital infrastructure.

As Srnicek states: "If an AI winter sets in, it is unlikely to be long-lasting. The technology's potential remains too high, and the importance of first-mover advantages is too great, for large technology companies to voluntarily relinquish control over the direction of AI development... thinking in terms of bubbles severely limits the view of AI's impact."

Renewed questions have arisen about the true potential of AI, with skeptics pointing to the slowdown in progress in the latest version of OpenAI's ChatGPT as a case study of the limitations of the "scaling" model that has brought generative AI to this point. For Srnicek, focusing on chatbots like ChatGPT is looking in the wrong direction. Investors place their hopes on the potential of industry-specific AI "agents" that can go far beyond simply answering a question and actually execute actions to achieve a goal—automating workflows across the economy. "Chatbots are an inadequate guide to where AI is heading, and both critics and opponents should ensure they have the right target in mind," he argues.

What is perhaps missing from Srnicek's analysis is an exploration of the macroeconomic conditions under which AI agents could be adopted across the economy. Economist Michael Roberts has convincingly argued that a mountain of "zombie" capitalist companies, kept afloat by cheap credit since 2008, is incapable of investing heavily in AI. The global economy would have to undergo a seismic process of "creative destruction" to forge the space in which new actors willing to fully embrace AI agents can emerge. The development of AI is ultimately conditioned by the dynamics of the capitalist political economy.

The AI Strategies of Big Tech Companies...Srnicek's 2016 book, "Platform Capitalism," stood out for conceptualizing the breadth of digital platform business models that have come to dominate nearly every sector, from "lean" platforms like Uber, which outsource everything except core software, to "industrial" platforms like Siemens, which builds digital hardware and software infrastructure for industry. Similarly, a major strength of "Silicon Empires" is the clarity with which it explains the different strategies big tech companies are adopting in AI. The differences in approach are significant and may ultimately determine which companies win the race to dominate the field.

Artificial intelligence (AI), like the steam engine and electricity, is a general-purpose technology (GPT). All GPTs are characterized by their applicability across the entire economy, and they must be widely diffused before their full value develops. Typically, that value is captured later, when the advances are turned into industry-specific products.

This is why states have historically been fundamental to R&D: they can afford to invest in foundational technologies without aiming for profit, as happened with the internet and semiconductors. In the case of AI, large technology companies lead innovation, but they must operate with profit-seeking business models. The attempt to reconcile these two imperatives has produced four strategies.

First, the infrastructure strategy seeks to dominate the foundations of the AI economy, upon which other companies can build. Amazon and Microsoft are the key players here, consolidating their oligopolistic positions in cloud computing markets. For these companies, the huge investments in data centers are a bet on AI's future growth, positioning them to collect cloud rents from the industry-specific products that will depend on their infrastructure to operate.

For those who benefit from the infrastructure strategy, the more widespread AI is, the better. Microsoft CEO Satya Nadella praised the chatbot from the Chinese company DeepSeek, which has capabilities similar to ChatGPT but at a much lower cost, as a major step towards "ubiquitous" AI. Microsoft partnered with a US education nonprofit, offering free use of the chatbot to teachers "with the goal of integrating the American education system with Microsoft servers."

The second strategy is to lead at the frontiers of AI innovation. OpenAI, Anthropic, and DeepSeek are developers of cutting-edge AI models. For those adopting a pioneering strategy, staying one step ahead of the competition is essential to capturing value, as this innovation advantage is the only thing that can place the company's intellectual property at the center of a broader development ecosystem.

Everything and anything...The challenge facing frontier companies is that innovation costs are enormous, owing to the sheer computing power required. Meanwhile, commercializing these technological advances is fraught with difficulty, and when greater emphasis falls on commercial implementation, research can suffer.

Leading companies are betting on artificial general intelligence (AGI), the Holy Grail of AI that reporter Karen Hao discovered to be a flimsy excuse for OpenAI CEO Sam Altman to dismiss all criticism of his company's business practices. For Srnicek, we should simply understand AGI as an AI model that can be “applied across all sectors.” This would eliminate at once the difficulties that leading AI companies face in capturing value from their innovations due to the need for sector-specific tools. Srnicek describes the potential of AGI as “immense,” but it is important that we remain skeptical about its viability.

The third strategy, the conglomerate strategy, represents an attempt to build sector-specific AI products and dominate markets the way conglomerates of the past did: through ownership and acquisitions. Google is at the forefront of this strategy, having developed as many foundation AI models as its next three largest competitors (OpenAI, Microsoft, and Meta) combined.

Google's quest for AI dominance requires the company to possess capabilities across the entire AI value chain: positioned at the forefront of research, with a solid infrastructure base, and capable of building high-quality products for diverse sectors. The company's launch of a series of AI tools for the healthcare sector in recent years, from personal health to drug development, exemplifies how this strategy is being applied in practice. In China, Huawei is at the forefront of a group of large technology companies that are adopting this comprehensive approach to AI development.

Finally, there is the open strategy, with Meta in the United States and Alibaba and DeepSeek in China as its main practitioners. As the name suggests, the open strategy involves releasing AI models so that other developers can improve them. In the case of Meta's "Llama" models, this falls short of the open-source standard, as significant opacity remains around the training data and the algorithms behind the models. Even so, the model weights are publicly available, which makes it easier for others to access and modify the models.

What advantage does Meta gain from the open strategy? Other large technology companies are building protective walls around their intellectual property, creating an exclusive zone of interaction with selected partners. Meta, on the other hand, manages to build a broad ecosystem around its intellectual property, which naturally attracts researchers and developers. These, in turn, will make their own improvements and discoveries, which "can then be easily incorporated into Meta's internal systems." This is a strategy that can significantly reduce Mark Zuckerberg's company's costs in the long term.

The Rise of the “Technological-Industrial Complex”...In his farewell address in January 2025, Joe Biden warned of the risks of a growing “technological-industrial complex” in the United States. This consciously echoed the words of Dwight Eisenhower as he left the White House in 1961, when he memorably expressed his fears about a “military-industrial complex” that could dominate American democracy.

Like the military-industrial complex, the technological-industrial complex combines powerful vested interests within the state, primarily the War Department, with the largest private market players, which today are the big tech companies. This is a class alliance that has consolidated very recently. As Srnicek points out, Google, Meta, OpenAI, and Anthropic opposed the use of AI tools for military purposes in early 2024. All of these companies changed their positions in less than a year, and some quickly formed partnerships with companies in the defense sector.

The drastic shift in posture is partly due to economic necessity. AI development is expensive, and the military offers the prospect of substantial, long-term funding. But the geopolitical turn has deeper roots. There has been a notable ideological shift among the tech elites in the US, moving away from what Srnicek calls the "Silicon Valley Consensus" toward "technonationalism."

The Silicon Valley Consensus was essentially an accommodation between tech elites and US-led neoliberal globalization. Politicians and tech CEOs shared a belief in technology's ability to create a world of borderless commerce and data, led by the United States. Lax regulation of the tech sector meant that Silicon Valley had little reason to worry about state interference. Abroad, Washington helped keep foreign economies open to American technology and limited the imposition of foreign taxes and regulations on large US technology companies, while the value chains of all the major technology companies stretched from China to the United States, keeping costs low.

What ended the Silicon Valley Consensus was the rise of China, which opened a new constellation of class and interest conflicts. Chinese tech giants began to become real competitors to their American rivals, shifting Silicon Valley's calculations. Meanwhile, since at least Donald Trump's first term, the state has prioritized American technological dominance at the expense of global interconnectedness. This continued during Biden's presidency, with the tightening of sanctions on critical technologies like semiconductors, and, under Trump's second term, flowered into what Srnicek calls a "technonationalist vision of American supremacy and unrestricted innovation."

The level of integration between large technology companies and the state is now undeniable. A $9 billion Pentagon contract for a “joint cloud computing capability for military purposes” includes all the major US cloud players: Amazon, Google, Microsoft, and Oracle. Ties between tech companies and the military have rapidly increased. Srnicek notes that it is no coincidence that the emergence of the tech-industrial complex has coincided with the “war waged by big tech companies against their workers,” many of whom have sought to resist militarization.

The rise of technonationalism in the United States has found a parallel in China. Just as in the US, the elites of the Chinese Communist Party initially adopted a non-interventionist stance toward the emergence of large and powerful digital platforms in China, seeking to encourage the sector's growth. However, with increasing tensions with the United States, Chinese President Xi Jinping has begun to steer tech companies toward state priorities. This involved cracking down on many companies focused on facilitating consumption, such as the sharing economy platforms Meituan and DiDi, while simultaneously pressuring technology companies to contribute to industrial development, as this is the raison d'être of the Chinese Communist Party government.

Thus, in both the US and China, we observe the emergence of a potential new hegemonic order "due to the dismantling of class coalitions between the economic interests of the State, the security interests of the State, and the interests of platform capitalism." Srnicek is cautious about the prospects of this new order consolidating, pointing to countervailing trends that pull against militarized technonationalism and to the relative independence large technology companies retain from the state. But the era of neoliberal globalization has clearly ended, and the resulting fusion of state and big tech around a nationalist vision for AI carries extreme dangers for everyone.

What's next?...In a contest between the United States and China for supremacy in artificial intelligence, which country is more likely to emerge victorious? Srnicek's analysis leans toward the idea that China, despite exhibiting many weaknesses compared to the US, may very well win the technological race.

The reasoning is surprisingly simple: while the US tech industry focuses on innovation, China's priority is adoption, and adoption is likely to prove decisive in the long run, because a general-purpose technology like AI must spread throughout the economy to reach its full potential: "In previous industrial revolutions, major power transitions occurred not because one country monopolized the profits, but because one country excelled at adopting a new technology and used it to dramatically transform its entire economy in terms of productivity and growth. This widespread transformation of the entire economy—and not of a single leading sector—is what allows emerging major powers to overtake and surpass established hegemons."

Regardless of the outcome of this dispute, it is unlikely that technology will decisively divide into two hemispheres, East and West, due to the complex interaction of international value chains. Instead, there will be an "overlap of different geopolitical [technological] layers," with the pursuit of a balance between American and Chinese power being a viable strategy for many countries, albeit challenging to implement.

Unlike Srnicek's 2015 book (co-authored with Alex Williams), *Inventing the Future*, which encouraged the left to embrace automation as part of a post-capitalist vision, *Silicon Empires* avoids developing left-wing policies for AI. Srnicek restricts himself to just two demands: no war between the United States and China, and large technology companies should not be allowed to dominate AI development.

These are useful starting points to guide the left regarding AI, but ultimately, a more ambitious agenda will be needed. Any self-respecting contemporary socialist program must be able to explain what role AI should play in the economy and society, how it should be governed, and what its relationship should be with the state and between states. Regardless of what happens in 2026 with the AI bubble, the political challenges posed by this powerful technology will only increase over time.


Author: Ben Wray is the co-author, with Neil Davidson and James Foley, of Scotland After Britain: The Two Souls of Scottish Independence (Verso Books, 2022).


TECH


Octopus-inspired 'smart skin' uses 4D printing to morph on cue

Despite the prevalence of synthetic materials across different industries and scientific fields, most are developed to serve a limited set of functions. To address this inflexibility, researchers at Penn State, led by Hongtao Sun, assistant professor of industrial and manufacturing engineering (IME), have developed a fabrication method that can print multifunctional "smart synthetic skin"—configurable materials that can be used to encrypt or decrypt information, enable adaptive camouflage, power soft robotics and more.

Using their novel approach, the team made a programmable smart skin out of hydrogel—a water-rich, gel-like material. Compared to traditional synthetic materials with fixed properties, the smart skin enables enhanced multifunctionality, allowing researchers to adjust the gel's dynamic control of optical appearance, mechanical response, surface texture and shape morphing when exposed to external stimuli such as heat, solvents or mechanical stress.

The team detailed their work in a paper published in Nature Communications, where it was also featured in the Editors' Highlights.

According to Sun, principal investigator on the project, the idea for the material was sparked by the natural biology of cephalopods, like the octopus, which can control their skin's appearance to camouflage themselves from predators or communicate with each other.

"Cephalopods use a complex system of muscles and nerves to exhibit dynamic control over the appearance and texture of their skin," Sun said. "Inspired by these soft organisms, we developed a 4D-printing system to capture that idea in a synthetic, soft material."

Sun, who holds additional affiliations in biomedical engineering, materials science and engineering, and the Materials Research Institute at Penn State, described the team's method as 4D printing because it produces 3D objects that can reactively adjust to changes in their environment.

How the halftone smart skin works...The team used a technique known as halftone-encoded printing—which translates image or texture data onto a surface as binary ones and zeros—to encode digital information directly into the material, much like the dot patterns used to print newspaper photographs. This technique allows the team to essentially program the smart skin to change appearance or texture when exposed to stimuli.

These patterns control how different regions of the material respond to their environment, with some areas deswelling or softening more than others when exposed to changes in temperature, liquids or mechanical forces. By carefully designing the patterns, the team can decide how the material behaves overall.

"In simple terms, we're printing instructions into the material," Sun explained. "Those instructions tell the skin how to react when something changes around it."

The team used their new printing method to encode a photo of the Mona Lisa onto their "smart skin" material (left). The photo, which can initially appear hidden in the material, can be revealed by stretching, exposure to heat, exposure to liquid or by adjusting the material from a 2D to a 3D shape (right). Credit: Hongtao Sun

Hiding and revealing information on demand...According to Haoqing Yang, a doctoral candidate in industrial and manufacturing engineering and first author of the paper, one of the most striking demonstrations of the smart skin is its ability to hide and reveal information. To showcase this feature, the team encoded a photo of the Mona Lisa onto the smart skin. When the film was washed with ethanol, it appeared transparent, with no visible image. The Mona Lisa became fully visible, however, after immersion in ice water or during gradual heating.

Although the Mona Lisa was used as a demonstration, Yang explained that the team's printing method allows them to encode any desired image onto the hydrogel.

"This behavior could be used for camouflage, where a surface blends into its environment, or for information encryption, where messages are hidden and only revealed under specific conditions," Yang said.

The team also showed that hidden patterns can be uncovered by gently stretching the material and measuring how it deforms via digital image correlation analysis. This means information can be revealed not just by sight, but also through mechanical deformation, adding another layer of security.

Shape-morphing functions and future uses...The material proved highly malleable—the smart skin could easily transform from a flat sheet into non-traditional, bio-inspired shapes with complex textures, according to Sun. Unlike many other shape-morphing materials, this effect does not require multiple layers or different materials. Rather, these shapes and textured surfaces—like those seen on cephalopod skin—can be controlled by the digitally printed halftone pattern within a single sheet.

Building on these shape-morphing capabilities, the researchers showed that the smart skin can combine multiple functions simultaneously. By carefully co-designing the halftone patterns, the team was able to encode the Mona Lisa image directly into flat films that later emerged as the material transformed into 3D shapes. As the flat sheets curved into dome-like structures, the hidden image information gradually became visible, demonstrating how changes in shape and appearance can be programmed together.

"Similar to how cephalopods coordinate body shape and skin patterning, the synthetic smart skin can simultaneously control what it looks like and how it deforms, all within a single, soft material," Sun said.

According to Sun, this work builds on previous efforts to 4D-print smart hydrogels, also published in Nature Communications. In that study, the team focused on the co-design of mechanical properties and programmable 2D-to-3D shape morphing. In the present work, the team developed a halftone-encoded 4D printing method to co-design more functions within a single smart hydrogel film.

Looking ahead, the team plans to develop a general and scalable platform that enables precise digital encoding of multiple functions into a single adaptive smart material system.

"This interdisciplinary research at the intersection of advanced manufacturing, intelligent materials and mechanics opens new opportunities with broad implications for stimulus-responsive systems, biomimetic engineering, advanced encryption technologies, biomedical devices and more," Sun said.

Provided by Pennsylvania State University

Wednesday, February 4, 2026


TECH


The common mistake of interpreting The Matrix as a film about AI

When The Matrix premiered in the late 1990s, audiences left the theater feeling they had seen something revolutionary. The aesthetics, the pacing, and the iconic scenes marked a generation. Over time, the film came to be remembered as the great fable about machines dominating humanity. But this interpretation, while popular, misses what truly drives the story. Ultimately, The Matrix is less about technology and much more about belief, choice, and voluntary submission.

Within the genre, The Matrix never fit into so-called "hard science fiction." There is no real interest in explaining how the artificial intelligence works, nor any concern with making the technical functioning of the simulation plausible. What exists instead is something else: symbols, archetypes, and a profoundly spiritual narrative.

From the very first minutes, the Wachowski sisters' work is full of religious references. Prophecies, sacrifice, resurrection, betrayal, and redemption do not appear by chance. The human city is called Zion. The figure that guides the protagonist is called the Oracle. And the salvation of the world depends on belief in someone who may — or may not — be “the chosen one.”

In this context, technology is not the theme, but the setting. The Matrix functions as a modern stage to tell a much older story, closer to messianic myths and theological narratives than to treatises on computing.

Neo doesn't win because he understands the system, but because he believes... Neo's arc makes that clear. He doesn't defeat the machines by mastering codes or exploiting technical flaws. His true leap happens when he abandons logic and accepts something less rational: faith.

Victory doesn't come from knowledge, but from the absolute belief that he can transcend the rules. Even the names reinforce this. Trinity, the Oracle, Zion. And Cypher, whose betrayal echoes biblical stories more than programming flaws.

The Matrix doesn't ask "how does this work?" but "what are you willing to believe in?". The central conflict was never between humans and artificial intelligence, but between accepting an imposed role and assuming a meaning that demands sacrifice.

The most important revelation of the saga happens when Neo meets the Architect. There, classic heroism dissolves. There is no unprecedented rupture of the system. There are cycles, reboots, and predictable choices.

The Architect's mistake is not technical, but human. His system fails because it tries to impose perfection. The solution comes from the Oracle, who understands something essential: humans tolerate control as long as they believe they are choosing freely.

That is the real workings of the Matrix. Not force, not violence, but the illusion of choice. People are not prisoners because they cannot escape, but because, for the most part, they do not want to.

A comfortable prison called normality...Seen in this way, the Matrix looks less like a conscious artificial intelligence and more like a social operating system. There are error corrections, agents that maintain order, programs that become obsolete, and others that develop their own desires.

The sequels complicated this structure, but did not negate it. The point remains: the prison is not sustained by fear, but by convenience. Most prefer that everything continues to function, even knowing that something is wrong.

This logic also appears in Zion, the last human city. There, the irony is complete: humans hate the machines, but depend on them to survive. Energy, air, heat — nothing works without technology. The dominance is not unilateral. It is a mutual dependence.

The real warning The Matrix left...Reread today, the saga seems less like a prophecy about rebellious artificial intelligence than an uncomfortable portrait of the human willingness to delegate decisions, thought, and responsibility to systems we don't understand, as long as they guarantee stability.

The Matrix remains relevant not because it talked about machines that think, but because it talked about people who prefer not to think. And perhaps that was, from the beginning, the Wachowskis' real fear: not the revolt of the machines, but the ease with which we accept living inside an illusion — as long as it is comfortable.

The mistake of interpreting The Matrix purely as a film about artificial intelligence is to reduce a work laden with social, identity-related, and philosophical allegories to a simple technological fable about "machines versus humans."

Although AI is the driving force of the plot, it functions as a metaphor for real control systems. Here are the main points that this interpretation ignores:

1. The trans allegory...The directors themselves, Lana and Lilly Wachowski, have confirmed that The Matrix was conceived as an allegory for the transgender experience. The Matrix represents the imposed social norm (such as the gender binary). Neo experiences the conflict of inhabiting a body and an identity (Thomas Anderson) that do not correspond to his inner truth. The red pill symbolizes the awakening to one's own identity and the beginning of a transition that is often painful, but necessary.

2. The simulacrum and consumer society...The film is deeply inspired by Jean Baudrillard's book "Simulacra and Simulation." The trap is not the technology itself, but the idea that we live in a hyperreality where symbols and media images have replaced the real world.

Interpreting it solely as a story about AI ignores the critique of capitalism and the corporate system (represented by Neo's office job and the Agents), which turns human beings into consumable resources (batteries) to keep the system running.

3. The philosophical awakening...The film is a modern reinterpretation of Plato's Allegory of the Cave. The focus is not on the "computer," but on human perception. The mistake is focusing on "who holds us captive" (the machines) instead of "how we free ourselves" (self-knowledge and willpower).

by mundophone


DIGITAL LIFE


Whack-a-mole: US academic fights to purge his AI deepfakes

As deepfake videos of John Mearsheimer multiplied across YouTube, the American academic rushed to have them taken down, embarking on a grueling fight that laid bare the challenges of combating AI-driven impersonation.

The international relations scholar spent months pressing the Google-owned platform to remove hundreds of deepfakes, an uphill battle that stands as a cautionary tale for professionals vulnerable to disinformation and identity theft in the age of AI.

In recent months, Mearsheimer's office at the University of Chicago identified 43 YouTube channels pushing AI fabrications that used his likeness, some of which depicted him making contentious remarks about heated geopolitical rivalries.

One fabricated clip, which also surfaced on TikTok, purported to show the academic commenting on Japan's strained relations with China after Prime Minister Sanae Takaichi expressed support for Taiwan in November.

Another lifelike AI clip, featuring a Mandarin voiceover aimed at a Chinese audience, purported to show Mearsheimer claiming that American credibility and influence were weakening in Asia as Beijing surged ahead.

"This is a terribly disturbing situation, as these videos are fake, and they are designed to give viewers the sense that they are real," Mearsheimer told AFP.

"It undermines the notion of an open and honest discourse, which we need so much and which YouTube is supposed to facilitate."

Central to the struggle was what Mearsheimer's office described as a slow, cumbersome process: a channel cannot be reported for impersonation unless the targeted individual's name or image appears in its title, description, or avatar.

As a result, his office was forced to submit individual takedown requests for every deepfake video, a laborious process that required a dedicated employee.

"AI scales fabrication"...Even then, the system failed to stem the spread. New AI channels continued sprouting, some slightly altering their names—such as calling themselves "Jhon Mearsheimer"—to evade scrutiny and removal.

"The biggest problem is that they (YouTube) are not preventing new channels dedicated to posting AI-generated videos of me from emerging," Mearsheimer said.

After months of back and forth—and what Mearsheimer described as a "herculean" effort—YouTube shut down 41 of the 43 identified channels.

But the takedowns came only after many deepfake clips gained significant traction, and the risk of their reappearance still lingers.

"AI scales fabrication itself. When anyone can generate a convincing image of you in seconds, the harm isn't just the image. It's the collapse of deniability. The burden of proof shifts to the victim," Vered Horesh, from the AI startup Bria, told AFP.

"Safety can't be a takedown process—it has to be a product requirement."...In its response, a YouTube spokesperson said it was committed to building "AI technology that empowers human creativity responsibly" and that it enforced its policies "consistently" for all creators, regardless of their use of AI.

In his recent annual letter outlining YouTube's priorities for 2026, CEO Neal Mohan wrote the platform is "actively building" on its systems to reduce the spread of "AI slop"—low-quality visual content—while it plans to dramatically expand AI tools for its creators.

"Major headache"...Mearsheimer's experience underscores a new, deception-filled internet, where rapid advancements in generative AI distort shared realities and empower anonymous scammers to target professionals with public-facing profiles.

Hoaxes produced with inexpensive AI tools can often slip past detection, deceiving unsuspecting viewers.

In recent months, doctors have been impersonated to sell bogus medical products, CEOs to peddle fraudulent financial advice, and academics to fabricate opinions for agenda-driven actors in geopolitical rivalries.

Mearsheimer said he planned to launch his own YouTube channel to help shield users from deepfakes impersonating him.

Mirroring that approach, Jeffrey Sachs, a US economist and Columbia University professor, recently announced the launch of his own channel in response to "the extraordinary proliferation of fake, AI-generated videos of me" on the platform.

"The YouTube process is difficult to navigate and generally is completely whack-a-mole," Sachs told AFP.

"There remains a proliferation of fakes, and it's not simple for my office to track them down, or even to notice them until they've been around for a while. This is a major, continuing headache," he added.

© 2026 AFP

Tuesday, February 3, 2026


DIGITAL LIFE


The network that no one sees, but decides battles — Starlink's role in the war in Ukraine

In modern conflicts, not everything explodes or makes noise. Some of the most critical decisions happen silently, far from the trenches. Since 2022, connectivity has become as vital as ammunition on the Ukrainian battlefield. At the center of this new chessboard is a constellation of private satellites, operated by a company that is not accountable to governments. And, recently, a statement reignited an uncomfortable debate about power, control, and war in the 21st century.

Since the beginning of the war, Ukraine has relied heavily on Starlink to maintain stable communications in regions where traditional infrastructure has been destroyed. The system has allowed for troop coordination, real-time information exchange, and long-distance drone operation.

What began as a civilian service quickly gained another function. In a conflict where seconds count, having reliable internet can define the success — or failure — of a mission. Therefore, when evidence emerged that Russian forces were using the same system, the problem ceased to be technical and became strategic.

According to Ukrainian authorities, Starlink terminals were being used to guide long-range Russian drones. The accusation sounded like a warning: the same infrastructure that helps defend the country could also be turned against it.

The Ukrainian Minister of Digital Transformation, Mykhailo Fedorov, publicly stated that Kyiv was in direct contact with SpaceX to block the “unauthorized” use of the network. The message was clear: Western technology should not be used for attacks against civilians.

Shortly afterward, Elon Musk wrote on the X platform that the measures taken had worked. In a few words, he confirmed something unprecedented: a private company had actively interfered in the use of critical infrastructure during an ongoing war.

The statement raised more questions than answers. How was this blocking done? In which regions? With what criteria? What was implied, however, was even more relevant: the ability to turn internet access on or off in a war zone is not in the hands of a state.

A power that doesn't go through barracks...This isn't the first time Musk has found himself in the role of unwitting arbiter of conflict. In 2022, he had already acknowledged that SpaceX could limit Starlink's coverage in certain areas, and that he had chosen to restrict its use in specific offensive operations.

This has transformed the network into something unprecedented: a private infrastructure with tactical veto power. In practice, decisions made by executives and engineers can directly affect military operations on the ground.

For Ukraine, the situation is paradoxical. Despite public disagreements between Musk and Ukrainian authorities throughout the conflict, the country remains highly dependent on Starlink. In the short term, there is no alternative with similar reach and resilience. Kyiv needs the network—but doesn't control it.

The episode reveals a structural shift. Contemporary wars are not fought solely by national armies, but by an ecosystem of satellites, software, communication platforms, and private services. Starlink doesn't fire weapons, but it decides who can communicate while they are being fired.

This creates a precedent that is difficult to ignore. If a company can block a country's access to satellite internet in a conflict, inevitable questions arise: who regulates this power? Who defines when it should be used? And with what legitimacy?

For now, SpaceX claims that Russian use has been interrupted. But the discussion is far from over. What is clear is that, on the battlefield of the 21st century, some of the most important decisions are not made in military command rooms—but rather in corporate offices and in orbit around the Earth.

mundophone

 

DIGITAL LIFE


Understanding how feminist AI in Latin America works

Online spaces perpetuate stereotypes about who creates them and the data that is inserted into them — currently, predominantly men. This global phenomenon has been driving the construction of technological alternatives, such as the Feminist AI Network of Latin America and the Caribbean.

The technological literature is full of examples of gender bias. Image recognition systems have difficulty accurately identifying women, especially Black women, which has already resulted in misidentifications with serious consequences in law enforcement.

Voice assistants have long used exclusively female voices, reinforcing the stereotype that women are more suited to service roles.

In image generation, AIs often associate the term "CEO" with a man, while a search for "assistant" returns images of women.

— Artificial intelligence feeds on data that is not neutral: it reflects societies marked by historical inequalities and power relations. If a company wants to achieve fair results, it needs to analyze datasets, verify their representativeness, and actively intervene when this is not the case. Equity doesn't just appear on its own: it needs to be designed — Ivana Bartoletti, an international expert in AI governance and author of a Council of Europe study on artificial intelligence and gender, told the ANSA news agency.

For Bartoletti, the recent case of Grok — Elon Musk's AI that allowed the generation of fake images of naked women and minors, a function that was later discontinued — “shows what happens when the safety and rights of women are not considered in the design of systems.”

— If there are tools to undress women, they will be used. Deepfake nudes are a form of humiliation and control. The implicit message is dangerous: you are online, therefore you deserve this. This is how many women are silenced and abandon the digital space — she explained.

It is in this context that technological alternatives emerge to rethink artificial intelligence and transform it into a space of struggle and shared power.

Feminist AIs...In Latin America and the Caribbean, for example, the Feminist AI Network emerged, supporting dozens of projects focused on transparency and public policies. Tools like AymurAI, Arvage AI, and SofIA apply a gender perspective to legal analysis and expose the biases and discrimination inherent in algorithms.

Afrofeminism has also been reclaiming artificial intelligence as a space for self-determination, with assistants like AfroféminasGPT, trained based on the knowledge and voices of Black people.

— They demonstrate that we can organize ourselves to use AI for the benefit of all, share data collectively, and develop solutions centered on real needs. But the key remains power. The feminist issue in AI is a question of power: women need to have more power. Not on the margins, but at the top of companies and in the spaces where technological policies are decided. We need diversity in decision-making environments, not just among programmers. Artificial intelligence is not just technology, it's a choice about how we want to transform society — concluded Ivana Bartoletti.

by La Nacion — Buenos Aires

Monday, February 2, 2026

 

DIGITAL LIFE


Notepad++ confirms its updater was hijacked by Chinese state-sponsored hackers

Notepad++ reported that its built-in auto-update feature had been hijacked by Chinese state-sponsored hackers from June to September of 2025, and the credentials gathered by the bad actors enabled further exploits until December 2, 2025. In an effort to thwart similar issues moving forward, Notepad++ has moved to a hosting provider "with significantly stronger security practices", in place since Notepad++ version 8.8.9. If you followed an auto-update prompt, or started one through Notepad++, within the vulnerable timeframe, you'll very much want to scan your system for malware.

For existing Notepad++ users, developers advise manually installing version 8.9.1, which includes a secured WinGUp updater, instead of auto-updating through your current version. As a Notepad++ user myself, I was able to install the new version over my old installation without issue, and my system scanned clean before and after doing so. Notepad++ notes that the compromise occurred at the hosting-provider level rather than through vulnerabilities in Notepad++ code itself, but the application still received the aforementioned security upgrades after being moved to a more secure provider, in hopes of preventing a recurrence.

This isn't the first time Notepad++ and its users have been targeted by cybercriminals, but last time it was through "notepad.plus", a "fan" site that actually hosted malicious advertising and tried to infect those looking for the legitimate Notepad++. This time the attack was more direct, though the full scale of the harm done remains unknown. As with recent DarkSpectre stories, it raises concerns about how existing auto-update infrastructure can be exploited, even against legitimate applications. At least Notepad++ was informed of the breach by its old hosting provider and was able to move to a more secure host.

Notepad++, a popular text editor among programmers and technology professionals, was the target of a sophisticated cyberattack that lasted six months. Between June and December 2025, hackers sponsored by the Chinese government managed to hijack the software's update mechanism to distribute malware to specific targets.

The attack involved infrastructure-level compromise that allowed malicious actors to intercept and redirect update traffic destined for the notepad-plus-plus[.]org website.

Don Ho, creator and maintainer of Notepad++, revealed full details of the incident on Monday. The information was shared after an investigation conducted in collaboration with external security experts and the shared hosting provider that was used at the time.

Highly targeted attack...The compromise occurred at the hosting provider level, not through vulnerabilities in the Notepad++ code itself. The attackers gained access to the shared hosting server and, from there, established the ability to selectively redirect update traffic from specific target users to servers controlled by them.

Instead of compromising all Notepad++ users at once, which would have been quickly detected, the hackers chose specific targets.

Traffic originating only from certain users was routed to malicious servers that delivered components disguised as legitimate updates, while other users continued to receive genuine updates normally.

Multiple independent security researchers have assessed that the threat actor is likely a Chinese state-sponsored group, identified as Violet Typhoon, also known as APT31. This group primarily targeted telecommunications and financial services organizations in East Asia.

How the attack was discovered...Security researcher Kevin Beaumont was the first to report, in early December 2025, that some organizations using Notepad++ were being targeted with malicious software updates.

According to Beaumont, hackers linked to China had exploited Notepad++ to gain initial access to the systems of telecommunications and financial services companies in East Asia.

The discovery prompted Don Ho to quickly release Notepad++ version 8.8.9 to address an issue that resulted in WinGUp traffic, the Notepad++ updater, occasionally being redirected to malicious domains.

Specifically, the problem stemmed from how the updater verified the integrity and authenticity of the downloaded update file, allowing an attacker capable of intercepting network traffic between the updater client and the update server to trick the tool into downloading a different binary.
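The class of check that was missing can be illustrated with a short sketch. This is a hypothetical example, not Notepad++'s actual WinGUp code: before launching a downloaded installer, an updater compares its SHA-256 digest against a value pinned through a trusted channel, so a binary swapped in transit is rejected.

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of a downloaded payload."""
    return hashlib.sha256(data).hexdigest()

def verify_download(payload: bytes, expected_hex: str) -> bool:
    """Reject any payload whose digest differs from the pinned one."""
    actual = sha256_hex(payload)
    # Constant-time comparison avoids leaking where the digests diverge.
    return hmac.compare_digest(actual, expected_hex)

# The pinned digest would normally come from signed release metadata,
# not from the same untrusted connection that delivered the installer.
genuine = b"official installer bytes"
tampered = b"attacker-substituted bytes"
pinned = sha256_hex(genuine)

print(verify_download(genuine, pinned))   # True: digests match
print(verify_download(tampered, pinned))  # False: swapped binary is rejected
```

An updater that skips this comparison, or fetches the expected digest over the same interceptable channel, lets a man-in-the-middle substitute an arbitrary binary, which is exactly the weakness the attackers exploited here.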

According to the detailed statement provided by the hosting provider, the shared server where Notepad++ was hosted was compromised until September 2, 2025.

On this date, the server underwent scheduled maintenance where the kernel and firmware were updated. After this update, the suspicious patterns in the logs disappeared, indicating that the attackers lost direct access to the server.

Even though they lost direct access to the server after September 2, the attackers still retained access credentials to the provider's internal services that they had captured during the initial compromise period.

With these credentials, even without directly controlling the server, they were able to continue redirecting some of the traffic destined for the Notepad++ update address to their own malicious servers until December 2, 2025.

The hosting provider confirmed that it found no evidence that other clients on the shared server were targeted.

The attackers specifically searched for the notepad-plus-plus.org domain in order to intercept traffic, demonstrating prior knowledge of existing vulnerabilities in the Notepad++ update verification system.

Remedial measures implemented...To definitively resolve the problem, Don Ho took several concrete steps. First, he migrated the entire Notepad++ website to a new hosting provider with significantly more rigorous security practices.

Next, he enhanced WinGUp in version 8.8.9 to verify both the digital certificate and the signature of the downloaded installer. Digital certificates and signatures are cryptographic mechanisms that ensure that a file actually came from the person who claims to have created it and that it has not been altered.

Furthermore, the XML returned by the update server is now signed using XMLDSig, a digital signature standard for XML documents. Certificate and signature verification will be mandatory starting with the next version 8.9.2, scheduled to be released approximately one month after the announcement.
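The idea behind signing the update manifest can be sketched as follows. This is a stand-in, not the real mechanism: XMLDSig uses public-key signatures over canonicalized XML, while this example uses a stdlib HMAC with a hypothetical shared key purely to keep it self-contained. The principle is the same: the client rejects update metadata whose signature does not verify.

```python
import hashlib
import hmac

# Hypothetical signing key; real XMLDSig relies on asymmetric keys so the
# client only ever holds the public half.
KEY = b"hypothetical-server-signing-key"

def sign_manifest(xml: bytes) -> str:
    """Server side: produce a signature over the update manifest."""
    return hmac.new(KEY, xml, hashlib.sha256).hexdigest()

def verify_manifest(xml: bytes, signature: str) -> bool:
    """Client side: accept the manifest only if the signature checks out."""
    expected = sign_manifest(xml)
    return hmac.compare_digest(expected, signature)

manifest = b"<GUP><Version>8.8.9</Version></GUP>"
sig = sign_manifest(manifest)

print(verify_manifest(manifest, sig))  # True: untouched manifest accepted
# A manifest altered in transit (e.g. pointing at an old, vulnerable
# version) no longer matches the signature and is rejected.
print(verify_manifest(manifest.replace(b"8.8.9", b"0.0.1"), sig))  # False
```

With the manifest signed, an attacker who can redirect traffic can still serve bytes, but can no longer serve metadata the client will accept, which closes the redirect-and-substitute path described above.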

Supply chain attacks...This incident falls under a category of attacks known as software supply chain attacks. This type of attack does not directly target the program that users use, but rather the infrastructure that distributes updates to that program.

State-sponsored groups, especially from China, North Korea, and Russia, have shown increasing interest in compromising software supply chains as a way to gain access to target organizations.

Instead of attempting to directly infiltrate organizations' networks, attackers find vulnerable points in the supply chain and use this as an initial entry vector.

Don Ho published full details of the investigation and took public responsibility for what happened, even though technically the failure was at the hosting provider.

"I deeply apologize to all users affected by this hijacking," Ho wrote in the official statement. He concluded by saying he believed the situation had been completely resolved with all the changes and security reinforcements implemented, but maintained a humility appropriate for someone who had just dealt with a state-sponsored hacking attack.

Notepad++ users should ensure they are running at least version 8.8.9 of the software, which includes critical security fixes for the update mechanism.

mundophone
