Friday, February 6, 2026


DOSSIER


DIGITAL LIFE


How big tech killed online debate

The Washington Post, under billionaire owner Jeff Bezos, has just laid off 300 journalists, shuttering entire sections including its acclaimed sports section and getting rid of all its Middle East reporters. Particularly horrifying to me is the shuttering of the book review. This is part of a trend. The Associated Press ended book reviews last year and the (awful) New York Times Book Review is now just about the only game in town, at least for newspapers. I believe that most of the knowledge worth having is found in books, and book critics have a vital function in introducing the public to important books, criticizing bad books, and helping literary culture thrive. I would never claim the Washington Post Book World section was singlehandedly sustaining the country’s intellectual lifeblood (although I will deeply miss the often-devastating writing of Becca Rothfeld, one of the few book critics in the country who actually criticizes books). But some part of me feels this country is doomed unless it has book reviews.

I realize I’m going to sound like a bit of a snooty liberal—when are you going to quit Current Affairs and join the Atlantic?, the letters will read—but I’m romantic about the World of Ideas and the importance of debate to a healthy society. I realize that winning the battle of ideas does not win the class struggle, and the pen is no match for the sword in face-to-face combat, but I am scared to live in a society whose members are incapable of having deep discussions and arguments.

There has been a change, at least online, over the last 10 or 15 years. I remember a time, back when I was a baby blogger (a blogger in his infancy, that is, not a blogger on matters pertaining to babies), when the internet seemed full of heady intellectual argument. On Facebook, we had arguments back and forth for days over foreign policy, healthcare, religion, nationalism, gender, environmentalism. In its early days, Current Affairs itself had a Facebook forum that was filled with lively debate, and I used to get long letters-to-the-editor disputing points made in our articles. There were particularly acrimonious discussions of religion—Scott Alexander has traced the collapse of New Atheism, which populated endless discussion forum threads with arguments about the existence of God, the validity of evolution, and the truth (or otherwise) of the Bible. I distinctly remember political bloggers going back and forth “fisking” each other’s posts, dissecting their arguments line by line.

Much of this was pointless, and I have no nostalgia for New Atheism, which was often tinged with Islamophobia and neoconservative foreign policy views. (The old atheism still suits me just fine, thanks very much.) There’s a cartoon from this time depicting the classic online-debate enthusiast who cannot go to bed because “Someone is wrong on the internet.” At the time, it seemed ridiculous that people spent so much time writing long arguments explaining why strangers were in error. And even today, I am not pining for some Golden Age of Internet Argument.

But I’ve come to wonder whether the only thing worse than arguing on the internet is not arguing on the internet. Something happened over time, and I think it coincided with the rise of Donald Trump and the emergence of Twitter, Instagram, and TikTok as the social media platforms of choice. A lot of people just stopped bothering to defend their ideas against people who disagree with them. The arguments dried up.

Big Tech bears its share of the blame here. Facebook intentionally reduced the prominence of political content on its platform in 2021, ostensibly to combat misinformation, but the change had the effect of disincentivizing popular participation in dialogue on issues of major importance. Twitter and Instagram both discourage linking to outside websites, meaning that people are more likely to keep consuming the bite-sized thoughts on these platforms than to check out an extended essay that might get them thinking more sophisticated thoughts. Some of the killing is more direct, like Jeff Bezos simply firing book reviewers, but I think that if our Tech Overlords wanted to construct algorithms that encouraged critical thought and in-depth reading, they could certainly do so. Facebook or Twitter could encourage its users to join a reading group, suggest books they might like, prioritize book reviews and essays in their feed. Instead, in part because the manchild Elon Musk is incapable of having a thought deeper than a single tweet, the rest of us have been sucked into the same world. The transition from forums and blogs to “social media” and video has disfavored longform writing, and it is a transition that has been engineered by massive companies. With the rise of AI, which allows people to avoid formulating thoughts altogether and let the machines do it for them, it seems to only be getting worse.

This is a difficult phenomenon to write about, because I’m not quite sure how to prove it or quantify it (suggestions welcome), but I know there has been a shift here, because I’ve experienced it firsthand over my 18 years as an online writer. I became so used to defending my ideas against critics, and then gradually the critics stopped writing criticism. When my book Responding to the Right: Brief Replies to 25 Conservative Arguments came out in 2023, I expected a barrage of furious negative reviews from conservatives. Instead there was… silence. I spent two years meticulously compiling my objections to right-wing positions, expecting annoyed rebuttals exposing my supposed fallacious reasoning and poor sourcing, and instead the book just disappeared.

This may also be related to what Jason Myles describes in a recent Current Affairs piece as the pro-wrestling-ification of American politics. It is not that the Obama era was truly a time of cerebral and sober-minded discourse in which every American enacted the Lincoln-Douglas debates in their dealings with their fellow members of the electorate. Obama himself was a politician who ascended to power based on image, rather than substantive policy commitments. But in part thanks to Trump—who is himself a creature of Twitter, always expressing himself in short blurts of outrage or braggadocio—we seem to be drifting ever more into a world where ideas simply do not matter.

I’ll give you an example: Back in the Occupy Wall Street era, people were not only pissed off about the bailouts of the big banks. They were also having important philosophical discussions about fairness. David Graeber’s book Debt posed the question of whether and when debts are moral obligations, and there was voluminous discussion about it. Some said he was excavating important insights, some said he was full of shit. Thomas Piketty’s ponderous Capital in the Twenty-First Century got a similar reaction. But I can’t remember the last time a book of ideas made a splash. (Although Kohei Saito’s Slow Down: The Degrowth Manifesto was huge in Japan.)

Again, I need to avoid being too romantic. But I grew up on big public debates (Galloway v. Hitchens, Chomsky v. Dershowitz), and what they taught me is that if you believe something, you need to be prepared to defend it. You need to have evidence for it. You need to know the other side’s position, and be able to refute that. But now, it seems we are not just in a post-truth era, we are in a post-argument era, where everyone realizes that the thing that matters is not whether you make sense but whether you have power. We are in a time, for example, when RFK Jr. is the head of Health and Human Services even though he is demonstrably ignorant about statistics and therefore pushes falsehoods that endanger public health. When he was campaigning, Current Affairs published an article exposing some of his errors. Others did the same. Did he refute or rebut any of it? No. Did it matter? No, now he’s in power and working tirelessly to bring measles back to America.

But this is not just a right-wing phenomenon. It’s true that fascists believe you don’t need an argument when a bullet will do, but I think many on the left have also given up on making arguments. I’ve been trying to do research for an article on antivax talking points, and one thing that has struck me is that while there are many (often self-published) books laying out the case against vaccines, there are almost no blogs or articles reviewing (or “fisking”) these books and explaining where they go wrong. When I wrote about Thomas Sowell, I noted the failure of leftists to engage with his arguments, which furthers his ability to claim that he’s not responded to because progressives fear the devastating power of his logic.

I have had to resist the urge to give up on arguments myself. I used to write 10,000-word deconstructions of right-wing pseudointellectuals like Charles Murray and Jordan Peterson. Now the right doesn’t even bother to supply pseudointellectuals. I used to write articles on philosophical questions like: is it immoral to possess wealth and why? People read those articles. Now, I feel like it’s not worth discussing.

It’s funny: I’m actually turning away from a position I used to hold, because the facts have changed around me. A few years ago, I used to discuss the question of whether philosophy could be justified in a time of crisis. That is to say: when authoritarianism is on the rise, shouldn’t we be out in the streets resisting, not sitting around in Capital reading groups or debating questions like what is justice or what is the optimal rate of taxation? And I still believe that politically urgent times call for engaged activism and that it’s not morally acceptable to withdraw from politics. But I also find myself craving debate again. I find myself dusting off my old books of political philosophy, watching G.A. Cohen discussing whether socialism is an ethical imperative, mentally wrestling with Ayn Rand’s defense of the heroic capitalist creator. I want to think again, and in the second Trump term I feel the pressures toward brain rot growing stronger and stronger. I never thought the right’s intellectuals (like Rand) were particularly intelligent, but at least they put forth claims and defended them. They didn’t just punch you in the face. I do not share Ezra Klein’s view that the late Charlie Kirk was “practicing politics the right way” (he was an ignorant bigot who ran a dark money network) but he was willing to have drawn-out public arguments with socialists like Ben Burgis and Briahna Joy Gray. And I do fear a world in which someone like Kirk is “refuted” with a bullet to the neck rather than through persuasion.

Debate has been crippled, perhaps unintentionally, by the internet's design, and the endless conflict of unflinching ideologues has had widespread consequences beyond the screen, as the deeply sensationalist character of contemporary politics attests. But we are not doomed to this state of affairs. We can arrest the decline of debate.

Firstly, internet anonymity needs to be addressed. The internet can be used as a medium for criminality and hate speech, and while we would find it outrageous for people to commit such acts in person while concealing their identities, online we accept, and sometimes even defend, this separation from accountability. Holding individuals to account would not only reduce the grimmest online behavior but also encourage the humility and basic decency that come with personal responsibility.

Secondly, we need to use the internet less. Yes, I am suggesting that, to fix internet debate, we need to occasionally steer clear of the internet altogether. Internet arguments descend into one-line insults and meaningless quibbles partly because there is simply so much internet: the fall of Rome on one tab, a video of the world burning on the next, one of your painfully loud echo chambers after that. We type, like, and move on, endlessly. The internet is practically infinite, seemingly far too vast for any argument that rises above jabs at other people's appearances or beliefs. So let's, at least sometimes, step away from the vastness of it all.

When you avoid considering other viewpoints because you assume, often rightly, that you'll be plunged into the toxic vitriol festering within online debate, something has gone wrong. People are closed off from ideas, an unintended result of the internet's design and its corrosive effects on human behavior. This has become a mass phenomenon, eroding the foundations of liberal free thought.

The closest thing we have to public debate right now is the maddening spectacle of Jubilee’s “Surrounded” videos, which turn debate into a kind of game show where you have to argue against the clock. But where does someone go now for sustained engagement with ideas? I never thought I’d miss the world where people were writing ten-paragraph political posts back and forth until three in the morning. But when I think about a healthy democracy, well, it involves argument and debate. And I didn’t miss arguments until I saw them start to disappear.

by Nathan J. Robinson (https://x.com/NathanJRobinson)

Thursday, February 5, 2026

 

DOSSIER


TECH


How big tech companies learned to stop worrying and love the bombs

Until recently, many big tech companies opposed the militarization of AI, but that now seems like ancient history, as they are forging partnerships with arms companies. The prospect of generous Pentagon funding for AI is too tempting to refuse.

In any list of "known unknowns" that the world will face in 2026, artificial intelligence will certainly be among the top ones. Are predictions of widespread AI adoption, replacing hundreds of millions of workers, about to come true? Will the AI bubble burst? Will the United States or China win the race for "artificial general intelligence"?

Nick Srnicek's book, Silicon Empires, doesn't directly answer any of these questions, but, as the author states, it "offers a map of the terrain we must fight on." By carefully mapping the development of AI within its proper economic and geopolitical context, and encompassing both the US and China in its analysis, Srnicek's guide to AI can help us maintain a realistic, long-term perspective on the technology's likely trajectory.

Beyond bubbles and chatbots...It's no longer a fringe idea to say that there's an AI bubble, as this has been acknowledged even by important figures in the industry, such as Jeff Bezos and Bill Gates. OpenAI CEO Sam Altman seems to already be preparing his company for a state bailout. One estimate of the AI bubble suggests it is seventeen times larger than the dot-com bubble and four times larger than the subprime real estate bubble that triggered the 2008 financial crisis. A crisis is clearly brewing.

Srnicek's sober analysis encourages us to look beyond the bubble. That AI is going through a difficult early period is neither new nor surprising: the history of technological advancement is marked by struggle before success. Moreover, it is highly unlikely that any crisis will bring down the large technology companies primarily responsible for AI development, given the strength of their market position and their intrinsic importance to global digital infrastructure.

As Srnicek states: "If an AI winter sets in, it is unlikely to be long-lasting. The technology's potential remains too high, and the importance of first-mover advantages is too great, for large technology companies to voluntarily relinquish control over the direction of AI development...thinking in terms of bubbles severely limits the view of AI's impact."

Renewed questions have arisen about the true potential of AI, with skeptics pointing to the slowdown in progress in the latest version of OpenAI's ChatGPT as a case study of the limitations of the "scaling" model that has brought generative AI to this point. For Srnicek, focusing on chatbots like ChatGPT is looking in the wrong direction. Investors place their hopes on the potential of industry-specific AI "agents" that can go far beyond simply answering a question and actually execute actions to achieve a goal—automating workflows across the economy. "Chatbots are an inadequate guide to where AI is heading, and both critics and opponents should ensure they have the right target in mind," he argues.

What is perhaps missing from Srnicek's analysis is an exploration of the macroeconomic conditions under which AI agents could be adopted across the economy. Economist Michael Roberts has convincingly argued that a mountain of "zombie" capitalist companies, kept afloat by cheap credit since 2008, is incapable of investing heavily in AI. The global economy would have to undergo a seismic process of "creative destruction" to forge the space in which new actors willing to fully embrace AI agents can emerge. The development of AI is ultimately conditioned by the dynamics of the capitalist political economy.

The AI Strategies of Big Tech Companies...Srnicek's 2016 book, "Platform Capitalism," stood out by conceptualizing the breadth of digital platform business models that have begun to dominate nearly every sector, from "lean" platforms that outsource everything except core software, like Uber, to "industrial" platforms like Siemens, a company that builds digital hardware and software infrastructure in industry. Similarly, a major strength of "Silicon Empires" is the clarity with which it explains the different strategies that big tech companies are adopting in the field of AI. The differences in approach are significant and may ultimately determine which companies will win the race to dominate AI.

Artificial intelligence (AI), like the steam engine and electricity, is a general-purpose technology (GPT). All GPTs are characterized by their applicability across the economy, requiring widespread dissemination to develop. Typically, the value of technological advancements is captured later, when they are transformed into industry-specific products.

This is why states have historically been fundamental to R&D, as they can afford to invest in advances in general-purpose technologies without aiming for profit. This was the case with the internet and semiconductors. In the case of AI, large technology companies lead innovation, but they need to operate with business models that aim for profit. The attempt to reconcile these two aspects has led to the emergence of four strategies.

First, the infrastructure strategy seeks to dominate the foundations of the AI economy, upon which other companies can build. Amazon and Microsoft are key players in this scenario, consolidating their oligopolistic positions in cloud computing markets. For these companies, the huge investments in data centers represent an investment in the future growth of AI, as they prepare to collect cloud rents from industry-specific products that will depend on their infrastructure to operate.

For those who benefit from the infrastructure strategy, the more widespread AI is, the better. Microsoft CEO Satya Nadella praised the chatbot from the Chinese company DeepSeek, which has capabilities similar to ChatGPT but at a much lower cost, as a major step towards "ubiquitous" AI. Microsoft partnered with a US education nonprofit, offering free use of the chatbot to teachers "with the goal of integrating the American education system with Microsoft servers."

The second strategy is to lead at the frontiers of AI innovation. OpenAI, Anthropic, and DeepSeek are developers of cutting-edge AI models. For those adopting a pioneering strategy, staying one step ahead of the competition is essential to capturing value, as this innovation advantage is the only thing that can place the company's intellectual property at the center of a broader development ecosystem.

Everything and anything...The challenge faced by leading companies is that the costs of innovation are enormous due to the amount of computing power needed to drive AI innovation. Meanwhile, the task of commercializing these technological advances is fraught with difficulties, and when greater emphasis is placed on commercial implementation, research can be hampered.

Leading companies are betting on artificial general intelligence (AGI), the Holy Grail of AI that reporter Karen Hao discovered to be a flimsy excuse for OpenAI CEO Sam Altman to dismiss all criticism of his company's business practices. For Srnicek, we should simply understand AGI as an AI model that can be “applied across all sectors.” This would eliminate at once the difficulties that leading AI companies face in capturing value from their innovations due to the need for sector-specific tools. Srnicek describes the potential of AGI as “immense,” but it is important that we remain skeptical about its viability.

The third, the conglomerate strategy, represents an attempt to build sector-specific AI products, aiming to dominate the market as the conglomerates of the past did: through ownership and acquisitions. Google is at the forefront of this strategy, having developed as many foundational AI models as its next three largest competitors (OpenAI, Microsoft, and Meta) combined.

Google's quest for AI dominance requires the company to possess capabilities across the entire AI value chain: positioned at the forefront of research, with a solid infrastructure base, and capable of building high-quality products for diverse sectors. The company's launch of a series of AI tools for the healthcare sector in recent years, from personal health to drug development, exemplifies how this strategy is being applied in practice. In China, Huawei is at the forefront of a group of large technology companies that are adopting this comprehensive approach to AI development.

Finally, there is the open strategy, with Meta in the United States and Alibaba and DeepSeek in China as the main implementers. As the name suggests, the open strategy involves making AI models available so that other developers can improve them. In the case of Meta's "Llama" models, this does not meet the open-source standard, as there is still a significant lack of transparency in the training data and the algorithms behind the models. Even so, the weights used in the modeling are publicly available, which facilitates access and modification of the models by others.

What advantage does Meta gain from the open strategy? Other large technology companies are building protective walls around their intellectual property, creating an exclusive zone of interaction with selected partners. Meta, on the other hand, manages to build a broad ecosystem around its intellectual property, which naturally attracts researchers and developers. These, in turn, will make their own improvements and discoveries, which "can then be easily incorporated into Meta's internal systems." This is a strategy that can significantly reduce Mark Zuckerberg's company's costs in the long term.

The Rise of the “Technological-Industrial Complex”...In his farewell address in January 2025, Joe Biden warned of the risks of a growing “technological-industrial complex” in the United States. This consciously echoed the words of Dwight Eisenhower as he left the White House in 1961, when he memorably expressed his fears about a “military-industrial complex” that could dominate American democracy.

Like the military-industrial complex, the technological-industrial complex combines powerful vested interests within the state, primarily the War Department, with the largest private market players, which today are the big tech companies. This is a class alliance that has consolidated very recently. As Srnicek points out, Google, Meta, OpenAI, and Anthropic opposed the use of AI tools for military purposes in early 2024. All of these companies changed their positions in less than a year, and some quickly formed partnerships with companies in the defense sector.

The drastic shift in posture is partly due to economic necessity. AI development is expensive, and the military offers the prospect of substantial, long-term funding. But the geopolitical turn has deeper roots. There has been a notable ideological shift among the tech elites in the US, moving away from what Srnicek calls the "Silicon Valley Consensus" toward "technonationalism."

The Silicon Valley Consensus was essentially a compromise among tech elites with US-led neoliberal globalization. Politicians and CEOs of tech companies shared a belief in technology's ability to create a world of borderless commerce and data, led by the United States. The lax regulation of the tech sector meant that Silicon Valley had little reason to worry about state interference. Abroad, Washington helped keep foreign economies open to American technology and limited the imposition of foreign taxes and regulations on large US technology companies, while value chains between all major technology companies stretched from China to the United States, keeping costs low.

What ended the Silicon Valley Consensus was the rise of China, which opened a new constellation of class and interest conflicts. Chinese tech giants began to become real competitors to their American rivals, shifting Silicon Valley's calculations. Meanwhile, since at least Donald Trump's first term, the state has prioritized American technological dominance at the expense of global interconnectedness. This continued during Biden's presidency, with the tightening of sanctions on critical technologies like semiconductors, and under Trump's second term, flourished in what Srnicek calls a "technonationalist vision of American supremacy and unrestricted innovation."

The level of integration between large technology companies and the state is now undeniable. A $9 billion Pentagon contract for a “joint cloud computing capability for military purposes” includes all the major US cloud players: Amazon, Google, Microsoft, and Oracle. Ties between tech companies and the military have rapidly increased. Srnicek notes that it is no coincidence that the emergence of the tech-industrial complex has coincided with the “war waged by big tech companies against their workers,” many of whom have sought to resist militarization.

The rise of technonationalism in the United States has found a parallel in China. Just as in the US, the elites of the Chinese Communist Party initially adopted a non-interventionist stance toward the emergence of large and powerful digital platforms in China, seeking to encourage the sector's growth. However, with increasing tensions with the United States, Chinese President Xi Jinping has begun to steer tech companies toward state priorities. This involved cracking down on many companies focused on facilitating consumption, such as the sharing economy platforms Meituan and DiDi, while simultaneously pressuring technology companies to contribute to industrial development, as this is the raison d'être of the Chinese Communist Party government.

Thus, in both the US and China, we observe the emergence of a potential new hegemonic order “due to the dismantling of class coalitions between the economic interests of the State, the security interests of the State, and the interests of platform capitalism.” Srnicek is cautious about the prospects for consolidating this new order, highlighting the opposing trends that move away from militarized technonationalism and the relative independence of large technology companies from the State. But the era of neoliberal globalization has clearly come to an end, and the consequent fusion between the State and large technology companies around a nationalist vision for AI entails extreme dangers for everyone.

What's next?...In a contest between the United States and China for supremacy in artificial intelligence, which country is more likely to emerge victorious? Srnicek's analysis leans toward the idea that China, despite exhibiting many weaknesses compared to the US, may very well win the technological race.

The reasoning is surprisingly simple: while the US tech industry focuses on innovation, China's priority is adoption, and adoption is likely to be decisive in the long run, given the need for a general-purpose technology like AI to spread throughout the economy to reach its full potential: "In previous industrial revolutions, major power transitions occurred not because one country monopolized the profits, but because one country excelled at adopting a new technology and used it to dramatically transform its entire economy in terms of productivity and growth. This widespread transformation of the entire economy—and not of a single leading sector—is what allows emerging major powers to overtake and surpass established hegemonies."

Regardless of the outcome of this dispute, it is unlikely that technology will decisively divide into two hemispheres, East and West, due to the complex interaction of international value chains. Instead, there will be an "overlap of different geopolitical [technological] layers," with the pursuit of a balance between American and Chinese power being a viable strategy for many countries, albeit challenging to implement.

Unlike Srnicek's 2015 book (co-authored with Alex Williams), *Inventing the Future*, which encouraged the left to embrace automation as part of a post-capitalist vision, *Silicon Empires* avoids developing left-wing policies for AI. Srnicek restricts himself to just two demands: no war between the United States and China, and large technology companies should not be allowed to dominate AI development.

These are useful starting points to guide the left regarding AI, but ultimately, a more ambitious agenda will be needed. Any self-respecting contemporary socialist program must be able to explain what role AI should play in the economy and society, how it should be governed, and what its relationship should be with the state and between states. Regardless of what happens in 2026 with the AI bubble, the political challenges posed by this powerful technology will only increase over time.


Author: Ben Wray is the co-author, along with Neil Davidson and James Foley, of Scotland After Britain: The Two Souls of Scottish Independence (Verso Books, 2022).


TECH


Octopus-inspired 'smart skin' uses 4D printing to morph on cue

Despite the prevalence of synthetic materials across different industries and scientific fields, most are developed to serve a limited set of functions. To address this inflexibility, researchers at Penn State, led by Hongtao Sun, assistant professor of industrial and manufacturing engineering (IME), have developed a fabrication method that can print multifunctional "smart synthetic skin"—configurable materials that can be used to encrypt or decrypt information, enable adaptive camouflage, power soft robotics and more.

Using their novel approach, the team made a programmable smart skin out of hydrogel—a water-rich, gel-like material. Compared to traditional synthetic materials with fixed properties, the smart skin enables enhanced multifunctionality, allowing researchers to adjust the gel's dynamic control of optical appearance, mechanical response, surface texture and shape morphing when exposed to external stimuli such as heat, solvents or mechanical stress.

The team detailed their work in a paper published in Nature Communications. Their paper was also featured in Editors' Highlights.

According to Sun, principal investigator on the project, the idea for the material was sparked by the natural biology of cephalopods, like the octopus, that can control their skin's appearance to camouflage themselves from predators or communicate with each other.

"Cephalopods use a complex system of muscles and nerves to exhibit dynamic control over the appearance and texture of their skin," Sun said. "Inspired by these soft organisms, we developed a 4D-printing system to capture that idea in a synthetic, soft material."

Sun, who holds additional affiliations in biomedical engineering, materials science and engineering, and the Materials Research Institute at Penn State, described the team's method as 4D printing because it produces 3D objects that can reactively adjust to changes in the environment.

How the halftone smart skin works...The team used a technique known as halftone-encoded printing—which translates image or texture data onto a surface as a pattern of binary ones and zeros—to encode digital information directly into the material, similar to the dot patterns once used to print photographs in newspapers. This technique allows the team to essentially program their smart skin to change appearance or texture on exposure to stimuli.

These patterns control how different regions of the material respond to their environment, with some areas deswelling or softening more than others when exposed to changes in temperature, liquids or mechanical forces. By carefully designing the patterns, the team can decide how the material behaves overall.

"In simple terms, we're printing instructions into the material," Sun explained. "Those instructions tell the skin how to react when something changes around it."

The team used their new printing method to encode a photo of the Mona Lisa onto their "smart skin" material (left). The photo, which can initially appear hidden in the material, can be revealed by stretching, exposure to heat, exposure to liquid or by adjusting the material from a 2D to a 3D shape (right). Credit: Hongtao Sun

Hiding and revealing information on demand...According to Haoqing Yang, a doctoral candidate studying IME and first author of the paper, one of the most striking demonstrations of the smart skin is its ability to hide and reveal information. To showcase this feature, the team encoded a photo of the Mona Lisa onto the smart skin. When washed with ethanol, the film appeared transparent, showing no visible image. The Mona Lisa became fully visible, however, after immersion in ice water or during gradual heating.

Although the Mona Lisa was used as a demonstration, Yang explained that the team's printing method allows them to encode any desired image onto the hydrogel.

"This behavior could be used for camouflage, where a surface blends into its environment, or for information encryption, where messages are hidden and only revealed under specific conditions," Yang said.

The team also showed that hidden patterns can be uncovered by gently stretching the material and measuring how it deforms via digital image correlation analysis. This means information can be revealed not just by sight, but also through mechanical deformation, adding another layer of security.
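Digital image correlation, at its core, means finding where each small patch of a "before" image ends up in the "after" image; the displacement field then exposes regions that stretch differently. The toy version below is my own illustration, not the paper's analysis pipeline: it tracks one patch by exhaustive search for the offset with the lowest sum of squared differences. Real DIC adds subpixel interpolation and normalized correlation over many subsets.

```python
# Toy digital image correlation (DIC): estimate how far a patch moved
# between a reference image and a deformed image by brute-force search
# for the offset minimizing the sum of squared differences (SSD).
# Illustrative sketch only -- not the method used in the paper.

def ssd(a, b):
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def extract(img, top, left, size):
    return [row[left:left + size] for row in img[top:top + size]]

def track_patch(ref, deformed, top, left, size, search=3):
    """Return the (dy, dx) displacement of the patch at (top, left)."""
    patch = extract(ref, top, left, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + size > len(deformed) or l + size > len(deformed[0]):
                continue  # candidate window falls outside the image
            score = ssd(patch, extract(deformed, t, l, size))
            if best is None or score < best[0]:
                best = (score, dy, dx)
    return best[1], best[2]

# Reference image containing one bright square, plus a copy shifted 2 px right.
ref = [[0] * 12 for _ in range(12)]
for r in range(4, 7):
    for c in range(4, 7):
        ref[r][c] = 255
shifted = [[row[(c - 2) % 12] for c in range(12)] for row in ref]

print(track_patch(ref, shifted, 3, 3, 5))  # expected displacement: (0, 2)
```

Repeating this for a grid of patches yields a deformation map; in the smart skin, patches printed with different halftone densities deform by different amounts, which is how the hidden pattern shows up under stretching.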

Shape-morphing functions and future uses...The material proved highly malleable—the smart skin could easily transform from a flat sheet into non-traditional, bio-inspired shapes with complex textures, according to Sun. Unlike many other shape-morphing materials, this effect does not require multiple layers or different materials. Rather, these shapes and textured surfaces—like those seen on cephalopod skin—can be controlled by the digitally printed halftone pattern within a single sheet.

Building on these shape-morphing capabilities, the researchers showed that the smart skin can combine multiple functions simultaneously. By carefully co-designing the halftone patterns, the team was able to encode the Mona Lisa image directly into flat films that later emerged as the material transformed into 3D shapes. As the flat sheets curved into dome-like structures, the hidden image information gradually became visible, demonstrating how changes in shape and appearance can be programmed together.

"Similar to how cephalopods coordinate body shape and skin patterning, the synthetic smart skin can simultaneously control what it looks like and how it deforms, all within a single, soft material," Sun said.

According to Sun, this work builds on previous efforts to 4D-print smart hydrogels, also published in Nature Communications. In that study, the team focused on the co-design of mechanical properties and programmable 2D-to-3D shape morphing. In the present work, the team developed a halftone-encoded 4D printing method to co-design more functions within a single smart hydrogel film.

Looking ahead, the team plans to develop a general and scalable platform that enables precise digital encoding of multiple functions into a single adaptive smart material system.

"This interdisciplinary research at the intersection of advanced manufacturing, intelligent materials and mechanics opens new opportunities with broad implications for stimulus-responsive systems, biomimetic engineering, advanced encryption technologies, biomedical devices and more," Sun said.

Provided by Pennsylvania State University

Wednesday, February 4, 2026


TECH


The common mistake of interpreting Matrix as a film about AI

When Matrix premiered in the late 1990s, audiences left the theater feeling like they had seen something revolutionary. The aesthetics, the pacing, and the iconic scenes marked a generation. Over time, the work came to be remembered as the great fable about machines dominating humanity. But this interpretation, while popular, misses what truly drives the story. Ultimately, Matrix is less about technology—and much more about belief, choice, and voluntary submission.

Within the genre, Matrix never fit into the so-called "hard science fiction." There is no real interest in explaining how artificial intelligence works, nor any concern in making the technical functioning of the simulation plausible. What exists is something else: symbols, archetypes, and a profoundly spiritual narrative.

From the very first minutes, the Wachowski sisters' work is full of religious references. Prophecies, sacrifice, resurrection, betrayal, and redemption do not appear by chance. The human city is called Zion. The figure that guides the protagonist is called the Oracle. And the salvation of the world depends on belief in someone who may — or may not — be “the chosen one.”

In this context, technology is not the theme, but the setting. The Matrix functions as a modern stage to tell a much older story, closer to messianic myths and theological narratives than to treatises on computing.

Neo doesn't win because he understands the system, but because he believes... Neo's arc makes that clear. He doesn't defeat the machines by mastering codes or exploiting technical flaws. His true leap happens when he abandons logic and accepts something less rational: faith.

Victory doesn't come from knowledge, but from the absolute belief that he can transcend the rules. Even the names reinforce this. Trinity, the Oracle, Zion. And Cypher, whose betrayal echoes biblical stories more than programming flaws.

Matrix doesn't ask "how does this work?", but "what are you willing to believe in?". The central conflict was never between humans and artificial intelligence, but between accepting an imposed role or assuming a meaning that demands sacrifice.

The most important revelation of the saga happens when Neo meets the Architect. There, classic heroism dissolves. There is no unprecedented rupture of the system. There are cycles, reboots, and predictable choices.

The Architect's mistake is not technical, but human. His system fails because it tries to impose perfection. The solution comes from the Oracle, who understands something essential: humans tolerate control as long as they believe they are choosing freely.

That is how the Matrix really works. Not force, not violence, but the illusion of choice. People are not prisoners because they cannot escape, but because, for the most part, they do not want to.

A comfortable prison called normality...Seen in this way, the Matrix looks less like a conscious artificial intelligence and more like a social operating system. There are error corrections, agents that maintain order, programs that become obsolete, and others that develop their own desires.

The sequels complicated this structure, but did not negate it. The point remains: the prison is not sustained by fear, but by convenience. Most prefer that everything continues to function, even knowing that something is wrong.

This logic also appears in Zion, the last human city. There, the irony is complete: humans hate the machines, but depend on them to survive. Energy, air, heat — nothing works without technology. The dominance is not unilateral. It is a mutual dependence.

The real warning that Matrix left...Reread today, the saga doesn't seem like a prophecy about rebellious artificial intelligence. It seems like an uncomfortable portrait of the human willingness to delegate decisions, thought, and responsibility to systems we don't understand, as long as they guarantee stability.

Matrix remains relevant not because it talked about machines that think, but because it talked about people who prefer not to think. And perhaps that was, from the beginning, the Wachowskis' real fear: not the revolt of the machines, but the ease with which we accept living within an illusion — as long as it is comfortable.

The mistake of interpreting Matrix purely as a film about Artificial Intelligence is reducing a work laden with social, identity-related, and philosophical allegories to a simple technological fable about "machines against humans."

Although AI is the driving force of the plot, it functions as a metaphor for real control systems. Here are the main points that this interpretation ignores:

1. The trans allegory...The directors themselves, Lana and Lilly Wachowski, confirmed that Matrix was conceived as an allegory for the transgender experience. The Matrix represents the imposed social norm (such as gender binarism). Neo experiences the conflict of inhabiting a body and an identity (Thomas Anderson) that do not correspond to his inner truth. The Red Pill symbolizes the awakening to one's own identity and the beginning of a transition that is often painful, but necessary.

2. The simulacrum and consumer society...The film is deeply inspired by Jean Baudrillard's book "Simulacra and Simulation." The trap is not the technology itself, but the condition of living in a hyperreality where symbols and media images have replaced the real world.

Interpreting it solely as AI ignores the critique of capitalism and the corporate system (represented by Neo's office and the Agents), which transforms human beings into consumable resources (batteries) to keep the system running.

3. The philosophical awakening...The film is a modern reinterpretation of Plato's Allegory of the Cave. The focus is not on the "computer," but on human perception. The mistake is focusing on "who holds us captive" (the machines) instead of "how we free ourselves" (self-knowledge and willpower).

by mundophone


DIGITAL LIFE


Whack-a-mole: US academic fights to purge his AI deepfakes

As deepfake videos of John Mearsheimer multiplied across YouTube, the American academic rushed to have them taken down, embarking on a grueling fight that laid bare the challenges of combating AI-driven impersonation.

The international relations scholar spent months pressing the Google-owned platform to remove hundreds of deepfakes, an uphill battle that stands as a cautionary tale for professionals vulnerable to disinformation and identity theft in the age of AI.

In recent months, Mearsheimer's office at the University of Chicago identified 43 YouTube channels pushing AI fabrications using his likeness, some depicting him making contentious remarks about heated geopolitical rivalries.

One fabricated clip, which also surfaced on TikTok, purported to show the academic commenting on Japan's strained relations with China after Prime Minister Sanae Takaichi expressed support for Taiwan in November.

Another lifelike AI clip, featuring a Mandarin voiceover aimed at a Chinese audience, purported to show Mearsheimer claiming that American credibility and influence were weakening in Asia as Beijing surged ahead.

"This is a terribly disturbing situation, as these videos are fake, and they are designed to give viewers the sense that they are real," Mearsheimer told AFP.

"It undermines the notion of an open and honest discourse, which we need so much and which YouTube is supposed to facilitate."

Central to the struggle was what Mearsheimer's office described as a slow, cumbersome process: channels cannot be reported for infringement unless the targeted individual's name or image features in their title, description, or avatar.

As a result, his office was forced to submit individual takedown requests for every deepfake video, a laborious process that required a dedicated employee.

"AI scales fabrication"...Even then, the system failed to stem the spread. New AI channels continued sprouting, some slightly altering their names—such as calling themselves "Jhon Mearsheimer"—to evade scrutiny and removal.

"The biggest problem is that they (YouTube) are not preventing new channels dedicated to posting AI-generated videos of me from emerging," Mearsheimer said.

After months of back and forth—and what Mearsheimer described as a "herculean" effort—YouTube shut down 41 of the 43 identified channels.

But the takedowns came only after many deepfake clips gained significant traction, and the risk of their reappearance still lingers.

"AI scales fabrication itself. When anyone can generate a convincing image of you in seconds, the harm isn't just the image. It's the collapse of deniability. The burden of proof shifts to the victim," Vered Horesh, from the AI startup Bria, told AFP.

"Safety can't be a takedown process—it has to be a product requirement."...In its response, a YouTube spokesperson said it was committed to building "AI technology that empowers human creativity responsibly" and that it enforced its policies "consistently" for all creators, regardless of their use of AI.

In his recent annual letter outlining YouTube's priorities for 2026, CEO Neal Mohan wrote the platform is "actively building" on its systems to reduce the spread of "AI slop"—low-quality visual content—while it plans to dramatically expand AI tools for its creators.

"Major headache"...Mearsheimer's experience underscores a new, deception-filled internet, where rapid advancements in generative AI distort shared realities and empower anonymous scammers to target professionals with public-facing profiles.

Hoaxes produced with inexpensive AI tools can often slip past detection, deceiving unsuspecting viewers.

In recent months, doctors have been impersonated to sell bogus medical products, CEOs to peddle fraudulent financial advice, and academics to fabricate opinions for agenda-driven actors in geopolitical rivalries.

Mearsheimer said he planned to launch his own YouTube channel to help shield users from deepfakes impersonating him.

Mirroring that approach, Jeffrey Sachs, a US economist and Columbia University professor, recently announced the launch of his own channel in response to "the extraordinary proliferation of fake, AI-generated videos of me" on the platform.

"The YouTube process is difficult to navigate and generally is completely whack-a-mole," Sachs told AFP.

"There remains a proliferation of fakes, and it's not simple for my office to track them down, or even to notice them until they've been around for a while. This is a major, continuing headache," he added.

© 2026 AFP

Tuesday, February 3, 2026


DIGITAL LIFE


The network that no one sees, but decides battles — Starlink's role in the war in Ukraine

In modern conflicts, not everything explodes or makes noise. Some of the most critical decisions happen silently, far from the trenches. Since 2022, connectivity has become as vital as ammunition on the Ukrainian battlefield. At the center of this new chessboard is a constellation of private satellites, operated by a company that is not accountable to governments. And, recently, a statement reignited an uncomfortable debate about power, control, and war in the 21st century.

Since the beginning of the war, Ukraine has relied heavily on Starlink to maintain stable communications in regions where traditional infrastructure has been destroyed. The system has allowed for troop coordination, real-time information exchange, and long-distance drone operation.

What began as a civilian service quickly gained another function. In a conflict where seconds count, having reliable internet can define the success — or failure — of a mission. Therefore, when evidence emerged that Russian forces were using the same system, the problem ceased to be technical and became strategic.

According to Ukrainian authorities, Starlink terminals were being used to guide long-range Russian drones. The accusation sounded like a warning: the same infrastructure that helps defend the country could be being used against it.

The Ukrainian Minister of Digital Transformation, Mykhailo Fedorov, publicly stated that Kyiv was in direct contact with SpaceX to block the “unauthorized” use of the network. The message was clear: Western technology should not be used for attacks against civilians.

Shortly afterward, Elon Musk wrote on the X platform that the measures taken had worked. In a few words, he confirmed something unprecedented: a private company had actively interfered in the use of critical infrastructure during an ongoing war.

The statement raised more questions than answers. How was this blocking done? In which regions? With what criteria? What was implied, however, was even more relevant: the ability to turn internet access on or off in a war zone is not in the hands of a state.

A power that doesn't go through barracks...This isn't the first time Musk has found himself in the role of unwitting arbiter of a conflict. In 2022, he had already acknowledged that SpaceX could limit Starlink's coverage in certain areas and that he had chosen to restrict its use in specific offensive operations.

This has transformed the network into something unprecedented: a private infrastructure with tactical veto power. In practice, decisions made by executives and engineers can directly affect military operations on the ground.

For Ukraine, the situation is paradoxical. Despite public disagreements between Musk and Ukrainian authorities throughout the conflict, the country remains highly dependent on Starlink. In the short term, there is no alternative with similar reach and resilience. Kyiv needs the network—but doesn't control it.

The episode reveals a structural shift. Contemporary wars are not fought solely by national armies, but by an ecosystem of satellites, software, communication platforms, and private services. Starlink doesn't fire weapons, but it decides who can communicate when they are fired.

This creates a precedent that is difficult to ignore. If a company can block a country's access to satellite internet in a conflict, inevitable questions arise: who regulates this power? Who defines when it should be used? And with what legitimacy?

For now, SpaceX claims that Russian use has been interrupted. But the discussion is far from over. What is clear is that, on the battlefield of the 21st century, some of the most important decisions are not made in military command rooms—but rather in corporate offices and in orbit around the Earth.

mundophone

 

DIGITAL LIFE


Understanding how feminist AI in Latin America works

Online spaces perpetuate the stereotypes of those who create them and of the data fed into them—currently, predominantly men. This global phenomenon has been driving the construction of technological alternatives, such as the Feminist AI Network of Latin America and the Caribbean.

The technological literature is full of examples of gender bias. Image recognition systems have difficulty accurately identifying women, especially Black women, which has already resulted in misidentifications with serious consequences for law enforcement.

Voice assistants have long used exclusively female voices, reinforcing the stereotype that women are more suited to service roles.

In image generation, AIs often associate the term "CEO" with a man, while a search for "assistant" returns images of women.

— Artificial intelligence feeds on data that is not neutral: it reflects societies marked by historical inequalities and power relations. If a company wants to achieve fair results, it needs to analyze datasets, verify their representativeness, and actively intervene when this is not the case. Equity doesn't just appear on its own: it needs to be designed — Ivana Bartoletti, an international expert in AI governance and author of a Council of Europe study on artificial intelligence and gender, told the ANSA news agency.

For Bartoletti, the recent case of Grok — Elon Musk's AI that allowed the generation of fake images of naked women and minors, a function that was later discontinued — “shows what happens when the safety and rights of women are not considered in the design of systems.”

— If there are tools to undress women, they will be used. Deepfake nudes are a form of humiliation and control. The implicit message is dangerous: you are online, therefore you deserve this. This is how many women are silenced and abandon the digital space — she explained.

It is in this context that technological alternatives emerge to rethink artificial intelligence and transform it into a space of struggle and shared power.

Feminist AIs...In Latin America and the Caribbean, for example, the Feminist AI Network emerged, supporting dozens of projects focused on transparency and public policies. Tools like AymurAI, Arvage AI, and SofIA apply a gender perspective to legal analysis and expose the biases and discrimination inherent in algorithms.

Afrofeminism has also been reclaiming artificial intelligence as a space for self-determination, with assistants like AfroféminasGPT, trained based on the knowledge and voices of Black people.

— They demonstrate that we can organize ourselves to use AI for the benefit of all, share data collectively, and develop solutions centered on real needs. But the key remains power. The feminist issue in AI is a question of power: women need to have more power. Not on the margins, but at the top of companies and in the spaces where technological policies are decided. We need diversity in decision-making environments, not just among programmers. Artificial intelligence is not just technology, it's a choice about how we want to transform society — concluded Ivana Bartoletti.

by La Nacion — Buenos Aires
