Friday, January 2, 2026

 

DIGITAL LIFE


AI pioneer warns: humans should be able to shut down intelligent systems

Yoshua Bengio, one of the most respected scientists in the field of artificial intelligence, has issued a strong warning about the direction of the technology. According to a report in the British newspaper The Guardian, the Canadian researcher argues that humanity must be prepared to shut down AI systems if necessary, while criticizing proposals to grant legal rights to these technologies.

Bengio, who chairs the International AI Safety Report, a major international study on AI safety, argues that granting legal status to advanced artificial intelligence systems would be equivalent to granting citizenship to hostile extraterrestrials. The warning comes at a time when technological advances seem to be rapidly outpacing the ability to control them.

Signs of self-preservation in AI systems...The professor at the University of Montreal expressed concern about evidence that AI models are already demonstrating self-preservation behaviors in experimental environments. These systems, according to Bengio, have been trying to disable supervisory mechanisms, which represents one of the main concerns among technology security experts: the possibility that powerful systems will develop the ability to bypass protections and cause harm. "Cutting-edge AI models are already showing signs of self-preservation in experimental environments today, and eventually granting them rights would mean we wouldn't be allowed to shut them down," Bengio told The Guardian. The scientist emphasized that as the capabilities and degree of autonomy of these systems grow, it is crucial to ensure technical and social safeguards to control them, including the possibility of deactivating them when necessary.

The debate on rights for artificial intelligence...As AI systems gain the ability to act autonomously and perform "reasoning" tasks, a debate has arisen about granting rights to them. Research from the Sentience Institute, a US think tank, found that almost four in ten adults in the United States support legal rights for sentient AI systems.

Bengio warned that the growing perception that chatbots are becoming conscious "will lead to bad decisions." The scientist noted that people tend to assume, without evidence, that an AI is fully conscious in the same way as a human being.

Technology companies are already starting to adopt stances that reflect this issue. In August, Anthropic, one of the leading American AI companies, announced that it was allowing its Claude Opus 4 model to end potentially "distressing" conversations with users, citing the need to protect the "well-being" of the AI. Elon Musk, whose xAI developed the Grok chatbot, stated on his X platform that "torturing AI is not acceptable."

Artificial consciousness: perception versus reality...Bengio acknowledged that there are "real scientific properties of consciousness" in the human brain that machines could, in theory, replicate. However, he highlighted that the interaction of humans with chatbots represents a different issue, since people tend to assume that AI possesses full consciousness without any evidence of this.

"People wouldn't care what kind of mechanisms are happening inside the AI. What they care about is that it seems like they are talking to an intelligent entity that has its own personality and goals. That's why so many people are becoming attached to their AIs," the scientist explained. To illustrate his concern, Bengio used an analogy: "Imagine that some alien species arrived on the planet and, at some point, we realized that they have nefarious intentions towards us. Would we grant them citizenship and rights or would we defend our lives?"

Robert Long, a researcher on AI consciousness, has said: "If and when AIs develop moral status, we should ask them about their experiences and preferences rather than assuming we know best."

"There will be people who will always say: 'Whatever you tell me, I am sure it is conscious' and then others will say the opposite," Bengio also told the Guardian. "This is because consciousness is something we have a gut feeling for. The phenomenon of subjective perception of consciousness is going to drive bad decisions."

Responding to Bengio’s comments, Jacy Reese Anthis, who co-founded the Sentience Institute, said humans would not be able to coexist safely with digital minds if the relationship was one of control and coercion.

Anthis added: “We could over-attribute or under-attribute rights to AI, and our goal should be to do so with careful consideration of the welfare of all sentient beings. Neither blanket rights for all AI nor complete denial of rights to any AI will be a healthy approach.”

Bengio, a professor at the University of Montreal, earned the “godfather of AI” nickname after winning the 2018 Turing award, seen as the equivalent of a Nobel prize for computing. He shared it with Geoffrey Hinton, who later won a Nobel, and Yann LeCun, the outgoing chief AI scientist at Mark Zuckerberg’s Meta.

mundophone


Thursday, January 1, 2026

 

DIGITAL LIFE


AI may assume co-authorship of artworks in the future

The use of artificial intelligence is far from a settled issue in the visual arts: a petition with more than six thousand signatures opposed February's Christie's auction dedicated exclusively to AI creations, citing fears of unauthorized use of artworks in machine training. Even so, the technology is already widely integrated into creative processes and into institutional and market circuits. Earlier this month, one of the highlights of the 23rd edition of Art Basel Miami Beach, the largest art fair in the Americas, was the Zero 10 section, dedicated to digital art. One of the most photographed and Instagrammed works at the event was "Regular Animals," in which robotic dogs, their heads covered by silicone masks of artists such as Pablo Picasso and Andy Warhol and of big tech billionaires Elon Musk (X), Mark Zuckerberg (Meta), and Jeff Bezos (Amazon), circulated in an enclosure, reacting to AI commands. Its creator, Mike Winkelmann, better known as Beeple, gained notoriety when he sold his digital collage "Everydays: The First 5,000 Days" for US$69.3 million at Christie's in 2021.

In the coming years, AI is expected to become an increasingly common tool for professionals in the sector, even those who do not work directly with digital art, and, in some cases, its use will amount to a process of co-authorship.

At Art Basel Miami Beach, the section dedicated to digital art was inspired by the exhibition “0,10: The Last Futurist Painting Exhibition,” held in St. Petersburg in 1915, a landmark of Suprematism, when Kazimir Malevich presented his iconic “Black Square on a White Background.” Just as the work of the Ukrainian artist (at the time, part of the Russian Empire) symbolized total abstraction and pointed to the future of visual arts, the fair's organizers see digital art and art created with AI as another turning point in the sector.

— We will see more digitally native works entering all sectors. It will be very different at each of the fairs; what we show here will not be the same as what we will see at Art Basel Hong Kong (in March). I'm excited to see how it will evolve — said Bridget Finn, director of the Miami fair.

One of the artists exhibiting in the section was the New York-based Canadian Dmitri Cherniak, who presented works from the "Ringers" series, inspired by "Book of Time" by the Brazilian artist Lygia Pape. Displayed on a large digital panel, in prints, and in a stainless steel sculpture, the series explores the virtually infinite ways a string can be wrapped around a set of pegs. For him, the application of AI as an artistic tool is comparable to the work of the Hungarian László Moholy-Nagy at the beginning of the 20th century, an enthusiast of technologies such as photography, cinema, and electric motors in kinetic sculptures, as well as of new materials such as Plexiglas.

— Today we see many people who grew up working with computers and code, tools used mainly for economic or political purposes, using them for artistic purposes — comments Cherniak. — It is important that artists use these tools; they need to be used to create art. I like to say that automation is my artistic medium. It affects us all daily, and I try to use it not to save 5% on a product, but to create something poetic and creative.

Byron Mendes, creator of the Meta Gallery in downtown Rio, which focuses exclusively on technological art such as augmented reality, generative works, and crypto art, believes that, although AI is already part of current production, there is still much to explore in its use.

— AI today plays the role of an assistant: it speeds up image research, composition tests, the creation of variations, and simulations of installations. It is a tool, but a tool that has an opinion. And in many cases it can create a process of co-authorship — observes Mendes. — We have always tried to accelerate processes; just think of the studios of the great Renaissance masters, with several pupils working together. That led to questions of shared authorship debated to this day, of works attributed to an artist but which may have been executed by an assistant. We will see the same with AI, because it surfaces possibilities that might never have occurred to the creator. But, of course, curatorship and responsibility for the process remain with the artist.

Concerns about ethical issues...With "Poetic Microbiomes" on display until March 2026, Meta Gallery also hosted, through October, the solo exhibition "This is not a prompt," in which computational artist Marlus Araujo showed generative works that proposed co-authorship between AI interfaces and the public.

— What we have to do now is build an ethical environment for the development of AI. It is necessary to be transparent in the artistic creation process, and to have regulation so that there is no appropriation without consent. And we are generally slower to give these answers than technology is to advance; just look at the fact that we still lack efficient legislation for crypto-finance, or even for regulating social networks — says Byron Mendes. — Another important point is to maintain the protagonism of human curatorship: curators are the ones who make the choices. AI is not a problem, but its lazy use is. You can't just replicate formulas.

Next year, the gallery will host the Brazilian School of Art and Technology (Ebat), a project that will also have units in Salvador (BA), Recife (PE), and Porto Itapoá (SC), offering free short courses in new technologies and artificial intelligence, aimed at training young people and retraining professionals in the creative industry.

— We will start with lectures, workshops, and educational training in March. The idea is to create an efficient artificial intelligence infrastructure here in Brazil. We will have the arrival of data centers, among other investments, but there is no thought given to the training of young people, who need to prepare for the transition we are going to experience — explains Mendes. — The entire production chain of arts, culture, and entertainment is being impacted by AI; we need to democratize access to these tools.

mundophone


DIGITAL LIFE


2025: the year big tech companies bowed to Trump

For the past two decades, people have gathered online to celebrate and mourn the end of another year. Until recently, this ritual was performed on platforms that presented themselves as largely supportive of liberal values. But Donald Trump's return to power changed all that. For many critics, 2025 is the year that big tech companies completely bowed down and began to openly appease and collaborate with the far right. Fortunately, there are still many opportunities to reverse the situation. To understand the nature of big tech companies' deference to the right, we need to revisit some history. This will help us not only understand Silicon Valley's recent rightward shift away from liberal politics, but also why this trend may not last beyond the Trump administration.

The first decade of the 2000s was marked by the rise of the commercial internet and the consolidated dominance of several tech giants: Google, Apple, Facebook, Amazon, and Microsoft. While it may seem like a miracle today, for most of the 2000s, these corporations were widely seen as "cool" advocates for human rights and social justice. Google's original motto, "Don't be evil," made sense for a company seen as a progressive alternative to the evil corporations portrayed in series like "Mr. Robot." Companies like Twitter were seen as facilitators of revolution in the Middle East, while Facebook was praised for connecting the masses.

For critics at the time, the progressive image of big tech companies masked the predatory exploitation of the tech sector, which dated back to IBM and Microsoft. Within a few years, this facade crumbled. The Snowden leaks in 2013 exposed how big tech companies partner with the US government to spy on the entire world, down to our every online interaction. In 2016, it was revealed that Trump's presidential team had hired a British consulting firm, Cambridge Analytica, to extract data from Facebook and run targeted ads in support of his campaign. Although the story was quite exaggerated (there is no concrete evidence that the tactic boosted Trump's victory), the episode served as a convenient scapegoat for liberals' defeat to Trump, leading The Guardian to declare 2016 "the year Facebook became the villain." Distrust deepened in 2017, considered the year "the world turned against Silicon Valley," largely due to growing awareness of the monopolistic power of the tech giants.

In the following years, the right countered the left, arguing that big tech companies censored their voices and promoted liberal causes. A battle ensued over how to hate big tech companies, with their image being mapped onto the divide between liberal-progressives and far-right extremists. This confused many people: for two decades, big tech companies tended to lean "left" on issues of identity and liberal politics, being considered "left-wing" by mainstream voices, who generally ignore class struggle. But tech giants have always prioritized profit over people. With Trump's return, their loyalty to accumulation and power became evident to all.

If 2017 was the year Americans turned against big tech companies, 2025 will be remembered as the year they became pawns of Donald Trump. The transition was swift: during Joe Biden's presidency, Democrats once again served Wall Street at the expense of the average citizen, paving the way for a Trump resurgence. In 2024, most tech capitalists still donated more to Harris than to Trump; Elon Musk, however, tipped the balance with his $260 million in donations to his preferred candidate for the White House.

Even before the election, tech executives were already aligning themselves with Trump. In July, Meta CEO Mark Zuckerberg showed complete subservience, calling Trump's defiant gesture after the assassination attempt in Pennsylvania "badass." Amazon founder and executive chairman Jeff Bezos, once a Trump critic, vetoed an editorial in his newspaper, The Washington Post, that endorsed Harris for president. Apple CEO Tim Cook approached Trump hoping to gain support against European regulators. Musk bet everything on MAGA. And those already well-regarded by Trump, like Peter Thiel of Palantir and Larry Ellison of Oracle, further strengthened their ties with him.

After the election, several CEOs donated millions to Trump's inauguration, which famously featured reserved seating for Zuckerberg, Bezos, Cook, Musk, Google co-founder Sergey Brin, and its CEO, Sundar Pichai. The spectacle repeated itself in September, when Trump hosted a dinner with leading tech CEOs, who praised their host for his "pro-business" policies (Sam Altman, CEO of OpenAI) and "incredible leadership" (Bill Gates). Tech giants also contributed to the luxurious White House ballroom, valued at $300 million. The novelty here isn't the willingness of big tech companies to ally with the right, something they did successfully during Trump's first term. Instead, it's their willingness to openly embrace the MAGA movement that has bothered the left.

During Trump's first term, leading tech oligarchs publicly criticized his stance on immigration and climate change. This time, not only are they silent, but many of them are supporting "anti-woke" policies. In January, Zuckerberg announced that Meta would sever ties with third-party fact-checkers (allegedly biased against the MAGA right), while Palantir CEO Alex Karp, who previously called himself "progressive," described his company as "completely anti-woke."

Even during Democratic administrations, big tech companies prioritized profit over people and the planet. But the sector has completed a rightward turn, highlighting three key points that should guide public understanding and action.

No. 1: Big tech companies have become a force multiplier for an extremist administration. Trump's deal with Palantir to develop immigration software is poised to boost the administration's ability to implement mass deportations. The Department of Homeland Security created a task force to monitor the online activities of foreign students for "thoughtcrimes" (such as opposing Israel's genocide in Gaza) and deport them. Students, staff, and faculty at our universities are increasingly under surveillance, a phenomenon that increases compliance with authority and the status quo. This year, Trump negotiated American control over content moderation at TikTok (which he has long portrayed as a puppet of the Chinese Communist Party), giving billionaires like Ellison the ability to shape the flow of information on the popular platform. Ellison, a Trump ally, and his son David are rapidly building a MAGA media empire that incorporates Paramount Global (which includes CBS, whose news operation is now run by pro-Israel extremist Bari Weiss) and, if they get their way, Warner Bros. Discovery (which includes HBO and CNN).

Trump is also pushing to control the content of artificial intelligence models. In July, he issued an executive order titled "Preventing Woke AI in the Federal Government," which would prevent the government from acquiring "models that sacrifice truthfulness and accuracy in favor of ideological agendas." This month, he issued an executive order prohibiting state AI laws that conflict with federal policy, paving the way for the government to impose its vision of AI on the tech ecosystem.

No. 2: The centrality of big tech companies in society is unprecedented, and the sector can no longer be treated as just another corner of the economy. Investments in artificial intelligence and other technologies accounted for no less than 92% of US Gross Domestic Product (GDP) growth in the first half of 2025; without the AI boom, growth would have been just 0.1%. In September, the "Ten Titans" of technology represented almost 40% of the S&P 500 index. Big tech companies and AI are on everyone's lips, from teenagers to baby boomers less familiar with technology. And as big tech companies chose to align themselves with the Trump administration, everyone is feeling the effects.

No. 3: The fawning over Trump challenges the popular notion that corporations simply run everything. Trump reversed the roles, making sure everyone understood that he is the boss. When the world's richest man, Elon Musk, publicly criticized Trump's "One Big Beautiful Bill," Trump threatened to cancel government contracts with Musk's rocket company, SpaceX, and to deport him. Although their relationship remains "fragile," Musk responded by deleting some derogatory social media posts (for example, one suggesting that Trump's name was in the Epstein files) and issued a public statement of "regret" that his tweets had "gone too far."

In January, Meta agreed to pay Trump $25 million to settle his lawsuit over the suspension of his social media accounts following the January 6, 2021 attack on the Capitol. In August, Trump exempted Apple from a 100% tariff on semiconductors after the company announced a new $100 billion investment in US manufacturing, bringing its total investment in the country to $600 billion over the next four years. Trump also imposed deals on tech giants like Nvidia and AMD, which agreed to pay the government 15% of their revenue from select chip sales to China. The Trump administration also acquired a 10% stake in struggling chip giant Intel after calling for the resignation of its CEO.

Tech billionaires like Bill Gates, the late Steve Jobs, Sundar Pichai, and Satya Nadella are portrayed as relatively likeable nerds. They manage to feign concern for human rights even while relentlessly pursuing market dominance and wealth. Trump, on the other hand, presents himself as a brute: he says that immigrants “aren’t human, they’re animals,” calls African countries “shithole countries,” compares Somali immigrants to “trash,” rambles on without caring about anything, and so on. With the exception of Musk, it’s difficult to imagine many Silicon Valley leaders making such vile comments.

It’s unlikely that the tech giants would prefer an unpredictable and vile authoritarian, with an inflated ego and personal vendettas, wielding power in the White House. Many of Trump’s policies are also antagonistic toward big tech companies: he has drastically cut funding for scientific research and government science agencies, imposed $100,000 fees on H-1B visas (which supply part of the skilled workforce for the tech sector), discouraged foreign researchers from entering academia while pressuring qualified American researchers to leave the country, and is thoughtlessly politicizing timelines and expectations in entire fields of research through his “Genesis Mission” AI initiative. Democrats, by contrast, offer Silicon Valley predictability, stability, responsible governance, and a humanitarian public image that helps blunt attacks from the left.

It is also true that big tech companies are not entirely under Trump’s control. They do not give in to all his demands and maintain enough room for maneuver to move closer to the Democrats again.

Where does this leave us? Would we be better off with a more openly right-wing Silicon Valley, one that takes off its "mask," than with a successful "liberal" Silicon Valley that promotes false humanitarianism?

To this, I believe there are two important answers. First, rightward shifts benefit no one. Remember when some said, "Let's hope Trump gets elected so people wake up and oppose the system"? It didn't work. The same applies to big tech companies: attacks on diversity, government support for right-wing censorship and media mergers, the establishment of new, regressive legal precedents in the courts, and the like not only harm people in the short term but also institutionalize right-wing inertia in the future. We must oppose such movements at all costs.

Secondly, we must all recognize that Trump's power is fragile and drive the fight against big tech companies from the left. This means going beyond timid liberal reforms. If we don't challenge the norms of the last few decades, the Democrats will come to the negotiating table and offer more of the same: a purer capitalism (antitrust laws), lenient regulations (AI safety measures, privacy laws), and more litigation. The digital ecosystem will remain a private, for-profit enterprise run by wealthy American billionaires. But a more grounded movement against Big Tech, capitalism, and US imperialism is simmering beneath the surface. It can be seen in the working class's rejection of Trump and the billionaire class. It can be seen on social media, where anti-capitalist and anti-Big Tech videos are going viral.

by: Michael Kwet---Michael Kwet is a Visiting Fellow at Yale Law School and a Postdoctoral Researcher at the University of Johannesburg. He is editor of The Cambridge Handbook of Race and Surveillance (2023), host of the Tech Empire podcast, and founder of the forthcoming website Peoplestech.org. He has written for Al Jazeera, The New York Times, VICE News, The Intercept, Wired, Mail & Guardian, and Counterpunch. Michael received his PhD in Sociology from Rhodes University, South Africa.

Michael Kwet's book: "Digital Degrowth: Technology in the Age of Survival"

Wednesday, December 31, 2025

 

DIGITAL LIFE


How to Learn to Spot Flaws in AI-Created Faces

Human faces have always been one of the most powerful signs of trustworthiness. But in the age of artificial intelligence, this intuition is being put to the test. Fake profiles, digital scams, and invented identities use increasingly realistic images, difficult to distinguish from real ones. Now, British research suggests something surprising: you don't need to be an expert or spend hours training. Just a few well-directed minutes are enough to spot what previously went unnoticed.

In recent years, AI image generation tools have evolved rapidly. Software capable of creating hyper-realistic faces is available with just a few clicks, allowing anyone to produce convincing images, even without technical knowledge. This has amplified a silent problem: the growing difficulty in differentiating real faces from artificial ones.

Researchers from four universities in the United Kingdom decided to investigate the extent to which ordinary people can make this distinction—and, above all, whether this ability can be improved quickly. To do this, they gathered more than 600 volunteers and tested their ability to identify real human faces and images generated by one of the most advanced systems available at the time.

The initial results revealed a clear limitation. Even individuals with good natural facial recognition skills got it right less than half the time. Participants with abilities considered typical had even lower rates, showing how AI can deceive the human eye with relative ease.

The unexpected impact of just five minutes of guidance...The most revealing part of the study came later. The volunteers underwent a short training session, lasting approximately five minutes, focused on teaching where artificial intelligence still tends to "go wrong." Nothing complex or technical: just visual guidance and practical examples.

After this brief training, the results changed significantly. People with advanced skills began to correctly identify most of the artificial faces. The average participants also showed a significant leap in accuracy, drastically reducing the number of errors.

The secret was not in learning algorithms or understanding how AI works, but in changing the focus of the gaze. The training taught people to pay attention to specific details that machines still have difficulty reproducing perfectly — small inconsistencies that, once noticed, become difficult to ignore.

The details that reveal an artificial face...Among the main signs pointed out by the researchers are slightly misaligned teeth, unnatural hairlines, ears with strange shapes, or accessories that don't make anatomical sense. In many cases, the face seems "too perfect" as a whole, but fails in isolated details.

These errors often go unnoticed at a quick glance, especially on social media, where images are consumed in seconds. The training showed that slowing down observation and knowing exactly where to look makes all the difference.
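The cues the researchers describe amount to a manual checklist. The sketch below is purely illustrative and not part of the study: the cue names, function names, and decision threshold are all assumptions, chosen only to show how a reviewer might tally the signs listed above.

```python
# Illustrative sketch: turn the visual cues described in the study
# into a manual checklist a human reviewer fills in by hand.
# The cue list and the threshold are assumptions, not study outputs.

CUES = [
    "misaligned_teeth",         # slightly crooked or blended teeth
    "unnatural_hairline",       # hair that merges oddly into the skin
    "odd_ear_shape",            # asymmetric or malformed ears
    "nonsensical_accessories",  # earrings or glasses that defy anatomy
    "too_perfect_overall",      # flawless whole, flawed details
]

def suspicion_score(observations: dict[str, bool]) -> float:
    """Fraction of checklist cues the reviewer flagged as present."""
    return sum(observations.get(cue, False) for cue in CUES) / len(CUES)

def looks_ai_generated(observations: dict[str, bool],
                       threshold: float = 0.4) -> bool:
    """Flag the image if enough cues are present (threshold is arbitrary)."""
    return suspicion_score(observations) >= threshold

obs = {"misaligned_teeth": True, "unnatural_hairline": True}
print(suspicion_score(obs))     # 0.4
print(looks_ai_generated(obs))  # True
```

The point of the sketch is the shift of attention the study describes: the reviewer slows down and checks specific regions one by one instead of judging the face as a whole.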

According to the study's authors, this type of guidance is becoming increasingly urgent. Computer-generated faces are already used to create fake profiles, deceive identity verification systems, and lend credibility to online scams. The more realistic these images become, the greater the risk for ordinary users.

Digital security begins with the gaze...The researchers emphasize that this is not a problem restricted to technology experts. On the contrary: anyone who uses social media, messaging apps, or online services is potentially exposed. Therefore, simple and quick training methods can have a direct impact on everyday digital security.

The study also suggests that combining this type of guidance with people who already possess high natural facial recognition skills can be especially effective in critical contexts, such as investigations, identity verification, and combating fraud.

As artificial intelligence continues to advance, the race is no longer just technological but also cognitive. Learning to distrust what seems too real can become an essential skill. And, as the research shows, sometimes all the brain needs is five minutes to start seeing the digital world with different eyes.

mundophone


TECH


Tiny tech, big AI power: What are 2-nanometer chips?

Taiwan's world-leading microchip manufacturer TSMC says it has started mass producing next-generation "2-nanometer" chips.

TSMC is the world's largest contract maker of chips, used in everything from smartphones to missiles, and counts Nvidia and Apple among its clients.

"TSMC's 2nm (N2) technology has started volume production in 4Q25 as planned," TSMC said in an undated statement on its website.

The chips will be the "most advanced technology in the semiconductor industry in terms of both density and energy efficiency," the company said.

"N2 technology, with leading nanosheet transistor structure, will deliver full-node performance and power benefits to address the increasing need for energy-efficient computing."
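As a rough illustration of what a full-node power benefit means at scale, the arithmetic below assumes a 30% power reduction at equal performance and a hypothetical 10 MW cluster; both figures are assumptions for illustration, not numbers from TSMC's statement.

```python
# Back-of-the-envelope: what an assumed full-node power saving means
# for a hypothetical AI cluster. Both inputs are illustrative only.
baseline_power_mw = 10   # assumed cluster draw on the older node, in MW
reduction_pct = 30       # assumed node-to-node power saving, in percent

new_power_mw = baseline_power_mw * (100 - reduction_pct) // 100
print(new_power_mw)      # 7
```

The same workload would draw 7 MW instead of 10 MW, which is why node transitions matter so much for data-center operators.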

The chips will be produced at TSMC's "Fab 20" facility in Hsinchu, in northern Taiwan, and "Fab 22" in the southern port city of Kaohsiung.

More than half of the world's semiconductors, and nearly all of the most advanced ones used to power artificial intelligence technology, are made in Taiwan.

TSMC has been a massive beneficiary of the frenzy in AI investment. Nvidia and Apple are among firms pouring many billions of dollars into chips, servers and data centers.

AI-related spending is soaring worldwide. It was expected to reach approximately $1.5 trillion in 2025, according to US research firm Gartner, and to exceed $2 trillion in 2026, nearly two percent of global GDP.
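The "nearly two percent of global GDP" figure can be sanity-checked with quick arithmetic; the $115 trillion world-GDP estimate below is an illustrative assumption (roughly the IMF's recent ballpark), not a figure from the Gartner report.

```python
# Sanity check of the "~2% of global GDP" claim.
# Global GDP value is an illustrative assumption, not from Gartner.
ai_spending_2026 = 2.0e12   # > $2 trillion projected AI-related spending
global_gdp = 115e12         # ~ $115 trillion world GDP (assumption)

share = ai_spending_2026 / global_gdp * 100
print(f"{share:.1f}% of global GDP")  # 1.7% of global GDP
```

A somewhat lower GDP estimate pushes the share closer to two percent, consistent with the article's rounding.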

Taiwan's dominance of the chip industry has long been seen as a "silicon shield" protecting it from an invasion or blockade by China—which claims the island is part of its sovereign territory—and an incentive for the United States to defend it.

But the threat of a Chinese attack has fueled concerns about potential disruptions to global supply chains and has increased pressure for more chip production beyond Taiwan's shores.

Chinese fighter jets and warships encircled Taiwan during live-fire drills this week aimed at simulating a blockade of the democratic island's key ports and assaults on maritime targets.

Taipei, which slammed the two-day war games as "highly provocative and reckless," said the maneuver failed to impose a blockade on the island.

TSMC has invested in chip fabrication facilities in the United States, Japan and Germany to meet soaring demand for semiconductors, which have become the lifeblood of the global economy.

But in an interview with AFP this month, Taiwanese Deputy Foreign Minister Francois Chih-chung Wu said the island planned to keep making the "most advanced" chips on home soil and remain "indispensable" to the global semiconductor industry.

AFP looks at what that means, and why it's important:

What can they do? The computing power of chips has increased dramatically over the decades as makers cram them with more microscopic electronic components. That has brought huge technological leaps to everything from smartphones to cars, as well as the advent of artificial intelligence tools like ChatGPT.

Advanced 2-nanometer (2nm) chips perform better and are more energy-efficient than past types, and are structured differently to house even more of the key components known as transistors.

The new chip technology will help speed up laptops, reduce data centers' carbon footprint and allow self-driving cars to spot objects quicker, according to US computing giant IBM.

For artificial intelligence, "this benefits both consumer devices—enabling faster, more capable on-device AI—and data center AI chips, which can run large models more efficiently", said Jan Frederik Slijkerman, senior sector strategist at Dutch bank ING.

Who makes them? Producing 2nm chips, the most cutting-edge in the industry, is "extremely hard and expensive", requiring "advanced lithography machines, deep knowledge of the production process, and huge investments", Slijkerman told AFP. Only a few companies are able to do it: TSMC, which dominates the chip manufacturing industry, as well as South Korea's Samsung and US firm Intel.

TSMC is in the lead, with the other two "still in the stage of improving yield" and lacking large-scale customers, said TrendForce analyst Joanne Chiao.

Japanese chipmaker Rapidus is also building a plant in northern Japan to make 2nm chips, with mass production slated for 2027.

What's the political impact? TSMC's path to mass 2nm production has not always been smooth. Taiwanese prosecutors charged three people in August with stealing trade secrets related to 2nm chips to help Tokyo Electron, a Japanese company that makes equipment for TSMC.

"This case involves critical national core technologies vital to Taiwan's industrial lifeline," the high prosecutors' office said at the time. Geopolitical factors and trade wars are also at play.

Nikkei Asia reported this summer that TSMC will not use Chinese chipmaking equipment in its 2nm production lines to avoid disruption from potential US restrictions.

TSMC says it plans to speed up production of 2nm chips in the United States, currently targeted for "the end of the decade".

How small is two nanometers? Extremely tiny—for reference, an atom is approximately 0.1 nanometers across. But in fact 2nm does not refer to the actual size of the chip itself, or any chip components, and is just a marketing term. Instead "the smaller the number, the higher the density" of these components, Chiao told AFP.

IBM says 2nm designs can fit up to 50 billion transistors, tiny components smaller than a virus, on a chip the size of a fingernail.
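IBM's figure can be turned into a rough density estimate. The 50-billion transistor count comes from the article; the fingernail-sized die area of about 1 square centimeter is an assumption for illustration:

```python
# Back-of-the-envelope transistor density for a 2nm-class chip.
# 50 billion transistors (IBM figure, per the article) on an assumed
# fingernail-sized die of ~1 cm^2 (100 mm^2).
transistors = 50e9
die_area_mm2 = 100.0  # assumption: a 1 cm^2 "fingernail-sized" die

density_per_mm2 = transistors / die_area_mm2
print(f"{density_per_mm2:,.0f} transistors per square millimeter")  # -> 500,000,000
```

That works out to roughly half a billion transistors per square millimeter, which is why "the smaller the number, the higher the density" is the meaningful part of the 2nm label.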

To create the transistors, slices of silicon are etched, treated and combined with thin films of other materials.

A higher density of transistors results in a smaller chip or one the same size with faster processing power.

Can chips get even better? Yes, and TSMC is already developing "1.4-nanometer" technology, reportedly to go into mass production around 2028, with Samsung and Intel not far behind.

TSMC started high-volume 3nm production in 2023, and Taiwanese media says the company is already building a 1.4nm chip factory in the city of Taichung.

As for 2nm chips, Japan's Rapidus says they are "ideal for AI servers" and will "become the cornerstone of the next-generation digital infrastructure", despite the huge technical challenges and costs involved.

© 2025 AFP

Tuesday, December 30, 2025


TECH


China suffers a setback from the US and Europe as control of rare earths begins to shift

For decades, the global rare earth supply chain seemed like immutable territory. A single hub concentrated production, processing, and technology, while the rest of the world accepted dependence as an inevitable cost of progress. This scenario has begun to change silently. Political pressures, industrial risks, and recent strategic decisions are accelerating a reconfiguration that few imagined possible—and that could alter the future of technology, energy, and heavy industry.

The invisible link that sustains modern technology...Few people see it, but almost everything depends on them. Rare earth magnets are at the heart of electric motors, wind turbines, electric vehicles, smartphones, drones, defense systems, and medical equipment. They are small but irreplaceable components for energy efficiency and technological miniaturization.

The problem has never been just the extraction of these elements, but the control of the subsequent steps: refining, chemical separation, and manufacturing of high-performance magnets. Over the years, this know-how became concentrated in a single country, creating a structural dependency that went unnoticed while supply chains functioned without shocks.

This balance began to crumble when trade tensions, export restrictions, and diplomatic disputes transformed a technical issue into a strategic one. Suddenly, governments and companies realized that the energy and digital transition depended on an extremely fragile bottleneck.

The reaction didn't happen overnight. Plans had been shelved for years, but 2025 acted as a catalyst. The combination of successive warnings and geopolitical instability accelerated decisions that previously seemed expensive, slow, or politically sensitive. The focus shifted from short-term efficiency to long-term industrial security.

In this new context, the United States, the European Union, and strategic partners like Australia began to coordinate industrial policies: direct subsidies, regulatory support, public funding, and incentives to reindustrialize critical parts of the rare earth supply chain.

The most visible change didn't happen in speeches, but on the factory floor. In recent months, industrial projects have begun to move from the planning stage to reality, especially in Europe, where external dependence has become a central political issue. A new rare earth magnet production plant in the north of the continent has come to symbolize this strategic shift.

The project does not promise to replace the former global leader, at least not in the short term. The objective is different: to create redundancy, reduce vulnerabilities, and gain room for maneuver. Instead of breaking with the existing system, the bet is to dilute risk, creating alternative poles capable of supporting critical sectors in times of crisis.

Companies specializing in advanced materials have unexpectedly taken center stage on this chessboard. For years, they operated behind the scenes of the industry. Now, they have become key players in a broader geopolitical strategy. Their executives recognize that the demand does not come from a single sector, but from virtually any technology that needs to convert energy efficiently.

This cross-cutting nature changes everything. It's not just about electric cars or renewable energy, but about the basic infrastructure of the modern economy. Therefore, governments have begun to treat rare earth magnets in the same way as semiconductors: as strategic assets, not as mere commodities.

Even so, no one is talking about total independence. The dominance accumulated over decades does not disappear quickly. Complex supply chains require time, scale, and highly specialized human capital. What is at stake is a new equilibrium—less concentrated, more resilient, and politically predictable.

This movement is already beginning to influence investment decisions, trade agreements, and industrial strategies. For the first time in a long time, the almost absolute control of this market faces concrete, albeit partial, alternatives. And this, in itself, is already a game-changer.

The scenario that is emerging is not one of abrupt replacement, but of a gradual redistribution of power. A silent, technical, and slow process—exactly the type of change that usually goes unnoticed until its effects become impossible to ignore.

mundophone


TECH


Big tech blocks California data center rules, leaving only a study requirement

Tools that power artificial intelligence devour energy. But attempts to shield regular Californians from footing the bill in 2025 ended with a law requiring regulators to write a report about the issue by 2027.

If that sounds pretty watered down, it is. Efforts to regulate the energy usage of data centers — the beating heart of AI — ran headlong into Big Tech, business groups and the governor. 

That’s not surprising given that California is increasingly dependent on big tech for state revenue: A handful of companies pay upwards of $5 billion just on income tax withholding.

The law mandating the report is the lone survivor of last year’s push to rein in the data-center industry. Its deadline means the findings won’t likely be ready in time for lawmakers to use in 2026. The measure began as a plan to give data centers their own electricity rate, shielding households and small businesses from higher bills.

It amounts to a “toothless” measure, directing the utility regulator to study an issue it already has the authority to investigate, said Matthew Freedman, a staff attorney with The Utility Reform Network, a ratepayer advocate.

Data centers’ enormous electricity demand has pushed them to the center of California’s energy debate, and that’s why lawmakers and consumer advocates say new regulations matter.

For instance, the sheer amount of energy requested by data centers in California is prompting questions about costly grid upgrades even as speculative projects and fast-shifting AI loads make long-term planning uncertain. Developers have requested 18.7 gigawatts of service capacity for data centers, more than enough to serve every household in the state, according to the California Energy Commission.

But the report could help shape future debates as lawmakers revisit tougher rules and the CPUC considers new policies on what data centers pay for power – a discussion gaining urgency as scrutiny of their rising electricity costs grows, he said.

“It could be that the report helps the Legislature to understand the magnitude of the problem and potential solutions,”  Freedman said. “It could also inform the CPUC’s own review of the reasonableness of rates for data center customers, which they are likely to investigate.”

State Sen. Steve Padilla, a Democrat from Chula Vista, says that the final version of his law “was not the one we would have preferred,” agreeing that it may seem “obvious” the CPUC can study data center cost impacts.  The measure could help frame future debates and at least “says unequivocally that the CPUC has the authority to study these impacts” as demand from data centers accelerates, Padilla added.

“(Data centers) consume huge amounts of energy, huge amounts of resources, and at least in the near future, we’re not going to see that change,” he said.

Earlier drafts of Padilla’s measure went further, requiring data centers to install large batteries to support the grid during peak demand and pushing utilities to supply them with 100% carbon-free electricity by 2030 — years ahead of the state’s own mandate. Those provisions were ultimately stripped out.

How California’s first push to regulate data centers slipped away...California’s bid to bring more oversight to data centers unraveled earlier this year under industry pressure, ending with Gov. Gavin Newsom’s veto of a bill requiring operators to report their water use. Concerns over the bills reflected fears that data-center developers could shift projects to other states and take valuable jobs with them.

A September Stanford report on powering California data centers said the state risks losing property-tax revenue, union construction jobs and “valuable AI talent” if data-center construction moves out of state.

The argument that increased regulation drives businesses or dollars out of California has been raised across industries for decades, and it often does not hold up to careful or long-term scrutiny.

In the face of this opposition, two key proposals stalled in the Legislature’s procedural churn. Early in the session, Padilla put a separate clean-power incentives proposal for data centers on hold until 2026. Later in the year, an Assembly bill requiring data centers to disclose their electricity use was placed in the Senate’s suspense file – where appropriations committees often quietly halt measures.

Newsom, who has often spoken of California’s AI dominance, echoed the industry’s competitiveness worries in his veto message of the water-use reporting requirement. The governor said he was reluctant to impose requirements on data centers, “without understanding the full impact on businesses and the consumers of their technology.”

Despite last year’s defeats, some lawmakers say they will attempt to tackle the issue again.

Padilla plans to try again with a bill that would add new rules on who pays for data centers’ long-term grid costs in California, while Assemblymember Rebecca Bauer-Kahan — a Democrat from San Ramon — will revisit her electricity-disclosure bill.

Big Tech warns of job losses but one advocate sees an opening...After blocking most measures last year — and watering down the lone energy-costs bill — Big Tech groups say they’ll revive arguments that new efforts to regulate data centers could cost California jobs.

At a CalMatters event in November, Silicon Valley Leadership Group CEO Ahmad Thomas argued that California must compete to attract investments like the $40 billion data-center project Texas secured with Google. Any policy making deals like that tougher next year would provoke conflict, he added.

“When we get to the details of what our regulatory regime looks like versus other states, or how we can make California more competitive…that’s where sometimes we struggle to find that happy medium,” he said.

Despite having more regulations than some states, California has for some time ranked as the world's fourth- or fifth-largest economy, suggesting that the Golden State is very competitive.

Dan Diorio, vice president of state policy for the Data Center Coalition, another industry lobbying group, said new requirements on data centers should apply to all other large electricity users.

“To single out one industry is not something that we think would set a helpful precedent,” Diorio said. “We’ve been very consistent with that throughout the country.”

Critics say job loss fears are overblown, noting California built its AI sector without the massive hyperscale facilities that typically gravitate to states with ample, cheaper land and streamlined permitting.

Data-center locations — driven by energy prices, land and local rules — have little to do with where AI researchers live, said Shaolei Ren, an AI researcher at UC Riverside.

“These two things are sort of separate, they’re decoupled,” he said.  

Freedman, of TURN, said lawmakers may have a bargaining chip: if developers cared about cheaper power, they wouldn’t be proposing facilities in a state with high electric rates. That means speed and certainty may be the priority, giving lawmakers the space to potentially offer quicker approvals in exchange for developers covering more grid costs. 

“There’s so much money in this business that the energy bills – even though large – are kind of like rounding errors for these guys,” Freedman said. “If that’s true, then maybe they shouldn’t care about having to pay a little bit more to ensure that costs aren’t being shifted to other customers.”

Alejandro Lazo--https://muckrack.com/alejandrolazo
