Friday, December 5, 2025

 

DIGITAL LIFE


China is increasing its use of artificial intelligence to strengthen social control mechanisms, expanding the reach of censorship and surveillance over the population

This conclusion comes from a new report by the Australian Strategic Policy Institute (ASPI), which details how the Beijing government has integrated cutting-edge technologies into the country's digital monitoring system and judicial apparatus.

According to the study, artificial intelligence is being used to make the tracking and suppression of politically sensitive content more efficient. This practice, already a hallmark of the Chinese censorship apparatus, gains speed and precision with systems capable of scanning large volumes of data, identifying keywords, and reducing the dissemination of messages critical of the government or leader Xi Jinping.

Nathan Attrill, senior analyst at ASPI and co-author of the research, told the Washington Post that the technology does not inaugurate a new model of repression, but intensifies already established methods. According to him, AI allows the Chinese Communist Party to monitor "more people, more closely, and with less effort," deepening control patterns previously executed mostly by human teams.

The technological dispute with the United States also influences this scenario. While the US and China compete for global leadership in artificial intelligence, Beijing is expanding the domestic use of the technology, relying on the collaboration of large national companies such as Baidu, Tencent, and ByteDance. These companies receive access to immense sets of government data, which accelerates the development of more advanced models.

The report highlights that the companies act as "assistant sheriffs," responsible for moderating content on a scale that goes well beyond what platforms in the West normally take on. While social networks in other countries typically remove only illegal material, such as pornography, Chinese firms must also eliminate content that could irritate the central government. Tools such as Tencent's content security audit system, for example, assign risk scores to users and track repeat offenders across multiple platforms.

This surveillance industry has also become a business. Companies like Baidu market automated moderation systems to other firms, expanding the reach of censorship and, according to the researchers, placing "market principles in service of authoritarianism." Despite increasing automation, the adopted model is hybrid: human teams remain essential to interpret political nuances, identify codes used to circumvent supervision, and compensate for technical flaws.

Surveillance is even more intense over ethnic minorities, such as Uyghurs and Tibetans, targets of expanded monitoring in recent years. Because language barriers hindered tracking, the government is investing in the development of language models specific to regional languages. A laboratory created in Beijing in 2023 works with languages such as Mongolian, Tibetan, and Uyghur, with the aim of analyzing public opinion and promoting what the government calls "ethnic unity."

The report also details how AI is being incorporated into the criminal justice system. The technology already appears in the identification of suspects at protests through facial recognition, in the screening of judicial documents by "smart courts," and even in prisons said to be capable of predicting the emotional state of detainees. Researchers who had access to one of these systems in Shanghai warned that the tools could compromise judicial impartiality by introducing multiple "black boxes" that are impossible to audit.

Experts point out that the use of AI in the Chinese justice system has gone through distinct phases: it began with enthusiasm and exaggerated expectations, followed by a period of caution, and is now experiencing a stage of reflection on limitations and risks. The accelerated adoption of the technology, encouraged by guidelines from the central government, often leads local authorities and companies to exaggerate its capabilities to obtain contracts, making it difficult to measure the real impact of these systems.

The report concludes that, despite advances and efficiency in some processes, the expansion of AI in China raises profound concerns about privacy, transparency, and discrimination. For researchers, the lack of clarity about how the models work and the risk of inherent bias make the ecosystem particularly dangerous, even more so because Chinese companies have global ambitions and export these systems to other countries.

Key developments include:

Accelerated Monitoring: AI allows the Chinese government to scan and analyze vast volumes of digital content in real time, identifying and suppressing politically sensitive material much faster than manual methods.

Predictive Control: Authorities are using algorithms to analyze patterns of online behavior and sentiment, aiming to anticipate and neutralize dissent or protests before they occur, which experts describe as "preventive repression."

Minority Surveillance: Reports indicate that the government is developing specific AI tools to deepen the monitoring of ethnic minorities, such as Uyghurs and Tibetans, including through language models in their native languages, both inside and outside China.

Integration into the Judicial System: AI is being implemented in courts and prisons to assist in processes, from drafting documents to recommending verdicts and sentences, raising serious questions about impartiality and accountability.

Multimodal Censorship: In addition to text, new Chinese AI systems are capable of censoring politically sensitive images and videos, adding a new layer to the country's "Great Firewall."

Collaboration with Technology Companies: Large Chinese technology companies, such as Tencent, Baidu, and ByteDance, are developing and selling AI-based censorship platforms to other organizations, creating a domestic market for these control tools.

These actions have led to discussions and restrictions on the use of Chinese technologies in other countries, such as the United States and the European Union, due to concerns about privacy and alignment with the values of the Chinese Communist Party. China, in turn, advocates for the creation of a global organization for AI governance, but emphasizes the need for the technology to respect "fundamental socialist values."

mundophone


TECH


Meta changed its name for the metaverse - now it's cutting 30% of it

Remember when Facebook changed its name to Meta to signal its big bet on the metaverse? After the hype of 2021, the promise of a technological "revolution" lost momentum, and enthusiasm gave way to skepticism about the idea of migrating life to a virtual environment. Now, those who fueled the boom are backing down: Meta is preparing cuts of up to 30% in its initiatives related to the metaverse.

According to Bloomberg, Facebook's parent company – which once treated the metaverse as the company's future – has decided to significantly reduce the area's budget next year. The cut affects projects such as the Horizon Worlds virtual world and the Quest virtual reality division.

Sources interviewed for the report say that a reduction of this magnitude could include layoffs as early as January 2026, although there is no final decision yet.

The cuts are part of Meta's budget planning for 2026. In meetings held last month, Mark Zuckerberg reportedly instructed executives to seek 10% reductions across all areas – a request that, according to Bloomberg sources, has been repeated in similar budget cycles in recent years.

"The metaverse area has been asked to make deeper cuts this year, as Meta has not seen the level of competition for the technology in the industry that it expected," Bloomberg wrote, citing people familiar with the matter.

Reuters reported that Meta's augmented reality unit has already burned through more than $60 billion since 2020.

While reducing its bet on the metaverse, the company is accelerating investments in artificial intelligence, developing models, chatbots, and a range of products, including Meta's Ray-Ban smart glasses. In 2025, the company launched the Superintelligence Lab after investing $14.3 billion to acquire 49% of Scale AI. As part of the agreement, Alexandr Wang, founder and CEO of the startup, left his position at Scale to lead Meta's new lab, dedicated to advanced AI initiatives.

Meta is expected to further reduce its investments in building the metaverse, according to a Bloomberg report published on Thursday (4). Sources close to the matter say that the budget planned for 2026 includes cuts of up to 30% in projects linked to the initiative.

Part of this reduction is expected to manifest itself in layoffs. According to the report, the first layoffs could occur as early as January, especially affecting teams involved in virtual reality and augmented reality work. Meta, however, did not comment on the matter.

A large part of the metaverse-related projects falls under the Reality Labs division, which is responsible for both the Quest virtual reality headsets and the Ray-Ban smart glasses. This was the area that concentrated the company's biggest bet when the metaverse gained momentum in 2021 — to the point of justifying the change of the name Facebook to Meta.

Since then, however, the topic has lost relevance in the media and the market. The initiative has started to register losses in the billions, while the company has turned its attention to research and development in artificial intelligence, now present in virtually the entire Meta ecosystem.

Although the original report doesn't mention changes to the release schedule, it's possible that the reduction will impact plans related to Meta Quest 4. Since 2024, there have been rumors that the new line would be released in different versions.

According to Bloomberg, the prospect of a slowdown in metaverse investments was well received by investors: Meta's shares rose about 4% after the announcement of the possible strategic change.

Thursday, December 4, 2025

 

TECH


Software platform helps users find the best hearing protection

The world is loud. A walk down the street bombards one's ears with the sound of engines revving, car horns blaring, and the steady beeps of pedestrian crossings. While smartphones now alert users to excessive sound and public awareness of noise exposure grows, few tools help people take protective action.

The Hearing Protection Optimization Tool (HPOT) is a software platform developed by researchers at Applied Research Associates, Inc. to help users select the most appropriate hearing protection device (HPD) for their specific noise environment. It moves beyond traditional Noise Reduction Ratings (NRR) to provide a more personalized, data-driven selection process.

How the HPOT Works...The tool simplifies complex acoustic and psychoacoustic factors into clear, visual information that helps users compare different HPDs. It incorporates the following steps:

Environmental Assessment: Users input information about their specific noise environment, such as sound intensity and exposure duration. If exact measurements are unavailable, the platform can estimate exposure levels based on user descriptions.

Algorithmic Analysis: The software uses algorithmic analyses of different HPD benefits to match users with suitable, regulatory-approved devices from a database.

Customization: Users can adjust inputs for factors like communication needs, mobility, cost, and power requirements to visualize trade-offs and optimize their selection based on personal preferences (see the sketch after this list).
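As a rough illustration of the kind of trade-off scoring such a tool might use, the TypeScript sketch below ranks two hypothetical devices by a weighted score. The device properties, the weights, and the scoring formula are invented for this example; they are not ARA's actual HPOT algorithm.

```typescript
// Hypothetical sketch of weighted trade-off scoring for hearing protection
// devices (HPDs). Illustrative only; not the real HPOT algorithm.

interface HPD {
  name: string;
  attenuationDb: number;         // assumed attenuation in the user's environment
  speechIntelligibility: number; // 0..1, higher preserves communication better
  costUsd: number;
  batteryRequired: boolean;
}

interface Weights {
  protection: number;
  communication: number;
  cost: number;
  power: number;
}

// Score a device against the user's noise level, target level, and preferences.
function scoreDevice(d: HPD, noiseLevelDb: number, targetDb: number, w: Weights): number {
  // How far below the target the protected level (noise minus attenuation) lands.
  const protectedLevel = noiseLevelDb - d.attenuationDb;
  const protection = Math.max(0, 1 - Math.max(0, protectedLevel - targetDb) / 20);
  const cost = 1 - Math.min(d.costUsd, 500) / 500; // cheaper scores higher
  const power = d.batteryRequired ? 0 : 1;         // battery-free scores higher
  return (
    w.protection * protection +
    w.communication * d.speechIntelligibility +
    w.cost * cost +
    w.power * power
  );
}

const candidates: HPD[] = [
  { name: "Foam earplug", attenuationDb: 30, speechIntelligibility: 0.4, costUsd: 1, batteryRequired: false },
  { name: "Electronic earmuff", attenuationDb: 24, speechIntelligibility: 0.8, costUsd: 150, batteryRequired: true },
];

const weights: Weights = { protection: 0.5, communication: 0.3, cost: 0.1, power: 0.1 };
const ranked = [...candidates].sort(
  (a, b) => scoreDevice(b, 100, 85, weights) - scoreDevice(a, 100, 85, weights)
);
console.log(ranked.map((d) => d.name)); // best-matching device first
```

Changing the weights, for example raising the communication weight for a user who must hear shouted warnings, reorders the ranking; that is the kind of trade-off visualization the tool describes.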

Benefits and Applications...While initially developed for military use, the creators envision the HPOT being useful for a wide range of applications, from workplace safety to personal use (e.g., concerts).

Key benefits include:

Personalized Selection: It helps users find the optimal fit and protection level for their unique needs, rather than relying solely on a generic NRR.

Improved Understanding: By translating complex science into usable information, it empowers users to make smarter decisions about their hearing health.

Integration with Fit Testing: The concept aligns with the growing recommendation from organizations like the National Institute for Occupational Safety and Health (NIOSH) to use individual, quantitative fit testing (QNFT) to measure a Personal Attenuation Rating (PAR) for each worker, ensuring the device actually works as expected in the real world.

Introducing a new hearing protection tool...To address this gap, Santino Cozza and a team from Applied Research Associates, Inc. developed the Hearing Protection Optimization Tool (HPOT). HPOT was designed to move beyond traditional noise reduction ratings and highlight performance characteristics that matter in real-world conditions.

This user-friendly software platform, which draws on years of research and operational insight, helps people select the appropriate hearing protection device (HPD) for their specific environment.

Cozza presented the software at the Sixth Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan, running Dec. 1–5 in Honolulu, Hawaii.

The HPOT platform in use. Credit: Shebly Wrather

How HPOT works and its benefits..."The underlying science of how humans perceive sound is complex, drawing from acoustics, psychology, and physiology," said Cozza. "We designed HPOT to translate that into something usable, empowering smarter, more personalized hearing protection."

HPOT asks users to share basic information about their noise environment, such as sound intensity and exposure duration. If measurements aren't available, the platform estimates exposure levels based on users' descriptions of their setting.
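For context on what such an estimate can look like, the sketch below applies the standard NIOSH-style rule that allowable exposure time halves for every 3 dB above 85 dBA. It is a generic occupational-hearing calculation, not necessarily the exact formula HPOT uses.

```typescript
// NIOSH-style exposure estimate: the recommended limit is 85 dBA for 8 hours,
// and allowable time halves for every 3 dB above that.

// Maximum recommended exposure time (hours) at a given A-weighted level.
function allowableHours(levelDbA: number): number {
  return 8 / Math.pow(2, (levelDbA - 85) / 3);
}

// Noise "dose" as a percentage of the daily limit for a given exposure.
function dosePercent(levelDbA: number, hours: number): number {
  return (hours / allowableHours(levelDbA)) * 100;
}

console.log(allowableHours(94).toFixed(2)); // ~1 hour allowed at 94 dBA
console.log(dosePercent(94, 2).toFixed(0)); // 2 hours at 94 dBA = ~200% of the daily limit
```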

By combining noise exposure levels with algorithmic analyses of the benefits of different HPDs, HPOT matches users with a database of suitable, regulatory-approved HPDs. It translates complex acoustic and psychoacoustic factors and calculations, like insertion loss, speech intelligibility, and sound localization, into clear visuals that help users directly compare HPDs.

Users can toggle inputs for communication needs, mobility, cost, and power requirements to visualize trade-offs and optimize HPD selection for their preferences.

Expanding applications and future updates...While HPOT was initially developed to support military hearing protection decisions, Cozza sees its utility as reaching far beyond that.

"Whether you're a hearing conservationist protecting workers, an audiologist trying to stay current with new technologies, or just someone choosing earplugs for a concert, HPOT was built to help," he said.

The team is currently developing advanced updates for the platform to widen its relevance, including support for impulse noise environments and integrating double hearing protection.

"HPOT is a blueprint for modernizing how personal protective equipment is selected," Cozza said. "We envision a future where intuitive, data-driven tools exist across all categories. Our goal is to simplify those processes using the same science-to-software approach that powers HPOT."

Provided by Acoustical Society of America 


DIGITAL LIFE


To make AI more fair, tame complexity, suggest researchers

In April, OpenAI's popular ChatGPT hit a milestone of a billion active weekly users, as artificial intelligence continued its explosion in popularity.

But with that popularity has come a dark side. Biases in AI's models and algorithms can actively harm some of its users and promote social injustice. Documented biases have led to different medical treatments due to patients' demographics and corporate hiring tools that discriminate against female and Black candidates.

New research from Texas McCombs points to a previously unexplored source of AI bias and suggests ways to correct for it: complexity.

The study, "Algorithmic Social Injustice: Antecedents and Mitigations," is published in MIS Quarterly.

"There's a complex set of issues that the algorithm has to deal with, and it's infeasible to deal with those issues well," says Hüseyin Tanriverdi, associate professor of information, risk, and operations management. "Bias could be an artifact of that complexity rather than other explanations that people have offered."

With John-Patrick Akinyemi, a McCombs Ph.D. candidate in information, risk, and operations management, Tanriverdi studied a set of 363 algorithms that researchers and journalists had identified as biased. The algorithms came from a repository called AI, Algorithmic, and Automation Incidents and Controversies.

The researchers compared each problematic algorithm with one that was similar in nature but had not been called out for bias. They examined not only the algorithms but also the organizations that created and used them.

Prior research has assumed that bias can be reduced by making algorithms more accurate. But that assumption, Tanriverdi found, did not tell the whole story. He found three additional factors, all related to a similar problem: not properly modeling for complexity.

Ground truth. Some algorithms are asked to make decisions when there's no established ground truth: the reference against which the algorithm's outcomes are evaluated. An algorithm might be asked to guess the age of a bone from an X-ray image, even though in medical practice, there's no established way for doctors to do so.

In other cases, AI may mistakenly treat opinions as objective truths—for example, when social media users are evenly split on whether a post constitutes hate speech or protected free speech.

AI should only automate decisions for which ground truth is clear, Tanriverdi says. "If there is not a well-established ground truth, then the likelihood that bias will emerge significantly increases."

Real-world complexity. AI models inevitably simplify the situations they describe. Problems can arise when they miss important components of reality.

Tanriverdi points to a case in which Arkansas replaced home visits by nurses with automated rulings on Medicaid benefits. It had the effect of cutting off disabled people from assistance with eating and showering.

"If a nurse goes and walks around to the house, they will be able to understand more about what kind of support this person needs," he says. "But algorithms were using only a subset of those variables, because data was not available on everything.

"Because of omission of the relevant variables in the model, that model was no longer a good enough representation of reality."

Stakeholder involvement. When a model serving a diverse population is designed mostly by members of a single demographic, it becomes more susceptible to bias. One way to counter this risk is to ensure that all stakeholder groups have a voice in the development process.

By involving stakeholders who may have conflicting goals and expectations, an organization can determine whether it's possible to meet them all. If it's not, Tanriverdi says, "It may be feasible to reach compromise solutions that everyone is OK with."

The research concludes that taming AI bias involves much more than making algorithms more accurate. Developers need to open up their black boxes to account for real-world complexities, input from diverse groups, and ground truths.
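As a hedged illustration of why accuracy alone does not tell the whole story, the sketch below uses made-up data to compute an overall accuracy figure next to a simple group-disparity measure. It is not the methodology of the MIS Quarterly study, only a toy example of the general point.

```typescript
// Illustrative only: a classifier can look "accurate" overall while treating
// two groups very differently. Made-up data; not the study's methodology.

interface Row {
  group: "A" | "B";
  label: number;      // ground-truth outcome (1 = qualified)
  predicted: number;  // model decision (1 = approved)
}

const data: Row[] = [
  { group: "A", label: 1, predicted: 1 },
  { group: "A", label: 0, predicted: 0 },
  { group: "A", label: 1, predicted: 1 },
  { group: "B", label: 1, predicted: 0 },
  { group: "B", label: 0, predicted: 0 },
  { group: "B", label: 1, predicted: 1 },
];

// Overall accuracy: fraction of decisions matching the ground truth.
const accuracy =
  data.filter((r) => r.label === r.predicted).length / data.length;

// Demographic parity difference: gap in approval rates between groups.
function approvalRate(group: "A" | "B"): number {
  const rows = data.filter((r) => r.group === group);
  return rows.filter((r) => r.predicted === 1).length / rows.length;
}
const parityGap = approvalRate("A") - approvalRate("B");

console.log(accuracy.toFixed(2));  // 0.83 - looks fine on its own
console.log(parityGap.toFixed(2)); // 0.33 - group A approved far more often
```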

"The factors we focus on have a direct effect on the fairness outcome," Tanriverdi says. "These are the missing pieces that data scientists seem to be ignoring."

Provided by University of Texas at Austin

Wednesday, December 3, 2025

 

DIGITAL LIFE


Passkeys vs. Passwords: What's the difference, and which offers better security?

Since the inception of the internet, website and app developers have relied heavily on passwords as a means of protecting user accounts. As hackers continue to develop more sophisticated techniques to circumvent security guardrails, however, it has become easier for passwords to be cracked, especially with the help of powerful GPUs and AI assistance. A recent study reported that some GPUs could crack passwords with as many as 10 characters in a second or less. The vulnerability of user accounts protected by passwords has motivated many companies to explore alternative methods for protecting user accounts. One such alternative is passkeys. But do passkeys really offer better security?

The Difference Between Passkeys And Passwords...Passkeys differ from passwords in a fundamental way. Unlike passwords, which authenticate users with a memorized string of numbers, letters, and special characters, passkeys let users access accounts with a PIN, facial recognition, or a fingerprint, so there is no string of characters to memorize.

Both passwords and passkeys can incorporate multi-factor authentication (MFA). Passkeys protect users with built-in MFA, which requires you to prove at least two things. First, that you can access a device where your private key is stored, and second, that you can unlock the device or account with your biometric information or PIN. Depending on the design, the use of passwords sometimes requires MFA, which typically prompts users to input a code that is automatically sent via email, SMS, or an authentication app.

Do Passkeys Really Offer Better Security? Which is safer, password MFA or passkey MFA? Let's say you've been lured to a fake website that mimics the interface of one of your social media accounts. If you input your password, malicious actors can steal it, capture your MFA code in real time, and use those credentials to access your actual account. For experienced hackers, this is relatively easy. We have reported how attackers circumvent MFA restrictions by using sophisticated malware to create the illusion of a normal login process.

However, with a passkey, the outcome is different. Even if a bad actor successfully lures you into using your PIN, facial ID or fingerprint on a fake website, your device will detect that the site is fake, making it difficult or even impossible to steal your credentials.

To understand how passkeys identify fake sites, it's helpful to know about a process developers call "domain binding." When you create a passkey for a site, a public and private key are generated and bound to that site's domain. Unlike humans who may sometimes fail to differentiate between URLs such as Hothadware.com and Hothardware.com, your device will never release the private key needed for passkey authentication if the URL is not exactly the same. The public key is typically stored on a server, and the private key is usually kept locally on your device. In the event of a data breach on a company's server, hackers can successfully access your public key; however, this will be useless for them since they will also need to access the private key, which is safely stored on your device. As such, if a company suffers a data breach, it will not compromise your account.
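For readers who want to see where that binding happens, the browser-side sketch below uses the standard WebAuthn API (navigator.credentials.create). The rp.id, challenge, and user details here are illustrative values; in a real deployment the challenge and account data come from the site's server.

```typescript
// Browser-side sketch of passkey creation with the WebAuthn API. The key pair
// the authenticator generates is bound to the relying party ID ("rp.id"), so
// the browser will only offer the credential again on that exact domain.

async function registerPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the server
    rp: { id: "hothardware.com", name: "HotHardware" },     // the domain the key is bound to
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)),
      name: "reader@example.com",
      displayName: "Example Reader",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],      // ES256
    authenticatorSelection: { userVerification: "required" }, // PIN or biometric unlock
  };
  // On "hothadware.com" (note the typo) the browser will refuse to use this
  // credential, because the page's origin does not match the bound rp.id.
  return navigator.credentials.create({ publicKey });
}
```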


Although a PIN can be used to activate a passkey, it serves a different purpose than a traditional password. When you use a PIN to authenticate with a passkey, it simply unlocks your private key, which is then combined with the public key to complete the authentication process. Unlike passwords, which are stored on servers as hashed values that can be exposed in the event of a cyberattack, your PIN and private keys are never stored on a company's server. Only your device knows your private key; it will remain unknown to everyone, including you, so it's incredibly difficult for it to be compromised. 
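A simplified server-side contrast, sketched below in Node.js, shows the difference in what is stored: a salted hash for passwords versus only a public key for passkeys. Real WebAuthn verification also checks the origin, relying-party hash, and signature counter, so treat this only as an outline of the idea.

```typescript
// Simplified contrast between what a server holds for passwords vs. passkeys.
// With passwords, a breach exposes hashes that can be attacked offline; with
// passkeys, the server only ever stores a public key.

import { scryptSync, timingSafeEqual, verify } from "node:crypto";

// Password login: hash what the user typed and compare with the stored hash.
function checkPassword(typed: string, salt: Buffer, storedHash: Buffer): boolean {
  const hash = scryptSync(typed, salt, 64);
  return timingSafeEqual(hash, storedHash);
}

// Passkey login: the user's device signs a server challenge with a private key
// it never reveals; the server only verifies the signature with the public key.
function checkPasskeyAssertion(
  signedData: Buffer,   // challenge plus client/authenticator data
  signature: Buffer,    // produced on the user's device
  publicKeyPem: string  // the only credential material the server stores
): boolean {
  return verify("sha256", signedData, publicKeyPem, signature);
}
```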

Final Thoughts: Passkeys vs. Passwords...Companies like Google, Apple, and Microsoft have embraced and promoted the use of passkeys. In April 2025, Microsoft optimized its login experience for the use of passkeys. While we are not suggesting that passkeys are 100% secure, it is clear that they are generally safer than passwords, as they protect users from common social engineering techniques deployed by hackers.

by Victor Awogbemila

 

DIGITAL LIFE


Tech moguls enter the media to control the narrative

Heads of the world's largest technology companies have become frequent figures in podcasts and programs favorable to them. Some companies have even started their own blogs and channels as a way to project a positive image.

This trend was noted by the British newspaper The Guardian in an article published on Saturday (November 29). "Heads of the largest technology companies, including Mark Zuckerberg, Elon Musk, Sam Altman, Satya Nadella and others, have participated in long and comfortable interviews in recent months," notes reporter Nick Robins-Early.

These appearances usually yield headlines that highlight the disruptive nature of the current wave of artificial intelligence. Satya Nadella, CEO of Microsoft, predicted that AI agents will replace SaaS in an interview for the BG2 Pod.

The BG2 Pod is presented by two venture capital investors. Brad Gerstner is CEO and founder of Altimeter Capital, one of OpenAI's investors and a shareholder in Meta and Nvidia, while Bill Gurley is a partner at Benchmark, which funds startups founded by former OpenAI executives.

Speaking of OpenAI, CEO Sam Altman also gave his opinion that Generation Z is privileged to live in the AI age on the Huge If True podcast, which defines itself as "an optimistic show about science and technology" and "an antidote to sadness and pessimism."

Big tech companies are betting on their own blogs and magazines...In some cases, big tech companies and investors are cutting out the middlemen. Andreessen Horowitz, one of the largest venture capital firms in Silicon Valley, launched its blog on Substack, where it presents itself as an "independent voice" building a direct relationship with the public.

Palantir, a technology company that develops security solutions, founded a magazine called Republic. According to The Guardian, it mimics the style of academic publications like Foreign Affairs.

“Many people who shouldn’t have a platform do. And many who should, don’t,” says an editorial signed by company executives. Examples of Republic's content include articles against copyright laws and in favor of cooperation with the military.

As the Guardian notes, these initiatives echo a sentiment among technology companies: specialized magazines and websites have become increasingly harsh and critical in their coverage of the sector.

And, of course, we can't ignore Elon Musk, who bought Twitter and renamed it X. The platform remains open to any user, but there have been some symptomatic episodes: Grok, the AI integrated into the social network, has described its owner as having the intelligence of Leonardo da Vinci and the physical conditioning of LeBron James.

Americans disapprove of big tech, CEOs and AI...Meanwhile, research shows that the United States public has predominantly negative views of technology companies, social networks and artificial intelligence.

According to data from the Pew Research Center, 78% of Americans believe that social media companies have more power and influence in politics than they should, and 64% believe that the platforms have had a negative impact on the country.

Pessimism reappears when the subject is AI. Among Americans, 53% believe that this technology will harm creativity. The view is also unfavorable regarding relationships, difficult decisions, and problem-solving—in this last aspect, there is some optimism, with 29% indicating that AI will improve this skill.

The Tech Oversight Project, in turn, reveals that the CEOs of big tech companies are personally disapproved of by the population. The worst case is Mark Zuckerberg: 74% of those interviewed have a negative opinion of him, 59 percentage points more than his approval rating.

Even in times of political polarization, the difference between the opinions of Democratic and Republican voters isn't that great—although Donald Trump's supporters are less reticent about technology, there's also significant rejection within this group.

Strategy doesn't always work...These numbers provide some context for initiatives to create communication channels and talk to interviewers who don't ask tough questions.

Alex Karp, CEO of Palantir, recently gave a podcast interview in which he talked about his childhood pet dog and answered questions like "If you were a cupcake, which cupcake would you be?" Meanwhile, there were no questions about the privacy and human rights controversies in which the company has been involved.

But even these situations can have the opposite result to what was expected. In some cases, the content attracts comments that show the public's dissatisfaction.

In Altman's interview with Huge If True, users joked about the lack of content in the conversation. "Now I understand why ChatGPT is like that. This guy can talk for hours without answering a single question," says one viewer.

Another is more critical, calling it "crazy" to say that a 22-year-old recent graduate is lucky to live in the AI age, considering that the technology is destroying junior-level jobs.

Recently, Adam Mosseri, CEO of Instagram, participated in a video on the Track Star channel. The page does a quiz to guess songs with celebrities and ordinary people, mixing game show and interview.

The reaction on Instagram was negative: users took the opportunity to criticize changes to the platform, scam advertisements, accounts banned without reason, and design choices that allegedly make the app addictive.

Not even Track Star itself was spared. "Honestly, this account was more fun when it was random people on the street trying to guess the songs," says the most liked comment on the video.

Reporter: Giovanni Santa Rosa https://www.linkedin.com/in/giosantarosa//

Tuesday, December 2, 2025

 

TECH


AI is growing, but can the planet handle it? The hidden cost of the digital revolution

Artificial intelligence is emerging as a hero in the fight against climate change, but its accelerated growth hides an environmental cost that almost no one sees. Between explosive energy consumption, thirsty data centers, and a new avalanche of electronic waste, the technology that can save us also threatens to worsen the crisis it is trying to solve.

Artificial intelligence is now at the center of major solutions to the planet's challenges. It anticipates catastrophes, optimizes energy systems, creates more sustainable materials, and revolutionizes science. However, behind this promising image, an invisible physical structure is growing that requires gigantic volumes of energy, water, and natural resources. As AI expands, its own environmental footprint is beginning to raise a global alarm.

Training large AI models requires colossal amounts of electricity. To give a sense of scale, large-scale text generation systems have already consumed as much energy in training as dozens of homes use in a year. Today, with daily use on a global scale, this consumption is even greater.

This growth has led tech giants to seek their own energy sources, including contracts with nuclear power plants, to ensure the stable operation of their data centers. The problem is structural: the architecture of current computers constantly transfers data between memory and processor, generating heat and wasting energy. With physical limits approaching, efficiency is no longer growing at the same rate as demand.

Water, minerals, and waste: the invisible chain of AI...The impact of artificial intelligence goes beyond the electricity bill. Data centers use enormous volumes of potable water to cool equipment that operates non-stop. On average, each kilowatt-hour consumed can require liters of clean water, a resource that is increasingly scarce in many regions of the world.
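A back-of-envelope way to reason about this is "water usage effectiveness" (WUE), the liters of water a facility consumes per kilowatt-hour of IT energy. The figures in the sketch below, including the 1.8 L/kWh value, are assumptions chosen only for illustration, not measurements of any particular data center.

```typescript
// Rough water estimate from an assumed WUE (liters of cooling water per kWh).

function coolingWaterLiters(energyKWh: number, wueLitersPerKWh: number): number {
  return energyKWh * wueLitersPerKWh;
}

// Example: a cluster drawing 1 MW continuously for a day (24,000 kWh)
// at an assumed WUE of 1.8 L/kWh.
const litersPerDay = coolingWaterLiters(24_000, 1.8);
console.log(litersPerDay); // 43,200 liters per day under these assumptions
```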

In addition, the manufacture of chips requires rare minerals, often extracted under controversial social and environmental conditions. At the end of the life cycle of this equipment, another problem arises: the explosion of electronic waste, which is difficult to recycle and highly polluting.

Artificial intelligence lives in a profound contradiction. While contributing to the energy transition, precision agriculture, climate monitoring, and disaster prevention, it also increases the pressure on the planet's own resources.

Studies indicate that AI can help advance many global sustainability goals, but it can also delay several of them if its growth goes unchecked. It's not about rejecting the technology, but about recognizing that its impact is ambiguous.

The solutions are not just in more efficient software. The real leap will come from hardware. Research is advancing in in-memory computing, which eliminates unnecessary data movement between memory and processor, in memristors that process and store data simultaneously, in photonic chips that use light instead of electricity, and in analog systems inspired by the workings of the human brain.

These technologies promise to drastically reduce energy consumption and device heat generation.

Governance: the decisive factor for the future of AI...The sustainability of artificial intelligence depends not only on technical innovation, but also on public policies, transparency, and corporate responsibility. "Green algorithm" programs and environmental requirements are already beginning to emerge in some countries.

AI has the potential to profoundly transform the world. But this transformation will only be truly positive if the technology itself is designed not to become yet another threat to the planet's climate balance.

The growth of artificial intelligence (AI) represents a significant challenge to the planet's sustainability due to its high consumption of energy and water resources, especially in the data centers that support it. However, AI also offers potential for energy efficiency solutions.

The Hidden Cost of AI

-Massive Energy Consumption: Data centers, the heart of AI infrastructure, already consume about 1% to 2% of global electricity. Intensive AI use could increase this consumption dramatically; some estimates predict that data center electricity consumption could more than double by 2030. This increase in demand often relies on non-renewable energy sources, contributing to greenhouse gas emissions.

-Intensive Water Use: The cooling systems needed to prevent servers from overheating consume vast amounts of water. Large data centers can use millions of gallons per day, the equivalent of the consumption of a medium-sized city. It is estimated that 20 to 30 questions to a generative AI can consume half a liter of water.

-Resource Extraction and Electronic Waste: The production of AI hardware requires the extraction of valuable minerals, such as lithium and cobalt, which have significant environmental impacts. The rapid planned obsolescence of this equipment also exacerbates the problem of electronic waste.

The Potential of AI for Sustainability...Despite the costs, AI can also be a powerful tool in the search for climate solutions:

-Energy Optimization: AI can optimize energy consumption in buildings, transportation networks, and data center management, increasing operational efficiency and reducing waste.

-Climate Forecasting and Resource Management: AI models can help predict climate patterns, manage water resources more efficiently, and better integrate renewable energy sources into the electricity grid.

mundophone
