Saturday, December 6, 2025

 

DIGITAL LIFE


Mass layoffs at global companies linked to AI

Artificial intelligence (AI) is a technological milestone for humanity, driving significant advances across sectors, but analysts warn that the technology's progress may also be behind mass layoffs at technology companies. Corporations such as Google, Microsoft, and Amazon have announced workforce reductions in the last two years, citing the need to reallocate resources, including jobs, to AI-related initiatives.

Amazon confirmed in October that it plans to reduce its global workforce by "approximately 14,000 positions." The decision fueled a long-standing concern: that artificial intelligence is beginning to replace workers. Hewlett-Packard (HP) announced in late November that it intends to lay off between 4,000 and 6,000 employees, about 10% of its current workforce, by the end of 2028, under an AI adoption plan aimed at increasing productivity.

Other companies in the sector, such as Chegg, Salesforce, and United Parcel Service (UPS), have announced that they are cutting or will cut significant numbers of employees, showing a pattern in the market. The logistics company UPS, for example, has laid off 48,000 people since last year. Chegg, in the education sector, will reduce its workforce by 45%.

César Bergo, an economist and professor at the University of Brasília (UnB), believes that in the next five years some sectors will be "drastically affected," especially those that depend on intellectual production. "There will be an impact in the field of consulting, in design, especially industrial design, also in architecture and engineering. Basically, jobs that depend on intellectual production will suffer a direct impact, because AI will facilitate and speed up this production," he explained.

In the academic's assessment, AI is a revolution that is here to stay. "There's no point in crying about it; it's really necessary to seek ways to improve and acquire knowledge related to this area, because there will be other activities that can be performed without a significant influence from artificial intelligence," he advised.

Luciano Bravo, CEO of Inteligência Comercial, also believes that in the next five years the sectors most affected will be those built on routine, standardized, and highly digitalizable tasks, such as customer service, telemarketing, and technical support, significantly changing the job market.

For Martha Gimbel, executive director of the Budget Lab, an economics research center at Yale University in the US, extrapolating from executive statements made during cuts is "possibly the worst way" to determine the effects of AI on jobs, as the dynamics of each company tend to influence these movements.

In Bravo's assessment, the replacement of workers by AI is, to a large extent, alarmism. For him, AI tends to redefine and complement human work rather than eliminate entire jobs. "Historically, disruptive technologies create new jobs, increase productivity, and displace functions rather than completely destroying them, and this is likely to happen again, requiring adaptation, training, and reorganization of tasks," he explained. 

According to him, the State should guarantee a just transition, creating robust retraining programs and incentives for technological education. The Ministry of Labor and Employment (MTE) was contacted but did not comment on the matter.

by Caetano Yamamoto, Brazil

 

DIGITAL LIFE


Big tech, AI, and digital colonialism: from the fable of progress to the nightmare of global inequality

If the Third Industrial Revolution, in the second half of the 20th century, represented the advent of the Internet and computers, the Fourth would be characterized by the virtually borderless projection of the Internet and the establishment of Artificial Intelligence. Big Tech is the clearest expression of the so-called Industry 4.0 – a revolutionary stage so recent that we are still learning to understand and deal with it. Once again, the fable being told to us closely resembles the fable of the technological revolution and globalization described by Milton Santos. Industry 4.0 is presented to us as the pinnacle of human progress and technological neutrality, as well as a tool for emancipation and freedom. It thus erases the fact that technology is the product of social relations with political and economic purposes that sustain a capitalist structure, replacing living labor with objectified labor and distancing the worker ever further from the fruits of their labor.

“Outsourcing, for example, prevalent in 18th and 19th century England, whereby the working class labored at home, outside the factory space, without any rights and under conditions of unlimited exploitation, has now become the pompous crowdsourcing, also devoid of protective legislation, adulterating the arduous global history of work.”

Big Tech companies have a close link to political power in the United States and serve as a tool for maintaining that country's hegemony. Far from the technical neutrality that the fable of the Fourth Industrial Revolution portrays, they play an important role in preserving the class system and in the (neo)colonial dominance of the United States over the periphery. As expected, they prospered on public resources while privatizing the profits. The US government was the first major investor in Silicon Valley during the Cold War, in an attempt to contain Soviet technological advancement. The Department of Defense was responsible for the research and production behind the Internet, computers, and GPS. The Trump administration merely made the public-private relationship with Big Tech explicit. During the 2024 election campaign, Trump received substantial resources from these companies, which were promptly reciprocated with grants for the technological modernization of government institutions, such as the Pentagon itself.

It was within this intricate relationship between the public and private sectors that OpenAI donated US$1 million to the Trump administration. When it was created in 2015, the company, now one of the world's most valuable, spread the tech fable that Artificial Intelligence and technological advances would be used for the benefit of all humanity. In a document presented to the Trump administration on March 13, 2025, however, the fable is unveiled: it argues for using this same technology to maintain US hegemony and leadership.

It may seem strange that a technology giant would defend the use of this same technology for political, economic, and even military purposes. In fact, we are witnessing, simultaneously, the modernization of US defense technology and the militarization of Big Tech companies in service of the former. Meta, Google, and OpenAI have removed clauses from their corporate policies that prohibited the use of Artificial Intelligence in weaponry. In April 2025, an executive order from Trump established a technological modernization plan for the Defense sector (for example, drone technology) with an investment of US$1 billion—largely directed to Silicon Valley. 

In August 2025, in Virginia, executives from Meta, OpenAI, Thinking Machines Lab, Palantir, and other companies were appointed lieutenant colonels of the Army's newly created Technical Innovation Unit (Detachment 201). As lieutenant colonels, they swore allegiance to the USA in a formal ceremony. Then, in September 2025, the CEOs of Meta, Apple, Google, Microsoft, and OpenAI attended a dinner with Trump at which US$1 trillion in investments in the US was agreed upon.

Friday, December 5, 2025

 

DIGITAL LIFE


China is increasing its use of artificial intelligence to strengthen social control mechanisms, expanding the reach of censorship and surveillance over the population

This conclusion comes from a new report by the Australian Strategic Policy Institute (ASPI), which details how the Beijing government has integrated cutting-edge technologies into the country's digital monitoring system and judicial apparatus.

According to the study, artificial intelligence is being used to make the tracking and suppression of politically sensitive content more efficient. This practice, already a hallmark of the Chinese censorship apparatus, gains speed and precision with systems capable of scanning large volumes of data, identifying keywords, and reducing the dissemination of messages critical of the government or leader Xi Jinping.

Nathan Attrill, senior analyst at ASPI and co-author of the research, told the Washington Post that the technology does not inaugurate a new model of repression, but intensifies already established methods. According to him, AI allows the Chinese Communist Party to monitor "more people, more closely, and with less effort," deepening control patterns previously executed mostly by human teams.

The technological dispute with the United States also influences this scenario. While the US and China compete for global leadership in artificial intelligence, Beijing is expanding the domestic use of the technology, relying on the collaboration of large national companies such as Baidu, Tencent, and ByteDance. These companies receive access to immense sets of government data, which accelerates the development of more advanced models.

The report highlights that the companies act as "assistant sheriffs," responsible for moderating content that goes beyond the scope normally adopted by platforms in the West. While social networks in other countries only remove illegal material, such as pornography, Chinese firms also need to eliminate content that could irritate the central government. Tools such as Tencent's content security audit system, for example, assign risk scores to users and track repeat offenders across various platforms.

This surveillance industry has also become a business. Companies like Baidu market automated moderation systems to other companies, expanding the reach of censorship and, according to researchers, "harnessing market principles in service of authoritarianism." Despite increasing automation, the adopted model is hybrid: human teams remain essential to interpret political nuances, identify codes used to circumvent supervision, and compensate for technical flaws.

Surveillance is even more intense over ethnic minorities, such as Uyghurs and Tibetans, targets of expanded monitoring in recent years. Because language barriers hindered tracking, the government is investing in the development of language models specific to regional languages. A laboratory created in Beijing in 2023 works with languages such as Mongolian, Tibetan, and Uyghur, with the aim of analyzing public opinion and promoting what the government calls "ethnic unity."

The report also details how AI is being incorporated into the criminal justice system. The technology already appears in the identification of suspects at protests through facial recognition, in the screening of judicial documents by "smart courts," and even in prisons capable of predicting the emotional state of detainees. Researchers who had access to one of these systems in Shanghai warned that the tools could compromise judicial impartiality by introducing multiple "black boxes" that are impossible to audit.

Experts point out that the use of AI in the Chinese justice system has gone through distinct phases: it began with enthusiasm and exaggerated expectations, followed by a period of caution, and is now experiencing a stage of reflection on limitations and risks. The accelerated adoption of the technology, encouraged by guidelines from the central government, often leads local authorities and companies to exaggerate its capabilities to obtain contracts, making it difficult to measure the real impact of these systems.

The report concludes that, despite advances and efficiency in some processes, the expansion of AI in China raises profound concerns about privacy, transparency, and discrimination. For researchers, the lack of clarity about how the models work and the risk of inherent bias make the ecosystem particularly dangerous, even more so because Chinese companies have global ambitions and export these systems to other countries.

Key developments include:

Accelerated Monitoring: AI allows the Chinese government to scan and analyze vast volumes of digital content in real time, identifying and suppressing politically sensitive material much faster than manual methods.

Predictive Control: Authorities are using algorithms to analyze patterns of online behavior and sentiment, aiming to anticipate and neutralize dissent or protests before they occur, which experts describe as "preventive repression."

Minority Surveillance: Reports indicate that the government is developing specific AI tools to deepen the monitoring of ethnic minorities, such as Uyghurs and Tibetans, including through language models in their native languages, both inside and outside China.

Integration into the Judicial System: AI is being implemented in courts and prisons to assist in processes, from drafting documents to recommending verdicts and sentences, raising serious questions about impartiality and accountability.

Multimodal Censorship: In addition to text, new Chinese AI systems are capable of censoring politically sensitive images and videos, adding a new layer to the country's "Great Firewall."

Collaboration with Technology Companies: Large Chinese technology companies, such as Tencent, Baidu, and ByteDance, are developing and selling AI-based censorship platforms to other organizations, creating a domestic market for these control tools.

These actions have led to discussions and restrictions on the use of Chinese technologies in other countries, such as the United States and the European Union, due to concerns about privacy and alignment with the values of the Chinese Communist Party. China, in turn, advocates for the creation of a global organization for AI governance, but emphasizes the need for the technology to respect "fundamental socialist values."

mundophone


TECH


Meta changed its name for the metaverse - now it's cutting 30% of it

Remember when Facebook changed its name to Meta to signal its big bet on the metaverse? After the hype of 2021, the promise of a technological "revolution" lost momentum, and enthusiasm gave way to skepticism about the idea of migrating life to a virtual environment. Now, those who fueled the boom are backing down: Meta is preparing cuts of up to 30% in its initiatives related to the metaverse.

According to Bloomberg, Facebook's parent company – which once treated the metaverse as the company's future – has decided to significantly reduce the area's budget next year. The cut affects projects such as the Horizon Worlds virtual world and the Quest virtual reality division.

Sources interviewed for the report say that a reduction of this magnitude could include layoffs as early as January 2026, although there is no final decision yet.

The cuts are part of Meta's budget planning for 2026. In meetings held last month, Mark Zuckerberg reportedly instructed executives to seek 10% reductions across all areas – a request that, according to Bloomberg sources, has been repeated in similar budget cycles in recent years.

"The metaverse area has been asked to make deeper cuts this year, as Meta has not seen the level of competition for the technology in the industry that it expected," Bloomberg wrote, citing people familiar with the matter.

Reuters reported that Meta's augmented reality unit has already burned through more than $60 billion since 2020.

While reducing its bet on the metaverse, the company is accelerating investments in artificial intelligence, developing models, chatbots, and a range of products, including Meta's Ray-Ban smart glasses. In 2025, the company launched the Superintelligence Lab after investing $14.3 billion to acquire 49% of Scale AI. As part of the agreement, Alexandr Wang, founder and CEO of the startup, left his position at Scale to lead Meta's new lab, dedicated to advanced AI initiatives.

Meta is expected to further reduce its investments in building the metaverse, according to a Bloomberg report published on Thursday (4). Sources close to the matter say that the budget planned for 2026 includes cuts of up to 30% in projects linked to the initiative.

Part of this reduction is expected to manifest itself in layoffs. According to the report, the first layoffs could occur as early as January, especially affecting teams involved in virtual reality and augmented reality work. Meta, however, did not comment on the matter.

A large part of the projects related to the metaverse falls under the Meta Reality Labs division, which is responsible for both the Quest virtual reality headsets and the Ray-Ban smart glasses. This was the area that concentrated the company's biggest bet when the metaverse gained momentum in 2021, to the point of justifying the change of the name from Facebook to Meta.

Since then, however, the topic has lost relevance in the media and the market. The initiative has started to register losses in the billions, while the company has turned its attention to research and development in artificial intelligence, now present in virtually the entire Meta ecosystem.

Although the original report doesn't mention changes to the release schedule, it's possible that the reduction will impact plans related to Meta Quest 4. Since 2024, there have been rumors that the new line would be released in different versions.

According to Bloomberg, the prospect of a slowdown in metaverse investments was well received by investors: Meta's shares rose about 4% after the announcement of the possible strategic change.

Thursday, December 4, 2025

 

TECH


Software platform helps users find the best hearing protection

The world is loud. A walk down the street bombards one's ears with the sound of engines revving, car horns blaring, and the steady beeps of pedestrian crossings. While smartphones now alert users to excessive sound and public awareness of noise exposure grows, few tools help people take protective action.

The Hearing Protection Optimization Tool (HPOT) is a software platform developed by researchers at Applied Research Associates, Inc. to help users select the most appropriate hearing protection device (HPD) for their specific noise environment. It moves beyond traditional Noise Reduction Ratings (NRR) to provide a more personalized, data-driven selection process.

How the HPOT Works...The tool simplifies complex acoustic and psychoacoustic factors into clear, visual information that helps users compare different HPDs. It incorporates the following steps, with a rough illustrative sketch after the list:

Environmental Assessment: Users input information about their specific noise environment, such as sound intensity and exposure duration. If exact measurements are unavailable, the platform can estimate exposure levels based on user descriptions.

Algorithmic Analysis: The software uses algorithmic analyses of different HPD benefits to match users with suitable, regulatory-approved devices from a database.

Customization: Users can adjust inputs for factors like communication needs, mobility, cost, and power requirements to visualize trade-offs and optimize their selection based on personal preferences.
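To make the selection flow concrete, here is a minimal Python sketch of this kind of matching. Everything in it is invented for illustration: the device names, attenuation and clarity figures, the 85 dB limit, and the scoring weights are assumptions, not values from HPOT or any regulatory database.

```python
# Illustrative sketch only: device data, the safe-exposure limit, and the weights
# below are made up for the example, not taken from HPOT.
from dataclasses import dataclass

@dataclass
class HPD:
    name: str
    attenuation_db: float   # how much the device lowers the sound level at the ear
    speech_clarity: float   # 0..1, how well speech remains intelligible while worn
    cost_usd: float

CANDIDATES = [
    HPD("Foam earplug",       attenuation_db=30, speech_clarity=0.3, cost_usd=1),
    HPD("Passive earmuff",    attenuation_db=25, speech_clarity=0.4, cost_usd=30),
    HPD("Electronic earmuff", attenuation_db=23, speech_clarity=0.8, cost_usd=150),
]

def rank(noise_level_db: float, safe_limit_db: float = 85.0,
         w_protection: float = 1.0, w_speech: float = 1.0, w_cost: float = 0.5):
    """Score every device that keeps exposure under the limit; callers adjust the
    weights to explore trade-offs between protection, communication, and cost."""
    scored = []
    for hpd in CANDIDATES:
        protected_level = noise_level_db - hpd.attenuation_db
        if protected_level > safe_limit_db:      # drop devices that leave exposure unsafe
            continue
        score = (w_protection * (safe_limit_db - protected_level)
                 + w_speech * 10 * hpd.speech_clarity
                 - w_cost * hpd.cost_usd / 10)
        scored.append((round(score, 1), hpd.name))
    return sorted(scored, reverse=True)

print(rank(noise_level_db=100))                        # protection-first ranking
print(rank(noise_level_db=100, w_speech=3, w_cost=0))  # user who prioritizes communication
```

With the default weights the cheap foam earplug ranks first; raising the speech weight and dropping the cost weight moves the electronic earmuff to the top, which is the kind of trade-off the real tool visualizes.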

Benefits and Applications...Although the HPOT was initially developed for military use, its creators envision it being useful for a wide range of applications, from workplace safety to personal use (e.g., concerts).

Key benefits include:

Personalized Selection: It helps users find the optimal fit and protection level for their unique needs, rather than relying solely on a generic NRR.

Improved Understanding: By translating complex science into usable information, it empowers users to make smarter decisions about their hearing health.

Integration with Fit Testing: The concept aligns with the growing recommendation from organizations like the National Institute for Occupational Safety and Health (NIOSH) to use individual, quantitative fit testing (QNFT) to measure a Personal Attenuation Rating (PAR) for each worker, ensuring the device actually works as expected in the real world.

Introducing a new hearing protection tool...To address this gap, Santino Cozza and a team from Applied Research Associates, Inc. developed the Hearing Protection Optimization Tool (HPOT). HPOT was designed to move beyond traditional noise reduction ratings and highlight performance characteristics that matter in real-world conditions.

This user-friendly software platform, which draws on years of research and operational insight, helps people select the appropriate hearing protection device (HPD) for their specific environment.

Cozza presented the software at the Sixth Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan, running Dec. 1–5 in Honolulu, Hawaii.

The HPOT platform in use. Credit: Shebly Wrather

How HPOT works and its benefits..."The underlying science of how humans perceive sound is complex, drawing from acoustics, psychology, and physiology," said Cozza. "We designed HPOT to translate that into something usable, empowering smarter, more personalized hearing protection."

HPOT asks users to share basic information about their noise environment, such as sound intensity and exposure duration. If measurements aren't available, the platform estimates exposure levels based on users' descriptions of their setting.

By combining noise exposure levels with algorithmic analyses of the benefits of different HPDs, HPOT matches users with a database of suitable, regulatory-approved HPDs. It translates complex acoustic and psychoacoustic factors and calculations, like insertion loss, speech intelligibility, and sound localization, into clear visuals that help users directly compare HPDs.

Users can toggle inputs for communication needs, mobility, cost, and power requirements to visualize trade-offs and optimize HPD selection for their preferences.

Expanding applications and future updates...While HPOT was initially developed to support military hearing protection decisions, Cozza sees its utility as reaching far beyond that.

"Whether you're a hearing conservationist protecting workers, an audiologist trying to stay current with new technologies, or just someone choosing earplugs for a concert, HPOT was built to help," he said.

The team is currently developing advanced updates for the platform to widen its relevance, including support for impulse noise environments and integration of double hearing protection.

"HPOT is a blueprint for modernizing how personal protective equipment is selected," Cozza said. "We envision a future where intuitive, data-driven tools exist across all categories. Our goal is to simplify those processes using the same science-to-software approach that powers HPOT."

Provided by Acoustical Society of America 


DIGITAL LIFE


To make AI more fair, tame complexity, suggest researchers

In April, OpenAI's popular ChatGPT hit a milestone of a billion active weekly users, as artificial intelligence continued its explosion in popularity.

But with that popularity has come a dark side. Biases in AI's models and algorithms can actively harm some of its users and promote social injustice. Documented biases have led to different medical treatments due to patients' demographics and corporate hiring tools that discriminate against female and Black candidates.

New research from Texas McCombs suggests both a previously unexplored source of AI biases and some ways to correct for them: complexity.

The study, "Algorithmic Social Injustice: Antecedents and Mitigations" is published in MIS Quarterly.

"There's a complex set of issues that the algorithm has to deal with, and it's infeasible to deal with those issues well," says Hüseyin Tanriverdi, associate professor of information, risk, and operations management. "Bias could be an artifact of that complexity rather than other explanations that people have offered."

With John-Patrick Akinyemi, a McCombs Ph.D. candidate at IROM, Tanriverdi studied a set of 363 algorithms that researchers and journalists had identified as biased. The algorithms came from a repository called AI Algorithmic and Automation Incidents and Controversies.

The researchers compared each problematic algorithm with one that was similar in nature but had not been called out for bias. They examined not only the algorithms but also the organizations that created and used them.

Prior research has assumed that bias can be reduced by making algorithms more accurate. But that assumption, Tanriverdi found, did not tell the whole story. He found three additional factors, all related to a similar problem: not properly modeling for complexity.

Ground truth. Some algorithms are asked to make decisions when there's no established ground truth: the reference against which the algorithm's outcomes are evaluated. An algorithm might be asked to guess the age of a bone from an X-ray image, even though in medical practice, there's no established way for doctors to do so.

In other cases, AI may mistakenly treat opinions as objective truths—for example, when social media users are evenly split on whether a post constitutes hate speech or protected free speech.

AI should only automate decisions for which ground truth is clear, Tanriverdi says. "If there is not a well-established ground truth, then the likelihood that bias will emerge significantly increases."
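As a rough illustration of that point (a construction for this article, not taken from the study), the snippet below treats a majority label as ground truth only when human raters agree strongly enough; evenly split cases, like the hate speech example above, are flagged for human review instead of automation. The labels and the 80% threshold are arbitrary.

```python
# Hypothetical check: only automate when raters largely agree on the label.
from collections import Counter

def ground_truth_or_none(ratings, min_agreement=0.8):
    """Return the majority label if agreement clears the threshold, else None."""
    label, votes = Counter(ratings).most_common(1)[0]
    return label if votes / len(ratings) >= min_agreement else None

print(ground_truth_or_none(["hate", "hate", "hate", "hate", "free"]))  # 'hate'
print(ground_truth_or_none(["hate", "free", "hate", "free", "free"]))  # None -> leave to humans
```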

Real-world complexity. AI models inevitably simplify the situations they describe. Problems can arise when they miss important components of reality.

Tanriverdi points to a case in which Arkansas replaced home visits by nurses with automated rulings on Medicaid benefits. It had the effect of cutting off disabled people from assistance with eating and showering.

"If a nurse goes and walks around to the house, they will be able to understand more about what kind of support this person needs," he says. "But algorithms were using only a subset of those variables, because data was not available on everything.

"Because of omission of the relevant variables in the model, that model was no longer a good enough representation of reality."

Stakeholder involvement. When a model serving a diverse population is designed mostly by members of a single demographic, it becomes more susceptible to bias. One way to counter this risk is to ensure that all stakeholder groups have a voice in the development process.

By involving stakeholders who may have conflicting goals and expectations, an organization can determine whether it's possible to meet them all. If it's not, Tanriverdi says, "It may be feasible to reach compromise solutions that everyone is OK with."

The research concludes that taming AI bias involves much more than making algorithms more accurate. Developers need to open up their black boxes to account for real-world complexities, input from diverse groups, and ground truths.

"The factors we focus on have a direct effect on the fairness outcome," Tanriverdi says. "These are the missing pieces that data scientists seem to be ignoring."

Provided by University of Texas at Austin

Wednesday, December 3, 2025

 

DIGITAL LIFE


Passkeys vs. Passwords: What's the difference, and which offers better security?

Since the inception of the internet, website and app developers have relied heavily on passwords as a means of protecting user accounts. As hackers continue to develop more sophisticated techniques to circumvent security guardrails, however, it has become easier for passwords to be cracked, especially with the help of powerful GPUs and AI assistance. A recent study reported that some GPUs could crack passwords with as many as 10 characters in a second or less. The vulnerability of user accounts protected by passwords has motivated many companies to explore alternative methods for protecting user accounts. One such alternative is passkeys. But do passkeys really offer better security?

The Difference Between Passkeys And Passwords...Passkeys are different from passwords. Unlike passwords, which enable users to authenticate via a set of numbers, letters, and special characters, or a combination thereof, passkeys allow users to access accounts using a PIN, face recognition, or fingerprint authentication. So you do not need to memorize any string of characters.

Both passwords and passkeys can incorporate multi-factor authentication (MFA). Passkeys protect users with built-in MFA, which requires you to prove at least two things. First, that you can access a device where your private key is stored, and second, that you can unlock the device or account with your biometric information or PIN. Depending on the design, the use of passwords sometimes requires MFA, which typically prompts users to input a code that is automatically sent via email, SMS, or an authentication app.

Do Passkeys Really Offer Better Security? Which is safer, password MFA or passkey MFA? Let's say you've been lured into accessing a fake website that mimics the interface of one of your social media accounts. If you input your password, malicious actors can steal it, capture your MFA code in real time, and use these credentials to access your actual account. For experienced hackers, this can be done relatively easily. We have reported how hackers circumvent MFA restrictions by using sophisticated malware to create the illusion of a normal login process.

However, with a passkey, the outcome is different. Even if a bad actor successfully lures you into using your PIN, facial ID or fingerprint on a fake website, your device will detect that the site is fake, making it difficult or even impossible to steal your credentials.

To understand how passkeys identify fake sites, it's helpful to know about a process developers call "domain binding." When you create a passkey for a site, a public and private key are generated and bound to that site's domain. Unlike humans who may sometimes fail to differentiate between URLs such as Hothadware.com and Hothardware.com, your device will never release the private key needed for passkey authentication if the URL is not exactly the same. The public key is typically stored on a server, and the private key is usually kept locally on your device. In the event of a data breach on a company's server, hackers can successfully access your public key; however, this will be useless for them since they will also need to access the private key, which is safely stored on your device. As such, if a company suffers a data breach, it will not compromise your account.
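The sketch below shows the idea in simplified form. It is not a real WebAuthn implementation; the class and method names are invented, and it assumes only the standard Python cryptography package. The key pair is created for one exact domain, and the device object refuses to release a signature for any other origin, which is why the look-alike URL gets nothing.

```python
# Simplified, hypothetical illustration of passkey-style domain binding.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class DevicePasskey:
    def __init__(self, origin: str):
        self.origin = origin                                  # domain the key pair is bound to
        self._private_key = Ed25519PrivateKey.generate()      # never leaves the device
        self.public_key = self._private_key.public_key()      # shared with the website's server

    def sign_challenge(self, origin: str, challenge: bytes) -> bytes:
        if origin != self.origin:                             # wrong or look-alike domain
            raise ValueError(f"refusing to sign for unknown origin {origin!r}")
        return self._private_key.sign(challenge)

# Registration: the server stores only the public key. Login: it sends a fresh
# challenge and verifies the returned signature with that public key.
passkey = DevicePasskey("hothardware.com")
challenge = os.urandom(32)
signature = passkey.sign_challenge("hothardware.com", challenge)
passkey.public_key.verify(signature, challenge)               # raises InvalidSignature if forged

try:
    passkey.sign_challenge("hothadware.com", challenge)       # the look-alike URL from above
except ValueError as err:
    print(err)   # the device never releases a signature for the wrong domain
```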


Although a PIN can be used to activate a passkey, it serves a different purpose than a traditional password. When you use a PIN to authenticate with a passkey, it simply unlocks your private key, which is then combined with the public key to complete the authentication process. Unlike passwords, which are stored on servers as hashed values that can be exposed in the event of a cyberattack, your PIN and private keys are never stored on a company's server. Only your device knows your private key; it will remain unknown to everyone, including you, so it's incredibly difficult for it to be compromised. 
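A small, hypothetical comparison makes the difference in what the server holds concrete: with a password, the stored hash is enough for offline guessing if it leaks; with a passkey, the server-side record is just the public key from the sketch above, which cannot be replayed or brute-forced into a secret.

```python
# Illustration only: what a breached server gives an attacker in each model.
import hashlib, os

# Password model: the server stores salt + hash; a thief can test guesses offline.
salt = os.urandom(16)
stored_hash = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
for guess in (b"password", b"letmein", b"hunter2"):
    if hashlib.pbkdf2_hmac("sha256", guess, salt, 100_000) == stored_hash:
        print("offline guess succeeded:", guess.decode())

# Passkey model: the server stores only a public key (see the previous sketch).
# There is no hash to brute-force, and neither the PIN nor the private key ever
# reaches the server, so the same breach exposes nothing usable for login.
```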

Final Thoughts: Passkeys vs. Passwords...Companies like Google, Apple, and Microsoft have embraced and promoted the use of passkeys. In April 2025, Microsoft optimized its login experience for the use of passkeys. While we are not suggesting that passkeys are 100% secure, it is clear that they are generally safer than passwords, as they protect users from common social engineering techniques deployed by hackers.

by Victor Awogbemila
