Thursday, December 4, 2025

 

TECH


Software platform helps users find the best hearing protection

The world is loud. A walk down the street bombards the ears with revving engines, blaring car horns, and the steady beeps of pedestrian crossings. While smartphones can now warn of excessive sound and public awareness of noise exposure is growing, few tools help people take protective action.


Introducing a new hearing protection tool...To address this gap, Santino Cozza and a team from Applied Research Associates, Inc. developed the Hearing Protection Optimization Tool (HPOT). HPOT was designed to move beyond traditional noise reduction ratings and highlight performance characteristics that matter in real-world conditions.

This user-friendly software platform, which draws on years of research and operational insight, helps people select the appropriate hearing protection device (HPD) for their specific environment.

Cozza presented the software at the Sixth Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan, running Dec. 1–5 in Honolulu, Hawaii.

                     The HPOT platform in use. Credit: Shebly Wrather

How HPOT works and its benefits..."The underlying science of how humans perceive sound is complex, drawing from acoustics, psychology, and physiology," said Cozza. "We designed HPOT to translate that into something usable, empowering smarter, more personalized hearing protection."

HPOT asks users to share basic information about their noise environment, such as sound intensity and exposure duration. If measurements aren't available, the platform estimates exposure levels based on users' descriptions of their setting.

By combining noise exposure levels with algorithmic analyses of the benefits of different HPDs, HPOT matches users with a database of suitable, regulatory-approved HPDs. It translates complex acoustic and psychoacoustic factors and calculations, like insertion loss, speech intelligibility, and sound localization, into clear visuals that help users directly compare HPDs.

Users can toggle inputs for communication needs, mobility, cost, and power requirements to visualize trade-offs and optimize HPD selection for their preferences.
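The kind of arithmetic such a comparison rests on can be illustrated with two published rules of thumb: OSHA's derating of the NRR for A-weighted measurements, and NIOSH's 85 dBA recommended limit with a 3 dB exchange rate. The sketch below uses only those public formulas; it is illustrative and is not HPOT's actual algorithm, and the device names and NRR values are hypothetical.

```python
# Illustrative sketch, NOT HPOT's algorithm: compare hearing protection
# devices using two published rules of thumb.
#  - OSHA derating: effective protection ~ (NRR - 7) / 2 for A-weighted levels
#  - NIOSH exchange rate: allowed exposure halves for every 3 dB above 85 dBA

def protected_level(env_dba: float, nrr: float) -> float:
    """Estimate the A-weighted level reaching the ear under an HPD."""
    return env_dba - (nrr - 7) / 2.0

def allowed_hours(level_dba: float) -> float:
    """NIOSH recommended exposure time: 8 h at 85 dBA, halving per +3 dB."""
    return 8.0 / (2 ** ((level_dba - 85.0) / 3.0))

# Compare two hypothetical devices in a 100 dBA environment.
for name, nrr in [("foam earplug", 29), ("earmuff", 23)]:
    lvl = protected_level(100.0, nrr)
    print(f"{name}: ~{lvl:.0f} dBA at ear, ~{allowed_hours(lvl):.1f} h allowed")
```

Even this toy version shows the trade-off logic the article describes: a higher-NRR device buys more allowed exposure time, which a tool like HPOT can then weigh against communication needs, cost, and comfort.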

Expanding applications and future updates...While HPOT was initially developed to support military hearing protection decisions, Cozza sees its utility as reaching far beyond that.

"Whether you're a hearing conservationist protecting workers, an audiologist trying to stay current with new technologies, or just someone choosing earplugs for a concert, HPOT was built to help," he said.

The team is currently developing advanced updates for the platform to widen its relevance, including support for impulse noise environments and integrating double hearing protection.

"HPOT is a blueprint for modernizing how personal protective equipment is selected," Cozza said. "We envision a future where intuitive, data-driven tools exist across all categories. Our goal is to simplify those processes using the same science-to-software approach that powers HPOT."

Provided by Acoustical Society of America 


DIGITAL LIFE


To make AI more fair, tame complexity, suggest researchers

In April, OpenAI's popular ChatGPT hit a milestone of one billion weekly active users, as artificial intelligence continued its explosion in popularity.

But with that popularity has come a dark side. Biases in AI's models and algorithms can actively harm some of its users and promote social injustice. Documented biases have led to different medical treatments due to patients' demographics and corporate hiring tools that discriminate against female and Black candidates.

New research from Texas McCombs suggests both a previously unexplored source of AI biases and some ways to correct for them: complexity.

The study, "Algorithmic Social Injustice: Antecedents and Mitigations," is published in MIS Quarterly.

"There's a complex set of issues that the algorithm has to deal with, and it's infeasible to deal with those issues well," says Hüseyin Tanriverdi, associate professor of information, risk, and operations management. "Bias could be an artifact of that complexity rather than other explanations that people have offered."

With John-Patrick Akinyemi, a McCombs Ph.D. candidate in IROM, Tanriverdi studied a set of 363 algorithms that researchers and journalists had identified as biased. The algorithms came from a repository called AI, Algorithmic, and Automation Incidents and Controversies.

The researchers compared each problematic algorithm with one that was similar in nature but had not been called out for bias. They examined not only the algorithms but also the organizations that created and used them.

Prior research has assumed that bias can be reduced by making algorithms more accurate. But that assumption, Tanriverdi found, did not tell the whole story. He found three additional factors, all related to a similar problem: not properly modeling for complexity.

Ground truth. Some algorithms are asked to make decisions when there's no established ground truth: the reference against which the algorithm's outcomes are evaluated. An algorithm might be asked to guess the age of a bone from an X-ray image, even though in medical practice, there's no established way for doctors to do so.

In other cases, AI may mistakenly treat opinions as objective truths—for example, when social media users are evenly split on whether a post constitutes hate speech or protected free speech.

AI should only automate decisions for which ground truth is clear, Tanriverdi says. "If there is not a well-established ground truth, then the likelihood that bias will emerge significantly increases."

Real-world complexity. AI models inevitably simplify the situations they describe. Problems can arise when they miss important components of reality.

Tanriverdi points to a case in which Arkansas replaced home visits by nurses with automated rulings on Medicaid benefits. It had the effect of cutting off disabled people from assistance with eating and showering.

"If a nurse goes and walks around to the house, they will be able to understand more about what kind of support this person needs," he says. "But algorithms were using only a subset of those variables, because data was not available on everything.

"Because of omission of the relevant variables in the model, that model was no longer a good enough representation of reality."

Stakeholder involvement. When a model serving a diverse population is designed mostly by members of a single demographic, it becomes more susceptible to bias. One way to counter this risk is to ensure that all stakeholder groups have a voice in the development process.

By involving stakeholders who may have conflicting goals and expectations, an organization can determine whether it's possible to meet them all. If it's not, Tanriverdi says, "It may be feasible to reach compromise solutions that everyone is OK with."

The research concludes that taming AI bias involves much more than making algorithms more accurate. Developers need to open up their black boxes to account for real-world complexities, input from diverse groups, and ground truths.

"The factors we focus on have a direct effect on the fairness outcome," Tanriverdi says. "These are the missing pieces that data scientists seem to be ignoring."

Provided by University of Texas at Austin

Wednesday, December 3, 2025

 

DIGITAL LIFE


Passkeys vs. Passwords: What's the difference, and which offers better security?

Since the inception of the internet, website and app developers have relied heavily on passwords to protect user accounts. As hackers develop ever more sophisticated techniques to circumvent security guardrails, however, passwords have become easier to crack, especially with the help of powerful GPUs and AI assistance. A recent study reported that some GPUs could crack passwords of as many as 10 characters in a second or less. The vulnerability of password-protected accounts has motivated many companies to explore alternative methods of protecting users. One such alternative is passkeys. But do passkeys really offer better security?

The Difference Between Passkeys And Passwords...Passkeys are different from passwords. Unlike passwords, which authenticate users via a string of numbers, letters, and special characters, passkeys let users access accounts with a PIN, face recognition, or fingerprint authentication, so there is no string of characters to memorize.

Both passwords and passkeys can incorporate multi-factor authentication (MFA). Passkeys protect users with built-in MFA, which requires you to prove at least two things. First, that you can access a device where your private key is stored, and second, that you can unlock the device or account with your biometric information or PIN. Depending on the design, the use of passwords sometimes requires MFA, which typically prompts users to input a code that is automatically sent via email, SMS, or an authentication app.
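The one-time codes delivered by SMS or authenticator apps are typically generated with the time-based one-time password (TOTP) algorithm standardized in RFC 6238. A minimal standard-library sketch, not tied to any particular app:

```python
# Minimal RFC 6238 TOTP (HMAC-SHA1 flavor), the algorithm behind most
# authenticator-app MFA codes.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, step=30, digits=6):
    """Derive a one-time code from a shared secret and the current time."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"

# RFC 6238 test vector: secret "12345678901234567890" (base32 below), T = 59 s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # → 94287082
```

Note that both the server and the user's device hold the same secret, which is one reason a phished TOTP code can be replayed in real time, as described next.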

Do Passkeys Really Offer Better Security? Which is safer, password MFA or passkey MFA? Let's say you've been lured into accessing a fake website that mimics the interface of one of your social media accounts. If you input your password, malicious actors can steal it, capture your MFA code in real time, and use these credentials to access your actual account. This can be done relatively easily by experienced hackers. We have reported how hackers circumvent MFA restrictions by using sophisticated malware to create the illusion of a normal login process.

However, with a passkey, the outcome is different. Even if a bad actor successfully lures you into using your PIN, facial ID or fingerprint on a fake website, your device will detect that the site is fake, making it difficult or even impossible to steal your credentials.

To understand how passkeys identify fake sites, it's helpful to know about a process developers call "domain binding." When you create a passkey for a site, a public and private key are generated and bound to that site's domain. Unlike humans who may sometimes fail to differentiate between URLs such as Hothadware.com and Hothardware.com, your device will never release the private key needed for passkey authentication if the URL is not exactly the same. The public key is typically stored on a server, and the private key is usually kept locally on your device. In the event of a data breach on a company's server, hackers can successfully access your public key; however, this will be useless for them since they will also need to access the private key, which is safely stored on your device. As such, if a company suffers a data breach, it will not compromise your account.
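A toy simulation of that exact-match check, with a symmetric HMAC key standing in for the asymmetric key pair real passkeys use (the class and method names here are illustrative, not part of the WebAuthn API):

```python
# Toy simulation of passkey domain binding. Assumption: real passkeys sign
# server challenges with an asymmetric key (e.g., ECDSA); an HMAC key stands
# in here so the example stays standard-library only.
import hashlib, hmac, os

class Authenticator:
    def __init__(self):
        self._keys = {}  # domain -> secret key; never leaves the "device"

    def register(self, domain: str) -> None:
        self._keys[domain] = os.urandom(32)

    def sign(self, domain: str, challenge: bytes) -> bytes:
        # Exact-match domain binding: a lookalike domain gets nothing.
        if domain not in self._keys:
            raise PermissionError(f"no credential bound to {domain!r}")
        return hmac.new(self._keys[domain], challenge, hashlib.sha256).digest()

auth = Authenticator()
auth.register("hothardware.com")
challenge = os.urandom(16)
sig = auth.sign("hothardware.com", challenge)  # genuine site: signature released

try:
    auth.sign("hothadware.com", challenge)     # lookalike domain: refused
except PermissionError as e:
    print("phishing blocked:", e)
```

The point the sketch makes is that the refusal happens on the device, before any secret is transmitted, which is why a pixel-perfect fake site gains nothing.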


Although a PIN can be used to activate a passkey, it serves a different purpose from a traditional password. When you use a PIN to authenticate with a passkey, it simply unlocks your private key, which your device then uses to complete the authentication; the server checks the result against your public key. Unlike passwords, which are stored on servers as hashed values that can be exposed in the event of a cyberattack, your PIN and private key are never stored on a company's server. The private key never leaves your device and remains unknown to everyone, including you, so it is incredibly difficult to compromise.
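For contrast, here is a minimal sketch of how a server typically stores a password as a salted hash; PBKDF2 is used as a representative scheme, and the iteration count is an illustrative choice, not a figure from the article:

```python
# Representative server-side password storage: only (salt, hash) is kept,
# so a database breach exposes hashes that must still be cracked offline.
import hashlib, hmac, os

ITERATIONS = 600_000  # illustrative work factor

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the password itself is never kept."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("wrong guess", salt, digest))                   # False
```

Even with salting and a high work factor, the hash still lives on the server, which is exactly the exposure passkeys remove by keeping the private key on the device.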

Final Thoughts: Passkeys vs. Passwords...Companies like Google, Apple, and Microsoft have embraced and promoted the use of passkeys. In April 2025, Microsoft optimized its login experience for passkeys. While we are not suggesting that passkeys are 100% secure, it is clear that they are generally safer than passwords, as they protect users from common social engineering techniques deployed by hackers.

by Victor Awogbemila

 

DIGITAL LIFE


Tech moguls enter the media to control the narrative

Heads of the world's largest technology companies have become frequent figures in podcasts and programs favorable to them. Some companies have even started their own blogs and channels as a way to project a positive image.

This trend was noted by the British newspaper The Guardian in an article published on Saturday (November 29). "Heads of the largest technology companies, including Mark Zuckerberg, Elon Musk, Sam Altman, Satya Nadella and others, have participated in long and comfortable interviews in recent months," notes reporter Nick Robins-Early.

These appearances usually yield headlines that highlight the disruptive nature of the current wave of artificial intelligence. Satya Nadella, CEO of Microsoft, predicted that AI agents will replace SaaS in an interview for the BG2 Pod.

The BG2 Pod is hosted by two venture capital investors. Brad Gerstner is founder and CEO of Altimeter Capital, one of OpenAI's investors and a shareholder in Meta and Nvidia, while Bill Gurley is a partner at Benchmark, which funds startups founded by former OpenAI executives.

Speaking of OpenAI, its CEO Sam Altman opined on the Huge If True podcast that Generation Z is privileged to live in the AI age; the show defines itself as "an optimistic show about science and technology" and "an antidote to sadness and pessimism."

Big tech companies are betting on their own blogs and magazines...In some cases, big tech companies and investors are cutting out the middlemen. Andreessen Horowitz, one of the largest venture capital firms in Silicon Valley, launched its blog on Substack, where it presents itself as an "independent voice" building a direct relationship with the public.

Palantir, a technology company that develops security solutions, founded a magazine called Republic. According to The Guardian, it mimics the style of academic publications like Foreign Affairs.

“Many people who shouldn’t have a platform do. And many who should, don’t,” says an editorial signed by company executives. Examples of Republic's content include articles against copyright laws and in favor of cooperation with the military.

As the Guardian notes, these initiatives echo a sentiment among technology companies: specialized magazines and websites have become increasingly harsh and critical in their coverage of the sector.

And, of course, we can't ignore Elon Musk, who bought Twitter, now known as X. The platform remains open to any user, but there are some telling episodes: Grok, the AI integrated with the social network, has rated its owner as having the intelligence of Leonardo da Vinci and the physical conditioning of LeBron James.

Americans disapprove of big tech, CEOs and AI...Meanwhile, research shows that the United States public has predominantly negative views of technology companies, social networks and artificial intelligence.

According to data from the Pew Research Center, 78% of Americans believe that social media companies have more power and influence in politics than they should, and 64% believe that the platforms have had a negative impact on the country.

Pessimism reappears when the subject is AI. Among Americans, 53% believe that this technology will harm creativity. The view is also unfavorable regarding relationships, difficult decisions, and problem-solving—in this last aspect, there is some optimism, with 29% indicating that AI will improve this skill.

The Tech Oversight Project, in turn, reveals that the CEOs of big tech companies are personally disapproved of by the population. The worst case is Mark Zuckerberg: 74% of those interviewed have a negative opinion of him, 59 percentage points more than his approval rating.

Even in times of political polarization, the difference between the opinions of Democratic and Republican voters isn't that great—although Donald Trump's supporters are less reticent about technology, there's also significant rejection within this group.

Strategy doesn't always work...These numbers provide some context for initiatives to create communication channels and talk to interviewers who don't ask tough questions.

Alex Karp, CEO of Palantir, recently gave a podcast interview in which he talked about his childhood pet dog and answered questions like "If you were a cupcake, which cupcake would you be?" Meanwhile, there were no questions about the privacy and human rights controversies in which the company has been involved.

But even these situations can have the opposite result to what was expected. In some cases, the content attracts comments that show the public's dissatisfaction.

In Altman's interview with Huge If True, users joked about the lack of content in the conversation. "Now I understand why ChatGPT is like that. This guy can talk for hours without answering a single question," says one viewer.

Another is more critical, calling it "crazy" to say that a 22-year-old recent graduate is lucky to live in the AI age, considering that the technology is destroying junior-level jobs.

Recently, Adam Mosseri, CEO of Instagram, participated in a video on the Track Star channel. The page does a quiz to guess songs with celebrities and ordinary people, mixing game show and interview.

The reaction on Instagram was negative: users took the opportunity to criticize changes to the platform, scam advertisements, accounts banned without reason, and choices to supposedly addict users.

Not even Track Star itself was spared. "Honestly, this account was more fun when it was random people on the street trying to guess the songs," says the most liked comment on the video.

Reporter: Giovanni Santa Rosa https://www.linkedin.com/in/giosantarosa//

Tuesday, December 2, 2025

 

TECH


AI is growing, but can the planet handle it? The hidden cost of the digital revolution

Artificial intelligence is emerging as a hero in the fight against climate change, but its accelerated growth hides an environmental cost that almost no one sees. Between explosive energy consumption, thirsty data centers, and a new avalanche of electronic waste, the technology that can save us also threatens to worsen the crisis it is trying to solve.

Artificial intelligence is now at the center of major solutions to the planet's challenges. It anticipates catastrophes, optimizes energy systems, creates more sustainable materials, and revolutionizes science. However, behind this promising image, an invisible physical structure is growing that requires gigantic volumes of energy, water, and natural resources. As AI expands, its own environmental footprint is beginning to raise a global alarm.

Training large AI models requires colossal amounts of electricity. To give you an idea, large-scale text generation systems have already consumed volumes of energy equivalent to the annual consumption of dozens of homes. Today, with daily use on a global scale, this consumption is even greater.

This growth has led tech giants to seek their own energy sources, including contracts with nuclear power plants, to ensure the stable operation of their data centers. The problem is structural: the architecture of current computers constantly transfers data between memory and processor, generating heat and wasting energy. With physical limits approaching, efficiency is no longer growing at the same rate as demand.

Water, minerals, and waste: the invisible chain of AI...The impact of artificial intelligence goes beyond the electricity bill. Data centers use enormous volumes of potable water to cool equipment that operates non-stop. On average, each kilowatt-hour of data center consumption requires several liters of clean water, a resource that is increasingly scarce in many regions of the world.

In addition, the manufacture of chips requires rare minerals, often extracted under controversial social and environmental conditions. At the end of the life cycle of this equipment, another problem arises: the explosion of electronic waste, which is difficult to recycle and highly polluting.

Artificial intelligence lives in a profound contradiction. While contributing to the energy transition, precision agriculture, climate monitoring, and disaster prevention, it also increases the pressure on the planet's own resources.

Studies indicate that AI can help advance many global sustainability goals, but it can also delay several of them if its growth goes unchecked. It's not about rejecting the technology, but about recognizing that its impact is ambiguous.

The solutions are not just in more efficient software. The real leap will come from hardware. Research is advancing on in-memory computing, which eliminates wasteful data movement between memory and processor; on memristors, which process and store data simultaneously; on photonic chips, which use light instead of electricity; and on analog systems inspired by the workings of the human brain.

These technologies promise to drastically reduce energy consumption and device heat generation.

Governance: the decisive factor for the future of AI...The sustainability of artificial intelligence depends not only on technical innovation, but also on public policies, transparency, and corporate responsibility. "Green algorithm" programs and environmental requirements are already beginning to emerge in some countries.

AI has the potential to profoundly transform the world. But this transformation will only be truly positive if the technology itself is designed not to become yet another threat to the planet's climate balance.

The growth of artificial intelligence (AI) represents a significant challenge to the planet's sustainability due to its high consumption of energy and water resources, especially in the data centers that support it. However, AI also offers potential for energy efficiency solutions.

The Hidden Cost of AI

-Massive Energy Consumption: Data centers, the heart of AI infrastructure, already consume about 1% to 2% of global electricity. Intensive AI use could increase this consumption dramatically; some estimates predict that data center electricity consumption could more than double by 2030. This increase in demand often relies on non-renewable energy sources, contributing to greenhouse gas emissions.

-Intensive Water Use: The cooling systems needed to prevent servers from overheating consume vast amounts of water. Large data centers can use millions of gallons per day, equivalent to the consumption of a medium-sized city. It is estimated that 20 to 30 queries to a generative AI can consume half a liter of water.

-Resource Extraction and Electronic Waste: The production of AI hardware requires the extraction of valuable minerals, such as lithium and cobalt, which have significant environmental impacts. The rapid planned obsolescence of this equipment also exacerbates the problem of electronic waste.
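The per-query water figure above can be sanity-checked with simple arithmetic. In the sketch below, the batch size is the midpoint of the article's "20 to 30 questions" estimate, and the daily query volume is a purely hypothetical round number for scale:

```python
# Back-of-envelope check of the water-per-query estimate quoted above.
LITERS_PER_BATCH = 0.5   # "half a liter" per batch of queries (from the article)
QUERIES_PER_BATCH = 25   # midpoint of the quoted 20-30 range

liters_per_query = LITERS_PER_BATCH / QUERIES_PER_BATCH
print(f"~{liters_per_query * 1000:.0f} mL per query")         # ~20 mL

# Scaled to a hypothetical 1 billion queries per day:
daily_liters = liters_per_query * 1_000_000_000
print(f"~{daily_liters / 1_000_000:.0f} million liters/day")  # ~20 million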

The Potential of AI for Sustainability...Despite the costs, AI can also be a powerful tool in the search for climate solutions:

-Energy Optimization: AI can optimize energy consumption in buildings, transportation networks, and data center management, increasing operational efficiency and reducing waste.

-Climate Forecasting and Resource Management: AI models can help predict climate patterns, manage water resources more efficiently, and better integrate renewable energy sources into the electricity grid.

mundophone


TECH



Rock 'n' research: Engineering student builds 3D-printed guitar

Timothy Tran '27 has a new guitar for jamming out to his favorite Jimi Hendrix tunes, and he didn't pick it up at a music shop—he printed it.

Unlike most acoustic guitars, Tran's prototype was printed on a Prusa MK4 3D printer. That's no rock 'n' roll fantasy: Tran, a Binghamton University, State University of New York junior majoring in mechanical engineering, has created a playable 3D-printed acoustic guitar. Unlike a traditional instrument, it's made of thermoplastic filament, but it works. You can even play Hendrix's "Purple Haze" if you'd like.

And while most 3D-printed guitars have been electric, Tran's is unique for being an acoustic model.

"It wasn't something that people had really looked into," Tran said. "People have made 3D-printed electric guitars—that's probably easier because you don't have to worry about the vibrations as much. I just wanted to try something new."

A guitarist since his senior year of high school, Tran was looking for ways to apply his engineering skills over summer break when he stumbled upon his dad's old guitar in the attic. Unfortunately, it was no longer playable, but Tran saw an opportunity.

"I saw it broke and I said, 'I can make a new one for you,'" Tran said. "I decided it would be a good thing to just try."

With that goal in mind, Tran started designing a prototype based on his father's guitar and sought out assistance from William E. Schiesser, a lecturer at the School of Computing. The project "struck a chord" with Schiesser.

"I just kind of pointed in the right direction and gave him some guidance," Schiesser said. "We've met every week since the beginning of the summer, and he's really done some great work with us."

String theory...You don't become a luthier (that's a person who builds stringed instruments) overnight, and you don't design a 3D-printed guitar overnight either. Tran's design process took time and patience. He painstakingly measured the dimensions of his father's vintage guitar, a process that took a couple of weeks to get right.

"Design work is key," Schiesser said. "You have to do a lot of planning up front to make sure all the parts fit together."

Once the design was in place, Tran modeled the guitar in Fusion 360 design software and printed it on a Prusa MK4 3D printer. Because the printer's build area was only 10 inches by 10 inches, pieces that normally would be a single unit had to be split up. For example, the fretboard was divided into two pieces.

Unlike a traditionally constructed guitar, the pieces fit together using a press-fit method. A special connecting plate allows the different pieces to slide together.

"Putting it together was pretty easy. It only took one or two days. It was more just waiting for the glue to dry," Tran said. "The longest part was probably just waiting for parts to print. Bigger pieces might take six or seven hours to print."

Fine-tuning...Tran can play some classical pieces like Pachelbel's Canon, scales and a few funky licks, but the guitar is not quite stage-ready. The action (that's how far the strings sit above the neck of the guitar) is a bit too high, and the guitar is hard to keep in tune. Tran and Schiesser are developing a second prototype to work out the kinks.

"I'm just trying to figure out a way to make the neck piece a little more uniform, and how to get the action lower," Tran said. "Just to get it to a more playable feel, because right now this is pretty tough to play. It doesn't feel that comfortable."

"It's an iterative process. You make one prototype, see how it works, then correct any issues," Schiesser added. Tran's father was excited about the project, regularly asking him for updates on when a new part was being made or which stage he had reached in the design process.

"Once I finished it, I gave it to him first, and he was overjoyed to play it; he texted all his siblings bragging about it!" Tran said.

Timothy Tran has been playing guitar since his senior year of high school. One of his favorite guitarists is Jimi Hendrix. Credit: Binghamton University, State University of New York

Guitars without borders...While it's cool to play Led Zeppelin riffs in a campus lab, Tran and Schiesser hope that this design can be made accessible to people around the world who can't afford to spend big bucks on an instrument. One of Tran's guitars costs just $25–$30 to print, which includes the price of one and a half rolls of filament, strings and tuning pegs.
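A rough re-derivation of that print cost; the unit prices below are illustrative assumptions chosen to land in the quoted range, not figures from the article:

```python
# Illustrative breakdown of the quoted $25-30 print cost.
# All unit prices are assumptions, not numbers from the article.
filament_rolls = 1.5
price_per_roll = 12.00  # assumed price of a 1 kg filament spool
strings = 5.00          # assumed acoustic string set
tuning_pegs = 5.00      # assumed budget tuner set

total = filament_rolls * price_per_roll + strings + tuning_pegs
print(f"~${total:.2f}")  # → ~$28.00, within the quoted $25-30 range
```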

"If it were to become something really successful, I want it to be just something that can be free access for everyone," Tran said. "Growing up, we didn't have that much money … so if it's just something that's easy and accessible for people who need it, I think that'd be really cool."

Schiesser, who has a background in patents and intellectual property law, said that the design could be like an "open-source guitar."

"Just like we share software and code by open source with license, this could be posted online, and anybody could access the design for free, as long as they have access to a 3D printer and can get it printed out," Schiesser said.

Tran, who will intern at GE Aerospace in spring 2026, wants to work on vehicles and engines once he graduates. But he's not counting out a gig in the guitar industry—if the opportunity is right. Whatever happens, he's happy that he was able to use his interest in engineering and apply it to another field that he's passionate about.

"It was good to put out into the world," Tran said. "It's just a good way to direct what I wanted to do, because I had a lot of ideas, but I didn't really know how to employ them. It was just really cool to see an idea I had just really come to life."


Provided by Binghamton University

Monday, December 1, 2025

 

TECH


The resurgence of an underground market challenging the Chinese communist government

Four years after a historic ban, an activity that many believed extinct has resurfaced. The state is tightening its grip, opening new fronts of control and hardening its rhetoric. Yet the phenomenon keeps reappearing through channels that defy any system.

In 2021, China decreed a complete end to cryptocurrency-related activities. The decision seemed definitive: trading prohibited, mining deactivated, and platforms shut down. The objective was clear—to protect the financial system, contain capital flight, and prevent a parallel economy from consolidating outside of state control. But, four years later, unexpected signs are forcing the government to reinforce its offensive once again.

Faced with the recent growth of informal operations, the People's Bank of China convened a high-level meeting with judicial authorities, financial regulators, and technology agencies. All are part of a new inter-institutional coordination model focused on supervising digital activities considered risky.

The legal framework remains the same as in 2021, when cryptocurrencies were classified as "illegal financial activity." Exchanges were shut down, stablecoins were treated as a threat, and any type of intermediation was banned. What has changed now is the degree of urgency. According to the central bank itself, clandestine activity has once again grown in sufficient volume to trigger alert systems.

Authorities have stressed that digital assets do not have legal-tender status in the country and that the anonymity of transactions increases the risks of fraud, money laundering, and capital flight.

Mining is back on the radar, reigniting the highest alert...The most sensitive data point in this new phase lies not so much in illegal trading as in mining. Even though the activity has been prohibited for four years, it has reappeared in international measurements: computing power originating in Chinese territory has grown slowly, suggesting that old operations may have migrated to areas less visible to oversight.

Mining does not depend on direct sales, but it generates Bitcoin income that can circulate through routes that are difficult to trace. This point is particularly critical for the government: digital financial flows crossing borders without going through banks or traditional control systems.

This contradiction — formal prohibition and real activity on the rise — is the main reason for the new regulatory tightening.

A model with no room for flexibility...The government has made clear that there is no sign of loosening. Its position remains aligned with the vision of financial stability advocated by the Chinese leadership, which sees cryptocurrencies as a structural threat to its control of the currency and of monetary policy.

Even so, the demand for digital assets persists, and mining has demonstrated the ability to survive even after massive shutdowns in 2021. This is a direct clash between a model of absolute control and a technology designed precisely to escape that control.

China is once again facing an adversary that never completely disappeared. The state is increasing surveillance, tightening oversight, and trying to close the loopholes that allowed the activity to return. But the very decentralized functioning of the crypto ecosystem continues to generate more questions than answers.

The big question now is whether the new offensive will be enough to contain this resurgence — or whether, once again, the system will find alternative ways to survive regulatory pressure.

Further crackdown...The People's Bank of China (PBoC) emphasized that it will intensify its crackdown on cryptocurrency trading and speculation.

In its statement, the Chinese central bank reiterated that cryptocurrencies do not have "legal status equivalent to fiat currency" and are not legal tender, which is why they "should not and cannot be used as currency in market circulation."

The statement classifies activities linked to crypto assets as "illegal financial activities" and warns that speculation in crypto assets has recently begun to grow again, posing new scenarios and challenges for risk control.

The PBoC raised specific concerns about stablecoins (cryptocurrencies designed to maintain a stable value, usually pegged to official currencies such as the dollar), stating that they "do not effectively meet the requirements for customer identification and money laundering prevention" and can be used in financial fraud schemes and irregular cross-border transfers of funds.

The bank concluded by urging state bodies to maintain a prohibitive policy on cryptocurrencies, deepen coordination, strengthen monitoring, and share information to preserve economic and financial stability.

by Aleksandra Lima dos Santos
