Monday, January 19, 2026

 

DIGITAL LIFE


Cyberattacks can trigger societal crises, scientists warn

Cyberattacks can wreak havoc on the systems they target, yet their impact often spreads far beyond technical failures, potentially triggering crises that engulf entire communities, a new study argues.

When hackers strike "critical infrastructure," they "pose serious risks to societal resilience," the study notes, adding that public responses can be intense and wide-ranging, shifting from denial and humor to anger, bargaining, depression, and acceptance.

The study, published in the journal Engineering, Construction and Architectural Management, explores public responses to the 2021 cyberattack on a Florida water treatment plant by examining social media conversations and their role in shaping cybersecurity strategies.

In February 2021, an intruder gained remote access to the control system of a water plant in Oldsmar, Florida, and attempted to dramatically increase levels of sodium hydroxide (lye), a chemical used in small amounts to treat water.

The hacker initially succeeded in altering the settings and increasing the level of the chemical in treated water, which could have posed serious health risks if left uncorrected. A plant operator spotted the unauthorized changes in real time and quickly reversed them, preventing harm.

"This research examined how people react when a cyberattack targets critical infrastructure, using the 2021 Florida water treatment plant hack as a real-world case," said Dr. Bharadwaj R. K. Mantha, an assistant professor at the University of Sharjah's College of Engineering.

"Rather than focusing only on technical failures, the study looks at public reactions expressed on social media (e.g., X, formerly Twitter, in the context of this study) and treats them as an important part of the crisis itself."

Public reaction to Florida water plant hack...To explore public perceptions of the Florida cyberattack, the researchers conducted a qualitative analysis of online narratives surrounding the incident, collecting social media posts from X (formerly Twitter) during the first week following the hack, from February 8 to February 15, 2021.

"These tweets provided a diverse range of public reactions, including expressions of disbelief, humor, fear, critiques of systemic vulnerabilities, and calls for accountability," the authors explain.

"This dataset, as naturally occurring data without the intervention of the researchers, offered valuable insights into how the public processes and responds to cybersecurity incidents in real time."

To reinforce their findings, the researchers also carried out a critical review of existing literature on cybersecurity vulnerabilities in infrastructure systems and on cyberattacks as sociotechnical crises.

Their review systematically explores the complex landscape of cybersecurity challenges and their varied implications, draws parallels from previous studies on how cyberattacks can be considered sociotechnical crises, and then identifies key research gaps and positions the present study in relation to them.

The authors frame their analysis using the Kübler-Ross model, a well-known psychological framework introduced by the psychiatrist Elisabeth Kübler-Ross in 1969 to describe emotional responses to death and dying.

Although originally developed for end-of-life contexts, the five-stage model—denial, anger, bargaining, depression, and acceptance—has since been widely applied to explain how individuals respond to loss, crises, and major disruptions.

In the context of the study, the authors use the Kübler-Ross model to trace how its stages unfold when the public is confronted with serious cyber threats, in order to "assist facility managers, government agencies, and municipalities to better understand how the public perceives cyber incidents on critical infrastructure," Dr. Mantha emphasized.
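As a rough illustration of how such a framework can be applied to social media data, the sketch below (in Python) tags posts with a Kübler-Ross stage using a naive keyword lookup. This is not the authors' method, and the keyword lists are invented for the example; the study relied on qualitative coding by the researchers.

    # Illustrative sketch only: map posts to Kübler-Ross stages via keywords.
    STAGE_KEYWORDS = {
        "denial":     ["hoax", "can't be real", "no way this happened"],
        "anger":      ["outrage", "negligent", "unacceptable", "fire them"],
        "bargaining": ["if only", "should have", "why didn't they"],
        "depression": ["hopeless", "scared", "nothing is safe"],
        "acceptance": ["lesson learned", "time to fix", "moving forward"],
    }

    def tag_stage(post: str) -> str:
        """Return the first stage whose keywords appear in the post."""
        text = post.lower()
        for stage, keywords in STAGE_KEYWORDS.items():
            if any(k in text for k in keywords):
                return stage
        return "uncategorized"

    posts = [
        "No way this happened at a water plant, must be a hoax.",
        "Absolutely unacceptable, someone needs to be held accountable.",
        "Lesson learned, time to fix remote access once and for all.",
    ]
    for p in posts:
        print(tag_stage(p), "->", p)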

Public emotions and perceptions...Analyzing social media conversations surrounding the attempted hack of Oldsmar's water system, the study finds that public reactions "followed a structured emotional progression, from denial and humor to anger, bargaining, depression, and acceptance.

"Social media discourse revealed concerns over systemic vulnerabilities, accountability demands and calls for cybersecurity reform. These insights emphasize the importance of transparent crisis communication, proactive risk management and public engagement in strengthening cybersecurity resilience."

Dr. Mantha, a co-author of the study, emphasized that cyberattacks affect far more than infrastructure and digital systems. "Public reactions follow recognizable patterns, from disbelief and humor to fear, anger, and eventual acceptance."


Implications for cyber crisis communication...The authors argue that their findings carry far-reaching implications, particularly for improving crisis communication strategies during cyber incidents.

He added that social media functions as a real-time "public square," exposing underlying anxieties, mistrust, and expectations. "Ignoring public sentiment during cyber crises undermines trust and slows recovery. Online discussions rapidly highlighted systemic weaknesses—such as outdated software—and sharply criticized perceived lapses in cybersecurity practices."

While public perceptions may not always reflect actual reality, Dr. Mantha notes that they remain critical for policymakers and crisis managers.

"Our findings show that these perceptions are a key part of the landscape and must be addressed appropriately by the responsible authorities. There was sustained public demand for accountability, transparency, and reform. Importantly, the discourse often included technically informed suggestions—not just emotional reactions."


Provided by University of Sharjah

 

TECH


4 in 5 small businesses were hit by cyberscams in 2025, and almost half of attacks were AI-powered

One more reason things cost more today: cybercrime.

A survey by the Identity Theft Resource Center, a San Diego-based education and victim resource nonprofit, found that 38% of small businesses hit by a cyberscam or breach in the previous 12 months passed those losses to customers by raising prices.

Another key finding: Cybercrime against small businesses is increasingly fueled by artificial intelligence.

"The era of predictable, human-scale threats has been superseded by a new reality of automated, intelligent and massively scalable attacks powered by AI," said the report, which discusses trends in threats, prevention and attacks. It also gives detailed recommendations about network and application security, data protection and employee and contractor practices. (The survey reached out to more than 650 companies across more than 12 industries in August.)

Eva Velasquez, the CEO of the Identity Theft Resource Center, said the results offer a stark reminder that hackers aren't picky. They will grab data and money from anyone, including large and small businesses, and individuals.

"When we think about risk, it really is all businesses," Velasquez said. From mom and pops to large companies, "They're all attractive to hackers." Small businesses sometimes don't pay enough attention to cybersecurity "because they think they're not vulnerable. They think, "Well, why would anybody target me?'"

Not only are they being targeted, but they are being successfully breached, some multiple times a year. Two or three breaches in a 12-month period was the most common pattern; 34% had one breach and almost 12% had four or more.

One encouraging shift: The percentage of companies with one or two breaches increased from 2024, while the percentage of companies with more than two breaches dropped. Perhaps companies are improving their cybersecurity protocols after a first or second breach.

The report, however, said companies being hit only once says something about cyber attackers' methods.

"Threat actors appear to be focusing on opportunistic, high-volume strikes. This alters the risk calculus for (small businesses), shifting the primary challenge from defending against a determined, persistent adversary to repelling a continuous barrage of single-shot attacks from a multitude of sources."

The nonprofit helps individuals for free, and businesses in some cases get charged fees used to fund its free services. The nonprofit faced a significant drop in federal government grants last year, but remains financially robust thanks to private donors and unclaimed awards from class action settlements, Velasquez said.

"Our services remain available at the same level they were prior to changes in the federal grant processes/availability," Velasquez said.

AI attacks have skyrocketed...Four out of five small businesses reported they were victims of a security or data breach in the past 12 months—a statistic unchanged from a year before.

But the nature of these attacks has changed, with AI taking center stage.

In past surveys of small businesses that suffered cyber and data breaches, incidents were caused by insecure cloud environments, ransomware, hackers, malicious employees or contractors, lapses by remote workers, software flaws and attacks on third-party vendors, the report said.

As recently as 2024, AI was not even named as a cause.

But in 2025, 41% of small business victims said AI was the root cause of a recent attack.

Generative AI can craft "highly personalized social engineering attacks that mimic the tone and context of legitimate internal communications," the report says.

Hackers now are launching large-scale, automated attacks that cover a lot more ground, Velasquez said.

In cybercrime, AI is the great equalizer. Sophisticated scams can be carried out by less knowledgeable wrongdoers who use generative AI.

"These tools are effectively democratizing advanced attack capabilities that were once the domain of highly skilled actors," the report said.

The cause of data and cyber breaches that saw the biggest percentage drop in 2025 compared with 2024 was remote work—which makes sense, as workers have returned to offices. Every other cause of attacks also dipped, perhaps as scammers and data thieves turned to AI.

While AI was added to the list and some causes became less prevalent, no cause disappeared.

Paying the price...When small businesses suffer a breach or fraud, the financial hit can include lost revenue, legal costs, fines and penalties, insurance, marketing and security overhauls.

Adding up these expenses, the survey found that 37% of companies lost more than $500,000 last year, per incident. A quarter lost up to $250,000 and another quarter lost between $250,000 and $500,000.

"This represents a significant, inflationary macroeconomic ripple effect stemming directly from the worsening cyberthreat landscape for small businesses," the report said.

One reason for this change may be that other sources of funding were harder to come by. A smaller percentage got money from investors to respond to cyber and data breach incidents in 2025 than in 2024.

Also, fewer companies turned to cyber insurance, with almost a quarter of companies saying they had "difficulty obtaining or renewing cyber insurance" after a breach. "This suggests that as the frequency and cost of claims have risen, insurers have responded by adjusting underwriting standards."

Compared to 2024, fewer companies cut jobs as a way to offset losses due to cybercrime: 18%, down from 27%.

Relying less on insurance and investors, and cutting fewer jobs in response to cyber breaches, may each have contributed to the price increases.

Preventing losses...Which sensitive data did crooks slink away with? Employee data was most commonly accessed in breaches, with customer data and company IP both ranking close behind.

Velasquez and her nonprofit urge companies to keep studying known and evolving threats and to keep adapting their cybersecurity practices.

"The single most critical access control for any (small business) to implement is MFA," the report said.

MFA stands for multi-factor authentication—a system of safety checks where a request to access secure information has to be vetted through multiple, independent channels. MFA makes it "significantly harder for attackers to use stolen passwords."

Examples of these are free authenticator apps (like Google Authenticator), SMS codes that get sent to a user's phone when they try to log in using a password, and physical hardware tokens.
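For a concrete sense of how the authenticator-app variety works under the hood, here is a minimal Python sketch of time-based one-time passwords (TOTP, RFC 6238) using the third-party pyotp library. It is a generic illustration, not the report's recommendation of any particular product; the account and issuer names are placeholders.

    # Minimal TOTP sketch using the pyotp library (pip install pyotp).
    import pyotp

    # Enrollment: the service generates a shared secret and shows it to the
    # user, typically as a QR code that an authenticator app scans.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    print("Provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                     issuer_name="ExampleCorp"))

    # Login: after the password check, the server asks for the 6-digit code
    # currently shown in the app and verifies it against the shared secret.
    code_from_user = totp.now()  # in practice, typed in by the user
    print("MFA check passed:", totp.verify(code_from_user, valid_window=1))

Because the code changes every 30 seconds and is derived from a secret that never travels with the password, a stolen password alone is not enough to log in.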

The report cited an "alarming decline in MFA adoption for internal systems," from around 33% in 2024 to around 27% in 2025. This "represents a critical, high-priority vulnerability that SBs must address immediately."

'A societal shift'..."Really good companies with robust cybersecurity can have a breach," Velasquez said. "It's not an automatic indicator of negligence."

But companies with less robust cybersecurity are far more at risk.

The report has six pages of tips for preventing cyber and data breaches and countering AI-powered attacks. These range from what kind of training companies should offer to how firewalls should be set up, to data encryption best practices and more.

Small businesses need to strengthen their prevention, but Velasquez also made this pitch to consumers: don't turn away from companies that are taking steps to protect your data, even if it's annoying.

To fight back, some companies have robust tools in place, but the survey also found a disturbing trend. "The implementation of critical security measures, such as multi-factor authentication, has declined," it said. One reason, the report posited: company leaders are overwhelmed and "neglecting the very basics that provide an effective defense."

That crushingly long four-second delay until a verification text message arrives, the extra screen taps involved in using an authenticator app—those are a sign a company is doing things right.

"One of the conflicts that we have is convenience versus security. And businesses are fighting this tension between, 'I have to be secure and I have to make people jump through hoops to prove that they are who they say they are, so that I can protect their data, their account, their information.' And individuals going, 'I want convenience.'"

"If we have a societal shift where we understand that some friction, a little bit of inconvenience, is actually good for us," she said.

A company that asks you to do those things is one you should do business with, Velasquez added, "because you know that they have put measures in place to protect you and your data."


2026 The San Diego Union-Tribune. Distributed by Tribune Content Agency, LLC.

Sunday, January 18, 2026

 

DIGITAL LIFE


Using AI to decide your vote? "We can be easily manipulated"

Indecision about whom to vote for can lead to the use of Artificial Intelligence (AI) tools such as ChatGPT or Gemini, which, despite being viable options for learning more about candidates, also present risks and, depending on how they are used, can contribute to the spread of misinformation.

We interviewed Inês Lynce, a researcher and professor of Artificial Intelligence at Instituto Superior Técnico, about the use, risks, and best practices of AI tools for learning more about electoral candidates.

In recent years, we have seen an increasing use of Artificial Intelligence tools (ChatGPT, Gemini, etc.) to search for information on a wide variety of subjects, including politics. With the approach of the presidential elections, what are the risks of using this type of tool to know who to vote for?

These tools must be used well, always with a critical sense, so that we do not run the risk of the information being used in a biased way. For example, imagine that you take a candidate's political program, submit it to one of these tools, and ask it to produce a summary.

This is a way for us to be better informed, because a summary of the program is made and sometimes the tool even offers to compare it with others. Someone who doesn't have the patience to read all the programs might become interested in a specific point, want more details, and take the time to learn more or compare it with other programs.

But we are the ones who have to be in control. We are the ones who submit the material and say what we want done, and therefore AI functions as an administrative assistant. Of course, afterwards we can have a conversation, which can range from how we should vote to something more informed. In that conversation, it's important to always maintain a critical sense, because if we let ourselves be guided by the prior knowledge these tools have, we can be easily manipulated. And we already know that such contexts exist.

If we use AI tools to process information, it's excellent because it's a great help, and even if mistakes are made, they will be marginal. Of course, everyone knows their own situation, but what is expected is that the person makes an informed decision, and that it is their own decision.

It's a way, perhaps, for people to become more interested in subjects like politics, but it's important to maintain a critical sense and avoid open-ended questions that are susceptible to manipulation. It's about providing information, asking it to process, asking it to compare, and eventually asking some questions, but from this point on, some caution is necessary. Because if we consider that on the other side there is someone manipulating these tools, they have an advantage if we are not careful.

The problem here is that, when we ask an AI tool for opinions, it's very difficult to understand the reasoning. Because effectively there is no reasoning, there is statistical processing. The same applies to online questionnaires in some newspapers to decide the most suitable candidate for our profile, where they ask us a few questions and then make a suggestion. Nobody really justifies why that suggestion appears, so it's important to have a certain critical sense.

When we get into matters of opinion, we can't even understand what's behind it.


Regarding the manipulation of information and the risk of asking very open-ended questions, on what basis do these tools make suggestions about, for example, voting intentions?

Just as if we were to ask the same question to a person, the best thing to do is to interrogate (so to speak) the person giving us the information in order to verify its credibility and see if there is any guarantee that there is more substantiated knowledge. This is a way to protect ourselves, to always be alert.

In other words, ask about the sources of information and where they got the data they cite?

Yes, but at this point, typically, they have a lot of difficulty and, every now and then, they invent things.

The hallucinations, right?

Yes, yes, it can be. In fact, nowadays one of the ways to identify documents generated by these tools without critical review is to find references that don't exist or that have nothing to do with the topic. That happens.

There can be this dialogue, but leaving the decision of the vote in the hands of a tool...maybe it's better to just ask it about the election results so nobody has to go vote [laughs].

If it were simply processing all the information that exists on the Internet, we know that would have many risks. The sources and origin of information are extremely important.

Inês has already mentioned some good practices in the use of AI: using the tools to process information and having critical thinking skills were some of them. What other good practices or advice would you give to someone who wants to use these tools to learn more about the presidential elections?

I always advocate for human oversight. I don't recommend open-ended questions, but rather a conversation, because the AI develops based on what we've said before and, so to speak, creates our profile.

We can also provide a series of sources, documents, and other information for the tool to use as its basis, and ask it to "cross-check" them. Sources are increasingly important because they are what define the credibility of the conclusions we reach. In podcast interviews, for example, AI can summarize the conversation or interview, and we can ask for the key points the candidate mentioned.

There's a lot of information, and AI helps us process this data, but it doesn't help us make value judgments—which is what's required when a person decides which candidate to vote for. But that's how it is with everything. It's like making a decision about our health based on what an AI tool tells us. If we have a cough and an AI tells us to drink tea with honey, the risk is minimal. But if it tells us to drink bleach, it's good that we have critical thinking skills and know that there's information like that circulating on the internet.

We must always be critical, standardize the information we provide where possible, and cross-examine the tool to see how reliable the information is when it is presented more openly.

mundophone


DIGITAL LIFE


Patient privacy in the age of clinical AI: Scientists investigate memorization risk

What is patient privacy for? The Hippocratic Oath, thought to be one of the earliest and most widely known medical ethics texts in the world, reads: "Whatever I see or hear in the lives of my patients, whether in connection with my professional practice or not, which ought not to be spoken of outside, I will keep secret, as considering all such things to be private."

As privacy becomes increasingly scarce in the age of data-hungry algorithms and cyberattacks, medicine is one of the few remaining domains where confidentiality remains central to practice, enabling patients to trust their physicians with sensitive information.

Risks of AI memorization in health care...But a paper co-authored by MIT researchers and posted to the arXiv preprint server investigates how artificial intelligence models trained on de-identified electronic health records (EHRs) can memorize patient-specific information.

The work, which was recently presented at the 2025 Conference on Neural Information Processing Systems (NeurIPS 2025), recommends a rigorous testing setup to ensure targeted prompts cannot reveal information, emphasizing that leakage must be evaluated in a health care context to determine whether it meaningfully compromises patient privacy.

Foundation models trained on EHRs should normally generalize knowledge to make better predictions, drawing upon many patient records. But in "memorization," the model draws upon a singular patient record to deliver its output, potentially violating patient privacy. Notably, foundation models are already known to be prone to data leakage.

"Knowledge in these high-capacity models can be a resource for many communities, but adversarial attackers can prompt a model to extract information on training data," says Sana Tonekaboni, a postdoc at the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard and first author of the paper. Given the risk that foundation models could also memorize private data, she notes, "this work is a step toward ensuring there are practical evaluation steps our community can take before releasing models."

Testing privacy risks and attack scenarios...To conduct research on the potential risk EHR foundation models could pose in medicine, Tonekaboni approached MIT Associate Professor Marzyeh Ghassemi, who is a principal investigator at the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). Ghassemi, a faculty member in the MIT Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, runs the Healthy ML group, which focuses on robust machine learning in health.

Just how much information does a bad actor need to expose sensitive data, and what are the risks associated with the leaked information? To assess this, the research team developed a series of tests that they hope will lay the groundwork for future privacy evaluations. These tests are designed to measure various types of uncertainty and to assess the practical risk to patients across various tiers of attack possibility.

"We really tried to emphasize practicality here; if an attacker has to know the date and value of a dozen laboratory tests from your record in order to extract information, there is very little risk of harm. If I already have access to that level of protected source data, why would I need to attack a large foundation model for more?" says Ghassemi.

Findings and implications for patient safety...With the inevitable digitization of medical records, data breaches have become more commonplace. In the past 24 months, the U.S. Department of Health and Human Services has recorded 747 data breaches of health information, each affecting more than 500 individuals, with the majority categorized as hacking/IT incidents.

In their structured tests, the researchers found that the more information the attacker has about a particular patient, the more likely the model is to leak information. They demonstrated how to distinguish model generalization cases from patient-level memorization, to properly assess privacy risk.
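A minimal sketch of that probing logic, under stated assumptions: prompt a model with a growing prefix of one patient's record and check whether its completion reproduces that patient's specific value rather than a typical population value. This illustrates the general idea only and is not the paper's evaluation code; model_complete (here a toy stand-in) is a hypothetical interface to an EHR foundation model.

    # Illustrative probe: does the model's output track one patient's record
    # (memorization) or a typical population value (generalization)?
    from typing import Callable, List, Sequence

    def leakage_curve(record: Sequence[str],
                      patient_value: str,
                      population_value: str,
                      model_complete: Callable[[Sequence[str]], str]) -> List[dict]:
        """Probe the model with 0..N known events from one patient's record."""
        results = []
        for k in range(len(record) + 1):
            prediction = model_complete(record[:k])
            results.append({
                "events_known_to_attacker": k,
                "matches_patient_value": prediction == patient_value,
                "matches_population_value": prediction == population_value,
            })
        return results

    # Toy stand-in model: echoes the patient-specific value once it has seen
    # enough of the record, otherwise returns the population default.
    def toy_model(context: Sequence[str]) -> str:
        return "HbA1c=9.1%" if len(context) >= 3 else "HbA1c=5.6%"

    record = ["age_band=40-49", "lab:glucose high", "rx:metformin", "visit:endocrinology"]
    for row in leakage_curve(record, "HbA1c=9.1%", "HbA1c=5.6%", toy_model):
        print(row)

In this framing, outputs that only match the population value reflect benign generalization, while outputs that increasingly track the individual's value as attacker context grows are the memorization signal the authors treat as a privacy risk.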

The paper also emphasized that some leaks are more harmful than others. For instance, a model revealing a patient's age or demographics could be characterized as a more benign leakage than the model revealing more sensitive information, like an HIV diagnosis or alcohol abuse.

The researchers note that patients with unique conditions are especially vulnerable given how easy it is to pick them out, which may require higher levels of protection. "Even with de-identified data, it really depends on what sort of information you leak about the individual," Tonekaboni says. "Once you identify them, you know a lot more."

The researchers plan to expand the work to become more interdisciplinary, adding clinicians and privacy experts as well as legal experts. "There's a reason our health data is private," Tonekaboni says. "There's no reason for others to know about it."

Provided by Massachusetts Institute of Technology

Saturday, January 17, 2026

 

DIGITAL LIFE


RedVDS: Microsoft dismantles platform used in thousands of phishing attacks and financial fraud

An investigation conducted by Microsoft revealed a global network of cybercriminals that used cheap virtual servers for multiple phishing attacks, financial fraud, and data theft.

In a joint international operation, Microsoft's Digital Crimes Unit, in collaboration with Europol and authorities from the United Kingdom and Germany, dismantled the RedVDS infrastructure. The organization operated under the "Cybercrime as a Service" (CaaS) model, selling subscriptions so that ordinary criminals could carry out sophisticated attacks without needing advanced technical knowledge.

Active since March 2025, RedVDS allowed anyone, paying just $24 a month, to access disposable virtual machines and tools capable of sending millions of phishing emails daily. It is estimated that the group facilitated fraud totaling more than $40 million in losses in the United States alone.

Microsoft's Digital Crimes Unit announced the successful dismantling of RedVDS, a virtual dedicated server (VDS) provider that had established itself as a central piece in the machinery of international cybercrime.

The operation, coordinated with authorities from several countries, put an end to a platform that facilitated thousands of phishing attacks, financial fraud, and intrusions into corporate networks, generating estimated losses of $40 million (approximately €34.4 million) in the United States alone since March 2025.

RedVDS was not a typical hosting provider: it functioned as a veritable marketplace for malicious actors, offering a simplified interface where it was possible to purchase Windows servers accessible via Remote Desktop Protocol (RDP) at extremely low prices.

Without any type of verification or usage restriction, the platform guaranteed total anonymity to its "clients," exclusively accepting payments in cryptocurrencies such as Bitcoin, Litecoin, and Monero. To give an appearance of legality, the organization operated under a fictitious entity based in the Bahamas.

The success of Microsoft's investigation was largely due to a flaw in the technical operation of RedVDS. Experts identified that all the servers made available were created from a single cloned image of Windows Server 2022.

This image always shared the same computer name, “WIN-BUNS25TD77J”, an anomaly that became the digital signature of the operation. The operator behind the infrastructure, identified by the code name Storm-2470, used QEMU virtualization and stolen licenses to scale the business in minutes, allowing attackers to launch massive campaigns with almost zero operating costs.
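As an illustration of how such an artifact can be turned into a detection rule, the Python sketch below flags connection-log rows whose reported computer name matches the shared image's hostname. The CSV layout and the "hostname" field are assumptions made for the example; this is not Microsoft's tooling or hunting query.

    # Hedged example: flag log records matching the RedVDS hostname indicator.
    import csv

    REDVDS_HOSTNAME_IOC = "WIN-BUNS25TD77J"

    def flag_suspect_hosts(csv_path: str) -> list:
        """Return rows from a connection log whose hostname matches the IOC."""
        with open(csv_path, newline="") as f:
            return [row for row in csv.DictReader(f)
                    if row.get("hostname", "").upper() == REDVDS_HOSTNAME_IOC]

    # Usage, assuming a CSV with columns such as timestamp, src_ip, hostname:
    # for hit in flag_suspect_hosts("rdp_connections.csv"):
    #     print(hit["timestamp"], hit["src_ip"])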

RedVDS was popular across several threat groups, including the well-known Storm-0259, Storm-2227, Storm-1575, and Storm-1747. These groups used the infrastructure to target critical sectors such as healthcare, construction, real estate, and education in countries such as the United Kingdom, France, Germany, Canada, and the United States.

On the analyzed servers, investigators found pre-installed tools for sending mass emails, contact extractors, and VPN clients. They also detected the use of artificial intelligence (AI), namely ChatGPT, to create more convincing phishing messages without grammatical errors, helping criminals overcome language barriers.

The scale of the operation was impressive, with Microsoft identifying more than 7,300 IP addresses linked to RedVDS in just 30 days, hosting more than 3,700 fraudulent domains. The shutdown of this infrastructure represents a major blow to the cybercrime economy, but the tech giant is issuing a warning.

To mitigate similar risks, organizations should strengthen their defenses through phishing-resistant multi-factor authentication (MFA), conditional access policies, and continuous employee training, thus ensuring that social engineering attempts are stopped on first contact.

mundophone


DIGITAL LIFE


Profit above all else: YouTube relaxes monetization policy on videos with controversial content

YouTube is updating its guidelines for videos containing content that advertisers define as controversial, allowing more creators to earn full ad revenue when they tackle sensitive issues in a nongraphic way.

With the update that went into effect Tuesday, YouTube videos that dramatize or cover issues including domestic abuse, self-harm, suicide, adult sexual abuse, abortion and sexual harassment without graphic descriptions or imagery are now eligible for full monetization.

Ads will remain restricted on videos that include content on child abuse, child sex trafficking and eating disorders.

The changes were outlined in a video posted to the Creator Insider YouTube channel on Tuesday, and the advertiser-friendly content guidelines were also updated with specific definitions and examples.

"We want to ensure the creators who are telling sensitive stories or producing dramatized content have the opportunity to earn ad revenue while respecting advertiser choice and industry sentiment," said Conor Kavanagh, YouTube's head of monetization policy experience, in the video announcing the changes. "We took a closer look and found our guidelines in this area had become too restrictive and ended up demonetizing uploads like dramatized content."

YouTube is updating its advertiser-friendly content guidelines to allow more videos on controversial issues to earn full ad revenue, as long as they’re dramatized or discussed in a non-graphic manner. These controversial topics include self-harm, abortion, suicide, and domestic and sexual abuse. YouTube notes that content on child abuse or eating disorders will remain ineligible for full monetization.

“In the past, the degree of graphic or descriptive detail was not considered a significant factor in determining advertiser friendliness, even for some dramatized material,” YouTube explained. “Consequently, such uploads typically received a yellow dollar icon, which restricted their ability to be fully monetized. With this week’s update, our guidelines are becoming more permissive, and creators will be able to earn more ad revenue.”

The Google-owned company says it’s making the change in response to creator feedback that YouTube’s guidelines were leading to limited ad revenue on dramatized and topical content. YouTube notes that it wants to ensure that creators who are telling sensitive stories or producing dramatized content have the opportunity to earn ad revenue.

“We took a closer look and found our guidelines in this area had become too restrictive and ended up demonetizing uploads like dramatized content,” YouTube said. “This content might reference topics that advertisers find controversial, but are ultimately comfortable running their ads against. For example, content may be in a fictional context or voiced from personal experiences in passing or in a non-graphic manner. So, as long as the content steers clear of very descriptive or graphic scenes or segments, creators can now earn more ad revenue.”

The company told moderators last year to leave up videos that may violate platform rules if they are considered to be in the public interest. The New York Times reported at the time that these videos included discussions of political, social, and cultural issues. The policy shift came at a time when social media platforms were rolling back online speech moderation after President Donald Trump returned to office.

YouTube notes that there are still some areas where ads will remain restricted: topics like child abuse, including child sex trafficking, and eating disorders are not covered by this update. Descriptive segments on those topics, or dramatized content around them, remain ineligible for ad revenue.

The update also makes personal accounts of these sensitive issues, as well as preventative content and journalistic coverage on these subjects, eligible for full monetization.

The Google-owned company said the degree of graphic or descriptive detail in videos wasn't previously considered when determining advertiser friendliness.

Some creators would attempt to bypass these policies on YouTube and other platforms by using workaround language or substituting symbols and numbers for letters in written text—the most prevalent example across social platforms has been the use of the term "unalive."

YouTube has updated its policies in response to creator feedback before. In July, the company eased its monetization policy regarding profanity, making videos that use strong profanity in the first seven seconds eligible for full ad revenue.

© 2026 The Associated Press. All rights reserved

Friday, January 16, 2026

 

TECH


Fragmented permitting slows US clean energy projects, study finds

As states race to build wind and solar projects needed to curb climate change, how governments approve those projects can either speed construction or fuel delays and conflict, according to a new study by researchers at the University of Massachusetts Amherst.

The article, published in the Policy Studies Journal, examines how renewable energy projects in the U.S. are often slowed by complex and fragmented permitting systems that involve multiple state and local authorities, overlapping rules and poorly timed public engagement.

"The punditry … is that if we do permit reform at the federal level, it's going to solve these permitting pipeline issues, but most of these projects are locally permitted," says Juniper Katz, assistant professor of public policy at UMass Amherst and lead author of the study.

She points out that roughly 96% of large renewable energy projects are built on private land, where federal environmental review laws typically do not apply. Most projects, such as onshore wind, solar farms, battery storage and in-state transmission lines, are approved at the state or local level—not by the federal government.

One review of 53 large wind and solar projects facing organized local opposition between 2008 and 2021 found that nearly half were ultimately canceled, with developers reporting that zoning disputes and local ordinances are the leading cause of multi-year delays.

Developers often must navigate zoning boards, environmental agencies, utility regulators and, in some cases, federal reviews. This creates what Katz and study co-author Natalie Baillargeon, a master's degree candidate at UMass Amherst, describe as a "polycentric" system with multiple centers of authority.

Twelve states leave most decisions to local governments, six place authority at the state level, six split authority between state and local governments, and 26 use a hybrid system that depends on project size or other standards.

Highly centralized systems with minimal procedures tend to move faster but provide fewer opportunities for public input. Systems with extensive procedures and multiple decision-making venues tend to offer more participation but are slower and less predictable.

Katz says it comes down to whether policymakers want to swiftly build out renewable energy infrastructure to combat warming temperatures, or encourage a more deliberate process featuring robust public engagement, layers of approval and potential delays.

"Trade-offs are real," she says. "Communities have to decide what their values are and then have the courage to follow them."

"What's really important is that community engagement starts as early as possible," Baillargeon explains. This lets developers incorporate stakeholder feedback, accelerating the process and sometimes converting a potential denial into an approval with only minor changes.

The study also highlights competing equity claims. Rural communities often argue they bear the land-use and visual impacts of renewable energy while benefits flow elsewhere. At the same time, delays in renewable energy development can prolong pollution exposure in urban and low-income communities located near fossil fuel plants.

Several states have recently overhauled their permitting systems. Massachusetts and New York assign approval authority based on the size of the project, but with firm timelines and structured public hearings. Illinois imposed uniform state standards while leaving decisions with counties. Michigan allows developers to choose between state and local approval, offering incentives for communities that approve projects locally.

Ultimately, Katz and Baillargeon frame renewable energy permitting as an institutional design problem rather than a simple choice between democracy and speed.

Provided by University of Massachusetts Amherst
