Friday, April 24, 2026

 

TECH


JEDEC LPDDR6 roadmap signals major shift to memory-centric computing

Remember when LPDDR memory was strictly meant for thin-and-light laptops and smartphones? Those days are officially over. While we originally saw JEDEC unleash the foundational LPDDR6 standard in July of last year to fuel faster mobile and AI devices, the standards group is already looking ahead to the next evolution. Today, JEDEC previewed a roadmap that reshapes LPDDR6's trajectory, extending the standard deep into datacenters and high-performance accelerated computing.

We've been tracking this memory's blistering potential for a while, from our initial look at the massive speed boosts revealed for next-gen DDR6 and LPDDR6 back in 2024, to Innosilicon shipping the first commercial LPDDR6 IP at an insane 14.4 Gbps per pin back in January. This latest update isn't just about raw speed, though; it's about fundamentally changing how your PC's memory handles data.

The newly announced features include massive capacities up to 512GB density, a new narrower x6 interface, support for processing-in-memory (PIM) and the SOCAMM2 form factor, and a flexible metadata carve-out.

Starting from the top, JEDEC expects to unlock staggering densities beyond the current maximums of LPDDR5 and LPDDR5X, targeting up to 512 GB. This massive scale-up is designed specifically to feed the ever-growing memory capacity requirements of AI training and inference workloads, of course. Considering those requirements are why you can't buy RAM at a reasonable price right now, that's a very good thing. To actually pull off those higher capacities, JEDEC is introducing a narrower per-die interface. LPDDR6 already moved from LPDDR5's x16 channels to x24 channels built from two x12 sub-channels; the roadmap now adds a non-binary x6 sub-channel mode alongside x12, letting manufacturers cram more dies into a single package. This means higher memory capacities per component and per channel.
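To put those figures in perspective, here is some back-of-the-envelope arithmetic, combining the 14.4 Gbps-per-pin rate mentioned above with JEDEC's x24 channel width. Treat it as illustration rather than spec:

```python
# Back-of-the-envelope LPDDR6 bandwidth math (illustrative, not official).
GBPS_PER_PIN = 14.4        # per-pin rate from Innosilicon's LPDDR6 IP
CHANNEL_WIDTH_BITS = 24    # one LPDDR6 channel: two x12 sub-channels

# Peak bandwidth of a single x24 channel, in gigabytes per second.
channel_gb_s = GBPS_PER_PIN * CHANNEL_WIDTH_BITS / 8
print(f"One x24 channel: {channel_gb_s:.1f} GB/s")  # 43.2 GB/s

# The narrower x6 mode lets more dies share a channel, which is where
# the extra capacity per package comes from.
for die_width in (12, 6):
    print(f"x{die_width} dies per x24 channel: {CHANNEL_WIDTH_BITS // die_width}")
```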

PIM is where the "memory-centric" part of our headline comes from. JEDEC is nearing the completion of an LPDDR6 Processing-in-Memory (PIM) standard. Essentially, by baking processing capabilities directly into the memory itself, this tech reduces the need to constantly shuttle data back and forth between the RAM and the CPU. The result is higher inference performance and much lower power consumption. This is pretty bleeding-edge stuff, but it's not completely novel; companies like Samsung and SK hynix have been talking about PIM for years now.
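How much does cutting that shuttling save? A toy cost model makes the motivation concrete. The energy figures below are rough, generic illustrations (not from JEDEC's announcement); the point is that moving a byte off-chip typically costs far more energy than operating on it:

```python
# Toy energy model for processing-in-memory (PIM). Picojoule figures are
# rough illustrations only -- not numbers from any JEDEC spec.
PJ_PER_BYTE_MOVED = 20.0   # energy to move one byte between DRAM and CPU
PJ_PER_OP = 1.0            # energy for one arithmetic operation

def conventional_energy(n_bytes: int) -> float:
    # Read operands into the CPU, compute, write results back to DRAM.
    return 2 * n_bytes * PJ_PER_BYTE_MOVED + n_bytes * PJ_PER_OP

def pim_energy(n_bytes: int) -> float:
    # Compute inside the memory device: no bus round trip at all.
    return n_bytes * PJ_PER_OP

n = 1_000_000  # bytes touched by one hypothetical inference step
print(f"conventional: {conventional_energy(n) / 1e6:.0f} uJ")  # ~41 uJ
print(f"in-memory:    {pim_energy(n) / 1e6:.0f} uJ")           # ~1 uJ
```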

Also, JEDEC is actively developing an LPDDR6 SOCAMM2 module standard. This ensures the compact, serviceable module form factor has a clear upgrade path from today's LPDDR5X SOCAMM2 modules, which are currently used exclusively in massive datacenters and GPU clusters like NVIDIA's NVL72 racks. Hopefully it means that this form factor comes to the desktop as well, so we can keep getting socketed, upgradable memory without sacrificing LPDDR performance. Finally, the flexible metadata carve-out is another feature largely aimed at server farms: JEDEC is giving datacenters the option to balance user-visible capacity against metadata space based on specific reliability requirements. The goal here is to implement these stability features while minimizing any hit to peak data throughput.

With rumors swirling that chips as early as AMD's upcoming Medusa Halo (expected to launch early next year) might leverage LPDDR6 for huge bandwidth gains, this JEDEC roadmap makes perfect sense. LPDDR is no longer just "low power" memory; rather, it is becoming a foundational building block for the next generation of high-capacity, insanely fast PCs and servers.

In the announcement, the organization highlights the new in-memory processing architecture, PIM (processing-in-memory), which allows the RAM chip itself to perform some calculations instead of constantly assigning this task to the processor. This avoids the constant transfer of information back and forth across the board, increasing speed and significantly reducing energy consumption.

To achieve this, significant physical modifications were made to the hardware. The memory channel was widened from 16 to 24 bits, while per-die interfaces got narrower, allowing more dies to share each channel. In practice, this change frees up space for more chips within the same component. The end result is memory with much greater capacity in a smaller space.

Furthermore, the design introduces the new, more compact SOCAMM2 module format, which facilitates maintenance and allows for quick replacement in machines already using the previous generation of modules.

Plenty of space for heavy tasks... With all these adjustments, the new memory standard can reach an impressive 512 GB of capacity, a huge leap entirely focused on handling the mountain of data required by today's complex workloads. Manufacturers will also have more freedom to configure memory and find the right balance between usable capacity and reliability metadata for each product.

JEDEC Board of Directors Chairman Mian Quddus noted that the organization is still working to finalize the last technical details before publishing the official standard. The market now awaits the start of factory testing to see how all this evolution behaves in the real world, and the expectation is that the technology will arrive in the next generation of AI servers.

Other companies have already released news around the LPDDR6 standard: SK hynix promises 33% more speed in cell phones, Samsung and Qualcomm are already working on the next generation, LPDDR6X, with up to 1 TB for AI chips, and some rumors speak of 14.4 Gbps RAM.

Quddus also urged readers to "stay tuned for more details" as the subcommittee evaluates these features for final publication. We'll be keeping a close eye on this as LPDDR6 gets ready to take over the hardware space!

by mundophone


DIGITAL LIFE


'Clearly me': AI drama accused of stealing faces

Christine Li is a model and influencer, but not an actor, so when she saw herself playing a cruel character in a Chinese microdrama she felt bewildered, then angry and afraid.

The 26-year-old is one of two people who told AFP their likenesses were cast without consent in the AI-generated show "The Peach Blossom Hairpin," which ran on Hongguo, a major microdrama app owned by TikTok parent company ByteDance.

Li plans to sue the drama makers and the platform, highlighting new legal and regulatory gray areas created by artificial intelligence.

"I was genuinely shocked. It was clearly me," said Li, who lives in Hangzhou in eastern China.

"It was so obvious that they used a specific set of photos I took two years ago" and had posted on social media, she said.

Microdramas are ultra-short, online soap operas hugely popular in China and elsewhere.

When Li's fans alerted her to the series, she was horrified to find her digital twin shown slapping women and mistreating animals.

"I also felt a deep fear. I kept wondering what kind of person would do something like this," Li said.

Hongguo hosts thousands of free, bite-sized shows—both live-action and AI-generated—whose episodes are two or three minutes long.

As of October, the platform had around 245 million monthly active users, according to data cited by Wenwen Han, president of the Short Drama Alliance.

A Hongguo statement in early April said it had taken the series down because the producers had violated platform rules and contractual obligations.

"Sleazy" antagonist...AI's ability to mimic real people has sparked global concern for actors' jobs, and over such deepfakes being used for scams and propaganda.

Li and a man who says he was portrayed as her AI husband in the series, which became a hit last month on Hongguo, spoke out online about their separate unwelcome discoveries.

But even as their stories sparked a public outcry about AI ethics, AFP saw that "The Peach Blossom Hairpin" kept running for days before its removal, with the disputed characters quietly replaced.

The man, a stylist specialized in traditional Chinese clothing and make-up, had posted photos of himself in costume on the Instagram-like Xiaohongshu app.

Like Li, he was upset by the "ugly" portrayal of his likeness as a "sleazy" antagonist in the show.

"Will it have an impact on me, on my job, on my future work opportunities?" said the man, who asked to use the pseudonym Baicai.

To keep audiences hooked, microdramas are often full of shocking, larger-than-life moments.

Li and Baicai both showed AFP their original photos and the characters in "The Peach Blossom Hairpin," which bore a strong resemblance.

This photo illustration shows phones displaying the screenshots of Chinese Hanfu stylist Baicai's social media post (left) and the AI microdrama (right) accused of stealing his likeness.

This photo illustration taken in Hong Kong shows phones displaying screenshots of a video from Chinese model and influencer Christine Li accusing an AI microdrama of stealing her likeness without consent.

Legal risk...For low-budget AI microdramas, Chinese regulations say platforms must be the primary checkpoint for potentially dodgy content.

If they do not carry out mandatory content reviews, the videos will be forcibly taken down, according to the National Radio and Television Administration.

If the platforms were aware of any infringement but failed to act on it, affected parties can alert China's cyberspace authorities, which can impose administrative penalties, according to Zhao Zhanling, a partner at Beijing Javy Law Firm.

Hongguo said in a second statement this month it would continue to strengthen how it reviews content and how it authorizes creators, among other steps.

It said it had dealt with 670 AI microdramas that violated regulations, with most taken down, and warned it would crack down on repeated breaches.

When approached for comment, parent company ByteDance referred AFP to the two Hongguo statements.

Li and Baicai say they need more information from Hongguo to confirm the identity of the drama's creator—with two companies potential candidates.

One is linked to a verified account on the Chinese version of TikTok that also published the series. Another is listed as the drama's producer on an official Chinese filing system.

AFP contacted both firms but received no response.

Using AI to slash costs may be tempting in the fast-growing, multibillion-dollar microdrama market.

But featuring someone in a demeaning way without permission "may constitute an infringement of both portrait rights and reputation rights," said Li's lawyer Yijie Zhao, from Henan Huailv Law Firm.

"Associated with controversy"...National regulations require microdrama makers to register to obtain a license—a step made mandatory for AI-generated animations from this month.

But producers could remain in the shadows by registering temporary outfits, Zhao said, while some allegedly use overseas servers to hide.

In 2024, a Beijing court ordered a company to apologize and pay compensation to a celebrity after its AI software enabled users to produce a virtual persona using his photos and name that could exchange intimate messages.

But lawyers told AFP that compensation for plaintiffs like Li likely won't amount to much due to the limited commercial value of an ordinary likeness.

Li worries that the saga may cost her opportunities in the modeling industry, as she is now "associated with controversy."

Baicai has not launched legal action, but hopes to see more measures from regulators and platforms to protect people like him.

"There are probably plenty of cases with unknown victims," he said.

© 2026 AFP 



Thursday, April 23, 2026


DIGITAL LIFE


How we got to the point where extreme power fits on your desktop

Two decades ago, reaching the pinnacle of computing required gigantic structures. Today, some of that same power is within reach at home — and the comparison reveals an impressive silent transformation.

The history of technology rarely advances linearly. In many cases, it takes leaps that we only notice when we look back. What once seemed unattainable, restricted to laboratories and large corporations, is beginning to emerge in everyday contexts. And few comparisons illustrate this change as well as the evolution of high-performance computing in the last two decades.

When power meant monumental scale...In the early 2000s, reaching the pinnacle of global computing was an achievement reserved for a select few. One of the most emblematic machines of this period was the IBM Blue Gene/L, a system that redefined the limits of performance at the time.

Installed in highly controlled environments, this supermachine occupied entire rooms and required a complex infrastructure to operate. Its power came from a massive architecture: more than 30,000 processors working together, distributed across thousands of interconnected nodes. By 2004 standards, its more than 70 teraflops represented the most advanced technology on the planet.

This kind of capacity wasn't just impressive—it was essential for cutting-edge scientific research, such as physics simulations, climate studies, and molecular modeling. Access, however, was extremely limited. High costs, intense energy consumption, and technical requirements made this type of technology inaccessible to the general public.

At that time, imagining that a significant fraction of that power could fit into a home computer seemed simply impossible. But the history of technology often defies this kind of prediction.

The silent turning point that changed everything...Fast forward two decades, and the scenario is radically different. A single modern GPU, like the NVIDIA GeForce RTX 4090, is already capable of achieving—and in some cases surpassing—the raw performance of that supermachine in specific tasks.

This card, which fits inside a standard computer case, can exceed 80 teraflops in parallel processing operations. What's most impressive isn't just the number, but the context: we're talking about a component accessible to consumers, not a multi-million dollar scientific facility.

Of course, the comparison has nuances. The old supercomputer was designed for highly distributed workloads and complex simulations, while the modern GPU excels in parallel tasks such as graphics, artificial intelligence, and data processing. Still, the symbolism is undeniable.

What once required thousands of components can now be partially replicated by a single piece of hardware.

This transformation didn't happen by chance. It's the result of several simultaneous revolutions: transistor miniaturization, advances in parallel architectures, improvements in energy efficiency, and constant evolution in manufacturing processes.

Furthermore, GPUs have ceased to be just tools for gaming. They have become central to areas such as artificial intelligence, data science, rendering, and simulation. In other words, they have not only become faster—they have come to play much broader roles.

Far beyond computers...This compression of scale isn't exclusive to high-performance computing. It reflects a broader trend in the technology industry.

Over the years, we've seen storage media evolve from floppy disks to tiny devices with thousands of times greater capacity. CDs have been replaced by ultra-fast SSDs. Complex photographic equipment has been, in part, incorporated into smartphones.

The logic repeats itself: less space, more power.

Interestingly, this evolution also brings new challenges. Modern GPUs, for example, have grown so much in performance—and also in physical size—that they don't always fit in every computer case. The advancement continues, but not without its own limitations.

Still, the big picture is clear. What once represented the absolute limit of technology is now, in part, within reach of any enthusiast.

And this is not just a historical curiosity.

It is a direct sign of how the future of computing will be built: more accessible, more compact, and potentially much more powerful than we can imagine today.

The ability to fit extreme computing power—capable of advanced AI, high-end gaming, and complex simulations—onto a desktop is the result of decades of exponential transistor miniaturization, the shift from sequential CPU processing to parallel GPU computing, and the rise of dedicated AI hardware. This transformation moved computing from room-sized mainframes to powerful, compact personal units. 

Here is how we arrived at this point (below):

1. The Foundation: Moore’s Law and Miniaturization (1960s-2000s)...Transistor Shrinking: The journey began with replacing vacuum tubes with transistors in the 1950s, followed by integrated circuits in the 1960s. Moore’s Law predicted that the number of transistors on a chip would double approximately every two years, shrinking their size while increasing power (the compounding effect is sketched just after this list).

Microprocessors: The 1970s brought single-chip microprocessors (e.g., Intel 4004), allowing PCs to enter homes.

Nanometer Scaling: Modern transistors have shrunk to the single-digit-nanometer scale, allowing billions of transistors to fit on a chip no larger than a fingernail, drastically reducing power consumption while boosting speed.
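That doubling rule from the first item above is just compound growth, easy to see in a few lines of Python. The Intel 4004 baseline of roughly 2,300 transistors is the well-known figure; the projection itself is illustrative:

```python
# Moore's Law as compound doubling: transistor counts double roughly
# every two years. Baseline: Intel 4004 (1971), ~2,300 transistors.
def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2_300) -> float:
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2024):
    print(year, f"{projected_transistors(year):,.0f}")

# The 2024 projection lands near 2 x 10^11 -- the same order of magnitude
# as today's largest chips, which pack over a hundred billion transistors.
```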

2. The shift: CPUs to GPUs (2000s-2010s)...Parallel processing: While Central Processing Units (CPUs) handle general tasks sequentially, Graphics Processing Units (GPUs) were developed to handle thousands of tasks simultaneously (parallel processing).

Gaming driving innovation: The demand for high-resolution graphics and 3D games in the 1990s and 2000s drove the rapid advancement of GPUs (e.g., Nvidia's GeForce 256).

CUDA: In the mid-2000s, Nvidia's introduction of the CUDA platform enabled GPUs to be used for general-purpose computing, not just graphics, paving the way for AI and complex simulations on desktop machines (see the sketch below).
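The sequential-versus-parallel contrast is about execution style, and it can be felt even without a GPU. In the sketch below, NumPy's bulk array operations stand in for the thousands of threads a CUDA kernel would launch; it's an analogy on the CPU, not actual GPU execution:

```python
import time
import numpy as np

# Contrast one-element-at-a-time (CPU-style) execution with a single
# data-parallel bulk operation (GPU-style, approximated here by NumPy).
data = np.random.rand(10_000_000).astype(np.float32)

t0 = time.perf_counter()
total_seq = 0.0
for x in data[:100_000]:        # plain Python loop, 1% of the data
    total_seq += x * x
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
total_par = float(np.dot(data, data))   # one vectorized pass over all 10M
t_par = time.perf_counter() - t0

print(f"loop over 100k elements: {t_seq:.4f} s")
print(f"vectorized over 10M:     {t_par:.4f} s")
```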

3. The new era: specialized AI hardware (2020s-Present)...Neural processing units (NPUs): Modern desktops now feature specialized AI accelerators called Neural Processing Units (NPUs) or Tensor cores. These are designed specifically to run AI workloads locally rather than relying on the cloud.

Local AI capabilities: This allows for instantaneous response for AI tasks like content creation, real-time translation, and voice assistance, enhancing privacy and reducing latency.

"Ultimate performance" tuning: Operating systems like Windows now feature hidden "Ultimate Performance" modes that eliminate CPU idle states, keeping hardware at maximum performance levels consistently, which is useful for workstation-level tasks on a desktop. 

4. Enabling technologies...Solid-state drives (SSDs): Faster, smaller storage replaced traditional hard drives.

64-bit processors & DDR memory: Improvements in memory bandwidth and processing capacity allowed for larger, more complex applications.

UEFI: Replaced the older BIOS, improving boot times and hardware management. 

mundophone


DIGITAL LIFE


How AI bias can creep into online content moderation

A University of Queensland study has shown large language models (LLMs) used in AI content moderation may be prone to subtle biases that undermine their neutrality. A team led by data scientist Professor Gianluca Demartini from UQ's School of Electrical Engineering and Computer Science used persona prompting to test the tendency of AI chatbots to encode and reproduce political biases, and found significant behavioral shifts.

Testing chatbots through political personas...The research team asked six LLMs—including vision models—to moderate thousands of examples of hateful text and memes through the lens of different ideologically diverse AI personas. The results are published in the journal ACM Transactions on Intelligent Systems and Technology.

Professor Demartini said the exercise revealed that AI political personas, even without significantly altering overall accuracy, were prone to introducing consistent ideological biases and divergences in chatbot content moderation judgments.

"It has already been established that persona conditioning can shift the political stance expressed by LLMs," Professor Demartini said. "It demonstrates a need to rigorously examine the ideological robustness of AI systems used in tasks where even subtle biases can affect fairness, inclusivity and public trust."

How the ideological personas were built...The AI personas used in the study were drawn from a database of 200,000 synthetic identities ranging from schoolteachers to musicians, sports stars and political activists. Each persona was put through a political compass test to determine its ideological positioning, and the 400 most "extreme" personas were then asked to identify hateful online content.

Professor Demartini said his team found that assigning a persona to an LLM chatbot altered its precision and recall in line with its ideological leanings, rather than changing the overall accuracy of hate speech detection.
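That measurement is straightforward to sketch: hold a gold-standard set of moderation labels fixed and compare each persona's verdicts against it. The toy data below is invented for illustration and is not from the UQ study:

```python
from sklearn.metrics import precision_score, recall_score

# Gold-standard moderation labels: 1 = hateful, 0 = not hateful (toy data).
gold = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]

# Invented verdicts from two persona-conditioned models.
verdicts = {
    "left_persona":  [1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "right_persona": [1, 0, 1, 0, 0, 1, 1, 1, 1, 0],
}

for persona, pred in verdicts.items():
    p = precision_score(gold, pred)
    r = recall_score(gold, pred)
    # Similar overall accuracy can hide very different precision/recall
    # trade-offs -- the ideological skew the UQ team measured.
    print(f"{persona}: precision={p:.2f} recall={r:.2f}")
```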

Ideological cohesion in larger language models...However, the team found LLMs—especially larger models—exhibited strong ideological cohesion and alignment between personas from the same ideological "region."

Professor Demartini said this suggested larger AI models tend to internalize ideological framings, as opposed to smoothing them out or 'neutralizing' them.

"As LLMs become more capable at persona adoption, they also encode ideological 'in-groups' more distinctly," Professor Demartini said. "On politically targeted tasks like hate speech detection, this manifested as partisan bias, with LLMs judging criticism directed at their ideological in-group more harshly than content aimed at their opponents."

In-group protection and defensive bias...Professor Demartini said larger LLMs also displayed more complex patterns, including a tendency towards defensive bias.

"Left personas showed heightened sensitivity to anti-left hate, and right-wing personas were more sensitive to anti-right hate speech," Professor Demartini said. "This suggests that ideological alignment not only shifts detection thresholds globally, but also conditions the model to prioritize protection of its 'in-group' while downplaying harmfulness directed at opposing groups."

Why neutral oversight still matters...Researchers said the project highlighted that it was crucial for high-stakes content moderation tasks to be overseen by neutral arbiters so that fairness and public trust are maintained and the health and well-being of vulnerable demographics are protected.

"People interact with AI programs trusting and believing they are completely neutral," Professor Demartini said. "In content moderation the outputs of these models reflect embedded ideological biases that can disproportionately affect certain groups, potentially leading to unfair treatment of billions of users."

AI bias in content moderation isn't usually the result of a single "glitch." Instead, it typically "creeps in" through several interconnected layers of the system—from the data used for training to the way the software is designed and how it interacts with users. 

1. Training Data: "Garbage In, Garbage Out"...The most common entry point for bias is the data used to train the AI. 

Historical biases: If a platform’s past moderation decisions (made by humans) were biased, the AI will learn and reproduce those same patterns. For example, if certain groups were historically flagged more often, the AI may incorrectly "learn" that their content is inherently more problematic.

Lack of diversity: Models trained primarily on data from Western, English-speaking users often fail to understand the cultural nuances or slang of other groups, leading to "context blindness".

Proxy variables: Even if an AI isn't explicitly told a user's race or gender, it can use "proxies" like ZIP codes, dialects (e.g., AAVE), or specific cultural references to unintentionally target certain demographics. 
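The proxy-variable mechanism in particular can be demonstrated in a few lines: train a classifier with the protected attribute removed but a correlated stand-in left in, and the historical bias survives. Everything below is synthetic, invented data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# 'group' is the protected attribute we exclude from training;
# 'dialect_marker' is a feature that correlates with it ~90% of the time.
group = rng.integers(0, 2, n)
dialect_marker = (group + (rng.random(n) < 0.1)) % 2
benign_feature = rng.random(n)

# Historical labels that were biased against group 1.
flagged = ((group == 1) & (rng.random(n) < 0.6)) | (rng.random(n) < 0.1)

X = np.column_stack([dialect_marker, benign_feature])  # no 'group' column!
model = LogisticRegression().fit(X, flagged)

# A strongly positive weight on the proxy means the model has quietly
# reconstructed -- and keeps penalizing -- the protected group.
print("weight on dialect proxy:", round(float(model.coef_[0][0]), 2))
```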

2. Algorithmic design: technical choices...How a model is built and fine-tuned can inadvertently bake in bias. 

Ideological alignment: Recent studies (April 2026) show that Large Language Models (LLMs) used for moderation can adopt "ideological personas". This leads to a "defensive bias" where the AI prioritizes protecting its own "in-group" while being less sensitive to harm directed at opposing groups.

Translation errors: Many global platforms use AI to translate content before moderating it. These systems often strip away the very context—like irony, satire, or reclaimed slurs—needed to judge if a post is truly harmful. 

3. Human & interaction bias...Human decisions at every stage influence the final outcome. 

Subjective labeling: The humans who "label" the training data (deciding what counts as "hate speech" or "harassment") bring their own personal and cultural biases to the task.

Automation bias: Human moderators who oversee AI-flagged content may become overly reliant on the machine's judgment, assuming it is "objective" and failing to catch its errors.

Weaponized reporting: Bad actors sometimes exploit "reactive moderation" by mass-reporting legitimate content from marginalized groups, tricking the AI into flagging it as "abusive" based on the volume of reports. 

Impact on marginalized groups...Because of these issues, marginalized communities often face "over-moderation". 

LGBTQ+ Content: Valid health or identity discussions may be incorrectly flagged as "sexually explicit" because of rigid or biased nudity and policy filters.

Activists: Posts related to movements like Black Lives Matter or Indigenous rights have been mistakenly removed due to automated systems failing to distinguish political speech from "violence" or "hate speech".


Provided by University of Queensland

Wednesday, April 22, 2026


DIGITAL LIFE


Half of the major digital platforms fail in transparency regarding advertising and user data, according to an international study

A survey conducted by researchers from Brazil and the United Kingdom reveals that social networks operate with low levels of transparency, hindering independent investigations and the fight against disinformation. In the Brazilian scenario, limitations are even more evident and worrying.

The influence of digital platforms on the flow of information has never been so evident — and, at the same time, so difficult to examine closely. A new international study sheds light on this paradox by showing that, although these companies collect enormous volumes of data on users, they offer little visibility into their own practices.

The research, entitled Data Not Found, was conducted by NetLab, from the Federal University of Rio de Janeiro, in partnership with the Minderoo Centre for Technology & Democracy, in the United Kingdom. The objective was to analyze, in an unprecedented way, how large digital platforms make data on content and advertising available.

Fifteen platforms operating in Brazil, the European Union, and the United Kingdom were evaluated, including popular names such as TikTok, Instagram, Facebook, YouTube, Kwai, and Telegram. Comparing these regions allows us to understand how different regulatory contexts influence access to information.

The European Union, for example, has one of the most advanced legislations in the world, notably the Digital Services Act (DSA), which establishes stricter transparency rules. The United Kingdom, on the other hand, adopts a more flexible approach, based on specific assessments by regulatory authorities. Brazil, in turn, still faces a developing regulatory landscape.

Limited transparency and incomplete data...To measure the level of openness of the platforms, the researchers used the Social Media Transparency Index, which assesses factors such as availability, quality, and accessibility of data.
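NetLab has not published its scoring formula here, but composite indices of this kind typically reduce to a weighted average over the criteria the article names. A hypothetical Python sketch (the weights and scores are invented; the real Social Media Transparency Index methodology may differ):

```python
# Hypothetical composite transparency score; the actual Social Media
# Transparency Index may weight or combine criteria differently.
WEIGHTS = {"availability": 0.4, "quality": 0.35, "accessibility": 0.25}

def transparency_index(scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores, each in [0, 1]."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

platforms = {
    "platform_a": {"availability": 0.7, "quality": 0.4, "accessibility": 0.5},
    "platform_b": {"availability": 0.2, "quality": 0.3, "accessibility": 0.1},
}
for name, scores in platforms.items():
    print(name, round(transparency_index(scores), 2))
```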

The results point to a widespread problem: in virtually all the platforms analyzed, the data is incomplete, difficult to access, and poorly standardized. This includes flaws in ad libraries, lack of clarity on campaign financing and targeting, as well as obstacles to tracking essential information.

In Brazil, the situation is even more critical. Some tools available in other countries simply do not exist here, or function in a more limited way. This significantly reduces the ability of independent researchers to analyze the impact of these platforms.

An opaque system by nature...According to the study, the lack of transparency is not isolated, but structural. Even when mechanisms for accessing data exist, they are often inconsistent and unreliable.

This scenario creates a clear imbalance: while platforms accumulate detailed information about their users, the internal workings of these companies remain virtually inaccessible to the public. In practice, the platforms themselves define what can or cannot be investigated about them.

Many of these transparency initiatives end up functioning more as image strategies than as real tools for accessing information. The result is an appearance of openness that does not translate into useful data for analysis.

Impacts for research, regulation, and society...The opacity of platforms has direct consequences for different sectors. Researchers face difficulties in validating studies and investigating social impacts, while regulatory authorities lack information to conduct audits or open investigations.

This prevents, for example, the effective mapping of disinformation campaigns, abusive advertising practices, or the exposure of vulnerable audiences—such as children and adolescents—to harmful content.

Without reliable and accessible data, it becomes almost impossible to understand the true extent of these problems or to develop effective public policies to address them.

The global debate on the power of digital platforms has reinforced the importance of transparency as a central element in ensuring the integrity of information. Organizations such as the UN already recognize that access to quality data is essential for accurate diagnoses of the digital environment.

However, the study highlights that it is not enough to simply release data: it is essential that it be complete, standardized, and truly useful for analysis. Currently, many available tools offer limited resources, hindering deeper investigations.

Furthermore, even in regions with advanced legislation, such as the European Union, access to data still largely depends on the decision of the platforms themselves—which represents a significant limitation.

An urgent and global challenge...Faced with this scenario, researchers advocate for the creation of more robust and effective regulations, especially in countries like Brazil. At the same time, they suggest that the platforms themselves adopt higher standards of transparency on a voluntary basis.

The lack of uniformity across regions also exacerbates inequalities: while some researchers gain access to data, others—especially in the Global South—remain excluded, even when dealing with more vulnerable contexts.

Ultimately, transparency cannot be treated as a corporate choice. In a world increasingly dependent on digital platforms for information and public debate, it needs to be seen as an essential condition for protecting the collective interest.

Research indicates that around half or more of major digital platforms (including Meta, Google, TikTok, X, and others) fall short in providing adequate transparency regarding advertising and user data. A 2024 analysis found that opacity is the norm rather than the exception, particularly regarding how user data is used for targeting and the lack of accessible advertising repositories for independent researchers. 

Key findings on ad transparency failures (below):

Lack of repositories: Several major platforms, including Telegram, TikTok, X (Twitter), and Spotify, have failed to provide functional, comprehensive, and public advertising repositories in many regions, notably in the Global South.

Pinterest: Pinterest has faced significant scrutiny and legal complaints regarding its transparency in user data tracking and advertising practices, particularly within the European Union. Critics, including digital rights advocacy group noyb (None Of Your Business), have accused the platform of violating GDPR by engaging in "secret tracking" and failing to provide adequate information on how data is shared with third parties.

Inadequate data access: Even where libraries exist, such as Meta’s Ad Library, the data provided is often limited, with insufficient information on ad targeting, total spend, or reach.

"Transparency-washing": Researchers argue that platforms often employ "transparency-washing," creating limited, self-regulated tools to avoid stricter, mandatory, and more comprehensive oversight.

API restrictions: Social media platforms are increasingly restricting access to their Application Programming Interfaces (APIs), which are essential for independent data collection, effectively blocking researchers from auditing their systems.

Ephemeral ads: Ephemeral (short-lived) ads are often missed by transparency tools, creating significant blind spots for monitoring disinformation or illegal content. 

Source: The Conversation


DIGITAL LIFE


Generative AI may cut costs in machine-learning systems, but it increases risks of cyberattacks and data leaks

Using generative AI to design, train, or perform steps within a machine-learning system is risky, argues computer scientist Michael Lones in a paper appearing in Patterns. Though large language models (LLMs) could expand the capabilities of machine-learning systems and decrease costs and labor needs, Lones warns that using them reduces transparency and control for the people developing and using these systems and increases the risk of malicious cyberattacks, data leaks, and bias against underrepresented groups.

"Machine-learning developers need to be aware of the risks of using GenAI in machine learning and find a sensible balance between improvements in capability and the risks that might come with that," says Lones, a computer scientist at Heriot-Watt University in Edinburgh, UK. "Given the current limitations of generative AI, I'd say this is a clear example of just because you can do something doesn't mean you should."

How generative AI is being integrated...Machine-learning systems are algorithms that learn to recognize patterns in data, which they can then use to make predictions and decisions regarding new data. Machine learning has been around for decades, and most people encounter it in their daily lives in the form of spam filters, product recommendations on e-commerce websites, and social media newsfeeds. In the last two or so years, there has been a push to incorporate generative AI (in the form of LLMs) into machine-learning systems, but doing so carries risks and limitations that developers and the general public should be aware of, Lones says.

Lones explores four ways in which generative AI is currently being applied in machine learning: as a component within a machine-learning pipeline, to design and code machine-learning pipelines, to synthesize training data, and to analyze machine-learning outputs. All of these applications carry risks, Lones says, and these risks are compounded if LLMs are used for multiple tasks within a machine-learning system, or if LLMs are "agentic"—meaning they can autonomously use external tools to solve problems.
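Of those four uses, synthesizing training data is perhaps the easiest to sketch. In the hypothetical Python below, llm_generate is a placeholder for whatever model API a team actually uses (not a real library call), and the review queue reflects Lones's advice to manually evaluate LLM output before trusting it:

```python
# Hypothetical sketch of one integration point Lones describes: an LLM
# synthesizing training data. 'llm_generate' is a placeholder, not a
# real API -- swap in your own model client.
def llm_generate(prompt: str) -> str:
    return f"[synthetic text for: {prompt}]"  # canned output so this runs

def synthesize_examples(label: str, n: int) -> list[str]:
    """Ask the LLM for n examples of a class we lack real data for."""
    return [llm_generate(f"Write one realistic '{label}' support message.")
            for _ in range(n)]

def review_queue(examples: list[str]) -> list[dict]:
    # Risk control: every synthetic sample awaits a human reviewer before
    # entering the training set, since hallucinated or biased samples
    # would otherwise be baked into every downstream model.
    return [{"text": e, "approved": None} for e in examples]

queue = review_queue(synthesize_examples("refund request", 3))
print(len(queue), "synthetic samples awaiting manual review")
```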

Complex systems and high‑stakes sectors..."If you have GenAI working in a number of different ways within your machine-learning workflows or system, then they can interact in unpredictable and hard to understand ways," says Lones. "My advice at the moment is to avoid adding too much complexity in terms of how we use GenAI in machine learning, particularly if you're in a sector that has high stakes that impact people's lives and livelihood."

One of the biggest risks is simply that LLMs sometimes make mistakes, bad decisions, and fabricate or "hallucinate" information. Lones says that these errors aren't necessarily predictable and may be difficult to evaluate because LLMs operate in a non-transparent way, which presents an additional issue for legal compliance.

"In areas like medicine or finance, there are laws about being able to show that the machine-learning system is reliable, and that you can explain how it reaches decisions," says Lones. "As soon as you start using LLMs, that gets really hard, because they're so opaque."

Security, privacy, and public awareness...Lones advises machine-learning developers to always manually evaluate LLM-generated code and outputs. He also warns that bigger, remotely hosted LLMs often store and share data, which means that using them opens up opportunities for cybersecurity breaches and the leakage of data and sensitive information.

"It's important for people in the general public to be aware of the limitations of GenAI systems," says Lones. "Companies will deploy these systems to do things like cut costs, and this may improve the experience that end users get, but it may also have negative consequences, such as bias and unfairness."

Generative artificial intelligence (GenAI) can indeed reduce costs in machine learning (ML) systems, but these savings come with new operational and financial risks. While traditional ML focuses on analysis and prediction, GenAI acts on creation and synthesis, transforming the software development lifecycle and business management.

How generative AI reduces costs... GenAI reduces expenses primarily by automating tasks that previously required skilled human intervention or slow manual processes:

Software and IT development: GenAI tools accelerate workflow by generating repetitive code (boilerplate), creating test scripts, and writing technical documentation. Some companies report reductions of 30% to 45% in development costs.

Data management and R&D: GenAI can synthesize training data, which is crucial when historical data is scarce or protected by privacy, reducing research and development costs by about 10% to 15%.

Customer operations: Advanced chatbots based on LLMs can manage a higher percentage of complex queries without constant human supervision, decreasing the cost per ticket by up to 60%.

Mechanical and structural design: The use of generative AI allows for the optimization of material use, creating lighter and more resistant designs that reduce waste and production costs.

The hidden side of costs and risks...Despite the potential for savings, experts warn of the "cost iceberg" of GenAI (below):

Uncertainty and scale: Computing costs can skyrocket when moving from pilots to production systems, with predictions of a nearly 90% increase in cloud spending between 2023 and 2025 due to GenAI.

Security and privacy risks: The use of LLMs increases the opacity of systems, making it difficult to control sensitive data and opening doors to leaks and cyberattacks.

Continuous maintenance: Unlike traditional software, AI models require constant retraining and monitoring. It is estimated that up to 75% of the resources initially invested need to be maintained for ongoing support to prevent model degradation.

Biases and hallucinations: The lack of transparency in LLMs can introduce biases or "hallucinations" (false information), which generates legal and compliance risks, especially in sensitive sectors such as finance and medicine.

To maximize return on investment (ROI), organizations are adopting strategies such as intelligent model routing (using smaller and cheaper models for simple tasks) and the use of AI gateways to centralize governance and spending control.
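The "intelligent model routing" mentioned above boils down to a dispatcher that sends simple requests to a small, cheap model and escalates the rest. A minimal sketch; the complexity heuristic and per-token prices are invented for illustration:

```python
# Minimal model-routing sketch: easy queries go to the cheap model,
# hard ones to the expensive model. Heuristic and prices are invented.
COST_PER_1K_TOKENS = {"small": 0.0002, "large": 0.01}

def complexity(query: str) -> float:
    # Toy heuristic: longer, question-dense queries count as harder.
    return len(query.split()) / 20 + query.count("?") * 0.2

def route(query: str) -> str:
    return "large" if complexity(query) > 0.5 else "small"

queries = [
    "Reset my password",
    "Compare the legal implications of GDPR vs. CCPA for our EU rollout?",
]
for q in queries:
    model = route(q)
    print(f"{model:>5} <- {q!r} (est. ${COST_PER_1K_TOKENS[model]}/1k tokens)")
```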

Provided by Cell Press 

Tuesday, April 21, 2026


TECH


Industrial electrification is now a security imperative, finds analysis

Industrial electrification is becoming a matter of economic security as well as decarbonization, according to new Oxford analysis. Continued reliance on fossil fuels leaves 75% of global industry exposed to recurring price shocks, while electrification offers a pathway to stable and resilient energy costs.

The latest disruption linked to tensions around the Strait of Hormuz is only the most recent example of a broader pattern. The 2022 Russian gas crisis forced widespread factory closures and production shifts across Europe and beyond, with many energy-intensive industries yet to fully recover. The authors argue that such shocks are not isolated events, but symptoms of a structural vulnerability tied to fossil fuel dependence.

The impact has been global and persistent. In Asia, the 2022 LNG price spike forced factory shutdowns in Pakistan and Bangladesh and drove up costs for manufacturers in Japan and South Korea. Now, tensions around the Strait of Hormuz are once again feeding through into higher energy prices, renewing pressure on industrial producers across the region. The message is clear: fossil fuel shocks are not one-off events, but a repeated risk.

"Industry has now lived through two major fossil fuel shocks in three years. First the 2022 gas crisis and now Hormuz," says Jan Rosenow, Professor of Energy and Climate Policy at the University of Oxford. "At some point you have to ask: how many times does the alarm have to go off before we change the system?"

Industry's slow shift away from fossil fuels...Industry runs almost entirely on fossil fuels and is therefore uniquely exposed to these risks, the authors say. Yet, despite the exposure, it has been among the slowest sectors to transition.

The analysis highlights that the technologies needed to electrify industry are already becoming available at scale. Recent developments include the delivery of a 95-tonne, 16-meter evaporator for one of the world's most powerful industrial heat pumps at BASF's Ludwigshafen chemical site, and the commissioning of Southeast Asia's first industrial heat battery at a cement plant in Saraburi, Thailand, built entirely with local supply chains in just eight months. These projects demonstrate that industrial electrification is moving beyond the pilot stage and into a global industrial shift.

"The technology to electrify industry exists today," says Professor Rosenow. "What's missing is the political will to fix the price signals and build the grids that would make it happen at scale."

Report findings...The new Oxford report, "High Voltage," provides the evidence base behind that shift. Drawing on more than 1,600 global climate scenarios alongside a systematic engineering review, the report finds that up to 90% of industrial energy demand could be electrified with existing and emerging technologies.

"What surprised us most in this research is how strong the convergence is across two completely independent lines of evidence," says Cassandra Etter-Wenzel, Researcher at the Environmental Change Institute, University of Oxford. "Detailed engineering studies and 1,600 global climate scenarios both point to the same conclusion: up to 90% of industrial energy demand could ultimately be electrified. The potential is not the constraint. The question is whether policy moves fast enough to realize it."

The authors emphasize that key electrification technologies, like heat pumps, electric boilers, heat batteries, and resistance heating, are already proven and commercially available. But deployment is being held back by policy and market failures.

Price, grid and finance barriers...Electricity prices remain artificially expensive relative to gas in many regions due to legacy tax and levy structures that disproportionately burden electricity. Reforming these price signals through electricity pricing reform, carbon pricing, and targeted support for electrified industrial heat will be crucial.

Grid access is another major constraint. Even where the technology exists and the economics work, long connection timelines stall industrial projects. Governments need to streamline permitting, enable anticipatory grid investment, and prioritize industrial connections to unlock progress.

Finally, first-of-kind industrial electrification projects face technology and integration risks that private capital won't bear alone. Instruments like Carbon Contracts for Difference (as used to support the BASF heat pump), grants, and concessional finance are essential to de-risk early deployment and drive down costs for what follows.

Electrification as a resilience strategy...The authors stress that reducing fossil fuel dependence is not only about emissions, but also about resilience. Each unit of fossil fuel replaced with domestic clean energy reduces exposure to geopolitical disruption and price volatility.

"The industries that electrify fastest will stop being victims of the next crisis," says Professor Rosenow. "Every unit of fossil fuel eliminated from an industrial process is a unit that can no longer be held hostage by a pipeline shutdown, a Strait closure, or a price spike."

Industrial electrification has evolved from a matter of decarbonization to a strategic security imperative. New research from institutions like the University of Oxford argues that fossil fuel dependence is now a structural vulnerability, leaving 75% of global industry exposed to recurring price shocks and geopolitical disruptions. 

The security case for electrification...Modern energy security is no longer just about securing oil and gas supplies; it is about freedom from needing to import them at all.

Resilience against geopolitical shocks: Unlike fossil fuels, which can be "held hostage" by pipeline shutdowns, strait closures, or price spikes, domestic clean energy eliminates exposure to external geopolitical leverage.

Economic stability: Electrification offers a pathway to stable and predictable energy costs. In the EU, large-scale electrification could cut fossil fuel dependence by two-thirds by 2040, delivering net savings of €29 billion per year in fuel imports.

"Security dividend": Transitioning to a distributed, electrified energy system provides a "security dividend" by creating a more resilient, decentralized network that is less vulnerable to centralized infrastructure sabotage. 

Industrial and defense implications...The shift toward electricity is increasingly viewed through the lens of national defense and industrial survival. 

Defense integration: Groups like Eurelectric have noted that energy systems are now a "second line of defense". There are calls to allocate a portion of defense spending (such as NATO's 1.5% GDP investment goal) toward energy infrastructure and clean innovation to bolster civil preparedness and military resilience.

Industrial competitiveness: Access to reliable, low-cost electricity is becoming a primary determinant for corporate site selection. Countries like China are pulling ahead, electrifying their energy systems by 10 percentage points each decade to anchor global manufacturing dominance.

Operational benefits: Electric equipment often provides better precision, safety, and energy efficiency—often up to three to four times higher than fossil fuel systems. 

Key strategic challenges...While the security benefits are clear, several "bottlenecks" remain to achieving this at scale (below):

Grid capacity: The global grid must add or replace 80 million kilometers of lines by 2040 to handle the new load.

Technology gaps: While 60% of industrial heat can be electrified today, high-temperature processes still require further innovation.

New dependencies: The transition creates a new reliance on critical raw materials and technologies, currently dominated by China.

Provided by University of Oxford 
