Friday, December 26, 2025

 

TECH


AMD Ryzen 9 9950X3D2 with 192MB L3 cache spotted in multiple benchmarks

The long-awaited "double X3D" CPU, which AMD once denied would ever happen, has finally surfaced in new leaks over at Geekbench and PassMark. We'll cut to the chase: neither benchmark result is particularly interesting, as both show performance nearly identical to the extant Ryzen 9 9950X3D in these specific tests. The more interesting detail is the confirmation that the chip exists at all, as it hadn't appeared in leaks until now.

As expected, the most significant improvement is in the L3 cache — capacity increases from 128 MB to 192 MB (or from 144 MB to 208 MB counting L2), a record for a consumer processor. The secret, as rumored, is that AMD stacks its 3D V-Cache on both of the CPU chiplets in the 9950X3D2, rather than on just one as in the current generation.

Geekbench bears this out: the tool reports the L3 cache as 96 MB x2, confirming that each CCD carries its own V-Cache stack. That would also explain the slightly lower clock speed, since the stacked cache remains heat-sensitive and imposes thermal limits.
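If you want the arithmetic behind that figure, and assuming the standard Zen 5 layout of 32 MB of base L3 per CCD plus a 64 MB V-Cache stack on each, it works out to:

$$2 \times (32\,\text{MB} + 64\,\text{MB}) = 2 \times 96\,\text{MB} = 192\,\text{MB}$$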

Benchmark performance is as expected. In Geekbench, the new chip scores 3,456 points in single-core and 21,062 points in multi-core; in PassMark, the overall total is 71,585 points. As the benchmarks' own comparison data shows, that is within the margin of error of the 9950X3D, with deficits in the range of 2%, rising to roughly 8% against the conventional 9950X.

The lack of more significant gains is not surprising, since the extra cache usually only shines in games and specific professional tasks, such as file compression and decompression. We will have to wait for the chip's debut and the availability of independent reviews to know what the upgrade will really be able to provide.

To back up a moment, we're going to assume that you're familiar with AMD's 3D V-Cache. If you're not, check out our past coverage. People have been hoping for a Ryzen 9 processor with V-Cache on both CCDs for some time despite the somewhat questionable merits of such a processor. In terms of consumer workloads, gaming is the only thing that really benefits from the extra cache, and a single-CCD configuration with one stack of 3D V-Cache is typically the best performer in that arena.

Still, having doubled-up V-Cache does mean that you can enjoy a full sixteen-core CPU without ever having to worry about whether your game is running on the "correct" CCD. With the extant Ryzen 9 7950X3D and Ryzen 9 9950X3D, only half of the cores benefit from the 3D V-Cache, and so it can happen that games end up on the "wrong" cores, drastically reducing performance. That wouldn't be an issue with this chip, and it may also provide real performance benefits in tasks like code compilation, database analysis, computational fluid dynamics, and finite element analysis.

The leaks clearly come out of China, as the Geekbench result (at least) was run on a "Galaxy Microsystems" motherboard, better known as GALAX. Interestingly, GALAX hasn't actually released any motherboards based on the B850 chipset yet, so this is actually a motherboard leak, too. Nothing in the result is particularly surprising; the chip exactly matches the specifications that were leaked back in October by chi11eddog (@g01d3nm4ng0 on Xwitter). Geekbench doesn't list TDP, but given that the rest of the specifications match up, we wouldn't be surprised if this chip tops 200W for the first time since AMD's Centurion.

The PassMark result (hat tip @x86deadandback) is less informative, but it does confirm that the "X3D2" part doesn't seem to lose any performance versus the currently available part. The peak boost clock appears to be 100 MHz lower, at 5.6 GHz instead of 5.7 GHz, but the effect on actual performance is within the margin of error, and the Ryzen 9 9950X3D2 actually beats the older chip in multi-core, suggesting that the higher TDP is helping out there.

It will be fascinating to test this chip, even if we don't expect the results to be all that different for most users. It will be the sort of product where you need to find the use case, rather than the sort of thing that accelerates everything. For the few users who can really make use of such a chip, it will likely be a game-changer, but for everyone else, it's most likely going to be an expensive halo product for those with deep pockets.

mundophone

 

DIGITAL LIFE


Who should get paid when AI learns from creative work?

As generative AI systems become more deeply woven into the fabric of modern life—drafting text, generating images, summarizing news—debates over who should profit from the technology are intensifying.

A new paper in the Journal of Technology and Intellectual Property argues that the current copyright system is ill-equipped to handle a world in which machines learn from—and compete with—human creativity at unprecedented scale.

Frank Pasquale, professor of law at Cornell Tech and Cornell Law School, and co-authors Thomas W. Malone, professor of information technology at MIT Sloan School of Management, and Andrew Ting, professorial lecturer in law at George Washington University, describe a legal innovation called "learnright," a new intellectual property protection that would give creators the right to license their work specifically for use in AI training.

The basic idea behind learnright was first proposed by Malone in 2023. The new paper shows how the concept could actually work legally and economically.

Legal and ethical challenges of AI training...The researchers say the idea stems from a growing imbalance. Today's largest AI systems are trained on vast datasets scraped from the internet—millions of books, articles, songs, photographs, artworks and posts. Some of the authors of those works are now suing AI companies, arguing that using their copyrighted work to train a commercial model without permission is a violation of the law.

Yet court rulings on the issue remain unsettled, especially around whether training counts as fair use. While some judges have signaled skepticism toward AI companies, others have suggested that training may indeed be lawful if considered analogous to a human reading a book.

"The ongoing legal uncertainty here creates problems for both copyright owners and technologists," Pasquale said. "Legislation that mandates bargaining between them would promote fairer sharing in AI's bounty."

The stakes are hard to ignore. Artists complain that their signature styles can now be mimicked in seconds. Journalists are seeing readers peel away as chatbots summarize the news without sending traffic back to publishers. And white-collar workers in law, design, marketing and coding now worry that the next AI upgrade could automate portions of their jobs using the very work they once produced.

Arguments for a new intellectual property right...The authors argue that this is not simply a legal puzzle but a moral one. From a utilitarian perspective, they say, society benefits when creative work continues to be produced—and that requires maintaining incentives for humans to keep making it.

From a rights-based standpoint, they argue that tech companies vigorously protect their own intellectual property while dismissing the value of those whose work powers the models.

From the perspective of virtue ethics (which focuses on the type of character and habits that are essential to human well-being), they suggest that flourishing creative communities depend on norms of attribution and respect.

"Learnright law provides an elegant way of balancing all these competing perspectives," said Malone. "It provides compensation to the people who create the content needed for AI systems to work effectively. It removes the legal uncertainties about copyright law that AI companies face today. In short, it addresses a growing legal problem in a way that is simpler, fairer, and better for society than current copyright law."

How learnright could work in practice...The proposal would not replace copyright. Rather, it would add a seventh exclusive right to the six already granted to creators, specifically addressing machine learning. Just as copyright law recognizes special protections for digital audio transmissions, the authors say, Congress could extend a new protection for submitting a work to an AI training process.

Under such a regime, companies building generative AI tools would license the right to learn from specific datasets—much as some already do with news archives or stock photo libraries. The authors say that market negotiations would naturally set fair rates, and that clearinghouses or collective licensing organizations could replicate successful models from the music industry.

Critics may argue that such a right could slow innovation or burden startups. But the authors counter that unrestrained training could ultimately undermine the very creative ecosystem that AI depends on. They point to research suggesting that feeding models their own outputs over time can lead to "model collapse," reducing quality. Without a continually refreshed supply of human-generated art, journalism and scholarship, they say, AI's progress could stagnate.

Policy implications and the path forward...The paper arrives as lawmakers signal growing interest in regulating generative AI. A learnright, the authors argue, offers a clear path for policymakers: a middle ground that neither bans training nor leaves creators uncompensated.

"At present, AI firms richly compensate their own management and employees, as well as those at suppliers like NVIDIA," says Pasquale. "But the copyrighted works used as training data are also at the foundation of AI innovation. So it's time to ensure its creators are compensated as well. Learnright would be an important step in this direction."

Provided by Cornell University

Thursday, December 25, 2025

 

TECH


Multi-agent AI could change everything—if researchers can figure out the risks

You might have seen headlines sounding the alarm about the safety of an emerging technology called agentic AI.

That's where Sarra Alqahtani comes in. An associate professor of computer science at Wake Forest University, she studies the safety of AI agents through the new field of multi-agent reinforcement learning (MARL).

Alqahtani received a National Science Foundation CAREER award to develop standards and benchmarks to better ensure that multi-agent AI systems will continue to work properly, even if one of the agents fails or is hacked.

AI agents do more than sift through information to answer questions, the way the large language model (LLM) technology behind tools such as ChatGPT and Google Gemini does. AI agents think and make decisions based on their changing environment—like a fleet of self-driving cars sharing the road.

Multi-agent AI offers innumerable opportunities. But failure could put lives at risk. Here's how Alqahtani proposes to solve that problem.

What's the difference between the AI behind ChatGPT and the multi-agent AI you study?

Sarra Alqahtani: ChatGPT is trained on a huge amount of text to predict what the next word should be, what the next answer should be. It's driven by human writing. For the AI agents that I build—multi-agent reinforcement learning—they think, they reason and they make decisions based on the dynamic environment around them. So they don't only predict; they predict and then make decisions based on that prediction. They also identify the uncertainty level around them and then make a decision about that: Is it safe for me to make a decision, or should I consult a human?

AI agents live in certain environments, and they react and act in those environments to change them over time, like a self-driving car. ChatGPT still has some intelligence, but that intelligence is tied to the text, to the predictability of the text, and not to acting or making a decision.

You teach teams of AI agents through a process called multi-agent reinforcement learning, or MARL. How does it work?

Sarra Alqahtani: There is a team of AI agents collaborating to achieve a certain task. You can think of it as a team of medical drones delivering blood or medical supplies. They need to coordinate and decide, in time, what to do next—speed up, slow down, wait. My research focus is on building and designing algorithms to help them coordinate efficiently and safely without causing any catastrophic consequences to themselves or to humans.

Reinforcement learning is the learning paradigm that is actually behind even how we humans learn. It trains the agent to behave by making mistakes and learning from those mistakes. So we give them rewards and we give them punishments when they do something good or bad. Rewards and punishments are mathematical functions, or a number, a value. If you do something good as an agent, I'll give you a positive number. That tells the agent's brain that's a good thing.
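To make the reward-and-punishment idea concrete, here is a minimal, illustrative Q-learning sketch in Python; the drone-delivery scenario, states, and reward values are invented for this example and are not taken from Alqahtani's research.

```python
# Toy Q-learning sketch (illustrative only): a "drone" on positions 0..4
# must learn to reach the delivery point (4) without drifting into the
# unsafe zone (0). Rewards are just numbers: positive for good, negative for bad.
import random

ACTIONS = (-1, +1)          # step left or right
GOAL, UNSAFE = 4, 0
START = 1

def reward(state):
    if state == GOAL:
        return 10.0         # "good" outcome -> positive number
    if state == UNSAFE:
        return -10.0        # "bad" outcome -> punishment
    return -0.1             # small per-step cost encourages efficiency

# Q-table: the agent's estimate of long-term reward for each (state, action)
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    state = START
    while state not in (GOAL, UNSAFE):
        # explore sometimes; otherwise exploit what has been learned so far
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), 4)
        r = reward(nxt)
        # learning from mistakes: nudge the estimate toward reward + future value
        future = 0.0 if nxt in (GOAL, UNSAFE) else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (r + gamma * future - Q[(state, action)])
        state = nxt

# After training, the learned policy should always step toward the goal
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in (1, 2, 3)})
```

Running it prints the learned action for each non-terminal position, which converges to always stepping toward the goal.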

As researchers, we anticipate the problems that could happen if we deploy our AI algorithms in the real world, then simulate these problems, deal with them, and patch the security and safety issues, hopefully before we deploy the algorithms. As part of my research, I want to develop the foundational benchmarks and standards for other researchers, to encourage them to work on this very promising area that's still underdeveloped.

It seems like multi-agent AI could offer many benefits, from taking on tasks that might endanger humans to filling in gaps in the health care workforce. But what are the risks of multi-agent AI?

Sarra Alqahtani: When I started working on multi-agent reinforcement learning, I noticed when we add small changes to the environment or the task description for the agents, they will make mistakes. So they are not totally safe unless we train them in the same exact task again and again like a million times. Also, when we compromise one agent, and by saying compromise, I mean we assume there's an attacker taking over and changing the actions from the optimal behavior of that agent, the other agents will also be impacted severely, meaning their decision-making will be disrupted because one of the team is doing something unexpected.

We test our algorithms in gaming simulations because they are safe and we have clear rules for the games so we can anticipate what's going to happen if the agents made a mistake. The big risk is moving them from simulations to the real world. That's my research area, how to still keep them behaving predictably and to avoid making mistakes that could affect them and humans.

My main concern is not the sci-fi part of AI, that AI is going to take over or AI is going to steal our jobs. My main concern is how are we going to use AI and how are we going to deploy AI? We have to test and make sure our algorithms are understandable for us and for end users before we deploy it out there in the world.

You have received an NSF CAREER award to make multi-agent AI safer. What are you doing?

Sarra Alqahtani: Part of my research is to develop standards, benchmarks, baselines that encourage other researchers to be more creative with the technology to develop new, cutting-edge algorithms.

My research is trying to solve the transitioning of the algorithms from simulation to the real world, and that involves paying attention to the safety of the agents and their trustworthiness. We need to have some predictability of their actions, and then at the same time, we want them to behave in a safe manner. So we want to not only optimize them to do the task efficiently, we want them also to do the task safely for themselves, as equipment, and for humans.

I'll test my MARL algorithms on teams of drones flying over the Peruvian Amazonian rainforest to detect illegal gold mining. The idea is to keep the drones safe while they are exploring, navigating and detecting illegal gold mining activities while avoiding being shot by illegal gold miners. I work with a team of diverse expertise—hardware engineers, researchers in ecology and biology and environmental sciences, and the Sabin Center for Environment and Sustainability at Wake Forest.

There's a lot of hype about the future of AI. What's the reality? Do you trust AI?

Sarra Alqahtani: I do trust AI that I work on, so I would flip the question and say, do I trust humans who work on AI? Would you trust riding with an Uber driver in the middle of the night in a strange city? You don't know that driver. Or would you trust the self-driving car that has been tested in the same situation, in a strange city?

I would trust the self-driving car in this case. But I want to understand what the car is doing. And that's part of my research, to provide explanations of the behavior of the AI system for the end users before they actually use it. So they can interact with it and ask it, what's going to happen if I do this? Or if I put you in that situation, how are you going to behave? When you interact with something and you ask these questions and you get to understand the system, you'll trust it more.

I think the question should be, do we have enough effort going into making these systems more trustworthy? Do we spend more effort and time to make them trustworthy?

Provided by Wake Forest University

Wednesday, December 24, 2025

 

TECH


Global memory crunch pushes modders to experiment with DIY DDR5 RAM

The unprecedented memory chip shortage isn't going to end anytime soon, but that doesn't mean DDR5 RAM sticks have to cost as much as they are currently selling for. Modders are now eyeing the DIY route as a somewhat viable approach.

There has been a lot of talk surrounding the ongoing global DRAM and NAND crisis, and per recent reports, the situation will only get worse. IDC, for example, notes that current market analysis suggests the memory shortage "could persist well into 2027."

While DDR4 RAM is being revisited, there's a chance that manufacturers will take advantage of the current situation and seek ways to increase their profits even through last-gen memory technology. Modders, however, are seeing a different method as somewhat of a viable approach to lower RAM costs in their builds.

They are now pitching the idea of DIY RAM sticks. On the surface, it's just like upgrading GPUs by soldering extra VRAM onto the board. Of course, making your own memory sticks requires sourcing a PCB with the traces already laid out, plus the memory ICs themselves.

As Pro Hi-Tech highlights, these basic parts aren't hard to source. Chinese sellers are already offering ready-to-solder DDR5 PCBs. However, this DIY approach won't save users much money, at least not in its current state. An estimate calculated by Pro Hi-Tech and Viktor "Vik-On" puts a 16 GB stick at around 12,000 rubles, or about $151.

Compared to current market conditions, the estimated cost isn't much lower than what average 16 GB DDR5 sticks are going for (Transcend 16 GB DDR5-5600, currently $169.99 on Amazon). However, given how volatile the memory market is right now, this DIY approach could eventually offer more notable savings in the not-so-distant future.


TECH


Dish sells Starlink Internet with free hardware and installation

The $19 billion spectrum purchase that will allow Starlink to offer 5G Internet service once its V3 satellites are in orbit is bringing other perks. EchoStar, the parent company of satellite TV provider Dish, is now benefitting from the partnership with SpaceX.

When Sprint was forced to divest of its Boost Mobile subsidiary in order to get the merger with T-Mobile approved, its assets were scooped up by the satellite TV provider Dish.

At the time, Dish had big plans to use the newly acquired spectrum and become America's fourth largest carrier with its own 5G network. T-Mobile's merger with Sprint, however, put it so far ahead of rivals when it comes to 5G network deployment that even stalwarts like Verizon and AT&T fell behind.

Instead of building out a 5G network, Dish's parent company EchoStar decided to sell its spectrum: AT&T paid about $23 billion for roughly 50 MHz of low-band 600 MHz and mid-band 3.45 GHz spectrum, while Dish became a hybrid MVNO instead and started to decommission its towers.

SpaceX then paid $19 billion to Dish's parent EchoStar for roughly another 50 MHz of spectrum plus MSS licenses, so that it can boost the capacity and speed of its Starlink direct-to-cell satellite constellation 100x. The move will let it become a true T-Mobile, Verizon, or AT&T competitor, as it can beam 5G carrier service from space once it starts launching its big V3 satellites in 2026.

Starlink Internet price at Dish...While Dish's dreams of becoming the next big American cell phone carrier didn't pan out and it became a hybrid MVNO instead, the partnership with its spectrum licensees has now allowed it to offer both AT&T and Starlink services.

Dish is now selling Starlink Internet, promising no upfront hardware costs as well as installation help that its satellite service technicians are well positioned to provide. Otherwise, the Starlink fee still starts at $80/month for up to 200 Mbps, while the 400+ Mbps tier of the SpaceX satellite Internet service costs $40 more.

Starlink is thinking of offering its cheapest $40/month plan with the Mini dish that is currently discounted by 40% on Amazon, and Dish customers may be able to take advantage of that deal, too.

Tuesday, December 23, 2025

 

DIGITAL LIFE


Extremist groups are using AI to boost online propaganda

Researchers and digital security experts warn that extremist organizations have begun to exploit artificial intelligence as a central part of their online strategies. The technology allows them to adapt old speeches, translate ideological materials into various languages, and transform texts into audio and video with just a few clicks.

According to analyses published by The Guardian newspaper, this movement represents a leap in efficiency in extremist propaganda. Previously restricted to linguistic niches or specific platforms, these messages now circulate globally.

One of the most exploited advances is AI translation. Unlike older tools, current models can preserve the tone, emotion, and ideological charge of the original speeches. This allows extremist messages to reach new audiences without seeming like mechanical or artificial translations.

For analysts, this change reduces historical barriers that limited the growth of these groups. The same content can be quickly adapted to different countries, languages, and cultural contexts, keeping the original narrative almost intact.

Voice cloning amplifies emotional impact...In the neo-Nazi far-right, voice cloning has become one of the most popular tools. Software trained with old recordings can recreate the voices of historical leaders and authors, giving "new life" to speeches from the past.

According to the Global Network on Extremism and Technology (GNet), English versions of historical speeches have already accumulated millions of views on social media. Commercial voice synthesis services, such as ElevenLabs, are used to generate these audios from old files.

Jihadist groups are following a similar path. In encrypted applications, ideological texts are being converted into audios narrated by artificial voices, which makes consumption easier and more emotionally engaging.

From old manifestos to audiobooks...Another recurring use of artificial intelligence is the adaptation of historical texts to modern formats. One case cited by researchers involves Siege, an insurgency manual written by James Mason, which gained new circulation after being transformed into an audiobook with the help of AI.

According to the Counter Extremism Project, this repurposing extends the lifespan of extremist propaganda and facilitates consumption by new audiences, especially young people accustomed to audio content.

A growing challenge for governments and platforms...Authorities and experts see this scenario as a warning. Artificial intelligence accelerates a constant race between those trying to contain online extremism and those seeking to exploit new technological loopholes.

Understanding how these tools are being used is essential to developing more effective moderation policies. AI does not create extremist ideologies, but it is making their propaganda faster, cheaper, and much harder to contain — a challenge that is only expected to grow in the coming years.

Digital Look Magazine


DIGITAL LIFE


How social media algorithms turn attention into addiction — and why this is affecting mental health

Social media has become a central part of everyday life. It shapes how we get our information, how we relate to each other, and even how we spend our free time. Apps like TikTok and Instagram offer constant entertainment and immediate interaction, but they also raise a growing concern among experts: excessive use can foster digital addiction and affect mental health.

Recent analyses published by GQ magazine indicate that the problem lies not only in the time spent online, but in how these platforms are designed. At the heart of this debate are the algorithms — systems that decide what we see, when we see it, and how long we stay connected.

Studies conducted by researchers at Harvard University show that social media algorithms are optimized to maximize engagement. Likes, comments, short videos, and endless scrolling act as immediate rewards, activating brain circuits linked to pleasure and anticipation.

This mechanism reinforces the habit of repeatedly checking the phone, creating a constant sense of urgency: fear of missing out, a need for continuous updates, and difficulty putting the device down. Over time, this pattern can evolve into compulsive behavior, in which the person feels discomfort when disconnected.
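As a rough illustration of what "optimized to maximize engagement" means in practice, the sketch below ranks posts purely by a predicted-engagement score; the features and weights are invented for this example and do not describe any particular platform's system.

```python
# Hypothetical feed ranker (illustrative only): posts are ordered by a
# predicted-engagement score. Features and weights are invented, not taken
# from any real platform, but the objective is the point: maximize engagement.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_like_prob: float      # model's estimate the user will like/comment
    predicted_watch_seconds: float  # expected time spent on the post
    recency_hours: float            # how old the post is

def engagement_score(p: Post) -> float:
    # A weighted mix of signals that keep the user interacting and scrolling.
    return (
        3.0 * p.predicted_like_prob
        + 0.1 * p.predicted_watch_seconds
        + 1.0 / (1.0 + p.recency_hours)    # fresher posts get a small boost
    )

def rank_feed(posts):
    # The feed is simply the ordering that maximizes this score; nothing in
    # the objective accounts for the user's well-being, sleep, or mood.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm_news", 0.10, 20.0, 2.0),
    Post("outrage_clip", 0.55, 45.0, 1.0),
    Post("friend_photo", 0.40, 10.0, 6.0),
])
print([p.post_id for p in feed])   # most engaging content first
```

Whatever scores highest is what the user sees next; the scoring objective itself knows nothing about well-being.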

Reports from the World Health Organization indicate that prolonged use of social media — especially more than three hours a day — is associated with increased symptoms of anxiety, depression, sleep disorders, and difficulties in interpersonal relationships.

The problem is not just quantitative. Continuous exposure to social comparisons, negative news, and emotionally charged content can generate feelings of inadequacy, low self-esteem, and mental fatigue. Research also indicates that intense use before bed interferes with sleep quality, worsening tiredness and irritability the next day.

Despite the risks, experts agree that technology is not, in itself, the villain. Social networks facilitate communication, broaden access to information, create educational opportunities, and help form communities around common interests.

The challenge lies in balance. Harvard studies show that when social media use replaces in-person activities, rest, or offline leisure time, the impact on emotional well-being tends to be negative. Isolation, decreased productivity, and feelings of emptiness are some of the observed effects.

Digital health experts and the WHO emphasize that understanding how algorithms work is essential to regaining control over usage. Knowing that content is personalized to capture attention helps users adopt a more conscious approach.

Among the most cited recommendations are:

-Setting specific times to access social media, avoiding use immediately upon waking or before bed.

-Disabling non-essential notifications to reduce constant interruptions.

-Setting aside daily time for offline activities, such as physical exercise, reading, or in-person meetings.

-Using digital control tools that limit the time spent using applications.

These small changes help break the cycle of automatic use and promote a more balanced relationship with technology.

The value of boredom and reflection...A point often ignored is the importance of boredom. The tendency to fill any free moment with cell phones reduces the capacity for introspection and creativity. According to the International Association for Positive Psychology, moments of pause and mental silence contribute to emotional well-being and self-knowledge.

In addition, the emotional impact of the content consumed matters as much as the time spent using it. Experts from Stanford University recommend filtering the feed, muting or blocking profiles that generate anxiety, and prioritizing content that informs or inspires.

Digital education as a protection tool...For children and adolescents, the role of digital education is even more crucial. Organizations such as UNICEF advocate for teaching critical thinking from an early age, helping young people recognize misinformation, deal with online social pressures, and strengthen self-esteem in the digital environment.

Family dialogue and the joint definition of clear limits also prove effective in preventing abuse and addiction.

Ultimately, social networks can be allies or sources of stress. The difference lies in awareness. Understanding how algorithms work, setting boundaries, and diversifying experiences outside the screen are fundamental steps to enjoying the benefits of the digital age without compromising mental health.

Source: Infobae
