Sunday, December 28, 2025

 

DIGITAL LIFE


Generation Z created its own “digital detox”: turning off their cell phones and going back to writing letters, thankfully...

The history of communication seemed to follow a straight line towards total instantaneity. First came emails, then social networks, messaging apps, and finally, AI-generated automatic responses. But something curious is happening in 2025: a growing segment of Generation Z has decided to take a step back. Instead of another screen, they chose paper, pen, and time.

For decades, writing letters was the main way to maintain long-distance connections. With the arrival of the internet, this ritual became a relic. MSN Messenger, social networks, and instant messaging promised to shorten any distance—and they delivered. Today, photos, audios, and video calls cross the planet in seconds.

The problem is that speed has taken its toll. Messages accumulate, emails demand a response “yesterday,” and cell phones constantly warn that there is no more space. Communication has become abundant, but also noisy. For many young people, this constant flow has ceased to be liberating and has become exhausting.

The return of pen pals

It is in this context that pen pals—friends by correspondence—are resurfacing. According to data from Stamps.com, about 48% of Generation Z sends physical mail at least once a month. That figure undercuts the stereotype of a generation incapable of stepping away from the screen.

On social media, the phenomenon is visible. The hashtag #penpal has already surpassed 1.3 million posts on Instagram, while TikTok has become a showcase for calligraphy, wax seals, and handmade notebooks. It's not just about writing—it's about transforming the act into an experience.

Platforms like Pinterest have helped to boost this movement. Creative stationery has become a type of "performance art": special pens, personalized envelopes, decorative stamps, and hand lettering are part of the package.

In this scenario, the letter ceases to be just a means and becomes an object. A unique, physical item, impossible to copy or "forward." In an era where everything can be replicated infinitely, value lies precisely in the unrepeatable.

There is also a strong symbolic component. In times of artificial intelligence capable of producing thousands of texts in seconds, human handwriting gains the status of cultural resistance. A handwritten letter is not efficient — and that is exactly what makes it special.

It requires time, attention, and presence. It cannot be sent in a hurry or corrected after sending. It is the opposite of the logic of the attention economy, which competes for every second of our focus.

This rescue of paper also appears in hybrid formats. Platforms like FutureMe allow users to write messages today to be delivered years later. The proposal mixes technology and introspection, betting on a "realistic optimism": recording expectations, fears, and desires as a way to get through uncertain times.

For many young people, writing — whether for another person or for themselves in the future — has become an emotional survival strategy amidst climate, economic, and social crises.

A detox that doesn't reject technology

Interestingly, this movement isn't a total rejection of digital. Generation Z remains hyper-connected, but has begun to choose when and how to connect. The letter functions as a conscious pause, a space where there are no notifications, likes, or metrics.

Some things don't disappear—they just go on hold. In 2025, it seems that the sealed envelope is once again offering something that fiber optics couldn't deliver: silence, anticipation, and the feeling that someone dedicated real time to saying "I'm thinking of you."

Source: Xataka

Saturday, December 27, 2025


TECH


Rainbow Six Siege hack forces Ubisoft to shut down servers after players receive free R6 credits

Servers for Ubisoft’s popular first-person shooter game are currently offline on consoles and PC. A suspected hack granted some Rainbow Six Siege players huge bundles of in-game currency. Other players have witnessed mass bans, prompting Ubisoft to take action while investigating.

Rainbow Six Siege players with free time during the holidays are having to change their plans. An apparent hack has rewarded some gamers with massive amounts of R6 credits or rare skins. Other fans found their accounts banned after attempting to log in. The situation is so grim that Ubisoft has completely shut down the servers and the Rainbow Six Siege marketplace.

On Rainbow Six Siege X social media, the publisher first announced that it was investigating reports of unusual activity. Two hours later, it explained that both servers and the marketplace had been intentionally taken offline. At the moment, fans await more news, as the company looks for the cause of the mayhem.

The game’s service status page confirms limited or no connectivity on multiple platforms. Still, critics are asking for more transparency from Ubisoft. Even though the events have all the telltale signs of a hack, it has yet to be confirmed as a security breach. Other followers believe the drastic measure should have come earlier in the day.

The hacker may have had malicious intentions, but some players rejoiced after a sudden influx of R6 credits. While it’s tempting to spend that currency, veterans of the tactical shooter urge caution: players risk account bans if Ubisoft later decides they exploited the situation. In fact, they advise against signing into accounts at all for the time being.

New details about the Rainbow Six Siege security breach

Cybersecurity resource Vx-Underground has uncovered details about the Rainbow Six Siege hack. One group may have “exploited a Rainbow Six Siege service allowing them [to] ban players, modify inventory, etc.” It’s estimated that it handed out a staggering $339,960,000,000,000 to gamers.

Meanwhile, another group executed a database hack that could have wide-ranging implications. The individuals may have access to the source code of multiple Ubisoft games.

The last major threat to the shooter franchise was a 2023 cyberattack. Thieves targeted 900GB worth of internal data before Ubisoft halted their efforts. Fortunately, they were unsuccessful in acquiring information from Rainbow Six Siege accounts.

Adam Corsetti

 

DIGITAL LIFE


Fake videos made by AI: new tools show how easy it is to manipulate public perception, creating an alternate 'reality' in seconds

A TikTok video from October appeared to show a woman being interviewed by a television reporter about the use of food assistance. The women weren't real. The conversation never happened. The video was generated by artificial intelligence. Still, people seemed to believe it was a real conversation about selling food assistance for cash, which would constitute a crime.

In the comments, many reacted to the video as if it were real. Despite subtle warning signs, hundreds began labeling the woman a criminal—some with explicit racism—while others attacked government assistance programs, just as a national debate intensified around President Donald Trump's planned cuts to the program.

Videos like this fake interview, created with OpenAI's new app, Sora, show how public perception can be easily manipulated by tools capable of producing an alternate reality from a series of simple commands.

In the two months since Sora's arrival, misleading videos have skyrocketed on TikTok, X, YouTube, Facebook, and Instagram, according to experts who monitor this type of content. The deluge has raised concerns about a new generation of misinformation and fabrications.

Most major social media companies have policies requiring disclosure of the use of artificial intelligence and broadly prohibit content intended to deceive. But these safeguards have proven utterly insufficient in the face of the technological leap represented by OpenAI's tools.

Video of a mother holding the hand of a newborn baby — Photo: Reproduction/Sora

While many videos are silly memes or cute—but fake—images of babies and pets, others aim to incite the kind of hostility that often marks online political debate. They have already appeared in foreign influence operations, such as Russia's ongoing campaign to demoralize Ukraine.

Researchers who track misleading uses say it is now up to companies to do more to ensure people know what is real and what is not.

“Could they do a better job moderating misinformation content? Yes, clearly they are not doing that,” said Sam Gregory, executive director of Witness, a human rights organization focused on the threats of technology. “Could they be more proactive in seeking out AI-generated information and labeling it themselves? The answer is also yes.”

The video about reselling food stamps was one of several that circulated as the standoff over the U.S. government shutdown dragged on, leaving actual beneficiaries of the Supplemental Nutrition Assistance Program (SNAP) in the United States struggling to feed their families.

Fox News ran a similar fake video, treating it as an example of public outrage over alleged abuses of the food stamp program, in an article that was later removed from the site. A Fox spokesperson confirmed the removal but did not provide further details.

The hoaxes have been used to ridicule not only poor people but also Trump. A video on TikTok showed the White House with what appeared to be a narration in Trump's voice reprimanding his cabinet for releasing documents involving Jeffrey Epstein, the discredited financier convicted of sex crimes.

According to NewsGuard, a company that monitors misinformation, the video — which was not labeled as AI — was viewed by more than 3 million people in just a few days.

Until now, platforms have relied primarily on creators to inform viewers that published content is not real — but they don't always do so. And while there are ways for platforms like YouTube and TikTok to detect that a video was made with artificial intelligence, this isn't always immediately signaled to viewers.

"They should have been prepared," said Nabiha Syed, executive director of the Mozilla Foundation, the technology security organization behind the Firefox browser, referring to social media companies.

The companies responsible for the AI tools claim they are trying to make it clear to users what content is computer-generated. Sora and Google's competing tool, Veo, incorporate a visible watermark into the videos they produce.

Sora, for example, adds the label "Sora" to each video. Both also include invisible, machine-readable metadata that indicates the origin of each fake.

The emergence of realistic videos has been a boost for disinformation, hoaxes, and foreign influence operations. Sora videos have already appeared in recent Russian disinformation campaigns on TikTok and X.

One of them, with its watermarks crudely obscured, sought to exploit a growing corruption scandal among Ukraine's political leadership. Others created fake videos of frontline soldiers crying.

Two former members of a now-defunct State Department office that combated foreign influence operations, James P. Rubin and Darjan Vujica, argued in a new article in Foreign Affairs that advances in AI are intensifying efforts to undermine democratic countries and divide societies.

They cited AI videos in India that attacked Muslims to inflame religious tensions. One recent one, on TikTok, appeared to show a man preparing biryani rice in the street with water from a sewer. Although the video bore Sora's watermark and the creator claimed it was AI-generated, it was widely shared on X and Facebook, disseminated by accounts commenting as if it were real.

“They are creating things, and will continue to create things, that make the situation much worse,” Vujica said in an interview, referring to the new generation of AI-generated videos. “The barrier to using deepfakes as part of disinformation has crumbled, and once disinformation spreads, it’s difficult to correct the record.”

mundophone

Friday, December 26, 2025

 

TECH


AMD Ryzen 9 9950X3D2 with 192MB L3 cache spotted in multiple benchmarks

The long-awaited "double X3D" CPU, which AMD once denied would ever happen, has finally surfaced in new leaks over at Geekbench and PassMark. We'll cut to the chase: neither benchmark result is particularly interesting, as both show performance nearly identical to the extant Ryzen 9 9950X3D in these specific benchmarks. However, the confirmation of this chip's existence is the more interesting detail, as it hadn't appeared in leaks until now.

As expected, the most significant improvement is in cache: total capacity climbs from 144 MB to 208 MB, with the L3 pool alone reaching 192 MB (96 MB per chiplet), a record for a consumer processor. The secret, as rumored, lies in stacking the company's 3D V-Cache on both CPU chiplets in the 9950X3D2, instead of just one as in the current generation.

This can be seen in Geekbench: the tool reports the L3 as 96 MB x2, consistent with a stacked cache die sitting on each of the two chiplets. The extra stacks could also explain the lower clock speed, since the stacked cache is thermally sensitive and still limits how hard the cores beneath it can be pushed.

Regarding benchmark performance, the scores are as expected. In Geekbench, the new chip scores 3,456 points in single-core and 21,062 points in multi-core. In PassMark, the total is 71,585 points. As the comparisons within the tests themselves show, we are in margin-of-error territory relative to the 9950X3D, with differences of around 2%, and up to 8% behind the conventional 9950X.

The lack of more significant gains is not surprising, since the extra cache usually only shines in games and specific professional tasks, such as file compression and decompression. We will have to wait for the chip's debut and the availability of independent reviews to know what the upgrade will really be able to provide.

To back up a moment, we're going to assume that you're familiar with AMD's 3D V-Cache. If you're not, check out our past coverage. People have been hoping for a Ryzen 9 processor with V-Cache on both CCDs for some time despite the somewhat questionable merits of such a processor. In terms of consumer workloads, gaming is the only thing that really benefits from the extra cache, and a single-CCD configuration with one stack of 3D V-Cache is typically the best performer in that arena.

Still, having doubled-up V-Cache does mean that you can enjoy a full sixteen-core CPU without ever having to worry about whether your game is running on the "correct" CCD. With the extant Ryzen 9 7950X3D and Ryzen 9 9950X3D, only half of the cores benefit from the 3D V-Cache, and so it can happen that games end up on the "wrong" cores, drastically reducing performance. That wouldn't be an issue with this chip, and it may also provide real performance benefits in tasks like code compilation, database analysis, computational fluid dynamics, and finite element analysis.

The leaks clearly come out of China, as the Geekbench result (at least) was run on a "Galaxy Microsystems" motherboard, better known as GALAX. Interestingly, GALAX hasn't actually released any motherboards based on the B850 chipset yet, so this is actually a motherboard leak, too. Nothing in the result is particularly surprising; the chip exactly matches the specifications that were leaked back in October by chi11eddog (@g01d3nm4ng0 on Xwitter.) Geekbench doesn't list TDP, but given that the rest of the specifications match up we wouldn't be surprised if this chip tops 200W for the first time since AMD's Centurion.

The PassMark result (hat tip @x86deadandback) is less informative, but it does confirm that the "X3D2" part doesn't seem to lose any performance versus the currently available chip. The peak boost clock appears to be 100 MHz lower, at 5.6 GHz instead of 5.7 GHz, but the effect on performance is within the margin of error, and the Ryzen 9 9950X3D2 actually beats the older chip in multi-core, suggesting that the higher TDP is helping out there.

It will be fascinating to test this chip, even if we don't expect the results to be all that different for most users. It will be the sort of product where you need to find the use case, rather than the sort of thing that accelerates everything. For the few users who can really make use of such a chip, it will likely be a game-changer, but for everyone else, it's most likely going to be an expensive halo product for those with deep pockets.

mundophone

 

DIGITAL LIFE


Who should get paid when AI learns from creative work?

As generative AI systems become more deeply woven into the fabric of modern life—drafting text, generating images, summarizing news—debates over who should profit from the technology are intensifying.

A new paper in the Journal of Technology and Intellectual Property argues that the current copyright system is ill-equipped to handle a world in which machines learn from—and compete with—human creativity at unprecedented scale.

Frank Pasquale, professor of law at Cornell Tech and Cornell Law School, and co-authors Thomas W. Malone, professor of information technology at MIT Sloan School of Management, and Andrew Ting, professorial lecturer in law at George Washington University, describe a legal innovation called "learnright," a new intellectual property protection that would give creators the right to license their work specifically for use in AI training.

The basic idea behind learnright was first proposed by Malone in 2023. The new paper shows how the concept could actually work legally and economically.

Legal and ethical challenges of AI training

The researchers say the idea stems from a growing imbalance. Today's largest AI systems are trained on vast datasets scraped from the internet—millions of books, articles, songs, photographs, artworks and posts. Some of the authors of those works are now suing AI companies, arguing that using their copyrighted work to train a commercial model without permission is a violation of the law.

Yet court rulings on the issue remain unsettled, especially around whether training counts as fair use. While some judges have signaled skepticism toward AI companies, others have suggested that training may indeed be lawful if considered analogous to a human reading a book.

"The ongoing legal uncertainty here creates problems for both copyright owners and technologists," Pasquale said. "Legislation that mandates bargaining between them would promote fairer sharing in AI's bounty."

The stakes are hard to ignore. Artists complain that their signature styles can now be mimicked in seconds. Journalists are seeing readers peel away as chatbots summarize the news without sending traffic back to publishers. And white-collar workers in law, design, marketing and coding now worry that the next AI upgrade could automate portions of their jobs using the very work they once produced.

Arguments for a new intellectual property right

The authors argue that this is not simply a legal puzzle but a moral one. From a utilitarian perspective, they say, society benefits when creative work continues to be produced—and that requires maintaining incentives for humans to keep making it.

From a rights-based standpoint, they argue that tech companies vigorously protect their own intellectual property while dismissing the value of those whose work powers the models.

From the perspective of virtue ethics (which focuses on the type of character and habits that are essential to human well-being), they suggest that flourishing creative communities depend on norms of attribution and respect.

"Learnright law provides an elegant way of balancing all these competing perspectives," said Malone. "It provides compensation to the people who create the content needed for AI systems to work effectively. It removes the legal uncertainties about copyright law that AI companies face today. In short, it addresses a growing legal problem in a way that is simpler, fairer, and better for society than current copyright law."

How learnright could work in practice

The proposal would not replace copyright. Rather, it would add a seventh exclusive right to the six already granted to creators, specifically addressing machine learning. Just as copyright law recognizes special protections for digital audio transmissions, the authors say, Congress could extend a new protection for submitting a work to an AI training process.

Under such a regime, companies building generative AI tools would license the right to learn from specific datasets—much as some already do with news archives or stock photo libraries. The authors say that market negotiations would naturally set fair rates, and that clearinghouses or collective licensing organizations could replicate successful models from the music industry.

Critics may argue that such a right could slow innovation or burden startups. But the authors counter that unrestrained training could ultimately undermine the very creative ecosystem that AI depends on. They point to research suggesting that feeding models their own outputs over time can lead to "model collapse," reducing quality. Without a continually refreshed supply of human-generated art, journalism and scholarship, they say, AI's progress could stagnate.

Policy implications and the path forward

The paper arrives as lawmakers signal growing interest in regulating generative AI. A learnright, the authors argue, offers a clear path for policymakers: a middle ground that neither bans training nor leaves creators uncompensated.

"At present, AI firms richly compensate their own management and employees, as well as those at suppliers like NVIDIA," says Pasquale. "But the copyrighted works used as training data are also at the foundation of AI innovation. So it's time to ensure its creators are compensated as well. Learnright would be an important step in this direction."

Provided by Cornell University

Thursday, December 25, 2025

 

TECH


Multi-agent AI could change everything—if researchers can figure out the risks

You might have seen headlines sounding the alarm about the safety of an emerging technology called agentic AI.

That's where Sarra Alqahtani comes in. An associate professor of computer science at Wake Forest University, she studies the safety of AI agents through the new field of multi-agent reinforcement learning (MARL).

Alqahtani received a National Science Foundation CAREER award to develop standards and benchmarks to better ensure that multi-agent AI systems will continue to work properly, even if one of the agents fails or is hacked.

AI agents do more than sift through information to answer questions, like the large language model (LLM) technology behind tools such as ChatGPT and Google Gemini. AI agents think and make decisions based on their changing environment—like a fleet of self-driving cars sharing the road.

Multi-agent AI offers innumerable opportunities. But failure could put lives at risk. Here's how Alqahtani proposes to solve that problem.

What's the difference between the AI behind ChatGPT and the multi-agent AI you study?

Sarra Alqahtani: ChatGPT is trained on a huge amount of text to predict what the next word should be, what the next answer should be. It's driven by human writing. For AI agents that I build—multi-agent reinforcement learning—they think, they reason and they make decisions based on the dynamic environment around them. So they don't only predict, they predict and they make decisions based on that prediction. They also identify the uncertainty level around them and then make a decision about that: Is it safe for me to make a decision or should I consult a human?

AI agents, they live in certain environments and they react and act in these environments to change the environments over time, like a self-driving car. ChatGPT still has some intelligence, but that intelligence is connected to the text, predictability of the text, and not acting or making a decision.

You teach teams of AI agents through a process called multi-agent reinforcement learning, or MARL. How does it work?

Sarra Alqahtani: There are a team of AI agents collaborating together to achieve a certain task. You can think of it as a team of medical drones delivering blood or medical supplies. They need to coordinate and make a decision, on time, what to do next—speed up, slow down, wait. My research focus is on building and designing algorithms to help them coordinate efficiently and safely without causing any catastrophic consequences to themselves and to humans.

Reinforcement learning is the learning paradigm that is actually behind even how we humans learn. It trains the agent to behave by making mistakes and learning from those mistakes. So we give them rewards and we give them punishments if they do something good or bad. Rewards and punishments are mathematical functions, or numeric values. If you do something good as an agent, I'll give you a positive number. That tells the agent's brain that's a good thing.
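To make that idea concrete, here is a minimal, hypothetical Python sketch (illustrative only, not code from Alqahtani's research) of an agent nudging its value estimates toward numeric rewards and punishments; the two actions, the reward values, and the learning rate are assumptions chosen purely for the example.

import random

# Illustrative sketch of reward-driven learning; not taken from any published MARL system.
# The agent repeatedly picks one of two actions; the assumed environment returns
# +1 (a reward) for the "good" action and -1 (a punishment) for the "bad" one.
values = {"speed_up": 0.0, "slow_down": 0.0}   # the agent's current value estimates
learning_rate = 0.1
epsilon = 0.2   # fraction of the time the agent explores instead of exploiting

def reward(action):
    # Assumption for this toy example: slowing down is the safe, rewarded choice.
    return 1.0 if action == "slow_down" else -1.0

for step in range(1000):
    # Occasionally explore at random; otherwise pick the action valued highest so far.
    if random.random() < epsilon:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # Nudge the value estimate toward the reward that was actually received.
    values[action] += learning_rate * (reward(action) - values[action])

print(values)   # the estimate for "slow_down" ends near +1, "speed_up" near -1

After enough trials, the positive numbers steer the agent toward the rewarded action; that same feedback loop, scaled up to teams of agents, is what Alqahtani's MARL algorithms coordinate safely.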

Us researchers, we anticipate the problems that could happen if we deploy our AI algorithms in the real world and then simulate these problems, deal with them, and then patch the security and safety issues, hopefully before we deploy the algorithms. As part of my research, I want to develop the foundational benchmarks and standards for other researchers to encourage them to work on this very promising area that's still underdeveloped.

It seems like multi-agent AI could offer many benefits, from taking on tasks that might endanger humans to filling in gaps in the health care workforce. But what are the risks of multi-agent AI?

Sarra Alqahtani: When I started working on multi-agent reinforcement learning, I noticed when we add small changes to the environment or the task description for the agents, they will make mistakes. So they are not totally safe unless we train them in the same exact task again and again like a million times. Also, when we compromise one agent, and by saying compromise, I mean we assume there's an attacker taking over and changing the actions from the optimal behavior of that agent, the other agents will also be impacted severely, meaning their decision-making will be disrupted because one of the team is doing something unexpected.

We test our algorithms in gaming simulations because they are safe and we have clear rules for the games so we can anticipate what's going to happen if the agents made a mistake. The big risk is moving them from simulations to the real world. That's my research area, how to still keep them behaving predictably and to avoid making mistakes that could affect them and humans.

My main concern is not the sci-fi part of AI, that AI is going to take over or AI is going to steal our jobs. My main concern is how are we going to use AI and how are we going to deploy AI? We have to test and make sure our algorithms are understandable for us and for end users before we deploy it out there in the world.

You have received an NSF CAREER award to make multi-agent AI safer. What are you doing?

Sarra Alqahtani: Part of my research is to develop standards, benchmarks, baselines that encourage other researchers to be more creative with the technology to develop new, cutting-edge algorithms.

My research is trying to solve the transitioning of the algorithms from simulation to the real world, and that involves paying attention to the safety of the agents and their trustworthiness. We need to have some predictability of their actions, and then at the same time, we want them to behave in a safe manner. So we want to not only optimize them to do the task efficiently, we want them also to do the task safely for themselves, as equipment, and for humans.

I'll test my MARL algorithms on teams of drones flying over the Peruvian Amazonian rainforest to detect illegal gold mining. The idea is to keep the drones safe while they are exploring, navigating and detecting illegal gold mining activities while avoiding being shot by illegal gold miners. I work with a team of diverse expertise—hardware engineers, researchers in ecology and biology and environmental sciences, and the Sabin Center for Environment and Sustainability at Wake Forest.

There's a lot of hype about the future of AI. What's the reality? Do you trust AI?

Sarra Alqahtani: I do trust AI that I work on, so I would flip the question and say, do I trust humans who work on AI? Would you trust riding with an Uber driver in the middle of the night in a strange city? You don't know that driver. Or would you trust the self-driving car that has been tested in the same situation, in a strange city?

I would trust the self-driving car in this case. But I want to understand what the car is doing. And that's part of my research, to provide explanations of the behavior of the AI system for the end users before they actually use it. So they can interact with it and ask it, what's going to happen if I do this? Or if I put you in that situation, how are you going to behave? When you interact with something and you ask these questions and you get to understand the system, you'll trust it more.

I think the question should be, do we have enough effort going into making these systems more trustworthy? Do we spend more effort and time to make them trustworthy?

Provided by Wake Forest University

Wednesday, December 24, 2025

 

TECH


Global memory crunch pushes modders to experiment with DIY DDR5 RAM

The unprecedented memory chip shortage isn't going to end anytime soon, but that doesn't mean DDR5 RAM sticks can't be had for less than current retail prices. Modders are now looking at the DIY route as a somewhat viable approach.

There has been a lot of talk surrounding the ongoing global DRAM and NAND crisis, and per recent reports, the situation will only become worse. IDC, for example, notes that current market analysis suggests that the memory shortage situation "could persist well into 2027."

While DDR4 RAM is being revisited, there's a chance that manufacturers will take advantage of the current situation and seek ways to increase their profits even through last-gen memory technology. Modders, however, are seeing a different method as somewhat of a viable approach to lower RAM costs in their builds.

They are now pitching the idea of DIY RAM sticks. On the surface, it's much like upgrading GPUs by soldering extra VRAM onto the board. Of course, making your own memory sticks requires sourcing a bare PCB with the memory traces already laid out, plus the memory ICs themselves.

As Pro Hi-Tech highlights, these basic parts aren't challenging to source. Chinese sellers are already offering ready-to-solder DDR5 PCBs. However, this DIY approach won't save users much money, at least not in its current state. An estimate calculated by Pro Hi-Tech and Viktor "Vik-On" puts a 16 GB stick at around 12,000 rubles, roughly $151.

Compared to the current market condition, the estimated cost isn't much lower than what the average 16 GB DDR5 sticks are going for (Transcend 16GB DDR5 5600 MHz curr. $169.99 on Amazon). However, given how volatile the current memory market is, this DIY approach could eventually offer more notable savings in the not-so-distant future.
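As a rough sanity check on those figures, here is a small, hypothetical Python calculation using only the numbers quoted above (the exchange rate is implied by the article's own 12,000-ruble / $151 estimate, and every price here will drift with the market):

# Back-of-the-envelope comparison of DIY versus retail cost, using the article's estimates.
diy_cost_rub = 12_000            # Pro Hi-Tech / Vik-On estimate for a DIY 16 GB DDR5 stick
rub_per_usd = 12_000 / 151       # exchange rate implied by the article (~79.5 RUB per USD)
retail_usd = 169.99              # example retail price for a 16 GB DDR5-5600 stick

diy_usd = diy_cost_rub / rub_per_usd
savings = retail_usd - diy_usd
print(f"DIY: ${diy_usd:.2f}, retail: ${retail_usd:.2f}, "
      f"savings: ${savings:.2f} ({savings / retail_usd:.0%})")
# roughly $151 versus $169.99, i.e. about an 11% saving before tools, labor, and failed solder jobs

In other words, the math only starts to favor the soldering iron if retail prices keep climbing while bare PCBs and loose ICs stay cheap.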
