Saturday, April 25, 2026


TAG HEUER


TAG Heuer Formula 1 Solargraph arrives in five pastel shades

TAG Heuer unveiled the pastel collection of the TAG Heuer Formula 1 Solargraph 38mm, announced on April 21, 2026 in La Chaux-de-Fonds, Switzerland. The five new references expand the Formula 1 line with a distinctive color palette, combining the Calibre TH50-00 solar movement with cases in composite materials and sandblasted steel. Exclusive online availability begins on April 28, with general sales starting on May 1, 2026.

2025 was a major year for TAG Heuer. Not only did the brand make a triumphant return as the official timekeeper of Formula 1, it also rolled back the clock and reintroduced the Formula 1 Solargraph, an instant hit with TAG Heuer enthusiasts and collectors, and a springboard for new fans to enter the fold. A year on, the watch world has just about gotten used to the colorful F1 being back in the catalogue, making this the perfect moment for TAG Heuer to stir things up once more. So, after plenty of stirring and playing with Polylight color options at the factory, the Formula 1 Solargraph returns for 2026, this time letting a pastel palette set the tone without softening the attitude.

The welcome, slightly beefier proportions of last year's model carry over to this new pastel collection: 38mm in diameter and 9.9mm thick, in two variations. Two models feature a sandblasted stainless steel case, and we'll get to why in a moment, while the other three have cases made from TAG Heuer's proprietary bio-based polyamide composite, Polylight.

These three are finished in pastel blue, beige, and pink, and feature case-matching rubber straps and bidirectional-rotating Polylight bezels. For the sake of durability, all of the F1 Solargraphs come equipped with screw-down crowns and casebacks, ensuring a 100-metre water resistance.

The TAG Heuer Formula 1 Solargraph is powered by the Calibre TH50-00, the brand's proprietary solar movement. Two minutes of exposure to direct sunlight is enough to power the watch for an entire day. A full charge, achieved after less than 40 hours of light exposure, ensures up to ten months of autonomy in total darkness.

After a complete stop, the watch resumes operation with just ten seconds of light exposure. The battery has an estimated lifespan of 15 years. TAG Heuer does not detail in the press release the technical specifications that differentiate the Calibre TH50-00 from solar movements from manufacturers such as Citizen or Seiko, which have been on the market for decades at significantly lower prices.

Launched in 1986, the Formula 1 line was the first to bear the TAG Heuer name, and it introduced composite-material cases and bold colors to a sector dominated by conventional metals. The new pastel collection revives that original chromatic spirit, reinterpreting it in soft tones: beige, pastel pink, pastel blue, pastel green, and lavender blue.

The choice of color palette is not merely aesthetic. By opting for colors that appeal to a younger audience and a female clientele, TAG Heuer is deliberately broadening the spectrum of potential buyers for the Formula 1 line, without abandoning the visual references that define it: the Mercedes pointer, the applied shields, and the bidirectional rotating bezel.

Five references, two price segments...The collection is divided into two distinct groups, with different materials, finishes, and prices.

The three references in TH Polylight cases, available in pastel blue, beige, and pastel pink with a matching rubber strap, are positioned in the entry-level segment of the collection. They are the most accessible and best evoke the original spirit of the 1980s Formula 1 line.

The two references in sandblasted steel, with a three-row bracelet and hour markers set with eight VS diamonds, elevate the design to a more formal level. The pastel green model and the lavender blue model with a pastel pink minute track combine luminosity and understated sophistication. 

According to the official TAG Heuer statement, the stated production runs are global; the document does not specify distribution by market.

The exclusive online sale begins on April 28, 2026, on the official TAG Heuer website. General sales, in boutiques and authorized points of sale, begin on May 1, 2026. The reference prices by market are as follows: 1,850 to 2,650 CHF in Switzerland, 1,650 to 2,350 GBP in the United Kingdom and 1,950 to 2,800 USD in the United States.

The TAG Heuer Formula 1 Solargraph occupies a precise position in the brand's catalog: above the accessibility of large-volume watchmaking groups, but below the barrier of the Carrera and Monaco segments. With prices between 1,950 and 2,800 USD and limited production runs, TAG Heuer creates the conditions for controlled demand without compromising the perception of exclusivity. The integration of the Solargraph movement into a collection historically associated with color and youth culture is consistent with the brand's strategy. It remains to be seen whether this technology will be extended to other lines in the catalog in the medium term, or whether it will remain a distinctive element of the Formula 1 line.




LINUX


Ubuntu 26.04 LTS: Canonical sets a new security bar

Canonical released Ubuntu 26.04 LTS, codenamed “Resolute Raccoon”, on April 23, 2026, marking the 11th extended support edition of the most widely deployed Linux distribution in enterprise infrastructures and cloud services. The release establishes full disk encryption anchored in TPM, post-quantum cryptography by default, and system tools rewritten in Rust as base configurations, with security becoming the default system state, without the need for manual administrator intervention.

For years, Linux distributions treated security as a layer to be activated during installation and rarely revisited. With Ubuntu 26.04 LTS, Canonical changes this logic: the new Security Center application transforms system protections into an inspectable and manageable surface after deployment, allowing administrators to review the TPM encryption status, Secure Boot configuration, and recovery mechanisms without the need for re-imaging.

Jon Seager, Vice President of Ubuntu Engineering at Canonical, defined resilience and memory safety as the structuring priorities of this version. The technical argument is straightforward: most critical vulnerabilities in system software originate from memory management errors, and migrating to Rust eliminates this category of flaws by the very design of the language.

Among the GNOME changes made specifically for Ubuntu 26.04, the left sidebar remains. The Ubuntu Dock, as it's called, now has an opaque background, dropping the former translucent effect.

As always, there's also a new wallpaper package, which includes the default image referencing the codename of the new operating system version.

Another notable change is Showtime taking over as the system's default media player. The player's minimalist look makes it easier to use and tends to contribute to stability. Showtime was introduced in GNOME 49, but only now becomes the default in Ubuntu.

There are no other major visual changes; however, attentive eyes will notice that folders are no longer predominantly gray and have returned to orange. Those who dislike the change can alter the color scheme in the system settings.

Another change, a removal rather than an addition, is the retirement of the Software & Updates tool, which let users update software and resources such as drivers but is now considered obsolete and insecure by Ubuntu's developers. The utility can still be used, but only if installed manually.

The App Center, the distribution's official software manager, now officially handles Debian packages (.deb), not just Snaps.

This may please users who prefer to work directly with .deb packages, following the traditional approach. Snaps, it's worth noting, are a Canonical packaging format with the advantage of bundling each app's dependencies, but they can be heavier and slower to start, among other potential drawbacks.

Base tools rewritten in Rust...The most visible replacement falls on utilities present in virtually all Linux systems. sudo-rs and uutils coreutils, Rust implementations of tools like sudo, ls, cp, and mv, become the default options. The original implementations in GNU coreutils and classic sudo remain available as a compatibility and fallback alternative, preserving operational continuity in environments with established dependencies.
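For admins auditing a fleet during the transition, the active implementation can usually be told apart by its `--version` banner. A minimal Python sketch, assuming typical banner formats from GNU coreutils and uutils (the exact wording may vary by release, so verify against your installed packages):

```python
# Illustrative sketch: classify an `ls --version` banner as GNU or uutils.
# The sample banners below are assumptions about typical output formats,
# not guaranteed strings; check them against your installed release.

def classify_coreutils(version_banner: str) -> str:
    """Return which coreutils implementation produced a --version banner."""
    first_line = version_banner.splitlines()[0].lower()
    if "uutils" in first_line:
        return "uutils (Rust)"
    if "gnu coreutils" in first_line:
        return "GNU (C)"
    return "unknown"

# Hypothetical sample banners for illustration:
print(classify_coreutils("ls (GNU coreutils) 9.5"))       # GNU (C)
print(classify_coreutils("ls (uutils coreutils) 0.2.2"))  # uutils (Rust)
```

The same check works for `sudo --version`, since sudo-rs identifies itself in its banner as well.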

Rust ensures memory safety by design, which eliminates vulnerability classes such as buffer overflows and use-after-free. For system administrators, the practical result is a reduced attack surface in the base utilities themselves, without altering the workflow. A relevant caveat: these implementations have a shorter production history than their classic C equivalents, which justifies close monitoring during the initial production deployment cycles.

Post-quantum cryptography and hardware encryption...OpenSSH 10.2, included in this release, enables post-quantum hybrid key exchange mlkem768x25519-sha256 by default on all SSH connections, requiring no additional configuration. DSA support has been entirely removed, including DSA host key generation. Apache 2.4.66 disables TLS 1.0 and TLS 1.1 by default; Nginx 1.28.2 now only accepts TLS 1.2 and TLS 1.3, in accordance with RFC 8996, which deprecates older protocols.
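As a pre-upgrade sanity check, an administrator might scan existing sshd_config files for an explicitly pinned key-exchange list that omits the hybrid algorithm, or for lingering DSA references. A minimal Python sketch; the config snippet is hypothetical, and the parser handles only the plain `Keyword value` directive form of standard OpenSSH syntax:

```python
# Illustrative sketch: scan an sshd_config body for the post-quantum hybrid
# key exchange and for lingering DSA references before an upgrade.
# The config text below is hypothetical.

PQC_KEX = "mlkem768x25519-sha256"

def audit_sshd_config(config_text: str) -> dict:
    findings = {"pqc_kex_pinned": False, "dsa_referenced": False}
    for raw in config_text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        keyword, _, value = line.partition(" ")
        if keyword.lower() == "kexalgorithms" and PQC_KEX in value:
            findings["pqc_kex_pinned"] = True
        if "dsa" in value.lower():
            findings["dsa_referenced"] = True
    return findings

sample = """
KexAlgorithms mlkem768x25519-sha256,curve25519-sha256
HostKey /etc/ssh/ssh_host_ed25519_key
"""
print(audit_sshd_config(sample))
```

A config that pins KexAlgorithms without the hybrid entry would silently opt out of the new default, which is exactly the case worth flagging before rollout.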

Full disk encryption with TPM support has moved from experimental to general availability. The mechanism links encryption keys to the hardware's TPM chip and the Secure Boot state, making data extraction impossible without physical access to the original equipment. Canonical explicitly documents known incompatibilities, such as Absolute/Computrace, and kernel module requirements for certain storage configurations.

Ubuntu 26.04 LTS integrates full support, both in the host and guest systems, for AMD SEV-SNP and Intel TDX. These technologies allow running virtual machines with encrypted memory and processor-level integrity protection, being particularly relevant for public cloud providers, regulated industries, and artificial intelligence workloads with data sovereignty requirements.

In identity management, SSSD now runs as a dedicated unprivileged user, abandoning execution as root. OpenLDAP operates under AppArmor confinement, with configurable PBKDF2 iteration control for password derivation. The release also introduces authd, an open-source authentication service that integrates Ubuntu systems with cloud identity providers, including Microsoft Entra ID and Google IAM, using OpenID Connect and supporting multifactor authentication.

Linux 7.0, GNOME 50, and the end of Xorg...This version includes the Linux 7.0 kernel and GNOME 50, completing the transition to Wayland as the only supported graphical environment. This is the first Ubuntu LTS without an Xorg session as a fallback; Wayland brings per-monitor scaling, native gestures, and the elimination of screen tearing. For most users the transition is seamless; enterprise environments with legacy software or hardware lacking full Wayland support should evaluate compatibility before migrating.

Canonical Livepatch, the service for applying kernel patches without system restarts, extends to Arm64 servers for the first time. For organizations running Ubuntu on Arm64 hardware, critical kernel updates will now be applied without service interruption. The official repositories will also include NVIDIA CUDA and AMD ROCm, the two dominant ecosystems in artificial intelligence and machine learning computing.

A platform for the next ten years...Standard support for this LTS release extends until April 2031, with extended coverage until 2036 for Ubuntu Pro subscribers. LTS cycles are the foundation upon which enterprises, governments, and cloud providers build infrastructure for consecutive years, amplifying the impact of every design decision made in this release.

The combination of post-quantum cryptography, hardware-bound cryptography, and system utilities with native memory security positions Ubuntu 26.04 LTS as a platform built for today's threat landscape, from ransomware to state espionage. Large-scale production over the next five years will tell if the maturity of sudo-rs and uutils coreutils in a full LTS cycle lives up to the confidence Canonical has placed in them.

FAQ:

-What is Ubuntu 26.04 LTS “Resolute Raccoon”?

Ubuntu 26.04 LTS, codenamed “Resolute Raccoon”, is the 11th extended support version of Canonical's Linux distribution, released on April 23, 2026. It includes standard security support until 2031 and extended coverage until 2036 with Ubuntu Pro. It is distinguished by its focus on security by default, with TPM encryption, post-quantum cryptography, and system tools rewritten in Rust.

-How does full disk encryption with TPM work in Ubuntu 26.04 LTS?

The encryption links cryptographic keys to the hardware's TPM chip and the Secure Boot state. Data is only accessible on the original equipment, with the correct boot configuration. Management is done through the Security Center, allowing you to add PINs or verify Secure Boot without reinstalling the system, even after deployment.

-What is the impact of removing Xorg in Ubuntu 26.04 LTS?

With this release, Ubuntu no longer includes Xorg sessions as an alternative to Wayland. For most users, the transition is seamless. Enterprise environments with applications or hardware dependent on Xorg should assess compatibility before migrating, as there is no native way back within this LTS release.

mundophone

Friday, April 24, 2026

 

TECH


JEDEC LPDDR6 roadmap signals major shift to memory-centric computing

Remember when LPDDR memory was strictly meant for thin-and-light laptops and smartphones? Those days are officially over. While we originally saw JEDEC unleash the foundational LPDDR6 standard in July of last year to fuel faster mobile and AI devices, the standards group is already looking ahead to the next evolution. Today, JEDEC previewed a roadmap that completely reshapes LPDDR6, extending the standard heavily into datacenters and high-performance accelerated computing.

We've been tracking this memory's blistering potential for a while, from our initial look at the massive speed boosts revealed for next-gen DDR6 and LPDDR6 back in 2024, to Innosilicon shipping the first commercial LPDDR6 IP at an insane 14.4Gbps per pin back in January. This latest update isn't just about raw speed, though; instead, it's about fundamentally changing how your PC's memory handles data.

The newly planned features JEDEC announced include massive capacities up to 512GB density, a new narrower x6 interface, support for processing-in-memory (PIM), the SOCAMM2 form factor, and a flexible metadata carve-out.

Starting from the top, JEDEC expects to unlock staggering densities beyond the current maximums of LPDDR5 and LPDDR5X, targeting up to 512 GB. This massive scale-up is designed specifically to feed the ever-growing memory capacity requirements of AI training and inference workloads, of course. Considering those requirements are why you can't buy RAM at a reasonable price right now, that's a very good thing. To actually pull off those higher capacities, JEDEC is introducing a narrower per-die interface. Moving to non-binary interface widths (a new x6 sub-channel mode alongside x12, plus the move from x16 to x24) allows manufacturers to cram more dies into a single package. This means higher memory capacities per component and per channel.
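The capacity logic can be shown with back-of-the-envelope arithmetic (ours, for illustration, not official JEDEC figures): slicing a fixed-width channel into narrower per-die sub-channels multiplies the number of die attachment points it can serve.

```python
# Back-of-the-envelope illustration (not official JEDEC figures): how a
# narrower per-die sub-channel lets more dies share one x24 channel.

CHANNEL_WIDTH = 24  # bits per LPDDR6 channel (x24)

for sub_channel_width in (12, 6):  # x12 sub-channel vs the new x6 mode
    dies_per_channel = CHANNEL_WIDTH // sub_channel_width
    print(f"x{sub_channel_width} sub-channels: "
          f"{dies_per_channel} dies per x24 channel")
```

Doubling the attachment points per channel is one lever, alongside per-die density increases, for scaling toward the roadmap's 512 GB target.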

PIM is where the "memory-centric" part of our headline comes from. JEDEC is nearing the completion of an LPDDR6 Processing-in-Memory (PIM) standard. Essentially, by baking processing capabilities directly into the memory itself, this tech reduces the need to constantly shuttle data back and forth between the RAM and the CPU. The result is higher inference performance and much lower power consumption. This is pretty bleeding-edge stuff, but it's not completely novel; companies like Samsung and SK hynix have been talking about PIM for years now.

Also, JEDEC is actively developing an LPDDR6 SOCAMM2 module standard. This ensures the compact, serviceable module form factor has a clear upgrade path from today's LPDDR5X SOCAMM2 modules, which are currently used exclusively in massive datacenters and GPU clusters like NVIDIA's NVL72 racks. Hopefully it means that this form factor comes to the desktop as well, so we can keep getting socketed, upgradable memory without sacrificing LPDDR performance. Finally, another feature largely aimed at server farms: JEDEC is giving datacenters the option to balance their user capacity and metadata needs based on specific reliability requirements. The goal here is to implement these stability features while minimizing any hit to peak data throughput.

With rumors swirling that chips as early as AMD's upcoming Medusa Halo (expected to launch early next year) might leverage LPDDR6 for huge bandwidth gains, this JEDEC roadmap makes perfect sense. LPDDR is no longer just "low power" memory; rather, it is becoming a foundational building block for the next generation of high-capacity, insanely fast PCs and servers.

In the announcement, JEDEC highlights the in-memory processing architecture, PIM (processing-in-memory), which lets the RAM chip itself perform some calculations instead of constantly handing the task to the processor. This avoids shuttling information back and forth across the board, increasing speed and significantly reducing energy consumption.

To achieve this, significant physical modifications were made to the hardware. The memory interface was widened from x16 to x24. In practice, this change frees up room for more dies within the same component, and the end result is memory with much greater capacity in a smaller space.

Furthermore, the design adopts the new, more compact SOCAMM2 module format, which eases maintenance and allows quick replacement in machines already using older-generation modules.

Plenty of space for heavy tasks... With all these adjustments, the new memory standard can reach an impressive 512 GB of capacity, a huge leap entirely focused on handling the mountain of data required by complex tasks today. Electronics manufacturers will also have more freedom to configure memory and find the perfect balance between speed and information security for each product.

JEDEC's Board of Directors Chairman, Mian Quddus, noted that the organization is still finalizing the last technical details before publishing the official standard. The market now awaits factory testing to see how all this evolution behaves in the real world, and the expectation is that the technology will arrive in the next generation of AI servers.

Other companies have already released news around the LPDDR6 standard: SK hynix promises 33% more speed in phones, while Samsung and Qualcomm are already working on the next generation, LPDDR6X, with up to 1 TB for AI chips. As noted above, 14.4 Gbps-per-pin LPDDR6 IP has already shipped.

Quddus urged observers to "stay tuned for more details" as the subcommittee evaluates these features for final publication. We'll be keeping a close eye on this as LPDDR6 gets ready to take over the hardware space!



DIGITAL LIFE


'Clearly me': AI drama accused of stealing faces

Christine Li is a model and influencer, but not an actor, so when she saw herself playing a cruel character in a Chinese microdrama she felt bewildered, then angry and afraid.

The 26-year-old is one of two people who told AFP their likenesses were cast without consent in the AI-generated show "The Peach Blossom Hairpin," which ran on Hongguo, a major microdrama app owned by TikTok parent company ByteDance.

Li plans to sue the drama makers and the platform, highlighting new legal and regulatory gray areas created by artificial intelligence.

"I was genuinely shocked. It was clearly me," said Li, who lives in Hangzhou in eastern China.

"It was so obvious that they used a specific set of photos I took two years ago" and had posted on social media, she said.

Microdramas are ultra-short, online soap operas hugely popular in China and elsewhere.

When Li's fans alerted her to the series, she was horrified to find her digital twin shown slapping women and mistreating animals.

"I also felt a deep fear. I kept wondering what kind of person would do something like this," Li said.

Hongguo hosts thousands of free, bite-sized shows—both live-action and AI-generated—whose episodes are two or three minutes long.

As of October, the platform had around 245 million monthly active users, according to data cited by Wenwen Han, president of the Short Drama Alliance.

A Hongguo statement in early April said it had taken the series down because the producers had violated platform rules and contractual obligations.

"Sleazy" antagonist...AI's ability to mimic real people has sparked global concern for actors' jobs, and over such deepfakes being used for scams and propaganda.

Li and a man who says he was portrayed as her AI husband in the series, which became a hit last month on Hongguo, spoke out online about their separate unwelcome discoveries.

But even as their stories sparked a public outcry about AI ethics, AFP saw that "The Peach Blossom Hairpin" kept running for days before its removal, with the disputed characters quietly replaced.

The man, a stylist specialized in traditional Chinese clothing and make-up, had posted photos of himself in costume on the Instagram-like Xiaohongshu app.

Like Li, he was upset by the "ugly" portrayal of his likeness as a "sleazy" antagonist in the show.

"Will it have an impact on me, on my job, on my future work opportunities?" said the man, who asked to use the pseudonym Baicai.

To keep audiences hooked, microdramas are often full of shocking, larger-than-life moments.

Li and Baicai both showed AFP their original photos and the characters in "The Peach Blossom Hairpin," which bore a strong resemblance.

[Caption: A photo illustration shows phones displaying screenshots of Chinese Hanfu stylist Baicai's social media post (left) and the AI microdrama (right) accused of stealing his likeness.]

[Caption: A photo illustration taken in Hong Kong shows phones displaying screenshots of a video from Chinese model and influencer Christine Li accusing an AI microdrama of stealing her likeness without consent.]

Legal risk...For low-budget AI microdramas, Chinese regulations say platforms must be the primary checkpoint for potentially dodgy content.

If they do not carry out mandatory content reviews, the videos will be forcibly taken down, according to the National Radio and Television Administration.

If the platforms were aware of any infringement but failed to act on it, parties affected can alert China's cyberspace authorities which can impose administrative penalties, according to Zhao Zhanling, a partner at Beijing Javy Law Firm.

Hongguo said in a second statement this month it would continue to strengthen how it reviews content and how it authorizes creators, among other steps.

It said it had dealt with 670 AI microdramas that violated regulations, with most taken down, and warned it would crack down on repeated breaches.

When approached for comment, parent company ByteDance referred AFP to the two Hongguo statements.

Li and Baicai say they need more information from Hongguo to confirm the identity of the drama's creator—with two companies potential candidates.

One is linked to a verified account on the Chinese version of TikTok that also published the series. Another is listed as the drama's producer on an official Chinese filing system.

AFP contacted both firms but received no response.

Using AI to slash costs may be tempting in the fast-growing, multibillion-dollar microdrama market.

But featuring someone in a demeaning way without permission "may constitute an infringement of both portrait rights and reputation rights," said Li's lawyer Yijie Zhao, from Henan Huailv Law Firm.

"Associated with controversy"...National regulations require microdrama makers to register to obtain a license—a step made mandatory for AI-generated animations from this month.

But producers could remain in the shadows by registering temporary outfits, Zhao said, while some allegedly use overseas servers to hide.

In 2024, a Beijing court ordered a company to apologize and pay compensation to a celebrity after its AI software enabled users to produce a virtual persona using his photos and name that could exchange intimate messages.

But lawyers told AFP that compensation for plaintiffs like Li likely won't amount to much due to the limited commercial value of an ordinary likeness.

Li worries that the saga may cost her opportunities in the modeling industry, as she is now "associated with controversy."

Baicai has not launched legal action, but hopes to see more measures from regulators and platforms to protect people like him.

"There are probably plenty of cases with unknown victims," he said.

© 2026 AFP 



Thursday, April 23, 2026


DIGITAL LIFE


How we got to the point where extreme power fits on your desktop

Two decades ago, reaching the pinnacle of computing required gigantic structures. Today, some of that same power is within reach at home — and the comparison reveals an impressive silent transformation.

The history of technology rarely advances linearly. In many cases, it takes leaps that we only notice when we look back. What once seemed unattainable, restricted to laboratories and large corporations, is beginning to emerge in everyday contexts. And few comparisons illustrate this change as well as the evolution of high-performance computing in the last two decades.

When power meant monumental scale...In the early 2000s, reaching the pinnacle of global computing was an achievement reserved for a select few. One of the most emblematic machines of this period was the IBM Blue Gene/L, a system that redefined the limits of performance at the time.

Installed in highly controlled environments, this supermachine occupied entire rooms and required a complex infrastructure to operate. Its power came from a massive architecture: more than 30,000 processors working together, distributed across thousands of interconnected nodes. By 2004 standards, its more than 70 teraflops represented the most advanced technology on the planet.

This kind of capacity wasn't just impressive—it was essential for cutting-edge scientific research, such as physics simulations, climate studies, and molecular modeling. Access, however, was extremely limited. High costs, intense energy consumption, and technical requirements made this type of technology inaccessible to the general public.

At that time, imagining that a significant fraction of that power could fit into a home computer seemed simply impossible. But the history of technology often defies this kind of prediction.

The silent turning point that changed everything...Fast forward two decades, and the scenario is radically different. A single modern GPU, like the NVIDIA GeForce RTX 4090, is already capable of achieving—and in some cases surpassing—the raw performance of that supermachine in specific tasks.

This board, which fits inside a standard computer case, can exceed 80 teraflops in parallel processing operations. What's most impressive isn't just the number, but the context: we're talking about a component accessible to consumers, not a multi-million dollar scientific facility.
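Putting the two headline figures cited here side by side makes the compression of scale concrete. A small calculation using only the numbers in the text (peak figures for very different workloads, so treat this as symbolism, not a benchmark):

```python
# Rough comparison of the headline figures cited in the text: a 2004-era
# Blue Gene/L system versus a single modern consumer GPU. Peak numbers for
# very different workloads; this is symbolism, not a benchmark.

blue_gene_l_tflops = 70.0        # "more than 70 teraflops" (2004)
blue_gene_l_processors = 30_000  # "more than 30,000 processors"
rtx_4090_tflops = 80.0           # "can exceed 80 teraflops"

per_processor_gflops = blue_gene_l_tflops / blue_gene_l_processors * 1000
ratio = rtx_4090_tflops / blue_gene_l_tflops

print(f"Per Blue Gene/L processor: ~{per_processor_gflops:.2f} GFLOPS")
print(f"Single GPU vs. the whole 2004 system: ~{ratio:.2f}x")
```

Roughly two gigaflops per 2004 processor against a single board that outruns all thirty thousand of them combined.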

Of course, the comparison has nuances. The old supercomputer was designed for highly distributed workloads and complex simulations, while the modern GPU excels in parallel tasks such as graphics, artificial intelligence, and data processing. Still, the symbolism is undeniable.

What once required thousands of components can now be partially replicated by a single piece of hardware.

This transformation didn't happen by chance. It's the result of several simultaneous revolutions: transistor miniaturization, advances in parallel architectures, improvements in energy efficiency, and constant evolution in manufacturing processes.

Furthermore, GPUs have ceased to be just tools for gaming. They have become central to areas such as artificial intelligence, data science, rendering, and simulation. In other words, they have not only become faster—they have come to play much broader roles.

Far beyond computers...This compression of scale isn't exclusive to high-performance computing. It reflects a broader trend in the technology industry.

Over the years, we've seen storage media evolve from floppy disks to tiny devices with thousands of times the capacity. Optical discs have given way to ultra-fast SSDs. Complex photographic equipment has, in part, been absorbed into smartphones.

The logic repeats itself: less space, more power.

Interestingly, this evolution also brings new challenges. Modern GPUs, for example, have grown so much in performance—and also in physical size—that they don't always fit in every computer case. The advancement continues, but not without its own limitations.

Still, the big picture is clear. What once represented the absolute limit of technology is now, in part, within reach of any enthusiast.

And this is not just a historical curiosity.

It is a direct sign of how the future of computing will be built: more accessible, more compact, and potentially much more powerful than we can imagine today.

The ability to fit extreme computing power—capable of advanced AI, high-end gaming, and complex simulations—onto a desktop is the result of decades of exponential transistor miniaturization, the shift from sequential CPU processing to parallel GPU computing, and the rise of dedicated AI hardware. This transformation moved computing from room-sized mainframes to powerful, compact personal units. 

Here is how we arrived at this point:

1. The Foundation: Moore’s Law and Miniaturization (1960s-2000s)...Transistor Shrinking: The journey began with replacing vacuum tubes with transistors in the 1950s, followed by integrated circuits in the 1960s. Moore’s Law predicted that the number of transistors on a chip would double approximately every two years, shrinking their size while increasing power.

Microprocessors: The 1970s brought single-chip microprocessors (e.g., Intel 4004), allowing PCs to enter homes.

Nanometer Scaling: Modern transistors have shrunk to single-digit-nanometer process nodes, allowing billions of transistors to fit on a chip no larger than a fingernail, drastically reducing power consumption while boosting speed. 
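The compounding in Moore's Law is easy to underestimate, so here is a small idealized projection: starting from the Intel 4004's roughly 2,300 transistors in 1971 and doubling every two years. This is a textbook simplification; the real cadence has slowed in recent years.

```python
# Idealized Moore's Law projection: transistor counts doubling every two
# years from the Intel 4004's ~2,300 transistors (1971). A simplification;
# the real-world cadence has slowed in recent years.

START_YEAR, START_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> int:
    doublings = (year - START_YEAR) // DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,} transistors")
```

Even this naive model lands in the tens of billions by the 2020s, which is the right order of magnitude for today's largest chips.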

2. The shift: CPUs to GPUs (2000s-2010s)...Parallel processing: While Central Processing Units (CPUs) handle general tasks sequentially, Graphics Processing Units (GPUs) were developed to handle thousands of tasks simultaneously (parallel processing).

Gaming driving innovation: The demand for high-resolution graphics and 3D games in the 1990s and 2000s drove the rapid advancement of GPUs (e.g., Nvidia's GeForce 256).

CUDA cores: In the 2000s, Nvidia's introduction of the CUDA platform enabled GPUs to be used for general-purpose computing, not just graphics, paving the way for AI and complex simulations on desktop machines. 

3. The new era: specialized AI hardware (2020s-Present)...Neural processing units (NPUs): Modern desktops now feature specialized AI accelerators called Neural Processing Units (NPUs) or Tensor cores. These are designed specifically to run AI workloads locally rather than relying on the cloud.

Local AI capabilities: This allows for instantaneous response for AI tasks like content creation, real-time translation, and voice assistance, enhancing privacy and reducing latency.

"Ultimate performance" tuning: Operating systems like Windows now feature hidden "Ultimate Performance" modes that eliminate CPU idle states, keeping hardware at maximum performance levels consistently, which is useful for workstation-level tasks on a desktop. 

4. Enabling technologies...Solid-state drives (SSDs): Faster, smaller storage replaced traditional hard drives.

64-bit processors & DDR memory: Improvements in memory bandwidth and processing capacity allowed for larger, more complex applications.

UEFI: Replaced the older BIOS, improving boot times and hardware management. 

mundophone


DIGITAL LIFE


How AI bias can creep into online content moderation

A University of Queensland study has shown large language models (LLMs) used in AI content moderation may be prone to subtle biases that undermine their neutrality. A team led by data scientist Professor Gianluca Demartini from UQ's School of Electrical Engineering and Computer Science used persona prompting to test the tendency of AI chatbots to encode and reproduce political biases, and found significant behavioral shifts.

Testing chatbots through political personas...The research team asked six LLMs—including vision models—to moderate thousands of examples of hateful text and memes through the lens of different ideologically diverse AI personas. The results are published in the journal ACM Transactions on Intelligent Systems and Technology.

Professor Demartini said the exercise revealed that AI political personas, even without significantly altering overall accuracy, were prone to introducing consistent ideological biases and divergences in chatbot content moderation judgments.

"It has already been established that persona conditioning can shift the political stance expressed by LLMs," Professor Demartini said. "It demonstrates a need to rigorously examine the ideological robustness of AI systems used in tasks where even subtle biases can affect fairness, inclusivity and public trust."

How the ideological personas were built...The AI personas used in the study were drawn from a database of 200,000 synthetic identities ranging from schoolteachers to musicians, sports stars and political activists. Each persona was put through a political compass test to determine its ideological positioning, and the 400 personas holding the most "extreme" positions were then asked to identify hateful online content.

Professor Demartini said his team found that assigning a persona to an LLM chatbot altered its precision and recall in line with ideological leanings, rather than changing the overall accuracy of hate speech detection.
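That a persona can shift precision and recall while leaving accuracy untouched is easy to see with confusion-matrix counts. The numbers below are made up for illustration, not taken from the study:

```python
def metrics(tp, fp, fn, tn):
    """Standard classification metrics from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

# Hypothetical judgments on 100 posts (50 hateful, 50 benign).
neutral = metrics(tp=40, fp=10, fn=10, tn=40)  # balanced errors
persona = metrics(tp=45, fp=15, fn=5, tn=35)   # flags more aggressively

# Both score 80% accuracy, but the persona trades precision for recall:
print(neutral)  # accuracy 0.8, precision 0.80, recall 0.80
print(persona)  # accuracy 0.8, precision 0.75, recall 0.90
```

A persona that over-flags content aimed at one group catches more of it (higher recall) at the cost of more false positives (lower precision), yet a headline accuracy figure hides the shift entirely.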

Ideological cohesion in larger language models...However, the team found LLMs—especially larger models—exhibited strong ideological cohesion and alignment between personas from the same ideological "region."

Professor Demartini said this suggested larger AI models tend to internalize ideological framings, as opposed to smoothing them out or 'neutralizing' them.

"As LLMs become more capable at persona adoption, they also encode ideological 'in-groups' more distinctly," Professor Demartini said. "On politically targeted tasks like hate speech detection, this manifested as partisan bias, with LLMs judging criticism directed at their ideological in-group more harshly than content aimed at their opponents."

In-group protection and defensive bias...Professor Demartini said larger LLMs also displayed more complex patterns, including a tendency towards defensive bias.

"Left personas showed heightened sensitivity to anti-left hate, and right-wing personas were more sensitive to anti-right hate speech," Professor Demartini said. "This suggests that ideological alignment not only shifts detection thresholds globally, but also conditions the model to prioritize protection of its 'in-group' while downplaying harmfulness directed at opposing groups."

Why neutral oversight still matters...Researchers said the project highlighted that it was crucial for high-stakes content moderation tasks to be overseen by neutral arbiters so that fairness and public trust are maintained and the health and well-being of vulnerable demographics are protected.

"People interact with AI programs trusting and believing they are completely neutral," Professor Demartini said. "In content moderation the outputs of these models reflect embedded ideological biases that can disproportionately affect certain groups, potentially leading to unfair treatment of billions of users."

AI bias in content moderation isn't usually the result of a single "glitch." Instead, it typically "creeps in" through several interconnected layers of the system—from the data used for training to the way the software is designed and how it interacts with users. 

1. Training Data: "Garbage In, Garbage Out"...The most common entry point for bias is the data used to train the AI. 

Historical biases: If a platform’s past moderation decisions (made by humans) were biased, the AI will learn and reproduce those same patterns. For example, if certain groups were historically flagged more often, the AI may incorrectly "learn" that their content is inherently more problematic.

Lack of diversity: Models trained primarily on data from Western, English-speaking users often fail to understand the cultural nuances or slang of other groups, leading to "context blindness".

Proxy variables: Even if an AI isn't explicitly told a user's race or gender, it can use "proxies" like ZIP codes, dialects (e.g., AAVE), or specific cultural references to unintentionally target certain demographics. 
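The "historical bias" and "proxy variable" mechanisms above combine in a simple way: a model fit to skewed past decisions reproduces the skew. The toy below uses an entirely hypothetical moderation log in which a dialect marker acts as a proxy, and shows that a naive model learning per-group flag rates simply inherits the disparity:

```python
from collections import defaultdict

# Hypothetical historical log: (dialect_marker, human_flagged).
# The labels are deliberately skewed against group "A".
history = (
    [("A", True)] * 30 + [("A", False)] * 70
    + [("B", True)] * 10 + [("B", False)] * 90
)

# A naive "model" that learns per-group flag rates from past decisions...
counts = defaultdict(lambda: [0, 0])  # group -> [flags, total]
for group, flagged in history:
    counts[group][0] += flagged
    counts[group][1] += 1
flag_rate = {g: flags / total for g, (flags, total) in counts.items()}

# ...reproduces the historical skew: group A is flagged 3x as often.
print(flag_rate)  # {'A': 0.3, 'B': 0.1}
```

No protected attribute was given to the model; the proxy feature alone is enough to carry the bias forward, which is why auditing inputs as well as outputs matters.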

2. Algorithmic design: technical choices...How a model is built and fine-tuned can inadvertently bake in bias. 

Ideological alignment: Recent studies (April 2026) show that Large Language Models (LLMs) used for moderation can adopt "ideological personas". This leads to a "defensive bias" where the AI prioritizes protecting its own "in-group" while being less sensitive to harm directed at opposing groups.

Translation errors: Many global platforms use AI to translate content before moderating it. These systems often strip away the very context—like irony, satire, or reclaimed slurs—needed to judge if a post is truly harmful. 

3. Human & interaction bias...Human decisions at every stage influence the final outcome. 

Subjective labeling: The humans who "label" the training data (deciding what counts as "hate speech" or "harassment") bring their own personal and cultural biases to the task.

Automation bias: Human moderators who oversee AI-flagged content may become overly reliant on the machine's judgment, assuming it is "objective" and failing to catch its errors.

Weaponized reporting: Bad actors sometimes exploit "reactive moderation" by mass-reporting legitimate content from marginalized groups, tricking the AI into flagging it as "abusive" based on the volume of reports. 

Impact on marginalized groups...Because of these issues, marginalized communities often face "over-moderation". 

LGBTQ+ Content: Valid health or identity discussions may be incorrectly flagged as "sexually explicit" because of rigid or biased nudity and policy filters.

Activists: Posts related to movements like Black Lives Matter or Indigenous rights have been mistakenly removed due to automated systems failing to distinguish political speech from "violence" or "hate speech".


Provided by University of Queensland

Wednesday, April 22, 2026


DIGITAL LIFE


Half of the major digital platforms fail in transparency regarding advertising and user data, according to an international study

A survey conducted by researchers from Brazil and the United Kingdom reveals that social networks operate with low levels of transparency, hindering independent investigations and the fight against disinformation. In the Brazilian scenario, limitations are even more evident and worrying.

The influence of digital platforms on the flow of information has never been so evident — and, at the same time, so difficult to examine closely. A new international study sheds light on this paradox by showing that, although these companies collect enormous volumes of data on users, they offer little visibility into their own practices.

The research, entitled Data Not Found, was conducted by NetLab, from the Federal University of Rio de Janeiro, in partnership with the Minderoo Centre for Technology & Democracy, in the United Kingdom. The objective was to analyze, in an unprecedented way, how large digital platforms make data on content and advertising available.

Fifteen platforms operating in Brazil, the European Union, and the United Kingdom were evaluated, including popular names such as TikTok, Instagram, Facebook, YouTube, Kwai, and Telegram. Comparing these regions allows us to understand how different regulatory contexts influence access to information.

The European Union, for example, has one of the most advanced legislations in the world, notably the Digital Services Act (DSA), which establishes stricter transparency rules. The United Kingdom, on the other hand, adopts a more flexible approach, based on specific assessments by regulatory authorities. Brazil, in turn, still faces a developing regulatory landscape.

Limited transparency and incomplete data...To measure the level of openness of the platforms, the researchers used the Social Media Transparency Index, which assesses factors such as availability, quality, and accessibility of data.

The results point to a widespread problem: in virtually all the platforms analyzed, the data is incomplete, difficult to access, and poorly standardized. This includes flaws in ad libraries, lack of clarity on campaign financing and targeting, as well as obstacles to tracking essential information.
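The study's actual scoring rubric is not reproduced here, but a composite index over criteria such as availability, quality, and accessibility can be sketched as follows. The criterion names come from the article; the 0-2 rating scale and the platform scores are invented for illustration:

```python
# Hypothetical composite transparency index: each criterion rated 0-2,
# normalized to a 0-100 scale.
CRITERIA = ("availability", "quality", "accessibility")

def transparency_index(scores):
    """scores: dict mapping each criterion to a rating in 0..2."""
    assert set(scores) == set(CRITERIA)
    return round(100 * sum(scores.values()) / (2 * len(CRITERIA)), 1)

platform_a = transparency_index({"availability": 2, "quality": 1, "accessibility": 1})
platform_b = transparency_index({"availability": 1, "quality": 0, "accessibility": 0})
print(platform_a, platform_b)  # 66.7 16.7
```

Averaging across criteria is the simplest possible aggregation; a real index would likely weight criteria and document how each rating is assigned, which is itself a transparency question.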

In Brazil, the situation is even more critical. Some tools available in other countries simply do not exist here, or function in a more limited way. This significantly reduces the ability of independent researchers to analyze the impact of these platforms.

An opaque system by nature...According to the study, the lack of transparency is not isolated, but structural. Even when mechanisms for accessing data exist, they are often inconsistent and unreliable.

This scenario creates a clear imbalance: while platforms accumulate detailed information about their users, the internal workings of these companies remain virtually inaccessible to the public. In practice, the platforms themselves define what can or cannot be investigated about them.

Many of these transparency initiatives end up functioning more as image strategies than as real tools for accessing information. The result is an appearance of openness that does not translate into useful data for analysis.

Impacts for research, regulation, and society...The opacity of platforms has direct consequences for different sectors. Researchers face difficulties in validating studies and investigating social impacts, while regulatory authorities lack information to conduct audits or open investigations.

This prevents, for example, the effective mapping of disinformation campaigns, abusive advertising practices, or the exposure of vulnerable audiences—such as children and adolescents—to harmful content.

Without reliable and accessible data, it becomes almost impossible to understand the true extent of these problems or to develop effective public policies to address them.

The global debate on the power of digital platforms has reinforced the importance of transparency as a central element in ensuring the integrity of information. Organizations such as the UN already recognize that access to quality data is essential for accurate diagnoses of the digital environment.

However, the study highlights that it is not enough to simply release data: it is essential that it be complete, standardized, and truly useful for analysis. Currently, many available tools offer limited resources, hindering deeper investigations.

Furthermore, even in regions with advanced legislation, such as the European Union, access to data still largely depends on the decision of the platforms themselves—which represents a significant limitation.

An urgent and global challenge...Faced with this scenario, researchers advocate for the creation of more robust and effective regulations, especially in countries like Brazil. At the same time, they suggest that the platforms themselves adopt higher standards of transparency on a voluntary basis.

The lack of uniformity across regions also exacerbates inequalities: while some researchers gain access to data, others—especially in the Global South—remain excluded, even when dealing with more vulnerable contexts.

Ultimately, transparency cannot be treated as a corporate choice. In a world increasingly dependent on digital platforms for information and public debate, it needs to be seen as an essential condition for protecting the collective interest.

Research indicates that around half or more of major digital platforms (including Meta, Google, TikTok, X, and others) fall short in providing adequate transparency regarding advertising and user data. A 2024 analysis found that opacity is the norm rather than the exception, particularly regarding how user data is used for targeting and the lack of accessible advertising repositories for independent researchers. 

Key findings on ad transparency failures:

Lack of repositories: Several major platforms, including Telegram, TikTok, X (Twitter), and Spotify, have failed to provide functional, comprehensive, and public advertising repositories in many regions, notably in the Global South.

Pinterest: Pinterest has faced significant scrutiny and legal complaints regarding its transparency in user data tracking and advertising practices, particularly within the European Union. Critics, including digital rights advocacy group noyb (None Of Your Business), have accused the platform of violating GDPR by engaging in "secret tracking" and failing to provide adequate information on how data is shared with third parties.

Inadequate data access: Even where libraries exist, such as Meta’s Ad Library, the provided data is often limited, providing insufficient information on ad targeting, total spend, or reach.

"Transparency-washing": Researchers argue that platforms often employ "transparency-washing," creating limited, self-regulated tools to avoid stricter, mandatory, and more comprehensive oversight.

API restrictions: Social media platforms are increasingly restricting access to their Application Programming Interfaces (APIs), which are essential for independent data collection, effectively blocking researchers from auditing their systems.

Ephemeral ads: Ephemeral (short-lived) ads are often missed by transparency tools, creating significant blind spots for monitoring disinformation or illegal content. 

Source: The Conversation
