Tuesday, November 25, 2025


DIGITAL LIFE


Researchers create a version of DeepSeek without Chinese censorship

Researchers at the Spanish company Multiverse Computing have announced DeepSeek R1 Slim, a modified version of the powerful Chinese model that is roughly 55% smaller. The main novelty is the removal of the content restrictions imposed by the Chinese developers, allowing the system to answer questions on sensitive topics.

The project uses mathematical techniques inspired by quantum computing to compress the model and edit its biases with precision. The goal is to offer a way to cut computational cost and energy consumption, two frequent bottlenecks in the development of new generative AIs.

According to MIT Technology Review, Multiverse applied an approach based on "tensor networks," a mathematical framework widely used in quantum physics to manipulate very large arrays of numbers efficiently. The technique allowed the scientists to build a detailed "map" of the correlations inside the original model.
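For readers who want a feel for the idea, the sketch below uses a truncated SVD, the simplest relative of a tensor network, to factor a toy weight matrix and keep only its strongest correlations. It illustrates the principle, not Multiverse's actual method, and every name and size in it is made up.

# A minimal sketch of the idea behind tensor-network compression, using a
# truncated SVD (the simplest low-rank factorization) as a stand-in for the
# more elaborate decompositions Multiverse actually uses. All names and
# sizes here are illustrative, not taken from the real model.
import numpy as np

def compress_layer(W, rank):
    # Factor the weight matrix and keep only the strongest correlations.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]      # left factor, shape (out, rank)
    B = Vt[:rank, :]                # right factor, shape (rank, in)
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 1024))   # toy stand-in for one weight matrix
A, B = compress_layer(W, rank=128)

original = W.size
compressed = A.size + B.size
print(f"parameters kept: {compressed / original:.0%}")  # 25% in this toy case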

With this mapping, the researchers could identify and remove the censorship layers that align the model with the values required by Chinese regulations. In practice, this removed the blocks that prevented the AI from discussing certain topics, such as references to President Xi Jinping.
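The article does not describe the editing procedure itself, so the following sketch borrows a published stand-in technique sometimes called "abliteration": estimate the direction in activation space associated with refusals and project it out of a weight matrix. Everything here, including the random "refusal direction," is hypothetical.

import numpy as np

def ablate_direction(W, d):
    # Build the projector (I - d d^T) and apply it to W, so the layer
    # can no longer write activations along the "refusal" direction d.
    d = d / np.linalg.norm(d)
    return W - np.outer(d, d) @ W

# In the real technique, d is estimated from activation differences between
# refused and answered prompts; here it is random, purely for illustration.
rng = np.random.default_rng(1)
W = rng.normal(size=(512, 512))
d = rng.normal(size=512)

W_edited = ablate_direction(W, d)
d_unit = d / np.linalg.norm(d)
print(np.allclose(d_unit @ W_edited, 0.0))  # True: no output along d remains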

After compressing and editing the parameters, the researchers fine-tuned the model to keep the quality of its responses close to that of the original.
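A common way to do this "healing" step is knowledge distillation, where the compressed model is trained to match the original's output distribution. The toy PyTorch loop below sketches that idea on the assumption that something like distillation was used; the two linear layers merely stand in for the teacher (original) and student (compressed) models.

import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(64, 32)   # stands in for the original model
student = torch.nn.Linear(64, 32)   # stands in for the compressed model
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(16, 64)         # stand-in for a batch of training inputs
    with torch.no_grad():
        target = F.log_softmax(teacher(x), dim=-1)
    pred = F.log_softmax(student(x), dim=-1)
    # The KL divergence pulls the compressed model's outputs back
    # toward those of the original.
    loss = F.kl_div(pred, target, log_target=True, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()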

To validate the approach, the team put DeepSeek R1 Slim through a test with about 25 questions on restricted subjects. The responses were graded by OpenAI's GPT-5, which judged the new model's answers to be factual and comparable to those of Western systems.
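In code, such an LLM-as-judge evaluation could look like the sketch below, which uses OpenAI's Python client. The judging prompt, the question list, and the call to the compressed model are hypothetical; only the client call pattern is real.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = ["..."]  # the ~25 restricted prompts are not reproduced here

def judge(question, answer):
    resp = client.chat.completions.create(
        model="gpt-5",  # model name as reported in the article
        messages=[{
            "role": "user",
            "content": f"Question: {question}\nAnswer: {answer}\n"
                       "Is this answer factual and uncensored? Reply YES or NO.",
        }],
    )
    return resp.choices[0].message.content.strip().startswith("YES")

# for q in questions:
#     a = deepseek_r1_slim(q)   # hypothetical call to the compressed model
#     print(q, judge(q, a))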

The Multiverse initiative is one more effort in the AI industry's pursuit of efficiency. DeepSeek itself has been working on "visual tokens" to improve AI models' memory and make them more efficient.

Currently, running cutting-edge models requires high-performance GPUs and large amounts of energy. In an interview with the magazine, Multiverse co-founder and scientific director Roman Orús said that current models are inefficient and that compressed versions can save resources while delivering similar performance. The company plans to compress other open-source models in the future.

Content freedom also drives the market: the removal of restrictions on Chinese models has attracted the attention of other companies in the sector. Perplexity, for example, offers R1 1776, another variant post-trained from DeepSeek R1 to strip out censorship.

mundophone

