DIGITAL LIFE
DeepSeek: the AI tool that serves as a propaganda instrument of the Chinese Communist Party
The Chinese artificial intelligence (AI) tool DeepSeek, despite its rapid development and popularity, operates under information control – censorship – that raises questions about freedom of expression.
Its capabilities, although advancing in areas such as logical reasoning and complex calculations, are accompanied by restrictions on the content it makes available.
Independent analyses and tests indicate that the model, especially after updates such as R1-0528, actively avoids or censors answers to questions considered sensitive by the Chinese government.
This practice distinguishes it from many Western models, which, although they have limitations, do not demonstrate such direct political alignment.
The DeepSeek model refuses to address topics such as the Tiananmen Square massacre, the situation of the Uighurs in Xinjiang, the protests in Hong Kong, or Taiwanese independence.
Questions about the Chinese Communist Party and its leaders are also frequently sidestepped or ignored. In some cases, the chatbot will initiate a response, then delete the content and suggest a change of subject.
DeepSeek’s R1-0528 version has been shown to be less permissive on these topics.
This content moderation is in line with Chinese law, which requires AI tools to adhere to “core socialist values” and not generate content that undermines “national unity” or “social harmony.”
Since 2023, China has implemented regulations requiring companies to conduct security reviews and obtain approvals before publicly releasing their AI products. These rules may involve subjecting models to extensive testing to ensure they do not answer questions deemed “unsafe.”
In addition, DeepSeek’s privacy policy states that user data is stored on servers in China, raising concerns about government access to this information.
Such informational control impacts free access to information and could restrict academic debate. While techniques such as leetspeak can sometimes circumvent these restrictions, the official version of the model remains aligned with state directives.
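To illustrate what is meant by leetspeak, the short Python sketch below shows the kind of character substitution involved. The mapping and the sample phrase are only illustrative assumptions; this is not a documented or reliable way to bypass any particular model's filters.

# Minimal sketch of leetspeak-style character substitution.
# The mapping below is an illustrative example only, not a documented
# method for circumventing any specific model's content moderation.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"}

def to_leetspeak(text: str) -> str:
    """Replace selected letters with look-alike digits, keeping other characters."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

if __name__ == "__main__":
    print(to_leetspeak("Tiananmen Square"))  # prints "714n4nm3n 5qu4r3"

Because the substituted text is still readable to a human but no longer matches exact keyword filters, prompts rewritten this way have sometimes slipped past restrictions, though model providers can and do close such gaps.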
Conclusion: in short, DeepSeek's behavior illustrates how state control over information is applied to artificial intelligence in China.
The restrictions imposed, in line with government directives, limit the model's ability to provide a full spectrum of information, especially on political and human rights issues, in contrast to the approach of models developed in contexts with different freedom of expression frameworks.
mundophone