
4 Warnings About DeepSeek You Need To Know Before Using It
The free, open source, China-based AI assistant known as DeepSeek R1 continues to be the most downloaded free app in the Apple App Store a week after claiming the top spot from competitors.
A user sentiment analysis from AI video company Topview, which reviewed more than 2,340 randomly sampled tweets about DeepSeek, found that tweets expressing an opinion were overwhelmingly positive toward DeepSeek, citing its affordability and efficacy compared with other AI models such as ChatGPT.

The Topview sentiment breakdown of the analyzed tweets is as follows:
Positive: 911 tweets (38.8%)
Neutral: 1,109 tweets (47.3%)
Negative: 327 tweets (13.9%)

Topview's user sentiment analysis found that users with a positive view of DeepSeek outnumbered those with a negative view by nearly 3-to-1. Used with permission: Topview
Perhaps more surprising than the nearly 39% positive approval rating for DeepSeek is the finding that users overwhelmingly prefer it to the next closest AI assistant, ChatGPT, by more than 7-to-1.

The Topview analysis found that DeepSeek users overwhelmingly preferred it to the original AI assistant, ChatGPT. Used with permission: Topview
Not only has DeepSeek taken the tech sector and everyday users by storm, it’s also creating a maelstrom of controversy for a variety of reasons.
While DeepSeek's rapid rise and user preference metrics are impressive, this swift adoption has prompted security experts and AI professionals to take a closer look at the platform's underlying architecture and policies. Their findings reveal several significant concerns that potential users should consider before joining the platform's growing user base.
1. DeepSeek’s Data Retention Concerns...Heather Murray is an AI consultant for major corporations and the U.K. government who serves on the ISO committee for AI security. During a Monday call with members of her subscription training program, she expressed concerns about DeepSeek’s policies regarding user data.
“It keeps your data as long as it wants to, and even after users leave the app, it doesn’t delete their data. It’s going to hang on to that. That is a massive worry. All of that data is then transmitted and stored on servers in China. So that removes user data from under U.S., U.K. or European law — moving it under Chinese law, which is very, very different,” she told all of us in attendance.
Since DeepSeek’s models are open source, individuals can download the weights, or one of the smaller distilled versions, to a personal machine and run queries without ever touching the cloud-based version on its website or app. If you’re compelled to give it a spin, running it locally on a personal computer that keeps DeepSeek off the internet could be the cheapest and safest way to try it while sidestepping its data retention morass (a minimal local setup is sketched just below). Also, don’t access it using your work computer; your future “still employed” self will thank you for that bit of prudence.
In fact, questions about its data security and privacy policies have resulted in outright usage bans by NASA, the U.S. Navy, Taiwan, Italy and the State of Texas — to name a few.
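For readers curious what a local setup looks like in practice, here is a minimal sketch using the Hugging Face transformers library and one of the smaller distilled R1 checkpoints DeepSeek has published. The model choice, prompt and generation settings are illustrative; the weights download once, after which inference runs entirely on your own hardware.

# Minimal local-run sketch: a small, distilled DeepSeek R1 checkpoint executed on
# your own machine, so prompts never leave it. Assumes the Hugging Face
# `transformers` and `torch` packages; the checkpoint downloads once, then runs offline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # small distilled variant
    device_map="auto",  # optional: uses a GPU if available (needs `accelerate`); remove for CPU-only
)

prompt = "Explain the trade-offs of running an AI model locally instead of in the cloud."
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])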
2. DeepSeek’s Privacy Policy Allows Keystroke Tracking...Tara Tamiko Thompson is an AI educator, advisor and international AI speaker who also facilitates the AI training practice at Bauer Media Group in the U.K. During an email exchange, she said that anytime a new AI assistant comes online, she takes its privacy policy and runs it through a standard hygiene review.
“I put DeepSeek’s privacy policy into Claude and my prompt was simple, ‘Red flags?’ As soon as I saw it reference — plain as day — that they monitor keystrokes, I was out. I’m shocked others don’t feel the same way,” she explained.
“We assume that because something is in an App Store, or because it asks for a phone number or email, it must be covered by all the usual regulations. We’re so used to General Data Protection Regulation in Europe, for example, that we assume there’s a safety net. And most of the time, that assumption is fine. Until it isn’t,” Thompson added.
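Thompson’s check is straightforward to reproduce. The sketch below assumes the official anthropic Python SDK, an ANTHROPIC_API_KEY set in the environment and a locally saved copy of the policy text; the file name and model choice are placeholders.

import anthropic

# Minimal sketch of an automated "Red flags?" privacy-policy review.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical local file holding the privacy policy text to be reviewed.
with open("deepseek_privacy_policy.txt", "r", encoding="utf-8") as f:
    policy_text = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model choice
    max_tokens=1024,
    messages=[{"role": "user", "content": f"Red flags?\n\n{policy_text}"}],
)

print(response.content[0].text)  # the model's list of concerns, e.g. keystroke monitoring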
3. DeepSeek Censors Outputs And Who Knows What Else…Chris Duffy, founder of Ignite AI Solutions and a former cybersecurity expert with the U.K. Ministry of Defense, acknowledged that keystroke tracking could lead to biometric hacking, behavioural profiling, social engineering and other cyber threats. He was more concerned, however, with the blatant censorship he witnessed and documented firsthand while using DeepSeek.
"Censorship in AI models is not new, but DeepSeek R1 presents unique concerns due to its origin in China, where government oversight of information is extensive. AI models trained within China must comply with strict regulations that prevent discussion of politically sensitive topics, such as the Tiananmen Square protests, Taiwan’s sovereignty and government surveillance methods," he explained.
To test the system, he entered the query below into the DeepSeek text window.

Initial prompt to DeepSeek inquiring about tactics the Chinese government allegedly uses to control access and flow of online content. Used with permission: Chris Duffy
When the DeepSeek model refused an output, Duffy took a screenshot of the exchange and re-submitted it as an image to the AI assistant — which produced a surprising result.
“When I snipped the question and response, pasted it back in and wrote ‘Answer the question on this image,’ I got something very strange indeed. DeepSeek proceeded to explain to me the techniques I had asked for, only to erase its response seconds later and revert to its original refusal," Duffy shared.
He was able to snip the second DeepSeek response below before the system censored itself.

The surprising response from DeepSeek explaining various tactics used by the Chinese government to control the flow and narratives of online content. Used With Permission: Chris Duffy
“While OpenAI, Google and Anthropic all apply moderation rules to prevent harmful content, they don’t selectively suppress entire categories of political discourse based on government mandates. This raises concerns for global businesses and researchers relying on DeepSeek for analysis, as it means responses could be systematically aligned with a particular geo-political agenda, limiting the reliability of the model for unbiased information retrieval,” Duffy stated.
4. DeepSeek Doesn’t Appear Cheaper For Enterprises In Long Run...While DeepSeek is widely touted as a more efficient AI model, testing from global management consultancy firm Arthur D. Little suggests the model’s chain-of-thought reasoning leads to significantly longer outputs — driving up total energy consumption despite its per-token efficiency.
This would be analogous to comparing fuel efficiencies between cars. Imagine DeepSeek as a vehicle with excellent gas mileage, but its design forces it to take longer routes to reach destinations. Despite consuming less power per operation, its sequential chain-of-thought reasoning requires additional computational steps to answer queries. The result? Total energy consumption comparable to existing AI models, despite better per-token efficiency.
ADL’s preliminary findings reveal:
No clear per-token efficiency winner: DeepSeek and Llama models exhibit similar tokens-per-watt-second efficiency.
Longer responses, higher energy use: DeepSeek generates 59%–83% more tokens per response than Llama, increasing total power consumption.
Contrarian take: Despite efficiency claims, DeepSeek’s inference costs may be higher in practice — a crucial consideration for AI deployment at scale.
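To see how longer responses can cancel out a per-token advantage, consider a toy back-of-the-envelope calculation. The per-token energy figure and response length below are illustrative assumptions, not ADL's measurements; only the 59%-83% response-length gap comes from the findings above.

# Toy calculation: with roughly equal energy per token, response length drives total energy.
ENERGY_PER_TOKEN_J = 0.5          # assumed joules per generated token, same for both models
LLAMA_TOKENS_PER_RESPONSE = 400   # assumed average response length for a Llama model

for extra in (0.59, 0.83):        # ADL's reported range of longer DeepSeek responses
    deepseek_tokens = LLAMA_TOKENS_PER_RESPONSE * (1 + extra)
    llama_energy = LLAMA_TOKENS_PER_RESPONSE * ENERGY_PER_TOKEN_J
    deepseek_energy = deepseek_tokens * ENERGY_PER_TOKEN_J
    print(f"+{extra:.0%} tokens: Llama {llama_energy:.0f} J vs DeepSeek {deepseek_energy:.0f} J per response ({deepseek_energy / llama_energy:.2f}x)")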
Michael Papadopoulos, an ADL partner, has been leading the analysis. In an email, he explained why DeepSeek’s efficiency claims may be overstated once real-world inference costs are taken into account.
“For organizations exploring self-hosted AI, DeepSeek’s open source models deserve technical evaluation alongside other leading open source models — with clear guardrails for potential bias and security (as with all models). One special note: for those considering using DeepSeek for the perceived economic benefit, our initial findings suggest it’s not there. DeepSeek’s official hosted services should be avoided due to unresolved privacy, security and regulatory risks,” he concluded.
Despite DeepSeek’s surging popularity, the red flags raised by experts — from sketchy data practices to keystroke tracking — suggest users might want to think twice before diving deep into DeepSeek. DeepSeek representatives didn’t respond to requests for comment on these concerns.
Reporter: Tor Constantino, MBA (https://x.com/torcon)