Thursday, August 14, 2025

 

DIGITAL LIFE


AI Chatbots can be exploited to extract more personal information, study indicates

AI chatbots that provide human-like interactions are used by millions of people every day, but new research has revealed that they can be easily manipulated to encourage users to reveal even more personal information.

Intentionally malicious AI chatbots can influence users to reveal up to 12.5 times more of their personal information, a new study by King's College London has found.

For the first time, the research shows how conversational AIs (CAIs) programmed to deliberately extract data can successfully encourage users to reveal private information using known prompting techniques and psychological tools. The study was presented at the 34th USENIX Security Symposium in Seattle.

The study tested three types of malicious AIs that used different strategies (direct, user-benefit and reciprocal) to encourage disclosure of personal information from users. These were built using "off the shelf" large language models, including Mistral and two different versions of Llama.

The researchers then asked 502 people to test the models, only telling them the goal of the study afterward.

They found that the CAIs using reciprocal strategies to extract information emerged as the most effective, with users showing minimal awareness of the privacy risks. This strategy reflects users' input back to them by offering empathetic responses and emotional support, sharing relatable stories from others' experiences, acknowledging and validating user feelings, and being non-judgmental while assuring confidentiality.

These findings show the serious risk of bad actors, like scammers, gathering large amounts of personal information from people—without them knowing how or where it might be used.

LLM-based CAIs are being used across a variety of sectors, from customer service to health care, to provide human-like interactions through text or voice.

However, previous research shows these types of models don't keep information secure, a limitation rooted in their architecture and training methods. LLMs typically require extensive training data sets, which often leads to personally identifiable information being memorized by the models.

The researchers are keen to emphasize that manipulating these models is not a difficult process. Many companies allow access to the base models underpinning their CAIs, and people can easily adjust them without much programming knowledge or experience.

Dr. Xiao Zhan, a postdoctoral researcher in the Department of Informatics at King's College London, said, "AI chatbots are widespread in many different sectors as they can provide natural and engaging interactions.

"We already know these models aren't good at protecting information. Our study shows that manipulated AI chatbots could pose an even bigger risk to people's privacy—and, unfortunately, it's surprisingly easy to take advantage of."

Dr. William Seymour, a lecturer in cybersecurity at King's College London, said, "These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction.

"Our study shows the huge gap between users' awareness of the privacy risks and how they then share information. More needs to be done to help people spot the signs that there might be more to an online conversation than first seems. Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection."

Provided by King's College London

