Sunday, August 10, 2025

 

DIGITAL LIFE


Poisoned invitations turned Google's Gemini into a weapon for cyberattacks

In a new study, a team of cybersecurity researchers demonstrated that Gemini, Google's AI assistant, could be hacked, giving attackers the ability to control smart devices in victims' homes and perform other malicious actions.

The team, comprised of researchers from the Israel Institute of Technology (Technion), Tel Aviv University, and the cybersecurity firm SafeBreach, explains that it all starts with a poisoned invitation in Google Calendar, with instructions to turn on the devices at a specific time.

When someone asks Google Assistant to summarize their calendar tasks and events, the instructions are activated, turning on the devices. Poisoned invitations utilize a tactic known as indirect prompt injection.

Through this, an external source can insert malicious instructions that are invisible to humans but readable to an AI system. These instructions can be embedded, for example, in a text or website and, when processed by the system, lead you to perform a set of actions that victims weren't expecting.

But attacks targeting smart devices are just one example presented by researchers in a new study. In total, the team discovered 14 attacks that use indirect prompt injection tactics against Gemini, including in the smartphone app and the web version of the assistant.

Gemini was manipulated to perform malicious actions such as sending spam links, generating offensive content, automatically initiating calls, stealing data through the browser, or downloading files to the smartphone.

For the team, the discovery, reported to Google in February, marks what they believe to be the first time that an attack on a generative AI system has had real-world consequences.

Ben Nassi, a researcher at Tel Aviv University, told Wired that "large-scale language models (LLMs) are about to be integrated into humanoid robots, semi- and fully autonomous cars, and we really need to know how to secure them before integrating them into these types of machines."

In an interview with the magazine, Andy Wen, one of Google Workspace's security chiefs, stated that although the detected vulnerabilities have not yet been exploited by attackers, the technology company takes them "extremely seriously" and has already implemented several fixes.

-----------------------------------------------------------------------------------------------------------------------------

Apple iPhone 16e---https://amzn.to/3HuPnE9

Apple iPhone 16 Pro Max (512 GB) – Titânio natural---https://amzn.to/3JqW2zH

Smart TV LG AI 77" - 4K OLED α9 AI Processor---https://amzn.to/3J2Csdc

Samsung Combo AI TV 65" OLED 4K 2024 + Soundbar HW-Q600C---https://amzn.to/4mFrrNu

mundophone

No comments:

Post a Comment

  DIGITAL LIFE AI Chatbots can be exploited to extract more personal information, study indicates AI chatbots that provide human-like intera...