DIGITAL LIFE
Anthropic's AI is already being weaponized by hackers in sophisticated cyberattacks
Artificial intelligence continues to prove an ambivalent technology: it is transforming businesses and people's lives, but it is also being put to harmful uses. As the BBC reports, Anthropic itself has stated that its technology, specifically the AI model Claude, has been weaponized by hackers to carry out sophisticated cyberattacks.
The company says the tools were used by hackers for large-scale theft and extortion of personal data. According to Anthropic, the AI helped write the code used in the cyberattacks. The company also documented another case, involving North Korean operatives who used Claude to fraudulently obtain jobs at leading companies in the United States.
Despite this, Anthropic says it was able to stop the hackers, reporting the cases to the authorities and adding improved misuse-detection tools to Claude. The model's advanced code-writing capabilities have made it popular precisely because they are so accessible.
Among the most advanced examples, the company cites a case it calls "vibe hacking," in which AI was used to write code for attacks on 17 different organizations, including government agencies. Anthropic notes that AI was used here to an unprecedented degree: Claude made both tactical and strategic decisions, including what type of data to exfiltrate and how to craft psychologically targeted extortion demands. It even suggested ransom amounts to demand from victims.
In the job-application scheme, the AI was used to write cover letters and, after the applicants were hired, to help translate messages and write code.
"Vibe hacking" and new forms of extortion... The report highlights the unprecedented sophistication of these actions. One of the techniques used, dubbed "vibe hacking," involved using AI to create psychologically impactful extortion messages to pressure victims into paying large ransoms in exchange for not disclosing sensitive data.
According to Engadget, the tool was also used to make strategic decisions in real time, such as what information to extract from compromised systems and how to maximize the impact of the attacks. These capabilities demonstrate the potential of "agentic" AIs, which can act semi-autonomously and rapidly, without direct human supervision.
Transnational fraud and geopolitical implications
In another case described in the report, hackers linked to North Korea used Claude to fraudulently apply for remote jobs at major US technology companies. The AI helped create resumes and perform post-hire duties, including writing code and communicating with colleagues. These incidents raise serious concerns about violations of international sanctions and the use of AI to circumvent corporate and diplomatic security controls. According to cybersecurity experts, this type of operation marks a new phase of digital crime, in which technical expertise is no longer an essential barrier.
Measures taken and implications for the AI sector
After identifying the abuse, Anthropic said it had deactivated the accounts associated with the attacks and shared the information with the appropriate authorities. The company also developed an automated screening tool to detect misuse and announced improvements to its internal monitoring. Technical details of these mechanisms, however, were not disclosed.
Experts interviewed by outlets such as the BBC and Engadget stressed that criminals' use of AI represents a significant operational shift. Cybersecurity consultant Alina Timofeeva told the BBC that AI is shrinking the time needed to exploit vulnerabilities, which means defenses must become preventative rather than merely reactive. Anthropic's report also underscores that generative models can reduce criminals' reliance on technical skill, making such crimes more accessible and widespread. The company says it continues to monitor new abuse attempts in real time.
mundophone