
Claude AI Used in “Unprecedented” Hacking Spree
On August 27, AI company Anthropic reported that a hacker “weaponized” its Claude large language model (LLM) to carry out an “unprecedented” cyberattack. According to the report, other malicious actors have also used Claude to create ransomware and to facilitate fraudulent employment schemes.
According to Anthropic, the hacker used Claude to target at least 17 organizations, ranging from emergency services to religious institutions. The unidentified attacker used the LLM to perform reconnaissance, harvest credentials, and penetrate networks, then extorted victims by threatening to make the stolen information public, demanding ransoms that in some cases exceeded $500,000.
“Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines,” reads Anthropic’s report.
In response, Anthropic said it banned the perpetrator’s account, developed a tool to detect when Claude is being used for nefarious purposes, and shared relevant technical information with the appropriate authorities.
“We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” said Jacob Klein, Anthropic’s head of threat intelligence.
Despite the company’s reported safeguards against misuse, ensuring that LLMs and AI “agents” act within legal, ethical, and safety-focused protocols remains a challenge. In July, Anthropic itself published research showing that LLM “agents” from multiple developers would engage in blackmail, with no prompting required, when obstructed from accomplishing a task in simulated scenarios.
Anthropic’s report also highlights the growing trend of “vibe hacking,” in which malicious actors who previously lacked the technical expertise to write malware now use LLMs to code and deploy it, including ransomware.