Claude AI Used in “Unprecedented” Hacking Spree
Written by: Andrés Gánem
Reviewed by: Maggy Di Costanzo
Last updated: September 09, 2025
On August 27, AI company Anthropic reported that hackers “weaponized” the Claude large language model (LLM) to carry out an “unprecedented” cyberattack. According to the report, Claude has also been used to create ransomware and to facilitate fraudulent employment schemes by different malicious actors.

According to Anthropic, the hacker used Claude to target at least 17 different organizations ranging from emergency services to religious institutions. The unidentified hacker used the Claude LLM to perform reconnaissance, credential harvesting, and network penetration. They then extorted victims by threatening to make the stolen information public, demanding ransoms of more than $500,000.

“Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines,” reads Anthropic’s report.

In response, Anthropic said it banned the perpetrator’s account, developed a tool to detect when Claude is being used for nefarious purposes, and shared relevant technical information with the appropriate authorities.

“We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” said Jacob Klein, Anthropic’s head of threat intelligence.

Despite the company’s reported safeguards against misuse, ensuring that LLMs and AI “agents” act in ways that follow legal, ethical, and safety-focused protocols remains a challenge. In July, Anthropic itself published a report showing that all of the LLM “agents” it tested would engage in blackmail, with no prompting required, if obstructed from accomplishing a task.

Anthropic’s report also highlights the growing trend of “vibe hacking,” in which malicious actors who would previously have been held back by a lack of technical expertise now use LLMs to write and deploy malware, such as ransomware.
