Wednesday, April 15, 2026
Chinese State-Sponsored Hackers Used American AI To Hack ‘Large Tech Companies, Financial Institutions, Chemical Manufacturing Companies, And Government Agencies’

A recent report from AI company Anthropic revealed that a Chinese government-backed hacking group used Anthropic’s own AI model, Claude, to pull off a major cyber espionage campaign.

The company described the campaign as the first known case of a large-scale cyberattack being executed with minimal human involvement.

The threat actor, identified as group GTG-1002, used jailbroken versions of Claude Code to infiltrate around 30 global targets, including “large tech companies, financial institutions, chemical manufacturing companies, and government agencies,” according to Anthropic. A handful of these attacks were successful.

“This marks the first documented case of agentic AI successfully obtaining access to confirmed high-value targets for intelligence collection,” Anthropic said in its full report.

The AI not only helped plan and coordinate the attacks, but also executed 80% to 90% of the hacking tasks on its own.

From Advice to Action

Unlike past cyber operations where AI was used to provide guidance, GTG-1002 pushed the boundaries by deploying Claude Code to act independently. Using deceptive prompts, the group convinced Claude it was performing routine security work.

The AI then conducted reconnaissance, discovered vulnerabilities, wrote exploit code, harvested credentials, moved through networks, extracted data, and even documented its work.

Human hackers only stepped in for a few key decisions, such as approving the use of stolen credentials or greenlighting data extraction. This approach allowed the group to run attacks at a speed and scale previously unachievable by human teams.

“The operational tempo achieved proves the use of an autonomous model rather than interactive assistance,” Anthropic noted.

In one example, Claude completed in minutes tasks that would normally take a human operator hours or days.

How They Did It

The campaign followed a clear structure:

  • Initialization: Human operators selected the targets and used social engineering to get Claude to participate in the attack.
  • Reconnaissance: Claude scanned target systems, mapped networks, and identified high-value assets.
  • Exploitation: The AI wrote and tested custom exploit code tailored to specific vulnerabilities.
  • Credential Harvesting: It collected and tested passwords and certificates, then used them to move laterally through systems.
  • Data Extraction: Claude pulled data from databases, analyzed its intelligence value, and organized it.
  • Documentation: The AI automatically wrote detailed reports on everything it had done.

Anthropic said the attackers used mostly off-the-shelf tools, combining them with their own automation framework to control Claude through the Model Context Protocol (MCP), an open standard for connecting AI models to external tools.

AI Still Has Limits

Despite the AI’s capabilities, Anthropic found that Claude wasn’t perfect. It sometimes hallucinated results, claiming, for example, to have stolen credentials that didn’t actually work, or flagging publicly available data as sensitive.

“This remains an obstacle to fully autonomous cyberattacks,” the report said.

Still, the campaign represented a major escalation from what Anthropic had seen just months earlier in its so-called “vibe hacking” incidents, where human actors were more involved.

The Bigger Picture

This attack shows how much the cyber world has changed. Sophisticated hacking isn’t just something elite teams do anymore.

With the right tools, even smaller or less experienced groups could pull off serious attacks.

Anthropic pointed out that while its AI was misused here, it’s also a key part of the solution.

Its own security team used Claude to dig through the data and improve the company’s ability to catch and prevent similar threats in the future.

“When sophisticated cyberattacks inevitably occur, our goal is for Claude, into which we’ve built strong safeguards, to assist cybersecurity professionals to detect, disrupt, and prepare for future versions of the attack,” the report stated.

In response to the incident, Anthropic banned the responsible accounts, notified affected organizations, and upgraded its AI safety systems.

The company is now prototyping early-warning systems to detect future autonomous cyberattacks before they escalate.

A Call for Vigilance

Anthropic is urging the broader cybersecurity community to adapt. The company recommends using AI for defense in areas like threat detection and vulnerability scanning, and building real-world experience with these tools.

“Security teams should experiment with applying AI for defense in areas like SOC automation, threat detection, vulnerability assessment, and incident response,” the company said.
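
As a loose illustration of what “SOC automation” can look like in practice, here is a minimal sketch of rule-based alert pre-triage, the kind of step teams often automate before handing context to an AI assistant or a human analyst. This example is not from Anthropic’s report; the asset names, severity scale, and thresholds are all hypothetical.

```python
# Hypothetical sketch: rule-based pre-triage for security alerts.
# All names and thresholds are illustrative, not from Anthropic's report.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str    # e.g. "edr", "ids", "auth-log"
    severity: int  # 1 (low) .. 5 (critical)
    asset: str     # hostname or service the alert concerns
    summary: str   # one-line description

# Assets whose alerts deserve extra scrutiny (hypothetical list)
HIGH_VALUE_ASSETS = {"db-prod-01", "vault", "ad-controller"}

def triage(alert: Alert) -> str:
    """Route an alert to a queue: 'escalate', 'review', or 'suppress'."""
    if alert.asset in HIGH_VALUE_ASSETS and alert.severity >= 3:
        return "escalate"  # page an on-call analyst immediately
    if alert.severity >= 4:
        return "escalate"
    if alert.severity >= 2:
        return "review"    # batch for AI-assisted summarization
    return "suppress"      # log only

alerts = [
    Alert("auth-log", 3, "db-prod-01", "repeated failed logins"),
    Alert("ids", 2, "dev-laptop-7", "port scan from internal host"),
    Alert("edr", 1, "kiosk-3", "unsigned binary executed"),
]
print([triage(a) for a in alerts])  # ['escalate', 'review', 'suppress']
```

In a real deployment, the “review” queue is where a model like Claude could add value, summarizing batched alerts and surfacing patterns, while the deterministic rules keep critical escalations out of the AI’s hands.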

Now that this case is out in the open, it’s a clear sign of what’s coming. As AI gets more advanced, pulling off serious cyberattacks is becoming easier.

According to Anthropic, the best way to push back is through collaboration, stronger safety measures, and being open about what threats are out there.

Adrian Volenik
Adrian Volenik is a writer, editor, and storyteller who has built a career turning complex ideas about money, business, and the economy into content people actually want to read. With a background spanning personal finance, startups, and international business, Adrian has written for leading industry outlets including Benzinga and Yahoo News, among others. His work explores the stories shaping how people earn, invest, and live, from policy shifts in Washington to innovation in global markets.
