An alarming watershed for artificial intelligence, or an overhyped threat?
AI startup Anthropic’s recent announcement that it detected the world’s first artificial intelligence-led hacking campaign has prompted a multitude of responses from cybersecurity experts.
While some observers have raised the alarm about the long-feared arrival of a dangerous inflection point, others have greeted the claims with scepticism, arguing that the startup’s account leaves out crucial details and raises more questions than it answers.
In a report on Friday, Anthropic said its assistant Claude Code was manipulated to carry out 80-90 percent of a “large-scale” and “highly sophisticated” cyberattack, with human intervention required “only sporadically”.
Anthropic, the creator of the popular Claude chatbot, said the attack aimed to infiltrate government agencies, financial institutions, tech firms and chemical manufacturing companies, though the operation was only successful in a small number of cases.
The San Francisco-based company, which attributed the attack to Chinese state-sponsored hackers, did not specify how it had uncovered the operation, nor identify the “roughly” 30 entities that it said had been targeted.
Roman V Yampolskiy, an AI and cybersecurity expert at the University of Louisville, said there was no doubt that AI-assisted hacking posed a serious threat, though it was difficult to verify the precise details of Anthropic’s account.
“Modern models can write and adapt exploit code, sift through huge volumes of stolen data, and orchestrate tools faster and more cheaply than human teams,” Yampolskiy told Al Jazeera.
“They lower the skills barrier for entry and increase the scale at which well-resourced actors can operate. We are effectively putting a junior cyber-operations team in the cloud, rentable by the hour.”
Yampolskiy said he expected AI to increase both the frequency and the severity of attacks.
Jaime Sevilla, director of Epoch AI, said he did not see much new in Anthropic’s report, but that past experience suggested AI-assisted attacks were both feasible and likely to become increasingly common.
“This is likely to hit medium-sized businesses and government agencies hardest,” Sevilla told Al Jazeera.
“Historically, they weren’t valuable enough targets for dedicated campaigns and often underinvested in cybersecurity, but AI makes them profitable targets. I expect many of these organisations to adapt by hiring cybersecurity specialists, launching vulnerability-reward programmes, and using AI to detect and patch weaknesses internally.”
While many analysts have called for more information from Anthropic, some have dismissed its claims outright.
After United States Senator Chris Murphy warned that AI-led attacks would “destroy us” if regulation did not become a priority, Meta AI chief scientist Yann LeCun called out the lawmaker for being “played” by a company seeking regulatory capture.
“They are scaring everyone with dubious studies so that open source models are regulated out of existence,” LeCun said in a post on X.
Anthropic did not respond to a request for comment.
Liu Pengyu, a spokesperson for the Chinese embassy in Washington, DC, said China “consistently and resolutely” opposed all forms of cyberattacks.
“We hope that relevant parties will adopt a professional and responsible attitude, basing their characterisation of cyber incidents on sufficient evidence, rather than unfounded speculation and accusations,” Liu told Al Jazeera.
Toby Murray, a computer security expert at the University of Melbourne, said Anthropic had business incentives to highlight both the dangers of such attacks and its ability to counter them.
“Some people have questioned Anthropic’s claims that suggest that the attackers were able to get Claude AI to perform highly complex tasks with less human oversight than is typically required,” Murray told Al Jazeera.
“Unfortunately, they don’t give us hard evidence to say exactly what tasks were performed or what oversight was provided. So it’s difficult to pass judgement one way or the other on these claims.”
Still, Murray said he did not find the report particularly surprising, considering how effective some AI assistants are at tasks such as coding.
“I don’t see AI-powered hacking changing the kinds of hacks that will occur,” he said.
“However, it might usher in a change of scale. We should expect to see more AI-powered hacks in the future, and for those hacks to become more successful.”
While AI is set to pose growing risks to cybersecurity, it will also be pivotal in bolstering defences, analysts say.
Fred Heiding, a Harvard University research fellow who specialises in computer security and AI security, said he believed AI would provide a “significant advantage” to cybersecurity specialists in the long term.
“Today, many cyber-operations are held back by a shortage of human cyber-professionals. AI will help us overcome this bottleneck by enabling us to test all our systems at scale,” Heiding told Al Jazeera.
Heiding, who described Anthropic’s account as broadly credible but “overstated”, said the big danger is that hackers will have a window of opportunity to run amok as security experts struggle to catch up with their exploitation of increasingly advanced AI.
“Unfortunately, the defensive community is likely to be too slow to implement the new technology into automated security testing and patching solutions,” he said.
“If that is the case, attackers will wreak havoc on our systems with the press of a button, before our defences have had time to catch up.”
https://www.aljazeera.com/economy/2025/11/19/a-dangerous-tipping-point-ai-hacking-claims-prompt-cybersecurity-debate

