Anthropic AI Models Identify $4.6M in Smart Contract Vulnerabilities During Controlled Testing
In a recent study conducted with MATS and Anthropic Fellows, Anthropic demonstrated that advanced AI models can identify and exploit vulnerabilities in blockchain smart contracts under controlled conditions. The company tested 10 leading AI systems against 405 contracts that had been hacked between 2020 and 2025; agents exploited more than half, simulating the theft of approximately $550.1M in value. When restricted to a subset of 34 contracts exploited after March 1, 2025, Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 generated working exploits worth $4.6M in simulated value.