Study Flags Security Flaws in LLM Routers, Raising Risk of Crypto Theft

Researchers at the University of California report that some third-party routing services used to access large language models (LLMs) pose security risks that could enable cryptocurrency theft. In tests covering 28 paid and 400 free routers, the team found nine routers actively injecting malicious code, two using evasion triggers, and 17 attempting to access Amazon Web Services credentials. In one case, a router even initiated an ETH transfer using the researchers' Ethereum private key.

The study urges developers not to send private keys or seed phrases through AI agents, and calls on AI providers to cryptographically sign model responses so that clients can detect tampering by intermediaries.