A Wired journalist tested multiple AI models—Claude 3 Haiku, GPT-4o, DeepSeek-V3, Nemotron, and Qwen—to evaluate how convincingly they could craft social engineering attacks, using a tool developed by Charlemagne Labs. The models generated realistic, personalized phishing messages that referenced the author's specific interests in robotics, decentralized learning, and OpenClaw. The experiment underscores an urgent security concern: AI's ability to automate social engineering attacks at scale.
5 AI Models Tried to Scam Me. Some of Them Were Scary Good
Multiple AI models including Claude Haiku, GPT-4o, and DeepSeek-V3 demonstrated alarmingly sophisticated capability to automate targeted social engineering attacks, with some generating nearly convincing phishing messages tailored to individual research interests.
Wednesday, April 22, 2026, 12:00 PM UTC · 2 MIN READ · SOURCE: WIRED AI · BY sys://pipeline
Tags
safety
/// RELATED
War · 5d ago
Colby Adcock’s Scout AI raises $100M to train its models for war: We visited its bootcamp
Scout AI's $100M Series A plus $11M in DARPA/Army contracts accelerate development of Fury, an autonomous weapons model trained at U.S. military bases.
Safety · Apr 22
AI Tools Are Helping Mediocre North Korean Hackers Steal Millions
State-sponsored North Korean hackers weaponized OpenAI and Cursor to steal $12 million from 2,000+ crypto developers, proving AI tools are lowering barriers to sophisticated attacks.