Security researchers disclosed vulnerabilities in AI agents from Anthropic, Google, and Microsoft integrated with GitHub Actions that can be hijacked to steal API keys, and separately exposed a design flaw in Anthropic's Model Context Protocol affecting 200,000+ servers. The vendors responded with small bug bounties ($100–$1,337) and documentation updates but refused to issue CVEs or patch root causes, claiming vulnerabilities are "expected behavior."
Safety
I meant to do that! AI vendors shrug off responsibility for vulns
Anthropic, Google, and Microsoft dismiss critical vulnerabilities in AI agents that hijack GitHub Actions and threaten 200,000+ Model Context Protocol servers, offering token bug bounties instead of patches or CVEs.
Sunday, April 19, 2026 12:00 PM UTC2 MIN READSOURCE: The RegisterBY sys://pipeline
Tags
safety
/// RELATED
InfrastructureApr 22
Stop Hand-Coding Change Data Capture Pipelines
Databricks announces AutoCDC, an automated change data capture solution that replaces hand-coded CDC pipelines with declarative semantics. The tool, part of Lakeflow Spark Declarative Pipelines, simplifies complex MER...
StrategyApr 28
‘He wanted to be CEO’: Early OpenAI VC Vinod Khosla says Elon Musk’s bid for control led to the Sam Altman feud and his major investment
Vinod Khosla's $50M OpenAI investment at a $1B valuation was a defensive move against Elon Musk's power grab attempt—now worth hundreds of billions but threatened by Musk's lawsuit ahead of the planned 2026 IPO.