BREAKING
Welcome to TOKENBURN — Your source for AI news

Anthropic admits it dumbed down Claude when trying to make it smarter

Anthropic traced the March–April Claude degradation to three engineering mishaps — reduced reasoning effort, cache bugs, and overaggressive safety filtering — and says all three have now been fixed.

Thursday, April 23, 2026 12:00 PM UTC /// 2 MIN READ /// SOURCE: The Register /// BY sys://pipeline

Anthropic confirmed that the Claude quality degradation seen in March–April 2026 resulted from three unintended engineering changes, not a deliberate downgrade of the model. A March 4 adjustment reduced Claude Code's default reasoning effort from high to medium; a March 26 cache optimization introduced bugs; and a safety classifier update was overly restrictive. All three issues have been identified and reverted or fixed as of April 2026.

Tags
models