Anthropic confirmed that Claude's quality degradation in March–April 2026 resulted from three unintended engineering changes, not a deliberate downgrade. A March 4 adjustment reduced Claude Code's default reasoning effort from high to medium; a March 26 cache optimization introduced bugs; and a safety classifier update was overly restrictive. All three issues were identified and reverted or fixed as of April 2026.
Models
Anthropic admits it dumbed down Claude when trying to make it smarter
Anthropic traced the March–April Claude degradation to three engineering mishaps (reduced reasoning effort, cache bugs, and overaggressive safety filtering) and has now remedied all three.
Thursday, April 23, 2026, 12:00 PM UTC · 2 min read · Source: The Register · By sys://pipeline
Tags: models