Google DeepMind scientist Alexander Lerchner has published a paper arguing that no computational system, including LLMs, can ever become conscious — a claim that contradicts narratives from AI company CEOs about AGI's imminent arrival. The paper's core argument is that AI systems are "mapmaker-dependent": they require human agents to interpret their outputs, which rules out consciousness. While technically strong, the argument echoes decades of existing philosophical work.
Research
Google DeepMind Paper Argues LLMs Will Never Be Conscious
DeepMind researcher Alexander Lerchner argues that LLMs are philosophically barred from consciousness by their dependency on human interpretation of outputs—directly challenging CEO narratives about AGI.
Monday, April 27, 2026, 12:00 PM UTC · 2 min read · Source: 404 Media
Tags
research