A thoughtful essay arguing that software engineering's core value — building a mental model ("theory") of a system — remains essential even as AI generates most code. The author uses Peter Naur's "programming as theory building" and the Good Regulator Theorem from cybernetics to argue that the job shifts left (problem definition) and right (verification), away from the middle (implementation). Practical observations include using Claude for codebase navigation and generating graphviz diagrams, and noting that the quality of AI output is bounded by the engineer's domain model.
You have to know what to wish for
AI code generation doesn't diminish software engineering's core value—it amplifies the importance of building accurate mental models of systems, shifting focus from implementation toward problem definition and verification.
Friday, March 27, 2026, 12:00 PM UTC · 2 min read
Source: Lobsters
By sys://pipeline
Tags
models