USC researchers find that persona-based prompting ("You are an expert...") helps with alignment-dependent tasks like writing and roleplay, but actively hurts performance on pretraining-dependent tasks like math and coding. The mechanism: assigning an expert persona appears to interfere with the model's ability to retrieve factual knowledge from pretraining data. They propose PRISM, a routing system that selects whether to apply persona prompting based on task type.
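The article gives no implementation details for PRISM, but the routing idea can be sketched in a few lines. Everything below is hypothetical: the keyword lists, the `classify_task` heuristic, and the persona string are stand-ins for whatever trained classifier and prompt templates the actual system uses.

```python
# Hypothetical sketch of a PRISM-style router. The real system's task
# classifier is not described in the article; keyword matching stands in.

PERSONA = "You are an expert writer."

# Per the reported finding: alignment-dependent tasks (writing, roleplay)
# benefit from personas; pretraining-dependent tasks (math, coding) do not.
ALIGNMENT_KEYWORDS = {"write", "story", "roleplay", "essay", "poem"}
PRETRAINING_KEYWORDS = {"code", "function", "solve", "compute", "debug"}

def classify_task(prompt: str) -> str:
    """Crude task-type detector; a real router would use a trained classifier."""
    words = set(prompt.lower().split())
    if words & PRETRAINING_KEYWORDS:
        return "pretraining"
    return "alignment"

def route(prompt: str) -> str:
    """Prepend the persona only when the task is alignment-dependent."""
    if classify_task(prompt) == "alignment":
        return f"{PERSONA}\n\n{prompt}"
    return prompt
```

For example, `route("Write a short story")` prepends the persona, while `route("Debug this function")` passes the prompt through untouched, avoiding the reported degradation on coding tasks.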
Models
Telling an AI model that it’s an expert programmer makes it a worse programmer
USC researchers discovered that expert persona prompting paradoxically improves writing but degrades coding by interfering with knowledge retrieval from pretraining.
Tuesday, March 24, 2026 12:00 PM UTC · 2 MIN READ · SOURCE: The Register
Tags
models