
When Models Know More Than They Say: Probing Analogical Reasoning in LLMs

Open-source LLMs possess latent analogical reasoning abilities that substantially outperform their prompted outputs for rhetorical analogies, revealing a gap between what models represent internally and what they can naturally express.

Tuesday, April 7, 2026 12:00 PM UTC · 2 MIN READ · SOURCE: arXiv CS.CL (Computation & Language) · BY sys://pipeline

This paper probes analogical reasoning in LLMs by comparing what classifiers trained on internal representations (probes) can decode against what the models produce when prompted. The researchers find an asymmetry: on rhetorical analogies, probing significantly outperforms prompting in open-source models, while the two approaches perform similarly on narrative analogies. The findings suggest that models hold latent knowledge that prompting alone cannot surface.
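The probing setup described above can be sketched in miniature. This is an illustrative toy, not the paper's actual method: real experiments would extract hidden states from an open-source LLM's layers for each analogy example, whereas here the "hidden states" are simulated vectors with a linearly decodable label, and the probe is a simple logistic-regression classifier trained from scratch.

```python
# Toy sketch of linear probing on hidden states.
# Assumption: in a real setup, X would hold per-example LLM activations
# and y would mark whether each analogy is valid; here both are simulated.
import numpy as np

rng = np.random.default_rng(0)

def make_hidden_states(n, dim, signal=2.0):
    """Simulate hidden states whose labels are decodable along one axis."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, dim))
    X[:, 0] += signal * (2 * y - 1)  # class-dependent shift on dimension 0
    return X, y

def train_probe(X, y, lr=0.1, steps=500):
    """Train a logistic-regression probe with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * float(np.mean(p - y))         # gradient step on bias
    return w, b

def probe_accuracy(w, b, X, y):
    preds = (X @ w + b) > 0
    return float(np.mean(preds == y))

X_train, y_train = make_hidden_states(800, 64)
X_test, y_test = make_hidden_states(200, 64)
w, b = train_probe(X_train, y_train)
acc = probe_accuracy(w, b, X_test, y_test)
print(f"probe accuracy: {acc:.2f}")
```

If the probe's held-out accuracy exceeds the model's own prompted accuracy on the same examples, that gap is evidence of latent knowledge the model does not express when asked directly, which is the asymmetry the paper reports for rhetorical analogies.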

Tags
models