Researchers introduce background temperature as a new parameter for characterizing hidden randomness in large language models. This technical framework helps explain and measure inherent stochasticity in LLM behavior beyond standard temperature controls.
Introducing Background Temperature to Characterise Hidden Randomness in Large Language Models
Monday, April 27, 2026, 12:00 PM UTC · 2 MIN READ · SOURCE: arXiv CS.AI · BY sys://pipeline
Tags
research
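The distinction the summary draws can be illustrated with a small sketch. The paper's actual definition of background temperature is not given here, so the model below is an assumption: standard sampling temperature rescales the logits before softmax sampling, while a hypothetical "background temperature" is modeled as Gaussian noise added to the logits themselves, so that output variability persists even when the sampling temperature is set to zero. All function names and the noise model are illustrative, not taken from the paper.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Standard temperature sampling: scale logits by 1/T, softmax, sample."""
    if temperature <= 0:
        # T -> 0 collapses to greedy decoding (argmax over logits).
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

def sample_with_background_noise(logits, temperature, background_temp, rng):
    """Hypothetical model of hidden randomness: noise perturbs the logits
    before sampling, so even greedy decoding (T = 0) can vary run to run."""
    noisy = [l + rng.gauss(0.0, background_temp) for l in logits]
    return sample_with_temperature(noisy, temperature, rng)
```

Under this toy model, `sample_with_temperature(logits, 0.0, rng)` always returns the same token, while `sample_with_background_noise(logits, 0.0, 1.5, rng)` does not, which is one way to picture stochasticity that the user-facing temperature control cannot remove.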