GPT-OSS-120B
4 mentions across all digests
GPT-OSS-120B is OpenAI's 120-billion-parameter open-weight model and its first open-weight release since GPT-2. It is designed for single-GPU deployment via MXFP4 quantization and has been evaluated in agentic RL training and enterprise agent failure studies.
Evolutionary Search for Automated Design of Uncertainty Quantification Methods
Evolutionary search driven by LLMs designs uncertainty quantification methods that outperform hand-crafted baselines by 6.7%, but it also reveals divergent model strategies: Claude evolves complex estimators while GPT prefers simpler schemes, and Opus 4.6 unexpectedly regresses.
From GPT-2 to gpt-oss: Analyzing the Architectural Advances
OpenAI releases gpt-oss-120b and gpt-oss-20b with MXFP4 quantization, enabling single-GPU deployment and marking a strategic shift toward openness after six years of closed-weight models.
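The MXFP4 format mentioned above (from the OCP Microscaling spec) stores weights as 4-bit E2M1 floats, with each block of up to 32 values sharing a single power-of-two scale. The sketch below illustrates the idea in plain Python; it is an assumption-laden toy, not gpt-oss's actual quantization code, and the block size, scale rule, and rounding are simplified from the spec.

```python
import math

# The 8 non-negative magnitudes representable in E2M1 (a 4-bit float).
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Quantize a block of <=32 floats to (shared power-of-two scale, codes).

    Each code is (E2M1 magnitude, sign); real MXFP4 packs these into 4 bits.
    """
    amax = max(abs(x) for x in block) or 1.0
    # Shared scale per the MX spec idea: floor(log2(amax)) minus the element
    # format's max exponent (2 for E2M1), so amax lands inside E2M1's range.
    shared_exp = math.floor(math.log2(amax)) - 2
    scale = 2.0 ** shared_exp
    codes = []
    for x in block:
        mag = abs(x) / scale
        # Round to the nearest representable E2M1 magnitude.
        q = min(E2M1_VALUES, key=lambda v: abs(v - mag))
        codes.append((q, -1.0 if x < 0 else 1.0))
    return scale, codes

def dequantize_block(scale, codes):
    """Reconstruct approximate floats from the shared scale and codes."""
    return [sign * q * scale for q, sign in codes]

vals = [0.1, -0.7, 2.3, 5.0]
scale, codes = quantize_block(vals)
approx = dequantize_block(scale, codes)  # coarse 4-bit reconstruction
```

Because each element carries only 4 bits, the reconstruction is coarse (here 0.1 rounds to 0.0 and 5.0 to 4.0); the shared block scale is what keeps this usable for large weight matrices, and it is the main reason a 120B-parameter model fits on a single GPU.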
IBM and UC Berkeley Diagnose Why Enterprise Agents Fail Using IT-Bench and MAST
gpt-oss-120b & gpt-oss-20b Model Card