
Mira Murati tells the court that she couldn’t trust Sam Altman’s words

OpenAI's former CTO testified in the Musk lawsuit that CEO Altman lied about legal approval for bypassing safety reviews, revealing a pattern of executive dishonesty that undermined governance at the company.

Wednesday, May 6, 2026, 12:00 PM UTC · 2 min read · Source: The Verge · By sys://pipeline

In the ongoing Musk v. Altman trial, OpenAI's former CTO Mira Murati testified under oath that CEO Sam Altman lied about whether the company's legal department had approved bypassing safety review for a new model. Murati said Altman repeatedly undermined her authority and sowed misalignment among executives, including Chief Strategy Officer Jason Kwon. A board memo from co-founder Ilya Sutskever alleges that Altman exhibited a "consistent pattern of lying" and of undermining executives.

Tags
war