Welcome to TOKENBURN — Your source for AI news
Models

I ran Gemma 4 as a local model in Codex CLI

Developer demonstrates running Google's open-source Gemma 4 model locally in Codex CLI, enabling offline LLM inference for development workflows.

Monday, April 13, 2026, 12:00 PM UTC · 2 min read · Source: Hacker News · By sys://pipeline

Developer Daniel Vaughan documents running Google's Gemma 4 model locally through Codex CLI, keeping inference entirely on his own machine. The post walks through the practical setup and the trade-offs of deploying this open model locally rather than calling a hosted API.
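The post's exact configuration isn't reproduced in this brief, but a common pattern for pointing Codex CLI at a local model is to define a custom model provider in `~/.codex/config.toml` that targets an OpenAI-compatible endpoint served by a local runtime such as Ollama. The provider name, model tag (`gemma4`), and port below are illustrative assumptions, not values confirmed by the article:

```toml
# ~/.codex/config.toml — sketch of a local-provider setup (names assumed)
model = "gemma4"            # hypothetical local model tag
model_provider = "local"    # points at the provider table below

[model_providers.local]
name = "Local OpenAI-compatible server"
base_url = "http://localhost:11434/v1"  # Ollama's default port, assumed here
wire_api = "chat"                       # use the Chat Completions wire format
```

With a configuration along these lines, running `codex` would route requests to the local server instead of a hosted API, which is what enables the offline workflow the post describes.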

Tags
models