Google has released Gemma 4, a new family of four open-weight models optimized for local inference, spanning mobile-focused 2B and 4B models, a 26B Mixture-of-Experts (MoE) model, and a 31B dense variant. Crucially, Google is dropping its restrictive custom license in favor of Apache 2.0, addressing a major developer complaint. The MoE model activates only 3.8B parameters per token at inference time, offering strong tokens-per-second throughput on consumer hardware.
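To see why a 26B MoE model can run with only 3.8B active parameters, here is a minimal sketch of top-k expert routing: a router scores the experts per token and only the k best actually run, so most of the model's weights stay idle on any given forward pass. The expert count, layer sizes, and router below are illustrative assumptions, not Gemma 4's actual architecture.

```python
import numpy as np

# Illustrative top-k Mixture-of-Experts routing (NOT Gemma's real design).
rng = np.random.default_rng(0)

D = 64          # hidden size (illustrative)
N_EXPERTS = 8   # total experts in the layer
TOP_K = 2       # experts activated per token

# Each "expert" is a simple linear layer; the router maps hidden -> expert scores.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x through only its top-k experts."""
    scores = x @ router
    top = np.argsort(scores)[-TOP_K:]        # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # softmax over just the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
out = moe_forward(token)

# Only TOP_K / N_EXPERTS of the expert parameters touch each token,
# which is how total and "active" parameter counts diverge.
active_fraction = TOP_K / N_EXPERTS
print(out.shape, active_fraction)
```

In a real MoE transformer the routing happens inside every MoE layer, and shared components (attention, embeddings) run for every token, which is why the active count (3.8B here) is a fraction of, but not proportional to, the total (26B).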
Products
Google announces Gemma 4 open AI models, switches to Apache 2.0 license
Google drops Gemma's restrictive license for Apache 2.0 and releases sparse-activation models (26B MoE with 3.8B active parameters) for efficient inference on consumer hardware.
Friday, April 3, 2026, 12:00 PM UTC · 2 min read · Source: Ars Technica · By sys://pipeline
Tags
products
/// RELATED
Infrastructure · 4d ago
Our agent found a bug with WireGuard in Google Kubernetes Engine
Lovable's AI agents uncovered a concurrency bug in Google's WireGuard integration for GKE by analyzing logs at scale, revealing a concurrent map-access panic that caused random pod crashes and required an MTU mismatch fix.
Safety · 4d ago
cPanel and WHM Authentication Bypass – CVE-2026-41940
Session data sanitization flaw in cPanel & WHM (CVE-2026-41940) enabled zero-day authentication bypasses against millions of hosted domains before patches shipped.