Gemma
6 mentions across all digests
Gemma is Google's family of open-weight AI models, used in on-device applications including speech recognition, medical AI (MedGemma), and multimodal fine-tuning on consumer hardware such as Apple Silicon.
Google quietly launched an AI dictation app that works offline
Google brings on-device speech recognition to iOS with Edge Eloquent, an offline-first dictation app using Gemma that filters filler words and optionally enhances text via Gemini cloud.
MedGemma 1.5 Technical Report
Google's MedGemma 1.5 technical report extends its healthcare-focused model line, adapting the Gemma family to medical applications with improved domain alignment.
Show HN: Gemma 4 Multimodal Fine-Tuner for Apple Silicon
A new LoRA-based fine-tuning toolkit enables multimodal training of Google's Gemma models entirely on Apple Silicon, with no NVIDIA GPUs required.
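The appeal of such toolkits is that LoRA trains only a small low-rank update to each frozen weight matrix, which fits in consumer-hardware memory. A minimal numpy sketch of the core idea (names and dimensions are hypothetical, not the toolkit's API):

```python
# Minimal LoRA sketch: W' = W + (alpha / r) * B @ A, with only A and B trainable.
# Illustrative only; real toolkits apply this inside attention/MLP layers.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # hypothetical sizes; r << d keeps updates cheap

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # zero-init so training starts at W

def lora_forward(x):
    # Base path plus scaled low-rank update (scaling follows the LoRA paper).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the adapted layer matches the frozen base layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B (2 * r * d parameters per layer, versus d * d for full fine-tuning) receive gradients, which is why this fits on Apple Silicon.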
Steerable but Not Decodable: Function Vectors Operate Beyond the Logit Lens
A study of 4,032 prompt pairs across Llama, Gemma, and Mistral shows that function vectors steer LLM outputs via early-layer computational instructions even when logit-lens interpretability cannot decode them, exposing a fundamental gap between how models execute tasks and how we can currently explain them.
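The steering technique at issue can be sketched in toy form: a function vector is estimated as a mean activation difference between task and baseline prompts, then added to an unrelated hidden state to push it toward the task. This is a hedged illustration of the general method, not the paper's code, and all names and shapes here are hypothetical:

```python
# Toy function-vector steering: estimate a vector from activation differences,
# then inject it into a fresh hidden state. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
d = 16

# Pretend hidden states: task prompts carry an extra component along axis 0.
task_h = rng.standard_normal((32, d)) + np.array([2.0] + [0.0] * (d - 1))
base_h = rng.standard_normal((32, d))

# Function vectors are commonly estimated as a mean activation difference.
fv = task_h.mean(axis=0) - base_h.mean(axis=0)

# Adding fv to an unrelated hidden state steers it toward the task direction,
# even though projecting fv through an unembedding (the logit lens) need not
# yield any interpretable token -- the gap the study highlights.
h = rng.standard_normal(d)
steered = h + fv

task_direction = np.array([1.0] + [0.0] * (d - 1))
assert steered @ task_direction > h @ task_direction
```

The point of the toy: the vector demonstrably moves outputs, yet nothing about its raw coordinates explains *why*, mirroring the steerable-but-not-decodable finding.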
Google releases Gemma 4 open models
Google's Gemma 4 open models bring native agentic capabilities directly to developers, enabling autonomous agents to plan and execute tasks with built-in function calling.
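Built-in function calling generally means the model emits a structured tool call that the host application parses, executes, and feeds back. A minimal sketch of that generic loop, using a stubbed model and a hypothetical tool (this is the common agentic pattern, not Gemma 4's actual API):

```python
# Generic function-calling loop: model emits a JSON tool call, host executes it.
# Illustrative only; fake_model and get_weather are stand-ins, not a real API.
import json

def get_weather(city: str) -> str:
    # Hypothetical tool; a real agent would call an external service here.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(prompt: str) -> str:
    # Stand-in for a model response: function-calling models typically emit
    # structured JSON like this when a registered tool matches the request.
    return json.dumps({"tool": "get_weather", "args": {"city": "Zurich"}})

def run_agent(prompt: str) -> str:
    reply = fake_model(prompt)
    try:
        call = json.loads(reply)
    except json.JSONDecodeError:
        return reply  # plain-text answer, no tool call
    return TOOLS[call["tool"]](**call["args"])

print(run_agent("What's the weather in Zurich?"))  # Sunny in Zurich
```

In a real deployment the model would also see the tool result and produce a final natural-language answer; the loop above shows only the dispatch step.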
At least one Fortune 500 company will publicly announce migration of a production AI workload from a frontier model API (OpenAI, Anthropic, or Google) to an open-weight alternative (Llama, Gemma, Mistral), citing cost as the primary driver, within 8 weeks.
Google's internal tension between Cloud (substrate) and DeepMind (models) will surface publicly within 8 weeks, likely as a reorganization or leadership change. Google's entity momentum (+49, the largest absolute gain) is driven entirely by infrastructure plays (the TPU deal with Anthropic, Scion OSS, Gemma's Apache 2.0 license), not by Gemini product wins. When your biggest week is about powering your competitor's models, the product org is losing the internal argument.
Google will announce a hosted Gemma-based coding agent product (not just model weights), a direct competitor to Claude Code and Cursor, within 10 weeks, leveraging Apache 2.0 licensing as a differentiator for enterprise on-prem deployment.
At least 3 open-source local coding agent projects built on Gemma 4 + llama.cpp will each exceed 1,000 GitHub stars within 6 weeks, offering fully offline alternatives to Claude Code and Copilot with zero API costs or subscription fees.
Google's Gemma 4 Apache 2.0 license shift will trigger Meta to relicense Llama 4 (or Llama 5) under a permissive OSI-approved license within 8 weeks, as the restrictive Llama license becomes a competitive disadvantage against both Gemma and Chinese open-weight models.