Google Cloud announced its eighth-generation TPUs, split into the TPU 8t for training and the TPU 8i for inference, delivering 3x faster training and 80% better performance per dollar. The new chips can coordinate more than one million TPUs in a single cluster while consuming less energy. Google continues to offer Nvidia's Vera Rubin GPUs alongside its custom silicon, positioning the TPUs as complements rather than replacements.
Infrastructure
Google Cloud launches two new AI chips to compete with Nvidia
Google's 8th-gen TPUs deliver 3x faster training and 80% better performance-per-dollar, scaling to million-chip clusters to challenge Nvidia's AI infrastructure dominance.
Wednesday, April 22, 2026, 12:00 PM UTC · 2 min read · Source: TechCrunch
Tags
infrastructure
Related
Strategy · Apr 22
Forget one chip to rule them all: With TPU 8, Google has an AI arms race to win
Google's TPU 8 dual-track accelerators (2.8x faster training, 80% higher inference per-dollar efficiency) backed by custom Arm-based Axion CPUs and proprietary network topologies represent an aggressive vertical integration play to control the entire AI hardware stack.
Infrastructure · Apr 22
NVIDIA and Google Cloud Collaborate to Advance Agentic and Physical AI
NVIDIA and Google Cloud cut agentic AI inference costs by 10x with new A5X GPU instances, pairing Vera Rubin compute with Gemini and Nemotron for enterprise deployment at scale.