Google unveiled a pair of workload-optimized TPU 8 accelerators at Cloud Next: the TPU 8t for training (2.8x faster than Ironwood) and the TPU 8i for inference (80% higher performance per dollar). The chips employ specialized architectures—the TPU 8t uses optical-circuit switches supporting up to 9,600 accelerators per pod, while the TPU 8i trades compute for larger SRAM and a collective acceleration engine to reduce MoE latency. Google also replaced x86 with Arm-based Axion CPUs and deployed custom network topologies (Virgo and Boardfly) to minimize scaling losses, signaling a vertical integration strategy in response to intensifying AI hardware competition.
Strategy
Forget one chip to rule them all: With TPU 8, Google has an AI arms race to win
Google's dual-track TPU 8 accelerators (2.8x faster training; 80% better inference performance per dollar), backed by custom Arm-based Axion CPUs and proprietary network topologies, represent an aggressive vertical integration play to control the entire AI hardware stack.
Wednesday, April 22, 2026, 12:00 PM UTC · 2 min read · Source: The Register · By sys://pipeline
Tags
strategy
/// RELATED
Infrastructure · Apr 22
The eighth-generation TPU: An architecture deep dive
Google's TPU 8t and 8i variants eliminate data-preparation bottlenecks with custom Axion CPUs, delivering specialized training and inference hardware optimized for world models and agentic AI at scale.
Infrastructure · Apr 22
Google Cloud launches two new AI chips to compete with Nvidia
Google's 8th-gen TPUs deliver 3x faster training and 80% better performance-per-dollar, scaling to million-chip clusters to challenge Nvidia's AI infrastructure dominance.