Infrastructure
The Tech Stack Powering Wise
Wise's automated deployment system rolled back hundreds of risky releases in 2024 with no human intervention, shifting traffic gradually while monitoring business metrics. Separately, NVIDIA Blackwell GPUs delivered 1.63× LLM inference throughput over H100 at 100+ concurrent requests.
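The automated rollback flow described above can be sketched as a canary rollout loop: shift a growing share of traffic to the new release, watch a health metric at each stage, and roll back automatically if it degrades. This is a minimal illustrative sketch, not Wise's actual system; the function names, the error-rate metric, and the traffic steps are all assumptions.

```python
# Hypothetical sketch of gradual traffic shifting with automatic rollback.
# All names and thresholds are illustrative, not from the source article.

def error_rate(metrics: dict) -> float:
    """Fraction of failed requests in the observed window."""
    total = metrics["success"] + metrics["failure"]
    return metrics["failure"] / total if total else 0.0

def run_canary(observe, threshold=0.01, steps=(1, 5, 25, 50, 100)):
    """Shift traffic to the new release in stages.

    `observe(percent)` is assumed to return request metrics gathered
    while `percent`% of traffic hits the new release. If the monitored
    metric degrades past `threshold`, the rollout stops and the release
    is rolled back without human intervention.
    """
    for percent in steps:
        metrics = observe(percent)
        if error_rate(metrics) > threshold:
            return ("rolled_back", percent)  # revert at this stage
    return ("promoted", 100)  # new release takes all traffic

# A healthy release passes every stage:
healthy = lambda p: {"success": 990, "failure": 5}
print(run_canary(healthy))  # → ('promoted', 100)

# A release that degrades under real load is caught mid-rollout:
bad = lambda p: ({"success": 900, "failure": 100} if p >= 25
                 else {"success": 995, "failure": 5})
print(run_canary(bad))  # → ('rolled_back', 25)
```

Real systems would watch business metrics (payment success rates, latency percentiles) over a soak period rather than a single error-rate snapshot, but the control loop has the same shape.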
Thursday, April 30, 2026 12:00 PM UTC · 2 min read · Source: ByteByteGo
Tags
infrastructure