DeepSeek released preview versions of its V4 series: V4-Pro (1.6T total parameters, 49B active) and V4-Flash (284B total, 13B active), both with 1-million-token context windows and open MIT licensing. V4-Pro is the industry's largest open-weights model, surpassing competitors like Kimi K2.6 and GLM-5.1, and both models carry aggressive pricing ($0.14–$3.48 per million tokens) that significantly undercuts existing alternatives.
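To put the quoted numbers in perspective, here is a minimal back-of-envelope sketch using only figures from the article (the $0.14–$3.48 per-million-token price range and the 1.6T-total/49B-active parameter counts); the helper name `cost_usd` and the choice of a full 1M-token request are illustrative assumptions, not anything DeepSeek publishes.

```python
def cost_usd(tokens: int, price_per_mtok: float) -> float:
    """Cost in dollars of processing `tokens` at a given $/1M-token rate."""
    return tokens / 1_000_000 * price_per_mtok

TOKENS = 1_000_000  # one full context window, per the article

low = cost_usd(TOKENS, 0.14)    # cheapest quoted V4 rate
high = cost_usd(TOKENS, 3.48)   # most expensive quoted V4 rate
print(f"Filling the 1M-token window costs ${low:.2f}-${high:.2f}")

# MoE sparsity: only ~3% of V4-Pro's weights are active per token
active_ratio = 49e9 / 1.6e12
print(f"Active fraction of V4-Pro parameters: {active_ratio:.1%}")
```

The second figure is why sparse mixture-of-experts models like V4-Pro can price so far below dense models of comparable total size: per-token compute scales with active parameters, not total.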
Models
DeepSeek V4 - almost on the frontier, a fraction of the price
DeepSeek's open-weights V4-Pro (1.6T parameters) matches frontier capabilities at 10–50x lower cost than proprietary models, forcing a reckoning on the economic viability of closed-source AI.
Friday, April 24, 2026 12:00 PM UTC · 2 MIN READ · SOURCE: Simon Willison · BY sys://pipeline
Tags
models
/// RELATED
Models · Apr 24
DeepSeek's new models are so efficient they'll run on a toaster ... by which we mean Huawei's NPUs
DeepSeek's open-weights V4 matches frontier model performance while slashing inference costs through novel efficiency techniques, now optimized for Huawei's Ascend NPUs—a major competitive threat to proprietary incumbents.
Research · 3d ago
Reducing ML-KEM-768 encapsulation key sizes by 24 octets
Bit-packing optimization trims ML-KEM-768 post-quantum cryptography encapsulation keys by 24 octets, enabling better UDP packet alignment for practical PQC deployment.