DeepSeek V4 Goes Live and Open-Source, Claiming Parity With the World's Best Closed Models at a Fraction of the Cost
DeepSeek's new V4 family — a 1.6-trillion-parameter model with only 49 billion active — ships with a native 1M context window, runs on Huawei chips, and undercuts frontier-model pricing by an order of magnitude. It launched on the same day OpenAI shipped GPT-5.5.
DeepSeek dropped the most consequential open-source model release of the year on Friday, shipping DeepSeek-V4 Preview with full open weights and a claim that stunned even jaded model-watchers: performance rivaling the world's top closed-source models. As @deepseek_ai announced, the V4-Pro variant packs 1.6 trillion total parameters but routes through just 49 billion active parameters at inference time, thanks to a mixture-of-experts architecture that delivers frontier-grade output at a cost structure that makes competitors' pricing look absurd.
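To see why 1.6 trillion total parameters can coexist with only 49 billion active ones, it helps to sketch how mixture-of-experts routing works in general. The toy below is purely illustrative: the expert count, layer sizes, and top-k value are invented for demonstration and do not reflect DeepSeek-V4's actual architecture or routing scheme, which DeepSeek has not fully detailed here.

```python
import numpy as np

# Toy mixture-of-experts (MoE) layer. All dimensions below are made-up
# illustration values, NOT DeepSeek-V4's real configuration.
rng = np.random.default_rng(0)

N_EXPERTS = 8   # total experts -- total parameter count scales with this
TOP_K = 2       # experts activated per token -- active parameters scale with this
D_MODEL = 16    # token embedding width

# Each expert is a simple feed-forward weight matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
# Router: a linear layer that scores every expert for a given token.
router_w = rng.standard_normal((D_MODEL, N_EXPERTS))

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route one token through only its top-k experts."""
    scores = token @ router_w                  # one score per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the best-scoring experts
    gates = np.exp(scores[top])
    gates /= gates.sum()                       # softmax over the selected experts
    # Only TOP_K of the N_EXPERTS weight matrices are touched for this token;
    # the rest of the model's parameters sit idle, which is where the
    # "huge total, small active" economics come from.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
active_fraction = TOP_K / N_EXPERTS
print(f"active experts per token: {TOP_K}/{N_EXPERTS} ({active_fraction:.0%})")
```

Inference cost tracks the active fraction, not the total: in this sketch only 2 of 8 experts run per token, and the same logic, scaled up, is how a 1.6T-parameter model can price like a far smaller one.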
The numbers circulating on X are staggering, if early. @sdrzn called it "the cheapest SOTA model available at 1/20th the cost of Opus 4.7." @cryptopunk7213 — hyperbolic but directionally representative of the sentiment — claimed V4 matches GPT-5.5 at 86% lower cost. Independent benchmarks will take days to settle, but the initial self-reported results place V4 at or near state-of-the-art on agentic coding and mathematical reasoning tasks, the two domains where frontier models earn their keep in production.