DeepSeek Previews New AI Model That ‘Closes the Gap’ With Frontier Models


Chinese AI lab DeepSeek has launched two preview versions of its newest large language model, DeepSeek V4, a much-awaited update to last year’s V3.2 model and the accompanying R1 reasoning model that took the AI world by storm.

The company says both DeepSeek V4 Flash and V4 Pro are mixture-of-experts models with context windows of 1 million tokens each, enough to allow large codebases or documents to be used in prompts. The mixture-of-experts approach activates only a certain number of parameters per task to lower inference costs.
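The routing idea behind mixture-of-experts can be sketched in a few lines. This is a minimal illustration, not DeepSeek’s actual architecture: the sizes, router, and expert layers below are invented for demonstration, and real models use learned gating inside a full transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 8, 16, 2  # hidden size, expert count, experts activated per token

# Hypothetical weights: one router matrix plus a tiny linear "expert" each.
router_w = rng.normal(size=(D, N_EXPERTS))
expert_w = rng.normal(size=(N_EXPERTS, D, D))

def moe_forward(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w                      # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]          # keep only the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                   # softmax over the chosen experts only
    # Only TOP_K of N_EXPERTS experts run, so most parameters stay inactive
    # on any given token -- the source of the "active parameters" figure.
    return sum(w * (x @ expert_w[i]) for w, i in zip(weights, top))

token = rng.normal(size=D)
out = moe_forward(token)
```

The gap between total and active parameters in the figures below comes from exactly this mechanism: every expert’s weights count toward the total, but each token touches only the few experts its router selects.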

The Pro model has a total of 1.6 trillion parameters (49 billion active), which makes it the biggest open-weight model available, outstripping Moonshot AI’s Kimi K 2.6 (1.1 trillion) and MiniMax’s M1 (456 billion), and more than doubling DeepSeek V3.2 (671 billion). The smaller V4 Flash has 284 billion parameters (13 billion active).

DeepSeek says both models are more efficient and performant than DeepSeek V3.2 due to architectural improvements, and have almost “closed the gap” with current leading models, both open and closed, on reasoning benchmarks.

The company claims its new V4-Pro-Max model outperforms its open-source peers across reasoning benchmarks, and outstrips OpenAI’s GPT-5.2 and Gemini 3.0 Pro on some tasks. On coding competition benchmarks, DeepSeek said both V4 models’ performance is “comparable to GPT-5.4.”

However, the models appear to fall slightly behind frontier models in knowledge tests, specifically against OpenAI’s GPT-5.4 and Google’s latest Gemini 3.1 Pro. This lag suggests a “developmental trajectory that trails state-of-the-art frontier models by about 3 to 6 months,” the lab wrote.

Both V4 Flash and V4 Pro support text only, unlike many of their closed-source peers, which offer support for understanding and generating audio, video, and images.


Notably, DeepSeek V4 is much more affordable than any frontier model available today. The smaller V4 Flash model costs $0.14 per million input tokens and $0.28 per million output tokens, undercutting GPT-5.4 Nano, Gemini 3.1 Flash, GPT-5.4 Mini, and Claude Haiku 4.5. The larger V4 Pro model, meanwhile, costs $0.145 per million input tokens and $3.48 per million output tokens, also undercutting Gemini 3.1 Pro, GPT-5.5, Claude Opus 4.7, and GPT-5.4.
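To put those per-million-token rates in concrete terms, here is a small cost estimator using only the prices quoted above; the request sizes in the example are arbitrary illustrations.

```python
# Per-million-token rates quoted in the article (USD).
RATES = {
    "V4 Flash": {"input": 0.14, "output": 0.28},
    "V4 Pro":   {"input": 0.145, "output": 3.48},
}

def request_cost(model, input_tokens, output_tokens):
    """Estimate the cost in USD of one API request from per-million-token pricing."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# Example: a 100k-token prompt (e.g. a large codebase excerpt) with a 2k-token answer.
for model in RATES:
    print(f"{model}: ${request_cost(model, 100_000, 2_000):.4f}")
```

Even a prompt that fills a tenth of the 1-million-token context window costs only a few cents per request at these rates, which is the point DeepSeek is pressing against its pricier frontier rivals.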

The launch comes a day after the U.S. accused China of stealing American AI labs’ IP at an industrial scale using thousands of proxy accounts. DeepSeek itself has been accused by Anthropic and OpenAI of “distilling,” essentially copying, their AI models.

