Chinese AI startup DeepSeek has launched two advanced AI models, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, claiming performance on par with, or superior to, OpenAI’s GPT-5 and Google’s Gemini-3.0-Pro. The Speciale variant earned top honors in prestigious competitions, including the 2025 International Mathematical Olympiad and the ICPC World Finals.

Built on a novel Sparse Attention mechanism, the 685-billion-parameter models process 128,000-token contexts efficiently, cutting inference costs by a claimed 70%. Notably, the models support “thinking in tool-use”: multi-step reasoning interleaved with calls to coding and search tools, an advance over previous DeepSeek releases.

Released under the permissive MIT license and openly accessible on Hugging Face, the models challenge traditional proprietary AI business models. Regulatory hurdles in Europe and the US, driven by data-privacy and national-security concerns, remain barriers to DeepSeek’s global adoption. The launch signals a shift in AI competition, emphasizing open-source innovation, cost efficiency, and geopolitical tension.
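The article names a Sparse Attention mechanism without describing its internals. As a rough intuition only (this is a generic top-k sparse-attention sketch, an assumption, not DeepSeek’s published design), the cost saving comes from letting each query attend to only a small, highest-scoring subset of keys rather than the full context:

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Single-query sparse attention (illustrative, NOT DeepSeek's actual
    mechanism): only the top_k highest-scoring keys enter the softmax."""
    scores = k @ q / np.sqrt(q.shape[-1])            # score every key: (seq_len,)
    idx = np.argpartition(scores, -top_k)[-top_k:]   # indices of the top_k scores
    sel = scores[idx]
    weights = np.exp(sel - sel.max())                # numerically stable softmax
    weights /= weights.sum()
    return weights @ v[idx]                          # weighted sum over selected values

# Usage: 1,000 keys in context, but only 32 participate in the softmax.
rng = np.random.default_rng(0)
q = rng.standard_normal(64)
k = rng.standard_normal((1000, 64))
v = rng.standard_normal((1000, 64))
out = topk_sparse_attention(q, k, v, top_k=32)
print(out.shape)  # (64,)
```

With `top_k` fixed, the softmax and value aggregation no longer scale with context length, which is the general idea behind sparse attention’s efficiency on long inputs.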