What is everyone waiting for? The answer is simple—it’s an unmatched powerhouse of innovation and capability…

⭐ Model Features: Deep Thought Meets Lightning Speed

🚀 Introducing Qwen3, the world’s most powerful open-source model to date. It marks a milestone as the first domestic model to comprehensively surpass DeepSeek R1, where earlier models could only match its performance.

🤖 Qwen3 is China’s first hybrid reasoning model: it can think deeply about complex questions 🧠 or answer simple queries instantly ⚡. Switching seamlessly between the two modes raises capability where it matters while conserving compute, a true game-changer.
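If you want to try the two modes yourself, here is a minimal sketch assuming the Hugging Face Transformers chat-template interface and the `enable_thinking` flag described on the Qwen3 model cards (the repo name and flag are not stated in this post, so treat them as assumptions):

```python
# Minimal sketch: toggling Qwen3 between thinking and non-thinking modes.
# Assumes the Hugging Face Transformers chat-template interface and the
# `enable_thinking` flag from the Qwen3 model cards (not stated in this post).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-30B-A3B"  # assumed Hugging Face repo name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 23? Explain briefly."}]

# Thinking mode: the model emits an internal reasoning block before the answer.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
# Non-thinking mode: skip the reasoning block for a faster, direct reply.
# text = tokenizer.apply_chat_template(
#     messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
# )

inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```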

💡 Deployment requirements have been revolutionized. The flagship model can now be deployed locally on just 4 H20 GPUs 💻, roughly one-third of what R1 demands, cutting deployment costs by more than 60% and putting high-performance AI within reach of far more teams.
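As a rough illustration of what a 4-GPU local deployment looks like, here is a hedged sketch using vLLM’s Python API; the repo name and serving stack are assumptions on our part, not details from the announcement:

```python
# Local-serving sketch, assuming vLLM and the Qwen/Qwen3-235B-A22B repo name
# (assumptions; the post only states "4 H20 GPUs", not the toolchain).
# tensor_parallel_size=4 spreads the weights across the four GPUs.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-235B-A22B", tensor_parallel_size=4)
params = SamplingParams(temperature=0.6, max_tokens=512)
result = llm.generate(["Briefly explain mixture-of-experts models."], params)
print(result[0].outputs[0].text)
```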
🧑‍💻 Agent capabilities have reached new heights, with native support for the MCP protocol and significantly stronger coding abilities. Domestic agent tools have been eagerly awaiting its release.
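Wiring up MCP is more than a short snippet, so as a stand-in here is a hedged sketch of plain function calling against an OpenAI-compatible endpoint; the URL, model name, and `get_weather` tool are purely illustrative assumptions, not part of the announcement and not the MCP protocol itself:

```python
# Hypothetical tool-calling sketch, assuming the model is served behind an
# OpenAI-compatible endpoint (e.g. by vLLM). Endpoint, model name, and the
# get_weather tool below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```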
🌏 Supporting an impressive 119 languages and dialects—including regional tongues like Javanese and Haitian Creole—Qwen3 ensures that AI accessibility knows no borders.
📊 Trained on a staggering 36 trillion tokens, double the amount used for Qwen2.5, the training data encompasses not only web content but also extensive PDF material and synthesized code snippets.
🏠 Meet the Qwen3 Family:
A total of 8 models are being open-sourced, including 2 MoE models and 6 Dense models.
– 2 MoE Models:
– Flagship Qwen3-235B-A22B, with only 22B active parameters, cutting deployment cost to roughly one-third of DeepSeek R1’s.
– Mini Qwen3-30B-A3B, boasting just 3B active parameters, offering performance on par with Qwen2.5-32B, perfect for consumer-grade GPU deployment.
– 6 Dense Models: 0.6B, 1.7B, 4B, 8B, 14B, 32B
– The lightweight 0.6B model can even be deployed on smartphones, bringing advanced AI directly into your pocket.
Qwen3 is here, fully open-sourced and ready for you to explore. Discover it today on the official website or GitHub.