Alibaba Unveils Preview of QwQ-Max Deep Reasoning Model

2025-02-25

Alibaba has launched QwQ-Max, a deep reasoning model built on the Qwen2.5-Max architecture as part of the Qwen series. A preview version, QwQ-Max-Preview, is now publicly available, offering a glimpse of the upcoming official release. The model excels at deep reasoning, mathematical calculation, programming, and task management across a range of domains, and shows particular promise in intelligent agent workflows.

QwQ-Max's core capabilities are broad. In reasoning, it addresses complex logical problems and knowledge-based questions quickly and accurately, outperforming DeepSeek R1 in tests. It also generates high-quality code, helping developers work more productively. By integrating external tools such as web search and image and video generation, QwQ-Max can invoke these tools based on user instructions to deliver a more complete service. Together, these capabilities make QwQ-Max well suited to programming assistance, content creation, and knowledge-based inquiry, serving a wide range of user needs.

On performance, LiveCodeBench evaluations show QwQ-Max-Preview matching o1-medium and surpassing DeepSeek R1, indicating its efficiency and accuracy on complex tasks.

Using QwQ-Max is straightforward. Visit the official QwQ-Max website and enable the "Deep Thinking" feature in the web interface. Then enter a question or task in the dialogue box, such as a math problem, a code-generation request, or a creative-writing prompt, and the model will generate an appropriate response or solution.
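The announcement describes only the web flow above, but other Qwen models are served through OpenAI-compatible chat-completions endpoints, so a programmatic request would plausibly take the same shape. The sketch below builds such a request payload; the model identifier `qwq-max-preview` and the endpoint style are assumptions, not details confirmed by the announcement.

```python
import json

def build_chat_request(prompt: str, model: str = "qwq-max-preview") -> str:
    """Build an OpenAI-style chat-completions payload as a JSON string.

    The model name is a hypothetical placeholder; check the provider's
    documentation for the actual identifier once the model is released.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(payload)

# The resulting JSON string would be POSTed to the provider's
# chat-completions endpoint with an API key.
print(build_chat_request("Solve: is 2^31 - 1 prime?"))
```

This only assembles the request body; authentication and the exact endpoint URL would follow whatever API Alibaba publishes alongside the official release.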

Notably, QwQ-Max is planned to be open-sourced under the Apache 2.0 license, alongside related applications and smaller reasoning models such as QwQ-32B. This initiative aims to serve different user groups and to promote broader application and advancement of the technology.