Qwen 3

Qwen 3 — released by Alibaba Cloud on 28 April 2025 — redefines what an open-source large-language model can do. From a pocket-friendly 0.6 B-parameter model to the flagship 235 B-parameter Mixture-of-Experts, every variant ships under the permissive
Apache 2.0 license. Trained on a colossal 36 trillion-token corpus and fluent in 119 languages, Qwen 3 competes head-to-head with GPT-4-class systems — yet remains fully transparent and free to fine-tune. Looking for the bigger picture? Explore the complete ecosystem on the Qwen AI homepage.

Stand-out upgrades include a dual-mode Hybrid Reasoning Engine, sparsely-activated MoE layers for maximum FLOPS-per-dollar, and monstrous context windows — 32 K native and up to 128 K with YaRN. Below you’ll find deep-dive specs, benchmark results, and a step-by-step local install using Ollama. Prefer the hosted route? Fire up Qwen AI Chat and start exploring.


Alibaba Cloud Qwen 3 model family overview

Qwen 3 Model Line-up: Dense and MoE

Choose from lean dense models for edge devices to heavyweight MoE titans for research labs. All variants share the same tokenizer and instruction format, so you can swap models without rewriting code.

Dense Models (0.6 B → 32 B)

Perfect for chatbots, real-time RAG, and multi-lingual assistants. The 8 B variant runs in <10 GB VRAM with GGUF quantization, yet scores >80 on MT-Bench.
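
Want to see what that looks like in practice? Below is a minimal sketch of loading a quantized 8 B GGUF build with llama-cpp-python; the repo id, filename pattern, and prompt are illustrative placeholders and depend on which quantization you actually download.

# Sketch: run a quantized Qwen3-8B GGUF build with llama-cpp-python.
# repo_id and filename are placeholders; point them at the quantization you downloaded.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen3-8B-GGUF",   # assumed Hugging Face GGUF repo
    filename="*q4_k_m.gguf",        # a ~4-bit quant keeps VRAM under ~10 GB
    n_ctx=32768,                    # native context window
    n_gpu_layers=-1,                # offload all layers to the GPU if it fits
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize Qwen 3 in one sentence."}]
)
print(out["choices"][0]["message"]["content"])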

Mixture-of-Experts Variants

  • Qwen3-30B-A3B — 30.5 B total, ≈3.3 B active. One high-end RTX 4090 handles it in real time.
  • Qwen3-235B-A22B — 235 B total, 22 B active (8 of 128 experts/token). Hits GPT-4-level reasoning while halving inference cost versus a monolithic dense model.

Specification Matrix

Model             Type    Total Params   Active Params   Native Ctx   YaRN Ctx
Qwen3-0.6B        Dense   0.6 B          –               32 K         –
Qwen3-1.7B        Dense   1.7 B          –               32 K         –
Qwen3-4B          Dense   4 B            –               32 K         128 K
Qwen3-8B          Dense   8.2 B          –               32 K         128 K
Qwen3-14B         Dense   14 B           –               32 K         128 K
Qwen3-32B         Dense   32.8 B         –               32 K         128 K
Qwen3-30B-A3B     MoE     30.5 B         ≈3.3 B          32 K         128 K
Qwen3-235B-A22B   MoE     235 B          ≈22 B           32 K         128 K

Official specs — April 2025 launch.

Getting Started with Qwen 3

Cloud: Hit Qwen endpoints on Alibaba Cloud Model Studio / DashScope with OpenAI-compatible calls.
Local: Download weights from Hugging Face, ModelScope or GitHub, then serve with vLLM, llama.cpp, SGLang, or Ollama. Quantised GGUF / AWQ / GPTQ builds slash VRAM needs by 70–80 %.
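
Whichever route you pick, the calls look the same, since Model Studio and local servers such as vLLM both expose OpenAI-compatible endpoints. A minimal sketch against the DashScope compatible-mode URL; the model identifier is an assumption, so confirm it in the Model Studio console (swap base_url for http://localhost:8000/v1 if you serve locally with vLLM).

# Sketch of an OpenAI-compatible chat call to Qwen 3 on Alibaba Cloud.
# base_url and model name are assumptions; verify both for your account and region.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

resp = client.chat.completions.create(
    model="qwen3-235b-a22b",   # assumed model identifier
    messages=[{"role": "user", "content": "Explain YaRN context scaling in two sentences."}],
)
print(resp.choices[0].message.content)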

Run Qwen 3 Locally with Ollama

  1. Install Ollama for Windows/macOS/Linux.
  2. Pull and run a model (pick one; the same tags work in the API sketch below):
    ollama run qwen3:8b-q4_K_M
    ollama run qwen3:30b-a3b-q4_K_M
  3. Chat: type your prompt and watch Qwen 3 think.
  4. Exit: /bye.
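
Prefer to script against the model instead of typing into the terminal? Ollama also serves an HTTP API on localhost; here is a minimal sketch against its /api/chat endpoint, assuming the default port and one of the tags pulled in step 2.

# Sketch: call the locally running Ollama server instead of the interactive CLI.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen3:8b-q4_K_M",
        "messages": [{"role": "user", "content": "Give me three uses for a 128 K context window."}],
        "stream": False,   # return one JSON object instead of a token stream
    },
    timeout=300,
)
print(resp.json()["message"]["content"])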

Hardware Requirements

  • 0.6–1.7 B: 4 GB VRAM or even CPU-only for prototype bots.
  • 4–8 B: 8–16 GB VRAM — gaming laptop territory.
  • 14–32 B & 30 B-MoE: 24 GB+ VRAM; runs comfortably on a single 4090 or A6000.
  • 235 B-MoE: multi-GPU H100 or A100 cluster; Kubernetes + vLLM recommended.

Architectural Highlights

Qwen 3 keeps the Transformer backbone but layers on Grouped Query Attention for speed, QK-Norm for stability, and a 128-expert MoE grid with load-balancing loss to avoid expert collapse. SwiGLU activations and RoPE-scaling extend context without hurting throughput.
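
To make the routing idea concrete, here is a toy PyTorch sketch of top-k expert selection with a load-balancing auxiliary loss. It illustrates the general technique at miniature scale, not Qwen 3's actual code, and a plain SiLU feed-forward stands in for the real SwiGLU experts.

# Toy sketch of sparse MoE routing with a load-balancing auxiliary loss.
# Scaled down for illustration; Qwen 3's MoE layers use 128 experts with 8 active per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=256, d_ff=512, n_experts=8, k=2):
        super().__init__()
        self.n_experts, self.k = n_experts, k
        self.gate = nn.Linear(d_model, n_experts, bias=False)   # router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                # x: (tokens, d_model)
        logits = self.gate(x)                            # (tokens, n_experts)
        topk_w, topk_idx = logits.topk(self.k, dim=-1)   # pick k experts per token
        topk_w = F.softmax(topk_w, dim=-1)               # renormalise their weights
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):        # only selected experts run
            tok, slot = (topk_idx == e).nonzero(as_tuple=True)
            if tok.numel():
                out[tok] += topk_w[tok, slot].unsqueeze(-1) * expert(x[tok])
        # Load-balancing loss: discourages the router from collapsing onto a few experts.
        # f = fraction of tokens routed to each expert, p = mean router probability.
        f = F.one_hot(topk_idx, self.n_experts).float().sum(dim=1).mean(dim=0)
        p = F.softmax(logits, dim=-1).mean(dim=0)
        aux_loss = self.n_experts * (f * p).sum()
        return out, aux_loss

y, aux = TopKMoE()(torch.randn(16, 256))
print(y.shape, float(aux))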

Inside the Hybrid Reasoning Engine

Thinking Mode — emits explicit <think> chains, ideal for maths, STEM, multi-hop logic.
Fast Mode — bypasses chain-of-thought to cut latency for support chat and search.
Thinking Budget — hard-cap reasoning tokens: e.g. max_thought_tokens=128.
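
A minimal sketch of toggling the two modes with Hugging Face transformers, assuming the enable_thinking switch exposed through the Qwen 3 chat template; thinking-budget parameter names vary by serving stack, so none is shown here.

# Sketch: switch between Thinking Mode and Fast Mode via the chat template.
# Assumes the enable_thinking flag described on the Qwen 3 model cards.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 23?"}]

# Thinking Mode: the template opens a <think> block the model fills in before answering.
prompt = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
# Fast Mode: the same call with enable_thinking=False skips the reasoning trace.

inputs = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=512)[0], skip_special_tokens=True))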

Training & Alignment

Stage 1: 30 T mixed-quality web+code at 4 K seq → broad world knowledge.
Stage 2: 5 T high-quality STEM, coding & reasoning → analytic strength.
Stage 3: 32 K seq long-context bootstrapping with hundreds of B tokens.
Post-training adds multi-round SFT, reward-model RLHF, reasoning-focused RL, and on-policy distillation that cuts compute by 90 % versus pure RL.

Context and Model Context Protocol (MCP)

Native 32 K context (GPU-friendly) extends to 128 K with YaRN. MCP lets Qwen 3 call JSON-defined tools out-of-the-box:

{
  "name": "lookupWeather",
  "description": "Get current weather",
  "parameters": { "type": "object", "properties": { "city": { "type": "string" } }, "required": ["city"] }
}

Your agent executes the call and returns the JSON result; Qwen 3 ingests it directly, with no custom parsing needed.
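
As a sketch of how that wiring looks through an OpenAI-compatible client; the local endpoint, model tag, and the weather lookup itself are placeholders, and a real agent would call an actual weather service.

# Sketch: pass the lookupWeather tool to Qwen 3 and feed the JSON result back.
# base_url, model, and the hard-coded weather dict are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # assumed local endpoint

tools = [{
    "type": "function",
    "function": {
        "name": "lookupWeather",
        "description": "Get current weather",
        "parameters": {"type": "object",
                       "properties": {"city": {"type": "string"}},
                       "required": ["city"]},
    },
}]

messages = [{"role": "user", "content": "Do I need an umbrella in Hangzhou today?"}]
resp = client.chat.completions.create(model="qwen3:8b", messages=messages, tools=tools)
call = resp.choices[0].message.tool_calls[0]        # the model decides to call the tool
args = json.loads(call.function.arguments)          # e.g. {"city": "Hangzhou"}

weather = {"city": args["city"], "condition": "light rain", "temp_c": 21}  # your real lookup here
messages += [resp.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": json.dumps(weather)}]
followup = client.chat.completions.create(model="qwen3:8b", messages=messages, tools=tools)
print(followup.choices[0].message.content)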

Benchmarks & Comparisons

Qwen3-235B-A22B (thinking mode)
  • Arena-Hard 95.6 (≈ Gemini 2.5 Pro)
  • AIME’24 85.7 (math)
  • LiveCodeBench 70.7 (coding)
  • BFCL 70.8 (tool use)
Qwen3-30B-A3B hits Arena-Hard 91.0 at 1⁄8th the GPU budget of flagship dense models.
The compact 8 B dense model matches Llama 3 8 B on MMLU and beats it on C-Eval and MGSM.

Prime Use Cases

  • Enterprise RAG: combine 128 K context with vector search for instant SOP answers.
  • Autonomous Agents: MCP makes tool orchestration trivial.
  • Code Co-pilots: 30 B-MoE solves 70 % of LiveCodeBench tasks out-of-the-box.
  • Multilingual CX: 119-language fluency for support chat and localisation.
  • Long-form Analysis: ingest 500-page PDFs without chunking.
  • Robotics / Edge: 1.7 B dense on Jetson Orin for low-latency control.

Next Steps

Qwen 3 delivers open-source SOTA with commercial-grade licensing. Fork it on GitHub, fine-tune on domain data, or scale instantly via Alibaba Cloud APIs. Need multimodal? Pair it with Qwen 2.5-Omni.