Qwen
Best Way to Run 31B Models on a Laptop? Treat Them Like Databases
LARQL decompiles transformer weights into a queryable graph called a vindex. The project pitches a new shape for local inference: walk a subgraph, patch facts, stream from disk. Here's what's real, what's claimed, and what's still research.
Qwen's Architect Just Walked Out the Door
Junyang Lin, the technical lead and public face of Qwen, has left Alibaba. Two other senior team members have gone with him. What this means for the model family that runs on half the local AI setups in the world.
OpenClaw Model Combinations: What to Pair for Each Task
Stop running one model for everything in OpenClaw. Use Qwen 2.5 Coder 32B for autocomplete, Qwen 3.5 27B for planning, and Qwen3-Coder-Next for agentic coding. Combos by VRAM tier.
Local AI for Small Business: Email, Invoicing, and Customer Support Without Monthly Subscriptions
A 5-person team spends $1,500-3,000/year on AI subscriptions. A $600 mini PC running Ollama replaces all of them. Here's the setup, the workflows, and the math.
Run Your Coding Agent on Local Models with PI Agent + Ollama
PI Agent is a free, open-source coding agent that works with any model. Set it up with Ollama to run privately on Qwen 3.5 or Qwen3-Coder-Next, with zero API costs.
Qwen 3.5 Locally — 27B vs 35B-A3B vs 122B: Which Model Fits Your GPU
Qwen 3.5 and 3.6 on local hardware. 27B dense vs 35B-A3B MoE vs 122B compared. VRAM tables, community-reported tok/s on an RTX 3090, and which to pick for your card.
Best Qwen 3.5 Setup: Which Model Fits Your GPU (Complete Cheat Sheet)
Pick the right Qwen 3.5 or 3.6 model for your hardware. Covers 0.8B through 397B with VRAM requirements, quant recommendations, and benchmarks for every GPU tier. Updated April 2026 with Qwen 3.6-35B-A3B coverage.
Qwen vs Llama vs Mistral: Which Model Family Should You Build On?
Qwen supports 201 languages and has a model for every task. Llama has the biggest community. Mistral pioneered efficient MoE. Decision framework for choosing your model family in 2026.
Running 70B Models Locally — Exact VRAM by Quantization
Llama 3.3 70B needs 43GB at Q4, 75GB at Q8, 141GB at FP16. Here's every quant level, which GPUs fit, real speeds, and when 32B is the smarter choice.
Local LLMs vs Claude: When Each Actually Wins
Qwen 3 32B matches Claude on daily tasks at zero marginal cost. Claude still wins on 200K-token documents and multi-step debugging. Benchmarks, pricing, and when to use each.
Best Qwen Models Ranked: Which to Run Locally
A complete guide to the Qwen model family, covering Qwen 3.5, Qwen 3, Qwen 2.5 Coder, and Qwen-VL. VRAM requirements, Ollama setup, the Gated DeltaNet architecture, and benchmarks vs Llama and DeepSeek.
Best Local LLMs for Math & Reasoning: What Actually Works
The best local LLMs for math and reasoning tasks, ranked by VRAM tier. AIME and MATH benchmarks for DeepSeek R1, Qwen 3 in thinking mode, and Phi-4-reasoning.