DeepSeek V3.2 Guide: What Changed and How to Run It Locally
DeepSeek V3.2 competes with GPT-5 on benchmarks. The full model needs 350GB+ VRAM. But the R1 distills run on a $200 used GPU — and they're shockingly good.
DeepSeek Models Guide: R1, V3, and Coder
A complete guide to running DeepSeek's R1, V3, and Coder models locally: which R1 distill to pick for your GPU, VRAM requirements, and benchmarks vs Qwen 3.
Best Local LLMs for Math & Reasoning: What Actually Works
The best local LLMs for math and reasoning tasks, ranked by VRAM tier, with AIME and MATH benchmarks for DeepSeek R1, Qwen 3 in thinking mode, and Phi-4-reasoning.