Phi Models Guide: Microsoft's Small but Mighty LLMs
Phi-4 14B scores 84.8% on MMLU — matching models five times its size — and fits on a 12GB GPU at Q4 quantization. Covers the full Phi lineup from 3.8B to 14B, with VRAM requirements, benchmarks, and honest weaknesses.
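The "14B at Q4 on a 12GB GPU" claim checks out with a back-of-envelope estimate. A rough sketch, assuming ~4.5 effective bits per weight for Q4-style quants and ~20% overhead for the KV cache and runtime buffers (both figures are assumptions for illustration, not numbers from the guide):

```python
def vram_gb(params_billion: float, bits_per_weight: float = 4.5,
            overhead: float = 1.2) -> float:
    """Rough GPU memory needed to load a quantized model, in GB.

    Assumes ~4.5 effective bits/weight (typical of Q4-style quants)
    and ~20% overhead for KV cache and runtime buffers — both
    illustrative assumptions, not measured values.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(f"Phi-4 14B @ Q4: ~{vram_gb(14):.1f} GB")    # under 12 GB
print(f"Phi-3 3.8B @ Q4: ~{vram_gb(3.8):.1f} GB")  # comfortably small
```

The estimate lands under 12 GB for a 14B model, leaving headroom for context; longer contexts grow the KV cache and eat into that margin.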
Gemma Models Guide: Google's Lightweight Local LLMs
Gemma 3 27B beats Gemini 1.5 Pro on several benchmarks while running on a single GPU, and the 4B model outperforms last generation's Gemma 2 27B. Covers the full lineup from 1B to 27B, with VRAM requirements, speeds, and honest comparisons.