Run vision language models locally with Ollama. Qwen2.5-VL, Gemma 3, Llama 3.2 Vision, and Moondream compared, with VRAM requirements and real benchmarks.