Gemma 4 Just Dropped: What Local AI Builders Need to Know
Google's Gemma 4 is here: dense and MoE variants, an Apache 2.0 license, and multimodal support for vision and audio. VRAM requirements, benchmarks, and how it stacks up against Qwen 3.5.
TurboQuant Explained: How Google's KV Cache Trick Cuts Memory 6x With Zero Quality Loss
Google's TurboQuant compresses the KV cache 6x with zero quality loss. Here's what it actually does, how it works in llama.cpp and MLX, and what it means for running bigger models on your GPU.
Gemma Models Guide: Google's Lightweight Local LLMs
Gemma 3 27B beats Gemini 1.5 Pro on benchmarks and runs on a single GPU, and the 4B model outperforms Gemma 2 27B. The full lineup from 1B to 27B with VRAM needs, speeds, and honest comparisons.