Best Way to Run 31B Models on a Laptop? Treat Them Like Databases
LARQL decompiles transformer weights into a queryable graph it calls a vindex. The project pitches a new shape for local inference: walk a subgraph, patch facts in place, stream weights from disk. Here's what's real, what's claimed, and what's still research.
Gemma Models Guide: Google's Lightweight Local LLMs
Gemma 3 27B beats Gemini 1.5 Pro on several benchmarks and runs on a single GPU, while the 4B outperforms the previous-generation Gemma 2 27B. The full lineup from 1B to 27B, with VRAM requirements, speeds, and honest comparisons.