Run LLMs on Old Phones: A Practical Guide to Mobile AI Inference
That old Pixel 6 or Galaxy S21 in your drawer can run a local LLM. Realistic tok/s by phone tier, Termux setup, app options, and an honest phone-vs-Raspberry-Pi comparison.
Home Assistant + Local LLM: Voice Control Your Smart Home Without the Cloud
Set up fully local voice control with Home Assistant, Ollama, Whisper, and Piper. No Alexa, no cloud, no subscriptions. Wyoming protocol pipeline, model picks, and hardware options.
OpenClaw on Raspberry Pi: What Actually Works (and What Doesn't)
A Pi 5 with 8 GB of RAM runs OpenClaw as a gateway backed by cloud APIs; local LLMs manage 2-7 tok/s on 1.5B-3B models. Step-by-step setup for llama.cpp, Ollama, and OpenClaw on ARM64.
Best Models Under 3B: Small LLMs That Work
The best models under 3B parameters for laptops, old GPUs, Raspberry Pis, and phones. What works, what doesn't, and which tiny LLM to pick for your use case.