Tutorial
Intent Engineering for Local AI Agents: A Practical Guide
Stop telling your agent to 'be helpful.' Start encoding specific goals, decision boundaries, and value hierarchies it can actually act on. Starter template included.
Run Qwen2.5-VL Vision in LM Studio (Setup)
Get Qwen2.5-VL running in LM Studio in 5 minutes. Covers the mmproj file most people miss, correct download links, and how to analyze images and PDFs locally.
How to Update Models in Ollama — Keep Your Local LLMs Current
Ollama doesn't auto-update models. Run ollama pull model:tag to grab the latest version — only changed layers download. Use ollama show to check what you have, and a simple loop to update everything at once.
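The "simple loop to update everything" mentioned above can be sketched as follows. This is a sketch, not the article's exact script: it assumes `ollama list` prints a header row with the model name in the first column, and the model names in the sample output are hypothetical.

```shell
# Hypothetical sample of `ollama list` output (column layout assumed):
sample='NAME            ID              SIZE      MODIFIED
llama3.1:8b     abc123def456    4.9 GB    2 days ago
qwen2.5:7b      def456abc789    4.7 GB    5 days ago'

# Extract the NAME column, skipping the header row — the parsing step
# of the update-all loop:
models=$(printf '%s\n' "$sample" | awk 'NR>1 {print $1}')
echo "$models"

# The actual loop (requires ollama on PATH); only changed layers download:
#   ollama list | awk 'NR>1 {print $1}' | while read -r m; do ollama pull "$m"; done
```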
Session-as-RAG: Teaching Your Local AI to Actually Remember
Build persistent conversation memory for local LLMs. Chunk sessions, embed in ChromaDB, retrieve relevant past exchanges at query time. Full Python implementation with topic splitting and date citations.
Running AI Offline: Complete Guide to Air-Gapped Local LLMs
Ollama works fully offline after one download. Pull models, disconnect the network, and your AI keeps running — no accounts, no APIs, no internet. Setup steps, offline RAG, and portable laptop kits.
Managing Multiple Models in Ollama: Disk Space, Switching, and Organization
Five 7B models eat 20GB before you notice. Check what's using space with ollama list, clean up with ollama rm, and set OLLAMA_KEEP_ALIVE to control memory. A practical cleanup and organization guide.
Embedding Models for RAG: Which to Run Locally
nomic-embed-text is still the default for most local RAG setups — 274MB, 8K context, runs on CPU. But Qwen3-Embedding 0.6B just changed the game. Model picks, VRAM needs, speed numbers, and the chunking mistakes that break retrieval.
Text Generation WebUI Setup Guide (2026)
Install Oobabooga text-generation-webui, load GGUF/GPTQ/EXL2 models, and configure GPU offloading. Covers the settings most guides skip and common error fixes.
Run Your First Local LLM in 15 Minutes
Install Ollama, pull a model, and chat with AI offline — all in 15 minutes. Works on any Mac, Windows, or Linux machine with 8GB RAM. No accounts, no API keys, no fees.