Docker
Docker for Local AI: The Complete Setup Guide for Ollama, Open WebUI, and GPU Passthrough
Run Ollama and Open WebUI in Docker with GPU passthrough. Five copy-paste compose files for NVIDIA, AMD, multi-GPU, and CPU-only setups, plus the Mac gotcha most guides skip.
WSL2 + Ollama on Windows: Complete Setup Guide (GPU Passthrough Included)
Install Ollama in WSL2 with full GPU acceleration in 20 minutes. GPU passthrough, Open WebUI, Docker Compose, VPN fixes, and the gotchas that will waste your afternoon.
Open WebUI Not Connecting to Ollama? Every Fix
Docker networking, wrong OLLAMA_BASE_URL, localhost confusion, WSL2 isolation, missing models, random disconnects. Every Open WebUI + Ollama connection problem with the exact fix.
Ollama API Connection Refused: Quick Fixes
Ollama API returning connection refused? Check if it's running, fix the port, open it to the network, and solve Docker and WSL2 connectivity issues.
Razer AIKit Guide: Multi-GPU Local AI on Your Desktop
Open-source Docker stack bundling vLLM, Ray, LlamaFactory, and Grafana into a single container. Auto-detects GPUs, supports 280K+ Hugging Face models, and handles multi-GPU parallelism.
Open WebUI Setup Guide: ChatGPT UI for Local AI
One Docker command gives you a ChatGPT-like interface for any Ollama model. 120K+ GitHub stars, built-in RAG, voice chat, and multi-model switching, all running locally.
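The one-command Open WebUI setup referenced above is typically a `docker run` along these lines (image tag, port mapping, and volume name follow Open WebUI's documented defaults; verify against the current docs before running):

```shell
# Pull and start Open WebUI, mapping host port 3000 to the app's port 8080
# and persisting data in a named volume. --add-host lets the container
# reach an Ollama server running on the host via host.docker.internal.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

Once the container is up, the UI is reachable at http://localhost:3000 and can be pointed at an Ollama instance with the `OLLAMA_BASE_URL` environment variable.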