ROCm vs CUDA for Local AI in 2026: The Software Gap Nobody Talks About
AMD GPUs have the bandwidth. They have the VRAM. They still trail NVIDIA by roughly 2x on inference speed. Here's why, what actually works on ROCm 7.2, and whether RDNA 4 fixes anything.
Ubuntu 26.04 Is Built for Local AI — What Actually Changes
Ubuntu 26.04 LTS packages NVIDIA CUDA and AMD ROCm in official repos. No more external downloads or dependency nightmares. What's confirmed and what it means for local AI.
ROCm Not Detecting GPU: AMD Troubleshooting Guide
AMD GPU not detected in ROCm? Check the supported-GPU list, fix rocminfo errors, apply the HSA_OVERRIDE_GFX_VERSION workaround for unsupported cards, and fix Ollama/llama.cpp ROCm builds.
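The HSA_OVERRIDE workaround referenced above, sketched for an RDNA 2 card that reports as gfx1031 (e.g. an RX 6700 XT); the value `10.3.0` is an example and depends on your GPU family, so verify it against your card before relying on it:

```shell
# Make the ROCm runtime treat an unsupported gfx1031 GPU as the officially
# supported gfx1030 target. This only works within the same ISA family;
# the value below is an example, not universal.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Confirm the runtime now sees the card (guarded: rocminfo may not be installed).
command -v rocminfo >/dev/null && rocminfo | grep -i gfx || echo "rocminfo not found"
```

The override persists only for the current shell; add it to the service environment (e.g. Ollama's systemd unit) if the model server runs as a daemon.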
llama.cpp Build Errors: Common Fixes for Every Platform
llama.cpp won't build? CMake too old, CUDA not found, Metal not enabling, Visual Studio missing. Exact error messages and one-liner fixes for every platform.
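Two of the failure modes above, as a sketch: `GGML_CUDA` is the CMake flag on current llama.cpp; older releases used `LLAMA_CUBLAS`, so check the release you are actually building. The nvcc path is an assumption for a default CUDA install.

```shell
# Fix 1: "CMake 3.xx or higher is required" -- check your version first
# (guarded so the line succeeds even where cmake is absent).
command -v cmake >/dev/null && cmake --version || echo "cmake not installed"

# Fix 2: "CUDA not found" -- enable the backend and point CMake at nvcc
# explicitly. Run inside a llama.cpp checkout; flag and path are assumptions.
# cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
# cmake --build build --config Release -j
```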
AMD vs NVIDIA for Local AI: Is ROCm Finally Ready?
RX 7900 XTX delivers 85-95% of RTX 4090 performance with 24GB VRAM at $700-950. ROCm 6.x finally works on Linux. Honest benchmarks and the real compatibility gaps.