RTX 5090 vs DGX Spark vs AMD: The Ultimate Local LLM Benchmark (2026)
Real llama.cpp benchmarks across the RTX 5090, DGX Spark, and AMD Ryzen AI Max+ 395 with ROCm and Vulkan. Token speeds, VRAM usage, and which hardware wins for local AI.
ROCm vs CUDA for Local AI in 2026: The Software Gap Nobody Talks About
AMD GPUs have the bandwidth. They have the VRAM. They still lose by 2x on inference speed. Here's why, what actually works on ROCm 7.2, and whether RDNA 4 fixes anything.
ROCm Not Detecting GPU: AMD Troubleshooting Guide
AMD GPU not detected in ROCm? Check supported GPUs, fix rocminfo errors, apply the HSA_OVERRIDE_GFX_VERSION workaround for unsupported cards, and repair Ollama/llama.cpp ROCm builds.
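The HSA_OVERRIDE workaround that entry refers to can be sketched as below. `HSA_OVERRIDE_GFX_VERSION` is the real ROCm runtime variable; the value 11.0.0 (reporting the card as gfx1100) is an illustrative choice for an RDNA 3 card and must match a target your ROCm build actually ships kernels for.

```shell
# Tell the ROCm runtime to treat the GPU as a supported gfx target.
# Illustrative value: 11.0.0 = gfx1100 (RX 7900 XTX class). Use 10.3.0
# (gfx1030) for RDNA 2 cards. Wrong values crash at kernel launch, so
# treat this as a best-effort hack, not official support.
export HSA_OVERRIDE_GFX_VERSION=11.0.0

# Verify the runtime now sees the GPU (guard keeps this harmless
# on machines without ROCm installed).
command -v rocminfo >/dev/null && rocminfo | grep -i gfx \
  || echo "rocminfo not available"
```

The override only affects processes that inherit the environment, so export it in the same shell (or systemd unit) that launches Ollama or llama.cpp.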
Ollama Not Using GPU: Complete Fix Guide
Ollama running on CPU instead of GPU? Diagnose with ollama ps and nvidia-smi, then fix CUDA drivers, ROCm setup, VRAM limits, and Docker GPU passthrough.
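The two-command diagnosis that entry describes can be sketched as follows; it's a minimal check assuming `ollama` and `nvidia-smi` are on PATH, with guard clauses so it degrades gracefully when they are not.

```shell
# Step 1: ask Ollama where the loaded model is running.
# The PROCESSOR column shows "100% GPU", "100% CPU", or a split.
command -v ollama >/dev/null && ollama ps \
  || echo "ollama not installed"

# Step 2: confirm the Ollama runner actually holds VRAM.
# If no process shows up here while a model is loaded, inference
# is falling back to CPU (driver, CUDA, or VRAM-limit problem).
command -v nvidia-smi >/dev/null \
  && nvidia-smi --query-compute-apps=pid,used_memory --format=csv \
  || echo "nvidia-smi not available"
```

A model listed as "100% CPU" in step 1 plus an empty table in step 2 is the classic signature of a broken driver or an out-of-VRAM fallback.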
AMD vs NVIDIA for Local AI: Is ROCm Finally Ready?
RX 7900 XTX delivers 85-95% of RTX 4090 performance with 24GB VRAM at $700-950. ROCm 6.x finally works on Linux. Honest benchmarks and the real compatibility gaps.