Distributed Inference
From 178 Seconds to 19: How a WiFi Laptop Borrowed a GPU's Brain
A WiFi laptop with no GPU ran inference in 19 seconds by borrowing an RTX 3090 across the network. The same query took 178 seconds on CPU. Here's how mycoSwarm's Tailscale mesh made it work.
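The core idea behind that speedup is simple: the laptop never loads the model at all; it ships the prompt to the 3090 box over the Tailscale mesh and streams the answer back. Here is a minimal sketch of that pattern, assuming the GPU node exposes an OpenAI-compatible completion endpoint at its Tailscale address; the IP, port, and model name are placeholders, not mycoSwarm's actual interface:

```python
import requests  # pip install requests

# Placeholder Tailscale address of the RTX 3090 node (100.x.x.x range).
GPU_NODE = "http://100.101.102.103:8000"

def ask_remote(prompt: str) -> str:
    """Send the prompt to the remote GPU node instead of running it on the local CPU."""
    resp = requests.post(
        f"{GPU_NODE}/v1/completions",
        json={"model": "local-model", "prompt": prompt, "max_tokens": 256},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

if __name__ == "__main__":
    print(ask_remote("Explain distributed inference in one paragraph."))
```

The laptop's only jobs are networking and display; all the heavy lifting happens wherever the mesh says the GPU lives.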
Building a Distributed AI Swarm for Under $1,100
A complete bill of materials for a three-node distributed AI cluster: RTX 3090 workstation, ThinkCentre M710Q for light inference, Raspberry Pi 5 coordinator. Every part sourced used or cheap; total cost under $1,100.
Why mycoSwarm Was Born
From Claude Code envy to OpenClaw's 440,000-line JavaScript nightmare to nanobot routing my 'local' queries to Chinese cloud servers. The path to building something different.
mycoSwarm vs Exo vs Petals vs Nanobot: What's Actually Different
Exo distributes inference across Macs. Petals shares GPUs with strangers. Nanobot routes your queries to Chinese clouds without asking. The real question: who controls where your prompts go?
Razer AIKit Guide: Multi-GPU Local AI on Your Desktop
Open-source Docker stack bundling vLLM, Ray, LlamaFactory, and Grafana into a single container. Auto-detects GPUs, supports 280K+ HuggingFace models, and handles multi-GPU parallelism.
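A quick way to sanity-check that GPU auto-detection worked is to ask Ray what it sees, since Ray's resource count is what the stack can shard models across. This is a generic Ray snippet under the assumption that the container (or a local Ray instance) is already running; it is not taken from the AIKit docs:

```python
import ray  # pip install "ray[default]"

# Attach to the already-running Ray cluster inside the stack and report
# how many GPUs it detected; 0 means auto-detection did not find your cards.
ray.init(address="auto", ignore_reinit_error=True)
resources = ray.cluster_resources()
print(f"GPUs visible to the stack: {int(resources.get('GPU', 0))}")
ray.shutdown()
```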