Hallucination
Why Your Local LLM Lies to You (And the Neurons Responsible)
Fewer than 0.1% of neurons cause hallucinations in LLMs. Tsinghua researchers found these neurons control sycophancy, not knowledge. Smaller models are 26% more affected.
Why Your AI Keeps Lying: The Hallucination Feedback Loop
How one bad memory poisoned our entire RAG pipeline — and the immune system we built to fix it. Real code from mycoSwarm's self-correcting retrieval system.