Evergreen reference pages on AI agent security, attack vectors, and how to protect your agent stack.
Why input guardrails fail for agents, why post-execution detection is too late, and how pre-execution blocking works at the decision gate.
Known CVEs (including CVE-2025-59536), CLAUDE.md attack vectors, indirect prompt injection, and practical steps to secure Claude Code CLI, Cowork, and Dispatch.
MCP tool poisoning embeds malicious instructions in tool descriptions the LLM silently executes. Attack variants, real CVEs, and prevention steps.
Tool permissions, ClawHub skill supply chain risks, typosquatting, multi-agent blast radius, and unattended agent protection.
Threat taxonomy, computer use attack vectors, tool call interception, and an honest assessment of LLM agent security tools including NeMo, Lakera, ClawMoat, and Shoofly.