🪰 Shoofly
  • Shoofly Basic
  • Shoofly Advanced
  • Claude Code FAQs
  • OpenClaw FAQs
  • Blog
  • ClawHub
  • GitHub

Blog

AI agent security research, threat analysis, and practical guides.

OpenClaw · Billing · April 4, 2026

OpenClaw + Claude Is Now Pay-Per-Token: What It Costs and What to Do Next (April 2026)

Anthropic ended third-party Claude subscription billing on April 4, 2026. Here's what it actually costs now and your four options: stay on OpenClaw, move to Claude Code + Cowork, switch models, or go local.

Security · March 28, 2026

LLM Firewall: Architecture, Comparison, and the Case for Pre-Execution

Comparing LLM firewalls across prompt filtering, tool call interception, and output scanning — and when each layer applies to your threat model.

Security · March 26, 2026

Vibe Coding Security: Why Scanning Output Isn't Enough

Everyone covers the code vibe coding generates. Nobody covers what the agent does before any code is written — file reads, shell commands, network calls.

Supply Chain Security · March 25, 2026

ClawHavoc: 824 Malicious Skills and the ClawHub Supply Chain Crisis

Koi Security found 824 confirmed malicious skills in ClawHub — roughly 8% of the registry. Here are the attack patterns, the kill chains, and what runtime policy enforcement catches.

Supply Chain Security · March 23, 2026

AI Supply Chain Security: From npm to MCP to ClawHub

Traditional supply chain tools stop at packages and models. Layers 3–5 — skills, MCP servers, tool calls — are where AI-specific attacks live, and the tooling gap is widest.

Agent Security · March 22, 2026

Devin AI Security: What to Know Before Going Autonomous

Devin has shell + browser + editor access simultaneously. That trifecta creates attack vectors that don't exist with any single tool — here's the security analysis.

Agent Security · March 20, 2026

Agentic Workflow Security: Protecting the Full Execution Pipeline

Enterprise agentic workflows are non-deterministic. Traditional API security architectures don't fit. Here's the enforcement model that does.

Claude Code Security · March 19, 2026

AI Coding Agents Are Leaking Your Secrets: The .env Problem

23.8 million secrets leaked on GitHub last year. AI coding tools are making it 40% worse. The attack pattern is two steps: read credentials, then exfiltrate them.

Security · March 17, 2026

SandboxEscapeBench: AI Agents Escape Containers for $1

New research shows AI agents can escape common container configurations for ~$1 per successful attempt. Sandboxing is necessary but no longer sufficient on its own.

Claude Code Security · March 16, 2026

Claude Code Hooks: What They Block, What They Miss

Claude Code hooks are the right idea for pre-execution security. The gap between "you can write a hook" and "you have a reliable security layer" is where incidents happen.

Security · March 14, 2026

AI Agent Firewall: Beyond Prompt Filtering to Tool Call Interception

Prompt filtering is a text firewall. Agents need an action firewall. Here's the architectural difference and when you need each layer.

Security · March 13, 2026

Windsurf Security: The Risks Nobody's Talking About

Windsurf is one of the most popular AI IDEs — and almost nobody is doing security analysis on it. Here's what we found about its access model, data exfiltration risks, and the 94 Chromium CVEs it inherits.

MCP Security · March 11, 2026

30 MCP CVEs in 60 Days: The Protocol's Security Crisis

The Model Context Protocol went from a handful of known vulnerabilities to 30+ CVEs in 60 days. These aren't random bugs — they're architectural gaps in a protocol that shipped with trust as a default.

Agent Security · March 10, 2026

Multi-Agent Security: Threats, Architecture, and Defense

Multi-agent trust chains, permission laundering, confused deputy attacks, and chain poisoning — the security patterns single-agent models miss entirely.

Security · March 9, 2026

Is Cursor Safe? A Security Engineer's Assessment (2026)

CurXecute, 94 Chromium CVEs, extension supply chain risks — a security engineering breakdown of what Cursor can access and what researchers have actually found.

Security · March 7, 2026

Claude DXT: The Researcher-Assigned CVSS 10.0 That Anthropic Says Is Working as Designed

A LayerX researcher found a zero-click RCE in Claude Desktop Extensions and assigned it CVSS 10.0. Anthropic's response: it's outside their threat model.

Claude Code Security · March 6, 2026

Claude Code --dangerously-skip-permissions: The Safe Alternative

The flag exists because permission prompts create real friction. Policy-as-code solves the same problem without handing Claude unrestricted access to your machine.

Security · March 4, 2026

AI Tool Calling Security: The Complete Guide

76% of tool calls in open-source AI agent repos have zero security guards. Here's the complete reference: threat model, interception architecture, and implementation guide.

Security · March 3, 2026

Pre-Execution Security for AI Agents: Architecture and Implementation

Every AI agent security incident follows the same pattern: the agent acted, then someone noticed. Pre-execution security inverts this — evaluate every tool call before it executes.

Security · March 30, 2026

Is Claude Code Dispatch Safe? Here's What Anthropic Says (And What You Still Need)

Anthropic just shipped Dispatch, /loop, remote control, and computer use. Here's what their own documentation says about the security model — and where the gaps are.

Claude Code Security · March 29, 2026

Securing Claude Code: Best Practices for Developers

CVE-2025-59536 proved that securing Claude Code is a real engineering concern. A practical checklist covering config hardening, runtime monitoring, and pre-execution blocking.

Supply Chain Security · March 27, 2026

Malicious ClawHub Skills: How Supply Chain Attacks Work in Practice

Snyk found 76 intentionally malicious skills on ClawHub. Here's the anatomy of a malicious skill, how it bypasses static review, and how to defend against it.

Agent Security · March 24, 2026

AI Computer Use Security: Attack Vectors and How to Stop Them

Five concrete attack vectors for browser and computer-use agents — invisible text, hidden CSS, click injection, form fill hijacking, screen exfiltration — with mechanisms and mitigations.

Security · March 21, 2026

AI Coding Agent Security for Developers: The Full Stack

Four-layer threat model for AI coding agents — with an honest comparison of NeMo, LlamaFirewall, Lakera, ClawMoat, and Shoofly.

Agent Security · March 18, 2026

Prompt Injection Blocking: How Pre-Execution Security Stops the Attack

In agentic systems, prompt injection doesn't produce bad text — it produces malicious tool calls. Here's the full attack chain and why blocking at the tool call layer is the only reliable backstop.

Agent Security · March 15, 2026

Runtime Threat Detection for AI Agents

Every tool call is a runtime event with no static equivalent. What runtime threat detection means for agentic systems, and why pre-execution blocking is the only layer that actually prevents damage.

OpenClaw Security · March 12, 2026

OpenClaw Skill Security: What Every User Should Know Before Installing from ClawHub

OpenClaw skills run with your agent's full permissions. ClawHub has automated scanning but no code signing or human review. Here's what that means and how to protect yourself.

Claude Code Security · March 8, 2026

CVE-2025-59536: The Claude Code Config File Exploit and What It Means for Claude Code Security

A malicious .claude/settings.json in a cloned repo can bypass Claude Code's trust dialog, auto-approve tool calls, and exfiltrate your API keys before you've approved a single action.

MCP Security · March 5, 2026

MCP Tool Poisoning: What It Is and How to Stop It

MCP tool poisoning embeds malicious instructions in tool descriptions — invisible to users, processed silently by the LLM. Here's how it works and how to stop it.

Security · March 2, 2026

AI Agent Security: Pre-Execution Blocking vs. Post-Execution Detection

Detection tells you what happened after your agent was exploited — pre-execution blocking stops it before the tool call fires.

© 2026 Shoofly
GitHub · OpenClaw · Claude Code Guides · Blog · Shoofly Advanced · Docs · support@shoofly.dev