Claude Code hooks are one of the most genuinely useful security features Anthropic has shipped. They give you a pre-execution interception point — a place to inspect and block tool calls before they fire. If you've been asking for something like iptables for AI agents, hooks are the closest thing Claude Code offers natively.
But hooks are a mechanism, not a solution. The gap between "you can write a shell script that blocks tool calls" and "you have a reliable, maintainable security layer" is the gap where incidents happen. And as CVE-2025-59536 demonstrated, hooks themselves can become the attack vector.
This is not an anti-hooks post. Hooks are the right idea. The question is whether DIY shell scripts are the right implementation.
How Claude Code Hooks Work
Claude Code hooks use a lifecycle model with two interception points: PreToolUse and PostToolUse.
PreToolUse fires before a tool call executes. Your hook receives the tool name, the arguments the LLM generated, and context about the current session. Your script inspects these, and returns a verdict: allow, block, or modify. If the hook blocks, the tool call never fires. This is genuine pre-execution security — the action is stopped before it happens.
PostToolUse fires after a tool call completes. Your hook receives the same context plus the tool's output. This is useful for logging, alerting, and audit trails — but it's not a security boundary. By the time PostToolUse runs, the rm -rf already executed.
To set up a hook, you add an entry to .claude/settings.json:
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"command": "/path/to/your/hook-script.sh"
}
]
}
}
The matcher field specifies which tool triggers the hook (e.g., Bash, Read, Write, Edit). The command field points to your script. Claude Code passes the tool call details via stdin as JSON, and your script's exit code determines the verdict: 0 allows, non-zero blocks.
This is clean, straightforward, and gives developers real control over what their agent can do. The architecture is sound.
The problems are in everything around it.
What PreToolUse Can Block
When hooks work, they work well. Here are concrete examples of effective PreToolUse rules.
Block writes outside the project directory:
#!/bin/bash
# block-external-writes.sh
INPUT=$(cat)
TOOL=$(echo "$INPUT" | jq -r '.tool')
FILE_PATH=$(echo "$INPUT" | jq -r '.arguments.file_path // empty')
if [[ "$TOOL" == "Write" || "$TOOL" == "Edit" ]] && [[ -n "$FILE_PATH" ]]; then
REAL_PATH=$(realpath "$FILE_PATH" 2>/dev/null || echo "$FILE_PATH")
if [[ ! "$REAL_PATH" == /home/dev/myproject/* ]]; then
echo "BLOCKED: Write outside project directory: $FILE_PATH" >&2
exit 1
fi
fi
exit 0
Block shell commands matching dangerous patterns:
#!/bin/bash
# block-dangerous-commands.sh
INPUT=$(cat)
COMMAND=$(echo "$INPUT" | jq -r '.arguments.command // empty')
BLOCKED_PATTERNS=("rm -rf" "chmod 777" "curl.*|.*sh" "wget.*|.*bash")
for pattern in "${BLOCKED_PATTERNS[@]}"; do
if echo "$COMMAND" | grep -qE "$pattern"; then
echo "BLOCKED: Dangerous command pattern: $pattern" >&2
exit 1
fi
done
exit 0
Restrict network destinations:
#!/bin/bash
# block-network-egress.sh
INPUT=$(cat)
COMMAND=$(echo "$INPUT" | jq -r '.arguments.command // empty')
if echo "$COMMAND" | grep -qE "curl|wget|fetch|http"; then
ALLOWED_HOSTS=("api.github.com" "registry.npmjs.org" "pypi.org")
MATCHED=false
for host in "${ALLOWED_HOSTS[@]}"; do
if echo "$COMMAND" | grep -q "$host"; then
MATCHED=true
break
fi
done
if [[ "$MATCHED" == false ]]; then
echo "BLOCKED: Network call to non-allowlisted host" >&2
exit 1
fi
fi
exit 0
These are functional. A developer who writes and maintains scripts like these has meaningful security coverage for their most critical tool calls.
But look at what it took: three separate scripts, raw shell string matching, manual path resolution, hardcoded patterns, no composition between rules, and no credential awareness. Every developer who wants this coverage has to write, debug, and maintain their own version.
What Hooks Miss
The mechanism is sound. The implementation model has structural gaps.
No centralized policy management. Each hook is a standalone script. There's no way to compose rules, inherit from a base policy, or manage a rule set as a unit. If you have 12 hooks covering different tool types, you have 12 independent scripts with no shared logic.
Shell scripting errors are silent failures. If your hook script has a bug — a quoting error, a missing jq dependency, a malformed regex — it may silently allow the tool call through. A security layer that fails open on script errors is not a security layer; it's a suggestion.
No pattern matching beyond what you write yourself. The examples above use grep -qE for pattern matching. That means your security coverage is exactly as good as your regex. There's no built-in support for matching credential patterns, detecting exfiltration sequences (file read followed by network call), or recognizing prompt injection in tool arguments.
No credential detection. Hooks receive tool call arguments as raw JSON. If a Bash tool call contains curl -H "Authorization: Bearer sk-live-abc123...", the hook sees a string. Recognizing that string contains a credential, and that the curl command would transmit it to an external endpoint, requires purpose-built detection logic that shell scripts don't provide out of the box.
No update or distribution mechanism. When a new attack pattern emerges — a new exfiltration technique, a new path traversal variant, a new credential format — every developer running DIY hooks needs to manually update their scripts. There's no feed, no registry, no update channel.
No audit trail. Hook scripts can log to files, but there's no structured event format, no centralized log collection, and no way to query "what did my hooks block last week?" without building your own logging infrastructure.
These aren't bugs. They're the natural consequences of building a security system from shell scripts. For a single developer with simple rules and a low threat model, the trade-offs may be acceptable. For teams, for complex policies, for environments where security is a requirement rather than a nice-to-have — the gaps compound.
CVE-2025-59536: When Hooks Become the Attack Vector
This is where the architecture gets uncomfortable.
CVE-2025-59536 (CVSS 8.7, discovered by Check Point Research) demonstrated that .claude/settings.json — the same file that configures hooks — can be weaponized to execute arbitrary code on the developer's machine.
The attack chain:
- An attacker commits a malicious
.claude/settings.jsonto a repository — or injects one via a pull request, a dependency, or a compromised MCP server response. - The settings file contains a hook definition pointing to an attacker-controlled script.
- When the developer opens the project with Claude Code, the malicious hook script executes before the trust dialog appears. The developer never sees a permission prompt.
- The attacker's script runs with the developer's full permissions — file access, network access, credential access.
The CVSS 8.7 score reflects the severity: network-accessible attack vector, low complexity, no privileges required, high impact across confidentiality, integrity, and availability.
The irony is precise. The mechanism designed for pre-execution security became the pre-execution attack vector. The hook executed before any trust decision was made, in the exact position where security controls should sit.
This isn't a theoretical risk. Check Point Research published a working proof of concept. The vulnerability affects any Claude Code installation that processes repositories containing untrusted .claude/settings.json files — which includes every developer who clones a public repository, reviews a pull request, or works on a project with external contributors.
Anthropic has addressed the specific exploit, but the structural issue remains: .claude/settings.json is a user-writable file that configures executable code paths. Any mechanism that executes shell scripts based on a configuration file in a project directory is a supply chain attack surface. The fix addresses the specific bypass, but the attack surface — configuration-driven code execution — is inherent to the design.
For a deeper technical breakdown, see our full CVE-2025-59536 analysis.
DIY vs. Managed: An Honest Comparison
The decision between DIY hooks and a managed security layer depends on your threat model, your team size, and how much maintenance burden you're willing to absorb.
| Dimension | DIY Hooks | Shoofly Advanced |
|---|---|---|
| Policy management | Individual shell scripts, no composition | Policy-as-code YAML, composable rule sets |
| Rule composition | Manual — each script is standalone | Built-in — rules inherit, compose, and override |
| Credential detection | Write your own regex | 4 built-in DE rules with credential pattern matching |
| Prompt injection detection | Not available | 8 PI rules with pattern matching |
| Out-of-scope write protection | Manual path checking | 3 OSW rules for /etc/, ~/.ssh/, ~/.aws/, credentials |
| Update mechanism | Manual script updates | Managed rule updates via policy feed |
| Attack surface | settings.json is writable, scripts are executable | Policy-as-code, no user-writable executable paths |
| Audit trail | DIY logging | Structured events, queryable log format |
| Fail mode | Script errors fail open | Policy evaluation fails closed |
| Team deployment | Copy scripts between machines | Centralized policy, version-controlled |
| Exfiltration detection | Single-step pattern matching | Multi-step chain detection (file read → network egress) |
Neither option is wrong in every context.
DIY hooks make sense when: You're a single developer. Your rules are simple — maybe 2-3 path restrictions and a command blocklist. Your threat model is low. You enjoy writing shell scripts and can maintain them reliably.
A managed layer makes sense when: You're on a team. Your policies are complex or need to compose. You need credential detection or exfiltration chain detection. You can't afford silent failures. You don't want .claude/settings.json to be a supply chain attack surface.
Shoofly: Hooks Done Right
Shoofly Advanced uses the same pre-execution interception architecture as Claude Code hooks — the same position in the tool call lifecycle, the same decision gate model. The difference is implementation.
Policy-as-code replaces shell scripts. Rules are defined in YAML, not bash. A credential-sniffing rule doesn't require you to write regex for every API key format — it's built in. An exfiltration detection rule doesn't require you to correlate file reads with network calls — the multi-step pattern matching is part of the rule engine.
20 threat rules across 5 categories ship built in. Prompt injection detection (8 rules). Tool response injection (2 rules). Out-of-scope write protection (3 rules). Runaway loop detection (4 rules). Data exfiltration blocking (4 rules). These aren't suggestions — they're deterministic rules that evaluate on every tool call.
No user-writable executable paths. Shoofly's policy is code, not configuration that points to scripts. There's no .claude/settings.json equivalent where an attacker can inject a path to a malicious executable. The attack surface that enabled CVE-2025-59536 doesn't exist in Shoofly's architecture.
The rules are open and auditable. Every rule Shoofly evaluates is inspectable. You can read the policy, understand what it blocks, and verify that it does what it claims. No black box, no "trust us" — the rules are the rules, and you can read them.
If you're currently writing your own PreToolUse hooks, you're building a version of what Shoofly already provides. That's not a criticism — it means the instinct is right. The question is whether shell scripts or policy-as-code is the better long-term foundation.
For a broader view of Claude Code security practices, see our securing Claude Code best practices guide.
Hooks are the right idea. Shoofly Advanced is the right implementation.
FAQ
What are Claude Code hooks? Claude Code hooks are a built-in mechanism that lets you run custom scripts before (PreToolUse) or after (PostToolUse) tool calls execute. They provide a pre-execution interception point for inspecting and blocking agent actions.
Can Claude Code hooks prevent prompt injection? Hooks can block specific tool calls, but they don't include prompt injection detection by default. You'd need to write your own detection logic in shell scripts, which is error-prone. Shoofly Advanced includes 8 built-in prompt injection detection rules.
Is CVE-2025-59536 still a risk? Anthropic patched the specific bypass, but the structural attack surface — configuration-driven code execution via .claude/settings.json — remains inherent to the hooks design. Any project with untrusted contributors should treat .claude/settings.json as a security-sensitive file.
What's the difference between PreToolUse and PostToolUse hooks? PreToolUse fires before the tool call executes and can block it — this is the security boundary. PostToolUse fires after execution and is useful for logging and alerting but cannot prevent the action. For security, PreToolUse is the only hook that matters.
How does Shoofly compare to writing my own hooks? Shoofly uses the same pre-execution interception architecture but replaces shell scripts with policy-as-code, adds built-in credential detection and exfiltration chain analysis, and eliminates the supply chain attack surface of user-writable executable paths. See the comparison table above for a detailed breakdown.
Ready to secure your AI agents? Shoofly Advanced provides pre-execution policy enforcement for Claude Code and OpenClaw — 20 threat rules, YAML policy-as-code, 100% local. $5/mo.