Andrej Karpathy coined the term in February 2025: "vibe coding." You describe what you want in natural language, the AI writes the code, you run it, you iterate. You don't read every line. You go by vibes. [NEEDS SOURCE: confirm Karpathy's original post URL and exact date โ widely attributed to an X/Twitter post, February 2025]
A year later, vibe coding isn't a novelty โ it's how a significant chunk of development happens. Junior devs use it to scaffold entire projects. Senior devs use it to skip boilerplate. Startups ship MVPs in days instead of weeks. The speed is real.
So is the security gap. But not the one everyone's talking about.
The Security Gap Everyone's Covering
Every vibe coding security article you've read so far says the same thing: the AI writes insecure code. SQL injection. Hardcoded secrets. Missing input validation. Use SAST tools. Review the output. Don't blindly trust the generated code.
This is correct. And it's necessary. Run your linters. Use Semgrep, CodeQL, Snyk Code, whatever fits your stack. Review the diffs before you commit. These are table-stakes practices whether a human or an AI wrote the code.
But here's what these articles miss: they all assume the dangerous part is the output. The code that gets written. The files that get committed. The vulnerabilities that ship to production.
That's the second-order risk. The first-order risk is what the agent does before any code gets written at all.
The Security Gap Nobody's Covering
When you vibe code, you're not just generating code. You're giving an autonomous agent access to your terminal, your file system, and potentially your network. The agent doesn't just write files โ it reads them, runs commands, installs packages, modifies configurations, and calls external tools. All of this happens before any SAST scanner sees the output.
Think about the sequence:
- You type: "Set up a new Express server with Postgres and deploy it to my staging environment."
- The agent reads your existing config files (including
.envwith database credentials). - It runs
npm installfor a dozen packages. - It creates files, modifies your
package.json, updates your Docker config. - It runs shell commands to test the setup.
- It might call MCP tools to interact with your cloud provider.
Steps 2 through 6 all happen before you have "output" to scan. The agent is executing actions on your machine, with your permissions, at the speed of API calls. If any of those actions are destructive, malicious (via prompt injection in a dependency), or simply wrong โ the damage is done before any scanner runs.
SAST scans code. It doesn't scan rm -rf. It doesn't scan curl to an attacker's endpoint. It doesn't scan a credential read from ~/.aws/credentials that gets stuffed into a tool call payload. These are execution-layer threats, and the entire "just scan the output" advice model has nothing to say about them.
The rm -rf Pattern
This isn't theoretical. The pattern is documented and recurring.
A developer asks an AI coding agent to "clean up the project directory" or "remove unused files" or "reorganize the folder structure." The agent interprets the instruction, decides which files to remove, and executes rm -rf on paths it considers unnecessary. Sometimes those paths include source files. Sometimes they include the home directory. Sometimes they include iCloud-synced folders where the deletion propagates to every connected device.
GitHub issues #29082, #10077, #24196, #15951 for Claude Code document variations of this pattern. Someone built a dedicated recovery tool โ Claude-File-Recovery โ specifically to extract files from .claude session data after destructive operations.
The critical point: no amount of output scanning catches this. The destructive action is the execution. There's no code to review. There's no diff to inspect. The files are gone before any human or automated system has a chance to intervene.
At interactive speed โ where you're watching the terminal and confirming each action โ you'd catch the rm -rf /Users/you/ before it fires. At vibe coding speed, where the whole point is not watching every action? You won't.
Pre-Execution at Vibe Speed
The security model for vibe coding needs to match the speed of vibe coding. That means enforcement that operates at the execution layer, on every tool call, without requiring a human to review each one.
This is what pre-execution policy enforcement does:
Before the agent runs a command, a policy engine evaluates it against a set of rules. Does the command match a dangerous pattern (rm -rf with paths outside the project directory)? Block it. Does a file read target credential stores? Block it. Does a network request go to a domain not on the allowlist? Block it.
The rules are deterministic. They don't depend on the LLM's judgment about whether an action is safe. They fire on pattern matches, every time, regardless of how convincing the agent's reasoning is about why it needs to delete your home directory.
The key constraint: no human bottleneck. If pre-execution enforcement required you to approve every flagged action, it would be no different from Claude Code's built-in permission prompts โ and you'd turn it off for the same reason you use --dangerously-skip-permissions. The whole point of vibe coding is speed.
Policy rules solve this by being precise enough to auto-approve safe actions and auto-block dangerous ones, with a narrow band of ambiguous cases that can either block-and-notify or prompt, depending on your configuration. The 95% of tool calls that are clearly fine (write a file in the project directory, run npm test, read a source file) execute without interruption. The 1% that are clearly dangerous (delete outside project root, read SSH keys, egress to unknown domains) get blocked automatically. You keep the speed. You lose the rm -rf.
Shoofly Advanced implements this as a hook in the Claude Code and OpenClaw tool call pipeline. The policy engine evaluates each tool call before execution. Rules are defined in YAML โ open and auditable. Default policies ship with coverage for the most common destructive patterns, credential access, and network egress. You can customize them to match your specific workflow and threat model.
Setup for Vibe Coders
If you're vibe coding today โ and statistically, you probably are โ here's the minimum security setup that doesn't kill your speed:
1. Install Shoofly Basic (free). It gives you logging and visibility into what your agent is doing. Even without policy enforcement, knowing that your agent read ~/.aws/credentials during a "set up a new project" session is valuable information.
2. Upgrade to Shoofly Advanced for policy enforcement. The default rules cover the patterns that matter: destructive commands outside the project root, credential access, network egress to unknown endpoints, sensitive file modification. Install takes five minutes. Your vibe coding workflow doesn't change โ you just stop losing files.
3. Keep your SAST tools. Pre-execution enforcement and output scanning aren't competing approaches. They cover different layers. Shoofly catches the rm -rf before it fires. Semgrep catches the SQL injection in the generated code. You need both.
4. Review the policy rules. They're YAML files. They're readable. Spend ten minutes understanding what's blocked and what's allowed, then adjust for your stack. If you legitimately need your agent to access AWS credentials during a deployment task, you can scope that permission narrowly โ allow reads from ~/.aws/ only during tasks tagged as deployment, block everywhere else.
Vibe coding is fast. That's the point, and that's why it's winning. But speed without guardrails is how you end up rebuilding your home directory from Time Machine backups on a Saturday morning.
Vibe coding is fast. Shoofly Advanced makes it safe โ without slowing you down.
FAQ
Q: Doesn't Claude Code already have permission prompts that prevent destructive actions? Yes โ when you're running interactively and reading each prompt. The problem is that vibe coding's entire value proposition is not doing that. Developers routinely use --dangerously-skip-permissions or auto-approve patterns to maintain flow. Pre-execution policy enforcement gives you the safety of permission prompts without the interruption: safe operations auto-approve, dangerous operations auto-block, and only genuinely ambiguous cases require human input.
Q: Is SAST scanning still necessary if I have pre-execution enforcement? Yes. Pre-execution enforcement and SAST scanning cover different layers. Pre-execution catches dangerous actions โ destructive commands, credential reads, unauthorized network requests. SAST catches dangerous code โ SQL injection, XSS, hardcoded secrets in generated source files. A vibe-coded project needs both: Shoofly to protect the execution layer during development, and SAST tools to protect the output before it ships to production.
Q: What about IDE-integrated AI tools like Copilot or Cursor โ do they have the same risk? Code completion tools that only suggest code (inline completions, tab-to-accept) have a narrower risk profile because they don't execute actions. The risk escalates when these tools gain agentic capabilities โ terminal access, file operations, tool calling. Any AI coding tool that executes rather than just suggests has the same execution-layer exposure that pre-execution enforcement addresses.
Further reading: Securing Claude Code: Best Practices ยท AI Computer Use Security: Attack Vectors ยท Claude Code Security
Ready to secure your AI agents? Shoofly Advanced provides pre-execution policy enforcement for Claude Code and OpenClaw โ 20 threat rules, YAML policy-as-code, 100% local. $5/mo.