Is Cursor Safe? A Security Engineer's Assessment (2026)


Cursor is a good product. It's fast, the completions are genuinely useful, and the agent mode has changed how a lot of teams write code. None of that is in question here.

What is in question is the security model underneath it -- because the current search results for "is cursor safe" are mostly vibes. Forum posts saying "it's fine." Reddit threads where someone asks a reasonable question and gets told to stop worrying.

This is a security engineering breakdown. We'll cover what Cursor can access, the published vulnerabilities researchers have found, and what you can do about it. The goal isn't to scare you off Cursor. It's to help you use it with your eyes open.

What Cursor Can Access

Before talking about vulnerabilities, it helps to understand the baseline: what does Cursor have access to when it's running normally, working as intended?

File system access. Cursor reads and writes files across your project directory, and in agent mode, it can create, modify, and delete files. If your workspace includes .env files, credentials, or SSH keys, Cursor's agent can read those. It needs file access to be useful -- but anything in your project directory is fair game.

Terminal access. Cursor can execute shell commands through its integrated terminal. In agent mode, it will run build commands, install dependencies, execute tests, and perform other shell operations. Those commands run with your user permissions. If you can rm -rf /, so can Cursor's agent.

Network capabilities. Cursor makes network requests to its AI backend for completions and agent responses. Extensions can also make network requests. The combination of file read access and network access is the pattern that makes data exfiltration possible: read a secret, send it somewhere.

Extension permissions. Cursor uses a VS Code-compatible extension ecosystem. Extensions can access the file system, run processes, make network requests, and interact with Cursor's APIs. The permissions model is inherited from VS Code -- which means the supply chain risks are inherited too.

This access model is similar to other AI-powered IDEs. It's not uniquely permissive. But it's worth being explicit about, because everything that follows depends on understanding what's already on the table.
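A quick way to make this concrete in your own workspace: enumerate the files an agent with default file-system access could read. A minimal sketch -- extend the name patterns to match whatever secrets your projects actually carry:

```shell
# Print workspace files that commonly hold secrets -- anything this
# finds is readable by an agent with default file-system access.
find . -type f \( -name ".env" -o -name "*.pem" -o -name "id_rsa" \) \
  -not -path "./node_modules/*"
```

If that command prints anything, those files are part of what's on the table in the sections below.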

CurXecute: Prompt Injection to Remote Code Execution

In late 2025, researchers at HiddenLayer published CurXecute -- a prompt injection attack chain that turns Cursor's agent mode into a remote code execution vector. The research was led by Joseph Chmielewski at HiddenLayer and was covered by both CyberScoop and BleepingComputer. [NEEDS SOURCE for specific CVE number if one was assigned]

Here's how the attack works:

Step 1: Inject a prompt into a file. The attacker places a malicious instruction inside a code comment, a markdown file, a README, or any other file that Cursor's agent will read. This could be a file in a cloned repository, an imported dependency, or a file shared by a colleague. The prompt doesn't need to look suspicious -- it can be disguised as a normal comment or hidden in whitespace.

Step 2: Cursor's agent processes the file. When Cursor's agent reads the file as part of its context -- indexing the project, responding to a question, or executing an agent mode task -- it ingests the malicious prompt as if it were a legitimate instruction. This is the core prompt injection problem: language models can't reliably distinguish between instructions from the user and instructions embedded in data they're processing.

Step 3: The agent executes arbitrary commands. The injected prompt instructs the agent to execute shell commands. Because Cursor's agent has terminal access, those commands run on your machine with your permissions. The HiddenLayer research demonstrated that this chain can achieve full remote code execution -- the attacker's payload runs on your system without any interaction from you beyond having the file in your project.

The implications: clone a repository with a malicious prompt in any file Cursor might read, and the agent can be tricked into executing code on your machine. A dependency with a poisoned file does the same thing. A shared code snippet with an embedded injection -- same result.

Credit to Cursor's team: they've acknowledged the prompt injection surface and implemented mitigations. But the fundamental architecture -- an LLM agent that reads untrusted files and has terminal access -- means prompt injection will remain a persistent risk. This isn't unique to Cursor. It's structural to every AI coding agent that combines file reading with code execution. Cursor happens to be the one where researchers published a named exploit chain with a proof of concept.

94 Chromium CVEs: The Electron Attack Surface (OX Security)

Cursor is built on Electron, which means it's built on Chromium. So is Windsurf. So was the original VS Code architecture. This isn't unusual -- most modern desktop apps use Electron. But it has security implications that most users don't think about.

OX Security published research identifying 94 Chromium vulnerabilities present in the Electron-based builds of Cursor and Windsurf. These aren't theoretical. They're catalogued CVEs in the Chromium codebase that ship with the Electron version these editors are built on.

Why this matters. Chromium is a browser engine. It has a browser-sized attack surface: rendering engines, JavaScript interpreters, network stacks, media codecs, GPU drivers. When you run Cursor, you're running all of that. If Cursor's Electron shell is behind on Chromium updates -- which Electron apps frequently are, because updating Chromium is expensive and complex -- your IDE carries known browser vulnerabilities.

What's exploitable in practice. Not all 94 CVEs are equally dangerous in Cursor's specific context. A vulnerability in Chromium's WebRTC stack might not be reachable through Cursor's normal usage. But vulnerabilities in V8 (the JavaScript engine), the network stack, or the rendering pipeline are more concerning, because Cursor's extensions and web views exercise those code paths directly.

The update lag problem. Browser vendors like Google ship Chromium patches on a rapid cycle. Electron apps consume those patches on their own schedule, which is usually slower. OX Security's research highlighted that both Cursor and Windsurf were running Electron versions with known, patched-upstream vulnerabilities. This isn't negligence -- it's an inherent friction in the Electron ecosystem. But it means your IDE may be carrying vulnerabilities that Chrome already fixed weeks or months ago.

The practical takeaway: keep Cursor updated. Every version bump may include Electron updates that close Chromium CVEs. And treat Cursor with the same network caution you'd apply to a browser -- because under the hood, it is one.

Extension Supply Chain Risks

Cursor's extension ecosystem is one of its biggest selling points. VS Code compatibility means access to a massive library of extensions, and Cursor has its own marketplace on top of that. But the same openness that makes the ecosystem useful also makes it a supply chain risk.

The VS Code extension model is permissive by design. Extensions can read and write files, execute processes, make network requests, access the clipboard, and interact with the editor's APIs. There's no granular permission system -- an extension either has access or it doesn't. When you install an extension, you're trusting its author with roughly the same access that Cursor's agent has.

Malicious extensions are a documented problem. Security researchers have repeatedly found malicious extensions in the VS Code marketplace -- extensions that exfiltrate data, inject cryptocurrency miners, or serve as backdoors. In 2024 and 2025, multiple reports documented extensions with millions of installs that contained obfuscated malicious code. [NEEDS SOURCE for specific Cursor marketplace incident if one exists] The vetting process for marketplace extensions is automated and has known gaps.

Update poisoning is a real vector. An extension that's legitimate today can become malicious tomorrow. If an extension author's account is compromised, or if they sell it to a new owner, a routine update can push malicious code to every user with auto-updates enabled. This has happened in npm, PyPI, and the Chrome Web Store -- there's no architectural reason it can't happen in Cursor's marketplace.

The risk compounds with agent mode. When Cursor's agent recommends installing an extension, or when an extension interacts with agent mode, the trust surface expands. You're trusting both the agent and the extension code, and the interaction between them isn't always predictable.

Practical steps: audit your extensions regularly. Prefer established publishers. Disable auto-updates for extensions you aren't monitoring. Be skeptical of extensions that request broad permissions for narrow functionality.
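If you'd rather turn auto-updates off across the board than per extension, Cursor inherits VS Code's settings model, so -- assuming that inheritance holds for this key -- the relevant entry in settings.json is:

```jsonc
{
  // Stop extensions from updating silently; review updates manually.
  "extensions.autoUpdate": false
}
```

The tradeoff is real: you also delay legitimate security fixes, so pair this with a habit of reviewing and applying updates deliberately.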

What You Can Do

Cursor's security posture isn't a binary safe-or-unsafe question. It's a spectrum, and you have real control over where you land on it. Here are concrete mitigations, ordered from simplest to most robust:

Keep Cursor updated. The Chromium CVE surface means every update matters. Don't defer updates.

Audit your workspace contents. Don't keep secrets in your project directory. Use a secrets manager. Move .env files out of Cursor's workspace scope or use .cursorignore to exclude them. If the agent can't read it, it can't leak it.
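A starting point for that exclusion list, assuming .cursorignore follows the gitignore-style syntax Cursor's documentation describes:

```
# .cursorignore -- keep secrets out of the agent's reachable context
.env
.env.*
*.pem
id_rsa
secrets/
```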

Vet your extensions. Treat extension installs like dependency installs. Check the publisher, the download count, the last update date. Uninstall extensions you don't actively use.

Be cautious with untrusted repositories. CurXecute shows that prompt injection can live in any file Cursor reads. When you clone a repository from an unknown source, review files before letting the agent index them.

Monitor network activity. Tools like Little Snitch (macOS) or Wireshark show what network requests Cursor and its extensions are making. Unusual outbound connections to unknown domains are worth investigating.

Use agent mode deliberately. Agent mode is where the highest-risk operations happen: file writes, terminal commands, code execution. Use it for trusted codebases and defined tasks.

Enforce policy at the tool-call layer. Instead of trusting that every agent action will be safe, define rules about what actions are allowed and enforce them before execution. More on this below.

Pre-Execution Security for Cursor

Here's where we talk about Shoofly, and we'll be specific about what it does and what it doesn't do.

Shoofly Advanced operates at the agent tool-call layer. It intercepts tool calls -- file reads, file writes, shell commands, network requests -- before they execute, and evaluates them against a set of policy rules you define. Those rules are written in YAML, they're open and auditable, and they run 100% locally. No data leaves your machine.

What Shoofly catches for Cursor workflows. If you're running an AI agent alongside Cursor -- using Claude Code, an MCP server, or another agent runtime that makes tool calls -- Shoofly's before_tool_call hook intercepts those calls and applies your policy rules. A tool call that tries to read your .env file? Blocked, if your policy says so. A shell command that runs curl to an unknown domain? Blocked. A file write outside your project directory? Blocked. The agent gets a denial response and moves on.
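For a sense of what policy-as-code looks like at this layer, here's a hypothetical rule in the spirit of Shoofly's YAML format -- the field names are illustrative, not the actual schema:

```yaml
# Hypothetical rule: block any tool call that reads dotenv files.
- id: block-dotenv-reads
  severity: HIGH          # HIGH severity => auto-blocked
  applies_to: [file_read]
  match:
    path_glob: ["**/.env", "**/.env.*"]
  action: deny
  message: "Reading .env files is not permitted by policy."
```

The point of the format is auditability: every rule is a few lines you can read, diff, and version-control alongside your code.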

What Shoofly doesn't catch. Shoofly intercepts at the tool-call level, not at the IDE application level. It doesn't sit inside Cursor's process. It doesn't intercept Cursor's own internal operations -- completions, indexing, UI rendering. If a Chromium CVE in Cursor's Electron shell is exploited through the rendering engine, that's outside Shoofly's interception layer. If a malicious extension runs code directly within Cursor's process, Shoofly won't see that either unless the extension makes tool calls through an agent runtime that Shoofly hooks into.

We're being explicit about this because honesty about scope matters more than a sales pitch. Shoofly adds a real layer of defense for agent tool calls in Cursor workflows. It doesn't make Cursor itself invulnerable. No single tool does.

How it works architecturally. Shoofly Advanced runs as a daemon -- a background sidecar process on your machine. The hook plugin integrates with your agent runtime and intercepts tool calls before they execute. When a tool call comes in, the daemon evaluates it against your policy rules. HIGH and MEDIUM severity threats are auto-blocked. Everything runs locally. The policy rules are deterministic -- no LLM judgment in the enforcement step, so a matching rule fires every time, with none of the variance that comes from model uncertainty.

What it costs. Shoofly Advanced is $5/mo. The rules are YAML you write and own. You can inspect every rule, modify them, and see exactly what's being enforced. Nothing is opaque.


Cursor is a powerful tool, and the team behind it is actively working on security. The vulnerabilities discussed here aren't signs of negligence -- they're the inevitable attack surface of an application that combines an LLM agent, a code editor, a terminal, a browser engine, and an extension ecosystem into a single process. That's a lot of surface.

The question isn't whether Cursor is "safe" or "unsafe." It's whether you're using it with adequate protections for your threat model. Side project with no secrets? Cursor out of the box is probably fine. Production infrastructure with credentials and client data? You need additional layers.

Cursor is powerful. Make it safe. Shoofly Advanced enforces policy rules on agent tool calls -- before they execute, with open and auditable YAML rules, 100% local on your machine.


Ready to secure your AI agents? Shoofly Advanced provides pre-execution policy enforcement for Claude Code and OpenClaw — 20 threat rules, YAML policy-as-code, 100% local. $5/mo.