The Extension Supply Chain Problem Nobody Is Solving

Browser and IDE extensions are one of the easiest ways into an enterprise network. The tooling to deal with this barely exists. Here's what we're building and why.

Ali Mosajjal
#supply-chain #browser-extensions #vscode #mcp #security


Go ask your security team how many browser extensions are installed across your organization. Then ask them which ones have file system access. Then ask which ones updated in the last 48 hours, and whether anyone reviewed what changed.

You'll get blank stares for at least two of those.

The attack is boring and it works

Extension-based attacks don't require sophisticated exploitation. The typical flow looks like this: an attacker buys access to a developer account on the Chrome Web Store or VS Code Marketplace (these sell for a few hundred dollars on forums), pushes an update to an existing popular extension with a small malicious payload, and waits. Auto-update does the rest. Within hours, the new version is running on every machine that had the extension installed.

The payload doesn't need to be clever either. Read ~/.ssh, grab environment variables, scrape browser cookies, exfiltrate to a domain that looks like a CDN. The extension already has the permissions it needs. The user already trusts it.

This isn't theoretical. The BananaLeaks campaign compromised a single Chrome extension and harvested enterprise credentials at scale before anyone flagged it. Malicious VS Code extensions have been caught reading SSH keys and API tokens from developer machines. These are the ones we know about.

Why this is hard to fix

The obvious answer is "just block extensions," and some organizations do that. It works, in the same way that unplugging from the internet prevents phishing. Developers need their tools. Extensions are part of how people work. Blocking them wholesale creates shadow IT problems and makes the security team the enemy.

The less obvious answer is "just review them," and that falls apart at scale. The Chrome Web Store alone has over 200,000 extensions. VS Code Marketplace has tens of thousands more. Extensions update frequently, sometimes weekly. A manual review process can't keep up, and even automated scanning produces so many false positives that teams stop looking at the results.

The real problem is that nobody has built the infrastructure to make this tractable. Package managers have had vulnerability databases and supply chain tooling for years (npm audit, Snyk, Socket). Extensions have almost nothing equivalent.

What we're building

RiskyPlugins is our answer to this. We index extensions across five marketplaces: Chrome Web Store, Firefox Add-ons, VS Code Marketplace, OpenVSX (which covers Cursor, Windsurf, Kiro, and other VS Code forks), and Microsoft 365.

Every extension gets pulled down, extracted in a sandbox, and run through thousands of detection rules covering malware signatures, secret exposure, obfuscation patterns, credential harvesting, and suspicious post-install behavior. We correlate those findings with external threat intelligence (VirusTotal, Abuse.ch, WHOIS) and then run AI analysis on top to figure out which findings actually matter in context.
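At its core, the rule stage is simple: unpack the files, run every rule over them, pool the findings. Here's a minimal Python sketch of that loop with two toy rules. The rule names, severities, and patterns are illustrative stand-ins, not our actual rule set:

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    severity: float  # 0.0 (informational) to 1.0 (critical)
    detail: str

# Toy rule: a hardcoded AWS access key id left in the bundle.
def rule_hardcoded_secret(files: dict[str, str]) -> list[Finding]:
    return [Finding("secret.aws_key", 0.9, f"AWS key in {path}")
            for path, text in files.items()
            if re.search(r"AKIA[0-9A-Z]{16}", text)]

# Toy rule: a base64-decoded string fed straight to eval, a common obfuscation tell.
def rule_eval_obfuscation(files: dict[str, str]) -> list[Finding]:
    return [Finding("obfuscation.eval_atob", 0.7, f"eval(atob(...)) in {path}")
            for path, text in files.items()
            if "eval(atob(" in text]

RULES = [rule_hardcoded_secret, rule_eval_obfuscation]

def scan(files: dict[str, str]) -> list[Finding]:
    """Run every rule over the unpacked extension files and pool the findings."""
    return [f for rule in RULES for f in rule(files)]

files = {"background.js": "fetch(c); eval(atob(payload));"}
print([f.rule_id for f in scan(files)])  # ['obfuscation.eval_atob']
```

The real engine is this loop at scale: thousands of rules, sandboxed extraction, and findings streamed out for correlation rather than printed.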

That last part is critical. A password manager extension accessing credential storage APIs is doing its job. An ad blocker doing the same thing is not. Without context, scan results are just noise. The AI layer correlates what the extension claims to do, what permissions it requests, who published it, and what the scanners found. That's where the useful signal comes from.
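One way to picture the context step: the weight of a finding depends on whether the capability it flags is expected for what the extension claims to be. A hypothetical sketch (the categories, capability names, and multipliers here are invented for illustration):

```python
# Capabilities we'd *expect* each extension category to use. Illustrative only.
EXPECTED = {
    "password_manager": {"credential_storage", "clipboard"},
    "ad_blocker": {"web_request_filtering"},
}

def contextual_weight(category: str, capability: str, base_severity: float) -> float:
    """Downweight findings about capabilities the extension plausibly needs;
    escalate ones it has no business using."""
    if capability in EXPECTED.get(category, set()):
        return base_severity * 0.2          # expected behaviour: mostly noise
    return min(1.0, base_severity * 1.5)    # out-of-role behaviour: red flag

print(contextual_weight("password_manager", "credential_storage", 0.8))  # low: doing its job
print(contextual_weight("ad_blocker", "credential_storage", 0.8))        # capped at 1.0: out of role
```

The production version replaces the static table with the AI layer's judgment about what the extension's description, permissions, and publisher history imply, but the asymmetry is the same.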

Each extension gets a 0-100 risk score. The scoring is multi-dimensional (Malware, Credential Harvesting, Exfiltration, Supply Chain Trust, Code Quality, and several others), with publisher trust adjustments and historical smoothing so the score doesn't jump around between updates. We borrowed the probabilistic approach from FIRST's EPSS because naive severity summation doesn't produce scores people can make decisions on.
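To make the probabilistic idea concrete, here is a simplified sketch of that shape: treat each dimension as a probability that the risk is real, combine them as "probability at least one is real," adjust for publisher trust, and smooth across versions. The specific constants and the independence assumption are illustrative, not our production model:

```python
def combined_risk(dimension_probs: dict[str, float], publisher_trust: float) -> float:
    """P(at least one risk dimension is real), assuming independence,
    then scaled down for trusted publishers (up to 30% here)."""
    p_none = 1.0
    for p in dimension_probs.values():
        p_none *= (1.0 - p)
    return (1.0 - p_none) * (1.0 - 0.3 * publisher_trust)

def smoothed_score(prev_score: float, raw_score: float, alpha: float = 0.4) -> float:
    """Exponential smoothing so the 0-100 score doesn't jump between updates."""
    return alpha * raw_score + (1 - alpha) * prev_score

probs = {"malware": 0.05, "credential_harvesting": 0.40, "exfiltration": 0.10}
raw = 100 * combined_risk(probs, publisher_trust=0.8)
print(round(smoothed_score(prev_score=30.0, raw_score=raw), 1))
```

Note what naive severity summation gets wrong and this doesn't: three mediocre findings can't stack up to "critical," and one strong dimension dominates the combined probability.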

Everything that contributes to a score is visible. No black boxes. If you disagree with a score, you can see exactly why it was assigned and which findings drove it.

The enterprise side

Scoring extensions is useful for individual developers, but the enterprise problem is different. Organizations need policy enforcement, not just information.

That's the second half of what we're building: a private extension store that organizations control. Think of it as a managed gateway between the public marketplaces and your developers' machines. It can automatically block extensions above a risk threshold, delay new extension updates by a configurable window (so supply chain attacks get caught before they propagate), enforce per-extension and per-developer approval workflows, and cache every extension version that enters your organization. If an extension turns malicious in a future update, you have the previous clean version and a complete history of what changed.
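The policy check itself is the easy part. A minimal Python sketch of the gateway decision, with field names and defaults invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Policy:
    max_risk_score: float = 60.0                  # block anything scored above this
    update_delay: timedelta = timedelta(days=7)   # quarantine window for fresh versions
    needs_approval: set[str] = field(default_factory=set)  # ids requiring manual sign-off

def allowed(ext_id: str, score: float, published_at: datetime,
            approved: bool, policy: Policy, now: datetime) -> tuple[bool, str]:
    """Decide whether a specific extension version may reach developer machines."""
    if score > policy.max_risk_score:
        return False, "risk score above threshold"
    if now - published_at < policy.update_delay:
        return False, "version still inside the update-delay window"
    if ext_id in policy.needs_approval and not approved:
        return False, "awaiting manual approval"
    return True, "ok"

now = datetime(2025, 6, 10, tzinfo=timezone.utc)
# A two-day-old release is held back even with a clean score:
print(allowed("vendor.ext", 12.0, datetime(2025, 6, 8, tzinfo=timezone.utc),
              True, Policy(), now))
```

The hard parts are everything around this function: intercepting marketplace traffic, keeping the version cache complete, and making the approval workflow fast enough that developers don't route around it.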

This is the part that makes RiskyPlugins a business, not just a tool. Individual developers can use the public site for free. Enterprises pay for the policy layer and the private store.

The next attack surface: AI agent plugins

Everything I've described so far is about browser and IDE extensions. But the plugin problem is expanding fast, and the new frontiers have even less security tooling than extensions do.

MCP servers (Model Context Protocol) are used by Claude Code, Cursor, and a growing list of AI tools. They're plugins for AI assistants, granting access to databases, file systems, APIs, and cloud infrastructure. A compromised MCP server can exfiltrate data through the AI's own context, inject instructions into workflows, or pivot into whatever systems the AI has access to. There's no registry, no review process, no scoring. You install an MCP server and you're trusting it completely.

Remote MCP servers add a network dimension. Your AI tools might be sending context to a third-party server you've never audited. You might not even know which servers are configured.
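Even the inventory step is nontrivial today. A quick sketch of enumerating configured MCP servers by walking known config files; the paths below are examples for a couple of common tools (check your own tools' docs, since locations vary by OS and version), though the `mcpServers` key is the conventional config shape:

```python
import json
from pathlib import Path

# Typical per-tool MCP config locations (illustrative; varies by tool and OS).
CANDIDATE_CONFIGS = [
    Path.home() / ".cursor" / "mcp.json",
    Path.home() / "Library" / "Application Support" / "Claude"
        / "claude_desktop_config.json",
]

def configured_mcp_servers(paths=CANDIDATE_CONFIGS) -> dict[str, list[str]]:
    """Return {config_file: [server names]} for every MCP config that exists."""
    found = {}
    for path in paths:
        if not path.is_file():
            continue
        try:
            servers = json.loads(path.read_text()).get("mcpServers", {})
        except (json.JSONDecodeError, OSError):
            continue  # unreadable or malformed config: skip, don't crash
        found[str(path)] = sorted(servers)
    return found

for config, servers in configured_mcp_servers().items():
    print(config, "->", servers)
```

Running something like this across a fleet is often the first time an organization sees how many third-party servers its AI tools are already talking to.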

OpenClaw crossed 100,000 installations and has 3,000+ community skills on ClawHub. These skills get access to shell commands, file systems, banking apps, and web automation. Cisco's AI security research team already demonstrated that a third-party OpenClaw skill can perform data exfiltration and prompt injection without the user's knowledge. There's no security review process for ClawHub skills.

ChatGPT Plugins and GPT Actions connect to external APIs with no standardized way to evaluate their supply chain trust.

All of these are plugin ecosystems with the same fundamental problem: third-party code running with broad permissions, no centralized security vetting, and auto-update mechanisms that can turn a trusted tool malicious overnight. The same analysis approach we use for browser and IDE extensions (download, sandbox, scan, correlate, score) applies to these new categories. We plan to expand RiskyPlugins to cover them.

Where this is going

The short-term roadmap is finishing the risk scoring engine (we're in the process of migrating from v1 to a more sophisticated v2 model), expanding public coverage on riskyplugins.com, and shipping the enterprise private store for early customers.

Longer term, I want RiskyPlugins to be the place you check before installing anything that runs with elevated permissions on your machine or your AI agent's machine. Extensions, MCP servers, OpenClaw skills, GPT plugins. If it's third-party code with broad access, it should have a risk score attached to it.

The extension supply chain is an unsolved problem. The AI agent plugin supply chain is an unacknowledged problem. Both are growing. We're building the tooling that should have existed years ago.