ClawJacked: The OpenClaw Vulnerability That Every User Needs to Know

Published on
March 3, 2026
Subscribe to our newsletter
Read about our privacy policy.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

“SaaS is dead.”

That’s the joke floating around tech Twitter. The idea is simple: why pay for software subscriptions when you can self-host open tools and connect everything with AI agents?

It sounds smart. It sounds cheaper. It sounds powerful.

But then reality hits.

The recent ClawJacked vulnerability in OpenClaw showed how risky that shift can be. A simple visit to a malicious website could let an attacker silently connect to a locally running AI agent and take control of it. No warning. No pop-up. No clear signal.

This wasn’t some rare add-on or shady plugin. The issue existed in the default OpenClaw gateway setup. That means developers running it exactly as documented were exposed.

The bigger lesson is clear: AI agents are not just tools. They have access to systems, logs, integrations, and credentials. When one gets compromised, the damage spreads fast.

SaaS may not be dead. But careless self-hosting can get you claw jacked.

What Is ClawJacked?

ClawJacked is a high-severity security flaw that allowed malicious websites to take control of a locally running OpenClaw AI agent through a WebSocket connection.

The vulnerability was discovered by Oasis Security in OpenClaw's core gateway. It did not require any third-party plugin, extension, or marketplace add-on. The issue existed in the default setup.

"Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented," Oasis Security said in a report published this week.

Here’s what made it dangerous.

OpenClaw runs a local gateway on a developer’s machine. This gateway uses a WebSocket server that listens on localhost. It is supposed to be protected by a password.

The problem was that any website you visit can attempt to open a WebSocket connection to your localhost. Browsers block many cross-site requests, but they do not block these WebSocket connection types.

That gap opened the door.

If a developer visited a malicious website, hidden JavaScript on that page could quietly try to connect to the OpenClaw gateway. Because there was no proper rate limiting, the attacker could brute-force the password. Once inside, the gateway automatically approved the attacker's device as trusted.

No alert. No confirmation prompt.

From there, the attacker could:

  • Read configuration data
  • Access logs
  • View connected systems
  • Send commands through the AI agent

In simple terms, ClawJacked turned a local AI assistant into a remote access point.

The good news is that OpenClaw patched the issue quickly in version 2026.2.25. But the flaw exposed a bigger truth: local AI agents with system access must be treated like sensitive infrastructure, not hobby tools.

How the ClawJacked Attack Works (Step-by-Step)

ClawJacked was not complex. That’s what made it dangerous. It relied on normal browser behavior and a small security gap in the gateway design.

Here is how the attack unfolded.

1. The Setup

A developer has OpenClaw running on their laptop.

The local gateway is active. It listens on a localhost port and is protected by a password.

Everything looks safe because it’s “local.”

2. The Visit to a Malicious Website

The developer lands on a compromised or attacker-controlled website. This could happen through:

  • A phishing email
  • A social media link
  • A forum post
  • A normal browsing session

Nothing unusual appears on screen.

3. Silent WebSocket Connection to Localhost

Hidden JavaScript on that webpage attempts to open a WebSocket connection to the developer’s localhost gateway.

Browsers block many cross-site requests. But they do not block WebSocket connections to localhost.

The user sees nothing.

4. Brute-Forcing the Gateway Password

The gateway required a password. That sounds secure.

But there was no proper rate limiting. This meant the malicious script could try many password combinations quickly until it found the correct one.

No lockout. No slowdown.

5. Automatic Trusted Device Registration

Once authenticated, the attacker registered as a trusted device.

Here’s the key issue: the gateway automatically approved local device registrations without asking the user for confirmation.

As Oasis Security pointed out,

"That misplaced trust has real consequences. The gateway relaxes several security mechanisms for local connections - including silently approving new device registrations without prompting the user. Normally, when a new device connects, the user must confirm the pairing. From localhost, it's automatic."

That silent approval gave the attacker admin-level access.

6. Full Control of the AI Agent

After registration, the attacker could:

  • Access logs
  • Read configuration data
  • See connected tools and systems
  • Send commands through the AI agent
  • Trigger actions across integrated services

At this point, the AI agent was no longer private. It became a remote-controlled system.

The most concerning part is that the user never received a warning. No popup. No approval request. No visible sign of compromise.

This is why ClawJacked matters. It shows how a small design decision around local trust can lead to full system control.

Why Localhost Is Not Automatically Safe

Many developers believe that if something runs on “localhost,” it is safe by default.

That assumption is wrong.

Localhost simply means the service runs on your own machine. It does not mean it is isolated from the browser. It does not mean other websites cannot interact with it.

In the ClawJacked case, the issue was simple. Browsers allow websites to open WebSocket connections to localhost. Unlike normal web requests, these connections are not blocked by default.

This creates a blind spot.

If you visit a malicious website, that page can quietly attempt to connect to services running on your machine. 

As Oasis Security explained, "Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn't block these cross-origin connections. So while you're browsing any website, JavaScript running on that page can silently open a connection to your local OpenClaw gateway. The user sees nothing."

In the case of OpenClaw, the gateway treated localhost as trusted. It even auto-approved new device registrations from local connections.

That misplaced trust made the attack possible.

The lesson is clear:

  • Local services must still enforce strict authentication
  • Rate limiting must be enabled
  • Device approvals should require user confirmation
  • Local traffic should never be assumed safe

Developers often secure public endpoints carefully, but forget that local runtimes can also become attack targets.

If an AI agent has access to logs, integrations, credentials, or automation tools, then localhost becomes a high-value target.

Local does not mean private. Local does not mean protected.

It only means the attacker has to reach your browser first.

The Bigger Problem: AI Agents Have a Massive Blast Radius

ClawJacked was not just a password issue.

It exposed something deeper.

AI agents are not simple scripts. They connect to email systems, Slack workspaces, file storage, databases, cloud platforms, and internal tools. They can read data. They can send messages. They can trigger actions.

That means when one agent is compromised, the impact spreads fast.

Security researchers from Bitsight and NeuralTrust have warned that exposed OpenClaw instances increase the attack surface. Every connected service becomes another possible entry point.

If an agent has access to:

  • Slack channels
  • Internal documents
  • Cloud dashboards
  • Customer data
  • API keys

Then a single compromise can move across systems.

There is also the risk of prompt injection.

If an attacker places malicious instructions inside an email, Slack message, or webpage, the AI agent may process it as normal content. If the agent trusts that input, it can execute actions based on it.

This turns content into commands.

That is why the damage from an AI agent compromise is greater than that from a normal web app breach. These agents often hold permissions that allow them to read, write, and act across many tools at once.

The more integrations you connect, the larger the blast radius becomes.

ClawJacked showed how easily access could be gained. The bigger concern is what happens after that access is granted.

The instructions because they looked valid

As Endor Labs noted,

"As AI agent frameworks become more prevalent in enterprise environments, security analysis must evolve to address both traditional vulnerabilities and AI-specific attack surfaces."

It Gets Worse: Log Poisoning & Prompt Injection

ClawJacked was not the only issue affecting OpenClaw.

Another vulnerability showed how attackers could manipulate the AI agent indirectly through log poisoning.

In this case, attackers could send crafted WebSocket requests to a publicly exposed instance and write malicious content into log files. That might not sound serious at first.

But OpenClaw agents read their own logs to troubleshoot tasks.

If an attacker inserts harmful instructions inside those logs, the agent may treat them as useful system information. Instead of seeing the text as untrusted input, the agent may interpret it as guidance.

As Eye Security explained,

"If the injected text is interpreted as meaningful operational information rather than untrusted input, it could influence decisions, suggestions, or automated actions. The impact would therefore not be 'instant takeover,' but rather: manipulation of agent reasoning, influencing troubleshooting steps, potential data disclosure if the agent is guided to reveal context, and indirect misuse of connected integrations."

Security researchers at Eye Security explained that this would not cause instant system takeover. Instead, it would influence the agent’s reasoning.

The risks include:

  • Changing troubleshooting steps
  • Steering the agent toward unsafe actions
  • Causing data disclosure
  • Triggering unintended integrations

This is known as indirect prompt injection.

Unlike direct access attacks, prompt injection works by manipulating how the AI thinks. The attacker does not need admin credentials. They only need the agent to process poisoned content.

The danger increases when agents are connected to sensitive systems. A single poisoned instruction hidden in logs, emails, or documents could prompt the agent to perform harmful actions.

This shows a key shift in security.

It is no longer just about protecting passwords and ports. It is also about protecting how AI agents interpret information.

When logs, emails, and messages become inputs to automated reasoning, every input must be treated carefully.

ClawHub: The Supply Chain Problem Nobody Talks About

The risks do not stop at the core runtime.

They also extend to the marketplace.

ClawHub is an open marketplace where users can download skills for OpenClaw agents. These skills expand the agent's capabilities. They can connect to services, automate tasks, and add new features.

Also read How to Run OpenClaw Safely Across Platforms (Windows, macOS, Linux)

That flexibility is powerful.

But it also creates a supply chain risk.

Security research found dozens of malicious skills uploaded to ClawHub. Some appeared harmless. Some were even marked as safe by automated scanners. But inside, they contained hidden commands.

One campaign delivered a macOS information stealer known as Atomic Stealer. According to researchers at Trend Micro, the attack worked in a simple way:

  • A skill appeared normal on the surface
  • It directed the agent to fetch installation instructions from a website
  • The instructions included a hidden command
  • That command downloaded and executed malware

The AI agent followed the instructions because they looked valid.

As Trend Micro explained,

"The infection chain begins with a normal SKILL.md that installs a prerequisite. The skill appears harmless on the surface and was even labeled as benign on VirusTotal. OpenClaw then goes to the website, fetches the installation instructions, and proceeds with the installation if the LLM decides to follow the instructions."
like untrusted code execution.

In another case, certain skills instructed other AI agents to store cryptocurrency wallet keys in plain text and route payments through attacker-controlled systems.

This was not just malware.

It was agent-to-agent manipulation.

Some malicious actors even created AI personas on agent-focused social platforms and directly promoted their harmful skills to other agents. They relied on default trust between automated systems.

That turns the marketplace into a risk zone.

If users install skills without carefully reviewing them, they hand control over to unknown authors. And because AI agents can act on instructions automatically, the impact can be immediate.

The key takeaway is simple:

When AI agents install third-party skills, they extend trust beyond their own systems. If that trust is misplaced, the compromise spreads quickly.

Microsoft’s Warning: Don’t Run This on Your Workstation

The risks around AI agents have become serious enough that Microsoft issued a clear warning.

OpenClaw should not be treated like a normal developer tool.

The Microsoft Defender Security Research Team made this clear,

"Because of these characteristics, OpenClaw should be treated as untrusted code execution with persistent credentials. It is not appropriate to run on a standard personal or enterprise workstation."

It should be treated like untrusted code execution.

That means it can behave in ways you do not expect, especially when it processes external input or runs third-party skills.

The Microsoft Defender Security Research Team highlighted a few core risks:

  • Credential exposure — the agent may access tokens, API keys, or saved credentials
  • Memory manipulation — injected content can change how the agent reasons
  • Host compromise — if the agent executes harmful instructions, your system can be affected

The main concern is how AI agents operate.

They are designed to read data, make decisions, and take actions across connected systems. If they are tricked into running malicious instructions, they can act with the same permissions you gave them.

That is why the team made a strong recommendation.

Do not run OpenClaw directly on your main workstation.

Instead, they suggest:

  • Use a separate virtual machine or dedicated system
  • Run the agent with limited permissions
  • Avoid connecting it to sensitive data or production systems
  • Keep a monitoring and recovery plan in place

The idea is simple.

If something goes wrong, you should be able to contain it.

This advice changes how we should think about AI agents. They are not just tools we install and forget. They are active systems that can interact with your environment.

If you run them without isolation, you are giving that system direct access to your machine.

How Not to Get ClawJacked (Practical Security Checklist)

ClawJacked was not a complex attack. It worked because basic protections were missing.

The good part is that most of the risk can be reduced with simple steps. You do not need advanced security tools. You need the right habits.

Here is what you should do.

Patch Immediately

Start with the basics.

Update OpenClaw to the latest version. The ClawJacked issue was fixed quickly, but only for users who applied updates.

Running older versions leaves the door open.

Make it a rule:

If your AI agent connects to anything important, updates cannot be delayed.

Isolate the Runtime

Do not run AI agents on your main machine.

Use a separate environment:

  • A virtual machine
  • A dedicated system
  • A container with strict limits

If something goes wrong, you want the damage to stay contained.

Isolation is one of the most effective ways to reduce risk.

Lock Down WebSocket Access

ClawJacked worked because the gateway accepted connections too easily.

You should:

  • Add rate limiting to prevent password guessing
  • Use strong authentication
  • Restrict which connections are allowed

Local traffic should not be trusted by default.

Require Approval for New Devices

The gateway auto-approved local connections. That made the attack silent.

Change this behavior if possible.

Every new device or session should require manual approval. If something connects, you should know about it.

Audit Skills Before Installing

Do not install skills blindly from marketplaces like ClawHub.

Before installing:

  • Check the source
  • Review what the skill does
  • Avoid running commands suggested in comments or forums

If a skill asks you to copy-paste commands into your terminal, treat it as risky.

Limit Agent Permissions

Your AI agent does not need access to everything.

Reduce permissions:

  • Remove unused integrations
  • Avoid connecting sensitive systems
  • Use separate accounts with limited access

If the agent gets compromised, limited permissions reduce the impact.

Treat Inputs as Untrusted

AI agents read logs, emails, messages, and web content.

Do not assume that input is safe.

  • Logs can be poisoned
  • Emails can contain hidden instructions
  • Web content can be crafted to mislead

The agent should not blindly act on any input without checks.

Monitor Activity

You should know what your agent is doing.

Watch for:

  • New device registrations
  • Unusual connections
  • Unexpected actions or commands

If something feels off, investigate quickly.

Have a Recovery Plan

Assume that failure is possible.

Prepare for it:

  • Know how to shut down the agent
  • Revoke API keys and tokens
  • Reset configurations
  • A fast response can limit damage.

Do not treat AI agents as small scripts. Treat them as systems with access to your environment.

Because that is exactly what they are.

SaaS Isn’t Dead — Reckless Self-Hosting Is

“SaaS is dead” sounds bold. It sounds like control has shifted back to builders.

But ClawJacked tells a different story.

Self-hosting gives freedom. It also gives responsibility.

When you run tools like OpenClaw, you are not just installing software. You are running a system that can read data, connect to services, and take actions on your behalf.

In SaaS, security is managed for you. Updates are automatic. Access is controlled. Monitoring is built in.

In self-hosted setups, all of that becomes your job.

If you skip even a few basic checks, the risk grows fast.

ClawJacked did not break advanced defenses. It used simple gaps:

  • Trusting localhost
  • Missing rate limits
  • Silent approvals
  • Broad permissions

That is all it took.

This is where the “SaaS is dead” idea falls apart.

SaaS is not just about convenience. It is also about managed security. When you move away from it, you must replace that layer yourself.

Otherwise, you are trading subscription costs for security risk. The real shift is not SaaS vs self-hosted.

It is managed systems vs unmanaged systems. And unmanaged systems fail quietly.

Also read OpenClaw (Clawdbot or Moltbot) AI Security Risks You Should Know Before Using It

Final Thoughts: AI Agents Need Security-First Thinking

AI agents are changing how work gets done.

They can read, write, analyze, and act across tools. That makes them powerful. It also makes them risky.

ClawJacked is a reminder that these systems should not be treated as experiments once they touch real data.

They need the same care as any production system.

That means:

  • Secure defaults
  • Limited access
  • Continuous updates
  • Clear monitoring

The biggest mistake is thinking, “It’s just running locally.”

Local does not mean safe. If an AI agent can take action on your behalf, it should be protected like any system with access to your data.

Because once it is compromised, it is not just a bug. It is access.