OpenClaw (Clawdbot or Moltbot) AI Security Risks You Should Know Before Using It

Published on
January 29, 2026
Subscribe to our newsletter
Read about our privacy policy.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

OpenClaw (Clawdbot or Moltbot) entered the AI world with rare momentum. An open-source personal AI assistant that runs locally, takes real actions, and costs nothing quickly attracted developers, founders, and automation enthusiasts. For many, it felt like the first agentic AI that actually worked. Yet the same capabilities that made Clawdbot (Moltbot) powerful also exposed users to serious security risks that became impossible to ignore.

Built by Peter Steinberger, the AI assistant originally known as Clawdbot has gone through multiple rebrands following trademark concerns raised by Anthropic. The project was first renamed Moltbot and, shortly afterward, rebranded again as OpenClaw.

Despite the name changes, the underlying system remains the same. OpenClaw runs directly on a user’s machine with deep system-level access. It can read and write files, execute commands, control browsers, and interact with messaging platforms. That level of autonomy shifts trust away from cloud providers and places it entirely on individual configuration, operational discipline, and ongoing security hygiene.

Security researchers soon found that many OpenClaw (Clawdbot or Moltbot) deployments were exposed to the open internet, leaking API keys, credentials, private messages, and full command execution access. Investigations by figures like Jamieson O’Reilly and firms such as SlowMist and Hudson Rock showed that these were not theoretical concerns but active, observable failures.

This article examines Clawdbot’s confirmed security issues, why its architecture amplifies risk, and what these findings mean for anyone considering agentic AI in 2026.

Why Clawdbot’s OpenClaw (Clawdbot or Moltbot) Architecture Creates Security Risks

Clawdbot’s security issues do not come from a single bug. They stem from how the system is designed. At its core, OpenClaw (Clawdbot or Moltbot) follows a local-first, agentic architecture that deliberately breaks many of the security boundaries modern operating systems rely on.

Unlike cloud-based AI platform that operate within restricted environments, OpenClaw (Clawdbot or Moltbot) runs directly on a user’s machine with shell-level permissions. This allows it to read and write files, execute scripts, run terminal commands, and control web browsers. The agent must hold these privileges to perform tasks like managing emails, handling credentials, or completing purchases. That same access dramatically increases the blast radius when something goes wrong.

The assistant also acts as a bridge between local resources and external services. Through its gateway and control interfacepr, OpenClaw (Clawdbot or Moltbot) connects large language models to messaging platforms and third-party APIs. When misconfigured, these gateways have been exposed to the public internet, effectively turning a personal assistant into a remotely accessible control plane. Security researchers repeatedly found instances where authentication was missing or bypassed due to proxy misconfigurations.

Another architectural concern lies in trust assumptions. OpenClaw (Clawdbot or Moltbot) was designed with developer convenience in mind, often trusting localhost connections originating from “localhost” (127.0.0.1) and assuming the operator understands networking, reverse proxies, and authentication layers. When users deploy the system behind proxies without fully understanding how traffic is forwarded, internal safeguards can fail silently. The software behaves as designed, but the environment violates its threat model.

This design also makes Clawdbot (Moltbot)OpenClaw (Clawdbot or Moltbot) uniquely vulnerable to prompt injection and social engineering. Because the agent interprets instructions from emails, messages, and web content, malicious inputs can trigger unintended actions. Unlike traditional chatbots, these actions do not stop at text generation. They can include copying files, transmitting private messages, or executing commands without the user realizing it.

Security experts have pointed out that this model reverses decades of progress in endpoint protection. Sandboxing, permission separation, and process isolation exist to limit damage. a tools like OpenClaw (Clawdbot or Moltbot) intentionally bypass those constraints to deliver value. The result is a powerful assistant that demands expert-level operational security from anyone running it.

Confirmed OpenClaw (Clawdbot or Moltbot) Security Issues and Real-World Exposures

The security risks surrounding OpenClaw (Clawdbot or Moltbot) are not hypothetical. Multiple independent investigations confirmed active exposures affecting hundreds of real deployments, with attackers able to access credentials, private messages, and system-level controls.

Security researcher Jamieson O’Reilly was among the first to publicly document the issue after discovering that many OpenClaw (Clawdbot or Moltbot) control servers were accessible without authentication. By using internet scanning tools such as Shodan, he was able to locate exposed instances in seconds by searching for the distinctive “Clawdbot Control” interface.

These exposed gateways granted access to sensitive data including API keys, OAuth secrets, bot tokens, signing keys, and complete conversation histories across connected messaging platforms. In several cases, attackers could send messages as the user, execute tools, and run system commands remotely. Some instances were even running with elevated privileges, amplifying the potential damage.

Blockchain security firm SlowMist confirmed that hundreds of unauthenticated OpenClaw (Clawdbot or Moltbot) gateways were publicly accessible. Their analysis warned that exposed deployments could lead to credential theft, private message leaks, and remote code execution. These findings aligned with reports from Hudson Rock, which highlighted that secrets stored by the assistant were often saved in plaintext files on local machines.

The root cause of many exposures was not a single exploit but configuration mistakes. OpenClaw (Clawdbot or Moltbot) gateway was designed to trust localhost connections, assuming safe, local use. When users placed the system behind reverse proxies without correctly configuring trusted IPs, authentication checks were silently bypassed. Traffic appeared local even when coming from the open internet.

In parallel, supply chain risks emerged. O’Reilly demonstrated that OpenClaw (Clawdbot or Moltbot) skills library could be abused by uploading a malicious package and artificially inflating its popularity. Developers unknowingly installed the poisoned skill, proving that attackers could execute code on Moltbot instances without direct access.

These incidents revealed a consistent pattern. OpenClaw (Clawdbot or Moltbot) concentrates high-value assets in a single agent that has broad authority. When exposed, attackers do not just steal data. They inherit the agent’s capabilities, turning a personal assistant into a persistent backdoor.

Prompt Injection and Social Engineering Risks in OpenClaw (Clawdbot or Moltbot)

Prompt injection is one of the most serious risks facing OpenClaw (Clawdbot or Moltbot) because of how the assistant interprets information. Unlike traditional chatbots, OpenClaw (Clawdbot or Moltbot) does not just generate text. It observes messages, emails, and web content, then decides what actions to take based on that input.

Researchers demonstrated that attackers can hide instructions inside seemingly harmless content. An email or message might appear normal to a human reader, yet include embedded prompts that instruct the agent to copy files, send private messages, or transmit credentials to an external server. Once OpenClaw (Clawdbot or Moltbot) processes the message, it treats those instructions as legitimate tasks.

This risk increases because OpenClaw (Clawdbot or Moltbot) operates continuously and maintains long-term memory. Private conversations, credentials, and behavioral patterns are stored locally so the assistant can remain helpful over time. When prompt injection succeeds, attackers do not need to exploit traditional software vulnerabilities. They simply persuade the agent to misuse the access it already has.

Security researchers showed how this could be abused to extract API keys and cryptographic secrets in minutes. In one documented test, an attacker asked the agent to “check” an email and relay specific data back to an external endpoint. The assistant complied because the request appeared consistent with its role and permissions.

Social engineering compounds the problem. Users often trust the assistant to handle sensitive tasks such as reading emails, managing accounts, or interacting with financial services. If an attacker gains indirect influence over the assistant’s inputs, they can manipulate its behavior without touching the system directly. The user may never see the malicious instruction, yet the agent executes it faithfully.

This threat model is fundamentally different from traditional malware. There is no exploit chain, no suspicious binary, and no obvious intrusion. The attack lives entirely within natural language. As long as OpenClaw (Clawdbot or Moltbot) has system access and the authority to act, prompt injection remains a persistent risk that cannot be fully eliminated through patches alone.

Data Storage, Plaintext Secrets, and Malware Exposure

Clawdbot’s local-first design shifts data storage away from the cloud, but it does not automatically make that data safe. Investigations revealed that many secrets handled by the assistant are stored directly on the local filesystem in plaintext formats such as Markdown and JSON.

These files can include API keys, OAuth tokens, messaging platform credentials, conversation histories, and behavioral memory used by the agent to improve over time. While this approach simplifies development and debugging, it creates a single point of failure on the host machine. If that system is compromised, everything the assistant has ever accessed becomes available to the attacker.

Cybersecurity firm Hudson Rock warned that this storage model aligns perfectly with the capabilities of modern infostealer malware. Malware families such as Redline, Lumma, and Vidar are already optimized to search for local credential stores, browser data, and configuration files. Adding an AI assistant that aggregates high-value secrets in predictable directories increases the payoff for attackers.

The risk grows further when OpenClaw (Clawdbot or Moltbot) instances are exposed to the internet or run on always-on hosts like dedicated machines and home servers. If malware gains a foothold, attackers can silently harvest stored secrets and continue monitoring new activity as the assistant updates its memory. In some scenarios, attackers could also modify stored instructions, effectively turning the agent into a long-term backdoor that continues leaking data even after the initial compromise.

Unlike encrypted vaults or containerized environments, OpenClaw (Clawdbot or Moltbot) does not enforce encryption at rest by default. The security of stored data depends almost entirely on the operating system and the user’s ability to keep the host clean. For non-expert users, this assumption often fails in practice.

This storage model illustrates a broader issue with agentic AI. By concentrating sensitive data, long-term memory, and execution power in a single process, OpenClaw (Clawdbot or Moltbot) amplifies the impact of any breach. A standard chatbot leak exposes conversations. A compromised agent exposes an entire digital life.

Supply Chain Risks Through Skills and Extensions

Clawdbot’s extensibility is one of its strongest selling points, but it also introduces a high-risk supply chain problem. The assistant supports third-party skills that expand what the agent can do, from handling new services to automating custom workflows. Each skill runs with the same authority as the core agent.

Security researcher Jamieson O’Reilly demonstrated how this model can be abused by publishing a skill to Clawdbot’s skills library and artificially inflating its popularity. Developers across multiple countries downloaded the package, believing it to be safe based on its apparent adoption. The skill itself was intentionally benign, but it proved that arbitrary code execution on live Moltbot instances was possible through the supply chain alone.

At the time of investigation, the skills library treated all uploaded code as trusted by default. There was no mandatory review process, no sandboxing, and no permission scoping to limit what a skill could access. Once installed, a skill inherited full access to files, credentials, messaging integrations, and command execution. From an attacker’s perspective, this removes the need to breach a system directly.

Supply chain attacks are particularly dangerous because they scale. A single poisoned extension can spread rapidly as developers copy configurations, share recommendations, and automate installations. In the context of an agentic AI tool, the attacker does not just gain passive access. They gain an autonomous operator capable of executing instructions long after the initial installation.

This risk is not unique to OpenClaw (Clawdbot or Moltbot), but the consequences are amplified by its architecture. Traditional software plugins might steal data or modify behavior. A compromised OpenClaw (Clawdbot or Moltbot) skill can impersonate the user, manipulate communications, execute commands, and selectively exfiltrate information over time.

For users without the ability to audit code and monitor behavior continuously, this creates an uneven risk profile. The system rewards technical sophistication while exposing casual adopters to silent compromise. As agentic AI ecosystems grow, supply chain security becomes as important as endpoint protection, and OpenClaw (Clawdbot or Moltbot) currently places that burden almost entirely on the user.

Is OpenClaw (Clawdbot or Moltbot) Safe for Non-Technical Users?

OpenClaw (Clawdbot or Moltbot) is not designed for casual users, even though its popularity suggests otherwise. While installation may look simple on the surface, safe operation depends on a level of technical understanding that many users do not have. This mismatch is where most security failures begin.

Running OpenClaw (Clawdbot or Moltbot) securely requires knowledge of networking, authentication, reverse proxies, and access controls. Users must understand how localhost trust works, how proxies forward traffic, and how exposed ports can silently bypass safeguards. Without that understanding, it is easy to deploy a system that appears functional while remaining open to the internet.

Security experts have repeatedly warned that this gap between usability and operational complexity makes OpenClaw (Clawdbot or Moltbot) risky outside of expert environments. Eric Schwake, director of cybersecurity strategy at Salt Security, highlighted that many users fail to track which personal and corporate tokens they have shared with the assistant. Over time, this creates blind spots where credentials remain active, exposed, and forgotten.

The risk is not limited to misconfiguration. Even correctly deployed instances rely on assumptions that break down in home and small business environments. Malware infections, shared machines, and weak endpoint security can all undermine Clawdbot’s local-first model. Once the host is compromised, the assistant’s stored secrets and memory become low-effort targets.

Some security leaders have gone further. Heather Adkins, vice president of security engineering at Google Cloud, advised users to avoid installing OpenClaw (Clawdbot or Moltbot) altogether, arguing that the risk profile is unacceptable for most people. Other researchers questioned whether it is reasonable to trust any system with full shell access and persistent memory, regardless of intent.

For technically skilled operators who understand least-privilege design, network isolation, and continuous monitoring, OpenClaw (Clawdbot or Moltbot) can be controlled. For everyone else, it places disproportionate trust in perfect execution. The assistant does not forgive mistakes, and mistakes are inevitable.

Why Clawdbot Changed Its Name to Moltbot, Then to OpenClaw

Clawdbot’s rapid name changes were not driven by product strategy or security fixes. They were the result of trademark pressure and legal risk management.

The project was originally released as Clawdbot, an open-source agentic AI assistant built by Peter Steinberger. Shortly after the tool went viral, Anthropic raised trademark concerns over the similarity between “Clawdbot” and its flagship AI model name, Claude. While no lawsuit was publicly filed, the issue was significant enough to prompt an immediate rebrand.

Steinberger first renamed the project to Moltbot, a reference to a lobster shedding its shell. The intent was to preserve brand continuity while distancing the project from potential trademark conflict. Importantly, this change was cosmetic. The underlying codebase, architecture, and security posture remained unchanged.

Within days, the project was renamed again to OpenClaw. This second rebrand was aimed at fully eliminating any residual naming ambiguity and reinforcing the project’s open-source identity. Steinberger confirmed that OpenClaw would be the long-term name going forward.

Safer Alternatives for Agentic AI in 2026

The security problems exposed by OpenClaw (Clawdbot or Moltbot) are not an argument against agentic AI itself. They highlight the need for better design choices around access, isolation, and data ownership. In 2026, safer OpenClaw (Clawdbot or Moltbot) alternative focus on limiting blast radius rather than maximizing autonomy at all costs.

A safer agentic AI system starts with strict separation between decision-making and execution. Instead of granting shell-level access, modern tools rely on scoped permissions, controlled connectors, and audited actions. This allows assistants to be helpful without inheriting unrestricted control over a user’s system or credentials.

Another key shift is moving away from opaque local memory stores. Secure platforms encrypt data at rest, isolate workspaces, and make it clear which documents, APIs, or systems the assistant can access. This reduces the risk of malware harvesting secrets or attackers persisting silently inside long-lived memory files.

Platforms like Knolli take a different approach to agentic AI. Instead of running a general-purpose agent with full system access, Knolli allows teams to build private AI copilots around specific data, documents, and workflows. Access is explicit, scoped, and visible. The assistant answers questions, generates structured outputs, and supports daily work without acting as a hidden system operator.

This model favors predictability over raw autonomy. There is no need for reverse proxies, exposed gateways, or persistent shell access. Teams can deploy copilots in controlled environments, connect only the data they choose, and maintain clear boundaries between AI reasoning and execution. For businesses handling sensitive research, internal documents, or client data, this approach aligns better with modern security expectations.

Agentic AI is moving toward least-privilege design. The future belongs to assistants that respect security boundaries rather than dismantle them. OpenClaw (Clawdbot or Moltbot) showed what is possible. Safer platforms show what is practical.

Final Verdict: Should You Use OpenClaw (Clawdbot or Moltbot) in 2026?

OpenClaw (Clawdbot or Moltbot) proves that agentic AI can work. It can read emails, manage accounts, execute commands, and coordinate daily tasks with minimal prompting. From a capability standpoint, it represents a meaningful step forward. From a security standpoint, it represents a sharp warning.

The core issue is not a single vulnerability or a temporary misconfiguration. Clawdbot’s risk profile comes from design choices that concentrate authority, memory, and execution power in one always-on agent. When exposed through misconfiguration, malware, supply chain compromise, or prompt injection, attackers inherit the same privileges the user granted the assistant. That makes failures catastrophic rather than contained.

For experienced operators who understand network isolation, proxy hardening, secret rotation, and continuous monitoring, OpenClaw (Clawdbot or Moltbot) can be run with acceptable risk in controlled environments. Even then, it requires ongoing vigilance and a willingness to accept that no setup is perfectly secure.

For most users, the tradeoff is unfavorable. Non-technical operators are asked to manage enterprise-grade security decisions while trusting an AI system with unrestricted system access. Multiple security leaders and researchers have warned that this gap between power and protection is where harm occurs.

In 2026, agentic AI does not need to operate with full system access to be useful. Safer models now exist that provide assistance without dismantling security boundaries. Until tools like OpenClaw (Clawdbot or Moltbot) adopt strict isolation, encrypted storage, permission scoping, and hardened defaults, they remain better suited for experimentation than everyday use.

Exploring AI Copilots Built With Security Boundaries

Build a secure, work-ready AI copilot with Knolli—without granting AI full system access. Create controlled workflows, connect approved data sources, and deploy copilots designed for reliability, privacy, and everyday business use.

Build a Secure AI Copilot