Autonomous AI agents coordinating, sharing code, and taking action without human oversight may sound like science fiction, but the Moltbook and Clawdbot experiments show it’s already happening.
While much of the public conversation has focused on the novelty of “AI agents in the wild,” security leaders from Salt Security warn the real issue is far more mundane, yet potentially more dangerous.
According to Salt Security, these experiments expose a growing visibility and governance gap at the API layer, where autonomous agents operate using legitimate credentials and trusted access.
“What people interpreted as ‘emergent AI behaviour’ was really just API-driven automation operating at scale,” says Eric Schwake, director of cybersecurity strategy at Salt Security.
“From a security perspective, autonomy isn’t intelligence; it’s more about speed. And speed amplifies risk when the underlying APIs aren’t visible or governed properly.”
Why organisations should pay attention
The Moltbook and Clawdbot experiments act as a preview of what will soon be commonplace inside enterprises, as agentic AI is embedded into SaaS platforms, DevOps pipelines, customer service systems and internal tooling. Salt Security highlights three key risks:
- Invisible attack surfaces – AI agents communicate entirely via machine-to-machine API calls, often bypassing traditional security tools. Many organisations don’t know which APIs their agents are accessing or what data they can reach.
- Authenticated access abuse – Autonomous agents operate with valid credentials and permissions, making them attractive targets for attackers. A single compromised agent can be manipulated to exfiltrate data, commit fraud, or trigger unauthorised actions, all while appearing “legitimate” in logs (see the sketch after this list).
- Loss of governance and accountability – Without clear identity, provenance and behavioural baselines, organisations cannot prove what an agent did, why it did it, or whether it complied with internal policy or regulation.
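To make that “legitimate in logs” point concrete, consider a minimal, hypothetical sketch in Python (the endpoint, credential and payload are illustrative, not taken from the experiments): each autonomous step is simply an authenticated API call, indistinguishable in standard access logs from any other service-to-service request.

```python
import requests  # any HTTP client behaves the same way here

# Hypothetical service credential issued to the agent (e.g. from a secrets store).
AGENT_TOKEN = "svc-agent-token"  # placeholder value

def agent_step(task: dict) -> dict:
    """One autonomous step: the agent calls an internal API with its own credentials.

    The endpoint and payload below are illustrative, not from the article.
    """
    resp = requests.post(
        "https://internal.example.com/api/v1/orders",  # hypothetical internal API
        json={"action": task["action"], "target": task["target"]},
        headers={"Authorization": f"Bearer {AGENT_TOKEN}"},  # valid, trusted token
        timeout=10,
    )
    # To the access log this is just another authenticated POST; nothing marks it
    # as agent-initiated, which is why a compromised agent still looks legitimate.
    return resp.json()
```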
“When you remove the human from the loop, you remove the manual gatekeeper,” Schwake adds. “If the APIs an agent relies on aren’t secured, that ‘autonomous’ system simply becomes a force multiplier for attackers.”
Not a future AI problem, but a present-day security failure
Salt Security stresses that this is not about rogue AI or runaway intelligence. Instead, it reflects long-standing weaknesses in API visibility, credential management, and runtime protection, now magnified by autonomous systems acting at machine speed.
“The narrative of ‘uncontrolled AI’ collapses as soon as you inspect the backend,” says Schwake. “Agents don’t rebel. They simply follow the API paths they’re given. The real danger isn’t a rogue mind, it’s a rogue API call.”
How organisations should prepare
As agentic AI becomes the norm, Salt Security advises organisations to rethink how they secure automation (a minimal illustration of all three controls follows this list):
- See it: Maintain continuous inventory of every API an AI agent can access, including shadow and dynamically generated APIs
- Govern it: Enforce least-privilege access, identity, and contextual policy for autonomous systems
- Protect it: Monitor behavioural patterns to detect anomalous or abusive agent activity in real time
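As a rough illustration of how those three controls can sit together at the API layer, the Python sketch below (agent names, endpoints and thresholds are illustrative assumptions, not Salt Security’s product) keeps a running inventory of the endpoints each agent identity touches, enforces a least-privilege allowlist per agent, and flags bursts of calls that deviate from a simple behavioural baseline.

```python
import time
from collections import defaultdict, deque

class AgentApiGuard:
    """Illustrative per-agent inventory, allowlist, and rate-anomaly check."""

    def __init__(self, allowlists: dict[str, set[str]], max_calls_per_minute: int = 60):
        self.allowlists = allowlists            # agent_id -> permitted endpoints (govern it)
        self.inventory = defaultdict(set)       # agent_id -> endpoints actually observed (see it)
        self.recent = defaultdict(deque)        # agent_id -> timestamps of recent calls (protect it)
        self.max_calls_per_minute = max_calls_per_minute

    def check(self, agent_id: str, endpoint: str) -> tuple[bool, str]:
        now = time.time()
        self.inventory[agent_id].add(endpoint)  # continuous inventory of observed API usage

        # Least-privilege policy: deny anything outside the agent's allowlist.
        if endpoint not in self.allowlists.get(agent_id, set()):
            return False, "blocked: endpoint not in agent allowlist"

        # Simple behavioural baseline: flag bursts above the per-minute ceiling.
        window = self.recent[agent_id]
        window.append(now)
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) > self.max_calls_per_minute:
            return False, "flagged: call rate exceeds behavioural baseline"

        return True, "allowed"

# Hypothetical usage: one agent identity, two permitted endpoints.
guard = AgentApiGuard({"invoice-bot": {"/api/v1/invoices", "/api/v1/customers"}})
print(guard.check("invoice-bot", "/api/v1/invoices"))  # (True, "allowed")
print(guard.check("invoice-bot", "/api/v1/payments"))  # (False, "blocked: ...")
```

In practice the inventory would feed discovery of shadow and dynamically generated APIs, while the allowlist and behavioural baseline would typically be enforced at a gateway or service mesh rather than in application code.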
“You can’t scale AI innovation without securing the API fabric underneath it,” Schwake concludes. “Every ‘decision’ an agent makes is ultimately an API call with real-world consequences for data, trust and compliance.”