Larry Davis, Ph.D.
Chief AI Officer
Agentic AI has become the center of conversation in artificial intelligence. The latest trend is OpenClaw, a framework that’s quickly redefining what it means to interact with AI agents. There have been multiple reports of security issues with OpenClaw in the news this week, so we thought it worthwhile to ask our Chief AI Officer, Larry Davis, to provide insights on what this means and why it is relevant. Throughout, we’ll link to more details on these topics, like OpenClaw, a collection of software tools for communicating with AI agents.
What is Agentic AI?
An agent is a software program that can take in information, make decisions based on that information, and then act toward a predefined goal, often without human intervention. Traditional agents follow fixed rules and operate within firm boundaries.
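To make the sense, decide, act loop concrete, here is a minimal sketch of a traditional rule-based agent. The thermostat scenario and every name in it are illustrative assumptions, not part of OpenClaw or any real product:

```python
# A minimal "traditional" agent: fixed rules, firm boundaries.
# It can only ever choose among the branches written below.

def decide(temperature_f: float) -> str:
    """Fixed decision rules; the agent cannot act outside these branches."""
    if temperature_f < 65:
        return "turn_heat_on"
    if temperature_f > 75:
        return "turn_ac_on"
    return "do_nothing"

def run_agent(readings):
    # Sense -> decide -> act, with no reasoning beyond the rules above.
    return [decide(r) for r in readings]

print(run_agent([60, 70, 80]))  # ['turn_heat_on', 'do_nothing', 'turn_ac_on']
```

The key property is predictability: given the same reading, this agent always takes the same action, and a situation outside its rules simply falls through to "do_nothing."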
Agentic AI refers to traditional agents augmented with large language models (LLMs), such as ChatGPT or Claude, which grant them reasoning capabilities. Because it has a “brain,” an AI agent isn’t limited to following explicit instructions; it can interpret intent, adapt to unknown situations, make decisions, and take actions where a traditional agent would get stuck. The advantage is that AI agents have far greater capabilities than traditional agents, and they are often deployed as “digital teammates” or “digital assistants.” One of the challenges is that an AI agent’s behavior can be unpredictable; researchers still can’t fully explain how an LLM arrives at its decisions.
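The same loop with an LLM as the “brain” can be sketched as follows. Here `ask_llm` is a stub standing in for a real model API call; in a real system its output would be open-ended and nondeterministic, which is exactly where both the flexibility and the unpredictability come from:

```python
# Sketch: the sense -> decide -> act loop, with the decision delegated to an
# LLM. `ask_llm` is a deterministic stand-in stub for illustration only; a
# real agent would call a model API and get free-form, nondeterministic text.

def ask_llm(prompt: str) -> str:
    # Stub decision logic. A real LLM would interpret intent and handle
    # situations no rule anticipated, but its choices cannot be enumerated
    # in advance the way a rule table can.
    return "schedule_meeting" if "meeting" in prompt else "draft_reply"

def agent_step(observation: str) -> str:
    prompt = f"Goal: assist the user. Observation: {observation}. Next action?"
    return ask_llm(prompt)

print(agent_step("Email asks to set up a meeting next week"))  # schedule_meeting
```

Note the trade: the rule-based agent's behavior is fully auditable from its source code, while the LLM-backed agent's behavior depends on whatever the model returns at runtime.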
OpenClaw Hits Prime Time
OpenClaw’s rise is tied to a simple but powerful idea: agents should be able to interact with us through the tools we already use. With OpenClaw, agents can receive instructions and communicate status and results using messaging applications. From your phone, you can message an agent running on your desktop as easily as texting a human friend. The OpenClaw framework can be used on every major platform, lowering the barrier to experimentation and deployment.
That accessibility is fueling a wave of creativity. People are using OpenClaw to direct agents to:
- Negotiate lower insurance rates on their behalf
- Execute stock trades (sometimes profitably, sometimes disastrously)
- Participate in agent‑only social media channels
Creation Without Comprehension Is a Risk Multiplier
The growth in usage of OpenClaw and other agent tools is astonishing, a little unsettling, and absolutely a sign that we’ve crossed into a new phase of AI adoption. People are using AI to quickly build agents and applications without understanding the full ramifications, such as giving an agent the power to conduct financial transactions on their behalf.
I’m genuinely excited about the creative explosion AI is enabling. The ability to build applications and to automate and orchestrate complex workflows with natural language is transformative.
But the fundamentals still matter: basic cybersecurity standards, software development principles, operational safeguards, and clear boundaries of authority and access must be applied and measured. And even trained software developers don’t always get this right. We can’t expect people with no experience to know what to build into their agents to protect their information and systems.
Skipping these steps in the rush toward “vibe coding” is how cool projects and productivity boosts become huge vulnerabilities.
Questions Every Team Should Be Asking
If you’re deploying AI agents or other systems that use LLMs, or even thinking about it, your implementation team should be able to answer the following questions, clearly and confidently:
- Agent protocol — How do our agents communicate, authenticate, and verify instructions? How does one agent know another is genuine?
- Agent integrity — How do we know if our agent is using and responding with accurate information?
- Minimum necessary access — What is the least amount of access and authority our agent needs to accomplish its tasks, and how is that enforced?
- Data protection — What data does the agent touch, store, or transmit, and how is it secured?
- Disaster recovery — What happens when the agent fails, misbehaves, or is compromised?
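Two of the questions above, verifying that instructions are genuine and enforcing minimum necessary access, can be sketched in a few lines. Everything here is a hypothetical illustration (the shared key, tool names, and `dispatch` helper are all invented for this example), not OpenClaw's actual mechanism:

```python
# Sketch of two safeguards, with hypothetical names throughout:
# (1) verifying an instruction came from a trusted sender (HMAC signature),
# (2) enforcing least privilege with an explicit tool allowlist.
import hashlib
import hmac

SHARED_SECRET = b"rotate-me"  # assumption: a pre-shared key per agent pair

def sign(message: bytes) -> str:
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign(message), signature)

# Least privilege: the agent gets only the tools its task requires.
ALLOWED_TOOLS = {"read_calendar", "send_summary"}  # no payments, no trading

def dispatch(message: bytes, signature: str, tool: str) -> str:
    if not verify(message, signature):
        return "rejected: unverified sender"
    if tool not in ALLOWED_TOOLS:
        return "rejected: tool not in allowlist"
    return f"ok: running {tool}"

msg = b"summarize today's meetings"
print(dispatch(msg, sign(msg), "send_summary"))   # ok: running send_summary
print(dispatch(msg, sign(msg), "execute_trade"))  # rejected: tool not in allowlist
print(dispatch(msg, "bad-signature", "read_calendar"))  # rejected: unverified sender
```

The point is that these checks live in code, outside the model: even if an agent is tricked into requesting `execute_trade`, the allowlist refuses it regardless of what the LLM decided.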
These aren’t optional questions; they’re the foundation of responsible AI deployment. And simply stating a requirement in a prompt doesn’t guarantee it will be implemented or respected.
Question For You
The future of Agentic AI is bright, but only if we build it with intention. As you explore what’s possible, what’s exciting, and what’s newly within reach, what part of your current or planned agent workflows feels the least defined from a security or governance standpoint?