Chapter 25: API Integration
OpenClaw exposes a REST API that lets you interact with the gateway programmatically: send messages, manage sessions, trigger tasks, and query stats from any application. This chapter covers authentication, the full API reference, streaming, webhooks, the official SDK, and real-world integration patterns.
Why Use the API?
The REST API lets you integrate OpenClaw into any system:
- Internal dashboards: send messages and display responses in your own UI
- Automation pipelines: trigger AI tasks from CI/CD, cron jobs, or scripts
- Multi-app workflows: connect OpenClaw to Zapier, n8n, or Make
- Custom bots: build your own chat interface on top of OpenClaw's AI
- Monitoring: pull stats and session data into your analytics stack
Enabling the API
The REST API is enabled by default on the same port as the gateway:
{
"api": {
"enabled": true,
"port": 3000,
"prefix": "/api/v1",
"authentication": "bearer",
"cors": {
"origins": ["https://your-dashboard.com"],
"methods": ["GET", "POST", "PATCH", "DELETE"]
},
"keys": [
{
"key": "${API_KEY_1}",
"label": "My Application",
"workspaces": ["team", "admins"],
"rateLimit": 100
}
]
}
}
All API requests require a Bearer token in the Authorization header:
curl -H "Authorization: Bearer your-api-key" http://localhost:3000/api/v1/health
Authentication & API Keys
Generate a Key
openclaw api-key create --label "My App" --workspaces team,admins
Output:
API Key: oclaw_live_abc123def456...
Label: My App
Workspaces: team, admins
Created: 2026-04-30
Save this key immediately; it is only shown once.
Manage Keys
# List existing keys (keys are masked)
openclaw api-key list
# Revoke a specific key
openclaw api-key revoke oclaw_live_abc123def456
# Rotate a key (creates new, revokes old)
openclaw api-key rotate oclaw_live_abc123def456
Key Scopes
Restrict a key to specific workspaces and rate limits:
{
"key": "${READONLY_KEY}",
"label": "Analytics Service",
"workspaces": ["*"],
"permissions": ["stats:read", "sessions:read"],
"rateLimit": 30
}
| Permission | Access |
|---|---|
| messages:write | Send messages |
| messages:read | Read message history |
| sessions:read | List and read sessions |
| sessions:delete | Clear sessions |
| workspaces:read | List workspaces |
| workspaces:write | Create/modify workspaces |
| stats:read | Read usage statistics |
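To reason about what a scoped key can do, the workspace and permission checks above can be modeled as a small helper. This is an illustrative sketch, not OpenClaw's actual enforcement code; `key_allows` and the sample key dict are hypothetical, assuming the config schema shown above ("*" in workspaces matches any workspace, and an absent permissions list grants all permissions):

```python
def key_allows(key: dict, workspace: str, permission: str) -> bool:
    """Check whether an API key grants a permission in a workspace.

    Assumes the key dict matches the config schema above: a 'workspaces'
    list (where '*' matches any workspace) and an optional 'permissions'
    list (absent means all permissions are granted).
    """
    workspaces = key.get("workspaces", [])
    if "*" not in workspaces and workspace not in workspaces:
        return False
    permissions = key.get("permissions")
    return permissions is None or permission in permissions

readonly_key = {
    "label": "Analytics Service",
    "workspaces": ["*"],
    "permissions": ["stats:read", "sessions:read"],
}
print(key_allows(readonly_key, "team", "stats:read"))      # True
print(key_allows(readonly_key, "team", "messages:write"))  # False
```

A key with no permissions list (like the "My Application" key earlier in the chapter) passes every permission check, which matches the idea that scopes are an opt-in restriction.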
Core Endpoints
Health Check
GET /api/v1/health
Response:
{
"status": "ok",
"version": "1.4.2",
"uptime": 3600,
"channels": {
"telegram": "connected",
"slack": "connected",
"whatsapp": "connected"
},
"redis": "connected",
"memoryMB": 142
}
Send a Message
POST /api/v1/messages
Content-Type: application/json
Authorization: Bearer your-api-key
{
"workspace": "team",
"userId": "api-user",
"message": "Summarize the latest GitHub activity for the main repo",
"sessionId": "optional-session-id",
"context": {
"timezone": "America/New_York",
"language": "en"
}
}
Response:
{
"messageId": "msg_abc123",
"response": "Here is the latest GitHub activity...",
"tokensUsed": 842,
"latencyMs": 2100,
"sessionId": "sess_xyz789",
"toolsInvoked": ["github"],
"finishReason": "stop"
}
Streaming Response
For long responses, use streaming (Server-Sent Events) to show output as it's generated:
POST /api/v1/messages/stream
Content-Type: application/json
Authorization: Bearer your-api-key
{
"workspace": "team",
"userId": "api-user",
"message": "Write a detailed technical spec for a REST API"
}
SSE response format:
data: {"type":"token","content":"Here"}
data: {"type":"token","content":" is"}
data: {"type":"tool_use","tool":"github","input":{"action":"list_issues"}}
data: {"type":"tool_result","tool":"github","output":"3 open issues"}
data: {"type":"token","content":" the spec:"}
data: {"type":"done","messageId":"msg_abc","tokensUsed":1842,"latencyMs":4200}
JavaScript client example:
async function streamMessage(message) {
const res = await fetch('/api/v1/messages/stream', {
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.OPENCLAW_API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
workspace: 'team',
userId: 'api-user',
message
})
});
const reader = res.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split('\n');
buffer = lines.pop(); // keep incomplete line in buffer
for (const line of lines) {
if (!line.startsWith('data: ')) continue;
const data = JSON.parse(line.slice(6));
if (data.type === 'token') process.stdout.write(data.content);
if (data.type === 'done') console.log('\n\nDone:', data.tokensUsed, 'tokens');
}
}
}
streamMessage('Explain Docker networking in 3 paragraphs');
Session Management
# List all active sessions
GET /api/v1/sessions
# Get a specific session
GET /api/v1/sessions/{sessionId}
# Get message history for a session
GET /api/v1/sessions/{sessionId}/history?limit=20&offset=0
# Clear a specific session
DELETE /api/v1/sessions/{sessionId}
# Clear all sessions in a workspace
DELETE /api/v1/sessions?workspace=team
Session object:
{
"id": "sess_xyz789",
"workspace": "team",
"userId": "api-user",
"createdAt": "2026-04-30T10:00:00Z",
"lastActiveAt": "2026-04-30T10:42:00Z",
"messageCount": 14,
"tokensUsed": 8420,
"channel": "api"
}
Workspace Management
# List all workspaces
GET /api/v1/workspaces
# Get a specific workspace
GET /api/v1/workspaces/{workspaceId}
# Create a new workspace
POST /api/v1/workspaces
{
"id": "new-team",
"label": "New Team",
"agent": "balanced",
"skills": ["web-search", "memory"]
}
# Update a workspace's allowlist
PATCH /api/v1/workspaces/{workspaceId}/allowlist
{
"add": ["telegram:123456789"],
"remove": ["telegram:987654321"]
}
# Delete a workspace
DELETE /api/v1/workspaces/{workspaceId}
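The allowlist PATCH behaves like set operations on the current allowlist. A client-side sketch of the same add/remove semantics; `apply_allowlist_patch` is a hypothetical helper, not an SDK function:

```python
def apply_allowlist_patch(current: list[str], patch: dict) -> list[str]:
    """Apply an {add, remove} patch to an allowlist, mirroring the
    PATCH /workspaces/{workspaceId}/allowlist semantics shown above."""
    result = set(current)
    result.update(patch.get("add", []))
    result.difference_update(patch.get("remove", []))
    return sorted(result)

current = ["telegram:987654321", "slack:U123"]
patch = {"add": ["telegram:123456789"], "remove": ["telegram:987654321"]}
print(apply_allowlist_patch(current, patch))
# ['slack:U123', 'telegram:123456789']
```

Because the operation is set-based, adding an ID that is already present or removing one that is absent is harmless, which makes the PATCH safe to retry.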
Stats and Usage
# Overall stats
GET /api/v1/stats
# Stats for a date range
GET /api/v1/stats?from=2026-04-01&to=2026-04-30
# Per-workspace breakdown
GET /api/v1/stats/workspaces
# Per-user breakdown
GET /api/v1/stats/users?workspace=team&limit=20
Response:
{
"period": { "from": "2026-04-01", "to": "2026-04-30" },
"totals": {
"messages": 4821,
"tokensIn": 2840000,
"tokensOut": 8920000,
"costUSD": 24.80,
"avgLatencyMs": 1840,
"uniqueUsers": 38
},
"byDay": [
{ "date": "2026-04-01", "messages": 142, "tokensOut": 280000 },
{ "date": "2026-04-02", "messages": 198, "tokensOut": 390000 }
]
}
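Derived metrics such as average tokens or cost per message are not returned directly, but they are easy to compute client-side from the totals. A sketch using the sample payload above:

```python
# Sample totals from the stats response shown above
stats = {
    "totals": {
        "messages": 4821,
        "tokensIn": 2_840_000,
        "tokensOut": 8_920_000,
        "costUSD": 24.80,
    }
}

t = stats["totals"]
tokens_per_msg = (t["tokensIn"] + t["tokensOut"]) / t["messages"]
cost_per_msg = t["costUSD"] / t["messages"]
print(f"avg tokens/message: {tokens_per_msg:.0f}")
print(f"avg cost/message:   ${cost_per_msg:.4f}")
```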
Webhooks (Outbound)
Configure OpenClaw to call your application when events happen:
{
"webhooks": {
"outbound": [
{
"url": "https://your-app.com/openclaw-events",
"secret": "${WEBHOOK_SECRET}",
"events": [
"message.received",
"message.sent",
"session.started",
"session.ended",
"error.agent",
"channel.disconnected"
],
"retries": 3,
"timeoutMs": 5000
}
]
}
}
Webhook Payload
{
"event": "message.sent",
"timestamp": "2026-04-30T10:42:00Z",
"data": {
"messageId": "msg_abc123",
"workspace": "team",
"userId": "telegram:123456789",
"channel": "telegram",
"response": "Here is your summary...",
"tokensUsed": 842,
"latencyMs": 2100
}
}
Verifying Webhook Signatures
Every call includes an X-OpenClaw-Signature header:
const crypto = require('crypto');
function verifyWebhook(rawBody, signature, secret) {
  const expected = `sha256=${crypto
    .createHmac('sha256', secret)
    .update(rawBody)
    .digest('hex')}`;
  // Compare in constant time to avoid leaking timing information
  const a = Buffer.from(expected);
  const b = Buffer.from(signature || '');
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
// Express.js handler
app.post('/openclaw-events', express.raw({ type: 'application/json' }), (req, res) => {
const sig = req.headers['x-openclaw-signature'];
if (!verifyWebhook(req.body, sig, process.env.WEBHOOK_SECRET)) {
return res.status(401).send('Invalid signature');
}
const event = JSON.parse(req.body);
console.log('Event:', event.event, event.data);
res.status(200).send('OK');
});
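If your receiver is written in Python rather than Express, the same verification looks like this; `hmac.compare_digest` provides the constant-time comparison. This is a sketch assuming the same `sha256=<hex>` header format described above, and `verify_webhook` is an illustrative helper:

```python
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it
    in constant time against the X-OpenClaw-Signature header value."""
    expected = "sha256=" + hmac.new(
        secret.encode(), raw_body, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, signature)

body = b'{"event":"message.sent"}'
secret = "test-secret"
sig = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_webhook(body, sig, secret))           # True
print(verify_webhook(body, "sha256=bad", secret))  # False
```

As in the Express example, verify against the raw bytes of the request body; re-serializing parsed JSON can change whitespace or key order and break the signature.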
Official SDK
Install the official OpenClaw API client:
npm install @openclaw/api-client
# or
pip install openclaw-client
Node.js / TypeScript
import { OpenClaw } from '@openclaw/api-client';
const client = new OpenClaw({
baseUrl: process.env.OPENCLAW_URL ?? 'http://localhost:3000',
apiKey: process.env.OPENCLAW_API_KEY!
});
// Send a message
const response = await client.messages.send({
workspace: 'team',
userId: 'my-app',
message: 'What is the status of our deployment?'
});
console.log(response.response);
// Stream a response
process.stdout.write('Response: ');
for await (const token of client.messages.stream({
workspace: 'team',
userId: 'my-app',
message: 'Write a comprehensive deployment checklist'
})) {
process.stdout.write(token);
}
// Get stats
const stats = await client.stats.get({ from: '2026-04-01', to: '2026-04-30' });
console.log('Total messages this month:', stats.totals.messages);
Python
import os

from openclaw import OpenClaw
client = OpenClaw(
base_url="http://localhost:3000",
api_key=os.environ["OPENCLAW_API_KEY"]
)
# Send a message
response = client.messages.send(
workspace="team",
user_id="python-app",
message="Summarize the last 10 commits"
)
print(response.response)
# Stream
for token in client.messages.stream(
workspace="team",
user_id="python-app",
message="Write a deployment plan"
):
print(token, end="", flush=True)
Real-World Patterns
Trigger AI from a CI/CD Pipeline
# .github/workflows/deploy.yml
- name: Notify team via OpenClaw
run: |
curl -s -X POST $OPENCLAW_URL/api/v1/messages \
-H "Authorization: Bearer $OPENCLAW_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"workspace": "devops",
"userId": "github-actions",
"message": "Deployment of '${{ github.sha }}' to production completed successfully. Summarize the changes in this PR: ${{ github.event.pull_request.html_url }}"
}'
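Interpolating ${{ github.sha }} inside single-quoted JSON is fragile: a value containing quotes can break the payload. Building the body with a JSON serializer sidesteps the escaping entirely. A Python sketch, where `build_payload` is a hypothetical helper:

```python
import json

def build_payload(workspace: str, user_id: str, message: str) -> str:
    """Serialize the request body for POST /api/v1/messages.
    json.dumps handles quoting, so messages containing quotes or
    newlines cannot break the payload."""
    return json.dumps({
        "workspace": workspace,
        "userId": user_id,
        "message": message,
    })

payload = build_payload("devops", "github-actions",
                        'Deploy of "v1.4.2" completed')
print(payload)
```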
Daily Standup Summary
// Run every weekday at 9 AM
const response = await client.messages.send({
workspace: 'engineering',
userId: 'standup-bot',
message: 'Summarize all GitHub commits and Jira tickets closed yesterday for the team standup.'
});
// Post to Slack
await slackClient.chat.postMessage({
channel: '#standup',
text: response.response
});
On-Demand Report Generation
# Flask API endpoint that wraps OpenClaw
# (assumes `client` is the OpenClaw Python client configured above)
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/reports/weekly', methods=['POST'])
def weekly_report():
response = client.messages.send(
workspace="analytics",
user_id=f"report-{request.json['team']}",
message=f"Generate weekly performance report for {request.json['team']} team. Include metrics, highlights, and recommendations."
)
return jsonify({"report": response.response, "tokens": response.tokens_used})
Rate Limits and Error Handling
API responses include rate limit headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1746005400
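The reset header is a Unix timestamp, so the wait time is a simple subtraction. A sketch, where `seconds_until_reset` is a hypothetical helper:

```python
def seconds_until_reset(headers: dict, now_epoch: int) -> int:
    """How long to wait before retrying, based on the
    X-RateLimit-Reset header (a Unix timestamp in seconds)."""
    reset = int(headers.get("X-RateLimit-Reset", now_epoch))
    return max(0, reset - now_epoch)

headers = {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1746005400"}
print(seconds_until_reset(headers, 1746005340))  # 60
```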
Handle errors gracefully:
try {
const response = await client.messages.send({ ... });
} catch (error) {
if (error.status === 429) {
// Rate limited; wait until reset
const resetAt = new Date(error.headers['x-ratelimit-reset'] * 1000);
console.log(`Rate limited. Retry after ${resetAt}`);
} else if (error.status === 503) {
// Gateway temporarily unavailable
console.log('OpenClaw is starting up, retry in a few seconds');
} else {
throw error;
}
}
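The 429/503 handling above composes naturally into a retry wrapper with exponential backoff. A Python sketch, assuming SDK errors expose a .status attribute as in the example above; `with_retries` and `TransientError` are illustrative, not part of the SDK:

```python
import time

def with_retries(fn, max_attempts: int = 3, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Retry fn on transient errors (429/503) with exponential backoff.
    Assumes exceptions carry a .status attribute like the SDK errors above."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as err:
            status = getattr(err, "status", None)
            if status not in (429, 503) or attempt == max_attempts - 1:
                raise  # non-transient, or out of attempts
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

class TransientError(Exception):
    status = 503

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError()
    return "ok"

result = with_retries(flaky, sleep=lambda s: None)
print(result)  # ok
```

For 429s specifically, the rate-limit headers give an exact reset time, so a fixed backoff can be replaced with a wait computed from X-RateLimit-Reset.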
Next: Chapter 26, Contributing to OpenClaw, covers how to contribute bug fixes, features, and documentation to the OpenClaw open-source project.