Tools
Tools are the actions ScalyClaw can take during a conversation. The LLM decides when and how to use them — it emits a structured tool call, the system executes it, and the result is fed back into the conversation before the LLM continues. This loop repeats until the LLM produces a final text response with no pending tool calls.
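The loop described above can be sketched in a few lines. This is a minimal simulation with a stubbed LLM transcript and tool table — the names (`LlmTurn`, `runLoop`) and types are illustrative, not ScalyClaw's actual orchestrator:

```typescript
// Sketch of the tool-call loop: execute each tool call, feed the result
// back, and stop when the LLM emits a final text response.
// All names here are illustrative only.
type LlmTurn =
  | { type: "tool_use"; name: string; input: unknown }
  | { type: "text"; text: string };

type Tool = (input: unknown) => string;

function runLoop(turns: LlmTurn[], tools: Record<string, Tool>): string {
  const transcript: string[] = [];
  for (const turn of turns) {
    if (turn.type === "tool_use") {
      // Execute the tool; its result re-enters the conversation.
      transcript.push(`tool:${turn.name} -> ${tools[turn.name](turn.input)}`);
    } else {
      return turn.text; // final text response with no pending tool calls
    }
  }
  throw new Error("LLM never produced a final text response");
}

// Stubbed conversation: one tool call, then a final answer.
const result = runLoop(
  [
    { type: "tool_use", name: "memory_search", input: { query: "language" } },
    { type: "text", text: "You prefer British English." },
  ],
  { memory_search: () => '["User prefers British English"]' }
);
console.log(result); // prints: You prefer British English.
```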
ScalyClaw ships with a large set of built-in tools covering code execution, memory, messaging, scheduling, agent management, file I/O, vault, configuration, and more. They are always available — no additional configuration required. MCP tools from connected servers are discovered automatically and work the same way.
Built-in Tools
Built-in tools are divided into two categories by how they execute: direct tools run inline on the Node process, and job tools are dispatched to a BullMQ worker queue. See the Tool Architecture section for the full routing table.
Direct Tools
Direct tools run synchronously in the Node process. They are fast, lightweight operations — SQLite queries, Redis reads/writes, in-memory lookups — where the overhead of a queue round-trip would be net negative.
Memory
| Tool | Description | Key Parameters |
|---|---|---|
| `memory_store` | Persist a fact, preference, or observation to the long-term memory system. The entry is embedded and indexed for future semantic retrieval. | `content` — text to remember<br>`type` — category label (e.g. "preference", "fact", "task")<br>`confidence` — float 0–3 |
| `memory_search` | Query existing memories using semantic similarity. Returns the most relevant entries ranked by vector distance, with FTS5 full-text search as a fallback. | `query` — natural-language search string<br>`limit` — maximum results (default 10) |
| `memory_recall` | Retrieve a specific memory entry by its ID. | `id` — numeric ID of the memory entry |
| `memory_update` | Update the content or metadata of an existing memory entry. | `id` — numeric ID of the memory entry<br>`content` — updated text<br>`type` — updated category<br>`confidence` — updated confidence |
| `memory_delete` | Remove a specific memory entry by its ID. Use when a stored fact is outdated or incorrect. | `id` — numeric ID of the memory entry to delete |
Messaging
| Tool | Description | Key Parameters |
|---|---|---|
| `send_message` | Send a text message to a channel or user. | `channel` — target channel ID<br>`message` — text content to send |
| `send_file` | Send a file to a channel or user. | `channel` — target channel ID<br>`path` — path to the file to send<br>`caption` — optional description |
Agents
| Tool | Description | Key Parameters |
|---|---|---|
| `create_agent` | Create a new agent with a given name, system prompt, and configuration. | `name` — unique agent name<br>`prompt` — system prompt text<br>`models`, `tools`, `skills` — optional initial config |
| `delete_agent` | Permanently delete an agent by name. | `name` — agent to delete |
Scheduling
| Tool | Description | Key Parameters |
|---|---|---|
| `list_reminders` | List all scheduled reminders for the current channel. | — |
| `list_tasks` | List all scheduled LLM tasks. | — |
| `cancel_reminder` | Cancel a previously scheduled reminder, removing its BullMQ job from the scheduler queue. | `id` — reminder ID returned when it was created |
| `cancel_task` | Cancel a previously scheduled LLM task. | `id` — task ID returned when it was created |
Vault
| Tool | Description | Key Parameters |
|---|---|---|
| `vault_store` | Store a secret in the vault. Secrets are stored encrypted in Redis at `scalyclaw:secret:*`. | `key` — secret name<br>`value` — secret value |
| `vault_check` | Check whether a secret with the given key exists in the vault (does not reveal the value). | `key` — secret name to check |
| `vault_delete` | Remove a secret from the vault by key. | `key` — secret name to delete |
| `vault_list` | List all secret keys currently stored in the vault (keys only, not values). | — |
Skills
| Tool | Description | Key Parameters |
|---|---|---|
| `register_skill` | Register a skill after writing its files. Loads from disk, runs the security guard, adds to config, and notifies workers. | `id` — skill ID (e.g. "weather-skill") |
System Info
| Tool | Description | Key Parameters |
|---|---|---|
| `system_info` | Query system information. A single unified tool that covers agents, skills, models, guards, queues, processes, usage, config, and vault. | `section` — one of "agents", "skills", "models", "guards", "queues", "processes", "usage", "config", "vault" |
Management operations like `list_models`, `toggle_model`, `list_guards`, `toggle_guard`, `get_config`, `update_config`, `list_queues`, `pause_queue`, etc. are registered as admin tools. They are callable via the REST API and dashboard but are not exposed to the LLM directly. The LLM uses `system_info` for read access instead.
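For example, a read of the model list goes through `system_info` rather than an admin tool. The call follows the document's tool-call format; the `id` here is illustrative:

```jsonc
// system_info — read the model list (illustrative id)
{
  "type": "tool_use",
  "id": "toolu_05Bb2R",
  "name": "system_info",
  "input": { "section": "models" }
}
```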
File I/O
| Tool | Description | Key Parameters |
|---|---|---|
| `list_directory` | List the contents of a directory. | `path` — directory path |
| `file_read` | Read file content — full file or a specific line range. | `path` — absolute file path<br>`startLine`, `endLine` — optional line range |
| `file_write` | Create, overwrite, or append to a file. | `path` — absolute file path<br>`content` — text to write<br>`mode` — "overwrite" \| "append" |
| `file_edit` | Apply a targeted search-and-replace inside a file. | `path` — absolute file path<br>`old` — exact string to find<br>`new` — replacement string |
| `file_ops` | File operations: copy, delete, rename, diff, or info. | `operation` — "copy" \| "delete" \| "rename" \| "diff" \| "info"<br>`path` — target path<br>`dest` — destination (for copy/rename) |
Context
| Tool | Description | Key Parameters |
|---|---|---|
| `compact_context` | Summarise and compress the current conversation context to reduce token consumption. | — |
Job Tools
Job tools are not called by the LLM directly — they are invoked via `submit_job` or `submit_parallel_jobs`. The `TOOL_QUEUE` map in `tool-impl.ts` routes them to BullMQ queues: the Node enqueues a job and awaits the result asynchronously. This provides process isolation for arbitrary code execution and a separate concurrency domain for agent delegation.
| Tool | Description | Queue | Key Parameters |
|---|---|---|---|
| `execute_command` | Run a shell or subprocess command on the worker. | `scalyclaw-tools` | `command` — command string to execute<br>`cwd` — optional working directory |
| `execute_skill` | Invoke a deployed skill by name. Skills are executable packages stored on disk in the `skills/` directory and executed on the worker. | `scalyclaw-tools` | `skill` — name of the skill to invoke<br>`parameters` — arbitrary object passed as input to the skill |
| `execute_code` | Execute code in a sandboxed worker process. Supports JavaScript, Python, and Bash. Returns stdout, stderr, and exit code. | `scalyclaw-tools` | `language` — "js" \| "python" \| "bash"<br>`code` — source string to execute |
| `delegate_agent` | Hand off a task to a named sub-agent. The sub-agent runs its own orchestrator loop with its own system prompt and tool set, then returns a structured result. | `scalyclaw-agents` | `agent` — name of the agent to delegate to<br>`task` — natural-language description of the work to perform |
| `schedule_reminder` | Schedule a one-shot text reminder. Runs locally on the Node and enqueues a delayed job to `scalyclaw-internal`. | — (inline → `scalyclaw-internal`) | `message` — content of the reminder<br>`at` — ISO 8601 timestamp<br>`timezone` — IANA timezone string |
| `schedule_recurrent_reminder` | Schedule a repeating reminder. Runs locally on the Node and enqueues a repeating job to `scalyclaw-internal`. | — (inline → `scalyclaw-internal`) | `message` — content of the reminder<br>`cron` — cron expression<br>`timezone` — IANA timezone string |
| `schedule_task` | Schedule a one-shot LLM task. Runs locally on the Node and enqueues a delayed job to `scalyclaw-internal`. | — (inline → `scalyclaw-internal`) | `task` — natural-language task description<br>`at` — ISO 8601 timestamp |
| `schedule_recurrent_task` | Schedule a repeating LLM task. Runs locally on the Node and enqueues a repeating job to `scalyclaw-internal`. | — (inline → `scalyclaw-internal`) | `task` — natural-language task description<br>`cron` — cron expression<br>`timezone` — IANA timezone string |
Meta Tools (Job Management)
Meta tools let the LLM inspect and manage the jobs it has submitted, enabling patterns like fire-and-forget parallelism with later result collection.
| Tool | Description | Key Parameters |
|---|---|---|
| `submit_job` | Execute a job tool (any tool from the job tools list above) and wait for the result. | `toolName` — job tool to execute (e.g. "execute_code", "delegate_agent")<br>`payload` — tool-specific parameters |
| `submit_parallel_jobs` | Execute multiple tools in parallel and wait for all results. | `jobs` — array of `{ toolName, payload }` objects |
| `get_job` | Get the status and result of a job by ID. | `id` — job ID |
| `list_active_jobs` | List active and recent jobs across queues. | — |
| `stop_job` | Stop a running or pending job. | `id` — job ID to stop |
Tool Call Example
When the LLM decides to use a tool, it emits a structured JSON object. ScalyClaw parses this, routes it to the correct execution target, and returns the result in the next turn.
```jsonc
// memory_store — persist a user preference
{
  "type": "tool_use",
  "id": "toolu_01XqR9",
  "name": "memory_store",
  "input": {
    "content": "User prefers responses in British English",
    "type": "preference",
    "confidence": 0.95
  }
}
```

```jsonc
// execute_code — run a quick calculation
{
  "type": "tool_use",
  "id": "toolu_02Yp7K",
  "name": "execute_code",
  "input": {
    "language": "js",
    "code": "const result = [1,2,3,4,5].reduce((a,b) => a+b, 0);\nconsole.log(result);"
  }
}
```

```jsonc
// schedule_recurrent_reminder — recurring daily reminder
{
  "type": "tool_use",
  "id": "toolu_03Zc1M",
  "name": "schedule_recurrent_reminder",
  "input": {
    "message": "Time to review your open tasks",
    "cron": "0 9 * * 1-5",
    "timezone": "Europe/London"
  }
}
```
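The meta tools follow the same shape. A hypothetical `submit_parallel_jobs` call fanning two job tools out at once might look like this (the `id` and payloads are illustrative):

```jsonc
// submit_parallel_jobs — run two job tools concurrently (illustrative)
{
  "type": "tool_use",
  "id": "toolu_04Aa9Q",
  "name": "submit_parallel_jobs",
  "input": {
    "jobs": [
      { "toolName": "execute_code", "payload": { "language": "python", "code": "print(2 + 2)" } },
      { "toolName": "execute_command", "payload": { "command": "uname -a" } }
    ]
  }
}
```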
MCP Tools
ScalyClaw supports the Model Context Protocol (MCP). Tools discovered from connected MCP servers are automatically added to the available tools list at startup and whenever a hot-reload signal is received. They appear alongside built-in tools with no special configuration required beyond registering the MCP server connection.
The LLM uses MCP tools exactly the same way it uses built-in tools — by name, with a structured input object. ScalyClaw handles schema discovery, input validation, and result forwarding transparently.
| Aspect | Built-in Tools | MCP Tools |
|---|---|---|
| Registration | Hardcoded in `tool-impl.ts` | Discovered from the MCP server at connection time |
| Schema | Defined in source code | Provided by the MCP server via `tools/list` |
| Execution target | Local or worker queue (see routing table below) | Node — inline; forwarded to the MCP server via the Node's MCP client |
| Hot reload | Not applicable | Re-discovered on the `scalyclaw:skills:reload` pub/sub signal |
| LLM visibility | Always present in system prompt tool list | Injected into the system prompt alongside built-in tools |
MCP servers are registered in the ScalyClaw dashboard under Settings → MCP Servers. Each entry takes a transport type (stdio or sse), a command or URL, and optional environment variables that are resolved from the secret vault.
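A registration entry therefore carries roughly this shape. This is a hypothetical example — the actual field names used by the dashboard are not documented here, and the server names, command, URL, and vault reference syntax are illustrative:

```jsonc
// Hypothetical MCP server entries — field names and values are illustrative
{
  "mcpServers": [
    {
      "name": "github",
      "transport": "stdio",
      "command": "npx -y @modelcontextprotocol/server-github",
      "env": { "GITHUB_TOKEN": "vault:github-token" } // resolved from the secret vault
    },
    {
      "name": "search",
      "transport": "sse",
      "url": "https://mcp.example.com/sse"
    }
  ]
}
```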
Tool Architecture
Not all tools need the same execution environment. The unified tool router in `tool-impl.ts` uses a `TOOL_QUEUE` map to decide where each tool runs before dispatching it. This keeps the routing logic in one place and makes it easy to add new tools without changing the orchestrator.
Routing Table
| Tool | Execution target | Queue (`TOOL_QUEUE` key) | Reason |
|---|---|---|---|
| `execute_code` | Worker sandbox | `tools` → `scalyclaw-tools` | Arbitrary code must run in an isolated process, not in the Node |
| `execute_skill` | Worker skill runner | `tools` → `scalyclaw-tools` | Skills are loaded and executed in the worker; worker holds the module cache |
| `execute_command` | Worker subprocess | `tools` → `scalyclaw-tools` | Shell commands run in an isolated worker process for safety |
| `delegate_agent` | Node agent executor | `agents` → `scalyclaw-agents` | Sub-agents run their own orchestrator loop; separate queue for independent scaling |
| `schedule_*` tools | Node — inline | — (not in `TOOL_QUEUE`) | The handler runs locally and enqueues a delayed/repeating job to `scalyclaw-internal` from within its own code |
| MCP tools | Node — inline | — (not in `TOOL_QUEUE`) | Forwarded to the MCP server via `callMcpTool()`; the Node holds the MCP client connections |
| All other built-in tools | Node — inline | — (no queue) | Lightweight operations (SQLite, Redis, filesystem); no isolation or offloading needed |
Simplified Routing Logic
```ts
// tool-impl.ts — TOOL_QUEUE map (simplified)
const TOOL_QUEUE: Partial<Record<string, QueueKey>> = {
  execute_command: "tools",  // → scalyclaw-tools (sandboxed worker)
  execute_skill: "tools",    // → scalyclaw-tools (skill runner)
  execute_code: "tools",     // → scalyclaw-tools (sandboxed worker)
  delegate_agent: "agents",  // → scalyclaw-agents (agent executor)
};

async function dispatchTool(name: string, input: unknown, ctx: ToolContext): Promise<string> {
  const queueKey = TOOL_QUEUE[name];
  if (queueKey) {
    return enqueueAndWait(queueKey, name, input, ctx); // await BullMQ job result
  }
  if (name.startsWith("mcp_")) {
    return callMcpTool(name, input); // Node-local MCP client call
  }
  return executeTool(name, input, ctx); // run inline in the Node
}
```
Why the Split?
The split between worker-queued and locally-executed tools reflects two different concerns:
- Isolation and safety — `execute_code` and `execute_command` run arbitrary user-provided code and shell commands. Executing them inside the Node process could crash the orchestrator or expose internal state. The worker process provides a hard process boundary: if execution goes wrong, only the worker job fails and the Node is unaffected.
- Module loading — Skills are TypeScript/JavaScript modules loaded dynamically from Redis. The worker maintains the module cache so reloads are fast and the Node's runtime stays clean.
- Independent scaling — By routing code execution and agent delegation to separate queues, you can run more workers in high-throughput scenarios without touching the Node.
- Latency — Memory reads, memory writes, and vault lookups each complete in under 5 ms against a local SQLite database or Redis connection. Queueing them would add 10–50 ms of unnecessary overhead per tool call.
MCP tools discovered from connected MCP servers are executed inline on the Node process via `callMcpTool()`. They are not routed through any BullMQ queue. The Node holds the MCP client connections and forwards `tools/call` requests directly to the MCP server process. This is different from `execute_code` and `execute_skill`, which are dispatched to the worker via the `scalyclaw-tools` queue.
Agent Tool Scoping
Sub-agents have a restricted tool set compared to the main orchestrator. This prevents runaway recursion (agents cannot delegate further), limits scheduling access to the main channel, and keeps sub-agent scope focused on the task at hand.
| Category | Available to agents |
|---|---|
| Messaging | `send_message`, `send_file` |
| Memory | `memory_store`, `memory_search`, `memory_recall`, `memory_update`, `memory_delete` |
| Vault | `vault_check`, `vault_list` (read-only access; no store or delete) |
| File I/O | All file I/O tools (`file_read`, `file_write`, `file_edit`, `file_ops`, `list_directory`) |
| Code / Skills / Commands | `execute_code`, `execute_skill`, `execute_command` |
| Agent delegation | Not available — agents cannot delegate to other agents |
| Scheduling | Not available — agents cannot create or cancel reminders or tasks |
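The scoping above can be pictured as an allow-list filter applied before the sub-agent's tool list is built. This is a sketch only — `AGENT_ALLOWED` and `scopeToolsForAgent` are illustrative names, not ScalyClaw's actual implementation:

```typescript
// Sketch of sub-agent tool scoping as an allow-list filter.
// AGENT_ALLOWED mirrors the table above; the names are illustrative.
const AGENT_ALLOWED = new Set([
  "send_message", "send_file",
  "memory_store", "memory_search", "memory_recall", "memory_update", "memory_delete",
  "vault_check", "vault_list", // read-only vault access: no vault_store/vault_delete
  "file_read", "file_write", "file_edit", "file_ops", "list_directory",
  "execute_code", "execute_skill", "execute_command",
]);

function scopeToolsForAgent(allTools: string[]): string[] {
  // Delegation and scheduling tools are absent from the set,
  // so they are silently dropped for sub-agents.
  return allTools.filter((t) => AGENT_ALLOWED.has(t));
}

const scoped = scopeToolsForAgent([
  "send_message", "delegate_agent", "schedule_reminder", "vault_store", "execute_code",
]);
console.log(scoped); // keeps only "send_message" and "execute_code"
```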