Workers
The worker process is a stateless execution sandbox that consumes only the scalyclaw-tools queue. It only needs Redis and a reachable gateway — no shared filesystem or direct connection to the Node. The Node process itself runs its own in-process BullMQ workers for the remaining five queues.
Overview
ScalyClaw splits queue consumption between two processes. The worker process (worker/) handles sandboxed tool execution in isolation. The Node process runs in-process BullMQ workers for orchestration, agents, scheduling, proactive messaging, and system events.
Worker process — scalyclaw-tools queue only
The worker process lives in worker/src/ and starts a single BullMQ consumer for the scalyclaw-tools queue. It handles two job types:
| Processor | Queue | Job type | Responsibility |
|---|---|---|---|
| tool-processor.ts | scalyclaw-tools | tool-execution | Executes sandboxed code via execute_code (JavaScript with Bun, Python with uv, Bash) and shell commands via execute_command. |
| tool-processor.ts | scalyclaw-tools | skill-execution | Runs skill modules via execute_skill. Delegates to execute-skill.ts which loads and invokes the named skill. |
The worker connects to Redis directly and reaches the Node only through the gateway API (used as a file bridge). Sub-processors: execute-code.ts, execute-command.ts, execute-skill.ts.
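The two job types in the table above amount to a small dispatch decision inside tool-processor.ts. The sketch below illustrates that routing; the type and function names (ToolJob, routeJob) and the payload fields are hypothetical, not ScalyClaw's actual schema.

```typescript
// Hypothetical shape of a scalyclaw-tools job payload. Field names are
// illustrative only — the real payload schema is not shown in this doc.
type ToolJob =
  | { type: "tool-execution"; tool: "execute_code" | "execute_command"; input: string }
  | { type: "skill-execution"; skill: string; args: Record<string, unknown> };

// Route a job to the matching sub-processor, mirroring the two job
// types the table above describes.
function routeJob(job: ToolJob): string {
  switch (job.type) {
    case "tool-execution":
      // Would delegate to execute-code.ts or execute-command.ts.
      return `run ${job.tool}`;
    case "skill-execution":
      // Would delegate to execute-skill.ts with the named skill.
      return `run skill ${job.skill}`;
  }
}
```

Because the payload carries a discriminant (`type`), the worker never needs to inspect anything beyond the job data itself to decide which sub-processor runs.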
Node process — five in-process workers
The Node runs its own BullMQ workers for all orchestration queues. These are not separate processes — they run inside the Node alongside the channel adapters and REST API.
| Processor | Queue | Concurrency | Responsibility |
|---|---|---|---|
| message-processor.ts | scalyclaw-messages | 3 | Drives the full orchestrator pipeline for an inbound user message — session checks, guards, LLM call, tool loop, and reply dispatch. |
| agent-processor.ts | scalyclaw-agents | 3 | Runs delegated sub-agents. Each sub-agent gets its own orchestrator loop with its own system prompt and tool permission set, then returns a structured result. |
| schedule-processor.ts | scalyclaw-scheduler | 2 | Fires delayed and recurring jobs — reminders, timed tasks, periodic skill runs. BullMQ's built-in delay support handles timing; this processor handles what happens when a job fires. |
| proactive-processor.ts | scalyclaw-proactive | 1 | Generates and sends engagement-triggered outbound messages when the proactive system decides to reach out to a channel. |
| system-processor.ts | scalyclaw-system | 2 | Handles operational events — config reloads, maintenance tasks, cache invalidation. Kept separate from the message queue so system operations never block user traffic. |
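The per-queue concurrency values above also bound the Node's total in-flight orchestration work. A small lookup table makes that explicit — this is an illustrative sketch (the constant names QUEUE_CONCURRENCY and maxInFlight are not from the codebase):

```typescript
// Concurrency per orchestration queue, matching the table above.
const QUEUE_CONCURRENCY: Record<string, number> = {
  "scalyclaw-messages": 3,
  "scalyclaw-agents": 3,
  "scalyclaw-scheduler": 2,
  "scalyclaw-proactive": 1,
  "scalyclaw-system": 2,
};

// Upper bound on simultaneous in-process jobs if every queue is saturated.
const maxInFlight = Object.values(QUEUE_CONCURRENCY).reduce((a, b) => a + b, 0); // 11
```

So at most eleven orchestration jobs run concurrently inside the Node, independent of how many separate worker processes are consuming scalyclaw-tools.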
All state that any processor reads or writes passes through Redis. A worker restart loses nothing — BullMQ jobs are durable and any active job is re-queued automatically after its lock expires.
The worker process intentionally holds no in-memory state between jobs. Skill modules and file bridge connections are re-established per job or kept warm within a single worker process — but no cross-job shared state exists. This makes it safe to kill and restart a worker at any point without data loss.
Deploying
The worker process can run on the same machine as the Node, on a separate remote machine, or inside a Docker container. The only requirements are a reachable Redis instance and network access to the Node's gateway API. All three modes use the same binary — the difference is how the worker is configured.
Worker setup is stored in ~/.scalyclaw-worker/worker.json and specifies the home directory, gateway connection (host, port, TLS, auth token), Redis connection, Node URL and token, and per-process concurrency.
Same Machine
The simplest setup. Install ScalyClaw once and start a worker alongside the Node. Both processes connect to the local Redis instance and the worker reaches the gateway on localhost.
```
# Start the Node (channel adapters + orchestrator)
scalyclaw start

# Start a worker in another terminal (or background it)
scalyclaw worker start

# Start a second worker for higher throughput
scalyclaw worker start
```
Remote Machine
Install ScalyClaw on the remote machine, configure ~/.scalyclaw-worker/worker.json with the shared Redis instance and the Node's gateway address, then start the worker.
```
# On the remote worker machine
bun install -g scalyclaw

# Configure gateway + Redis, then start
scalyclaw worker configure
scalyclaw worker start
```
Docker Container
Run the official ScalyClaw image with the worker command. Pass Redis and gateway configuration as environment variables. No volume mounts or filesystem access to the Node are needed — the worker uses the gateway API as its file bridge.
```
# Single worker container
docker run -d \
  --name scalyclaw-worker \
  -e REDIS_URL=redis://redis.internal:6379 \
  scalyclaw/scalyclaw worker start

# Scale to 3 replicas with Docker Compose
docker compose up --scale worker=3
```
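The compose scaling command assumes a service named worker. A minimal compose file for that setup might look like the following — the image name comes from the example above, but the service layout, Redis container, and environment values are illustrative placeholders:

```yaml
services:
  redis:
    image: redis:7
  worker:
    image: scalyclaw/scalyclaw
    command: worker start
    environment:
      REDIS_URL: redis://redis:6379
    depends_on:
      - redis
```

With this file, `docker compose up --scale worker=3` starts three identical worker containers, all consuming the same scalyclaw-tools queue from the shared Redis instance.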
Configuration
Worker configuration is stored in ~/.scalyclaw-worker/worker.json. All workers in a deployment must point to the same Redis instance and a reachable Node gateway.
| Field | Description |
|---|---|
| homeDir | Base directory used by the worker for temporary files during job execution. |
| gateway.host / port / tls / authToken | Connection details for the Node's gateway API. The worker uses this as a file bridge — no direct filesystem access to the Node is required. |
| redis.host / port / password / tls | Redis connection details. Must point to the same Redis instance used by the Node. |
| node.url / token | Node API URL and authentication token used for callbacks and result delivery. |
| concurrency | Maximum number of scalyclaw-tools jobs processed simultaneously within this worker process. Increase for higher throughput or lower to reduce resource pressure during CPU-intensive code execution. |
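Putting those fields together, a worker.json might look like the following. The field names and nesting follow the table above; every value here is a placeholder, not a default.

```json
{
  "homeDir": "/home/worker/.scalyclaw-worker",
  "gateway": {
    "host": "node.internal",
    "port": 443,
    "tls": true,
    "authToken": "replace-with-gateway-token"
  },
  "redis": {
    "host": "redis.internal",
    "port": 6379,
    "password": "replace-with-redis-password",
    "tls": false
  },
  "node": {
    "url": "https://node.internal:8443",
    "token": "replace-with-node-token"
  },
  "concurrency": 4
}
```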
For high execute_code throughput, run additional worker processes rather than increasing concurrency. Multiple worker processes provide true parallelism across the scalyclaw-tools queue; high concurrency within a single process can cause resource contention during CPU-intensive code execution.
Monitoring
The ScalyClaw dashboard provides two dedicated pages for worker and job observability: Workers and Jobs.
Workers Page
The Workers page lists every worker process that has connected to Redis. For each worker it shows:
| Field | Description |
|---|---|
| Health status | Online if a heartbeat was received within the last 30 seconds, Offline otherwise. |
| Uptime | How long the worker process has been running since its last start. |
| Concurrency | The concurrency value from worker.json this worker process is using for the scalyclaw-tools queue. |
| Version | ScalyClaw version string, useful when running a mixed-version deployment during a rolling upgrade. |
| Last heartbeat | Exact timestamp of the most recent heartbeat written to Redis. |
Workers write a heartbeat to Redis every 10 seconds. If a worker process dies unexpectedly, the dashboard detects it and marks it Offline within 30 seconds — no manual polling required.
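The online/offline decision reduces to comparing the last heartbeat timestamp against the 30-second threshold. A sketch of that check, with hypothetical names (the dashboard's actual implementation is not shown in this doc):

```typescript
const HEARTBEAT_TIMEOUT_MS = 30_000; // dashboard's offline threshold

// A worker is Online if its most recent heartbeat is within the timeout.
// lastHeartbeatMs would come from the key the worker writes to Redis
// every 10 seconds; here it is just a timestamp parameter.
function workerStatus(lastHeartbeatMs: number, nowMs: number): "online" | "offline" {
  return nowMs - lastHeartbeatMs <= HEARTBEAT_TIMEOUT_MS ? "online" : "offline";
}
```

A 10-second heartbeat interval against a 30-second timeout means a healthy worker can miss two consecutive heartbeats (for example, under brief CPU or network pressure) before being marked Offline.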
Jobs Page
The Jobs page gives a live view into all six BullMQ queues. You can inspect, filter, retry, and clean jobs without touching Redis directly.
| Feature | Description |
|---|---|
| Queue selector | Switch between scalyclaw-messages, scalyclaw-agents, scalyclaw-tools, scalyclaw-proactive, scalyclaw-scheduler, and scalyclaw-system queues. |
| Status filter | Filter jobs by waiting, active, completed, or failed state. |
| Job detail | Click any job to inspect its full payload, timestamps, attempt count, and return value or error stack trace. |
| Retry | Re-enqueue a failed job with its original payload. Useful for transient failures such as a temporary network error to an MCP server. |
| Clean | Remove completed or failed jobs older than a given age to keep Redis memory usage bounded. The dashboard cleans in bulk per queue. |
BullMQ moves failed jobs to a separate failed set rather than deleting them. They remain inspectable and retryable until you explicitly clean them. Jobs that exceed their configured retry limit appear in the failed set with the full error recorded against each attempt.