How It Works
A single CMDOP prompt is just a chat message — but a lot has to happen for it to land on a remote machine, run safely, and come back with an answer. This page walks the path end to end.
The cast
Five things to keep in mind. Each has its own concept page; here we just name them.
- Daemon — the long-running process started by `cmdop agent start`. Keeps the machine “online” against the relay. See concept.
- Agent loop — the LLM-driven turn cycle inside the daemon. Reads tools, decides, executes, repeats.
- Workspace — a tenant boundary (machines, members, and API keys live inside one). See concept.
- Connect — the agent-to-agent surface (`cmdop connect ...`, `ask_agent`, share links). See section.
- Permission gate — the policy engine that decides whether a remote tool call may run. See concept.
A prompt’s journey
You sit at your laptop and type:
“On `vps-audi`, tail the last 200 lines of `/var/log/syslog` and tell me if anything looks wrong.”
Here is what happens.
1. Local agent picks up the prompt
Your laptop’s daemon is already running. The desktop chat tab (or `cmdop chat`) hands the prompt to the local agent loop. The LLM sees the catalog of tools, including `ask_agent`, `connect`, and `execute_command`.
2. The agent decides to delegate
The LLM realizes the work belongs on `vps-audi`. It picks `ask_agent("vps-audi", "Tail the last 200 lines of /var/log/syslog and report anything suspicious")`.
The local agent does not try to read the file — your laptop does not have it.
3. Connect resolves the hostname
`ask_agent` is a Connect tool. It funnels through `remoteagent.Ask`:
- Resolve `vps-audi` to a stable machine UUID (exact hostname → exact name → unique fuzzy prefix).
- Check the workspace API key or OAuth token in the local credential store.
- Confirm the target reports `is_online=true`.
- Dial the relay; ask the relay to bridge to that machine.
If any step fails — wrong hostname, target offline, expired auth — you get a typed error, not a silent timeout.
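The resolution order above can be sketched in Go. This is a minimal illustration, not the real cmdop code: the `Machine` struct and `resolve` function are hypothetical stand-ins that only mirror the documented lookup order and typed-error behavior.

```go
package main

import (
	"fmt"
	"strings"
)

// Machine is an illustrative record; the real cmdop types are not shown here.
type Machine struct {
	ID       string // stable machine UUID
	Hostname string
	Name     string
}

// resolve follows the documented order: exact hostname, then exact
// name, then a unique fuzzy prefix. A miss or an ambiguous prefix
// returns an error instead of a silent timeout.
func resolve(query string, fleet []Machine) (Machine, error) {
	for _, m := range fleet {
		if m.Hostname == query {
			return m, nil
		}
	}
	for _, m := range fleet {
		if m.Name == query {
			return m, nil
		}
	}
	var hits []Machine
	for _, m := range fleet {
		if strings.HasPrefix(m.Hostname, query) || strings.HasPrefix(m.Name, query) {
			hits = append(hits, m)
		}
	}
	switch len(hits) {
	case 1:
		return hits[0], nil
	case 0:
		return Machine{}, fmt.Errorf("no machine matches %q", query)
	default:
		return Machine{}, fmt.Errorf("prefix %q is ambiguous (%d matches)", query, len(hits))
	}
}

func main() {
	fleet := []Machine{
		{ID: "uuid-1", Hostname: "vps-audi"},
		{ID: "uuid-2", Hostname: "vps-bmw"},
	}
	m, err := resolve("vps-au", fleet) // unique fuzzy prefix
	fmt.Println(m.Hostname, err)
}
```

The point of the ordering is that an exact match always wins, so adding a new machine whose name happens to share a prefix with an existing one cannot silently redirect your calls.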
4. The remote daemon receives a sub-call
The daemon on `vps-audi` accepts the bridged call. Before the LLM sees the prompt, the permission gate inspects it:
- Floor checks (no `.env` access, no `rm -rf /`, no protected paths).
- Rule lookup against `~/.cmdop/permissions.yaml`.
- Mode default if no rule matches (`default` asks, `strict` denies, `bypass` allows).

For a read-only tail of `/var/log/syslog`, a sane policy on `vps-audi` allows it.
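A minimal sketch of that three-step decision, assuming a simplified rule table and plain substring matching in place of the real `~/.cmdop/permissions.yaml` semantics:

```go
package main

import (
	"fmt"
	"strings"
)

type Decision string

const (
	Allow Decision = "allow"
	Ask   Decision = "ask"
	Deny  Decision = "deny"
)

// gate applies the documented order: floor checks first, then an
// explicit rule, then the mode default. The rule map and the banned
// patterns are illustrative stand-ins, not the real rule syntax.
func gate(command, mode string, rules map[string]Decision) Decision {
	// 1. Floor checks: some patterns never run, in any mode.
	for _, banned := range []string{".env", "rm -rf /"} {
		if strings.Contains(command, banned) {
			return Deny
		}
	}
	// 2. Explicit rule lookup.
	if d, ok := rules[command]; ok {
		return d
	}
	// 3. Mode default when nothing matched.
	switch mode {
	case "strict":
		return Deny
	case "bypass":
		return Allow
	default: // "default" mode asks the operator
		return Ask
	}
}

func main() {
	rules := map[string]Decision{"tail -200 /var/log/syslog": Allow}
	fmt.Println(gate("tail -200 /var/log/syslog", "default", rules))
	fmt.Println(gate("cat .env", "bypass", rules)) // floor check beats bypass
}
```

Note the order: because floor checks run before everything else, even `bypass` mode cannot read a `.env` file in this sketch.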
5. The remote agent loop runs
`vps-audi`’s agent runs its own LLM turn. It calls `read_logs` or `execute_command("tail -200 /var/log/syslog")`. Results stream back as token events: `TOOL_START`, `TOOL_END`, `TOKEN`, `TOKEN`, …
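A receiver of that stream can be sketched as a small event loop. Only the event names come from the docs; the `Event` struct and `collect` helper are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// EventKind values use the names the docs list for stream events.
type EventKind string

const (
	ToolStart EventKind = "TOOL_START"
	ToolEnd   EventKind = "TOOL_END"
	Token     EventKind = "TOKEN"
)

// Event is an illustrative shape; the real wire format is not shown here.
type Event struct {
	Kind EventKind
	Text string // set for TOKEN events
}

// collect assembles the reply text from a stream of events, skipping
// the TOOL_START/TOOL_END markers that bracket each tool call.
func collect(events []Event) string {
	var b strings.Builder
	for _, e := range events {
		if e.Kind == Token {
			b.WriteString(e.Text)
		}
	}
	return b.String()
}

func main() {
	stream := []Event{
		{Kind: ToolStart},
		{Kind: ToolEnd},
		{Kind: Token, Text: "Nothing "},
		{Kind: Token, Text: "suspicious."},
	}
	fmt.Println(collect(stream))
}
```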
6. Audit on both sides
Both daemons append a JSON-line entry to their respective `audit.log`. The receiver records who called, what tool, what arguments, and the gate decision. The caller records that it dispatched a remote call.
7. The reply lands back in your chat
The local agent sees the reply text, can decide to do follow-up work (open a board issue, ask another machine), and ultimately produces the user-facing summary in the chat tab.
Why “connect-first”
Older docs called this surface SSH. It wasn’t, and the framing led people to look for things that were never there (TTYs, port forwarding, `~/.ssh/config`). Renaming it to Connect was a clarification:
- Connect resolves agents, not hosts.
- The wire protocol is gRPC over an outbound TLS relay, not a port-22 inbound listen.
- The unit of work is a tool call, not a shell session.
A persistent shell-like session does exist (the `ssh_session` tool, remote sessions) — but it sits on top of Connect, not under it.
What changes when you scale up
The same path scales to many machines:
- `ask_agent` → one target.
- `ask_agent_stream` → one target with token streaming.
- `ask_agents` → many targets, parallel goroutines, deterministic result map.
Fan-out across `prod-1`, `prod-2`, and `prod-3` is the same code path as a single call, just with a list of hostnames.
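The fan-out shape (one goroutine per target, results collected into a map keyed by hostname) can be sketched as follows; `askAgents` and its `ask` callback are hypothetical stand-ins for the real tool:

```go
package main

import (
	"fmt"
	"sync"
)

// askAgents sends one prompt to every target in parallel and returns
// a map keyed by hostname, so the result shape is the same regardless
// of which goroutine finishes first.
func askAgents(hosts []string, prompt string, ask func(host, prompt string) string) map[string]string {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results = make(map[string]string, len(hosts))
	)
	for _, h := range hosts {
		wg.Add(1)
		go func(h string) {
			defer wg.Done()
			reply := ask(h, prompt) // one goroutine per target
			mu.Lock()
			results[h] = reply
			mu.Unlock()
		}(h)
	}
	wg.Wait()
	return results
}

func main() {
	ask := func(host, prompt string) string { return "ok from " + host }
	out := askAgents([]string{"prod-1", "prod-2", "prod-3"}, "uptime?", ask)
	fmt.Println(out["prod-2"])
}
```

Keying the map by hostname is what makes the result deterministic: completion order varies run to run, but the caller always looks up each target by name.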
What does not happen
- Your code does not run “in the cloud”. The local agent and every remote agent run on your machines.
- The relay does not see your prompts in a useful form — it bridges authenticated bytes.
- The local LLM does not paraphrase a remote agent’s reply when you use the Desktop’s per-machine inspector chat — that is a direct pipe.
Read next
The block diagram and the file-system layout.
Five minutes from download to first prompt.
The deep dive on ask_agent, ask_agents, and the error taxonomy.