# CLI `--machine` Chat
`cmdop chat --machine <hostname>` keeps your local LLM in charge of the
chat loop, but reshapes its tool surface so every operation runs on the
remote machine. We call this Path B. It is the right tool when you want
the local model to coordinate work across one or more machines without
being a passive transport for whatever the remote agent says.
This page covers what `--machine` does, how it differs from the
desktop inspector chat (Path A), and when each path is the better fit.
## How Path B works
When you start `cmdop chat --machine prod-api-1`, three things change
inside the local agent:
- **Prompt seeded.** A `prompts.SectionTargetMachine` block tells the local LLM that the user wants work to happen on `prod-api-1`.
- **Tool surface trimmed.** Local-only tools are filtered out — the agent loses `file_write`, `read_file`, `grep`, `glob`, and `list_dir` on the local machine. What remains is the dispatch surface: `ask_agent`, `ask_agent_stream`, `connect`, `ssh_session`, etc.
- **Identity context.** `SectionContext.Hostname` carries the local machine's hostname so the prompt can disambiguate "here" from "there".
Implementation: `internal/chat/factory.go` →
`WithRemoteOnlyToolsFilter()` and
`WithPromptTargetMachineID()`. See `internal/chat/CLAUDE.md`
§Path B for the full contract.
Path B keeps the local LLM. It just hides local tools and steers
the prompt toward `ask_agent` / `connect exec` for everything that
touches files or shells.
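As a rough sketch, the factory wiring can be pictured as functional options. Only the two option names come from the documentation above; the `AgentConfig` struct, the default tool list, and the option bodies here are assumptions for illustration, not the real `internal/chat` code.

```go
package main

import "fmt"

// AgentConfig is an illustrative stand-in for the local agent's setup.
type AgentConfig struct {
	Tools         []string // tool names exposed to the local LLM
	TargetMachine string   // seeded into the prompt's target-machine section
}

type Option func(*AgentConfig)

// localOnlyTools are the local-filesystem tools hidden in --machine mode.
var localOnlyTools = map[string]bool{
	"file_write": true, "read_file": true, "grep": true,
	"glob": true, "list_dir": true,
}

// WithRemoteOnlyToolsFilter drops local-only tools, leaving the
// dispatch surface (ask_agent, connect, ...) and neutral tools.
func WithRemoteOnlyToolsFilter() Option {
	return func(c *AgentConfig) {
		var kept []string
		for _, t := range c.Tools {
			if !localOnlyTools[t] {
				kept = append(kept, t)
			}
		}
		c.Tools = kept
	}
}

// WithPromptTargetMachineID seeds the target machine into the prompt.
func WithPromptTargetMachineID(id string) Option {
	return func(c *AgentConfig) { c.TargetMachine = id }
}

// NewAgent applies options over an assumed default tool surface.
func NewAgent(opts ...Option) *AgentConfig {
	c := &AgentConfig{Tools: []string{
		"file_write", "read_file", "ask_agent", "connect", "web_search",
	}}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	a := NewAgent(WithRemoteOnlyToolsFilter(), WithPromptTargetMachineID("prod-api-1"))
	fmt.Println(a.Tools, a.TargetMachine) // local file tools are gone
}
```

The point of the pattern: each option is a small, composable mutation, so plain `cmdop chat` simply omits both options and keeps the full local surface.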
## Quick start
```
# Start a chat targeting one remote machine.
cmdop chat --machine prod-api-1

# Or by UUID prefix.
cmdop chat --machine 8f23
```

Then just talk:

```
you> Are there any 5xx errors in the nginx access log from the last hour?
agent> Let me check on prod-api-1.
[tool: connect exec prod-api-1 -- "tail -10000 /var/log/nginx/access.log | awk '{print $9}' | sort | uniq -c"]
I see 247 5xx responses in the last hour, mostly 502 from the upstream API.
Want me to look at the upstream logs?
```

The local LLM is the one composing the reply. It called the remote shell via the dispatch surface, but the synthesis ("247 5xx responses, mostly 502") is local-model output.
## Multi-machine prompts
The same flag works with comma-separated targets:

```
cmdop chat --machine vps-audi,vps-bmw,prod-api-1
```

The local agent now has three remote machines on its dispatch surface.
A prompt like "compare disk usage across these three" turns into a
single `ask_agents` fan-out:

```
you> Free disk on / for each, please.
agent> [tool: ask_agents hostnames=[vps-audi, vps-bmw, prod-api-1]
        prompt="What is your free disk percentage on /?"]
- vps-audi: 42% free (ext4)
- vps-bmw: 11% free (btrfs) ← low
- prod-api-1: 78% free (ext4)
vps-bmw is below the warning threshold.
```

For more on fan-out, see server-to-server.
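The shape of that fan-out is easy to picture: one prompt, dispatched to every host concurrently, replies collected per hostname. A minimal sketch, in which `askOne` is a hypothetical stand-in for the real per-host dispatch call:

```go
package main

import (
	"fmt"
	"sync"
)

// askAgents sends one prompt to every host concurrently and collects
// the replies per hostname — the shape of an ask_agents fan-out.
// askOne is a stand-in for the real per-host dispatch.
func askAgents(hosts []string, prompt string,
	askOne func(host, prompt string) string) map[string]string {

	var (
		mu  sync.Mutex
		wg  sync.WaitGroup
		out = make(map[string]string, len(hosts))
	)
	for _, h := range hosts {
		wg.Add(1)
		go func(h string) {
			defer wg.Done()
			reply := askOne(h, prompt) // a network call in real life
			mu.Lock()
			out[h] = reply
			mu.Unlock()
		}(h)
	}
	wg.Wait()
	return out
}

func main() {
	fake := func(host, _ string) string { return "reply from " + host }
	replies := askAgents(
		[]string{"vps-audi", "vps-bmw", "prod-api-1"},
		"What is your free disk percentage on /?", fake)
	fmt.Println(replies["vps-bmw"])
}
```

Once all replies are in, the local LLM is the one that compares them and flags the odd one out — that synthesis step is what distinguishes Path B from a raw loop over `ask_agent`.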
## Path A vs Path B
The same comparison from the desktop side, restated:
| | Path A | Path B |
|---|---|---|
| Surface | Desktop inspector chat. | `cmdop chat --machine` (CLI). |
| LLM | Remote agent. | Local agent. |
| Tool surface | Whatever the remote has. | Dispatch tools (filtered). |
| Output | Direct stream from remote. | Local LLM's synthesis. |
| Multi-machine | One target per inspector tab. | Many targets per chat. |
| Permission gate | On remote (caller skipped if same OAuth). | Local agent's permissions on dispatch + remote's permissions on the actual call. |
| Best for | Inspecting one machine's thinking. | Coordinating across machines. |
If you want to see what the remote agent thinks, use Path A. If you want to ask “across these N machines, find the odd one out”, use Path B.
## Tool filtering
The trimmed surface is intentional. Without filtering, the local
agent would have both `file_write` (writes to your laptop) and
`connect exec` (writes via the remote shell), which makes everyday
prompts like "write the result to a file" ambiguous — which file? whose disk?
`WithRemoteOnlyToolsFilter()` resolves the ambiguity by removing the
local file tools entirely. Anything filesystem-shaped has to go
through `connect exec`, `ssh_session`, or the remote `file_write`
exposed via `ask_agent`. The local LLM still has `web_search`,
`time`, and other neutral tools.
If you start `cmdop chat` without `--machine`, the full local tool
surface comes back.
## When local synthesis is the wrong choice
Path B works when the local LLM is the right narrator. It is wrong when:
- You need the remote prompt to shape the answer (operations runbooks live there).
- The remote tools are richer than the local ones (e.g. a domain-specific skill installed only on `prod-api-1`).
- You want to avoid paraphrasing of structured remote output.

For those cases, switch to Path A or just call `ask_agent` directly
from a normal `cmdop chat` session.
The local LLM’s prompt has its own opinions. If those conflict with the remote agent’s prompt, the local LLM wins — it is the one composing your reply. Switch to Path A when you specifically want the remote’s voice.
## Identity and the prompt
The seeded `SectionTargetMachine` does two jobs:
- Tells the local LLM that "you are talking about machine X" so it prefers `connect exec X` over `connect exec localhost`.
- Sets up the multi-target list so the LLM knows which machines it may dispatch to.
`SectionContext.Hostname` is your local hostname, so prompts like
"is this server my laptop?" disambiguate without asking. See
`internal/chat/factory.go` for the section wiring.
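The two sections amount to a "here vs there" preamble. A toy rendering, assuming nothing beyond what this page states — the struct fields echo the section names above, but the rendered text is a guess at the kind of preamble the wiring produces, not the real prompt:

```go
package main

import "fmt"

// PromptSections is illustrative: LocalHostname plays the role of
// SectionContext.Hostname ("here"), TargetMachines the role of the
// SectionTargetMachine list ("there").
type PromptSections struct {
	LocalHostname  string   // the machine running cmdop chat
	TargetMachines []string // machines the LLM may dispatch to
}

// Render produces a hypothetical preamble steering the LLM toward
// dispatch tools for the target machines.
func (p PromptSections) Render() string {
	return fmt.Sprintf(
		"You are running on %s. The user wants work to happen on %v. "+
			"Prefer dispatch tools (ask_agent, connect exec) for those machines.",
		p.LocalHostname, p.TargetMachines)
}

func main() {
	p := PromptSections{
		LocalHostname:  "my-laptop",
		TargetMachines: []string{"prod-api-1"},
	}
	fmt.Println(p.Render())
}
```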
## Permissions on both sides
Path B fires the permission gate twice:
- **Local agent's gate.** When the LLM tries to call `ask_agent` or `connect exec`, your laptop's `permissions.yaml` decides whether the dispatch is allowed.
- **Remote agent's gate.** The receiver still applies its own rules to whatever the dispatch invokes (with the self-to-self bypass when the OAuth identity matches).
For team setups, this is the layer where you want strict mode on both sides. See ../guides/permissions/modes.
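The ordering of the two gates can be sketched as a simple predicate. This illustrates the sequence described above — local gate first, self-to-self bypass, then the remote gate — not the real permission engine; both gate functions are stand-ins for real `permissions.yaml` lookups.

```go
package main

import "fmt"

// dispatchAllowed models the double gate: local permissions decide
// whether the dispatch tool may fire at all; remote permissions then
// judge the invoked tool, unless the self-to-self OAuth bypass applies.
func dispatchAllowed(
	localGate, remoteGate func(tool string) bool,
	sameOAuthIdentity bool,
	dispatchTool, remoteTool string,
) bool {
	if !localGate(dispatchTool) {
		return false // blocked before anything leaves the laptop
	}
	if sameOAuthIdentity {
		return true // remote side skips its gate for self-to-self calls
	}
	return remoteGate(remoteTool)
}

func main() {
	allowAll := func(string) bool { return true }
	denyAll := func(string) bool { return false }
	// Different identities: the remote gate still gets a say.
	fmt.Println(dispatchAllowed(allowAll, denyAll, false, "ask_agent", "file_write"))
	// Same OAuth identity: the remote gate is bypassed.
	fmt.Println(dispatchAllowed(allowAll, denyAll, true, "ask_agent", "file_write"))
}
```

Note the asymmetry: the bypass only ever skips the remote gate, never the local one, which is why strict mode on the laptop still matters in a team setup.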
## Workspace handling
`--machine` accepts any string the resolver understands — hostname,
friendly name, UUID, or unique prefix. The active workspace decides
which machines are visible. To target a machine in a different
workspace, switch first:

```
cmdop connect workspace use staging
cmdop chat --machine vps-bmw
```

There is no `--machine workspace:host` syntax.
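The resolver behaviour described above can be sketched as follows. The `Machine` fields and match rules are assumptions drawn from this paragraph (hostname, friendly name, UUID, or unique UUID prefix, scoped to the visible machines), not the real resolver code.

```go
package main

import (
	"fmt"
	"strings"
)

// Machine is an illustrative record for one visible machine.
type Machine struct {
	Hostname, FriendlyName, UUID string
}

// resolve matches arg against hostname, friendly name, full UUID, or
// UUID prefix. A prefix only resolves when it is unique among the
// machines visible in the active workspace.
func resolve(arg string, visible []Machine) (Machine, bool) {
	var matches []Machine
	for _, m := range visible {
		if m.Hostname == arg || m.FriendlyName == arg ||
			m.UUID == arg || strings.HasPrefix(m.UUID, arg) {
			matches = append(matches, m)
		}
	}
	if len(matches) == 1 {
		return matches[0], true
	}
	return Machine{}, false // unknown, or ambiguous prefix
}

func main() {
	fleet := []Machine{
		{Hostname: "prod-api-1", UUID: "8f23ab01"},
		{Hostname: "vps-bmw", UUID: "91d40c77"},
	}
	m, ok := resolve("8f23", fleet) // UUID prefix, as in the quick start
	fmt.Println(m.Hostname, ok)
}
```

Returning "no match" for an ambiguous prefix (rather than picking one arbitrarily) is the conservative choice a resolver like this has to make — dispatching work to the wrong machine is worse than an error.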
## Useful flags
`cmdop chat` supports the usual chat flags alongside `--machine`:

| Flag | Effect |
|---|---|
| `--machine <h>` | Targets one or more machines. Comma-separate for many. |
| `--model <id>` | Overrides the local LLM. The remote agent is unaffected. |
| `--no-stream` | Disables token streaming in the local UI. |
| `--workspace <name>` | Per-call workspace override. |
See ../cli/chat for the complete flag reference.
## Common patterns
### Single-target debug

```
cmdop chat --machine prod-api-1
> What is consuming most CPU right now?
```

### Cross-machine compare

```
cmdop chat --machine prod-api-1,prod-api-2,prod-api-3
> Are these three running the same nginx version?
```

### Fleet sweep

```
cmdop chat --machine vps-audi,vps-bmw,mac-studio
> Tell me each machine's uptime and free memory.
```

The agent will (correctly) reach for `ask_agents` to fan out.
## Related
- Path A — direct pipe to a remote agent.
- `ask_agent`, `ask_agent_stream`, `ask_agents` — the dispatch tools.
- Full CLI reference for the `chat` verb.
- Conceptual model behind both paths.