Submit from Your Mac, Wake Up to Finished Assets
Your GPU workstation does the heavy lifting overnight. Your Mac submits workflows, checks status, and downloads results — no port forwarding, no tokens, just SSH.
Here is the setup I run for every project that involves a lot of images: a GPU workstation running at home does the generation, and my Mac — wherever I am — submits the work, checks status, and downloads the results. No VPN. No port forwarding. No tokens to manage. Just SSH.
The thing that makes this work is that modl has an MCP server built in. When Claude Code on your Mac connects via modl mcp, it gets a handful of tools: submit a workflow, check a run’s status, and list output paths. The connection runs over the same SSH tunnel you’d use to open a shell. The heavy lifting stays on the machine with the GPU. Your Mac stays in charge of the work.
This guide walks through the full setup and shows the workflow I use to batch a children’s book series: queue the whole series overnight, check status in the morning, pull the keepers, and fix any covers that need a mask edit — all without touching the workstation.
The setup
On the workstation, install modl normally. All you need for remote operation is a running SSH server and modl on the PATH. No daemon to keep alive, no API server to manage — modl mcp is a stdio process that the SSH connection starts and stops automatically.
On your Mac, add the server to your Claude Code MCP config (usually ~/.claude/settings.json):
{
"mcpServers": {
"modl-workstation": {
"command": "ssh",
"args": ["workstation", "modl", "mcp"]
}
}
}
That’s it. workstation is whatever alias you have in ~/.ssh/config pointing at the GPU machine. Claude Code reconnects automatically between sessions. When you open a new Claude Code session, the modl_workstation_* tools are ready.
If your SSH config uses a non-standard port, key, or username, those go in ~/.ssh/config alongside the Host workstation block — not in the MCP config. The MCP config just needs the alias to resolve.
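For example, a minimal Host block — every value here is illustrative; use your machine's real address, user, and key:

```text
# ~/.ssh/config -- connection details live here, not in the MCP config
Host workstation
    HostName 192.168.1.50          # illustrative LAN address
    User pedro
    Port 2222                      # only if non-standard
    IdentityFile ~/.ssh/id_ed25519
```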
To verify the connection:
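A quick check from a fresh Claude Code session looks something like this — the tool name and output below are illustrative; whatever list-models tool your modl version exposes will do:

```text
You: Using the modl-workstation server, list the models available
    on the GPU machine.

Claude: [tool call over SSH]
        flux2-klein-9b
        flux-fill-dev
        ...
```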
If you see the model list, the tunnel is working. From here the tools operate just like a local modl session — the only difference is that the files land on the workstation, not your Mac.
Writing the workflow
Before queuing anything overnight, write the workflow file on your Mac and validate it before submitting. The YAML lives with your project; check it into git alongside any reference images or style notes. Here's the kind of file I use for a children's book series:
name: book-series-batch
model: flux2-klein-9b
lora: ~/modl/loras/my-character.safetensors
defaults:
width: 1344
height: 896
steps: 28
guidance: 3.5
steps:
- id: chapter1-cover
generate: "a young girl standing at the entrance of an enchanted forest, golden hour light, children's book illustration style, warm and inviting"
seeds: [42, 7, 99]
- id: chapter2-cover
generate: "the same girl crossing a stone bridge over a sparkling river, misty morning, watercolor illustration, soft colors"
seeds: [42, 7, 99]
- id: chapter3-cover
generate: "the girl and a small fox sitting together under a giant mushroom in the rain, cozy, storybook style"
seeds: [42, 7, 99]
- id: chapter4-cover
generate: "a moonlit meadow, the girl releasing a glowing lantern into the night sky, fireflies around her"
seeds: [42, 7, 99]
Four chapters, three seeds each, twelve images. Before you submit overnight, it's worth validating the YAML first. Open a shell on the workstation and dry-run it there:
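A sketch of that validation pass — the `workflow run` subcommand name is a guess (check `modl --help`); `--dry-run` and scp are the parts that matter:

```shell
# Copy the workflow over, then validate it on the workstation without generating.
scp book-series-batch.yaml workstation:~/projects/book/
ssh workstation "cd ~/projects/book && modl workflow run book-series-batch.yaml --dry-run"
```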
All green means no typos, no missing LoRAs, no capability mismatches. Then submit from Claude Code:
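Submission is just a chat turn — the run_id format below is illustrative:

```text
You: Submit ~/projects/book/book-series-batch.yaml as a workflow run.

Claude: [calls run_workflow]
        Submitted. run_id: 20260506-213045-book-series-batch
```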
You get a run_id back immediately. The workflow is running on the workstation; your Mac is free. Close the laptop, go to bed.
If the workflow YAML has an obvious error — missing model, unresolvable LoRA path, bad YAML syntax — run_workflow catches it within about 500ms and returns an error instead of a run_id. Mistakes that take longer to surface (generation failures midway through) show up in job_status later.
Checking status in the morning
Open Claude Code on your Mac. Ask about the run:
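Something like this — the counts and timestamp are illustrative:

```text
You: How did run 20260506-213045-book-series-batch go?

Claude: [calls job_status]
        12/12 steps completed, 0 failed. Finished at 03:42.
```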
Everything completed. List the outputs:
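A sketch of the listing — the paths follow the pattern from the edit example later in this guide, but your output directory will differ:

```text
You: List the output paths for that run.

Claude: [calls list_run_outputs]
        /home/pedro/modl/outputs/2026-05-06/chapter1-cover-42.png
        /home/pedro/modl/outputs/2026-05-06/chapter1-cover-7.png
        /home/pedro/modl/outputs/2026-05-06/chapter1-cover-99.png
        ... (12 files total)
```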
To pull the whole run as a ZIP, open a port-forward to modl serve and use curl:
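The port-forward reuses the same SSH alias; the 7878 port matches the dashboard section below, but the ZIP endpoint path here is a guess — check the modl serve UI for the real download link:

```shell
# Terminal 1: forward the workstation's modl serve port to your Mac.
ssh -L 7878:localhost:7878 workstation

# Terminal 2: download the whole run as a ZIP (endpoint path illustrative).
curl -o book-series.zip "http://localhost:7878/runs/20260506-213045-book-series-batch/zip"
```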
Alternatively, if MODL_BASE_URL is set on the workstation, list_run_outputs returns HTTP URLs you can open directly in a browser or download with curl — no port-forward needed.
Extract the ZIP on your Mac, open in Finder, pick the keeper from each chapter. For most runs, one seed per chapter is clearly the winner. Write down which ones — chapter1/seed 99, chapter2/seed 42, and so on — in a text file next to the YAML. That curation list plus the YAML is your full asset record.
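The curation list can be as plain as this, kept next to the YAML in git (chapters 3 and 4 here are illustrative picks):

```text
# keepers.txt -- chosen seed per chapter
chapter1-cover: seed 99
chapter2-cover: seed 42
chapter3-cover: seed 7
chapter4-cover: seed 42
```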
Passing reference images from your Mac
The most powerful use of the remote workflow setup is for narrative or character-consistency projects: you have a few reference images on your Mac — a character portrait, a style sample, a location — and you want to run dozens of scenes against them.
Without a workaround, this breaks: the workflow runs on the workstation, and paths like /Users/pedro/refs/alice.png don't exist there. There are two clean solutions: embed the reference images in the YAML itself, or point steps at files that already live on the workstation.
Named image variables (recommended)
Define your reference images once in an images: block at the top of the YAML. Each value is a base64-encoded data URI — modl decodes it to disk on the workstation before any step runs, then every step that references $name gets the same file. You only encode each image once, no matter how many scenes use it.
name: enchanted-story
model: flux2-klein-9b
lora: my-character-lora
images:
alice: "data:image/png;base64,iVBORw0KGgo..." # character reference
bob: "data:image/png;base64,iVBORw0KGgo..." # character reference
style: "data:image/png;base64,iVBORw0KGgo..." # style/mood reference
defaults:
width: 1344
height: 896
steps: 28
steps:
- id: scene-forest
edit: "$alice"
prompt: "alice wandering through an enchanted forest at dawn, soft golden light"
seeds: [42, 7, 99]
- id: scene-gate
edit: "$bob"
prompt: "bob standing guard at the castle gate, stormy sky behind him"
seeds: [42, 7, 99]
- id: scene-reunion
edit: "$alice"
prompt: "alice and a small silver fox meeting in a moonlit clearing"
seeds: [42, 7, 99]
- id: scene-tower
edit: "$bob"
prompt: "bob climbing the stone tower stairs, torch in hand"
seeds: [42, 7, 99]
Four scenes, three seeds each — 12 images — from two character references defined once. Add more scenes without touching the images: block. The base64 strings stay in the YAML; there’s nothing to scp.
To encode an image on your Mac before pasting it into the YAML:
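One portable way to produce the data URI — reading from stdin sidesteps the flag difference between macOS (`base64 -i file`) and Linux (`base64 -w0 file`):

```shell
# Print a data-URI for a PNG, ready to paste as an images: value.
printf 'data:image/png;base64,%s\n' "$(base64 < alice.png | tr -d '\n')"
```

For large references, keep an eye on file size: the base64 string lands verbatim in the YAML, so a multi-megabyte PNG makes the workflow file unwieldy to diff.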
Image variables are resolved at parse time — if a name is referenced in a step but missing from images:, you get an error immediately when you submit, not halfway through the run: image variable '$alice' is not defined — add 'alice:' to the top-level 'images:' map.
When something needs a targeted fix
For fixing a specific output — softening an expression, adjusting lighting on one scene — point edit: at the existing file on the workstation using the absolute path from list_run_outputs:
name: scene-forest-fix
model: flux2-klein-9b
lora: my-character-lora
steps:
- id: forest-retouch
edit: "/home/pedro/modl/outputs/2026-05-06/scene-forest-42.png"
prompt: "soften the lighting, more dappled shadows through the canopy"
seeds: [42, 7, 13]
Three variations of just that scene, same model, same LoRA. Check status in a few minutes, pull the one you like.
Mask-based inpainting (change only a specific region) is CLI-only — modl generate "prompt" --init-image scene.png --mask mask.png --base flux-fill-dev. SSH into the workstation for those edits; workflow YAML doesn’t support masks.
Queuing multiple files at once
If you have several workflows ready — different books in the series, or a training-data batch alongside the cover batch — submit them in sequence before you go to sleep:
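A sketch of the queueing turn — filenames and run_ids are illustrative:

```text
You: Submit book2-batch.yaml, then book3-batch.yaml, then
    training-data-batch.yaml, in that order.

Claude: [calls run_workflow three times]
        book2-batch          -> 20260506-220101-book2-batch
        book3-batch          -> 20260506-220102-book3-batch
        training-data-batch  -> 20260506-220103-training-data-batch
```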
Each run gets its own run_id. They run sequentially on the workstation (one GPU, one job at a time). In the morning, check each run independently:
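The morning check is one question per run (or one question covering all of them) — the statuses below are illustrative:

```text
You: Status of all three runs from last night?

Claude: [calls job_status for each run]
        book2-batch          completed  12/12
        book3-batch          completed  12/12
        training-data-batch  running    48/200
```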
If a run is interrupted
Long runs sometimes get interrupted — power fluctuation, machine restart, someone else needing the GPU. When you re-submit the same workflow YAML, modl’s skip-existing check looks at what’s already in the output directory: if the (prompt, seed, model) triple already has a completed image with a sidecar YAML next to it, that step is skipped. The run picks up from where it left off.
This makes long multi-chapter batches safe to re-run without burning GPU time on work that’s already done. If you genuinely want fresh outputs for a step that completed, pass --force to override.
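Resuming is just the same submission again; --force is the only new piece. The `workflow run` subcommand name is a guess — the flag is what the text above describes:

```shell
# Resume: steps whose (prompt, seed, model) triple already has a completed
# image with a sidecar YAML are skipped automatically.
ssh workstation "modl workflow run ~/projects/book/book-series-batch.yaml"

# Regenerate everything, including steps that already completed.
ssh workstation "modl workflow run ~/projects/book/book-series-batch.yaml --force"
```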
Why SSH and not a web server
The short answer: nothing to manage. The workstation doesn’t need to run any persistent service, open any ports, or handle authentication. The SSH tunnel is already there; modl mcp piggybacks on it. On a machine you trust on your own network, this is just simpler. You get the same tool-call interface whether you’re on the same LAN or tunneling in from a coffee shop.
If you want a persistent web dashboard — a real-time progress view while the run is executing, a gallery of all your outputs — modl serve gives you that on the workstation. Open an SSH port-forward in a second terminal (ssh -L 7878:localhost:7878 workstation) and browse to http://localhost:7878 on your Mac. The MCP tools and the web UI are the same underlying data; use whichever fits the moment.
The daily pattern
Practically, what this enables is a rhythm: Mac for planning and reviewing, workstation for generation. Write the workflow on your Mac, validate it with --dry-run, submit overnight, review in the morning. The only thing that needs to touch the workstation is SSH — and you were already doing that anyway.
The model and LoRA stay on the workstation permanently. The YAML file is the thing you version-control and share. The ZIP is the thing you hand to a collaborator or upload to wherever the assets need to live. Your Mac is a terminal for the work; the work itself lives where the GPU is.
For a dozen images of a children’s book series, this means the actual generation work takes no time on your side — you submit before bed, review over coffee. Add more chapters to the YAML, submit again. Fix a cover, re-submit just the fix. The recipe travels; the GPU stays put.