Put nginx in front of MinIO at a path prefix and try to use presigned URLs. Go ahead, I’ll wait.
The problem is AWS Signature V4. The signature covers the request path. nginx at /storage/ strips that prefix before forwarding — so boto3 signs /storage/bucket/key, the server sees /bucket/key, the HMAC doesn’t match, you get a 403. This isn’t a configuration mistake, it’s how nginx path-prefix proxying works and it’s how SigV4 verification works. They’re structurally incompatible unless you get clever with your proxy config or switch to a server that handles it differently.
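To see why the prefix strip is fatal, here's a deliberately simplified sketch — not the real SigV4 key-derivation chain, just the structural point that the request path is part of the signed material, so client and server must see the same path:

```python
import hashlib
import hmac


def sigv4_sketch(secret: str, path: str) -> str:
    # Grossly simplified SigV4: the real thing derives a scoped signing key
    # and canonicalizes headers/query, but the essence is the same — the
    # canonical request embeds the path, and the HMAC covers all of it.
    canonical_request = "\n".join([
        "GET",
        path,                             # <- the request path is signed
        "",                               # canonical query string
        "host:example.com",               # canonical headers
        "host",                           # signed headers
        hashlib.sha256(b"").hexdigest(),  # payload hash
    ])
    string_to_sign = hashlib.sha256(canonical_request.encode()).hexdigest()
    return hmac.new(secret.encode(), string_to_sign.encode(),
                    hashlib.sha256).hexdigest()


# boto3 signs the path it sees; a prefix-stripping proxy forwards another one
client_side = sigv4_sketch("uploads-secret", "/storage/uploads/file.txt")
server_side = sigv4_sketch("uploads-secret", "/uploads/file.txt")
print(client_side == server_side)  # False -> SignatureDoesNotMatch, 403
```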
docker-hybrids3 handles it differently. It has a path_prefix config option — set it to /storage and all routes move under that prefix natively. No path stripping at the proxy. boto3 signs /storage/bucket/key, nginx forwards /storage/bucket/key, the server sees /storage/bucket/key. Signatures match. That’s also why I run everything behind a single nginx gateway (aigate does this) without fighting the proxy config.
The other thing that pushed me to build it: when an AI agent needs to write a file and hand back a URL, you don’t want to stand up a distributed object store for that. You want something that starts in 2 minutes and gets out of the way. HybridS3 has an MCP server built in — the agent calls upload_object, gets back a URL, done.
What It Is
SQLite for metadata, flat files on disk. boto3 works against it. Plain HTTP with curl works. Bearer token auth — no SigV4 ceremony for plain HTTP requests. Three interfaces, one container, no web console, no IAM, no distributed mode.
Buckets are defined in config, not created via API. You always know exactly what exists. Want a bucket? Add it to the YAML, restart. Want TTL expiry? Set ttl: 24h on the bucket and objects delete themselves after their last write. No lifecycle policies, no cron jobs.
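The expiry rule as described — TTL counted from the object's last write, `ttl: 0` meaning keep forever — reduces to a couple of lines. This is an illustrative sketch of the rule, not the server's actual cleanup code:

```python
def is_expired(last_write_ts: float, ttl_seconds: float, now: float) -> bool:
    # ttl: 0 in the bucket config means objects never expire
    if ttl_seconds == 0:
        return False
    # otherwise the clock restarts on every write to the object
    return now - last_write_ts > ttl_seconds


# an object last written 25h ago in a 24h-TTL bucket goes on the next sweep
print(is_expired(0.0, 24 * 3600, 25 * 3600))  # True
```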
Running It
Config at /config/config.yaml, data at /data, port 8080. Runs as UID 1000.
```shell
docker run -d --name hybrids3 \
  -p 8080:8080 \
  -v ./config.yaml:/config/config.yaml:ro \
  -v hybrids3-data:/data \
  psyb0t/hybrids3
```
Config
```yaml
master_key: "change-me-to-something-secret"
master_public_key: "master"
cleanup_interval: 1m
# path_prefix: /storage
buckets:
  uploads:
    public: true
    key: "uploads-secret"
    public_key: "uploads-id"
    ttl: 24h
    max_file_size: 50MB
  permanent:
    public: false
    key: "perm-secret"
    public_key: "permanent-id"
    ttl: 0
    max_file_size: 100MB
```

Each bucket has a private key (never transmitted, used to verify HMACs) and a public_key (the S3 access key ID, safe to embed in URLs). That split is what makes presigned URLs work — the public key can appear in the URL, the private key never does.
public: true means GET, HEAD, and LIST require no auth. PUT and DELETE still need a key. public: false means everything requires auth.
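That access rule is small enough to state as code — a hypothetical condensation of the behavior described above, not the server's implementation:

```python
def requires_auth(bucket_public: bool, method: str) -> bool:
    # private buckets: every verb needs a key
    if not bucket_public:
        return True
    # public buckets: GET/HEAD/LIST are open, writes still need a key
    return method.upper() in {"PUT", "DELETE"}


print(requires_auth(True, "GET"))   # False: public reads are open
print(requires_auth(True, "PUT"))   # True: writes always need a key
```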
HTTP API
Plain Bearer token in the Authorization header:
```shell
# upload
curl -X PUT http://localhost:8080/uploads/file.txt \
  -H "Authorization: Bearer uploads-secret" \
  -d "hello"

# read from public bucket — no auth
curl http://localhost:8080/uploads/file.txt

# read from private bucket
curl http://localhost:8080/permanent/doc.pdf \
  -H "Authorization: Bearer perm-secret"

# list objects with prefix filter
curl "http://localhost:8080/uploads?prefix=images/&max-keys=50" \
  -H "Authorization: Bearer uploads-secret"
```

Requests with an AWS Sig V4 Authorization header get S3-compatible XML responses. Everything else gets JSON.
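The dispatch rule is simple: SigV4 requests announce themselves by their auth scheme. An illustrative sketch (AWS4-HMAC-SHA256 is the scheme token SigV4 puts at the front of the Authorization header):

```python
def response_format(authorization: str) -> str:
    # SigV4 SDK clients send "Authorization: AWS4-HMAC-SHA256 Credential=..."
    if authorization.startswith("AWS4-HMAC-SHA256"):
        return "xml"   # S3-compatible XML for SDK clients
    return "json"      # plain JSON for curl / Bearer clients


print(response_format("Bearer uploads-secret"))  # json
```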
boto3
```python
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8080",
    aws_access_key_id="uploads-id",         # public_key from config
    aws_secret_access_key="uploads-secret", # key from config
    region_name="us-east-1",
    config=Config(signature_version="s3v4"),
)

s3.put_object(Bucket="uploads", Key="file.txt", Body=b"hello")
s3.get_object(Bucket="uploads", Key="file.txt")
s3.list_objects_v2(Bucket="uploads", Prefix="images/")
```

Behind Nginx
The part that actually matters. Set path_prefix: /storage in config, then:
```nginx
location /storage {
    proxy_pass http://hybrids3:8080;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Two things to get right: no trailing slash on proxy_pass (a trailing slash tells nginx to strip the location prefix — that breaks SigV4), and $http_host not $host ($host strips the port, which causes signature mismatches on non-standard ports).
With path_prefix set, boto3 points at http://yourdomain/storage and everything works.
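Client-side that's the earlier boto3 snippet with only the endpoint changed — a sketch where yourdomain is a placeholder and path_prefix: /storage is assumed set on the server:

```python
import boto3
from botocore.config import Config

# "yourdomain" is a placeholder; because the prefix is part of the endpoint,
# every path boto3 signs starts with /storage — exactly what nginx forwards
s3 = boto3.client(
    "s3",
    endpoint_url="https://yourdomain/storage",
    aws_access_key_id="uploads-id",
    aws_secret_access_key="uploads-secret",
    region_name="us-east-1",
    config=Config(signature_version="s3v4"),
)
```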
MCP
An MCP server runs at /mcp/. Seven tools: upload_object, download_object, delete_object, list_objects, list_buckets, object_info, presign_url. Connect any MCP-compatible client:
```json
{
  "mcpServers": {
    "hybrids3": {
      "type": "streamable-http",
      "url": "http://localhost:8080/mcp/"
    }
  }
}
```

Auth at the endpoint level (Authorization header or ?auth= query param) or per-tool via the auth_key parameter. Use the master key for full access, a bucket key to scope the connection to one bucket.
Presigned URLs
For private buckets, this generates a real AWS Sig V4 presigned URL signed with the bucket’s private key:
```shell
curl -X POST "http://localhost:8080/presign/permanent/doc.pdf?expires=3600" \
  -H "Authorization: Bearer perm-secret"
```

For public buckets it returns the plain URL — no signature needed since GET requires no auth anyway. Expiry range 1 second to 7 days.
Limitations
No multipart upload, no object versioning, no ACLs beyond bucket-level public/private, no bucket creation via API, no CORS headers, no replication, no encryption at rest. If you need any of that, use something else.
Grab it: github.com/psyb0t/docker-hybrids3. Licensed under WTFPL.