Look, I’ve built a lot of shit in Go. Worked on microservices that were basically just glorified HTTP handlers talking to each other across Docker networks. Debugged race conditions at 3 AM because service A couldn’t properly communicate with service B. Wasted entire afternoons trying to trace a request through seven different repos just to find out someone fucked up a JSON field name.
So I built servicepack. Not because I wanted another framework to add to the pile of mediocrity, but because I was tired of choosing between “run everything in one repo and pray” or “microservices hell with distributed debugging.”
The Problem Nobody Talks About
Here’s the thing about microservices that nobody admits until they’re knee-deep in the bullshit: local development is a goddamn nightmare.
You’ve got your authentication service, your user service, your notification service, your payment service, your whatever-the-fuck service. Each one has its own repo. Each one needs its own Docker container. Each one needs its own set of environment variables. You write a docker-compose.yml that’s 300 lines long and still doesn’t work half the time because port 8080 is already taken by that other project you forgot you were running.
Want to debug a request that touches three services? Good fucking luck. You’re either:
- Setting up remote debugging across three containers (hope you like port forwarding)
- Littering your code with print statements like it’s 1995
- Giving up and just deploying to staging to see what breaks
And God forbid you want to add a new service. Now you gotta create a whole new repo, set up CI/CD, configure the damn thing in docker-compose, add it to your Kubernetes manifests, and explain to your teammate why they need to pull four different repos just to work on a feature.
But here’s the real kick in the balls: the boilerplate. Every single microservice repo needs the same shit. Logger initialization. Error handling. Configuration parsing. Service registration. Graceful shutdown logic. Health check endpoints. For some services, the boilerplate is as much fucking code as the actual service logic. You end up copy-pasting the same 500 lines of setup code across seven repos, and when you need to fix a bug in the shutdown logic, you get to fix it seven times. Or you forget one repo and spend an hour debugging why that one service doesn’t shut down gracefully.
The alternative? Shove everything into a monolith. One big-ass repo with everything mixed together. Works great until it doesn’t, and then you’re stuck trying to extract pieces while the whole thing is on fire.
What If We Stopped Being Stupid?
Here’s my hot take: the problem isn’t monoliths OR microservices. The problem is that we treat local development and production deployment like they have to be the same fucking thing.
They don’t.
When you’re developing locally, you want:
- Everything in one process so you can step through the entire flow in your debugger
- One terminal window because you only have so much screen real estate
- The ability to see ALL the logs in one place without having to `docker-compose logs -f` and pray
When you’re deploying to production, you want:
- Services that can fail without taking down the whole goddamn system
- The ability to run different subsets of services on different machines
- Control over what runs where without changing code
servicepack gives you both. Write your code as services. Run it all in one binary locally. Deploy the same binary to production and use SERVICES_ENABLED to control what runs where. Same binary, different env var, different subset of services. No separate repos, no separate builds, no separate deploys.
How This Shit Actually Works
The core idea is stupid simple: you write services that implement an interface. The framework finds them automatically (using my gofindimpl tool because grep is for amateurs) and registers them as factories. Services are only instantiated when actually needed — ./app run creates all enabled services, ./app <service> <subcommand> creates only that one. No database connections on startup just to run a CLI command.
Each service is just this:
```go
type Service interface {
	Name() string
	Run(ctx context.Context) error
	Stop(ctx context.Context) error
}
```

That’s it. Three methods. Your service gets a context for cancellation, returns an error if shit goes wrong, and cleans up when asked to stop.
The framework handles the rest: concurrent execution, graceful shutdown, error propagation, dependency ordering, automatic retries, allowed failures. One service crashes? Depending on how you configured it, it either retries, dies gracefully, or takes everything down. Press Ctrl+C? Context gets cancelled, all services stop gracefully.
Optional Interfaces: Retry, Allowed Failure, Dependencies
Services can opt into advanced behavior by implementing optional interfaces:
```go
// Retryable - service gets restarted on failure
type Retryable interface {
	MaxRetries() int
	RetryDelay() time.Duration
}

// AllowedFailure - service can die without killing everything
type AllowedFailure interface {
	IsAllowedFailure() bool
}

// Dependent - service waits for other services to start first
type Dependent interface {
	Dependencies() []string
}

// ReadyNotifier - signal when actually ready to serve
type ReadyNotifier interface {
	Ready() <-chan struct{}
}

// Commander - expose CLI subcommands
type Commander interface {
	Commands() []*cobra.Command
}
```

Retry: When Run() returns an error, the service manager retries up to MaxRetries() times with RetryDelay() between attempts. If the context is cancelled during the delay, it bails cleanly.
Allowed Failure: When a service fails (even after retries), its error gets logged but doesn’t propagate — other services keep running. Perfect for non-critical shit like cache warmers or metrics exporters.
Dependencies: The service manager resolves a dependency graph using topological sort. Services with no deps start first, then their dependents. Cyclic dependencies are detected and rejected at startup. If a dependency isn’t in the process (e.g. you filtered it out with SERVICES_ENABLED), it logs a warning and skips that dependency instead of failing — so you can run a subset of services without dependency errors blocking you.
Ready Notification: Services that need initialization time (connecting to a database, warming a cache) can implement ReadyNotifier. The service manager waits for the Ready() channel to close before starting dependent services. Services that don’t implement this are considered ready immediately after launch. This means your API service won’t start accepting requests until the database service has actually connected and is ready to serve queries.
CLI Commands: Services that implement Commander get their own CLI namespace: ./app <servicename> <subcommand>. Only that service gets instantiated — no other services are touched. Returns standard cobra commands so you get flags, args, help — everything for free. The example-migrator uses this to expose ./app example-migrator up, down, and status commands.
You can combine them all — a service can be retryable AND an allowed failure AND have dependencies AND signal readiness AND expose CLI commands.
The ServiceManager: Not Your Average Goroutine Spawner
The heart of this whole thing is the ServiceManager. It’s a singleton (yeah I said it, singletons are fine when you’re not an idiot about it) that orchestrates everything.
Here’s what makes it not garbage:
Concurrent Execution With Dependency Ordering: Services are started in dependency order via topological sort. Services in the same dependency group start concurrently. Each runs in its own goroutine with a sync.WaitGroup for coordination and a channel for error propagation.
Context-Based Cancellation: Services get a context.Context. When that context is done, your service should stop. This isn’t rocket science, but you’d be surprised how many frameworks get this wrong.
Reverse-Order Shutdown: When the app shuts down (or when a non-allowed-failure service fails), services stop in reverse startup order — last started, first stopped. If your API depends on the database, the database stops last. Each service group stops concurrently within itself, but groups are processed in reverse sequence. Each service’s Stop() gets a 30-second timeout — if it doesn’t clean up in time, it gets cut off and the shutdown continues.
Panic Recovery: If a service panics, it doesn’t crash the whole process. The framework catches the panic via defer/recover, wraps it in an ErrServicePanic error, and treats it like any other service failure — retries if retryable, allowed failure if configured, or propagates to shut everything down. No more one rogue goroutine taking out your entire application.
Error Handling That Doesn’t Suck: Errors bubble up through the manager using ctxerrors (another one of my libraries) so you actually know WHERE in the call stack things went to shit. No more “an error occurred” with zero context.
Custom Initialization
cmd/init.go is your hook to run shit before the app starts. It’s never touched by framework updates. Use it to add custom slog handlers, set up global config, or anything else:
```go
// cmd/init.go
package main

import slogconfigurator "github.com/psyb0t/slog-configurator"

func init() {
	slogconfigurator.AddHandler(myLokiHandler)
}
```

Every slog.Info/Error/etc call across the entire app — framework, services, everything — goes to all registered handlers. Want Loki? Datadog? Elasticsearch? Just write a slog.Handler and plug it in here.
Custom CLI Commands
cmd/commands.go is your hook to add standalone CLI commands. Like cmd/init.go, it’s never touched by framework updates:
```go
// cmd/commands.go
package main

import "github.com/spf13/cobra"

func commands() []*cobra.Command {
	return []*cobra.Command{
		{
			Use:   "seed",
			Short: "Seed the database",
			Run: func(_ *cobra.Command, _ []string) {
				// your logic
			},
		},
	}
}
```

Then ./app seed just works. These are standalone commands separate from service commands — services expose their own subcommands via the Commander interface, while cmd/commands.go is for app-level stuff that doesn’t belong to any service.
One ServiceManager, Zero Boilerplate Per Service
Here’s the part that actually saves your ass in real projects: you write the boilerplate once.
With traditional microservices, every repo needs:
- Logger setup
- Error handling library integration
- Configuration parsing
- Signal handling for graceful shutdown
- Service lifecycle management
- Context plumbing
- Metrics collection setup
- Health check endpoints
That’s easily 300-500 lines of boilerplate before you write a single line of actual business logic. And when you have 10 services? That’s 3000-5000 lines of the same fucking code, duplicated across repos.
With servicepack, all that shit lives in one place: the framework. Every service automatically gets:
- Same logging: Go’s standard `log/slog` configured via `slog-configurator` (`LOG_LEVEL`, `LOG_FORMAT`, `LOG_ADD_SOURCE`)
- Same error handling: `ctxerrors` everywhere, full stack traces with context
- Same config parsing: Environment variables, no YAML/TOML/JSON config file bullshit
- Same lifecycle management: ServiceManager handles startup, shutdown, retries, dependency ordering
- Same signal handling: Internal runner manages OS signals and graceful shutdown with configurable timeout
- Same CLI: `cobra` commands already wired up
- Same environment detection: `goenv` for prod/dev detection via `ENV` var
When you add a new service, you scaffold it with one command:
```shell
make service NAME=my-cool-service
```

This creates a skeleton at internal/pkg/services/my-cool-service/. Edit the generated file, put your logic in Run(). Done — your service starts automatically on next build. Remove it with make service-remove NAME=my-cool-service.
Need to change how logging works across all services? Change it once in cmd/init.go. Need to add metrics? Add it once to the ServiceManager. Need better error context? Update ctxerrors usage once.
This is the real win: boilerplate lives in servicepack, services are just business logic.
Make It Your Own In 30 Seconds
```shell
git clone https://github.com/psyb0t/servicepack
cd servicepack
make own MODNAME=github.com/yourname/yourproject
```

This single command:
- Nukes the `.git` directory
- Replaces the module name everywhere
- Sets you up with a fresh `go.mod`
- Replaces README with just your project name
- Runs `git init` for a fresh start
- Sets up dependencies
- Creates an initial commit on main branch
You’re not “using servicepack” anymore — you OWN it. It’s your project now. The framework is just the bones.
Framework Updates That Don’t Make You Cry
Okay so you’ve made it your own. Now what happens when I push updates to the framework? Do you have to manually merge changes? Track what you’ve customized? Maintain a fork?
Nope.
```shell
make servicepack-update
```

This command:
- Checks for uncommitted changes (won’t touch your dirty working directory)
- Compares current version with latest
- Creates a backup automatically
- Creates a branch called `servicepack_update_to_VERSION`
- Downloads the latest framework and applies changes
- Commits everything to the update branch
- Leaves you there to review and test
Your services? Never touched. Your README.md? Untouched. Your customizations? Safe.
There’s a clear separation between framework files (get updated) and user files (yours forever):
Framework files (these get updated):
- `cmd/` — CLI entry point (except `cmd/init.go` and `cmd/commands.go`)
- `internal/app/` — Application orchestration
- `internal/pkg/service-manager/` — The core framework
- `scripts/make/servicepack/` — Framework build scripts
- `Makefile.servicepack` — Framework build targets
- `Dockerfile.servicepack` and `Dockerfile.servicepack.dev` — Framework container images
- `.github/` — CI/CD workflows
- `.golangci.yml` — Linting rules
User files (never touched):
- `internal/pkg/services/` — YOUR services live here
- `cmd/init.go` — Your custom initialization
- `cmd/commands.go` — Your custom CLI commands
- `README.md` — Your documentation
- `LICENSE` — Your license
- `Makefile` — Your custom build targets
- `Dockerfile` and `Dockerfile.dev` — Your custom container images
- `scripts/make/` — Your custom build scripts
If you’ve customized a framework file, add it to .servicepackupdateignore and it won’t get overwritten.
After updating:
```shell
# See what changed
make servicepack-update-review

# Test the changes
make dep && make service-registration && make test

# Happy? Merge it
make servicepack-update-merge

# Not happy? Fuck it
make servicepack-update-revert
```

The Build System: Dynamic As Fuck
servicepack extracts your binary name from go.mod automatically:
```makefile
APP_NAME := $(shell head -n 1 go.mod | awk '{print $$2}' | awk -F'/' '{print $$NF}')
```

(The doubled `$$` is make escaping — awk sees `$2` and `$NF`.) Your module is github.com/yourusername/my-awesome-api? Your binary is my-awesome-api. Change the module name? Binary name updates automatically. One source of truth.
The build uses Docker with static linking (no CGO dependencies) and injects the app name at build time via ldflags. Your binary is portable as fuck — copy it anywhere and it just works.
Customization Without Conflicts
The build system uses an override pattern for everything. There’s a framework version of every script, makefile, and Dockerfile — and you can override them with user versions:
Build Scripts:
- Framework: `scripts/make/servicepack/*.sh` (updated by framework)
- User: `scripts/make/*.sh` (your custom versions, take priority)
Makefiles:
- `Makefile.servicepack` — Framework targets (updated)
- `Makefile` — Your targets (includes servicepack + lets you override)
Docker Images:
- `Dockerfile.servicepack` and `Dockerfile.servicepack.dev` — Framework images (updated)
- `Dockerfile` and `Dockerfile.dev` — Your custom images (optional overrides)
Define a target in your Makefile? It overrides the framework version automatically. Need custom packages or build steps? Copy the framework Dockerfile and customize it. The build system uses your version automatically.
Backup and Restore
Because shit happens:
```shell
make backup                                  # timestamped backup to .backup/ and /tmp
make backup-restore                          # restore latest backup
make backup-restore BACKUP=filename.tar.gz   # restore specific backup
make backup-clear                            # delete all backups
```

Framework updates automatically create backups before making changes. You always have a way back.
Service Filtering: Run What You Need
Sometimes you don’t want to run ALL services. Maybe you’re working on authentication and don’t give a shit about the notification service right now.
```shell
export SERVICES_ENABLED="auth,user"
./build/yourapp run
```

Only those services run. Everything else is skipped. No commenting out code, no build flags, just an environment variable. Leave it empty or unset? Everything runs.
Example Services
The framework ships with 7 example services that demonstrate every lifecycle pattern:
- hello-world — basic long-running service
- example-database — retryable (2 retries, 2s delay), signals ready after startup
- example-api — depends on database + flaky services
- example-migrator — one-shot with allowed failure, depends on database
- example-optional — allowed failure, fails immediately but app keeps running
- example-flaky — retryable, fails twice then recovers
- example-crasher — retryable, fails all retries and kills everything
Run them all with make run-dev and watch the lifecycle in action — retries, dependency ordering, allowed failures, and the eventual crash. These get removed when you run make own.
Testing That Actually Catches Shit
The framework requires 90% test coverage by default (excluding the example services and cmd package). Coverage runs with race detection because if you’re not testing for race conditions in concurrent code, what the fuck are you even doing.
make test-coverageThe framework itself has comprehensive tests — unit tests for the service manager (retry, allowed failure, dependencies, concurrency), integration tests combining all features end-to-end, and app lifecycle tests with mock services. Over 1600 lines of test code.
For test isolation, there’s ResetInstance() to clear the singleton and ClearServices() to wipe the service registry. Mock services implement the Service interface so you can test without real service dependencies.
The Pre-commit Hook That Saves Your Ass
There’s a pre-commit.sh script that runs make lint && make test-coverage. Use it with your favorite pre-commit framework, or use my ez-pre-commit tool to auto-setup the hook.
Lint checks run 80+ linters via golangci-lint: errcheck, govet, staticcheck, gosec, the works.
Environment Variables
```shell
# Logging (via slog-configurator)
LOG_LEVEL=debug       # debug, info, warn, error
LOG_FORMAT=json       # json, text
LOG_ADD_SOURCE=true   # show file:line in logs

# Environment (via goenv)
ENV=dev               # dev, prod (default: prod)

# Service filtering
SERVICES_ENABLED=service1,service2  # comma-separated, empty = all

# Shutdown
RUNNER_SHUTDOWNTIMEOUT=10s  # graceful shutdown timeout
```

Where To Find This Shit
https://github.com/psyb0t/servicepack
MIT licensed, do whatever you want with it.