servicepack: The Go Framework That Actually Understands How Real Development Works

Look, I’ve built a lot of shit in Go. Worked on microservices that were basically just glorified HTTP handlers talking to each other across Docker networks. Debugged race conditions at 3 AM because service A couldn’t properly communicate with service B. Wasted entire afternoons trying to trace a request through seven different repos just to find out someone fucked up a JSON field name.

So I built servicepack. Not because I wanted another framework to add to the pile of mediocrity, but because I was tired of choosing between “run everything in one repo and pray” or “microservices hell with distributed debugging.”

The Problem Nobody Talks About

Here’s the thing about microservices that nobody admits until they’re knee-deep in the bullshit: local development is a goddamn nightmare.

You’ve got your authentication service, your user service, your notification service, your payment service, your whatever-the-fuck service. Each one has its own repo. Each one needs its own Docker container. Each one needs its own set of environment variables. You write a docker-compose.yml that’s 300 lines long and still doesn’t work half the time because port 8080 is already taken by that other project you forgot you were running.

Want to debug a request that touches three services? Good fucking luck. You’re either:

  1. Setting up remote debugging across three containers (hope you like port forwarding)
  2. Littering your code with print statements like it’s 1995
  3. Giving up and just deploying to staging to see what breaks

And God forbid you want to add a new service. Now you gotta create a whole new repo, set up CI/CD, configure the damn thing in docker-compose, add it to your Kubernetes manifests, and explain to your teammate why they need to pull four different repos just to work on a feature.

But here’s the real kick in the balls: the boilerplate. Every single microservice repo needs the same shit. Logger initialization. Error handling. Configuration parsing. Service registration. Graceful shutdown logic. Health check endpoints. For some services, the boilerplate is as much fucking code as the actual service logic. You end up copy-pasting the same 500 lines of setup code across seven repos, and when you need to fix a bug in the shutdown logic, you get to fix it seven times. Or you forget one repo and spend an hour debugging why that one service doesn’t shut down gracefully.

The alternative? Shove everything into a monolith. One big-ass repo with everything mixed together. Works great until it doesn’t, and then you’re stuck trying to extract pieces while the whole thing is on fire.

What If We Stopped Being Stupid?

Here’s my hot take: the problem isn’t monoliths OR microservices. The problem is that we treat local development and production deployment like they have to be the same fucking thing.

They don’t.

When you’re developing locally, you want:

  • Everything in one process so you can step through the entire flow in your debugger
  • Hot-reload on save because waiting for Docker to rebuild is soul-crushing
  • One terminal window because you only have so much screen real estate
  • The ability to see ALL the logs in one place without having to docker-compose logs -f and pray

When you’re deploying to production, you want:

  • Services that can scale independently
  • Services that can fail without taking down the whole goddamn system
  • Services that different teams can deploy without stepping on each other
  • The flexibility to rewrite one service in Rust (or whatever the fuck) without touching the others

servicepack gives you both. Write your code as services. Run it all in one binary locally. Want to deploy as microservices later? Extract the service code – it’s already modular. Or just deploy the whole binary and use SERVICES_ENABLED to control what runs where.

How This Shit Actually Works

The core idea is stupid simple: you write services that implement an interface. The framework finds them automatically (using my gofindimpl tool because grep is for amateurs), spins them all up in goroutines, and manages their lifecycle.

Each service is just this:

type Service interface {
    Name() string                   // unique name for the service
    Run(ctx context.Context) error  // main logic; bail out when ctx is done
    Stop(ctx context.Context) error // cleanup, called during graceful shutdown
}

That’s it. Three methods. Your service gets a context for cancellation, returns an error if shit goes wrong, and cleans up when asked to stop.

The framework handles the rest: concurrent execution, graceful shutdown, error propagation, the whole nine yards. One service crashes? Everything shuts down cleanly. Press Ctrl+C? Context gets cancelled, all services stop gracefully.
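
If you’ve never seen that wired up, here’s a minimal sketch of the Ctrl+C-to-context pattern using nothing but the standard library – not the actual app-runner code, just the mechanism it describes:

// A minimal sketch of turning OS signals into context cancellation.
// NOT the actual app-runner code – just the stdlib mechanism it uses.
package main

import (
    "context"
    "fmt"
    "os/signal"
    "syscall"
)

func main() {
    // ctx is cancelled when SIGINT (Ctrl+C) or SIGTERM arrives
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
    defer stop()

    <-ctx.Done() // every service's Run(ctx) would see this same cancellation
    fmt.Println("shutting down")
}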

The ServiceManager: Not Your Average Goroutine Spawner

The heart of this whole thing is the ServiceManager in internal/pkg/service-manager/service_manager.go. It’s a singleton (yeah I said it, singletons are fine when you’re not an idiot about it) that orchestrates everything.

Here’s what makes it not garbage:

Concurrent Execution Without The Bullshit: Each service runs in its own goroutine. We use a sync.WaitGroup for coordination and a channel for error propagation. First error stops everything – no half-alive system states.

Context-Based Cancellation: Services get a context.Context. When that context is done, your service should stop. This isn’t rocket science, but you’d be surprised how many frameworks get this wrong.

Graceful Shutdown: When the app shuts down (or when any service fails), we call Stop() on all running services concurrently. Each service gets the shutdown context to clean up. The app-runner handles timeout enforcement, so services that take too long get cut off.

Error Handling That Doesn’t Suck: Errors bubble up through the manager using ctxerrors (another one of my libraries) so you actually know WHERE in the call stack things went to shit. No more “an error occurred” with zero context.

Service Auto-Discovery: Because Manual Registration Is Tedious

You know what sucks? Manually registering every service in some initialization function. You know what sucks more? Forgetting to register a service and spending 20 minutes figuring out why it’s not running.

servicepack uses gofindimpl to automatically discover all types that implement the Service interface. When you run make service-registration, it generates internal/pkg/services/services.gen.go with an Init() function that registers everything automatically.

Add a service? It gets picked up automatically. Remove a service? It’s gone. No manual bookkeeping, no forgotten registrations, no bullshit.
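
Here’s roughly what that generated file might look like. The exact names (GetInstance, RegisterService, the hello-world type) are my guesses – run make service-registration and read the real output:

// services.gen.go – hypothetical sketch, NOT the real generated output.
package services

import (
    servicemanager "github.com/yourname/yourproject/internal/pkg/service-manager"
)

// Init registers every discovered Service implementation with the manager.
func Init() {
    m := servicemanager.GetInstance() // singleton accessor name assumed
    m.RegisterService(&HelloWorld{})  // one line per discovered service;
                                      // the types live elsewhere in this package
}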

One ServiceManager, Zero Boilerplate Per Service

Here’s the part that actually saves your ass in real projects: you write the boilerplate once.

With traditional microservices, every repo needs:

  • Logger setup (logrus, zap, whatever)
  • Error handling library integration
  • Configuration parsing
  • Signal handling for graceful shutdown
  • Service lifecycle management
  • Context plumbing
  • Metrics collection setup
  • Health check endpoints

That’s easily 300-500 lines of boilerplate before you write a single line of actual business logic. And when you have 10 services? That’s 3000-5000 lines of the same fucking code, duplicated across repos.

With servicepack, all that shit lives in one place: the framework. Every service automatically gets:

  • Same logging: logrus configured via environment variables (LOG_LEVEL, LOG_FORMAT, LOG_CALLER)
  • Same error handling: ctxerrors everywhere, full stack traces with context
  • Same config parsing: gonfiguration for consistent env var handling
  • Same lifecycle management: ServiceManager handles startup, shutdown, error propagation
  • Same signal handling: app-runner manages OS signals and graceful shutdown
  • Same CLI: cobra commands already wired up

When you add a new service, you write:

type MyService struct{}

func (s *MyService) Name() string { return "my-service" }

func (s *MyService) Run(ctx context.Context) error {
    // Your actual business logic here
    return nil
}

func (s *MyService) Stop(ctx context.Context) error {
    // Cleanup if needed
    return nil
}

That’s it. No logger initialization. No signal handlers. No shutdown coordination. You implement your logic, everything else is already there.

Need to change how logging works across all services? Change it once in the framework. Need to add metrics? Add it once to the ServiceManager. Need better error context? Update ctxerrors usage once.

This is the real win: boilerplate lives in servicepack, services are just business logic.

Now yeah, the tradeoff is you’re stuck with my choices: logrus for logging, ctxerrors for error handling, gonfiguration for config, cobra for CLI. If you hate any of these, you’ll need to fork and swap them out. But let’s be honest – these aren’t garbage libraries. They’re solid, widely-used tools that get the job done. And the ability to change them once for all services beats maintaining the same boilerplate across 10 different repos any day of the fucking week.

Make It Your Own In 30 Seconds

Here’s the thing about frameworks: they’re great until you need to actually use them for YOUR project. Most frameworks make you clone their repo and then manually change a bunch of shit while trying not to break anything.

Fuck that.

git clone https://github.com/psyb0t/servicepack
cd servicepack
make own MODNAME=github.com/yourname/yourproject

This single command:

  • Nukes the .git directory
  • Replaces the module name everywhere
  • Rewrites go.mod and go.sum
  • Resets the README to just your project name
  • Runs git init for a fresh start
  • Sets up dependencies
  • Creates an initial commit

You’re not “using servicepack” anymore – you OWN it. It’s your project now. The framework is just the bones.

Framework Updates That Don’t Make You Cry

Okay so you’ve made it your own. Now what happens when I push updates to the framework? Do you have to manually merge changes? Track what you’ve customized? Maintain a fork?

Nope.

make servicepack-update

This command:

  1. Checks for uncommitted changes (won’t touch your dirty working directory)
  2. Creates a backup automatically (in .backup/ and /tmp)
  3. Fetches the latest framework version
  4. Creates a branch called servicepack_update_to_VERSION
  5. Applies the updates while protecting your code
  6. Commits everything to the update branch
  7. Leaves you there to review and test

Your services? Never touched. Your README.md? Untouched. Your customizations? Safe.

There’s a clear separation between framework files (get updated) and user files (yours forever):

Framework files (these get updated):

  • cmd/ – CLI entry point
  • internal/app/ – Application orchestration
  • internal/pkg/service-manager/ – The core framework
  • scripts/make/servicepack/ – Framework build scripts
  • Makefile.servicepack – Framework build targets
  • Dockerfile.servicepack and Dockerfile.servicepack.dev – Framework container images

User files (never touched):

  • internal/pkg/services/ – YOUR services live here
  • README.md – Your documentation
  • LICENSE – Your license
  • Makefile – Your custom build targets
  • Dockerfile and Dockerfile.dev – Your custom container images
  • scripts/make/ – Your custom build scripts

If you’ve customized a framework file (like modifying the core app logic), just add it to .servicepackupdateignore and it won’t get overwritten.
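
I’d expect the format to be one path per line, gitignore-style – check the repo for the exact syntax. Something like:

# .servicepackupdateignore – assumed format: one path per line
internal/app/
Makefile.servicepack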

After updating:

# See what changed
make servicepack-update-review
# Test the changes
make dep && make service-registration && make test
# Happy? Merge it
make servicepack-update-merge
# Not happy? Fuck it
make servicepack-update-revert

You get framework improvements without losing your customizations. It’s not black magic – it’s just well-designed automation.

The Build System: Dynamic As Fuck

Most frameworks hardcode the binary name. Or they make you configure it in 47 places. Or they just name everything “main” because who gives a shit about UX.

servicepack extracts your binary name from go.mod automatically:

APP_NAME := $(shell head -n 1 go.mod | awk '{print $$2}' | awk -F'/' '{print $$NF}')

Your module is github.com/yourusername/my-awesome-api? Your binary is my-awesome-api. Change the module name? Binary name updates automatically. One source of truth.

The build uses Docker with static linking (no CGO dependencies) and injects the app name at build time via ldflags. Your binary is portable as fuck – copy it anywhere and it just works.
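
If you haven’t seen ldflags injection before, here’s the Go side of the trick. The variable name below is my assumption – the -X mechanism itself is standard Go tooling:

// main.go – sketch of receiving the app name at link time.
// Built with something like: go build -ldflags "-X main.appName=my-awesome-api"
package main

import "fmt"

var appName = "app" // overwritten at build time via -X (variable name assumed)

func main() {
    fmt.Println("running", appName)
}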

Customization Without Conflicts

The build system uses an override pattern for everything. There’s a framework version of every script and makefile, and you can override them with user versions:

Build Scripts:

  • Framework: scripts/make/servicepack/*.sh (updated by framework)
  • User: scripts/make/*.sh (your custom versions)

The Makefile checks for user scripts first. If you want custom build logic, just copy the framework script to scripts/make/ and edit it. Framework updates won’t touch your version.
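
Conceptually it’s just a “user first, framework second” fallback. A hypothetical Make sketch of the lookup – not the actual framework code:

# Prefer the user's script if it exists, else fall back to the framework copy
BUILD_SCRIPT := $(firstword $(wildcard scripts/make/build.sh) scripts/make/servicepack/build.sh)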

Makefiles:

  • Makefile.servicepack – Framework targets (updated)
  • Makefile – Your targets (includes servicepack + lets you override)

Define a target in your Makefile? It overrides the framework version automatically. Want to add custom deployment commands? Just add them to your Makefile.

Docker Images:

  • Dockerfile.servicepack and Dockerfile.servicepack.dev – Framework images (updated)
  • Dockerfile and Dockerfile.dev – Your custom images (optional overrides)

Need custom packages or build steps? Copy the framework Dockerfile and customize it. The build system will use your version automatically.

Service Filtering: Run What You Need

Sometimes you don’t want to run ALL services. Maybe you’re working on authentication and don’t give a shit about the notification service right now.

export SERVICES_ENABLED="auth,user"
./build/yourapp run

Only those services run. Everything else is skipped. No commenting out code, no build flags, just an environment variable.

Leave SERVICES_ENABLED empty or unset? Everything runs. Simple.
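
The filtering idea is dead simple. Here’s a hypothetical sketch of the logic – not the actual servicepack code:

package sketch

import (
    "os"
    "strings"
)

// serviceEnabled reports whether a service should run given SERVICES_ENABLED.
// Empty or unset means no filter: everything runs.
func serviceEnabled(name string) bool {
    raw := strings.TrimSpace(os.Getenv("SERVICES_ENABLED"))
    if raw == "" {
        return true
    }
    for _, enabled := range strings.Split(raw, ",") {
        if strings.TrimSpace(enabled) == name {
            return true
        }
    }
    return false
}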

Testing That Actually Catches Shit

The framework requires 85% test coverage by default (excluding the hello-world example and cmd package). Coverage runs with race detection because if you’re not testing for race conditions in concurrent code, what the fuck are you even doing?

make test-coverage

This runs go test -race and fails if coverage drops below 85%. Your build pipeline should call this. Your pre-commit hook should call this. Anything less is amateur hour.

For test isolation, there’s ResetInstance() to clear the singleton and ClearServices() to wipe the service registry. Write your tests with mock services, reset the state between tests, and you’re golden.
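
Here’s a sketch of what an isolated test might look like. The import path and the GetInstance/RegisterService names are my guesses – ResetInstance() and ClearServices() are the documented helpers:

package servicemanager_test

import (
    "context"
    "testing"

    servicemanager "github.com/yourname/yourproject/internal/pkg/service-manager"
)

// mockService is a no-op Service implementation for tests.
type mockService struct{}

func (m *mockService) Name() string                   { return "mock" }
func (m *mockService) Run(ctx context.Context) error  { <-ctx.Done(); return nil }
func (m *mockService) Stop(ctx context.Context) error { return nil }

func TestMockServiceRuns(t *testing.T) {
    servicemanager.ResetInstance()          // fresh singleton for this test
    t.Cleanup(servicemanager.ResetInstance) // and wipe it afterwards

    m := servicemanager.GetInstance() // accessor name assumed
    m.RegisterService(&mockService{}) // registration method name assumed
    // ... run with a cancellable context, assert shutdown behavior ...
}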

The Pre-commit Hook That Saves Your Ass

There’s a pre-commit.sh script that runs make lint && make test-coverage. Use it with your favorite pre-commit framework, or use my ez-pre-commit tool to auto-setup the hook.

Lint checks run 100+ linters via golangci-lint: errcheck, govet, staticcheck, gosec, the works. If you can push code that fails basic quality checks, you’re either running without hooks or you’re deliberately bypassing them (which, fair enough, sometimes you gotta move fast).

The Concurrency Model: Goroutines Without Chaos

Here’s how the concurrency actually works:

  1. ServiceManager gets your services
  2. For each service, it spawns a goroutine and adds it to a sync.WaitGroup
  3. Each goroutine calls service.Run(ctx) and sends errors to a channel
  4. The main goroutine selects on: context cancellation, errors from services, or manual stop
  5. First error or cancellation triggers shutdown
  6. Shutdown calls Stop(ctx) on all running services (concurrently)
  7. Wait for all services to finish cleanup
  8. Done

No complicated state machines. No weird lifecycle hooks. Just goroutines, wait groups, and channels – the way concurrent Go is supposed to work.
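
Here’s a stripped-down sketch of that shape in code – not the actual ServiceManager (which also handles registration, filtering, and timeout enforcement), just the goroutine/WaitGroup/error-channel skeleton described above:

package sketch

import (
    "context"
    "sync"
)

// Service mirrors the interface from earlier in the post.
type Service interface {
    Name() string
    Run(ctx context.Context) error
    Stop(ctx context.Context) error
}

func runAll(ctx context.Context, services []Service) error {
    ctx, cancel := context.WithCancel(ctx)
    defer cancel()

    errCh := make(chan error, len(services)) // buffered: sends never block
    var wg sync.WaitGroup

    for _, svc := range services {
        wg.Add(1)
        go func(s Service) {
            defer wg.Done()
            if err := s.Run(ctx); err != nil {
                errCh <- err
            }
        }(svc)
    }

    // Block until something fails or we're cancelled from outside.
    var firstErr error
    select {
    case <-ctx.Done(): // external cancellation (e.g. Ctrl+C)
    case firstErr = <-errCh: // first service error wins
    }
    cancel() // tell every Run(ctx) to wind down

    // Stop all services concurrently; real code would use a timeout context.
    var stopWg sync.WaitGroup
    for _, svc := range services {
        stopWg.Add(1)
        go func(s Service) {
            defer stopWg.Done()
            _ = s.Stop(context.Background())
        }(svc)
    }
    stopWg.Wait()
    wg.Wait() // wait for all Run goroutines to exit
    return firstErr
}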

What’s Next (The TODO List)

I’ve got plans for this thing:

Service Retry: If a service crashes, check its retry count and restart it if it hasn’t hit the limit. Some services are flaky – might as well handle that gracefully.

Allowed Failures: Let certain services die without killing everything. Useful for one-shot migrations or periodic jobs that run once and fuck off.

Service Dependencies: Let services declare “I need database service to start first.” Topological ordering for startup so things come up in the right order.

Health Checks: Built-in HTTP endpoints to check if services are alive. Timeouts, failure thresholds, the whole deal.

Management API: HTTP endpoint to see running services and control them. Start/stop/restart individual services without bouncing the whole process.

Metrics: Track startup times, failure counts, restart counts. Optionally export to Prometheus because everyone loves Prometheus.

Service Communication: Built-in message bus so services can talk to each other without you having to roll your own pub/sub bullshit.

Why You Should Give A Shit

If you’re building something with multiple services and you’re tired of:

  • Docker Compose files that barely work
  • Debugging distributed systems locally
  • Managing 47 git repos for one feature
  • Choosing between monoliths and microservices hell
  • Framework updates that break your shit

Then servicepack might not make you hate everything quite as much.

It’s not going to solve world hunger or make your code magically faster. It’s just a framework that gets out of your way and lets you write services without dealing with the usual bullshit.

Clone it, make it yours, write some services, and see if it makes your life less annoying. If it does, great. If it doesn’t, at least you only wasted 30 seconds making it your own.

Where To Find This Shit

https://github.com/psyb0t/servicepack

MIT licensed, do whatever you want with it.