This digital service is scaffolding for a Golang web server. It is configured with logging, metrics, health checks, & profiling. It is also integrated with secrets storage and a relational database.
A corporate development team can deploy a prototype into a production environment. ¡Vamos! hastens development and eases operation.
Let's learn how to develop, build, test, and operate this application.
This guide is for macOS. You will need two things: Podman and Go.
A natively installed instance of Postgres is fine when it is the only dependency, but I imagine anyone using this will have an existing installation of Postgres configured for a different development context. We can use Postgres inside a virtual machine to avoid disruptions. And we can add other databases and dependencies.
A virtual machine managed by podman[1] will host the databases needed by the application. The virtual machine runs Linux, specifically Fedora CoreOS.[2] And systemd will manage the containers hosting the databases.
The included makefile offers a command that copies a few .container files from a directory named _linux/ to a new directory on the macOS host, along with a .sql initialization script for Postgres. It then uses podman to create a virtual machine named dev_vamos that can read the new directory, and uses systemd to fetch container images, run them, and set up the Postgres instance in a container.
All you need is an installation of podman. Postgres will need a minute to start.
~/vamos $ make podman_create_vm
~/vamos $ podman ps -a
Instead of using podman commands to manipulate the containers directly, we can use systemd inside the Linux virtual machine to start and stop them.
~/vamos $ podman machine ssh dev_vamos "systemctl --user status dev_postgres"
● dev_postgres.service - Launch Postgres 18 with native UUIDv7
     Loaded: loaded (/var/home/core/.config/containers/systemd/dev_postgres.container; generated)
    Drop-In: /usr/lib/systemd/user/service.d
             └─10-timeout-abort.conf
     Active: active (running) since Fri 2025-07-04 09:49:39 EDT; 6s ago
 Invocation: 3b0202c669c640a1a6a96bd8bab6f4d5
   Main PID: 9034 (conmon)
      Tasks: 24 (limit: 2155)
     Memory: 39.4M (peak: 55.5M)
        CPU: 287ms
     CGroup: /user.slice/user-501.slice/user@501.service/app.slice/dev_postgres.service
             ├─libpod-payload-4f575acdf6c9155ee2e079ba37c9220e9aef7bb47430af6c9ad969d26cf12d30
             │ ├─9036 postgres
             │ ├─9062 "postgres: io worker 1"
             │ ├─9063 "postgres: io worker 0"
             │ ├─9064 "postgres: io worker 2"
             │ ├─9065 "postgres: checkpointer "
             │ ├─9066 "postgres: background writer "
             │ ├─9068 "postgres: walwriter "
             │ ├─9069 "postgres: autovacuum launcher "
             │ └─9070 "postgres: logical replication launcher "
             └─runtime
               ├─9018 rootlessport
               ├─9025 rootlessport-child
               └─9034 /usr/bin/conmon --api-version 1 # removed for brevity
Connect to the database named test_data in the containerized Postgres instance from the macOS host.
~/vamos $ psql -h localhost -U tester -d test_data
A container image of Postgres 18 Beta is preferred for the native UUIDv7 feature. How is a container obtained and managed by podman in this development environment?
A special .container file is read from a user directory named .config/containers/systemd/ in the VM by a podman tool named quadlet. Quadlet parses the file to produce a systemd service file. The resulting .service file can download a container image and run it. More details can be studied in the makefile under the command podman_create_vm.
The quadlet .container file includes a few commands commonly used to run containers in both Docker and podman.
# _linux/dev_postgres.container
[Unit]
Description=Launch Postgres 18 with native UUIDv7
[Container]
Image=docker.io/library/postgres:18beta2-alpine3.22
ContainerName=postgres
Environment=POSTGRES_PASSWORD=password
Environment=POSTGRES_USERNAME=postgres
Environment=POSTGRES_HOST_AUTH_METHOD=trust
PublishPort=5432:5432
Volume=/data/postgres:/var/lib/postgresql/18/docker
Volume=/data/setup/setup_db1.sql:/docker-entrypoint-initdb.d/setup_db1.sql
PidsLimit=100
[Service]
Restart=on-failure
RestartSec=10
[Install]
RequiredBy=databases.target
The _testdata/setup_db1.sql file will be copied from the project on the host to the volume of the virtual machine, then mounted into the Postgres container. Postgres reads this file only once, during its initialization, and skips it whenever the container is started again.
-- _testdata/setup_db1.sql
DROP DATABASE IF EXISTS test_data;
CREATE DATABASE test_data;
CREATE USER tester WITH PASSWORD 'password';
\c test_data
GRANT ALL ON SCHEMA public TO tester;
Notice the command to switch from the default database to the newly created test_data database. The default user must be connected to the latter database to effectively grant privileges to another account.
To launch the Postgres development instance, simply ssh into the podman virtual machine and instruct systemd to start the service. Logs can be viewed via journald.
~/ $ podman machine ssh dev_vamos "systemctl --user start dev_postgres"
~/ $ podman machine ssh dev_vamos "journalctl --user -u dev_postgres"
The extension .service is excluded from the commands for brevity.
Next, install a couple of CLI tools. Neither will be imported into the application.
~/vamos $ go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
~/vamos $ go install github.com/sqlc-dev/sqlc/cmd/sqlc@latest
The CLI tool migrate creates numbered .sql files that we can fill in with SQL commands. Then it applies them in numbered order to a Postgres database.[3]
Create a .sql file that will hold the commands to create a table named authors.
~/vamos $ migrate create -ext sql -dir sqlc/migrations/first -seq create_authors
~/vamos $ tree sqlc/migrations/first
sqlc/migrations/first
├── 000001_create_authors.down.sql
└── 000001_create_authors.up.sql
In 000001_create_authors.up.sql, write the following SQL commands:
CREATE TABLE IF NOT EXISTS authors (
    id UUID DEFAULT uuidv7() PRIMARY KEY,
    name text NOT NULL,
    bio text
);
After writing a SQL command to create a table, apply the command. Notice the subdirectory associated with a particular database, in this case first. Notice the keyword up as the final token in the command.
~/vamos $ export TEST_DB="postgres://tester@localhost:5432/test_data?sslmode=disable"
~/vamos $ migrate -database "$TEST_DB" -path sqlc/migrations/first up
The creation of tables and any other adjustments applied by *.up.sql files can be reversed with the SQL commands written in the corresponding *.down.sql files.
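As a sketch, a matching down file for the authors table would simply drop it; the exact contents are an assumption, since the source does not show the file.

```sql
-- sqlc/migrations/first/000001_create_authors.down.sql (hypothetical contents)
DROP TABLE IF EXISTS authors;
```

Running migrate with the keyword down instead of up applies these reversals in descending numbered order.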
The command line tool sqlC reads .sql files and writes Go code we can import into the application.[4]
# sqlc/sqlc.yaml
version: "2"
sql:
  - engine: "postgresql"
    queries: "queries/first"
    schema: "migrations/first"
    gen:
      go:
        package: "first"
        out: "data/first"
        sql_package: "pgx/v5"
        emit_json_tags: true
  - engine: "postgresql"
    queries: "queries/second"
    schema: "migrations/second"
    gen:
      go:
        package: "second"
        out: "data/second"
        sql_package: "pgx/v5"
        emit_json_tags: true
In sqlc/sqlc.yaml, two entries are listed to help the Go application connect to two different Postgres databases. Each entry relies on a directory of .sql files written for queries, and a directory of .sql files, named migrations, written for creating tables. sqlC reads these files as inputs.
The produced code will reside in the first package in a newly created subdirectory named data/first and the second package in subdirectory data/second. The code will use the pgx/v5 driver, and include JSON tags in the fields of the generated structs that represent data entities.
After we draft a .sql file for a hypothetical table of authors, like so:
-- sqlc/migrations/first/000001_create_authors.up.sql
CREATE TABLE IF NOT EXISTS authors (
    id UUID DEFAULT uuidv7() PRIMARY KEY,
    name text NOT NULL,
    bio text
);
We can execute the command to create Go code that will interact with the Postgres database.
~/vamos $ sqlc generate -f sqlc/sqlc.yaml
The tool sqlC produces the following code in a models.go file:
// sqlc/data/first/models.go
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.28.0
package first
// abbreviated for clarity...
type Author struct {
    ID   pgtype.UUID `json:"id"`
    Name string      `json:"name"`
    Bio  pgtype.Text `json:"bio"`
}
Author will be accessible in a method of a struct named Queries.
// sqlc/data/first/authors.sql.go
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.28.0
// source: authors.sql
package first
// abbreviated for clarity...
const getAuthor = `-- name: GetAuthor :one
SELECT id, name, bio FROM authors WHERE name = $1 LIMIT 1
`

func (q *Queries) GetAuthor(ctx context.Context, name string) (Author, error) {
    row := q.db.QueryRow(ctx, getAuthor, name)
    var i Author
    err := row.Scan(&i.ID, &i.Name, &i.Bio)
    return i, err
}
And Queries is generated in sqlc/data/first/db.go. It holds the database handle, i.e., the connection pool.
// sqlc/data/first/db.go
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.28.0
package first
// abbreviated for clarity...
func New(db DBTX) *Queries {
    return &Queries{db: db}
}
The Postgres connection pool created in main() is transferred to Queries when configuring the Backbone with the Options pattern.[5]
// main.go
package main
// abbreviated for clarity...
func main() {
    db1, _ := rdbms.ConnectDB(cfg, DB_FIRST)
    backbone := router.NewBackbone(
        router.WithLogger(srvLogger),
        router.WithQueryHandleForFirstDB(db1),
    )
    router := router.NewRouter(backbone)
}
The Backbone struct holds the dependencies needed by the HTTP Handlers. It resides in the router package.
// internal/router/backbone.go
package router
// abbreviated for clarity...
func WithQueryHandleForFirstDB(dbHandle *pgxpool.Pool) Option {
    return func(b *Backbone) {
        q := rdbms.FirstDB_AdoptQueries(dbHandle)
        b.FirstDB = q
    }
}
This particular function transfers the connection pool to Queries.
// internal/data/rdbms/rdbms.go
package rdbms
// abbreviated for clarity...
func FirstDB_AdoptQueries(dbpool *pgxpool.Pool) *first.Queries {
    return first.New(dbpool)
}
Create a feature with an existing SQL Table by following this process:
- Draft a SQL query.
- Generate Go code in sqlc/data/ based on the new SQL.
- Draft a new HTTP Handler.
- Register the new HTTP Handler with the Router.
- Add a log line.
- Add a metric line & register it.
In the directory sqlc/queries/first, add a file named authors.sql, then write this inside it.
-- name: GetAuthor :one
SELECT * FROM authors WHERE name = $1 LIMIT 1;
Then use sqlC to transform that SQL query into Go code.
~/vamos $ sqlc generate -f sqlc/sqlc.yaml
sqlC will read the comment, then create a const with that name, and assign a query to it. Then it will create a method with the same name that uses the const.
// sqlc/data/first/authors.sql.go
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.28.0
// source: authors.sql
package first
const getAuthor = `-- name: GetAuthor :one
SELECT id, name, bio FROM authors WHERE name = $1 LIMIT 1
`
func (q *Queries) GetAuthor(ctx context.Context, name string) (Author, error) {
    row := q.db.QueryRow(ctx, getAuthor, name)
    var i Author
    err := row.Scan(&i.ID, &i.Name, &i.Bio)
    return i, err
}
The method GetAuthor() can be invoked inside an HTTP handler.
Developers can focus on the file internal/router/routes_features_v1.go to create RESTful features.
Dependency injection is the technique used to provide database handles to the HTTP handlers on the web server. Handlers are simply methods of the struct Backbone. Access a Postgres database through a Queries struct residing in the Backbone field named FirstDB.
A custom func type named errHandler has been created to make responding to HTTP requests feel like idiomatic Go with a returned error. The usual work performed by a HTTP Handler, such as reading data from a database, will be done inside an errHandler.
// internal/router/routes_features_v1.go
package router
// abbreviated for clarity...
// Similar to the http.HandlerFunc, but returns an error.
type errHandler func(http.ResponseWriter, *http.Request) error
// readAuthor conforms to the signature of errHandler and feels idiomatic.
func (b *Backbone) readAuthor(w http.ResponseWriter, req *http.Request) error {
    surname := req.PathValue("surname")
    timer, cancel := context.WithTimeout(req.Context(), TIMEOUT_REQUEST)
    defer cancel()
    result, err := b.FirstDB.GetAuthor(timer, surname)
    // No need to handle the error inside the body of this modified handler.
    // Simply return it.
    if err != nil {
        return err
    }
    w.Write([]byte(result.Name))
    return nil
}
The returned error needs to be managed & recorded by the function eHand. The errHandler needs to be wrapped by eHand to conform to the http.HandlerFunc interface and be accepted by the router.
Inside the package router in internal/router/routes_features_v1.go, add the wrapped errHandler to the router in the private function addFeaturesV1.
Select the HTTP method that is most appropriate for writing and reading data. The ability to specify GET or POST in the pattern argument is a new feature of the language in version 1.22.[6]
// internal/router/routes_features_v1.go
package router
// abbreviated for clarity...
func addFeaturesV1(router *http.ServeMux, b *Backbone) {
    rAuthorHandler := b.eHand(b.readAuthor)
    router.Handle("GET /author/{surname}", rAuthorHandler)
}

func (b *Backbone) eHand(f errHandler) http.HandlerFunc {
    return func(w http.ResponseWriter, req *http.Request) {
        // f is the method b.readAuthor
        err := f(w, req)
        // Error management begins here.
        // 1) Did the client cancel the request? No response needed.
        // 2) Did the request exceed a timer?
        // 3) Did the database simply lack the data? Not really an error.
        // 4) Mask unanticipated errors with a 503.
        if err != nil {
            switch {
            case errors.Is(err, context.Canceled):
                b.Logger.Error("HTTP", "status", StatusClientClosed)
            case errors.Is(err, context.DeadlineExceeded):
                b.Logger.Error("HTTP", "status", http.StatusRequestTimeout)
                http.Error(w, "timeout", http.StatusRequestTimeout)
            case errors.Is(err, sql.ErrNoRows):
                w.WriteHeader(http.StatusNoContent)
            default:
                b.Logger.Error("HTTP", "err", err.Error())
                http.Error(w, "service unavailable", http.StatusServiceUnavailable)
            }
        }
    }
}
Inside an HTTP handler, record errors and extra data by simply invoking b.Logger.Error(topic, key, value).
//internal/router/routes_features_v1.go
package router
// abbreviated for clarity...
func (b *Backbone) readAuthor(w http.ResponseWriter, req *http.Request) error {
    surname := req.PathValue("surname")
    timer, cancel := context.WithTimeout(req.Context(), TIMEOUT_REQUEST)
    defer cancel()
    result, err := b.FirstDB.GetAuthor(timer, surname)
    if err != nil {
        b.Logger.Error("readAuthor", "msg", err.Error())
        return err
    }
    w.Write([]byte(result.Name))
    return nil
}
The package metrics is responsible for custom metrics.
First, define the options Name and Help. Second, select one of four types: counter, gauge, histogram, or summary.[7] Third, register it inside the function Register(). This will be invoked in main().
// internal/metrics/metrics.go
package metrics
import (
    "github.com/prometheus/client_golang/prometheus"
)

var readAuthorOpts = prometheus.CounterOpts{
    Name: "read_author_count",
    Help: "amount readAuthor requests",
}

var ReadAuthorCounter = prometheus.NewCounter(readAuthorOpts)

func Register() {
    prometheus.MustRegister(ReadAuthorCounter)
}
Finally, increment the counter with the method Inc() inside an HTTP Handler.
// internal/router/routes_features_v1.go
package router
// abbreviated for clarity...
import "vamos/internal/metrics"
func (b *Backbone) readAuthor(w http.ResponseWriter, req *http.Request) error {
    metrics.ReadAuthorCounter.Inc()
    surname := req.PathValue("surname")
    // skipping the rest of the body...
}
Observe the new data on the /metrics route.
~/vamos $ curl localhost:8080/metrics
# abbreviated for clarity...
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 0
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
# HELP read_author_count amount readAuthor requests
# TYPE read_author_count counter
read_author_count 0
Applications usually receive a request for a health status, then perform some logic to evaluate the health of the application and the health of any dependencies, then answer. That flow of events doesn't happen in this web app.
Instead, the web server responds to any request for health by simply reading from a custom struct named Health that resides in Backbone.
// internal/router/operations.go
package router
// abbreviated for clarity...
type Health struct {
    Rdbms bool
    Heap  bool
}
Health has several boolean fields. Any request for the status of health is answered by a method that reads from these fields and evaluates the totality of the boolean conditions.
// internal/router/operations.go
package router
// abbreviated for clarity...
func (h *Health) PassFail() bool {
    return h.Rdbms && h.Heap
}
The answer is then provided as an HTTP status code -- either 204 or 503.
// internal/router/routes_operations.go
package router
// abbreviated for clarity...
func (b *Backbone) Healthcheck(w http.ResponseWriter, r *http.Request) {
    status := b.Health.PassFail()
    if status {
        w.WriteHeader(http.StatusNoContent)
    } else {
        b.Logger.Error("Failed health check")
        w.WriteHeader(http.StatusServiceUnavailable)
    }
}
How is the health of those fields evaluated? An individual function that determines the condition of a resource is inserted into a timed loop inside a goroutine. Notice the function named checkHeapSize is an argument to the beep function.
// internal/router/operations.go
package router
// abbreviated for clarity...
func (b *Backbone) SetupHealthChecks(cfg *config.Config) {
    pingDbTimer := time.Duration(cfg.Health.PingDbTimer)
    heapTimer := time.Duration(cfg.Health.HeapTimer)
    go beep(pingDbTimer, b.PingDB)
    go beep(heapTimer, checkHeapSize)
}
And beep creates a Ticker[8] that will emit a signal periodically, then enters a loop that awaits the signal. Upon receiving the signal, the function represented by the parameter task is invoked. checkHeapSize will be invoked as the task.
// internal/router/operations.go
package router
// abbreviated for clarity...
func beep(seconds time.Duration, task func()) {
    ticker := time.NewTicker(seconds * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            task()
        }
    }
}
What is the benefit of this convoluted setup? No matter how often an external service hammers the /health endpoint, it will be less taxing because it simply reads a boolean. The real work of evaluating any resource is held in a discrete function, and there can be a few or many. They all run in the background. They each update a particular health status on their own time. And the configuration of time is determined by the operator of this application.
Generate a SemVer based on the Git commit history, then provide that value as input to the build step. An informative record of Git commits can aid any operator during an incident.
~/vamos $ go build -v -ldflags="-s -X 'vamos/internal/config.AppVersion=v.0.0.0' " -o vamos
The linker flag -s removes symbol table info and DWARF info to produce a smaller executable. And -X[9] sets the value of a string variable named AppVersion that resides in the config package. This allows us to dynamically write the version of the application after each new commit & build.
package config

import (
    "gopkg.in/yaml.v3"
    "os"
)

var AppVersion string
Three natively written functions determine equality, the absence of errors, and truth. That is one less dependency in the application. Below is an example of a testing function residing in testhelper.go.
// internal/testhelper/testhelper.go
func Equals(tb testing.TB, exp, act interface{}) {
    if !reflect.DeepEqual(exp, act) {
        _, file, line, _ := runtime.Caller(1)
        fmt.Printf("\033[31m%s:%d:\n\n\texp: %#v\n\n\tgot: %#v\033[39m\n\n", filepath.Base(file), line, exp, act)
        tb.FailNow()
    }
}
These functions can be invoked by a test package. Use the dot at the beginning of the import path to avoid prefacing every invocation with the name of the testhelper package.
// internal/secrets/secrets_test.go
package secrets_test

import (
    "testing"
    . "vamos/internal/secrets"
    . "vamos/internal/testhelper"
)

func Test_Configuration(t *testing.T) {
    // abbreviated for clarity...
    Equals(t, "token", openbao.Token)
}
Notice secrets_test is a separate package from the package secrets. All the tests reside in the former and the functionality resides in the latter. The package secrets_test needs to import the package secrets, and only public functions & fields can be tested. This encourages black box testing and clean code.
A few steps are required to test interaction with a database.
- Apply SQL commands to change the local development database.
- Generate Go code in sqlc/data/ to interact with updated database.
- Run Go tests marked integration.
- Reverse SQL commands.
It is possible to write code into a *test package that can create tables, insert sample data, and then drop tables whenever a test is launched. But errors can force the test to halt and leave the database in the new state without reversing it. For this reason, it is easier to rely on a tool outside of the application test suite to create and delete Postgres tables; I rely on migrate. However, I prefer using code in the test suite to insert sample data.
Use a make command to easily perform the aforementioned tasks.
~/vamos $ make test_database
The test suite first repositions the working directory of the test executable to the project root in order to read the files that provide sample data and the configuration file. Then it writes data to the database and runs the test functions. Lastly, the report is offered.
// internal/data/rdbms/rdbms_int_test.go
package rdbms_test
// abbreviated for clarity...
func TestMain(m *testing.M) {
    os.Setenv("APP_ENV", "DEV")
    Change_to_project_root()
    timer, cancel := context.WithTimeout(context.Background(), time.Second*5)
    // Setup common resource for all integration tests in only this package.
    dbErr := CreateTestTable(timer)
    cancel()
    if dbErr != nil {
        panic(dbErr)
    }
    os.Unsetenv("APP_ENV")
    code := m.Run()
    os.Exit(code)
}
The first function tested is the one that creates a connection pool. No other test runs concurrently at this moment. The environment inside the test is adjusted to induce reading configuration data for the development environment.
func Test_ConnectDB(t *testing.T) {
    t.Setenv("APP_ENV", "DEV")
    t.Setenv("OPENBAO_TOKEN", "token")
    db, dbErr := ConnectDB(config.Read(), TEST_DB_POS)
    Ok(t, dbErr)
    t.Cleanup(func() { db.Close() })
}
To hasten the test suite, a set of functions that perform read operations is executed concurrently. They rely on a common connection pool created in the same test group. The final action of the test group is to close the connection pool.
func Test_ReadingData(t *testing.T) {
    t.Setenv("APP_ENV", "DEV")
    t.Setenv("OPENBAO_TOKEN", "token")
    db, _ := ConnectDB(config.Read(), TEST_DB_POS)
    q := FirstDB_AdoptQueries(db)
    timer, cancel := context.WithTimeout(context.Background(), TIMEOUT_READ)
    t.Cleanup(cancel)
    t.Run("Read one author", func(t *testing.T) {
        readOneAuthor(t, q, timer)
    })
    t.Run("Read many authors", func(t *testing.T) {
        readManyAuthors(t, q, timer)
    })
    t.Run("Read most productive author", func(t *testing.T) {
        readMostProductiveAuthor(t, q, timer)
    })
    t.Run("Read most productive author & book", func(t *testing.T) {
        readMostProductiveAuthorAndBook(t, q, timer)
    })
    t.Cleanup(func() { db.Close() })
}
The Postgres connection pool retains access to the Openbao secrets storage through a hook named BeforeConnect. This hook ensures that the connection pool can read fresh credentials, which enables the security practice of revoking & rotating credentials.
// internal/data/rdbms/rdbms.go
package rdbms
// abbreviated for clarity...
func configure(cfg *config.Config, dbPosition int) (*pgxpool.Config, error) {
    pgxConfig.BeforeConnect = func(ctx context.Context, cc *pgx.ConnConfig) error {
        pw, pwErr := secrets.BuildAndRead(cfg, db.Secret)
        if pwErr != nil {
            return pwErr
        }
        cc.Password = pw
        return nil
    }
    return pgxConfig, nil
}
Requests need to be terminated during a rolling deployment in a manner that preserves the data of the customer, enhances the user experience, and avoids alarms that can mistakenly summon staff.
The webserver is launched in a separate goroutine, then a channel is opened in main() to receive termination signals. This blocks main() until either signal 2 or 15 is received.
// main.go
package main
// abbreviated for clarity...
func main() {
    webserver := server.NewServer(cfg, router)
    go server.GracefulIgnition(webserver)
    catchSigTerm()
    server.GracefulShutdown(webserver)
}
Signal 15 allows the program to close listening connections and idle connections while awaiting active connections. This is essential in a dynamic environment like a Kubernetes cluster. A kubelet transmits Signal 15 to a container, and pods wait 30 seconds for application cleanup.[10]
// main.go
package main
// abbreviated for clarity...
func catchSigTerm() {
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
    <-sigChan
}
After Signal 15 is received, server.GracefulShutdown(webserver) is invoked. It wraps http.Server.Shutdown(shutdownCtx) with a 15-second timer. And the cancellation function stop() will also be invoked.
// internal/server/server.go
package server
// abbreviated for clarity...
const GRACE_PERIOD = time.Second * 15

func GracefulShutdown(s *http.Server) error {
    quitCtx, quit := context.WithTimeout(context.Background(), GRACE_PERIOD)
    defer quit()
    return s.Shutdown(quitCtx)
}
stop() was assigned to the server during configuration. It signals to all the child contexts derived from base and used by the HTTP Handlers to terminate any active connections.
// internal/server/server.go
package server
// abbreviated for clarity...
func NewServer(cfg *config.Config, router http.Handler) *http.Server {
    base, stop := context.WithCancel(context.Background())
    s := &http.Server{
        BaseContext: func(lstnr net.Listener) context.Context { return base },
    }
    s.RegisterOnShutdown(stop) // Cancellation Func assigned to shutdown.
    return s
}
Inserting the webserver in a goroutine is required to avoid a hasty shutdown. When http.Server.Shutdown() is invoked, http.Server.ListenAndServe() returns immediately.[11] ListenAndServe() was blocking in a goroutine, and becomes unblocked. If ListenAndServe() had been invoked directly in main(), it would immediately unblock and main() would immediately return.
Two environment variables are needed by the application to read a configuration file and access storage of sensitive credentials.
~/vamos $ APP_ENV=DEV OPENBAO_TOKEN=token ./vamos
Metrics are created with the Prometheus client library in the package metrics, in the file /internal/metrics/metrics.go, and scraped on the endpoint /metrics. The package captures Go runtime metrics, e.g., go_threads, go_goroutines, etc.[12]
New metrics need to be registered, so they can be activated in main().
// internal/metrics/metrics.go
package metrics
// abbreviated for clarity...
import "github.com/prometheus/client_golang/prometheus"
func Register() {
prometheus.MustRegister(ReadAuthorCounter)
}
The main() function will invoke the public Register() function.
// main.go
package main
// abbreviated for clarity...
import "vamos/internal/metrics"
func main() {
metrics.Register()
}
Logging is configured as debug in development or as warn in production.
# internal/config/dev.yml
---
logger:
level: debug
The level is read in logging.go.
// internal/logging/logging.go
package logging
func configure(cfg *config.Config) *slog.HandlerOptions {
    logLevel := &slog.LevelVar{}
    if cfg.Logger.Level == "debug" {
        logLevel.Set(slog.LevelDebug)
    } else {
        logLevel.Set(slog.LevelWarn)
    }
    opts := &slog.HandlerOptions{Level: logLevel}
    return opts
}
The primary logger is configured to include two details that can aid anyone debugging an incident in production: the version of the language and the version of the application. Every child logger inherits these details.
// internal/logging/logging.go
package logging
func CreateLogger(cfg *config.Config) *slog.Logger {
    goVersion := slog.String("lang", runtime.Version())
    appVersion := slog.String("app", config.AppVersion)
    group := slog.Group("version", goVersion, appVersion)
    opts := configure(cfg)
    handler := slog.NewJSONHandler(os.Stdout, opts)
    logger := slog.New(handler).With(group)
    slog.SetDefault(logger)
    return logger
}
This can be observed during startup.
~/vamos $ APP_ENV=DEV OPENBAO_TOKEN=token ./vamos
{"time":"2025-07-24T13:05:01.477738-04:00","level":"INFO","msg":"Begin logging","version":{"lang":"go1.24.0","app":"v.0.0.0"},"level":"DEBUG"}
The Backbone struct holds several databases and dependencies that can be used inside HTTP handlers. The logger is then transferred from Backbone to Bundle. The Bundle also acquires the standard library router http.ServeMux, so it holds both the logger and the router.
// internal/router/router.go
package router
// abbreviated for clarity...
func NewRouter(b *Backbone) *Bundle {
    mux := http.NewServeMux()
    routerWithLoggingMiddleware := NewBundle(b.Logger, mux)
    return routerWithLoggingMiddleware
}
The middleware is configured in internal/router/middleware.go as a method on Bundle that adopts the http.Handler interface by implementing ServeHTTP(http.ResponseWriter, *http.Request).
// internal/router/middleware.go
package router
// abbreviated for clarity...
func (b *Bundle) ServeHTTP(w http.ResponseWriter, req *http.Request) {
    b.Logger.Info(
        "Inbound",
        "method", req.Method,
        "path", req.URL.Path,
        "uagent", req.Header.Get("User-Agent"),
    )
    b.Router.ServeHTTP(w, req)
}
Details of every request are recorded. The HTTP method, path, and User-Agent header are highlighted. After those details are logged, the function continues to the regular router in b.Router.ServeHTTP(w, req).
By satisfying this interface, the http.Server can treat Bundle as a router.
// internal/server/server.go
package server
// abbreviated for clarity...
func NewServer(cfg *config.Config, router http.Handler) *http.Server {
    s := &http.Server{
        Addr:    ":" + cfg.HttpServer.Port,
        Handler: router,
    }
    return s
}
Then every incoming request is logged in a standard manner.
~/vamos $ APP_ENV=DEV OPENBAO_TOKEN=token ./vamos
# skipping other logs...
{"time":"2025-08-05T16:45:17.23609-04:00","level":"INFO","msg":"Inbound","version":{"lang":"go1.24.0","app":"v.0.0.0"},"server":{"method":"GET","path":"/health","uagent":"HTTPie/3.2.4"}}
We can obtain useful data from the production environment during a memory problem.
A Backbone field named HeapSnapshot holds a pointer to a buffer that collects information generated by the runtime/pprof.WriteHeapProfile(io.Writer) function.
// internal/router/operations.go
package router
// abbreviated for clarity...
type Backbone struct {
    Logger       *slog.Logger
    FirstDB      *first.Queries
    Health       *Health
    DbHandle     *pgxpool.Pool
    HeapSnapshot *bytes.Buffer
}
The Backbone struct implements the method Write([]byte) (n int, err error) to comply with the io.Writer interface expected by WriteHeapProfile.[13] And a custom implementation resets the buffer before each write to avoid a memory leak.
// internal/router/operations.go
package router
// abbreviated for clarity...
func (b *Backbone) Write(p []byte) (n int, err error) {
    b.HeapSnapshot.Reset()
    return b.HeapSnapshot.Write(p)
}
After a configured threshold for memory is surpassed, heap data will be gathered.
// internal/router/operations.go
package router
// abbreviated for clarity...
func (b *Backbone) CheckHeapSize(threshold uint64) {
    var stats runtime.MemStats
    runtime.ReadMemStats(&stats)
    if stats.HeapAlloc < threshold {
        b.Health.Heap = true
        return
    }
    b.Health.Heap = false
    b.Logger.Warn("Heap surpassed threshold!", "threshold", threshold, "allocated", stats.HeapAlloc)
    err := pprof.WriteHeapProfile(b)
    if err != nil {
        b.Logger.Error("Error writing heap profile", "ERR:", err.Error())
    }
}
Another method can be drafted to read from the buffer and export the data for review by developers & operations staff.
Footnotes

2. https://docs.fedoraproject.org/en-US/fedora-coreos/fcos-projects/
3. https://github.com/golang-migrate/migrate?tab=readme-ov-file#migrate
4. https://docs.sqlc.dev/en/stable/tutorials/getting-started-postgresql.html
5. https://rednafi.com/go/dysfunctional_options_pattern/#functional-options-pattern
6. https://tip.golang.org/doc/go1.22#enhanced_routing_patterns
7. https://prometheus.io/docs/tutorials/understanding_metric_types/
10. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-flow
12. https://pkg.go.dev/github.com/prometheus/client_golang/prometheus#hdr-Advanced_Uses_of_the_Registry