I'll start with a directory layout that I hope will one day become a standard. Let's name the project boilerplate:
$ mkdir -p \
    $GOPATH/src/github.com/boilerplate/pkg \
    $GOPATH/src/github.com/boilerplate/cmd \
    $GOPATH/src/github.com/boilerplate/db/scripts \
    $GOPATH/src/github.com/boilerplate/scripts
pkg/ will contain common, reusable packages; cmd/ the programs; db/scripts database-related scripts; and scripts/ general-purpose scripts.
$ cd $GOPATH/src/github.com/boilerplate && \
go mod init github.com/boilerplate
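This generates a minimal go.mod at the module root; with a Go 1.13 toolchain it looks like this (the go directive follows your installed version):

```
module github.com/boilerplate

go 1.13
```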
Let's create a Dockerfile.dev for the local development environment:
# Start from golang v1.13.4 base image to have access to go modules
FROM golang:1.13.4
# create a working directory
WORKDIR /app
# Fetch dependencies on separate layer as they are less likely to
# change on every build and will therefore be cached for speeding
# up the next build
COPY ./go.mod ./go.sum ./
RUN go mod download
# copy source from the host to the working directory inside
# the container
COPY . .
# This container exposes port 7777 to the outside world
EXPOSE 7777
I don't want to install and set up a PostgreSQL database locally, nor do I want any other project contributor to have to do so. Let's automate this step with docker-compose. The content of the docker-compose.yml file:
version: "3.7"

volumes:
  boilerplatevolume:
    name: boilerplate-volume

networks:
  boilerplatenetwork:
    name: boilerplate-network

services:
  pg:
    image: postgres:12.0
    restart: on-failure
    env_file:
      - .env
    ports:
      - "${POSTGRES_PORT}:${POSTGRES_PORT}"
    volumes:
      - boilerplatevolume:/var/lib/postgresql/data
      - ./db/scripts:/docker-entrypoint-initdb.d/
    networks:
      - boilerplatenetwork

  boilerplate_api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    depends_on:
      - pg
    volumes:
      - ./:/app
    ports:
      - 7777:7777
    networks:
      - boilerplatenetwork
    env_file:
      - .env
    entrypoint: ["/bin/bash", "./scripts/entrypoint.dev.sh"]
I will not explain how docker-compose works here; it should be pretty much self-explanatory. Two things are worth pointing out, though. The first is

./db/scripts:/docker-entrypoint-initdb.d/

in the pg service. When I run docker-compose, the pg service will pick up the bash scripts from the host's ./db/scripts folder, place them into the container's /docker-entrypoint-initdb.d/ directory and run them when the database is first initialized. Currently there will be only one script, which ensures that the test database gets created. Let's create that script file:

$ touch ./db/scripts/1_create_test_db.sh
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
DROP DATABASE IF EXISTS boilerplatetest;
CREATE DATABASE boilerplatetest;
EOSQL
The second interesting thing is
entrypoint: ["/bin/bash", "./scripts/entrypoint.dev.sh"]
The entrypoint script installs CompileDaemon in a way that leaves go.mod untouched, so the package is not picked up and installed in production later. It also builds our application and starts watching the source code, recompiling on every change. It looks like this:

#!/bin/bash
set -e
GO111MODULE=off go get github.com/githubnemo/CompileDaemon
CompileDaemon --build="go build -o main cmd/api/main.go" --command=./main
Next, I'll create a .env file in the root of our project, which will hold all environment variables for local development:

POSTGRES_PASSWORD=password
POSTGRES_USER=postgres
POSTGRES_PORT=5432
POSTGRES_HOST=pg
POSTGRES_DB=boilerplate
TEST_DB_HOST=localhost
TEST_DB_NAME=boilerplatetest
All variables with the POSTGRES_ prefix will be picked up by our pg service in docker-compose.yml and used to create the database with those details. Next up is the config package:

// pkg/config/config.go
package config

import (
    "flag"
    "fmt"
    "os"
)

type Config struct {
    dbUser     string
    dbPswd     string
    dbHost     string
    dbPort     string
    dbName     string
    testDBHost string
    testDBName string
}

func Get() *Config {
    conf := &Config{}

    flag.StringVar(&conf.dbUser, "dbuser", os.Getenv("POSTGRES_USER"), "DB user name")
    flag.StringVar(&conf.dbPswd, "dbpswd", os.Getenv("POSTGRES_PASSWORD"), "DB pass")
    flag.StringVar(&conf.dbPort, "dbport", os.Getenv("POSTGRES_PORT"), "DB port")
    flag.StringVar(&conf.dbHost, "dbhost", os.Getenv("POSTGRES_HOST"), "DB host")
    flag.StringVar(&conf.dbName, "dbname", os.Getenv("POSTGRES_DB"), "DB name")
    flag.StringVar(&conf.testDBHost, "testdbhost", os.Getenv("TEST_DB_HOST"), "test database host")
    flag.StringVar(&conf.testDBName, "testdbname", os.Getenv("TEST_DB_NAME"), "test database name")

    flag.Parse()

    return conf
}

func (c *Config) GetDBConnStr() string {
    return c.getDBConnStr(c.dbHost, c.dbName)
}

func (c *Config) GetTestDBConnStr() string {
    return c.getDBConnStr(c.testDBHost, c.testDBName)
}

func (c *Config) getDBConnStr(dbhost, dbname string) string {
    return fmt.Sprintf(
        "postgres://%s:%s@%s:%s/%s?sslmode=disable",
        c.dbUser,
        c.dbPswd,
        dbhost,
        c.dbPort,
        dbname,
    )
}
So what's happening in config.go? The config package has one public Get function. It creates a pointer to a Config instance and reads each value from a command-line flag, using the corresponding environment variable as the default. That's the best of both worlds, and it makes our config very flexible. The Config instance has two methods returning the dev and test DB connection strings. Next, the db package:

// pkg/db/db.go
package db

import (
    "database/sql"

    _ "github.com/lib/pq"
)

type DB struct {
    Client *sql.DB
}

func Get(connStr string) (*DB, error) {
    db, err := get(connStr)
    if err != nil {
        return nil, err
    }

    return &DB{
        Client: db,
    }, nil
}

func (d *DB) Close() error {
    return d.Client.Close()
}

func get(connStr string) (*sql.DB, error) {
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        return nil, err
    }

    if err := db.Ping(); err != nil {
        return nil, err
    }

    return db, nil
}
Here I introduce another third-party package, github.com/lib/pq, a pure-Go Postgres driver for database/sql. Again, there's a public Get function that accepts a connection string, establishes a connection to the database and returns a pointer to a DB instance.

// pkg/application/application.go
package application

import (
    "github.com/boilerplate/pkg/config"
    "github.com/boilerplate/pkg/db"
)

type Application struct {
    DB  *db.DB
    Cfg *config.Config
}

func Get() (*Application, error) {
    cfg := config.Get()

    db, err := db.Get(cfg.GetDBConnStr())
    if err != nil {
        return nil, err
    }

    return &Application{
        DB:  db,
        Cfg: cfg,
    }, nil
}
There's a public Get function again (remember, consistency is key!). It returns a pointer to our Application instance, which will hold our configuration and the database access.

// pkg/exithandler/exithandler.go
package exithandler

import (
    "log"
    "os"
    "os/signal"
    "syscall"
)

func Init(cb func()) {
    sigs := make(chan os.Signal, 1)
    terminate := make(chan bool, 1)

    signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

    go func() {
        sig := <-sigs
        log.Println("exit reason: ", sig)
        terminate <- true
    }()

    <-terminate
    cb()
    log.Print("exiting program")
}
So exithandler has a public Init function that accepts a callback function, which is invoked when the program exits unexpectedly or is terminated by the user.

// cmd/api/main.go
package main

import (
    "log"

    "github.com/boilerplate/pkg/application"
    "github.com/boilerplate/pkg/exithandler"
    "github.com/joho/godotenv"
)

func main() {
    if err := godotenv.Load(); err != nil {
        log.Println("failed to load env vars")
    }

    app, err := application.Get()
    if err != nil {
        log.Fatal(err.Error())
    }

    exithandler.Init(func() {
        if err := app.DB.Close(); err != nil {
            log.Println(err.Error())
        }
    })
}
There's one more third-party package introduced, github.com/joho/godotenv, which loads env vars from the .env file created earlier. main gets a pointer to the application that holds the config and DB connection, and listens for any interruptions to perform a graceful shutdown. Time to spin everything up:

$ docker-compose up --build
$ docker container ls
I can locate the pg service in the NAMES column. In my case docker has named it boilerplate_pg_1. I'll connect to it by typing:
$ docker exec -it boilerplate_pg_1 /bin/bash
$ psql -U postgres -W
The password, as per the .env file, is just "password". The .env file is also used by the pg service to create the boilerplate database, and the custom script from the ./db/scripts folder was responsible for creating the boilerplatetest database. Let's make sure it's all according to plan. Type:

\l
And sure enough, I have the boilerplate and boilerplatetest databases ready to work with.