
Connecting Dots: Go, Docker and k8s [Part 2]

by Daniel, June 1st, 2021

Too Long; Didn't Read

I take the port scanning tool built with Go in the previous post, package it in a multistage Docker image, run it on Kubernetes locally with minikube and kubectl, fix an ImagePullBackOff by changing the deployment's imagePullPolicy from Always to IfNotPresent, expose it through a LoadBalancer service, and scale the deployment to 4 pods.


In the previous post, I created a simple port scanning tool with Go. Now it's time to run this tool in Docker and scale/manage it with k8s!

First, I'll create a Dockerfile with a multistage build to reduce the image size:

# stage 1
FROM golang:1.15.6 as builder
WORKDIR /app

# fetch dependencies first as they don't change often and will be cached
COPY ./go.mod ./go.sum ./
RUN go mod download

# copy source to working dir of a container
COPY . .

# build a statically linked linux binary (CGO disabled) so it runs in the minimal alpine image
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o server cmd/server/main.go

# stage 2
FROM alpine:latest
WORKDIR /app
COPY --from=builder /app/server .
EXPOSE 8080
ENTRYPOINT ["./server"]
Let's build and run the scanner. From the root directory of the project:
$ # build the image first
$ docker build -t portscanner .

$ # now that image is built, run it
$ docker run --name portscanner --rm -p 8080:8080 portscanner

$ # logs:
INFO	2021/01/28 15:32:15 starting server at :8080

The port scanner has started on port 8080, and I've exposed the same port to the host with -p 8080:8080. I'll use that for testing. Since I know the application is available on port 8080, my port scanning tool should be able to detect itself:

$ curl -X GET http://localhost:8080/open-ports\?domain\=127.0.0.1\&toPort\=9000

$ # output:
{
  "from_port": 0,
  "to_port": 9000,
  "domain": "127.0.0.1",
  "open_ports": [
    8080
  ]
}

$ # with missing query params
$ curl -X GET http://localhost:8080/open-ports | jq

$ # output:
{
  "Result": "ERROR",
  "Cause": "INVALID_REQUEST",
  "InvalidFields": {
    "domain": [
      "can't be blank"
    ],
    "toPort": [
      "can't be blank",
      "invalid decimal string"
    ]
  }
}

It's all working as expected. Let's take it one step further and run it in Kubernetes! To do so, I'll use minikube and kubectl. You can find OS-specific installation instructions for both in their official documentation. When both tools are installed, I'll run minikube start and allow it some time to start with the default params. Minikube runs a separate VM, and that VM doesn't have access to the local Docker registry, and hence has no access to the previously built image. To fix this, I have to switch to the minikube Docker environment:

$ eval $(minikube docker-env)
Now I'll build the image again, but this time inside the minikube environment:
$ docker build -t portscanner .

$ # list all images to see if portscanner is there
$ docker image ls
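
As a side note, newer minikube releases can also copy a locally built image into the cluster directly, without switching Docker environments; whether this subcommand is available depends on your minikube version:

$ # copies the image from the host's Docker daemon into the minikube VM
$ minikube image load portscanner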
Now that the image is accessible to minikube, I'll create a deployment:
$ kubectl create deployment portscanner --image=portscanner

This tells Kubernetes to create a deployment named portscanner and use the previously built portscanner image for it. There are a few handy commands to check the status of the deployment and its pods:

$ # to check deployment status
$ kubectl get deployment portscanner

$ # to get list of pods
$ kubectl get pods

Both of the above show 0/1 in the READY column, and get pods shows ImagePullBackOff in the STATUS column. From the get pods output, I can grab the pod name and investigate what's wrong:

kubectl describe pod portscanner-xxxx...
The above command has a more informative output, where I can see that although the image is accessible in the minikube VM, the deployment still tries to pull it from a remote registry. To fix this, I need to edit the default deployment configuration:
kubectl edit deployment portscanner

The above will open the deployment's .yaml configuration in your default editor. The field spec.template.spec.containers.0.imagePullPolicy is set to Always, which means that no matter what, the deployment will always try to pull the image from a registry. Change Always to IfNotPresent and save the file.
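
For reference, after the edit the relevant part of the manifest should look roughly like the sketch below. This is only an excerpt with most fields omitted, and the container name is assumed to follow the image name, which is what kubectl create deployment does by default:

spec:
  template:
    spec:
      containers:
      - name: portscanner             # assumed: derived from the image name
        image: portscanner
        imagePullPolicy: IfNotPresent # was: Always

With that change saved, check the deployment status once again: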

kubectl get deployment portscanner

This time the READY status is 1/1! Success! Now portscanner is running in Kubernetes. There's still one small issue, though: I can't access it. To get access to my tool, I'll have to create a service of type LoadBalancer that will take incoming requests and distribute them to the pods:

kubectl expose deployment portscanner --type=LoadBalancer --port=8080
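
It's worth confirming that the service exists; this quick check is an addition to the original walkthrough. On a local cluster the EXTERNAL-IP column will typically stay pending, for the reason explained next:

kubectl get service portscanner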

In most other environments, when you use a Kubernetes LoadBalancer, it will provision a load balancer external to your cluster, for example, an Elastic Load Balancer (ELB) in AWS. That's not the case when running it locally. Luckily, I can simulate the connection with minikube service:

minikube service portscanner
This will print out the connection details and try to open the browser with the given URL. That URL can be used to issue curl commands against the port scanner, just like before. Let's try it out:
$ curl -X GET http://{your_service_ip}:{your_service_port}/open-ports\?domain\=127.0.0.1\&toPort\=9000 | jq

$ # Output:
{
  "from_port": 0,
  "to_port": 9000,
  "domain": "127.0.0.1",
  "open_ports": [
    8080
  ]
}
Right, now I have 1 pod running and serving my application. So, for my last trick, I'll scale the deployment to 4 pods:
kubectl scale deployments/portscanner --replicas=4

Now, if I run kubectl get pods, I'll see that there are 4 pods running.
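
To wait until all four replicas are actually up before moving on, the rollout can also be watched with kubectl; a small extra on top of the original steps:

kubectl rollout status deployment/portscanner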

To get log output for all pods:
kubectl logs -l app=portscanner -f
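
When you're done experimenting, the resources created above can be torn down again; these cleanup commands are an addition, not part of the original walkthrough:

kubectl delete service portscanner
kubectl delete deployment portscanner
minikube stop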
I hope this will get you going and help you build some exciting things. You can find the source code .