Visit CI/CD Hands-On: A Simple But Functional Continuous Integration Workflow [Part 1].
$ kind create cluster --name develop
$ kind export kubeconfig --name develop --kubeconfig kubeconfig
$ terraform init
$ terraform apply -var="github_owner=owner_name" -var="github_repository=repo_name" # Enter your GitHub token when prompted
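If you prefer not to type the token interactively, the Terraform GitHub provider can also pick it up from the GITHUB_TOKEN environment variable (assuming your provider block does not hardcode a token):
$ export GITHUB_TOKEN=<your-github-token>
$ terraform apply -var="github_owner=owner_name" -var="github_repository=repo_name"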
Once Terraform finishes the installation process, you should have FluxCD running in your KinD cluster and a new folder named clusters in your repository.
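As a quick sanity check (kubectl is enough; the flux CLI is optional), confirm that the FluxCD controllers are up:
$ kubectl --kubeconfig kubeconfig get pods -n flux-system
$ flux check --kubeconfig kubeconfig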
Under the hood, Terraform installs MetalLB and configures its IP address pool. You can read more about the MetalLB configuration in the first part of the article:
resource "helm_release" "metallb" {
name = "metallb"
repository = "//metallb.github.io/metallb"
chart = "metallb"
}
data "docker_network" "kind" {
name = "kind"
}
resource "kubectl_manifest" "kind-address-pool" {
yaml_body = yamlencode({
"apiVersion" : "metallb.io/v1beta1",
"kind" : "IPAddressPool",
"metadata" : { "name" : "kind-address-pool" },
"spec" : { "addresses" : [replace(tolist(data.docker_network.kind.ipam_config)[0].subnet, ".0.0/16", ".255.0/24")] }
})
depends_on = [helm_release.metallb]
}
resource "kubectl_manifest" "kind-advertisement" {
yaml_body = <<YAML
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: kind-advertisement
YAML
depends_on = [helm_release.metallb]
}
resource "helm_release" "flux" {
repository = "//fluxcd-community.github.io/helm-charts"
chart = "flux2"
name = "flux2"
namespace = "flux-system"
create_namespace = true
version = "2.9.2"
}
resource "tls_private_key" "flux" {
depends_on = [helm_release.flux]
algorithm = "ECDSA"
ecdsa_curve = "P256"
}
resource "github_repository_deploy_key" "flux" {
depends_on = [tls_private_key.flux]
title = "Flux"
repository = var.github_repository
key = tls_private_key.flux.public_key_openssh
read_only = "false"
}
resource "flux_bootstrap_git" "this" {
depends_on = [github_repository_deploy_key.flux]
path = "clusters/develop"
}
Create a folder named base inside a folder named infrastructure. The base folder holds the common infrastructure configuration for all your clusters. Inside it, create a folder named ingress-nginx, using the namespace name as the folder name.
---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: ingress-nginx
spec:
  interval: 2h
  url: https://kubernetes.github.io/ingress-nginx
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
spec:
  interval: 15m
  chart:
    spec:
      chart: ingress-nginx
      version: 4.7.1
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
      interval: 15m
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-nginx
resources:
  - namespace.yaml
  - helmrepository.yaml
  - helmrelease.yaml
Use multiple files to define your objects: helmrelease.yaml, helmrepository.yaml, namespace.yaml, kustomization.yaml, etc.
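For the ingress-nginx example above, the layout ends up like this:
infrastructure/
└── base/
    └── ingress-nginx/
        ├── helmrelease.yaml
        ├── helmrepository.yaml
        ├── kustomization.yaml
        └── namespace.yaml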
FluxCD reads the kustomization.yaml file and processes the listed resources to apply them. Last but not least, you need to create a Kustomization object to synchronize your cluster configuration. Create a YAML file named infrastructure.yaml inside the clusters/cluster_name folder:
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-base
  namespace: flux-system
spec:
  interval: 1h
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/base
  prune: true
  wait: true
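Once you push these changes, you can watch FluxCD reconcile the Kustomization (assuming you have the flux CLI installed):
$ flux get kustomizations --kubeconfig kubeconfig --watch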
You can use the base folder to install your stack into all your clusters or use a different one to customize your installation depending on the cluster. For example, we want to install Flagger only into the development cluster.
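In this walkthrough the cluster is named develop, so the infrastructure folder will end up looking like this (the flagger-system folder is created in the next step):
infrastructure/
├── base/
│   └── ingress-nginx/
└── develop/
    └── flagger-system/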
Create a new folder using your cluster name inside the infrastructure folder. Then, create a file named infrastructure.yaml inside your clusters/cluster_name folder:
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-cluster-name
  namespace: flux-system
spec:
  dependsOn:
    - name: infra-base
  interval: 1h
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/cluster_name
  prune: true
FluxCD will synchronize the cluster status after applying the infra-base Kustomization. Install Flagger by creating the following YAML file inside the infrastructure/cluster_name/flagger-system folder:
---
apiVersion: v1
kind: Namespace
metadata:
  name: flagger-system
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: flagger
spec:
  interval: 1h
  url: https://flagger.app
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: flagger
spec:
  interval: 1h
  install:
    crds: CreateReplace
  upgrade:
    crds: CreateReplace
  chart:
    spec:
      chart: flagger
      version: 1.x.x
      interval: 6h
      sourceRef:
        kind: HelmRepository
        name: flagger
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: flagger-loadtester
spec:
  interval: 1h
  chart:
    spec:
      chart: loadtester
      version: 0.x.x
      interval: 6h
      sourceRef:
        kind: HelmRepository
        name: flagger
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: flagger-system
resources:
  - namespace.yaml
  - helmrepository.yaml
  - helmrelease.yaml
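After FluxCD applies the infra-cluster-name Kustomization, Flagger and its load tester should be running; a quick check:
$ kubectl --kubeconfig kubeconfig get pods -n flagger-system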
To build the application continuous deployment pipeline, create the installation YAML files inside the apps/cluster_name/podinfo folder:
---
apiVersion: v1
kind: Namespace
metadata:
  name: podinfo
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
spec:
  interval: 5m
  url: https://stefanprodan.github.io/podinfo
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
spec:
  releaseName: podinfo
  chart:
    spec:
      chart: podinfo
      version: 6.5.0
      sourceRef:
        kind: HelmRepository
        name: podinfo
      interval: 50m
  install:
    remediation:
      retries: 3
  values:
    ingress:
      enabled: true
      className: nginx
    hpa:
      enabled: true
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: podinfo
resources:
  - namespace.yaml
  - helmrepository.yaml
  - helmrelease.yaml
You can use the update hosts Python script to update your local environment hosts, as explained in the first part of the article.
Then, create the Kustomization file in the clusters/cluster_name folder to synchronize your apps:
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: infra-cluster-name
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/cluster_name
  prune: true
  wait: true
  timeout: 5m0s
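After FluxCD reconciles the apps Kustomization, you can confirm that the podinfo release and its ingress exist:
$ flux get helmreleases -n podinfo --kubeconfig kubeconfig
$ kubectl --kubeconfig kubeconfig get ingress -n podinfo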
FluxCD can also keep the podinfo chart version up to date automatically. Create the following image automation objects inside the apps/cluster_name/podinfo folder:
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: podinfo-chart
spec:
  image: ghcr.io/stefanprodan/charts/podinfo
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: podinfo-chart
spec:
  imageRepositoryRef:
    name: podinfo-chart
  policy:
    semver:
      range: 6.x.x
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: podinfo-chart
spec:
  interval: 30m
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
  git:
    checkout:
      ref:
        branch: main
    commit:
      author:
        email: [email protected]
        name: fluxcdbot
      messageTemplate: 'chore(develop): update podinfo chart to {{range .Updated.Images}}{{println .}}{{end}}'
    push:
      branch: main
  update:
    path: ./apps/cluster_name/podinfo
    strategy: Setters
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: podinfo
resources:
  [...]
  - imagepolicy.yaml
  - imagerepository.yaml
  - imageautoupdate.yaml
Then, add the image policy marker to the chart version in the podinfo HelmRelease so the image automation knows which field to update:
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
spec:
  releaseName: podinfo
  chart:
    spec:
      chart: podinfo
      version: 6.5.0 # {"$imagepolicy": "podinfo:podinfo-chart:tag"}
      sourceRef:
        kind: HelmRepository
        name: podinfo
      interval: 50m
  install:
    remediation:
      retries: 3
Create a MetricTemplate so Flagger can measure the request success rate during the canary analysis:
---
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: podinfo-request-success-rate
spec:
  provider:
    type: prometheus
    address: http://loki-stack-prometheus-server.loki-stack:80
  query: |
    100 - sum(
      rate(
        http_requests_total{
          app_kubernetes_io_name="podinfo",
          namespace="{{ namespace }}",
          status!~"5.*"
        }[{{ interval }}]
      )
    )
    /
    sum(
      rate(
        http_requests_total{
          app_kubernetes_io_name="podinfo",
          namespace="{{ namespace }}"
        }[{{ interval }}]
      )
    ) * 100
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: podinfo
resources:
  [...]
  - metrictemplate.yaml
Finally, create the Canary object that tells Flagger how to progressively shift traffic to the new podinfo version:
---
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  provider: nginx
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  ingressRef:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    name: podinfo
  progressDeadlineSeconds: 60
  service:
    port: 9898
    targetPort: 9898
  analysis:
    interval: 10s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: podinfo-request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
    webhooks:
      - name: acceptance-test
        type: pre-rollout
        url: http://flagger-loadtester.flagger-system/
        timeout: 30s
        metadata:
          type: bash
          cmd: curl -sd 'test' http://podinfo-canary.podinfo:9898/token | grep token
      - name: load-test
        url: http://flagger-loadtester.flagger-system/
        timeout: 5s
        metadata:
          cmd: hey -z 1m -q 10 -c 2 http://podinfo-canary.podinfo:9898/healthz
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: podinfo
resources:
  [...]
  - canary.yaml
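You can follow the progressive rollout from the Canary resource itself:
$ kubectl --kubeconfig kubeconfig -n podinfo get canaries
$ kubectl --kubeconfig kubeconfig -n podinfo describe canary podinfo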
Use the Loki stack (Grafana + Loki + Prometheus) to monitor the status of the cluster resources. Install it by creating the following YAML file inside the infrastructure/cluster_name/loki-stack folder:
---
apiVersion: v1
kind: Namespace
metadata:
  name: loki-stack
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: grafana
spec:
  interval: 2h
  url: https://grafana.github.io/helm-charts
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: loki-stack
spec:
  interval: 1h
  chart:
    spec:
      chart: loki-stack
      version: v2.9.11
      sourceRef:
        kind: HelmRepository
        name: grafana
      interval: 1h
  values:
    grafana:
      enabled: true
      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: nginx
        hosts:
          - grafana.local
    prometheus:
      enabled: true
      nodeExporter:
        enabled: true
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: loki-stack
resources:
  - namespace.yaml
  - helmrepository.yaml
  - helmrelease.yaml
You can use the update hosts Python script to update your local environment hosts, as explained in the first part of the article.
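For example, you can look up the LoadBalancer IP that MetalLB assigned to the ingress controller and map grafana.local to it in your /etc/hosts file (the service name below is the chart default and may differ in your setup):
$ kubectl --kubeconfig kubeconfig -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}' # assumes the chart's default service name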