Lessons learned
There’s a lot of buzz around Google Anthos! I’ve lost count of the number of times I’ve been asked “What IS Anthos?!”. Well, today I’m NOT going to answer that question! However, what I will do is (hopefully!) help some of you to spin up your very own Anthos bite-size cluster on some relatively inexpensive hardware, thanks to the newest “flavor” of Anthos: Anthos on Bare Metal. From there you’ll be able to explore the features of Anthos for yourself, and embark on your own Anthos journey of discovery!

Anthos on Bare Metal allows you to spin up a cluster from as few as 2 Intel NUCs. It’s a relatively straightforward process, but it can be a little daunting if you haven’t been through it before. I encountered a few issues along the way, so what I’m walking you through today is my “path of least resistance”!

So let’s start with the hardware. I got sent 3 Intel NUCs. Each of them has a Core i7 Gen 10 processor, 32GB of RAM, and a 256GB SSD.
To be more specific, here's everything that I needed to get up and running:
- 3x Intel NUC: 6-core Core i7 / 32GB DDR4 RAM / 256GB SSD
- 8-port GbE switch (optional if you have enough ports on your router / switch)
- 4x RJ45 CAT5E cables
- 16GB USB Stick to install Ubuntu
You can install Anthos on Bare Metal on Ubuntu, CentOS, or RHEL servers (see the official documentation for the currently supported versions).

Out of the box the NUCs come with Ubuntu installed. However, the kernel version led to issues with Ansible for me, so I started from scratch and created a bootable USB stick that would install “virgin” Ubuntu 20.04 onto each NUC. I will create my cluster out of just 2 of the NUCs (which is really the minimum we’d recommend). The third I will use as my “workstation” in order to bootstrap the cluster using the bmctl command-line tool. All is not lost, however: we can add that third NUC to the cluster later, once it’s up and running!
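Before re-imaging anything, it’s worth checking what’s actually on the box. Here’s a minimal sketch; the supported-version list in the `case` statement is an assumption on my part, so verify it against the official Anthos on Bare Metal compatibility matrix:

```shell
# Print the installed OS and kernel, and flag obviously unsupported setups.
# The version list below is an assumption -- check it against the official
# Anthos on Bare Metal supported-OS documentation.
. /etc/os-release
echo "Detected: $NAME $VERSION_ID (kernel $(uname -r))"
case "$ID-$VERSION_ID" in
  ubuntu-18.04|ubuntu-20.04) echo "OS looks supported" ;;
  *) echo "Double-check this OS against the supported list" ;;
esac
```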
Happy-path install process
- Connect one of the NUCs to a monitor and keyboard. Boot it up. In the case of the NUCs I received, Ubuntu was already installed. We’ll use this machine as a workstation for now.
- Open a terminal and become root (you'll need the password for the default user), then create a working directory:
sudo su
mkdir usb-boot && cd usb-boot
- Copy over the build_iso.sh shell script and make it executable (thanks to my colleague Cody Hill for this!):
chmod +x build_iso.sh
- Insert a USB stick into the NUC, then run
fdisk -l
and note the device that represents the USB stick (e.g. /dev/sda)
- Run the script against whatever device was returned above:
./build_iso.sh -F /dev/sda
- The Ubuntu ISO will be downloaded and written to the USB stick. N.B. the command prompt will return before writing has fully finished; wait a few minutes before removing the stick.
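Rather than guessing how long the background write takes, you can ask the kernel to flush it explicitly; `sync` only returns once all buffered writes are on disk:

```shell
# Flush all cached filesystem writes to disk. 'sync' blocks until the
# writes complete, so once it returns the USB stick is safe to remove.
sync
echo "all buffers flushed"
```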
- Power down your NUC.
- Connect your second NUC to monitor and keyboard. Insert the USB stick created in step 8. Power on. Press F11 until the boot menu shows. Select the USB stick to boot from. If there are multiple partitions, select the first.
- Ubuntu install will proceed. Select language, keyboard layout, and network configuration (strongly recommended to switch to “manual” network config and assign a static IP). I selected not to download updates. For storage config you can just accept the defaults. Do not install any other options / software / tools when prompted. You will be prompted to create a username and password; use whatever you would like, just make a note of it. My suggestion would be to name the machine nuc-1 (and then nuc-2, and so on). At this stage you can elect to import an SSH key from GitHub if you’d like, or you can perform this step later (described in step 19).
- Wait for install / updates to fully complete (“Reboot” will show as option at bottom when it’s done). Reboot. Remove USB stick when prompted. NUC will reboot into Ubuntu.
- Disconnect monitor and keyboard from NUC.
- Repeat steps 11-14 for remaining NUC(s).
- Reconnect workstation NUC and boot up.
- Open a terminal & generate an SSH key:
ssh-keygen -t rsa
cat $HOME/.ssh/id_rsa.pub
and copy the output to your clipboard
- SSH into each NUC in turn (using the username and password set in step 12):
ssh [username]@[nuc-ip-address]
sudo su
vi /root/.ssh/authorized_keys
Paste in the public SSH key and save
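As an alternative to pasting the key by hand, you can script the copy with ssh-copy-id. This is a sketch with example node IPs (substitute your own); it deliberately only prints the commands so you can review them before running anything:

```shell
# Example node IPs -- replace with your NUCs' static addresses.
NODE_IPS="192.168.1.231 192.168.1.232"

# Build the ssh-copy-id command lines first, then review and run them.
cmds=""
for ip in $NODE_IPS; do
  cmds="${cmds}ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@${ip}
"
done
printf "%s" "$cmds"   # pipe to 'bash' (or run each line) once it looks right
```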
Perform the remaining node-setup steps listed in the Anthos on Bare Metal prerequisites documentation.
- From the workstation NUC, validate that you can SSH to each NUC via
ssh -o IdentitiesOnly=yes -i /home/ubuntu/.ssh/id_rsa root@[nuc-ip-address]
(N.B. use the path to the private SSH key you previously generated, and the IP address of each of your node NUCs). You should be able to connect to the NUC as root without being prompted for a password.
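That check can be scripted for all nodes at once. A sketch, assuming example IPs and the key path from above: BatchMode makes ssh fail instead of prompting, so any node that still wants a password shows up as FAILED.

```shell
# Example node IPs and key path -- substitute your own.
NODE_IPS="192.168.1.231 192.168.1.232"
KEY=/home/ubuntu/.ssh/id_rsa

results=""
for ip in $NODE_IPS; do
  # BatchMode=yes: never prompt; a non-zero exit means key login failed.
  if ssh -o BatchMode=yes -o IdentitiesOnly=yes -o ConnectTimeout=5 \
      -i "$KEY" "root@$ip" true 2>/dev/null; then
    results="${results}${ip}: OK
"
  else
    results="${results}${ip}: key-based login FAILED
"
  fi
done
printf "%s" "$results"
```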
- From the workstation NUC, proceed to follow the setup instructions in the Anthos on Bare Metal documentation. You’ll install and init gcloud and kubectl, then install the bmctl CLI tool.
- Next, follow the cluster-creation instructions in the documentation. We’ll create a hybrid cluster: a single cluster for both admin and workloads, that can also manage user clusters (the other options are standalone, admin, and user clusters). In the hybrid deployment option, one of your NUCs will be the control plane node and the other will be the worker node. You can read more about the other deployment options in the docs.
- Find a block of contiguous free IP addresses on your network. 3 or 4 IP addresses will be enough for the purposes of a POC. Whenever you create a K8s service of type “LoadBalancer”, one of these IP addresses will be consumed. I used 192.168.1.236-192.168.1.240. The “start” IP address (i.e. 192.168.1.236) you will specify as your ingressVIP in the YAML file. You should also find another free IP address for your controlPlaneVIP; it should NOT be in the start-end range discussed above. I used 192.168.1.196.
- Specify one of your NUC IP addresses as the IP address for the control plane node pool, and the other(s), right at the bottom of the YAML, as the worker node pool.
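Putting those two steps together, the relevant sections of the cluster config YAML look roughly like this. This is a trimmed sketch, not the full file bmctl generates: the field names follow the Anthos on Bare Metal config format as I understand it, the cluster name bm-cluster matches the example later in this post, 192.168.1.231 is my control plane NUC, and the worker address 192.168.1.232 is a hypothetical example.

```yaml
# Trimmed sketch of bmctl-workspace/bm-cluster/bm-cluster.yaml -- check the
# field names against the file bmctl actually generated for you.
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: bm-cluster
  namespace: cluster-bm-cluster
spec:
  type: hybrid
  controlPlane:
    nodePoolSpec:
      nodes:
      - address: 192.168.1.231   # one NUC as the control plane node
  loadBalancer:
    vips:
      controlPlaneVIP: 192.168.1.196   # free IP outside the pool below
      ingressVIP: 192.168.1.236        # first address of the pool below
    addressPools:
    - name: pool1
      addresses:
      - 192.168.1.236-192.168.1.240    # consumed by LoadBalancer services
---
# Worker node pool, right at the bottom of the generated file.
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: np1
  namespace: cluster-bm-cluster
spec:
  clusterName: bm-cluster
  nodes:
  - address: 192.168.1.232   # hypothetical worker NUC address
```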
- Run
./bmctl create cluster -c [name of your cluster]
to perform preflight checks and then install Anthos. If you followed the instructions in step 22, all APIs and service account keys will have been generated for you.
- Once it completes successfully you’ll be given the path to the kubeconfig to connect to your new cluster.
- [Optional] If you wish, you can re-image your “workstation NUC” using the USB stick and add it as a worker node to your cluster via kubectl -n my-cluster edit nodepool np1. Remember to copy your “baremetal” folder (and subfolders) off to somewhere else first (because you’ll need the kubeconfig file!).
Running Your First Workload
So now let’s deploy a simple “Hello world!” Kubernetes deployment and service. Let’s begin by setting our KUBECONFIG environment variable. bmctl puts the kubeconfig under bmctl-workspace/[your-cluster-name]/. So, for instance, I called my cluster bm-cluster:
export KUBECONFIG=$HOME/baremetal/bmctl-workspace/bm-cluster/bm-cluster-kubeconfig
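A quick sanity check that the kubeconfig is actually where we expect before pointing kubectl at the cluster (the cluster name bm-cluster is the example from above):

```shell
# Fail early with a helpful message if the kubeconfig isn't there yet.
export KUBECONFIG="$HOME/baremetal/bmctl-workspace/bm-cluster/bm-cluster-kubeconfig"
if [ -f "$KUBECONFIG" ]; then
  kubectl get nodes -o wide   # should list your control plane and worker NUCs
else
  echo "kubeconfig not found at $KUBECONFIG -- check your cluster name"
fi
```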
First let’s create a deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: metrics
      department: sales
  replicas: 3
  template:
    metadata:
      labels:
        app: metrics
        department: sales
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
Save the file (as my-deployment.yaml), and now apply it to our Anthos BM cluster:
kubectl apply -f my-deployment.yaml
Let’s check the deployment:
kubectl get deploy
You should see something like this:
NAME READY UP-TO-DATE AVAILABLE AGE
my-deployment 3/3 3 3 30s
Now let’s create a service of type LoadBalancer. Save the following YAML as my-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: metrics
    department: sales
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
Now let’s apply it:
kubectl apply -f my-service.yaml
Now let’s validate it got created:
kubectl get svc my-service
You should see something like this (your CLUSTER-IP and node port will differ; the EXTERNAL-IP below is the one I was allocated):
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
my-service   LoadBalancer   10.x.x.x     192.168.1.237   80:3xxxx/TCP   20s
Notice how my-service's IP address (EXTERNAL-IP) got automatically drawn from the “range” we specified earlier. Now we can curl our service, and check that everything is working as it should:
For me, I run
curl 192.168.1.237
and I see:
Hello, world!
Version: 2.0.0
Hostname: my-deployment-68bdbbb5cc-mpftc
Congratulations! Your Anthos on BM cluster is fully operational. It will also show up in the Google Cloud console.
If you want to log in to your cluster (to view node and workload information from the console) you can now follow the cluster login instructions in the Anthos documentation.