There is something very powerful about automating even the little things. We sometimes think of automation as the simple arithmetic of saving a given number of minutes or seconds a given number of times a day, but the compound effect is much greater.
As you set up more automation for your deployments, you will run into and solve problems far more critical than saving a few minutes a week.
Automating solves the problem of documentation once and for all. Your deployment procedures are turned into infrastructure-as-code and configuration-as-code, and these are the best documentation. If you use declarative frameworks like CloudFormation or Terraform, you will effectively commit to your repository beautiful HCL, YAML or JSON files and Dockerfiles describing the state of your infrastructure and configuration.
Automating forces you to enforce security. You will find that your previous manual procedures relied on credentials or keys with more permissions than needed, and that you still hadn't created that deployment key like you said you would. You will not allow your automation scripts such permissive access, so you will finally set up the appropriate credentials and permissions for them, making your infrastructure more secure as a result.
Automating is the only way to grow your technical team efficiently. The deeper you automate, the more standard your processes become. You will replace your stray scripts with widely understood technologies like Docker, Terraform, Kubernetes or Consul and their design patterns. That means you can build them and hand them over to a new developer quickly, forget about them, and come back months later with no brain freeze.
In the context of Laravel, I have already published a reference architecture for AWS, also released as an open source infrastructure-as-code stack, and I would like to augment it here with continuous deployment (CD).
There are many ways to implement CD on AWS; below is a preferred solution. We will use CodeBuild to build our Docker images, CodeCommit to detect source code changes, and CodePipeline to orchestrate the build and swap our production containers on ECS.

The first issue we run into is that CodePipeline only integrates with GitHub, CodeCommit or S3. I recommend using CodeCommit as an entry point, so that you can handle two workflows:

a. Trigger a redeploy with a git push directly from the developer machine (for pre-prod or a hotfix).

b. Keep your existing CI (Bitbucket Pipelines, GitLab CI, CodeShip etc.) and have it trigger a git push to CodeCommit when tests pass, as sketched below.
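For workflow b, here is a minimal sketch of what that CI step could look like, assuming Bitbucket Pipelines with AWS credentials and region exposed as repository variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION); the test command and CodeCommit URL are placeholders:

pipelines:
  branches:
    master:
      - step:
          # assumes a build image with git, Python and pip available
          script:
            - ./run-tests.sh   # placeholder for your existing test suite
            - pip install awscli
            - git config --global credential.helper '!aws codecommit credential-helper $@'
            - git config --global credential.UseHttpPath true
            - git remote add codecommit https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/YOUR_REPOSITORY
            - git push codecommit HEAD:master   # triggers the pipeline described below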
This is the procedure I use to deploy my clients' Laravel applications on AWS, and I hope it helps you deploy yours. If your use case is more complex, I provide ongoing support packages ranging from mentoring your developers up to hands-on building your application on AWS. Ping me at [email protected]
1. Setup CodeCommit and push your Laravel code
Creating the repository takes only a few lines of CloudFormation (a minimal sketch follows). The full CloudFormation template additionally grants CodePipeline access to the repository.
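As a rough sketch, the CloudFormation resource can be as short as this; the repository name and description are placeholders:

Repository:
  Type: AWS::CodeCommit::Repository
  Properties:
    RepositoryName: my-laravel-app   # placeholder name
    RepositoryDescription: Laravel application source for the deployment pipeline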
In the meantime, run these commands to log in to the repository and publish your Laravel project:
$ git remote add codecommit CODE_COMMIT_URL
Configure the authentication to CodeCommit by adding the AWS command line credential helper to Git config:
$ git config --global credential.helper '!aws codecommit credential-helper $@'
$ git config --global credential.UseHttpPath true
You can now push your code to CodeCommit.
$ git push codecommit YOUR_BRANCH:master
If you're using OSX, subsequent git pushes might fail because OSX caches the short-lived credentials generated by the AWS credential helper. You will need to search your Keychain and delete any entry for git-codecommit-*. After this, remember to deny access to the Keychain when prompted after a manual git push.
2. Setup CodeBuild
In our CodeBuild project, we define the commands to be run after CodeBuild has cloned our repository. The commands are similar to what you would write in bitbucket-pipelines.yml or gitlab-ci.yml: assume you can define your environment (operating system and pre-installed tools) and that you are in the root directory of your project. Here we use an Ubuntu 14.04 image with Docker, Python and Compose installed, and we will install node, npm and gulp to compile our front-end assets.
Pre-build steps construct the Docker repository URL we will tag our images with. The build command is a simple docker build, and the post-build commands are one or more docker push. Ignore the last post-build command for now; we will come back to it in step 3.
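To make these steps concrete, here is a minimal buildspec.yml sketch, assuming a single laravel image; the ECR URL, region and Node.js setup are placeholders to adapt to your project:

version: 0.2
phases:
  install:
    commands:
      # install node, npm and gulp to compile the front-end assets
      - curl -sL https://deb.nodesource.com/setup_8.x | bash -
      - apt-get install -y nodejs
      - npm install && npm install --global gulp
  pre_build:
    commands:
      # log in to ECR and derive the image tag from the commit id
      - $(aws ecr get-login --no-include-email --region eu-west-1)
      - REPOSITORY_URI=YOUR_ECR_URL_FOR_LARAVEL
      - IMAGE_TAG=${CODEBUILD_RESOLVED_SOURCE_VERSION:0:7}
  build:
    commands:
      - gulp --production
      - docker build --tag $REPOSITORY_URI:$IMAGE_TAG .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      # the last post-build command, explained in step 3
      - printf '[{"name":"laravel","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json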
3. Setup CodePipeline and trigger ECS zero-downtime redeployment
Here is where we connect all the dots. So far we have a CodeCommit repository, a CodeBuild project and our ECS cluster running happily. We use a CodePipeline project to connect them all; in your AWS console it shows up as three successive stages: Source, Build and Deploy.
Every commit to the master branch of CodeCommit will trigger our CodePipeline project. The CodeBuild commands we defined will build our Docker images and push them to our ECR registries, and finally the last step of the pipeline will redeploy our application.
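For reference, here is a rough CloudFormation sketch of such a pipeline, assuming the referenced resources (PipelineRole, ArtifactBucket, Repository, BuildProject, Cluster and Service) are defined elsewhere in the same template:

Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: !GetAtt PipelineRole.Arn
    ArtifactStore:
      Type: S3
      Location: !Ref ArtifactBucket
    Stages:
      - Name: Source
        Actions:
          - Name: CodeCommit
            ActionTypeId:
              Category: Source
              Owner: AWS
              Provider: CodeCommit
              Version: '1'
            Configuration:
              RepositoryName: !GetAtt Repository.Name
              BranchName: master
            OutputArtifacts:
              - Name: SourceOutput
      - Name: Build
        Actions:
          - Name: CodeBuild
            ActionTypeId:
              Category: Build
              Owner: AWS
              Provider: CodeBuild
              Version: '1'
            Configuration:
              ProjectName: !Ref BuildProject
            InputArtifacts:
              - Name: SourceOutput
            OutputArtifacts:
              - Name: BuildOutput
      - Name: Deploy
        Actions:
          - Name: ECS
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Provider: ECS
              Version: '1'
            Configuration:
              ClusterName: !Ref Cluster
              ServiceName: !Ref Service
              FileName: imagedefinitions.json
            InputArtifacts:
              - Name: BuildOutput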
We redeploy our application by redefining the Docker image URLs of our ECS Task Definitions.
For the Deploy step, CodePipeline relies on an image definitions file. It is a JSON file we create in the post-build stage of CodeBuild that describes how our ECS cluster's Task Definitions should be updated. Provided the cluster has enough capacity, ECS will spin up the new containers and wait for them to be reported healthy before shutting down the old ones, effectively achieving a zero-downtime deployment.
For our multi-container application, this JSON file looks like the example below; it can easily be built with bash and printf, as sketched after the example:
[ { "name": "laravel", "imageUri": "YOUR_ECR_URL_FOR_LARAVEL:COMMIT_ID" }, { "name": "nginx", "imageUri": "YOUR_ECR_URL_FOR_NGINX:COMMIT_ID" }]
Note that this is why we tagged our newly built Docker images with a distinct tag: ECS can now pick up the Task Definition update and force a pull of the new images onto our container instances.
ECS will use the MinimumHealthyPercent and MaximumPercent settings of your ECS Service, together with the current spare cluster capacity (memory and CPU), to orchestrate the redeployment. If there is enough spare capacity for a new instance of your Task Definition and your application hasn't reached the allowed MaximumPercent, ECS will spin up another instance of your Task Definition and then remove the old one, in that order. Otherwise, it might swap them in the opposite order, creating downtime.
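These settings live on the ECS Service; a minimal CloudFormation sketch might look like this, with placeholder references and counts to adapt to your cluster:

Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    TaskDefinition: !Ref TaskDefinition
    DesiredCount: 2
    DeploymentConfiguration:
      MinimumHealthyPercent: 100   # never run fewer tasks than DesiredCount during a deploy
      MaximumPercent: 200          # allow a temporary duplicate set while the new tasks start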
I hope this helps you build a bit more automation into your deployments. I'm also looking forward to the next ECS feature release expected this month: automatic service discovery through DNS registration of your micro-services in Route53. If you're the first to implement it, comment below!
Lionel is Chief Technology Officer of London-based startup Wi5 and author of the Future-Proof Engineering Culture course.