With that said, how do you both provision infrastructure and deploy code to test/staging/production environments in an automated and risk-minimized way?
If you can’t answer this question easily, or some part of your deploys still requires you to look at your cloud provider’s console, then this post is for you.

When configuring serverless architectures in these environments, special care needs to be taken in deciding whether a single-stack or multi-stack architecture is best for your project. Otherwise, you might waste a lot of time on deployments or, even worse, introduce unnecessary risk to your application.
The single-stack approach shares its services and infrastructure across all environments (like test, staging, and prod). To differentiate between environments at runtime, it relies on environment variables or similar mechanisms such as API Gateway stages and Lambda aliases.
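As a rough sketch of what this looks like on AWS (the function name orders here is hypothetical), each environment is just a Lambda alias pointing at a published version of one shared function:

    # Publish an immutable version of the shared function’s current code.
    aws lambda publish-version --function-name orders

    # Point per-environment aliases at versions of that same function.
    aws lambda create-alias --function-name orders --name staging --function-version 7
    aws lambda create-alias --function-name orders --name prod --function-version 6

    # "Promoting" to prod is just moving the prod alias forward.
    aws lambda update-alias --function-name orders --name prod --function-version 7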
In contrast, a multi-stack approach uses a separate instance of each service for every environment and does not use API stages or Lambda aliases to differentiate between them.
Using the AWS example again, a multi-stack setup would configure a separate Lambda function, API Gateway, and DynamoDB table for each environment.
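As a minimal sketch of the multi-stack idea, here is how the stage name can be baked into every resource using the Serverless Framework (covered at the end of this post); the service, handler, and table names are hypothetical:

    service: orders

    provider:
      name: aws
      runtime: nodejs18.x
      # Default to "test" when no --stage flag is passed.
      stage: ${opt:stage, 'test'}

    functions:
      createOrder:
        handler: handler.createOrder
        environment:
          # Each environment reads from its own table.
          TABLE_NAME: orders-${opt:stage, 'test'}

    resources:
      Resources:
        OrdersTable:
          Type: AWS::DynamoDB::Table
          Properties:
            TableName: orders-${opt:stage, 'test'}
            BillingMode: PAY_PER_REQUEST
            AttributeDefinitions:
              - AttributeName: id
                AttributeType: S
            KeySchema:
              - AttributeName: id
                KeyType: HASH

Running serverless deploy --stage staging and serverless deploy --stage prod then produces two fully independent stacks, each with its own function, API, and table.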
If something goes wrong in a single-stack approach, there is a greater chance that your production systems are negatively impacted as well. After all, the environments in this approach are provisioned on top of each other and rely only on environment variables and Lambda aliases for indirection.
The main idea behind continuous delivery is to frequently produce production-ready artifacts from your code base in an automated fashion.
It ensures that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and verifying, through rigorous automated testing, that business applications and services function as expected.

The most important point, however, is that all of this must be automated.
Cloud providers such as Amazon Web Services make it easy, through their front-end web consoles, to spin up a new Lambda function for testing purposes or to update a function’s application code.
However, all aspects of your continuous delivery strategy should be automated - the only manual step should be pushing the “deploy to production” button. This is vital for many reasons:
Minimizing Risk - Even with proper resource access controls in place (which is seldom the case), the risk of accidentally causing bad things to happen is higher than you think. You definitely don’t want to be the reason for a disruption to your service.
Scalability - As the number of developers on the project grows, and by extension the volume of code deliveries, any process that is not completely automated will slow your team down.
Traceability - When your continuous delivery strategy is automated, often through a pipeline of some sort, deployments, builds, and other components are better documented, making it easier to resolve and recover from failures.
Using Bitbucket Pipelines, I usually use the following skeleton pipeline in multi-environment, continuous delivery setups. First, it runs your static analysis tools and automated tests for every pull request. Once the changes are merged to master, it automatically updates the STAGING environment. Finally, the PROD environment is updated through a manual trigger once the changes in STAGING are known to be safe. If you are using AWS, the tooling discussed at the end of this post handles the actual automated deployment of Lambda functions.

pipelines:
  default:
    - step:
        name: Run Static Analysis Tools
        script:
          - ...
    - step:
        name: Run Automated Tests
        script:
          - ...
  branches:
    master:
      - step:
          name: Run Static Analysis Tools
          script:
            - ...
      - step:
          name: Run Automated Tests
          script:
            - ...
      - step:
          name: Update STAGING
          deployment: staging
          script:
            - ... # deploy to staging
      - step:
          name: Update PROD
          deployment: production
          trigger: manual
          script:
            - ... # deploy to production
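To make the placeholder scripts concrete, here is one possibility for the staging step, assuming the application is deployed with the Serverless Framework; whatever deploy command your project uses would slot in the same way:

    - step:
        name: Update STAGING
        deployment: staging
        script:
          - npm ci
          - npx serverless deploy --stage staging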
By using code to automate the process of setting up and configuring a virtual machine or container, you get a fast, repeatable method for replicating that process. So if you build a virtual environment for the development of an application, you can recreate that VM simply by running the same code when you are ready to deploy.
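As a minimal sketch of this idea, a development VM described as an AWS CloudFormation template can be recreated identically by redeploying the same file (the instance type and AMI ID below are placeholders):

    Resources:
      DevVm:
        Type: AWS::EC2::Instance
        Properties:
          InstanceType: t3.micro          # placeholder size
          ImageId: ami-0123456789abcdef0  # placeholder AMI; substitute one valid in your region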
Additionally, IaC processes increase agility. Developers don’t have to wait for however long the IT department needs to provision a new VM before they can do their work.

When it comes to practicing IaC in the cloud, the Serverless Framework is a great tool for configuring serverless architectures. It’s a command line interface for building and deploying entire serverless applications through the use of configuration template files. It, along with many other tools, including AWS’s Serverless Application Model (SAM), provides a scalable and systematic solution to many of the operational complexities of multi-environment serverless architectures. The Serverless Framework is probably the most popular cloud-provider-agnostic framework for configuring serverless applications and supports AWS, Microsoft Azure, Google Cloud Platform, IBM OpenWhisk, and more.
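For comparison, here is a hedged sketch of the same stage-parameterized setup as a SAM template (the function, handler, and stage names are hypothetical):

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31

    Parameters:
      Stage:
        Type: String
        Default: test

    Resources:
      OrdersFunction:
        Type: AWS::Serverless::Function
        Properties:
          FunctionName: !Sub orders-${Stage}  # one function per environment
          Handler: handler.createOrder
          Runtime: nodejs18.x
          CodeUri: ./src

Deploying with sam deploy --stack-name orders-staging --parameter-overrides Stage=staging (and likewise for prod) then yields one independent stack per environment, just as with the Serverless Framework example above.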