If you follow the Microsoft development community at all, you’ve most likely already heard of the new web development framework called Blazor. If you haven’t heard of it, here’s an overview from the official documentation:
Blazor lets you build interactive web UIs using C# instead of JavaScript. Blazor apps are composed of reusable web UI components implemented using C#, HTML, and CSS. Both client and server code is written in C#, allowing you to share code and libraries.
As a developer who’s spent most of their career working on the back-end side of .NET applications writing C# code, this is certainly appealing. I’ve always wanted to build a personal website for myself, so I took the time to do so using Blazor and found that it was very simple and the code needed is rather minimal. This post will describe, at a high-level, what you need to build out a similar application.
The source code referenced throughout can be found in my GitHub repository. Note that, as of writing this, the project is still in development.
Setting up the project
As you may already know, Blazor is built on top of the open-source .NET Core SDK/runtime. So I’d suggest making sure you have the latest versions of:
- the .NET Core SDK (you’ll need ≥ v3.1.0)
- Visual Studio (or another editor of your choice)
With those installed, we can now create the project. When creating a new project in Visual Studio, you should now have an option for a Blazor App:
After selecting this and providing the name and location of the project, you should be presented with the following options:
These options represent the two hosting models available in Blazor. We’re going to be using the WebAssembly model, since this will allow us to serve the application (yes, the entire .NET/C# application) statically in the browser. Note that Blazor WebAssembly is currently in preview (as of April 17, 2020).
Upon creating the project, a default Blazor application will be generated for you. You should be able to build/run this and see a page similar to the following:
Develop your site
The Blazor framework is relatively simple, so this shouldn’t be too complicated. Essentially, the application contains the following:
- Pages (aka Razor pages, using the `.razor` file extension) - these define the route and layout of a given page in the UI. These pages can contain HTML elements and C# code. In the following example, you can see the C# code defined using the `@code` block:
```razor
@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}
```
- Components - these are used to encapsulate a piece of UI and its functionality. They generally take shape as some sort of reusable form or control. Using components can be a great way to add extensibility and keep the application simple and clean. A component is defined in a `.razor` file, and can optionally have a corresponding `.cs` file to contain the C# code. Below is an example of a component I created to contain a simple page header for my website:
```razor
@namespace Tayco.Web.Components
@inherits PageHeaderBase

<MatH2>@Title</MatH2>
<MatDivider />
<br />
```

```csharp
using Microsoft.AspNetCore.Components;

namespace Tayco.Web.Components
{
    public class PageHeaderBase : ComponentBase
    {
        [Parameter]
        public string Title { get; set; }
    }
}
```
Given that, you can now clean up the default pages generated by the template (`Pages\Counter.razor` and `Pages\FetchData.razor`) and add your own. I currently only have two pages: an About Me page and a Blogs page, which I’ll be updating later on to contain the blogs I’ve published.
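For example, a minimal About page built with the `PageHeader` component from earlier might look like the following (the route and text here are just illustrative, not the actual contents of my site):

```razor
@page "/about"

<PageHeader Title="About Me" />

<p>Welcome! This page is rendered entirely from C# and HTML in the browser.</p>
```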
Further considerations
If you happen to follow along and create a similar site, here are some other points to consider that I came across while developing this:
- Since there is no server hosting your .NET code in the Blazor WebAssembly hosting model, your browser needs to download the necessary DLLs to run the code. This results in a relatively large payload to load the site (my small site comes to just under 6MB). It also means that the DLLs are accessible to anyone who visits your site. If either of these is a concern for you, you may want to consider the Blazor Server hosting model.
- There are already a ton of third-party libraries that you can use to simplify/extend your Blazor app. I’d suggest taking a look at the awesome-blazor repo on GitHub, which contains a growing list of resources. I ended up using the MatBlazor library, which provides a ton of easy-to-use Material Design components.
- I only lightly grazed over the actual functionality in Blazor in this post. There is a lot more available, so I’d strongly recommend taking a look at the official documentation to learn more of what’s offered.
Hosting in AWS
One of the benefits of developing software using cloud services is the ease of use. The major providers (namely Microsoft, Amazon, and Google) all offer a wide array of services, allowing for robust and complex solutions. Since our site is going to be static, it should be cheap and simple to maintain. We don't need any server-side compute; we just need a place to host our files and provide public access to them. AWS has great documentation on hosting static websites, so I mainly followed that to set everything up. Here's an overview of the services that will be needed:
- S3 - As the name (Simple Storage Service) suggests, this is where the files for our website will be stored
- Route 53 - This allows us to associate a public domain name with our content in S3
- CloudFront - This one is optional. It simply provides better performance when loading our website by caching data in locations closer to end-users
This diagram from the AWS documentation gives a great outline of how these services interact:
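If you prefer a terminal over the AWS console, the S3 portion of that setup can be sketched with the AWS CLI (the bucket name and region below are placeholders; this assumes the CLI is installed and configured with credentials):

```shell
# Create a bucket named after the root domain (placeholder name/region)
aws s3 mb s3://example.com --region us-east-1

# Enable static website hosting. Serving index.html as the error document
# too lets Blazor handle unknown routes client-side.
aws s3 website s3://example.com/ \
  --index-document index.html \
  --error-document index.html
```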
Pricing
Aside from the low maintenance and ease of use, another benefit of this approach is the low cost. I ended up purchasing a domain with Route 53, which has been the most expensive part of the website at around $12. The billing model of these services is based on your usage, so there are no large upfront fees or subscription costs. In most cases, a static site will cost somewhere between pennies and a few dollars per month.
Publishing the Blazor app
As mentioned earlier, the AWS documentation for hosting a static site is pretty thorough, so I'd recommend following along with that. The whole process is easy to follow and took me less than 30 minutes. The only part I'll give some guidance on is uploading the actual website content. Once you have the S3 bucket for your root domain created, you can add the contents of your Blazor app. To do so, you'll first need to create the publishable contents of your app using one of the following methods:
Once your app has been published, you simply need to upload the contents into the S3 root-domain bucket. Whichever method you choose to publish with, you should ultimately end up with a directory containing your `index.html`. This is the directory that you'll want to copy into your S3 bucket.
This folder will contain everything in your `wwwroot` folder, along with the necessary DLLs and Blazor files. As mentioned previously, one important thing to consider here is that anything you copy to this S3 bucket will be publicly accessible. If this is a concern for you, then this solution is probably not appropriate, and you may want to consider one where private contents can be kept on a private server.
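If you'd rather script the publish-and-upload step than click through the console, it can be sketched with the .NET and AWS CLIs (the bucket name is a placeholder, and this assumes the AWS CLI is configured):

```shell
# Produce the publishable output; the static site lands in publish/wwwroot
dotnet publish -c Release -o publish

# Copy the static content into the root-domain bucket, making it public
# and deleting remote files that no longer exist locally
aws s3 sync publish/wwwroot s3://example.com --acl public-read --delete
```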
CI/CD with GitHub Actions
At this point, we've successfully published a Blazor WebAssembly app as a static website in AWS. This process essentially boiled down to the following:
- Build our Blazor app
- Grab the distributable contents
- Copy them to our S3 bucket in AWS
Now our website is live, which is great. However, we're probably going to be making changes to the website over time, as with almost any piece of software. We could just repeat the process above, manually building and copying the contents into production every time we make a change. Since I'll likely be the sole contributor to my website and changes will be relatively infrequent, this would probably be manageable (even though I'd cringe every time). But there are a few factors in software delivery that, as they increase, quickly make this process unmanageable:
- Frequency of changes
- Number of contributors
This problem is generally solved under the umbrella of continuous integration and continuous delivery (i.e. CI/CD), which entails automating the build/deployment steps that we would otherwise do manually. There is an ever-growing list of tools that can help implement CI/CD. We'll be using GitHub Actions to automate the process of building our Blazor WASM app and deploying it to AWS.
Building Workflows
I decided to go with GitHub Actions in this project for a few reasons:
- I already had my source code in a GitHub repository.
- There is a free tier available that meets my needs.
- I found the documentation thorough and easy to navigate.
- There's a marketplace where Actions can be created and shared by the community.
As mentioned previously, we have a few manual steps that we can follow to get our website into production. To translate that into GitHub Actions, we'll need to create a workflow. The idea of a workflow is pretty standard across different CI/CD implementations. At its core, it's just a definition of your build/deployment process, generally stored in a YAML file.
With GitHub Actions, you create a workflow by simply adding the definition as a `.yml` file in the `/.github/workflows` directory of your repository. Below is the current workflow definition for uploading my website (don't worry, we'll break down the pieces of this next):
```yaml
name: Upload Website

on:
  workflow_dispatch:
    inputs:
      input_name:
        required: false
        default: "Upload Website - Manual Trigger"
  push:
    branches:
      - master
    paths:
      - 'src/**'

jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@master
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '3.1.300'
      - run: dotnet build -c Release
      - run: dotnet test -c Release --no-build
      - run: dotnet publish -c Release --no-build -o publish Tayco.sln
      - uses: actions/upload-artifact@v1
        with:
          name: dist
          path: publish/wwwroot

  deploy:
    needs: [build]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v1
        with:
          name: dist
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          SOURCE_DIR: 'dist/'
```
As you can see, there are a number of grouped sections in this file. Let's break it down to understand what's going on.
```yaml
on:
  workflow_dispatch:
    inputs:
      input_name:
        required: false
        default: "Upload Website - Manual Trigger"
  push:
    branches:
      - master
    paths:
      - 'src/**'
```
The `on` keyword defines the events that trigger the workflow. In our case, this workflow can be triggered by two different events:
- `workflow_dispatch` - This is a recent addition that allows the workflow to be manually queued through the UI. Prior to this, you would have to trigger a different event (e.g. push a commit) in order to run the workflow.
- `push` - As you might guess, this event is any push to the repository matching a set of conditions. Our conditions are any push to the `master` branch with changes in the `src/` directory.
The `jobs` keyword defines what the workflow will actually do in response to the events defined above. Each job generally specifies:
- `runs-on`: The environment that the job will run in.
- `steps`: The tasks that will be run within the job.

It's also important to note that jobs run in parallel by default. So if one job depends on another, you'll need to declare that explicitly using `needs`.
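As a minimal illustration of `needs` (the job names and steps here are arbitrary), a dependency between two jobs looks like this:

```yaml
jobs:
  first:
    runs-on: ubuntu-latest
    steps:
      - run: echo "runs first"
  second:
    needs: [first]   # waits for `first` to finish successfully
    runs-on: ubuntu-latest
    steps:
      - run: echo "runs second"
```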
Our workflow consists of two rather straightforward jobs: `build` and `deploy`.
build
```yaml
build:
  runs-on: windows-latest
  steps:
    - uses: actions/checkout@master
    - uses: actions/setup-dotnet@v1
      with:
        dotnet-version: '3.1.300'
    - run: dotnet build -c Release
    - run: dotnet test -c Release --no-build
    - run: dotnet publish -c Release --no-build -o publish Tayco.sln
    - uses: actions/upload-artifact@v1
      with:
        name: dist
        path: publish/wwwroot
```
Walking through the steps for this job, we:
- Check out the latest version of the `master` branch. (Remember that this job runs after any push to `master` that touches the `src/` directory.)
- Set up `dotnet`. This is a community Action from the marketplace (`actions/setup-dotnet`).
- Build using the Release configuration.
- Run our tests.
- Publish our contents explicitly into a `publish` directory.
- Upload the publish artifacts (using `actions/upload-artifact`). This step is important because it allows us to reuse artifacts between jobs, as we'll see next.
deploy
```yaml
deploy:
  needs: [build]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/download-artifact@v1
      with:
        name: dist
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        SOURCE_DIR: 'dist/'
```
Before jumping into the steps, you might notice that we've declared `needs: [build]`. As you can imagine, this ensures that this job runs in sequence, after the `build` job finishes successfully. And for the steps, we:
- Download the `dist` artifact that we uploaded in the `build` job.
- Upload the artifacts to our S3 bucket using the `jakejarvis/s3-sync-action` Action.
There's another important aspect to look at here. Our workflow is now interacting with external infrastructure - AWS S3 in this case. Thankfully, we can't make changes to our S3 bucket without telling AWS who we are, so we need to supply some credentials. However, that information is confidential and should not be shared. And since workflow files are visible to anyone with access to the repository, we don't want to spill the beans in our code. GitHub's solution to this problem is Secrets.
Secrets act as a secure key-value store that can be set once and then used throughout your workflows. As you can see in the `deploy` job, we use the `${{ secrets.<SECRET_NAME> }}` syntax to access secrets that we've defined in the repository.
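For reference, secrets can be defined under the repository's Settings > Secrets page, or scripted with the GitHub CLI. A sketch with placeholder values (substitute your real bucket name and AWS credentials):

```shell
# Store the deployment credentials as repository secrets (values are placeholders)
gh secret set AWS_S3_BUCKET --body "example.com"
gh secret set AWS_ACCESS_KEY_ID --body "<access-key-id>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<secret-access-key>"
```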
Automate all the things
And with that, we've now automated the build and deployment steps that we previously had to do manually. Now if we want to make a change, we just do the development locally and then push the changes to the server. From there, our Upload Website workflow will be triggered and kick off the necessary jobs to build and deploy our app. I've also set up a couple of other workflows in the repository:
- A pull request workflow: This runs against any pull request submitted against the `master` branch. It just builds and runs the tests for the changes, ensuring that things are mostly working before merging into `master` (which will then trigger the Upload Website workflow).
- A blog upload workflow: This runs when a change is made to the `/blogs/` directory on the `master` branch. It simply copies the contents into the S3 bucket that I use to serve the actual blog content to the application at runtime.
Summary
While my website isn’t the most elegant (I certainly don’t claim to be an expert in UI/UX), I found the whole process of working with Blazor to be very enjoyable, and I’m ultimately happy with the result. Going forward, if I find myself repeating any tasks manually, I can look into adding them to these workflows. As mentioned earlier, I could probably maintain this repository without the automated workflows, but the value of these CI/CD practices grows with the frequency of changes and the number of contributors, making them essential for effective software delivery at scale.
Previously published at