Serverless API with Terraform: GO and AWS [Part 2]

by Daniel (@danstenger), March 28th, 2022


In part 1, I went through the basics of setting up the back-end for infrastructure state tracking and deploying it to AWS. This time, I'll focus on creating the remaining part, which is the API itself. So without further ado, let's get started.


For demonstration purposes, I'll create a /users API endpoint that allows performing CRUD operations on user documents. I'll start from the iac/api directory, where all Lambdas, the persistence layer and the API gateway will be defined.


Same drill as in part 1. Variables live in variables.tf, computed variables in locals.tf, and the main assembly point is the main.tf file. Let's review main.tf, as everything else is very similar to the module from part 1.
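
For context, here is a rough sketch of what variables.tf and locals.tf could contain. The variable and local names match the references made in main.tf below; the default values and the way env is derived are my assumptions:

# iac/api/variables.tf (sketch)
variable "prefix" {
  description = "name prefix for all API resources"
  default     = "serverless-api"
}

variable "region" {
  default = "eu-central-1"
}

variable "lambda_path" {
  description = "path to the GO Lambda sources"
  default     = "../../lambdas" # assumption
}

variable "api_version" {
  default = "v1"
}

# iac/api/locals.tf (sketch)
locals {
  env             = terraform.workspace # assumption
  ddb_users_table = "${local.env}-${var.prefix}-users"
}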


First off, I’m configuring the backend:


terraform {
  backend "s3" {
    region         = "eu-central-1"
    bucket         = "project-123-remote-state"
    key            = "project-123-remote-state.tfstate"
    dynamodb_table = "project-123-tf-statelock"
  }
}


The above values come from the remote-state module created in part 1. As mentioned before, the backend configuration block doesn't support interpolated values, so all resources have to be created upfront in order to be used here. This approach of hard-coding values suits my needs and works perfectly fine. There are cases, though, where this isn't an option. If, say, I had to perform a multi-region deployment, having all these values hard-coded would cause some inconvenience.


There are a few options to handle this:


1) Using a configuration file:

Create a configuration file (usually each environment will have its own) and populate it with variables:
# iac/api/backend_config
region         = "eu-central-1"
bucket         = "project-123-remote-state"
key            = "project-123-remote-state.tfstate"
dynamodb_table = "project-123-tf-statelock"


Finally, initialise the backend with this configuration:
terraform init -backend-config=backend_config


2) Using command line arguments:

# each key/value pair gets its own -backend-config flag
terraform init \
  -backend-config="region=$REGION" \
  -backend-config="bucket=$BUCKET" \
  -backend-config="key=$KEY" \
  -backend-config="dynamodb_table=$DDB_TABLE"


Regardless of the option chosen, the terraform.backend block can then be left empty and the whole module initialised with dynamic values:
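
terraform {
  backend "s3" {}
}

In that case, main.tf only declares which backend type is used, and the actual values are supplied at init time.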


Right below the back-end configuration, I've defined the Lambda policy and role. In a nutshell, it's a definition of what my Lambdas will be allowed to access, and it varies depending on application needs:


data "template_file" "lambda_policy" {
  template = file("templates/lambda_policy.json")
}

data "template_file" "lambda_role" {
  template = file("templates/lambda_role.json")
}

resource "aws_iam_policy" "policy" {
  name        = "${local.env}-${var.prefix}-policy"
  description = "policy to allow lambda use specified resources"
  policy      = data.template_file.lambda_policy.rendered
}

resource "aws_iam_role" "role" {
  name               = "${local.env}-${var.prefix}-role"
  assume_role_policy = data.template_file.lambda_role.rendered
}

resource "aws_iam_role_policy_attachment" "policy_attachment" {
  role       = aws_iam_role.role.name
  policy_arn = aws_iam_policy.policy.arn
}


As you can see from the above snippet, I like to keep my policy declarations in separate .json files so that main.tf stays DRY.
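
For reference, the trust policy in templates/lambda_role.json is typically the standard Lambda assume-role document. Here is a minimal sketch of the same thing expressed inline with jsonencode(), my approximation rather than the author's exact template:

# Alternative to templates/lambda_role.json: the trust policy inlined in HCL.
resource "aws_iam_role" "role_inline" {
  name = "example-lambda-role"

  # Allows the Lambda service to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}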


The next step is a DynamoDB table. I'll be using it as the persistence layer for this example:


module "dynamodb_table" {
  source = "terraform-aws-modules/dynamodb-table/aws"

  name     = local.ddb_users_table
  hash_key = "username"

  attributes = [
    {
      name = "username"
      type = "S"
    }
  ]

  tags = {
    Env = local.env
  }
}


There's nothing really special about it except that I'll be using username as the hash key to uniquely identify my documents. If you need more info on DynamoDB, the official AWS documentation is a good place to start.
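
For comparison, here is roughly what the community module provisions under the hood, expressed as a plain resource (a sketch; the on-demand billing mode is my assumption):

resource "aws_dynamodb_table" "users" {
  name         = local.ddb_users_table
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "username"

  attribute {
    name = "username"
    type = "S"
  }

  tags = {
    Env = local.env
  }
}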


Next, I define the Lambdas needed to perform CRUD operations on the /users resource. I'll briefly walk you through one Lambda definition, as the remaining Lambdas look exactly the same and only the function names and descriptions change:


module "create_user_lambda" {
  source        = "../modules/aws/lambda"
  function_name = "create_user"
  lambda_path   = var.lambda_path
  description   = "create user lambda, part of /users resource CRUD to handle user creation"
  role_arn      = aws_iam_role.role.arn

  environment = {
    ENV            = local.env
    REGION         = var.region
    DDB_TABLE_NAME = local.ddb_users_table
  }

  tags = {
    Env = local.env
  }
}


The first thing worth mentioning is that I use a custom Lambda module as the source. You can check it out in the project's source code. It helps keep my Lambda definitions DRY by abstracting common functionality and setting configuration values that I would otherwise have to type in over and over again. Each time I run terraform plan and terraform apply, this module will use the function_name and lambda_path values to allocate, build and deploy my Lambdas if there's a change detected in the source.
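
To give an idea of what such a module does under the hood, here is a rough sketch of its core. This is my approximation, not the author's exact code, and it assumes the GO binary has already been built under lambda_path; the real module also takes care of building it:

# modules/aws/lambda/main.tf (sketch)
variable "function_name" {
  type = string
}

variable "lambda_path" {
  type = string
}

variable "description" {
  type = string
}

variable "role_arn" {
  type = string
}

variable "environment" {
  type    = map(string)
  default = {}
}

variable "tags" {
  type    = map(string)
  default = {}
}

# Package the pre-built binary for the given function (path layout is an assumption).
data "archive_file" "zip" {
  type        = "zip"
  source_file = "${var.lambda_path}/${var.function_name}/main"
  output_path = "${path.module}/build/${var.function_name}.zip"
}

resource "aws_lambda_function" "this" {
  function_name = var.function_name
  description   = var.description
  role          = var.role_arn
  runtime       = "go1.x"
  handler       = "main"
  filename      = data.archive_file.zip.output_path

  # Redeploys the function only when the packaged source changes.
  source_code_hash = data.archive_file.zip.output_base64sha256

  environment {
    variables = var.environment
  }

  tags = var.tags
}

output "function_arn" {
  value = aws_lambda_function.this.arn
}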


The next stop is the API Gateway definition:


data "template_file" "apigw_policy" {
  template = file("${path.module}/templates/apigw_policy.json")
}

data "template_file" "api_spec" {
  template = file("templates/api.yaml")
  vars = {
    role_arn               = aws_iam_role.role.arn
    region                 = var.region
    create_user_lambda_arn = module.create_user_lambda.function_arn
    update_user_lambda_arn = module.update_user_lambda.function_arn
    get_user_lambda_arn    = module.get_user_lambda.function_arn
    delete_user_lambda_arn = module.delete_user_lambda.function_arn
  }
}

resource "aws_api_gateway_rest_api" "rest_api" {
  name        = "serverless-api"
  description = "serverless-api"
  body        = data.template_file.api_spec.rendered
  policy      = data.template_file.apigw_policy.rendered
}

resource "aws_api_gateway_deployment" "client-example-api" {
  rest_api_id = aws_api_gateway_rest_api.rest_api.id
  stage_name  = var.api_version
  depends_on  = [aws_api_gateway_rest_api.rest_api]

  variables = {
    api_version = md5(file("${path.module}/templates/api.yaml"))
  }

  lifecycle {
    create_before_destroy = true
  }
}


There are a few things happening here, but the most interesting one is the OpenAPI spec. As with the policies, I'm placing it in a separate templates/api.yaml file just to keep things more organised (the full spec is available in the project's source code). Note that this time, when rendering the API spec template file, I'm passing in a few variables. This is mainly to tell each resource which Lambda to trigger when it gets called.
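
As an illustration, a single operation in templates/api.yaml could wire POST /users to the create-user Lambda roughly like this. It is a sketch based on the variables passed above, not the author's exact spec:

# templates/api.yaml (fragment, sketch)
paths:
  /users:
    post:
      x-amazon-apigateway-integration:
        type: aws_proxy
        httpMethod: POST
        credentials: ${role_arn}
        uri: arn:aws:apigateway:${region}:lambda:path/2015-03-31/functions/${create_user_lambda_arn}/invocations
      responses:
        "201":
          description: user created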


From the OpenAPI documentation:


An OpenAPI definition can then be used by documentation generation tools to display the API, code generation tools to generate servers and clients in various programming languages, testing tools, and many other use cases.


So what Terraform will do is take my spec and create an API Gateway in AWS. This is done by assigning the rendered spec template to aws_api_gateway_rest_api.rest_api.body. Using an OpenAPI spec also makes it very easy to design REST resources, as most code editors have Swagger extensions that allow previewing changes in real time and ensure that the spec is valid.


As for the handlers, I've used GO to implement them. I will not be going through those in detail, as they were quickly assembled for demonstration purposes; you can check the implementation in the project's source code.
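
To give a flavour of what such a handler looks like, here is a rough sketch of a create-user Lambda. It is my own approximation rather than the author's code, and it assumes the aws-lambda-go and aws-sdk-go libraries plus the DDB_TABLE_NAME environment variable set by the Lambda module above:

package main

import (
	"context"
	"encoding/json"
	"net/http"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

type user struct {
	Username string `json:"username"`
}

var db = dynamodb.New(session.Must(session.NewSession()))

func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	var u user
	if err := json.Unmarshal([]byte(req.Body), &u); err != nil || u.Username == "" {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusBadRequest, Body: "invalid payload"}, nil
	}

	// Persist the document using username as the hash key.
	_, err := db.PutItemWithContext(ctx, &dynamodb.PutItemInput{
		TableName: aws.String(os.Getenv("DDB_TABLE_NAME")),
		Item: map[string]*dynamodb.AttributeValue{
			"username": {S: aws.String(u.Username)},
		},
	})
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError, Body: err.Error()}, nil
	}

	return events.APIGatewayProxyResponse{StatusCode: http.StatusCreated, Body: req.Body}, nil
}

func main() {
	lambda.Start(handler)
}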
Assuming that the remote-state stack from part 1 is deployed, I can now deploy the API stack. First, I have to make sure that Terraform has access to the AWS API:
export AWS_ACCESS_KEY_ID=******
export AWS_SECRET_ACCESS_KEY=******


Deploying the API:
cd /iac/api/
terraform plan
terraform apply --auto-approve


The /iac/api/outputs.tf file contains the properties that will be printed out after terraform apply finishes running. If all goes well, I should see output similar to this:

invoke_url = "https://dorb127v21.execute-api.eu-central-1.amazonaws.com/v1"
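
For reference, that value can come from a single output in outputs.tf pointing at the deployment resource defined earlier (a minimal sketch):

# iac/api/outputs.tf (sketch)
output "invoke_url" {
  value = aws_api_gateway_deployment.client-example-api.invoke_url
}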


Now, with that URL in place, I can use curl to test that everything's working as expected:

# create document
curl -X POST https://dorb127v21.execute-api.eu-central-1.amazonaws.com/v1/users \
 -H 'Content-Type: application/json' \
 -d '{"username": "foo"}'

# get document
curl -X GET https://dorb127v21.execute-api.eu-central-1.amazonaws.com/v1/users/foo

# update document
curl -X PUT https://dorb127v21.execute-api.eu-central-1.amazonaws.com/v1/users/foo \
 -H 'Content-Type: application/json' \
 -d '{"fname": "bar", "lname": "baz", "age": 100}'

# delete document
curl -X DELETE https://dorb127v21.execute-api.eu-central-1.amazonaws.com/v1/users/foo

I hope you have learned something useful. If you need more information on the modules I've used in this example, please refer to the Terraform documentation. You can also find the source code and track the progress of this project in its repository.