Pulumi and GitHub Actions – Infrastructure as Code strikes back

Ever since one of Notch's DevOps Jedi masters showed us a new hope in the programming world with Terraform use cases, there has been a disturbance in the Force. Today, IaC strikes back with Pulumi and GitHub Actions.

Honestly, Terraform is great and by far the number one IaC tool in the DevOps world, so why should we learn the syntax for yet another tool with which we can do all the same things we could do with Terraform? That is the key – you don’t need to learn new syntax at all. 

What is Pulumi?

Pulumi is a modern, open-source infrastructure as code platform that leverages existing programming languages such as TypeScript, JavaScript, Python, Go, .NET, Java, and YAML, and it interacts with cloud resources through the Pulumi SDK. Basically, if you are good at TypeScript, it gives you everything you need to build the infrastructure for your application.
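
To get a feel for what that looks like, here is a minimal sketch of a Pulumi program in Python (the bucket is purely illustrative and not part of this post's infrastructure) that creates an S3 bucket and exports its name as a stack output – the same pattern we will use for the rest of the infrastructure below:

import pulumi
import pulumi_aws as aws

# A single S3 bucket, declared as an ordinary Python object
bucket = aws.s3.Bucket("example-bucket")

# Stack outputs work like return values for your infrastructure
pulumi.export("bucket_name", bucket.id)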

We will go through how to provision AWS infrastructure written in Python using Pulumi, and to make sure that the created battle station (OK, no more Star Wars references) is fully operational, we will deploy a simple Flask application to it using GitHub Actions.
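
The Flask application itself is not the focus of this post – anything that answers HTTP requests will do. For illustration, assume something as small as this lives in the flask_app folder that the GitHub Action later builds into a Docker image:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from EKS!"

if __name__ == "__main__":
    # Listen on all interfaces so the container is reachable inside the cluster
    app.run(host="0.0.0.0", port=80)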

What are we cooking today?

AWS resources we’re going to create and use are: 

  • VPC (virtual private cloud)
  • Security groups
  • IAM roles
  • Amazon ECR (Elastic Container Registry)
  • Amazon EKS (Elastic Kubernetes Service)

We will deploy the mentioned infrastructure using GitHub Actions, which we will also use to build the Docker image we need for the Kubernetes part of the deployment. The image will be pushed to the previously created ECR repository and later used for the Kubernetes deployment on the AWS EKS cluster created with Pulumi.

So, allons-y!

Pulumi program, project, stack, etc.

To define infrastructure with Pulumi, we need to create a Pulumi program, project, and stack.

The program, written in the desired programming language, will create infrastructure elements as it executes. The good part of being free to choose a preferred programming language is the ability to use its native libraries, which, combined with Pulumi resources and packages, gives you a very powerful tool for describing the desired state of your infrastructure.
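
As a small, hypothetical illustration of that combination, ordinary Python data structures and a list comprehension can drive Pulumi resource declarations – the same kind of subnets we will define one by one later in this post could just as well be stamped out from a list:

import pulumi_aws as aws

# Plain Python data (a list of tuples) driving Pulumi resource declarations
vpc = aws.ec2.Vpc("example-vpc", cidr_block="10.100.0.0/16")

subnet_layout = [
    ("az1", "eu-central-1a", "10.100.1.0/24"),
    ("az2", "eu-central-1b", "10.100.2.0/24"),
]

subnets = [
    aws.ec2.Subnet(
        f"public-subnet-{name}",
        vpc_id=vpc.id,
        availability_zone=az,
        cidr_block=cidr,
    )
    for name, az, cidr in subnet_layout
]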

Programs are defined in a project, and the project itself is a working directory in which we put the source code for the program and the needed metadata files. You can create a new project with the “pulumi new” command, passing a template that combines the desired cloud provider and programming language; the command then prompts you for the project name, description, and stack name.

~Pulumi/pulumi_blog  pulumi new aws-python                                                   
This command will walk you through creating a new Pulumi project.
Enter a value or leave blank to accept the (default), and press <ENTER>.
Press ^C at any time to quit.
project name (pulumi_blog): pulumi-aws-python
project description (A minimal AWS Python Pulumi program): Python Pulumi program for AWS infrastructure
Created project 'pulumi-aws-python'
Please enter your desired stack name.
To create a stack in an organization, use the format <org-name>/<stack-name> (e.g. `acmecorp/dev`).
stack name (dev): develop
Created stack 'develop'
aws:region: The AWS region to deploy into (us-east-1): eu-central-1
Saved config

We created the Pulumi project pulumi-aws-python, in which we will create AWS infrastructure using Python. After adding a short description of our project, we need to provide a name for our stack; in this case, we named it develop. We must also define the AWS region in which the infrastructure will be built. The region itself is linked to the AWS profile.

Stacks are logical deployment environments with related resources, each representing a specific deployment phase. You can have as many stacks in a project as you need; we mostly use them to create separate environments for development, staging, and production, or to split deployments by branch, which is excellent for organizing and managing infrastructure.

You can configure each stack via its Pulumi.<name_of_stack>.yaml file. To create a new stack in an existing project, we use the command “pulumi stack init <name_of_stack>”; the stack name must be unique within the project.

You can list all stacks in the project with the command “pulumi stack ls”.

Pulumi/pulumi_blog  pulumi stack init production
Created stack 'production'
Pulumi/pulumi_blog  pulumi stack ls
NAME         LAST UPDATE    RESOURCE COUNT    URL
develop      n/a            n/a               https://app.pulumi.com/DinoJelincic/pulumi-aws-python/develop
production*  n/a            n/a               https://app.pulumi.com/DinoJelincic/pulumi-aws-python/production
Pulumi/pulumi_blog  pulumi stack select develop

Additionally, we need the access key and secret key of our AWS profile. The easiest way to set them is the pulumi config set command. Be careful to set the secret key with the --secret flag so its real value is masked.

Pulumi/pulumi_blog  pulumi config set aws:accessKey JVKJXJSH34CJNMDKJS
Pulumi/pulumi_blog  pulumi config set aws:secretKey 7AHDGJDEUNDPPOWJE/SHHSU89 --secret

The entire configuration is saved in the Pulumi.develop.yaml file inside the project:

config:
 aws:accessKey: JVKJXJSH34CJNMDKJS
 aws:region: eu-central-1
 aws:secretKey:
   secure: AAABAH4G8hpMcZnb8x2m1rn5LAysUFVr6bZ3efWJv8IJrRpzSIQf2cJDk2oLzRDCR478qG7Nc0W4

Besides that file, the Pulumi project now has its initial structure and contains the Pulumi.yaml file with metadata for our project:

name: pulumi-aws-python
runtime:
 name: python
 options:
   virtualenv: venv
description: Python Pulumi program for AWS infrastructure

The code for infrastructure deployment lives in the __main__.py file. Since the infrastructure is fairly complex, it will be divided into modules for better readability.
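
The layout we end up with looks roughly like this (the module paths match the imports used in __main__.py later on):

pulumi_blog/
├── __main__.py
├── Pulumi.yaml
├── Pulumi.develop.yaml
├── requirements.txt
├── venv/
└── modules/
    ├── vpc/vpc.py
    ├── sg/sg.py
    ├── iam/iam.py
    ├── eks/eks.py
    └── ecr/ecr.py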

The requirements.txt file contains the dependencies for our project, and venv is the virtual environment in which the additional libraries will be installed.

pulumi>=3.0.0,<4.0.0
pulumi-aws>=6.0.2,<7.0.0

With this, the configuration phase is finished, and we can start provisioning the infrastructure.

AWS infrastructure provisioning

As said before, this blog’s entire infrastructure is divided into modules.

Pulumi Modules

To create a fully functional EKS cluster, we must first deploy some supporting infrastructure. Specifically, we will create a VPC (virtual private cloud), a virtual network in which all our resources and infrastructure are isolated.

For the VPC to be fully operational, we need to assign it an IP address range, two subnets to ensure high availability, an internet gateway to provide internet access, routing tables, as well as some firewall rules defined through an AWS resource called a security group.

The VPC gives us a logically separated network in which resources launch, with its IP range, tenancy, and DNS settings defined up front.

This is the Python code for provisioning it:

import pulumi
import pulumi_aws as aws

eks_vpc = aws.ec2.Vpc("eks-vpc",
       cidr_block="10.100.0.0/16",
       instance_tenancy="default",
       enable_dns_hostnames= True,
       enable_dns_support= True,
       tags={
           "Name": "eks-blog-vpc",
        })

The VPC needs subnets. The IP range of each subnet must fall within the VPC's IP range, and each subnet must belong to a different availability zone so that high availability can be achieved.

eks_subnet1 = aws.ec2.Subnet("public subnet az1",
           vpc_id=eks_vpc.id,
           availability_zone="eu-central-1a",
           map_public_ip_on_launch=True,
           cidr_block="10.100.1.0/24",
           tags={
               "Name": "Public subnet AZ1",
           })

eks_subnet2 = aws.ec2.Subnet("public subnet az2",
           vpc_id=eks_vpc.id,
           availability_zone="eu-central-1b",
           map_public_ip_on_launch=True,
           cidr_block="10.100.2.0/24",
           tags={
               "Name": "Public subnet AZ2",
            })

The next part of VPC configuration is the internet gateway, which allows communication between resources in our VPC and the internet. 

eks_gw = aws.ec2.InternetGateway("eks-gw",
       vpc_id= eks_vpc.id,
       tags={
           "Name": "eks-gw",
        })

We need to add a route to the subnet’s route table that directs internet traffic to the internet gateway, allowing access to and from the internet.

eks_route_table = aws.ec2.RouteTable("eks-route-table",
               vpc_id=eks_vpc.id,
               routes=[
                   aws.ec2.RouteTableRouteArgs(
                   cidr_block="0.0.0.0/0",
                   gateway_id=eks_gw.id,),
               ],
               tags={
                   "Name": "Public-blog-route-table",
               })

eks_route_table_association1 = aws.ec2.RouteTableAssociation("eks-route-table-association1",
                           subnet_id=eks_subnet1.id,
                           route_table_id=eks_route_table.id)

eks_route_table_association2 = aws.ec2.RouteTableAssociation("eks-route-table-association2",
                           subnet_id=eks_subnet2.id,
                            route_table_id=eks_route_table.id)

Here is the entire code for VPC configuration.

Next, we add firewall rules to control access from the internet. We do that with a security group, a virtual firewall resource that controls inbound and outbound traffic:

import pulumi
import pulumi_aws as aws
import modules.vpc.vpc as vpc

eks_sg = aws.ec2.SecurityGroup("eks_sg",
       description="firewall rules for cluster network",
       vpc_id= vpc.eks_vpc.id,
        ingress=[{
           "protocol": "tcp",
           "from_port": 22,
           "to_port": 22,
           "cidr_blocks": ["0.0.0.0/0"],
       },
       {
           "protocol": "tcp",
           "from_port": 80,
           "to_port": 80,
           "cidr_blocks": ["0.0.0.0/0"],
       },
       {
           "protocol": "tcp",
           "from_port": 443,
           "to_port": 443,
           "cidr_blocks": ["0.0.0.0/0"],
       },
   ],
   egress=[
       {
           "protocol": "-1",
           "from_port": 0,
           "to_port": 0,
           "cidr_blocks": ["0.0.0.0/0"],
       }
   ],
  
   tags={
       "Name": "eks-security-firewall-rules",
   },
    opts=pulumi.ResourceOptions(depends_on=[vpc.eks_vpc, vpc.eks_subnet1, vpc.eks_subnet2]))

In the security group module, we open ports 22, 80, and 443 for incoming traffic to the cluster, while the egress block opens outgoing traffic to everything. With the network set up, we are done with this part of the configuration.

Next, we need to provide IAM roles and policies to give our EKS cluster permissions on other AWS resources. An IAM role is a resource you can create and attach specific permissions to.

We create two roles: one for the cluster and one for worker nodes in the cluster. 

The role for the cluster is attached to the AmazonEKSClusterPolicy policy. The official AWS documentation states that this policy provides Kubernetes the permissions it requires to manage resources on your behalf.

import pulumi
import pulumi_aws as aws
import json

eks_cluster_role = aws.iam.Role("eks-iam-role",
           assume_role_policy=json.dumps({
               'Version': '2012-10-17',
               'Statement': [
                   {
                       'Action': 'sts:AssumeRole',
                       'Principal': {
                           'Service': 'eks.amazonaws.com'
                       },
                       'Effect': 'Allow',
                       'Sid':''    
                   }
                           ],


           }),
   )

aws.iam.RolePolicyAttachment(
   'eks-cluster-policy-attachment',
   role=eks_cluster_role.id,
   policy_arn='arn:aws:iam::aws:policy/AmazonEKSClusterPolicy',
)

Several policies are attached to the workers: AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly.

ec2_node_role = aws.iam.Role("ec2-node-iam-role",
                        assume_role_policy=json.dumps({
                            'Version': '2012-10-17',
                            'Statement': [
                                {
                                    'Action': 'sts:AssumeRole',
                                    'Principal': {
                                        'Service': 'ec2.amazonaws.com'
                                    },
                                    'Effect': 'Allow',
                                    'Sid': ''
                                }
                            ],
                        }),
   )
aws.iam.RolePolicyAttachment("eks-workernode-policy-attachment",
                        role=ec2_node_role.id,
                        policy_arn='arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy',)

aws.iam.RolePolicyAttachment("eks-cni-policy-attachment",
                        role=ec2_node_role.id,
                        policy_arn='arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy',)

aws.iam.RolePolicyAttachment("eks-container-policy-attachment",
                        role=ec2_node_role.id,
                         policy_arn='arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly',)

AmazonEKSWorkerNodePolicy is a policy that allows worker nodes to connect to the cluster.

AmazonEKS_CNI_Policy, as stated in the official AWS documentation, provides the Amazon VPC CNI plugin (amazon-vpc-cni-k8s) the permissions it requires to modify the IP address configuration on your EKS worker nodes. This permission set allows the CNI to list, describe, and modify elastic network interfaces on your behalf.

AmazonEC2ContainerRegistryReadOnly is a policy that provides read-only access to Amazon EC2 Container Registry repositories. Here is the entire IAM configuration.

Amazon ECR (Elastic Container Registry) is the registry for Docker images that makes building, storing, and deploying container images on AWS a lot easier.

Here is the Pulumi Python code, which creates a private repository named “blog-repo” with two policies attached: a repository policy that allows all ECR actions on the repository, and a lifecycle policy that expires old images.

import pulumi
import pulumi_aws as aws
import json

ecr = aws.ecr.Repository("blog-repo",
       image_scanning_configuration=aws.ecr.RepositoryImageScanningConfigurationArgs(
           scan_on_push=True,
       ),
       name= "blog-repo",
       image_tag_mutability="IMMUTABLE")


repository_policy = aws.ecr.RepositoryPolicy(
   "myrepositorypolicy",
   repository=ecr.id,
   policy=json.dumps({
       "Version": "2012-10-17",
       "Statement": [{
           "Sid": "new policy",
           "Effect": "Allow",
           "Principal": "*",
           "Action": [
               "ecr:*",
           ]
       }]
   })
)

lifecycle_policy = aws.ecr.LifecyclePolicy(
   "mylifecyclepolicy",
   repository=ecr.id,
   policy=json.dumps({
       "rules": [{
           "rulePriority": 1,
           "description": "Expire images older than 14 days",
           "selection": {
               "tagStatus": "untagged",
               "countType": "sinceImagePushed",
               "countUnit": "days",
               "countNumber": 14
           },
           "action": {
               "type": "expire"
           }
       }]
   })
)

The last part of our AWS infrastructure is the cluster itself. We need to attach the cluster to the previously configured VPC and subnets, and attach the previously configured IAM roles to the cluster and its worker nodes.

import pulumi
import pulumi_aws as aws
import modules.vpc.vpc as vpc
import modules.iam.iam as iam

eks_cluster = aws.eks.Cluster("eks-cluster",
           role_arn=iam.eks_cluster_role.arn,
           vpc_config=aws.eks.ClusterVpcConfigArgs(
               subnet_ids=[
                   vpc.eks_subnet1.id,
                   vpc.eks_subnet2.id,
               ],
           ),
       tags={
           "Name": "eks-blog-cluster",
       },
       opts=pulumi.ResourceOptions(depends_on=[iam.eks_cluster_role, vpc.eks_vpc]))

eks_workers = aws.eks.NodeGroup("eks-workers",
           cluster_name=eks_cluster.name,
           node_role_arn=iam.ec2_node_role.arn,
           instance_types=["t2.medium"],
           node_group_name="eks-blog-workers",
           subnet_ids=[
               vpc.eks_subnet1.id,
               vpc.eks_subnet2.id,
           ],
           scaling_config=aws.eks.NodeGroupScalingConfigArgs(
               desired_size=1,
               max_size=2,
               min_size=1,
           ),
           update_config=aws.eks.NodeGroupUpdateConfigArgs(
               max_unavailable=1,
           ),
            opts=pulumi.ResourceOptions(depends_on=[iam.ec2_node_role, eks_cluster]))

With this, we have finished the infrastructure modules. Next, the __main__.py file is amended, and the infrastructure can be provisioned.

import pulumi
import modules.vpc.vpc as vpc
import modules.sg.sg as sg
import modules.iam.iam as iam
import modules.eks.eks as eks
import modules.ecr.ecr as ecr
pulumi.export('vpcCIDR', vpc.eks_vpc.cidr_block)
pulumi.export("securityGroupID", sg.eks_sg.id)
pulumi.export("masterARN", iam.eks_cluster_role.arn)
pulumi.export("workerARN", iam.ec2_node_role.arn)
pulumi.export("clusterEndpoint", eks.eks_cluster.endpoint)
pulumi.export("kubeconfigCA", eks.eks_cluster.certificate_authority)
pulumi.export("repo-url", ecr.ecr.repository_url)

Finally, with __main__.py changed, it is best practice to check that the configuration is well written and free of errors. You can do this with the pulumi preview command.
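
From the project root, with the develop stack selected, that is simply:

Pulumi/pulumi_blog  pulumi preview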

Pulumi Preview Command

The check confirms that there are no errors in our configuration, so we can bravely start with the deployment of our infrastructure. Use the pulumi up command to do so.

After some time, the infrastructure is created, and all resources are provisioned.

Pulumi Resources

To tear the infrastructure down, use the pulumi destroy command, and the infrastructure will be destroyed.

Pulumi Destroy

GitHub Actions

We will use GitHub Actions for deployment to avoid the need to type commands to bring up or down our infrastructure (and because in the DevOps world, we love to have as many steps as possible automated). 

For all those who don’t know what GitHub Actions are, GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository or deploy merged pull requests to production.

Before starting the deployment using GitHub Actions, you need to set secret variables on the GitHub repository. Those secrets are used to authenticate the workflows against your AWS and Pulumi accounts.

On the GitHub repository, go to Settings -> Secrets and variables -> Actions, and set secrets for AWS and Pulumi. Here is a detailed description of how to set secrets on GitHub.

To authenticate with AWS, you must set the AWS access key, secret key, and default region as secrets. To authenticate with Pulumi, you need to set the Pulumi access token as a secret; it is generated in the Pulumi console.

Sign in to the Pulumi console using your GitHub account, click your profile picture in the upper right corner, and find Personal access tokens.

GitHub Actions workflows are stored in the repository under the .github/workflows folder, and the workflow itself is written in YAML.

First, we write a GitHub Action that will create the AWS infrastructure using Pulumi.

name: Pulumi blog action
on: [ workflow_dispatch ]
jobs:
 preview:
   name: Preview
   runs-on: ubuntu-latest
   steps:
      - uses: actions/checkout@v2   # check out the repo so requirements.txt and the Pulumi program are available

      - name: Set up Python 3.8
        uses: actions/setup-python@v2
       with:                                
         python-version: '3.8'

     - name: Configure AWS Credentials
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
         aws-region: ${{ secrets.AWS_REGION }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

     - run: pip install -r requirements.txt
     - uses: pulumi/actions@v3
       with:
         #command: destroy
         #command: preview
         command: up
         stack-name: DinoJelincic/eks-pulumi-blog/develop
       env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}

After some time, infrastructure has been deployed:

Now, we modify the GitHub Action so it builds and pushes the Docker image of the Flask application to the previously created ECR repository.

name: Pulumi blog action
on:
 workflow_dispatch:
   inputs:
     buildDockerImage:
       description: 'Build docker image (yes/no*)'
       required: true
       default: 'yes'
jobs:
 preview:
   name: Preview
   runs-on: ubuntu-latest
   steps:
     - uses: actions/checkout@v2
     - uses: docker/setup-qemu-action@v2
     - uses: docker/setup-buildx-action@v2
    
     - name: Set up Python 3.8                              
       uses: actions/setup-python@v2                              
       with:                                
         python-version: '3.8'

     - name: Configure AWS Credentials
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
         aws-region: ${{ secrets.AWS_REGION }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

     - name: Login to ECR
       id: login-ecr
       uses: aws-actions/amazon-ecr-login@v1

     - name: Build, publish, and deploy image to ECR
       id: build-image
       if: (github.event.inputs.buildDockerImage == 'yes')
       env:
         ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
         ECR_REPOSITORY: blog-repo
         IMAGE_TAG: latest
       run: |
         docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG ./flask_app > docker_build.log 2>&1
         docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG

     - run: pip install -r requirements.txt
     - uses: pulumi/actions@v3
       with:
         #command: destroy
         command: preview
         #command: up
         stack-name: DinoJelincic/eks-pulumi-blog/develop
       env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}

The action completes successfully, and the image is built and pushed to the ECR repository.

The last step in the GitHub Action is to deploy the application to the previously created EKS cluster. For that, we need to adjust the action.yml file once more with this part:

     - name: Fetch EKS Cluster Kubeconfig
       env:
         CLUSTER_NAME: eks-cluster-9252e06
       run: |
         aws eks update-kubeconfig --name $CLUSTER_NAME --region ${{ secrets.AWS_REGION }}

     - name: Deploy to EKS
       run: |
          kubectl apply -f k8s

Specify the cluster name in the GitHub Actions file, and in the Kubernetes manifests, reference the previously pushed image for the deployment. Copy the image URI from the AWS ECR registry and paste it into the deployment config file.

   spec:
     containers:
     - name: pulumi
       image: 791433942247.dkr.ecr.eu-central-1.amazonaws.com/blog-repo:latest
       ports:
         - containerPort: 80
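
The deployment alone only runs the pods. To reach the application from outside the cluster, the k8s folder also needs a Service of type LoadBalancer; a minimal sketch could look like the following (the metadata name and the app label are assumptions – they have to match the labels used in your own deployment):

apiVersion: v1
kind: Service
metadata:
  name: pulumi-flask-service
spec:
  type: LoadBalancer
  selector:
    app: pulumi
  ports:
    - port: 80
      targetPort: 80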

You are officially done with your deployment if GitHub Actions runs successfully again! Here is the entire code for our GitHub Action.

Go to your AWS console, and under the EKS cluster's resources, find the URL of the LoadBalancer that serves your application.
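
If you prefer the command line, you can fetch the kubeconfig locally the same way the workflow does and read the hostname from the Service itself; the EXTERNAL-IP column of the LoadBalancer entry holds the URL:

aws eks update-kubeconfig --name <cluster-name> --region eu-central-1
kubectl get svc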

Pulumi for Infrastructure Deployment

In this blog, I’ve shown how you can use Pulumi as a tool for infrastructure deployment. We can say that Pulumi essentially mirrors the very concept of DevOps with its ability to write code with any high-level programming language. 

There is no single “correct” high-level programming language to use with Pulumi. With it, you can create resources on all major cloud providers (AWS, GCP, Azure…), along with resources for Kubernetes and Docker. In the end, everything depends on your preferred language.

How to provision infrastructure with your desired programming language is well documented on Pulumi's official website, and there are many real-life use cases in their GitHub repository.

I also used GitHub Actions as a CI/CD tool, where I automated the entire infrastructure and application deployment process.

You can find the entire code in my GitHub repository.

Happy coding, and may the force be with you!
