Setting Up a Load Balanced API on GCP using Terraform & Ansible

Tennison Yu
Jul 11, 2020
Ansible and Terraform logos from https://blog.pythian.com/rds-provisioning-a-comparision-between-ansible-and-terraform/

Recently, I took on an opportunity to learn Terraform and Ansible. For those unfamiliar, both are automation tools used for large-scale deployments, mostly in the cloud. Terraform is strong at deploying infrastructure, such as setting up instances, networks, buckets, etc., whereas Ansible is great at deploying configurations and software updates to any instances that have been set up. Together, they make a powerful team.

Terraform is developed by HashiCorp and uses its own configuration language, HCL, whereas Ansible is owned by Red Hat and is primarily driven by YAML files. Both are open source and fairly intuitive to use, which is what drew me to these technologies in particular.

The task at hand is to use both technologies to set up an API on multiple instances that can be reached through a load balancer on Google Cloud Platform (GCP). In this article, I want to document my experience using these tools for a tangible project and share it with other automation enthusiasts. My code is here if you want to try it: https://github.com/tyu0912/ansible-terraform-gcp. Comments are definitely welcome as well.

Terraform

When using Terraform to set up the infrastructure, I first thought about what pieces the problem was made of. Clearly I would need instances and a load balancer, but after some thought I realized I also needed an instance group, a network for all of this to operate on, and firewall rules opening certain ports so that the pieces can talk to one another.

After some research, I was impressed by how intuitive the language is and was able to quickly piece together my main Terraform script. Essentially, a configuration is composed of resource blocks, each defined by a type and given a name of your choosing so you can reference it elsewhere. For example, one block has the type “google_compute_network”, and I named it “flask_network”. Within each block are the settings, which are defined in Terraform’s documentation.
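To make that concrete, here is a minimal sketch of the kind of resource blocks involved. The network name “flask_network” matches what I described above, but the machine type, zone, ports, and tags here are illustrative assumptions rather than a copy of my actual script.

```hcl
# Network for the instances and load balancer to live on.
resource "google_compute_network" "flask_network" {
  name                    = "flask-network"
  auto_create_subnetworks = true
}

# Firewall rule so traffic can reach the API port on the instances.
resource "google_compute_firewall" "allow_api" {
  name    = "allow-api"
  network = google_compute_network.flask_network.name

  allow {
    protocol = "tcp"
    ports    = ["80", "5000"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["flask-api"]
}

# The instances that will serve the API.
resource "google_compute_instance" "flask_instance" {
  count        = 2
  name         = "flask-instance-${count.index}"
  machine_type = "e2-small"
  zone         = "us-central1-a"
  tags         = ["flask-api"]

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1804-lts"
    }
  }

  network_interface {
    network = google_compute_network.flask_network.name
    access_config {} # gives each instance an external IP that Ansible can reach
  }
}

# Unmanaged instance group the load balancer will use as its backend.
resource "google_compute_instance_group" "flask_group" {
  name      = "flask-instance-group"
  zone      = "us-central1-a"
  instances = google_compute_instance.flask_instance[*].self_link

  named_port {
    name = "http"
    port = 5000
  }
}
```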

Setting up the load balancer was a more complicated process, since there are other dependencies such as defining the backend group of instances and creating the respective health checks for that group. Luckily, Google provides some great modules (similar to packages in other programming languages) that you can reference and that Terraform pulls in automatically. I used gce-lb-http, which I keep in a separate file for cleanliness.
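For reference, a call to the module looks roughly like the sketch below. The module source is the registry path for gce-lb-http, but the port, health check, and backend settings are placeholders, and the exact set of fields the backends map requires differs between module versions, so check the documentation for the version you pin.

```hcl
module "gce_lb_http" {
  source  = "GoogleCloudPlatform/lb-http/google"
  version = "~> 9.0" # pin a version; required backend fields vary across releases

  name    = "flask-lb"
  project = var.project_id

  # Open the instance network's firewall for the load balancer's health checks.
  firewall_networks = [google_compute_network.flask_network.name]
  target_tags       = ["flask-api"]

  backends = {
    default = {
      protocol    = "HTTP"
      port        = 5000
      port_name   = "http"
      timeout_sec = 10

      health_check = {
        request_path = "/"
        port         = 5000
      }

      # Some module versions require additional fields such as log_config or iap_config here.
      log_config = {
        enable = false
      }

      # Point the backend service at the instance group defined earlier.
      groups = [
        { group = google_compute_instance_group.flask_group.self_link }
      ]
    }
  }
}
```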

I prepared other files as well, such as one defining variables to keep things organized and another outputting the external IP addresses of the instances I created. The latter becomes important in the next section.
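A sketch of those two pieces follows. The variable set and output names are my own illustrative choices; the important part is exposing the instance IPs (and the load balancer address) so the next step can pick them up.

```hcl
# variables.tf -- keep project-specific values in one place.
variable "project_id" {
  description = "GCP project to deploy into"
  type        = string
}

variable "zone" {
  description = "Zone for the instances and the instance group"
  type        = string
  default     = "us-central1-a"
}

# outputs.tf -- surface the values the Ansible step needs.
output "instance_ips" {
  description = "External IPs of the API instances, consumed by the Ansible inventory"
  value       = google_compute_instance.flask_instance[*].network_interface[0].access_config[0].nat_ip
}

output "load_balancer_ip" {
  description = "Public address of the HTTP load balancer"
  value       = module.gce_lb_http.external_ip
}
```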

Ansible

For my particular use case, I leveraged a YAML playbook from https://www.digitalocean.com/community/tutorials/how-to-use-ansible-to-install-and-set-up-docker-on-ubuntu-18-04, which not only installs Docker on my instances but also pulls down a Docker image of my API from Docker Hub and creates the containers from it.
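The playbook boils down to something like the sketch below. The image name, container name, and port are placeholders of mine, and the real playbook from the tutorial does a bit more work (adding Docker’s apt repository, installing Python dependencies), but the shape is the same.

```yaml
# playbook.yml -- install Docker, pull the API image, and start a container on every host.
# Note: on newer Ansible releases the docker_image/docker_container modules live in
# the community.docker collection.
- hosts: all
  become: true
  vars:
    api_image: "tyu0912/flask-api:latest"   # placeholder image name
    container_name: "flask_api"

  tasks:
    - name: Install Docker and the Python Docker SDK
      apt:
        name:
          - docker.io
          - python3-docker
        state: present
        update_cache: true

    - name: Pull the API image from Docker Hub
      docker_image:
        name: "{{ api_image }}"
        source: pull

    - name: Run the API container
      docker_container:
        name: "{{ container_name }}"
        image: "{{ api_image }}"
        state: started
        restart_policy: always
        published_ports:
          - "5000:5000"
```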

To do the actual deployment, Ansible also needs to know the IP addresses of the instances to deploy to. These are defined in an inventory file, which can be specified when running any Ansible task. This is where Terraform’s output comes in: I created a bash script to carry that information from one tool to the other. The script also prints the URL of the load balancer so I can test the API call.
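As a sketch, that glue step boils down to something like the following. The output names match the Terraform sketch above rather than the repository, and it assumes jq is installed and SSH access to the instances is already configured.

```bash
#!/usr/bin/env bash
# Glue step: read Terraform's outputs, build an Ansible inventory,
# run the playbook, and print the load balancer address.
set -euo pipefail

# One instance IP per line makes a perfectly valid (ungrouped) inventory.
terraform output -json instance_ips | jq -r '.[]' > inventory

# Configure every instance in the inventory.
ansible-playbook -i inventory playbook.yml

# Print the URL to hit once the containers are up.
LB_IP=$(terraform output -json load_balancer_ip | jq -r '.')
echo "API is reachable at: http://${LB_IP}/"
```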

Conclusion

Running these two tools in series, I was able to automate the deployment of a simple API to a load-balanced group of instances. This was a truly fascinating project for me. With more and more processes shifting to the cloud, it is important for an organization to be able to set up an environment quickly and dependably. I hope you enjoyed reading. I definitely want to build on this in the future, so if you have any ideas on how to take it to the next level, let me know!

