
A crash course on Terraform


Yevgeniy Brikman

JUL 11, 2022 | 44 min read
This is part 4 of the Docker, Kubernetes, Terraform, and AWS crash course series. In part 3, you learned how to deploy EC2 instances and EKS clusters in AWS by clicking around the AWS Console, which is great for learning and testing, but not suitable for production. In production, you’ll want to deploy and manage all of your infrastructure as code. This post will teach you the basics of Terraform, one of the most popular infrastructure as code tools in the world, by going through a lightning quick crash course where you learn by doing. This course is designed for newbies, starting at zero, and building your mental model step-by-step through simple examples you run on your computer to do something useful with Terraform — in minutes. If you want to go deeper, there are also links at the end of the post to more advanced resources.

Terraform is an open source tool created by HashiCorp that allows you to define your infrastructure as code (IaC) using a simple, declarative language and to deploy and manage that infrastructure across a variety of public cloud providers (e.g., Amazon Web Services, Microsoft Azure, Google Cloud Platform, DigitalOcean) and private cloud and virtualization platforms (e.g., OpenStack, VMWare) using a few commands. Instead of clicking around a web UI, the idea behind IaC is to write code to define, provision, and manage your infrastructure. This has a number of benefits:
  • You can automate your entire provisioning and deployment process, which makes it much faster and more reliable than any manual process.
  • You can represent the state of your infrastructure in source files that anyone can read rather than in a sysadmin’s head.
  • You can store those source files in version control, which means the entire history of your infrastructure is now captured in the commit log, which you can use to debug problems, and if necessary, roll back to older versions.
  • You can validate each infrastructure change through code reviews and automated tests.
  • You can create (or buy) a library of reusable, documented, battle-tested infrastructure code that makes it easier to scale and evolve your infrastructure.
Terraform is one of the most popular IaC tools out there: according to HashiCorp, it has been downloaded more than 100 million times, has more than 1,500 open source contributors, and is in use at ~79% of Fortune 500 companies. So it’s well worth your time to learn how to use it. Let’s get started!

Follow the instructions here to install Terraform. When you’re done, you should be able to run the terraform command:
$ terraform
Usage: terraform [-version] [-help] <command> [args]
(...)
In this tutorial, you will use Terraform to deploy resources in an AWS account; you can use the same AWS account and credentials you created in part 3 of this series. In order for Terraform to be able to make changes in your AWS account, you will need to authenticate to AWS on the command line. One of the easiest ways to do this is to configure those credentials using the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (see here for other ways to authenticate to AWS from the CLI). For example, here is how you do it in a Unix/Linux/macOS terminal:
$ export AWS_ACCESS_KEY_ID=(your access key id)
$ export AWS_SECRET_ACCESS_KEY=(your secret access key)
And here is how you do it in a Windows command terminal:
$ set AWS_ACCESS_KEY_ID=(your access key id)
$ set AWS_SECRET_ACCESS_KEY=(your secret access key)
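If you're using PowerShell rather than the classic Windows command prompt, the equivalent (shown here as a sketch) is to set the environment variables via the $env: drive:

```powershell
# Set AWS credentials for the current PowerShell session
$env:AWS_ACCESS_KEY_ID = "(your access key id)"
$env:AWS_SECRET_ACCESS_KEY = "(your secret access key)"
```

Note that, in all of these shells, the variables only apply to the current terminal session; you'll need to set them again in each new session.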

In part 3 of this series, you deployed a virtual server in AWS by clicking around the AWS Console. Let’s now do the same thing, but using Terraform code.

A note on cost: The examples in this part of the tutorial use the AWS free tier, so if you haven’t used up all your credits, and clean up as instructed, this shouldn’t cost you anything.


Terraform code is written in a language called HCL in files with the extension .tf. It is a declarative language, so your goal is to describe the infrastructure you want, and Terraform will figure out how to create it. The first step to using Terraform is typically to configure the provider(s) you want to use. Create a folder called server and create a file in it called main.tf with the following code in it:
provider "aws" {
  region = "us-east-2"
}
This tells Terraform that you are going to be using the AWS provider and that you wish to deploy your infrastructure in the us-east-2 region (the same region you used in part 3 of the series). For each provider, there are many different kinds of resources you can create, such as servers, databases, and load balancers. Add the following code to main.tf, which uses the aws_instance resource to deploy a virtual server (EC2 Instance):
resource "aws_instance" "example" {
  ami           = "ami-02f3416038bdb17fb"
  instance_type = "t2.micro"
  key_name      = "<YOUR KEY PAIR NAME>"

  tags = {
    Name = "terraform-example"
  }
}
The general syntax for a Terraform resource is:
resource "<PROVIDER>_<TYPE>" "<NAME>" {
  [CONFIG …]
}
Where PROVIDER is the name of a provider (e.g., aws), TYPE is the type of resources to create in that provider (e.g., instance), NAME is an identifier you can use throughout the Terraform code to refer to this resource (e.g., example), and CONFIG consists of one or more arguments that are specific to that resource (e.g., ami = "ami-0c55b159cbfafe1f0"). For the aws_instance resource, there are many different arguments, but for now, you only need to set the following ones:
  • ami: The Amazon Machine Image (AMI) to run on the EC2 Instance. The preceding code sets the ami parameter to the ID of the same free Ubuntu AMI you used in part 3 of the series.
  • instance_type: The type of EC2 Instance to run. Each type of EC2 Instance provides a different amount of CPU, memory, disk space, and networking capacity. The preceding example uses t2.micro, which has one virtual CPU, 1GB of memory, and is part of the AWS free tier.
  • key_name: The Key Pair to associate with the instance. You’ll be able to use this Key Pair to SSH to the instance. You should fill in the name of the Key Pair you created in part 3 of the series.
  • tags: Tags to apply to the instance. The preceding code sets a Name tag to make it easier to identify the instance.
In a terminal, go into the folder where you created main.tf, and run the terraform init command:
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v4.21.0...
- Installed hashicorp/aws v4.21.0 (signed by HashiCorp)

(...)

Terraform has been successfully initialized!
The terraform binary contains the basic functionality for Terraform, but it does not come with the code for any of the providers (e.g., the AWS provider, Azure provider, GCP provider, etc.), so when first starting to use Terraform, you need to run terraform init to tell Terraform to scan the code, figure out what providers you're using, and download the code for them. Now that you have the provider code downloaded, run the terraform plan command:
$ terraform plan

Terraform will perform the following actions:

# aws_instance.example will be created
+ resource "aws_instance" "example" {
    + ami            = "ami-02f3416038bdb17fb"
    + id             = (known after apply)
    + instance_state = (known after apply)
    + instance_type  = "t2.micro"
    + tags           = {
        + "Name" = "terraform-example"
      }
    (...)
  }

Plan: 1 to add, 0 to change, 0 to destroy.
The plan command lets you see what Terraform will do before actually doing it. This is a great way to sanity check your changes before unleashing them onto the world. The output of the plan command is a little like the output of the diff command: resources with a plus sign (+) are going to be created, resources with a minus sign (-) are going to be deleted, and resources with a tilde sign (~) are going to be modified in-place. In the output above, you can see that Terraform is planning on creating a single EC2 Instance and nothing else, which is exactly what we want. To actually create the instance, run the terraform apply command:
$ terraform apply

Terraform will perform the following actions:

# aws_instance.example will be created
+ resource "aws_instance" "example" {
    + ami            = "ami-02f3416038bdb17fb"
    + id             = (known after apply)
    + instance_state = (known after apply)
    + instance_type  = "t2.micro"
    + tags           = {
        + "Name" = "terraform-example"
      }
    (...)
  }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
You'll notice that the apply command shows you the same plan output and asks you to confirm if you actually want to proceed with this plan. So while plan is available as a separate command, it's mainly useful for quick sanity checks and during code reviews, and most of the time you'll run apply directly and review the plan output it shows you. Type in "yes" and hit enter to deploy the EC2 Instance:
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.example: Creating…
aws_instance.example: Still creating… [10s elapsed]
aws_instance.example: Still creating… [20s elapsed]
aws_instance.example: Creation complete after 24s

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Congrats, you've just deployed a server with Terraform! To verify this, you can log in to the EC2 console, where you should see your newly created terraform-example instance running.
And there you go, you have a basic virtual server running in AWS! In part 3 of the series, after launching an instance in the AWS Console, you were able to connect to it over SSH. To connect to the server you launched with Terraform, you'll need to make the following changes to your code:
  1. Create a security group
  2. Add input variables
  3. Add output variables

By default, AWS does not allow any incoming or outgoing traffic from an EC2 instance, so to be able to connect via SSH, you need to create a security group (firewall) that allows access on port 22 (the default SSH port). Go back to main.tf and add the aws_security_group resource as follows:
resource "aws_security_group" "instance" {
  name = "terraform-example"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
This code creates a new resource of type aws_security_group (notice how all resources for the AWS provider start with aws_) and specifies that this group allows incoming TCP requests on port 22 from the CIDR block 0.0.0.0/0 (CIDR blocks are a concise way to specify IP address ranges, and 0.0.0.0/0 means "any IP").

Simply creating a security group isn't enough; you also need to tell the EC2 instance to actually use that group by passing the ID of the security group into the vpc_security_group_ids argument of the aws_instance resource. To do that, you first need to learn about Terraform expressions.

An expression in Terraform is anything that returns a value. You've already seen the simplest type of expressions, literals, such as strings (e.g., "ami-02f3416038bdb17fb") and numbers (e.g., 5). One particularly useful type of expression is a reference, which allows you to access values from other parts of your code. To access the ID of the security group resource, you are going to need to use a resource attribute reference, which uses the following syntax:
<PROVIDER>_<TYPE>.<NAME>.<ATTRIBUTE>
Where PROVIDER is the name of the provider (e.g., aws), TYPE is the type of resource (e.g., security_group), NAME is the name of that resource (e.g., the security group is named "instance"), and ATTRIBUTE is either one of the arguments of that resource (e.g., name) or one of the attributes exported by the resource (you can find the list of available attributes in the documentation for each resource—e.g., here are the attributes for aws_security_group). The security group exports an attribute called id, so the expression to reference it will look like this:
aws_security_group.instance.id
You can use this security group ID in the vpc_security_group_ids parameter of the aws_instance:
resource "aws_instance" "example" {
  ami                    = "ami-02f3416038bdb17fb"
  instance_type          = "t2.micro"
  key_name               = "<YOUR KEY PAIR NAME>"
  vpc_security_group_ids = [aws_security_group.instance.id]

  tags = {
    Name = "terraform-example"
  }
}
When you add a reference from one resource to another, you create an implicit dependency. Terraform parses these dependencies, builds a dependency graph from them, and uses that to automatically figure out in what order it should create resources. For example, if you were to deploy this code from scratch, Terraform would know it needs to create the security group before the EC2 Instance, since the EC2 Instance references the ID of the security group. When Terraform walks your dependency tree, it will create as many resources in parallel as it can, which means it can apply your changes fairly efficiently. That’s the beauty of a declarative language: you just specify what you want and Terraform figures out the most efficient way to make it happen.
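If you're curious what this dependency graph looks like, Terraform can render it for you: the terraform graph command emits the graph in DOT format, which you can turn into an image with a tool such as Graphviz (assuming you have it installed). A quick sketch:

```shell
# Emit the dependency graph of the current configuration in DOT format
$ terraform graph

# Render it as a PNG with Graphviz's dot tool (if Graphviz is installed)
$ terraform graph | dot -Tpng > graph.png
```

For the code in this section, the rendered graph would show an edge from aws_instance.example to aws_security_group.instance, capturing the implicit dependency created by the reference.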

You may have noticed that your Terraform code has some duplication in it: the name terraform-example and the SSH port 22 are both copy/pasted in a few places. This violates the Don't Repeat Yourself (DRY) principle: every piece of knowledge must have a single, unambiguous, authoritative representation within a system. If you have the port number copy/pasted in two places, it's too easy to update it in one place but forget to make the same change in the other place. To allow you to make your code more DRY and more configurable, Terraform allows you to define input variables. The syntax for declaring a variable is:
variable "NAME" {
  [CONFIG ...]
}
The body of the variable declaration can contain three parameters, all of them optional:
  • description: It’s always a good idea to use this parameter to document how a variable is used. Your teammates will not only be able to see this description while reading the code, but also when running the plan or apply commands (you’ll see an example of this shortly).
  • default: There are a number of ways to provide a value for the variable, including passing it in at the command line (using the -var option), via a file (using the -var-file option), or via an environment variable (Terraform looks for environment variables of the name TF_VAR_<variable_name>). If no value is passed in, the variable will fall back to this default value. If there is no default value, Terraform will interactively prompt the user for one.
  • type: This allows you to enforce type constraints on the variables a user passes in. Terraform supports a number of type constraints, including string, number, bool, list, map, set, object, tuple, and any. If you don’t specify a type, Terraform assumes the type is any.
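To make the value-passing mechanisms from the default bullet concrete, here is a sketch of the three ways to provide a value (the prod.tfvars file name and the ssh_port variable are assumptions for illustration):

```shell
# 1. Pass a value at the command line with -var
$ terraform plan -var "ssh_port=2222"

# 2. Pass values from a file with -var-file
#    (e.g., a hypothetical prod.tfvars containing: ssh_port = 2222)
$ terraform plan -var-file="prod.tfvars"

# 3. Pass a value via an environment variable named TF_VAR_<variable_name>
$ export TF_VAR_ssh_port=2222
$ terraform plan
```

If a variable has no value from any of these sources and no default, Terraform will prompt for one interactively.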
Add the following input variables to a new file called server/variables.tf (Terraform doesn’t care about file names, so you could put everything into main.tf, but using a convention such as putting all input variables in a variables.tf file can make it easier for your team members to navigate the code):
variable "name" {
  description = "The name used to namespace all resources"
  type        = string
  default     = "terraform-example"
}

variable "ami" {
  description = "The AMI to run on the instance"
  type        = string
  default     = "ami-02f3416038bdb17fb"
}

variable "instance_type" {
  description = "The instance type to use"
  type        = string
  default     = "t2.micro"
}

variable "key_name" {
  description = "The Key Pair to associate with the EC2 instance"
  type        = string
  default     = "<YOUR KEY PAIR NAME>"
}

variable "ssh_port" {
  description = "Open SSH access on this port"
  type        = number
  default     = 22
}

variable "allow_ssh_from_cidrs" {
  description = "Allow SSH access from these CIDR blocks"
  type        = list(string)
  default     = ["0.0.0.0/0"]
}
To use the value from an input variable in your Terraform code, you can use a new type of expression called a variable reference, which has the following syntax:
var.<VARIABLE_NAME>
Back in server/main.tf, update the aws_instance resource to use the variables you just declared:
resource "aws_instance" "example" {
  ami                    = var.ami
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.instance.id]

  tags = {
    Name = var.name
  }
}
And update aws_security_group to use those variables, too:
resource "aws_security_group" "instance" {
  name = var.name

  ingress {
    from_port   = var.ssh_port
    to_port     = var.ssh_port
    protocol    = "tcp"
    cidr_blocks = var.allow_ssh_from_cidrs
  }
}

In addition to input variables, Terraform also allows you to define output variables that show the user values after apply completes. Output variables have the following syntax:
output "<NAME>" {
  value = <VALUE>

  [CONFIG ...]
}
The NAME is the name of the output variable and VALUE can be any Terraform expression that you would like to output. The CONFIG can contain two additional parameters, both optional:
  • description: It’s always a good idea to use this parameter to document what type of data is contained in the output variable.
  • sensitive: Set this parameter to true to tell Terraform not to log this output at the end of terraform apply. This is useful if the output variable contains sensitive data, such as passwords or private keys.
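For example, here is a sketch of what a sensitive output might look like (the aws_db_instance resource and its password attribute are used purely as a hypothetical illustration; they are not part of the code in this tutorial):

```
# Hypothetical example: mark a secret so Terraform redacts it in CLI output
output "db_password" {
  value       = aws_db_instance.example.password
  description = "The password for the database"
  sensitive   = true
}
```

With sensitive = true, terraform apply prints the output's value as (sensitive value) rather than the actual secret, though the value is still stored in Terraform state.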
Add the following output variables to a new server/outputs.tf file (this is the file naming convention typically used for output variables):
output "public_ip" {
  value       = aws_instance.example.public_ip
  description = "The public IP of the server"
}

output "instance_id" {
  value       = aws_instance.example.id
  description = "The ID of the server"
}
Note that all these output variables use attribute references again, this time referencing the public IP and ID of the aws_instance resource.

If you run the apply command now, you should see something like this:
Terraform will perform the following actions:

# aws_instance.example will be updated in-place
~ resource "aws_instance" "example" {
      id                     = "i-0647517529f69d8b3"
    ~ vpc_security_group_ids = [
        - "sg-871fa9ec",
      ] -> (known after apply)
  }

# aws_security_group.instance will be created
+ resource "aws_security_group" "instance" {
    + id      = (known after apply)
    + ingress = [
        + {
            + cidr_blocks      = [
                + "0.0.0.0/0",
              ]
            + description      = ""
            + from_port        = 22
            + ipv6_cidr_blocks = []
            + prefix_list_ids  = []
            + protocol         = "tcp"
            + security_groups  = []
            + self             = false
            + to_port          = 22
          },
      ]
    + name    = "terraform-example"
    (...)
  }

Plan: 1 to add, 1 to change, 0 to destroy.

Changes to Outputs:
  + instance_id = "i-0647517529f69d8b3"
  + public_ip   = "18.222.133.175"

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
Terraform keeps track of all the resources it already created for this set of configuration files, so it knows your EC2 Instance already exists (notice Terraform says "Refreshing state…" when you run the apply command), and it can show you a diff between what's currently deployed and what's in your Terraform code. So the code you write in Terraform can be used not only to deploy infrastructure, but also to manage and update it over time! The preceding plan output shows that Terraform wants to (a) create a security group, (b) update the instance to use the security group, and (c) add two output variables. These are the exact changes you need, so type in "yes" and hit enter:
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_security_group.instance: Creating...
aws_security_group.instance: Creation complete after 4s
aws_instance.example: Modifying...
aws_instance.example: Modifications complete after 2s

Apply complete! Resources: 1 added, 1 changed, 0 destroyed.

Outputs:

instance_id = "i-0647517529f69d8b3"
public_ip = "18.222.133.175"
Copy and paste the public IP address you see in the outputs and SSH to the instance using your Key Pair from part 3 of the series as follows:
ssh -i <YOUR PRIVATE KEY> ubuntu@<IP ADDRESS>
Your SSH client will tell you it can’t establish the authenticity of the host and prompt you if you want to continue. Enter yes and you should see something like this:
Welcome to Ubuntu 22.04.2 LTS!

ubuntu:$
Alright, you are now connected to a virtual server running in AWS that you deployed using Terraform! You can commit your Terraform code to version control, share it with your team, and you can all use it to manage your infrastructure.

When you’re done experimenting with this server, it’s a good idea to remove all the resources you created so AWS doesn’t charge you for them. Since Terraform keeps track of what resources you created, cleanup is simple. All you need to do is run the terraform destroy command:
$ terraform destroy

aws_security_group.instance: Refreshing state...
aws_instance.example: Refreshing state...

Terraform will perform the following actions:

# aws_instance.example will be destroyed
- resource "aws_instance" "example" {
    - ami = "ami-02f3416038bdb17fb" -> null
    - id  = "i-0647517529f69d8b3" -> null
    (...)
  }

# aws_security_group.instance will be destroyed
- resource "aws_security_group" "instance" {
    - id = "sg-0e68187850d58ad94" -> null
    (...)
  }

Plan: 0 to add, 0 to change, 2 to destroy.

Changes to Outputs:
  - instance_id = "i-0647517529f69d8b3" -> null
  - public_ip   = "18.222.133.175" -> null

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
Once you type in “yes” and hit enter, Terraform will build the dependency graph and delete all the resources in the right order, using as much parallelism as possible. In about a minute, everything you deployed using Terraform should be cleaned up for you.

In part 3 of this series, after deploying a virtual server using the AWS Console, you deployed a Kubernetes (EKS) cluster using the Console. You can automate that process using Terraform, too. Instead of writing all that code from scratch, however, you’ll use some Terraform modules to do it for you. And to do that, you’ll first need to understand what a Terraform module is.

In a general-purpose programming language, such as Ruby, if you had the same code copied and pasted in several places, you could put that code inside of a function and reuse that function everywhere:
def example_function()
  puts "Hello, World"
end

# Other places in your code
example_function()
With Terraform, you can put your code inside of a Terraform module and reuse that module in multiple places throughout your code.

A Terraform module is very simple: any set of Terraform configuration files in a folder is a module. The Terraform code you wrote earlier in this blog post was technically a module, although not a particularly interesting one, since you deployed it directly (the module in the current working directory is called the root module). To see what modules are really capable of, you have to use one module from another module.

The server code you wrote in the previous section was in a folder called server. To use server as a module, you should make one minor change to it: remove the provider block from the code. Typically, you only define the provider blocks in the root module. You can now make use of the server module in a new root module. The syntax for using a module is:
module "<NAME>" {
  source = "<SOURCE>"

  [CONFIG ...]
}
Where NAME is an identifier you can use throughout the Terraform code to refer to this module, SOURCE is the path where the module code can be found, and CONFIG consists of one or more arguments that are specific to that module. For example, you can create a new root module in a new module-example folder, with the following code in module-example/main.tf:
provider "aws" {
  region = "us-east-2"
}

module "server_1" {
  source = "../server"
}
And there you go, you are now using the code in the server folder as a module! In fact, you could choose to use the code multiple times by adding multiple module blocks:
provider "aws" {
  region = "us-east-2"
}

module "server_1" {
  source = "../server"
}

module "server_2" {
  source = "../server"
}

module "server_3" {
  source = "../server"
}
You’re now able to reuse all the code and logic in the server module without any copy/paste. To see it in action, first, run terraform init (you have to run init any time you add a module block to your code):
$ terraform init

Initializing modules...
- server_1 in ../server
- server_2 in ../server
- server_3 in ../server

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v4.22.0...
- Installed hashicorp/aws v4.22.0 (signed by HashiCorp)
If you run terraform plan, you’ll see the following output (the snippet below is very truncated to fit in this blog post):
$ terraform plan

# module.server_1.aws_instance.example will be created
+ resource "aws_instance" "example" {
    + tags = {
        + "Name" = "terraform-example"
      }
    (...)
  }

# module.server_1.aws_security_group.instance will be created
+ resource "aws_security_group" "instance" {
    + name = "terraform-example"
    (...)
  }

# module.server_2.aws_instance.example will be created
+ resource "aws_instance" "example" {
    + tags = {
        + "Name" = "terraform-example"
      }
    (...)
  }

# module.server_2.aws_security_group.instance will be created
+ resource "aws_security_group" "instance" {
    + name = "terraform-example"
    (...)
  }

# module.server_3.aws_instance.example will be created
+ resource "aws_instance" "example" {
    + tags = {
        + "Name" = "terraform-example"
      }
    (...)
  }

# module.server_3.aws_security_group.instance will be created
+ resource "aws_security_group" "instance" {
    + name = "terraform-example"
    (...)
  }

Plan: 6 to add, 0 to change, 0 to destroy.
The plan output shows that the three copies of the module are creating three EC2 instances and three security groups, which is exactly what you’d expect. But there are two problems:
  1. All three security groups have the exact same name parameter, whereas AWS requires each security group to have a unique name.
  2. All three EC2 instances have the same Name tag, which is allowed by AWS, but will make them hard to tell apart.
How can you specify a different name for each usage of the module? To make a function configurable in a general-purpose programming language, such as Ruby, you can add input parameters to that function:
def example_function(param1, param2)
  puts "Hello, #{param1} #{param2}"
end

# Other places in your code
example_function("foo", "bar")
In Terraform, modules can have input parameters, using a mechanism you’re already familiar with: input variables. In fact, your server module already has an input variable called name that you can use to set a different security group name and Name tag in each module! You just need to set the name parameter to a different value in each module block:
module "server_1" {
  source = "../server"
  name   = "server-1"
}

module "server_2" {
  source = "../server"
  name   = "server-2"
}

module "server_3" {
  source = "../server"
  name   = "server-3"
}
If you run plan again, you’ll again see three EC2 instances and three security groups, but this time, they will all have unique names. You could choose to set any of the other input variables exposed by your server module too. For example, here is how you could configure a larger instance type for server_2 and disable SSH access for server_3:
module "server_1" {
  source = "../server"
  name   = "server-1"
}

module "server_2" {
  source        = "../server"
  name          = "server-2"
  instance_type = "t2.small"
}

module "server_3" {
  source               = "../server"
  name                 = "server-3"
  key_name             = null
  allow_ssh_from_cidrs = []
}
As you can see, you set input variables for a module using the same syntax as setting arguments for a resource. The input variables are the API of the module, controlling how it will behave in different usages. The output variables are also part of the module's API. In a general-purpose programming language, such as Ruby, functions can return values:
def example_function(param1, param2)
  return "Hello, #{param1} #{param2}"
end

# Other places in your code
return_value = example_function("foo", "bar")
In Terraform, a module can also return values. Again, this is done using a mechanism you already know: output variables. You can access module output variables the same way as resource output attributes. The syntax is:
module.<MODULE_NAME>.<OUTPUT_NAME>
So, if you want to be able to see, for example, the public IPs of the three servers, create module-example/outputs.tf with the following contents:
output "server_1_public_ip" {
  value = module.server_1.public_ip
}

output "server_2_public_ip" {
  value = module.server_2.public_ip
}

output "server_3_public_ip" {
  value = module.server_3.public_ip
}
Try running apply in module-example, and you should see three servers get deployed:
$ terraform apply

(...)

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

Outputs:

server_1_public_ip = "3.140.216.236"
server_2_public_ip = "3.145.117.98"
server_3_public_ip = "3.15.166.52"
When you’re done experimenting, run destroy to clean everything up.

Now that you understand how modules work, let’s use a module to deploy an EKS cluster.

A note on cost: EKS is not part of the AWS free tier. Moreover, the worker nodes you’ll deploy in this tutorial are also not part of the AWS free tier. Therefore, this part of the tutorial may cost you money, albeit, not too much: as of July, 2022, EKS costs $0.10 per hour, and the two worker nodes you’ll launch in this tutorial are about $0.01 per hour each, so even if you run this code for 4 hours, it’ll still cost you less than 50 cents.

The 3rd edition of Terraform: Up & Running (which is the basis for the blog post series you're reading now) contains a very simple module for deploying Amazon's Elastic Kubernetes Service (EKS) in the code samples GitHub repo for the book: the eks-cluster module. This module will deploy more or less the exact same EKS cluster you configured manually in part 3 of this blog post series, with a basic control plane and a managed node group.

You've already seen how to use a module on your computer by setting the source URL to a local file path, but Terraform also supports many other types of module source URLs, including GitHub URLs, so you can use the eks-cluster module directly from the code samples GitHub repo by setting the source URL as follows (put the code below into a main.tf file in a new eks folder):
provider "aws" {
  region = "us-east-2"
}

module "eks_cluster" {
  source = "github.com/brikis98/terraform-up-and-running-code//code/terraform/07-working-with-multiple-providers/modules/services/eks-cluster?ref=v0.3.0"
}
A few notes on the source URL in the preceding code:
  • One line: Medium (our blogging platform) wraps code, but the source URL should be entirely on one line.
  • Double-slash: The double-slash in the URL between terraform-up-and-running-code and code/terraform/... is intentional and required. The part of the source URL to the left of the double-slash is the repo; the part to the right of the double-slash is the folder path within the repo that contains the module to use.
  • Ref param: The ref=v0.3.0 parameter tells Terraform to use the v0.3.0 tag from the repo. That way, you are using a known, fixed version of the code, instead of just fetching the latest from some branch each time.
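GitHub is just one supported source type; Terraform can also fetch modules from the public Terraform Registry, which uses a shorter source syntax plus a separate version argument. As a sketch (this uses the community terraform-aws-modules/vpc module purely as an illustration; the version number and inputs shown are assumptions, so check the Registry page for current values):

```
# Hypothetical example of a Terraform Registry module source
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.0"

  name = "example-vpc"
  cidr = "10.0.0.0/16"
}
```

As with the ref parameter for Git URLs, pinning an explicit version ensures you get a known, fixed copy of the module code rather than whatever happens to be latest.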
In addition to the source URL, you’ll want to set some input variables for the module. You already know how to check the variables.tf file in a module to see what input variables it exposes in its API. For the purposes of this tutorial you can just set the following variables:
module "eks_cluster" {
  source = "github.com/brikis98/terraform-up-and-running-code//code/terraform/07-working-with-multiple-providers/modules/services/eks-cluster?ref=v0.3.0"

  name = "terraform-learning"

  min_size     = 2
  max_size     = 2
  desired_size = 2

  # Due to the way EKS works with ENIs, t3.small is the smallest
  # instance type that can be used for worker nodes. If you try
  # something smaller like t2.micro, which only has 4 ENIs,
  # they'll all be used up by system services (e.g., kube-proxy)
  # and you won't be able to deploy your own Pods.
  instance_types = ["t3.small"]
}
Finally, you may want to forward through some output variables in an eks/outputs.tf file:
output "cluster_arn" {
  value       = module.eks_cluster.cluster_arn
  description = "ARN of the EKS cluster"
}

output "cluster_endpoint" {
  value       = module.eks_cluster.cluster_endpoint
  description = "Endpoint of the EKS cluster"
}
Run terraform init and terraform apply to deploy the EKS cluster (EKS clusters take 5–10 minutes to deploy, so please be patient):
$ terraform apply

(...)

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

Outputs:

cluster_arn = "arn:aws:eks:us-east-2:xxx:cluster/terraform-learning"
cluster_endpoint = "https://yyy.zzz.us-east-2.eks.amazonaws.com"
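If you need one of these values again later, you don’t have to re-run apply: Terraform’s terraform output command re-prints any output from the state file. As a sketch (using the placeholder endpoint from the apply output above):

```shell
$ terraform output cluster_endpoint
"https://yyy.zzz.us-east-2.eks.amazonaws.com"

# -raw strips the surrounding quotes, handy for piping into other commands
$ terraform output -raw cluster_endpoint
https://yyy.zzz.us-east-2.eks.amazonaws.com
```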
You can use kubectl to inspect the cluster. First, use the aws eks update-kubeconfig command to automatically update your $HOME/.kube/config file to authenticate to your EKS cluster:
$ aws eks update-kubeconfig \
    --region us-east-2 --name terraform-learning
To see if things are working, use the get nodes command to explore your EKS cluster:
$ kubectl get nodes
NAME                             STATUS   AGE     VERSION
xxx.us-east-2.compute.internal   Ready    3m24s   v1.22.9-eks
yyy.us-east-2.compute.internal   Ready    3m19s   v1.22.9-eks
And there you go, you have a fully-managed EKS cluster running in AWS with a couple of worker nodes!

Now you can see the power of infrastructure modules: someone else can do the hard work of figuring out how to make a piece of infrastructure work, and then you can leverage all that work with just a few lines of code.

Of course, the eks-cluster module you just used is intentionally very simplified to make it easy to use for learning and testing. If you want to use EKS in production, you’ll need to think through many aspects that simple module doesn’t include, such as ingress controllers, secret envelope encryption, security groups, OIDC authentication, RBAC mapping, VPC CNI, kube-proxy, CoreDNS, and so on. All of this, and more, is available in the EKS modules that are part of the Gruntwork Infrastructure as Code Library.

You can use Terraform not only to deploy a Kubernetes cluster, but also to deploy apps into that cluster. The advantage of using Terraform to manage Kubernetes objects over pure YAML (as you saw in part 2 of this series) is that Terraform supports variables, loops, conditionals, modules, state management, and a variety of other software techniques that make it easier to have code reuse, code reviews, automated testing, CI / CD pipelines, and everything else you need to use Kubernetes effectively as a team. Moreover, if you use Terraform to manage the rest of your infrastructure, then using it to manage Kubernetes objects as well makes it easier to integrate everything together.

In addition to the eks-cluster module you saw in the previous section, the 3rd edition of Terraform: Up & Running contains a very simple module for deploying Kubernetes services called k8s-app. This module can deploy the exact same Deployment and Service objects into a Kubernetes cluster as the YAML code from part 2 of the series.

Create a new k8s folder with a main.tf that has the following contents:
provider "kubernetes" {
  config_path = "~/.kube/config"
}

module "simple_webapp" {
  source = "github.com/brikis98/terraform-up-and-running-code//code/terraform/07-working-with-multiple-providers/modules/services/k8s-app?ref=v0.3.0"

  name           = "simple-webapp"
  image          = "training/webapp"
  replicas       = 2
  container_port = 5000
}
A few notes on the preceding code:
  • Kubernetes provider: This is the first Terraform code you’ve seen in this blog post that doesn’t use the AWS provider. Instead, it uses the Kubernetes provider as this code deploys and manages objects in a Kubernetes cluster.
  • Authenticate using kubectl: The Kubernetes provider block is configured to use the same configuration as kubectl for authentication. So if you didn’t run aws eks update-kubeconfig in the previous section, please make sure to do so now!
  • Deploy a “Hello, World” app: This code uses the k8s-app module to deploy the training/webapp image from Docker Hub, which contains a simple Python “Hello, World” web app that listens on port 5000.
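To see why modules beat raw YAML for reuse, note that you could stamp out a second copy of the same app just by calling the module again with different inputs. The simple_webapp_canary block below is a hypothetical sketch for illustration, not something this tutorial asks you to deploy:

```hcl
module "simple_webapp_canary" {
  source = "github.com/brikis98/terraform-up-and-running-code//code/terraform/07-working-with-multiple-providers/modules/services/k8s-app?ref=v0.3.0"

  # Same module, different inputs: a one-replica "canary" copy of the
  # app, deployed alongside the original, without copying any YAML.
  name           = "simple-webapp-canary"
  image          = "training/webapp"
  replicas       = 1
  container_port = 5000
}
```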
You should also forward through an output variable in k8s/outputs.tf so you can easily find the service endpoint after apply finishes:
output "service_endpoint" {
  value       = module.simple_webapp.service_endpoint
  description = "The K8S Service endpoint"
}
Run terraform init and terraform apply in the k8s folder:
$ terraform apply

(...)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

service_endpoint = "http://xxx.us-east-2.elb.amazonaws.com"
Copy the URL in the service_endpoint output and try it out using curl or your web browser:
$ curl http://xxx.us-east-2.elb.amazonaws.com
Hello world!
And there you go, you’ve now used Terraform to deploy a Kubernetes cluster and a Dockerized app in AWS! Once again, you can see the power of reusable infrastructure modules, where you get access to a ton of functionality in just a few lines of code.

However, bear in mind that the k8s-app module you just used is intentionally very simplified to make it easy to use for learning and testing. If you want to deploy Kubernetes apps in production, you’ll need to think through many aspects that simple module doesn’t include, such as secrets management, volumes, liveness probes, readiness probes, labels, annotations, multiple ports, multiple containers, and so on. All of this, and more, is available in the K8S modules that are part of the Gruntwork Infrastructure as Code Library.

When you’re done learning and experimenting, make sure to clean everything up (so AWS doesn’t keep charging you) by running destroy, first in the k8s folder, and then in the eks folder.
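Assuming the k8s and eks folders from this tutorial sit side by side, that cleanup boils down to two commands, run in order:

```shell
# Destroy in reverse order of creation: first the app running in the
# cluster, then the cluster itself. Each command will show a plan and
# ask for confirmation before deleting anything.
cd k8s
terraform destroy

cd ../eks
terraform destroy
```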

This crash course only gives you a tiny taste of Terraform. There are many other things to learn: state management, data sources, loops, conditionals, secrets management, CI / CD, and so on. If you want to go deeper, here are some recommended resources:
  • Terraform: Up & Running: Much of the content from this blog post series comes from the 3rd edition of this book.
  • A Comprehensive Guide to Terraform: Terraform: Up & Running started as this blog post series, so you can think of the series as a free, but shorter and less detailed, version of the content in the book.
  • HashiCorp Learn: Official Terraform learning resources from HashiCorp.
  • Gruntwork Infrastructure as Code Library: A collection of over 300,000 lines of battle-tested, commercially supported and maintained infrastructure code (Terraform, Go, Bash, etc.) that has been proven in production at hundreds of companies.

If you’ve made it this far, then you’ve successfully used Terraform to deploy a Kubernetes cluster and Dockerized apps in AWS. Well done! Hopefully, these technologies no longer seem so scary.

But this is only the beginning. Now that you have the basic mental model, it’ll be much easier to add to it and go deeper. Check out the Further Reading section in each of the blog posts in this series for useful resources. And if you’re interested in off-the-shelf, battle-tested, commercially supported and maintained solutions for Docker, Kubernetes, Terraform, and AWS, make sure to check out Gruntwork!