How to build an automated end-to-end Infrastructure on AWS using Terraform?

Ishita Mittal
6 min read · Jun 15, 2020


Overview:

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can help with multi-cloud by providing one workflow for all clouds. The infrastructure Terraform manages can be hosted on public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform, or on private clouds such as OpenStack or CloudStack. Terraform treats infrastructure as code (IaC), which helps keep your infrastructure from drifting away from its desired configuration.

Plan of Action:

Our plan is to build an automated end-to-end infrastructure on AWS with the help of IaC (Infrastructure as Code). For this, we first need to generate a key-pair and a security group so that we can launch an EC2 instance with an extra attached EBS volume for persistent storage. Then, we need to launch a webserver on the EC2 instance, which will deploy its code from a GitHub repository. Next, we will create an S3 bucket in which we will store some static content (i.e. an image) as an object, again deployed from the GitHub repository; this bucket will also act as the origin for CloudFront. CloudFront will give us a unique URL through which the static content from the S3 bucket is served to the previously launched webserver on the EC2 instance within seconds and with very low latency. This whole infrastructure will be built and automated using Terraform.

# Make sure Terraform is installed on your system and its path is set in environment variables before going through the following steps.

Step-by-Step Walkthrough:

  1. Configuring the profile of the respective user:
  • AWS CLI should be installed on your system to run this command in the command prompt (CMD).
  • Enter your access key ID and secret access key for the respective profile here. (I have already entered mine)
command to configure profile of the respective AWS user
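
For reference, the command looks roughly like this (the profile name 'myprofile' is only a placeholder; enter your own IAM user's credentials when prompted):

aws configure --profile myprofile
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: ap-south-1
Default output format [None]: json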

2. Creating a separate folder to keep all the necessary files in one place, and then writing the code from the following steps in a .tf file created there:

  • A Notepad file named ‘ec2.tf’ will now appear on the screen. We will write the code from the following steps in this file.
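
As a rough sketch (the folder name 'terraform-task' is just an example), the commands on CMD could be:

mkdir terraform-task
cd terraform-task
notepad ec2.tf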

3. Specifying providers:

code to specify provider
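
A minimal provider block, assuming the 'myprofile' profile configured above and the ap-south-1 region used later in this article, might look like this:

provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}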

# STEPS FROM HERE ON ALSO HAVE GLIMPSES OF THE RESULTS ON AWS WebUI AS WE GO THROUGH THEM ONE BY ONE.

4. Launching a key-pair:

  • A key-pair (public key as well as private key) is created using the following command in the command prompt:

ssh-keygen -f key_name

  • I have already created a key-pair named ‘myawskey’ using the above command. Now, let's register this key-pair on AWS as well:
code to launch a key-pair
launched a key-pair on AWS WebUI successfully
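
A sketch of the resource that registers the locally generated public key on AWS (the path to the .pub file is an assumption based on where ssh-keygen saved it):

resource "aws_key_pair" "mykey" {
  key_name   = "myawskey"
  public_key = file("myawskey.pub")   # public key generated by ssh-keygen above
}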

5. Launching a Security group:

code to launch a security group to allow port 80, SSH and git
launched a security-group on AWS WebUI successfully
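
A minimal sketch of such a security group (the name 'websg' is an example): it opens port 22 for SSH and port 80 for HTTP to the world, and the egress rule lets the instance reach out to GitHub for the git clone:

resource "aws_security_group" "websg" {
  name        = "websg"
  description = "Allow SSH and HTTP traffic"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}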

6. Launching an EC2 Instance with key-pair and security groups created in the previous steps:

code to install the required software (git, httpd) and also to start the httpd server
EC2-Instance has been launched on AWS WebUI successfully
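
A sketch of the instance resource, assuming an Amazon Linux 2 AMI (replace the placeholder AMI ID with one valid in your region) and the key-pair and security group created above:

resource "aws_instance" "web" {
  ami             = "ami-xxxxxxxxxxxxxxxxx"   # placeholder: use an Amazon Linux 2 AMI ID for your region
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.mykey.key_name
  security_groups = [aws_security_group.websg.name]

  # connect over SSH using the private key generated earlier
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("myawskey")
    host        = self.public_ip
  }

  # install the required software and start the webserver
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "webserver"
  }
}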

7. Launching the EBS volume in the same availability zone as that of the EC2 instance:

code to launch the ebs volume
ebs volume has been launched in the same availability zone (ap-south-1a) as that of ec2 instance successfully
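
A sketch of the volume resource; deriving the availability zone from the instance guarantees that both live in the same AZ (here ap-south-1a):

resource "aws_ebs_volume" "webvol" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1   # size in GiB

  tags = {
    Name = "web-ebs"
  }
}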

8. Attaching the EBS volume to the EC2 instance and also mounting it to /var/www/html:

code to attach the ebs volume and also to mount it to /var/www/html present in instance
ebs volume has been successfully attached to ec2 instance
ebs volume has been successfully mounted to /var/www/html
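
A sketch of the attachment plus a null_resource that formats and mounts the volume and then clones the webpage code from GitHub (the repository URL is a placeholder for your own repo; /dev/sdh appears as /dev/xvdh inside Amazon Linux):

resource "aws_volume_attachment" "webvol_attach" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.webvol.id
  instance_id  = aws_instance.web.id
  force_detach = true
}

resource "null_resource" "mount_volume" {
  depends_on = [aws_volume_attachment.webvol_attach]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("myawskey")
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",            # format the newly attached volume
      "sudo mount /dev/xvdh /var/www/html",  # mount it on the web root
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/<your-username>/<your-repo>.git /var/www/html/",
    ]
  }
}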

9. Creating an S3 bucket:

code to create an S3 bucket and to make a directory on the local system into which the image is cloned from the GitHub repository; this directory is automatically removed as soon as the infrastructure is destroyed
s3 bucket has been successfully created
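
A sketch of the bucket, with a local-exec provisioner that clones the image from GitHub into a local folder and a destroy-time provisioner that removes that folder again (bucket name and repository URL are placeholders; bucket names must be globally unique, and the acl argument reflects the 2020-era AWS provider):

resource "aws_s3_bucket" "image_bucket" {
  bucket = "my-terraform-image-bucket-123"   # must be globally unique
  acl    = "public-read"

  # clone the image from GitHub into a local folder
  provisioner "local-exec" {
    command = "git clone https://github.com/<your-username>/<your-repo>.git gitimage"
  }

  # remove the local folder when the infrastructure is destroyed
  provisioner "local-exec" {
    when    = destroy
    command = "rmdir /s /q gitimage"   # Windows CMD; use 'rm -rf gitimage' on Linux/macOS
  }
}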

10. Creating an S3 bucket object which is publicly readable:

code to upload the static content, i.e. the image from the Git repository, as an S3 object in the bucket which is publicly readable
s3 object with permission to be read publicly has been successfully created
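
A sketch of the object upload, assuming the image cloned above is named image.jpg (adjust the key and source to your own file):

resource "aws_s3_bucket_object" "image_object" {
  depends_on = [aws_s3_bucket.image_bucket]

  bucket = aws_s3_bucket.image_bucket.bucket
  key    = "image.jpg"            # object name in the bucket
  source = "gitimage/image.jpg"   # the image cloned from GitHub
  acl    = "public-read"
}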

11. Creating a CloudFront distribution with S3 as origin:

code to create a CloudFront distribution with S3 as origin and to use the CloudFront URL to update the code in /var/www/html
CloudFront distribution has been successfully created with S3 as origin
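
A minimal sketch of the distribution with the bucket as origin, followed by a null_resource that appends an <img> tag pointing at the CloudFront domain to the webpage (the origin ID and object name are assumptions matching the earlier sketches):

resource "aws_cloudfront_distribution" "image_cdn" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.image_bucket.bucket_regional_domain_name
    origin_id   = "s3-image-origin"
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "s3-image-origin"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# append an <img> tag pointing at the CloudFront URL to the webpage
resource "null_resource" "update_webpage" {
  depends_on = [aws_cloudfront_distribution.image_cdn, null_resource.mount_volume]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("myawskey")
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo '<img src=\"https://${aws_cloudfront_distribution.image_cdn.domain_name}/image.jpg\">' | sudo tee -a /var/www/html/index.html",
    ]
  }
}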

12. Creating a null resource to execute the command that displays the webpage on the local system.

This will display our webpage in Chrome.
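
A sketch of that null resource; 'start chrome' works in the Windows command prompt (use your platform's equivalent otherwise):

resource "null_resource" "open_webpage" {
  depends_on = [null_resource.update_webpage]

  # open the deployed webpage in Chrome on the local system
  provisioner "local-exec" {
    command = "start chrome http://${aws_instance.web.public_ip}/"
  }
}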

SOME BASIC TERRAFORM COMMANDS:

terraform init : initializes a working directory containing Terraform configuration files.

terraform apply : applies the changes required to reach the desired state of the configuration.

terraform validate : validates the configuration files in a directory, referring only to the configuration and not accessing any remote services such as remote state, provider APIs, etc.

terraform apply -auto-approve : applies the changes while skipping the interactive approval of the plan.

terraform destroy : destroys the Terraform-managed infrastructure.

terraform destroy -auto-approve : destroys the infrastructure without asking for confirmation.

ALL THE STEPS REQUIRED TO BUILD THE INFRASTRUCTURE HAVE BEEN COMPLETED, AND NOW WE ALSO KNOW THE BASIC TERRAFORM COMMANDS.

SO, LET’S FIRST INITIALIZE AND THEN RUN THE CODE:

successfully ran command : terraform init
successfully ran command : terraform apply -auto-approve

NOW, OUR WEBPAGE WILL AUTOMATICALLY APPEAR IN CHROME.

LOOKING GOOD, RIGHT?

NOW, LET’S DESTROY ALL THE BUILT INFRASTRUCTURE BY JUST ONE COMMAND.

successfully ran command : terraform destroy -auto-approve

GitHub code for your reference:

html code to create webpage

So, this is how we can build an entire end-to-end automated infrastructure on AWS using Terraform.

THANK YOU!!

A special thanks to World Record Holder Vimal Daga sir for his extraordinary teaching skills and for providing a platform where we can develop ourselves and learn new technologies and their integrations, drawing on his years of hard work and research. I consider myself really lucky to be a part of his trainings, where I get to improve myself and learn new and exciting things every day.
