Creating and Mounting EFS in the ‘html’ Folder for High Availability Using Terraform
Terraform is one of the most widely used Infrastructure as Code (IaC) tools on the market today. In my previous post I created a complete web page setup using Terraform. The best part of Terraform is that it needs minimal human intervention, so far fewer errors :-p.
In this post I will reuse that setup for the web page, but instead of attaching an EBS volume I will attach an EFS file system, which gives high availability and elastic, practically unlimited storage. The file system is mounted in three availability zones, so even if our web server crashes or one availability zone goes offline, the data stays available from the other two zones.
So, without wasting time on theory, let’s jump into the setup.
First things first, create the provider. Create a Terraform file named provider.tf and add these lines:
provider "aws" {
region = "ap-south-1"
}
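This assumes your AWS credentials are already configured on the machine (for example with aws configure). If you use a named AWS CLI profile instead, it can be passed to the provider; a small sketch, where "myprofile" is a placeholder for your own profile name:

provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"   # placeholder: replace with your AWS CLI profile name
}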
The next step is to create the key pair and upload it to AWS. In the terminal, type:
$ ssh-keygen -t rsa -f mykey
To upload the public key using Terraform, add the code below to our keypair.tf file.
resource "aws_key_pair" "key" {
key_name = "mykey"
public_key = file("mykey.pub")
}
Now, it’s time to create the security group, which will allow ports 22 (SSH), 80 (HTTP) and 2049 (NFS). For simplicity I am opening these ports to the world (0.0.0.0/0), but in a real deployment they should be restricted to a specific IP range for better security. Create a Terraform file named sg.tf and add the content below.
resource "aws_security_group" "web-sg" {
name = "web-sg"
description = "Allow port 22 and 80"
vpc_id = "vpc-18819d70"ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}ingress {
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "web-sg"
}
}
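Note that the vpc_id above is specific to my account, so replace it with your own VPC ID. If you are working in the default VPC, an optional sketch (not part of the original files) can look it up with a data source instead of hard-coding it:

# Optional: discover the default VPC so its ID is not hard-coded
data "aws_vpc" "default" {
  default = true
}

# and then, inside the security group resource:
#   vpc_id = data.aws_vpc.default.id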
Next, let’s write the EFS configuration. Create a Terraform file named efs.tf and add the content below.
resource "aws_efs_file_system" "efs" {
creation_token = "efs"
performance_mode = "generalPurpose"
throughput_mode = "bursting"
encrypted = "true"
tags = {
Name = "EFS-File-System"
}
}resource "aws_efs_mount_target" "efs-mt-a" {
file_system_id = aws_efs_file_system.efs.id
subnet_id = "subnet-68711b24"
security_groups = [aws_security_group.web-sg.id]
}resource "aws_efs_mount_target" "efs-mt-b" {
file_system_id = aws_efs_file_system.efs.id
subnet_id = "subnet-7cd46807"
security_groups = [aws_security_group.web-sg.id]
}resource "aws_efs_mount_target" "efs-mt-c" {
file_system_id = aws_efs_file_system.efs.id
subnet_id = "subnet-948386fc"
security_groups = [aws_security_group.web-sg.id]
}
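The three subnet IDs above are the subnets of my VPC, one per availability zone; replace them with your own. As an optional variation (a sketch, assuming the default-VPC data source from the security-group step and AWS provider 3.x, where aws_subnet_ids is available), the subnets can be discovered with a data source and one mount target created per subnet using for_each:

# Discover all subnets of the default VPC (one per availability zone)
data "aws_subnet_ids" "default" {
  vpc_id = data.aws_vpc.default.id
}

# One EFS mount target per discovered subnet, replacing the three hard-coded resources above
resource "aws_efs_mount_target" "efs-mt" {
  for_each        = data.aws_subnet_ids.default.ids
  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = each.value
  security_groups = [aws_security_group.web-sg.id]
}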
Finally, let’s create the EC2 instance. On that instance we will first install git and the httpd service, then mount the EFS on the /var/www/html folder and clone the web page code from Git into it (see the sketch after the instance code). So, create a Terraform file named ec2.tf and put the content below in it.
resource "aws_instance" "myinstance" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = "mykey12345"
security_groups = ["web-sg"]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("mykey12345")
host = aws_instance.myinstance.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
]
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd amazon-efs-utils -y",
"sudo sleep 3m",
"sudo mount -t efs '${aws_efs_file_system.efs.id}':/ /var/www/html",
"sudo su -c \"echo '${aws_efs_file_system.efs.id}:/ /var/www/html nfs4 defaults,vers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0' >> /etc/fstab\"",
]
}
tags = {
Name = "SaumikOS"
}
}output "InstancePIP" {
value = aws_instance.myinstance.public_ip
}
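The provisioners above install git but do not clone anything yet. A minimal sketch of the clone step, added as a third remote-exec provisioner inside the same aws_instance resource (the repository URL is a placeholder; point it at your own web page code):

  provisioner "remote-exec" {
    inline = [
      # placeholder URL: replace with your own web page repository
      "sudo git clone https://github.com/<your-user>/<your-webpage-repo>.git /tmp/webcode",
      # copy the cloned code onto the EFS-backed html folder
      "sudo cp -rf /tmp/webcode/* /var/www/html/",
    ]
  }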
It’s time to create the CloudFront distribution, which acts as a Content Delivery Network (CDN) in front of our S3 bucket. Create a Terraform file for CloudFront and name it cf.tf.
resource "aws_cloudfront_distribution" "cf_distribution" {
origin {
domain_name = aws_s3_bucket.bucket.bucket_regional_domain_name
origin_id = "myweb"custom_origin_config {
http_port = 80
https_port = 80
origin_protocol_policy = "match-viewer"
origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
}
}enabled = true
default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "myweb"forwarded_values {
query_string = falsecookies {
forward = "none"
}
}viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}restrictions {
geo_restriction {
restriction_type = "none"
}
}viewer_certificate {
cloudfront_default_certificate = true
}
}
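Once the distribution is created, its domain name is what you would use to reference image.jpg in the web page. A small optional output (not in the original files) prints it after apply:

output "CloudFrontDomain" {
  value = aws_cloudfront_distribution.cf_distribution.domain_name
}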
Then create a Terraform file that creates the S3 bucket and uploads content to it. Name the file s3.tf.
resource "aws_s3_bucket" "bucket" {
bucket = "saumik-test-bucket"
acl = "public-read"
tags = {
Name = "Code"
Environment = "prod"
}
}resource "aws_s3_bucket_object" "file_upload" {
depends_on = [
aws_s3_bucket.bucket,
]
bucket = "saumik-test-bucket"
key = "image.jpg"
source = "image.jpg"
}resource "aws_s3_bucket_policy" "policy" {
bucket = aws_s3_bucket.bucket.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid":"AllowPublicRead",
"Effect":"Allow",
"Principal": {
"AWS":"*"
},
"Action":"s3:GetObject",
"Resource":"arn:aws:s3:::saumik-test-bucket/*"
}
]
}
POLICY
}
The final step is to download an image in .jpg format, rename it to image.jpg, and keep it in the same folder as the Terraform files so that the source path above resolves.
Now our complete setup is ready. Just run the commands below and see the magic. :-)
$ terraform init
$ terraform plan
$ terraform apply --auto-approve
The terraform apply run will print the instance’s public IP (the InstancePIP output). Go to your browser and paste that IP; if everything is in place, the web page will open.
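If you miss the IP in the apply output, you can print it again at any time:
$ terraform output InstancePIP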
All the Terraform files can be found here:
https://github.com/saumik8763/efs-terraform