How this site was made

I wanted to create a companion website for my YouTube channel Independent Pixels where I could write content related or complementary to the video tutorials I post on the channel.

I knew that a statically generated website would be the best way to start; I was just not sure where to host it. GitHub Pages would definitely be an option; Netlify, Vercel, etc. could be others.

As for the static site generator, I could opt for something fancy like Gatsby, Next, etc. However, when I started playing around with options like Gatsby + Sanity, or Next + Strapi, things just didn’t feel right. It felt like I was doing too much just for the sake of creating a blog.

All things considered, I decided to create this website using Jekyll, a Ruby-based static site generator that is super easy to use and offers what I need at the moment without the hassle or learning curve that other options would impose.

Now, the last piece of the puzzle was the hosting. Where should I host it? I decided to do an experiment (basically shifting the complexity of this task to the backend) and host this website on AWS using S3 buckets, with the infrastructure created using Terraform.

I’m already familiar with all these technologies, so it shouldn’t be too hard to put all the pieces in place. Also, as I’m not expecting lots of traffic to this website at the moment, hosting it on AWS shouldn’t be a problem in terms of costs.

By the time I’m writing this line, I have figured out how to terraform all the necessary infrastructure and also kicked off the initial Jekyll-based website using a template as a starter. Yay! 🎉


AWS Infrastructure

Let’s take a look at what the infrastructure on AWS looks like:

[Image: Schematics of the website's hosting infrastructure on AWS]

It’s a pretty common setup for this type of use case. Perhaps what’s interesting here is the handling of the non-www traffic via an S3 bucket redirect, sketched below. Other than that, the rest of the components are pretty standard.
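
To illustrate, here’s a rough sketch of how that redirect can be expressed in Terraform, using the inline website block of the AWS provider from around the time of writing (the resource name is illustrative):

# Bucket for the bare domain: it serves no content and only
# redirects every request to the www domain over HTTPS.
resource "aws_s3_bucket" "root_redirect" {
  bucket = "codingwithjulio.com"

  website {
    redirect_all_requests_to = "https://www.codingwithjulio.com"
  }
}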

Prerequisites

There are a couple of tools you’ll need if you want to use the Terraform template created for this website.

  • Terraform to automate the creation/update/deletion of the infrastructure.
  • AWS CLI to interact with AWS services.
  • Awsume to facilitate assuming role credentials.

Double check your credentials

Attention ⚠️ Make sure the AWS credentials are properly set. Double check your ENV variables and your ~/.aws/config and ~/.aws/credentials files.
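
For reference, a role-based profile in ~/.aws/config might look like the snippet below (the profile name, role, and region are illustrative placeholders):

[profile myuser-profile]
role_arn = arn:aws:iam::<your account number>:role/<your role name>
source_profile = default
region = us-east-1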

Assume the role you want to use and set the credentials:

$ awsume myuser-profile

Just to be clear, myuser-profile is a fictitious name; you’ll have to use your own profile.

You can check your configuration with awsume by typing:

$ awsume -l

To verify that the right user is configured, you can type:

$ aws sts get-caller-identity

Confirm that the identity returned matches the one you actually want to use. If that’s alright, then we have a green light to proceed. If not, review this process and make sure your credential setup is correct. You don’t want to create infrastructure on an account that is not yours, right? (Just in case you have multiple accounts/profiles configured on your machine.)

Terraform configuration

The infrastructure configuration was split into different files in order to keep everything organised.

.
├── acm.tf
├── cloudfront.tf
├── providers.tf
├── route53.tf
├── s3.tf
├── templates
│   └── s3-policy.json
├── terraform.tfvars
└── variables.tf

The source code for this setup can be downloaded from this GitHub repository.
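
To give an idea of what goes into these files, a minimal sketch of a variable declaration and its value might look like this (the variable name is illustrative; check the repository for the actual definitions):

# variables.tf: declares inputs shared by the other files
variable "domain_name" {
  description = "Root domain of the website"
  type        = string
}

# terraform.tfvars: sets the concrete value for this site
domain_name = "codingwithjulio.com"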

Remote state

I’m also using S3 as the backend for Terraform to store its state. The bucket that will hold the state needs to be created manually, before running the Terraform scripts. I created mine as follows:

[Image: AWS S3 bucket to store Terraform's remote state]
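
On the Terraform side, the backend pointing at this bucket is configured roughly like the sketch below (the key and region are placeholders; adjust them to your setup):

terraform {
  backend "s3" {
    bucket = "<your bucket name>-terraform"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}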

For this bucket you want to block all public access:

[Image: AWS S3 Terraform state bucket blocking all public access]

Then, add a bucket policy granting access to the account root:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<your account number>:root"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<your bucket name>-terraform"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<your account number>:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<your bucket name>-terraform/*"
        }
    ]
}

S3 Account Settings

The next step is to make sure S3 is configured to allow the definition of ACLs and Policies.

By default, it might be configured to block such actions for all buckets you create. If that’s the case, Terraform will fail with an Access Denied error when you try to execute it.

You’ll look at the permissions of the user, group, or role and won’t find anything odd. The issue is that when your S3 account is globally set to block the definition of public access settings, you won’t be able to change them even with admin permissions on your AWS user/role/group.
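
If you want to inspect that account-level setting from the CLI, the s3control API exposes it (use your own account ID):

$ aws s3control get-public-access-block --account-id <your account number>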

The configuration should be as shown below:

[Image: AWS S3 configuration unblocking the public access settings for the account]

Warning ⚠️ Pay attention when creating buckets manually not to leave them publicly accessible over the internet. By disabling the default, you’ll have to ensure that your buckets are properly configured to restrict access to only the resources that should have it.

Certificate validation

The infrastructure uses a DNS-based certificate validation method, meaning that a record will be added to your hosted zone in Route53 so that AWS Certificate Manager can validate it.
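
In Terraform, the certificate request looks roughly like the sketch below (the resource name is illustrative, and the Route53 validation record resources are omitted for brevity):

resource "aws_acm_certificate" "website" {
  domain_name               = "codingwithjulio.com"
  subject_alternative_names = ["www.codingwithjulio.com"]
  validation_method         = "DNS"
}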

Once the infrastructure is created in Route53, you need to go there, access your hosted zone, grab the DNS servers that were provided, and input them into your domain registrar or your DNS server. In my case, I’m using Namecheap, so the configuration is similar to the one below:

[Image: DNS adjustment in Namecheap for the domain to validate the certificate]

Applying the infrastructure

Now we can initialize our Terraform providers and backend and apply the infrastructure on AWS.

$ terraform init

This will initialize the providers and the backend. Once that’s done, type:

$ terraform apply

You’ll be prompted before the changes are actually applied. Review the changes, type “yes”, and hit enter.

Attention ⚠️ Once the Route53 hosted zone is created, head to the AWS Route53 dashboard, access the hosted zone, and copy the DNS server addresses to your domain registrar.

After configuring the DNS, it may take a while for the changes to take effect. The Terraform execution will continue running until it can reach the DNS servers and validate the certificate entry. Be patient!
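
You can check whether the delegation has already propagated with dig, for example:

$ dig +short NS codingwithjulio.com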

Once the certificate is validated, a couple of other resources will be created, and at the end your Terraform execution should complete successfully.

At this point, the infrastructure setup is complete!

(Optional) Destroying the infrastructure (only if you want to get rid of everything)

In case you want to destroy all the infrastructure created, first you’ll have to empty your buckets 🙂, otherwise Terraform won’t be able to delete them.

Friendly reminder: This will delete everything from the S3 buckets, so if you want to keep a backup of something, do it first.

You can use the following command to delete the bucket’s content recursively:

$ aws s3 rm s3://bucket-name --recursive

Remember that two buckets were created: the www bucket and the root bucket. You should empty both.
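
For this website, assuming the bucket names match the domain (as in the Terraform template), that would be:

$ aws s3 rm s3://www.codingwithjulio.com --recursive
$ aws s3 rm s3://codingwithjulio.com --recursive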

Now, we can type:

$ terraform destroy

Confirm the destruction of the resources when prompted, and wait until the process is completed.


Uploading the website to S3
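
As a quick refresher, Jekyll writes the generated site into the _site folder when you build it (assuming the project manages Jekyll through Bundler):

$ bundle exec jekyll build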

With the website created and built on my machine, I can head to its directory and, from the _site folder, type:

$ aws s3 sync . s3://www.codingwithjulio.com

Once the upload is finished, we need to invalidate the CloudFront cache:

$ aws cloudfront create-invalidation --distribution-id E1P7APW92TLS6S --paths "/*"

Where E1P7APW92TLS6S is the ID of the CloudFront distribution associated with your www S3 bucket.
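
If you don’t have the distribution ID at hand, you can list your distributions and their aliases (the JMESPath query just trims the output):

$ aws cloudfront list-distributions --query "DistributionList.Items[].{Id: Id, Alias: Aliases.Items[0]}" --output table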


updated_at 26-05-2021