Cloud

TL;DR

  • Use Route53 to register your domain and handle your DNS.
  • Use S3 to make two buckets: example.com and www.example.com. The former redirects to the latter.
  • Use ACM (AWS Certificate Manager) to manage your HTTPS/SSL cert.
  • Use Cloudfront to serve your site over HTTPS.
  • Use Hugo / Jekyll or whatever you like to create your site.
  • Use awscli to upload your site with your preferred cache settings.
  • Laugh all the way to the bank as your blog is safe from the occasional ‘slashdotting’ (high-traffic day) for pennies a month.

Domain registration

First, register your domain using Route 53.

Helpfully, this will automatically create a Route 53 ‘hosted zone’ for your domain.

S3

Next you need to make two S3 buckets.

One bucket will be your ‘apex’ domain (eg example.com). The other bucket will be your www domain (eg www.example.com). The apex will redirect readers to the www subdomain. Some may prefer vice-versa, but it’s trickier to configure CNAME-style aliases on an apex DNS A record so I’ll leave that to smarter folk than me, and stick with www.

To create the buckets, follow these steps:

  • Go to the S3 console
  • Select whichever region you think is closest to your target audience
  • Click ‘Create bucket’
  • Bucket name: example.com (this will be your domain name, not literally ‘example’)
  • Click ‘Next’ 3 times, leaving the options and permissions to their standard values
  • Click ‘Create bucket’
  • Repeat the above steps for www.example.com

Next, configure the apex bucket (to be honest, you could have done this while creating the buckets):

  • Click the example.com bucket name (not the checkbox next to it)
  • Click Properties > Static website hosting
  • Select ‘Redirect requests’
  • For ‘Target bucket’ enter www.example.com
  • For ‘Protocol’ enter https
  • Click Save
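
As an aside, this redirect is just a small ‘website configuration’ on the bucket, so it can be scripted too. Here’s a minimal sketch in Python; the bucket names are placeholders, and the boto3 call is left as a comment since it needs real AWS credentials:

```python
# Sketch: the S3 website configuration for an apex bucket that
# redirects every request to the www bucket over HTTPS.

def redirect_website_config(target_host: str) -> dict:
    """Build the WebsiteConfiguration dict for a redirect-all bucket."""
    return {
        "RedirectAllRequestsTo": {
            "HostName": target_host,
            "Protocol": "https",
        }
    }

# To apply it (requires boto3 and AWS credentials):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_website(
#       Bucket="example.com",
#       WebsiteConfiguration=redirect_website_config("www.example.com"),
#   )
```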

Next configure the www bucket:

  • Click the www.example.com bucket
  • Click Properties > Static website hosting
  • Select ‘Use this bucket to host’
  • For ‘Index document’ enter index.html
  • For ‘Error document’ enter 404.html (this suits Hugo, the file name depends on your site generator)
  • Click Save
  • Click Permissions > Block public access > Edit
  • Turn off ‘Block all public access’, click Save
  • Click Permissions > Bucket Policy
  • IMPORTANT: In the next step, change www.CHANGE THIS TO YOUR DOMAIN.com to your domain name
  • In the Bucket policy editor paste the following and click Save:

      {
          "Version": "2012-10-17",
          "Id": "PolicyForPublicWebsiteContent",
          "Statement": [
              {
                  "Sid": "PublicReadGetObject",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "*"
                  },
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::www.CHANGE THIS TO YOUR DOMAIN.com/*"
              }
          ]
      }
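
If you’d rather not hand-edit the JSON, here’s a little sketch that fills in your bucket name for you (the bucket name below is a placeholder):

```python
import json

def public_read_policy(bucket: str) -> str:
    """Return the public-read bucket policy JSON for the given bucket name."""
    policy = {
        "Version": "2012-10-17",
        "Id": "PolicyForPublicWebsiteContent",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": {"AWS": "*"},
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
    return json.dumps(policy, indent=4)

# print(public_read_policy("www.example.com"))  # then paste into the policy editor
```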
    

You may as well put in some test content at this point while you’re in the S3 console:

  • Click the www.example.com bucket
  • Click ‘Upload’
  • Drag in a simple index.html file, click Next
  • Leave the permissions as-is, click Next
  • Scroll down to ‘Metadata’
  • For ‘Header’ select Cache-Control with the value max-age=300
  • Save that row, click Next > Upload
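
The same test upload can be scripted. Here’s a sketch of the per-object settings; guessing the Content-Type from the file name is my addition, and the upload call itself is left as a comment since it needs credentials:

```python
import mimetypes

def upload_args(filename: str, max_age: int = 300) -> dict:
    """Build the ExtraArgs dict for boto3's upload_file, so the object
    gets a Cache-Control header and a sensible Content-Type."""
    content_type, _ = mimetypes.guess_type(filename)
    return {
        "CacheControl": f"max-age={max_age}",
        "ContentType": content_type or "binary/octet-stream",
    }

# To upload (requires boto3 and AWS credentials):
#   import boto3
#   boto3.client("s3").upload_file(
#       "index.html", "www.example.com", "index.html",
#       ExtraArgs=upload_args("index.html"),
#   )
```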

Certificate Manager aka ACM

  • Go to the ACM console
  • Select the ‘N. Virginia’ (us-east-1) region (this is ESSENTIAL: Cloudfront can only use certificates issued in us-east-1)
  • If this is your first cert, click Provision > Get started > Request a public cert
  • If you have other certs already, click Request a certificate > select Request a public certificate > click Request a certificate
  • You should now be on the ‘Request a certificate’ screen with 5 steps listed in the left column
  • For ‘Domain name’ enter example.com (not www.example.com)
  • Click ‘Add another name’
  • For the new domain row enter *.example.com
  • Click ‘Next’
  • Select ‘DNS validation’, which is very easy because you’re using Route 53
  • Skip tags and click ‘Review’
  • Click ‘Confirm and Request’
  • You should now be in the ‘Validation’ step
  • Allow it to add the validation CNAMEs to Route 53 by expanding each of the domains and clicking ‘Create record in Route 53’ for each.
  • It should show ‘Success: The DNS record was written to your Route 53 hosted zone. It may take up to 30 minutes for the changes to propagate, and for AWS to validate the domain’
  • Click ‘Continue’
  • It should say “Validation not complete. The status of this certificate request is ‘Pending validation’. No further action is needed from you. Amazon is validating your domain name.”
  • Go and do something else for 30mins or an hour, get a coffee or something while ACM and Route 53 talk to each other.
  • After a while the certificate’s status should display Issued and you can proceed to configuring Cloudfront.

Cloudfront

We need to create 2 Cloudfront distributions, one for the ‘apex’ domain, one for www.example.com.

First, create the www distribution:

  • Go to the Cloudfront console
  • Click Create distribution
  • Select Web > Get started
  • For ‘Origin domain name’, grab it from S3:
    • Open a new browser tab and go to the S3 console
    • Click on www.example.com > Properties > Static hosting and copy the ‘endpoint’ address without the protocol
    • Eg you’ll have www.example.com.s3-website-ap-southeast-2.amazonaws.com
    • Careful not to use the REST endpoint that it may suggest in the dropdown box, eg www.example.com.s3.amazonaws.com, as this makes permissions difficult to configure
  • Leave ‘Origin path’ empty
  • ‘Origin id’ should be autofilled, leave it as-is.
  • For ‘Viewer protocol policy’ select Redirect http to https
  • Leave the caching settings as default
    • We’ll configure this in S3 so that the browser gets a cache-control header
    • If you try to configure your caching in Cloudfront here instead of in S3, it doesn’t send a cache-control header to the client, so you won’t get reliable/fast client-side caching.
  • For ‘Alternate domain names’ enter www.example.com
  • Select SSL Certificate > Custom SSL Certificate
    • Click in the empty box just beneath and select the appropriate ACM cert which should appear
    • This particular interface was broken in Firefox for me, so you may need to use Chrome.
  • Click ‘Create distro’

Next, create the apex distribution:

  • Click Create distribution
  • Select Web > Get started
  • For ‘Origin domain name’, grab it from the apex S3 bucket as described above
    • It’ll likely be exactly the same, just without the www. prefix.
    • Eg example.com.s3-website-ap-southeast-2.amazonaws.com
  • For ‘Viewer protocol policy’ select Redirect http to https
  • For ‘Alternate domain names’ enter example.com
  • Select SSL Certificate > Custom SSL Certificate
    • Select your certificate in the box below, as with the other distribution
  • Click ‘Create distro’

It will take a while for Cloudfront to spin up the distributions; for me it took 18 minutes. Maybe go for another coffee break. Wait for their State column to show Enabled.

Route 53 DNS

Next we need to configure Route 53 so that your DNS entries point to Cloudfront.

  • First we need to get the domain names from Cloudfront
    • Go into the Cloudfront distributions list
    • Look for the columns ‘Domain Name’ and ‘Origin’.
    • For the row where the origin starts with ‘www.’, copy the domain name. I’ll call this your ‘www cloudfront domain’; it looks like abcdefghijklmn.cloudfront.net
    • Do the same for the row where the origin doesn’t start with ‘www’; this is your ‘apex cloudfront domain’
  • Go to the Route 53 console
  • Select Hosted zones
  • Click on the ‘example.com’ link (not its circle ‘radio’ box to its left)
  • Create the apex IPV4 record set:
    • Click ‘Create record set’
    • Leave ‘Name’ empty
    • For ‘Type’ select A
    • For ‘Alias’ select Yes
    • For ‘Alias target’ paste the ‘apex cloudfront domain’ from earlier, eg abcdefghijklmn.cloudfront.net
    • Click ‘Create’
  • Create the apex IPV6 record set:
    • Click ‘Create record set’
    • Leave ‘Name’ empty
    • For ‘Type’ select AAAA
    • For ‘Alias’ select Yes
    • For ‘Alias target’ paste the ‘apex cloudfront domain’ from earlier, eg abcdefghijklmn.cloudfront.net
    • Click ‘Create’
  • Create the WWW record set:
    • Click ‘Create record set’
    • For ‘Name’ enter www
    • For ‘Type’ select CNAME
    • For ‘Alias’ select No
    • For ‘TTL’ select 300 - this is a 5 min cache lifetime. Feel free to adjust this later once everything’s working, if you like.
    • For ‘Value’ paste the ‘www cloudfront domain’ from earlier, eg abcdefghijklmn.cloudfront.net
    • Click ‘Create’
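
If you’d rather script it, the three record sets above can be sketched as a single Route 53 change batch. Note that Z2FDTNDATAQYW2 is Cloudfront’s fixed hosted zone ID for alias targets; the domain names below are placeholders, and the API wants full names rather than the console’s empty ‘Name’ field:

```python
# Sketch: the Route 53 change batch for the three records above.
CLOUDFRONT_ZONE_ID = "Z2FDTNDATAQYW2"  # fixed ID for all Cloudfront aliases

def alias_record(name: str, record_type: str, cf_domain: str) -> dict:
    """An A or AAAA alias record pointing at a Cloudfront distribution."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": record_type,
            "AliasTarget": {
                "HostedZoneId": CLOUDFRONT_ZONE_ID,
                "DNSName": cf_domain,
                "EvaluateTargetHealth": False,
            },
        },
    }

def cname_record(name: str, cf_domain: str, ttl: int = 300) -> dict:
    """A plain CNAME record with a 5-minute TTL."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "TTL": ttl,
            "ResourceRecords": [{"Value": cf_domain}],
        },
    }

def change_batch(domain: str, apex_cf: str, www_cf: str) -> dict:
    """Apex A + AAAA aliases, plus the www CNAME."""
    return {
        "Changes": [
            alias_record(domain, "A", apex_cf),
            alias_record(domain, "AAAA", apex_cf),
            cname_record(f"www.{domain}", www_cf),
        ]
    }

# Apply with boto3 (requires credentials and your hosted zone's ID):
#   import boto3
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="YOUR_ZONE_ID",
#       ChangeBatch=change_batch(
#           "example.com", "abc.cloudfront.net", "def.cloudfront.net"))
```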

Test all the things

First, test the apex domain:

  • Run curl --verbose http://example.com/
    • Should see < Location: https://example.com/ to show it’s upgrading you from HTTP to HTTPS
  • Run curl --verbose https://example.com/
    • Should see < location: https://www.example.com/ to show it’s redirecting you to the www subdomain
  • Run curl --verbose http://www.example.com/
    • Should see < Location: https://www.example.com/ to show it’s upgrading you from HTTP to HTTPS
  • Run curl --verbose https://www.example.com/
    • Should see < cache-control: max-age=300 which tells the web browsers to cache things for 5 mins
    • You should see < x-cache: Miss from cloudfront on first run, followed by ‘Hit’ on subsequent runs
  • Finally open it in a browser and you should see the index.html you uploaded earlier.
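
To summarise, the curl tests above are checking a two-hop redirect chain. Here’s that chain written out as a tiny pure function, no network access needed (example.com is a placeholder for your domain):

```python
# Sketch of the redirect chain the curl tests above should demonstrate.
from typing import Optional

def expected_location(url: str) -> Optional[str]:
    """Return the Location header we expect for a given request URL,
    or None if the URL should be served directly."""
    if url.startswith("http://"):
        # Cloudfront upgrades every plain-HTTP request to HTTPS.
        return "https://" + url[len("http://"):]
    if url == "https://example.com/":
        # The apex bucket redirects to the www subdomain.
        return "https://www.example.com/"
    return None  # https://www.example.com/ serves the content itself
```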

Uploading

Once you’ve generated your site with Hugo or Jekyll or Notepad or whatever, you’ll want to upload it. Here’s how I like to make this convenient and cache-friendly:

  • Install the AWS command line utilities
    • macOS: brew install awscli after installing Homebrew
    • Windows/Linux: try pip install awscli, or see the AWS docs
    • API keys can be found in the console by clicking your name (top right) > Credentials > Access Keys
    • aws configure to enter your API keys
    • Careful not to expire an existing key or it’ll potentially break things for your colleagues!
  • Copy your files up:
    • Run this command when the current working directory is one up from your site’s root index.html
    • public (as specified below) is what Hugo calls the folder containing the files to upload, your static site generator might be different, use whatever is appropriate
    • aws s3 sync public s3://www.example.com --cache-control max-age=300 --exclude ".*"
    • The above uses --exclude to skip files like .DS_Store
    • The above sets the cache to 300s aka 5 mins to ensure your site is cached and snappy but also quick to update when needed. Adjust if you like.
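
If you deploy often, you might wrap the sync in a tiny script. Here’s a sketch that just builds the command list (the folder and bucket names are placeholders; running it is left as a comment):

```python
# Sketch: build the aws s3 sync invocation described above.

def sync_command(site_dir: str, bucket: str, max_age: int = 300) -> list:
    """Assemble the awscli arguments for a cache-friendly upload."""
    return [
        "aws", "s3", "sync", site_dir, f"s3://{bucket}",
        "--cache-control", f"max-age={max_age}",
        "--exclude", ".*",  # skip dotfiles like .DS_Store
    ]

# To actually run it:
#   import subprocess
#   subprocess.run(sync_command("public", "www.example.com"), check=True)
```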

You can also manually update cache settings in the S3 console like so:

  • Open S3 console
  • Click your www.example.com link
  • Select all files
  • Click Actions > Change Metadata > Key: Cache-Control; Value: max-age=300 > Save

If you’ve already uploaded your site but forgot to set the cache header, here’s a trick that can be used:

aws s3 cp s3://www.example.com/ s3://www.example.com/ --recursive --metadata-directive REPLACE --cache-control max-age=300

Phew, that was long! I sincerely hope I didn’t miss anything.

Thanks for reading, I hope this helps someone, and have a great week!

Chris Hulbert

(Comp Sci, Hons - UTS)

iOS Developer (Freelancer / Contractor) in Australia.

I have worked at places such as Google, Cochlear, Assembly Payments, News Corp, Fox Sports, NineMSN, FetchTV, Woolworths, and Westpac, among others. If you're looking for help developing an iOS app, drop me a line!

Get in touch:
[email protected]
github.com/chrishulbert