Ever since the 2006 launch of Amazon's S3 (Simple Storage Service) cloud storage, it has been a popular way for people to prop up websites hosted with other service providers.
Initially, S3 was used to host large files such as movies; for those, S3 is faster and cheaper than most other web hosting providers. At first, hosting smaller files such as images was problematic because of latency: the delay between a browser requesting a file and receiving it. The problem was resolved once Amazon added more data centers worldwide and designed better load balancing.
However, until recently it was not possible to host an entire website on S3, because you could not define root and error documents within an S3 'bucket' (an individual storage container).
This meant there was no way to configure S3 to serve index.html when you directed visitors to a bucket, and no way to serve something like error.html if something went awry.
Amazon’s hosting technology
Now this has changed: each bucket can have root and error documents, although there are a few important caveats.
First, because S3 is just dumb storage, you can only host an entire website there if all of your content is static: nothing more than images, HTML files, and the like. If you want to use PHP or similar, you will need Amazon's EC2 (Elastic Compute Cloud) or a standard hosting provider. You would not, for example, be able to host something like a WordPress blog on S3.
It is also not possible to host a root domain on S3, because a bucket can only be reached via CNAME redirection in DNS records. S3 has no static endpoint IP address, so you cannot point a DNS A-record at it. For example, you can host www.123.com, but you cannot host 123.com; you would need a separate hosting service, configured via the A-record, to direct visitors arriving at 123.com to www.123.com. Incidentally, some experts consider adding www as a CNAME record bad practice.
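A hypothetical DNS zone fragment may make the distinction clearer. The names here are illustrative: 123.com is the example domain from above, 203.0.113.10 stands in for the IP of a separate redirect host, and the endpoint name follows the style S3 uses.

```
; A-record at the root: needs a fixed IP, which S3 does not provide,
; so it points at a separate host that redirects visitors to www.
123.com.       IN  A      203.0.113.10

; CNAME for www: points at the S3 website endpoint.
www.123.com.   IN  CNAME  www.123.com.s3-website-us-east-1.amazonaws.com.
```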
How you can host your website with Amazon
Setting up complete website hosting with S3 is easy. Begin by visiting the S3 control panel and creating a new bucket in your AWS account, named after the web address that will point to it. For example, if I want visitors to www.xyz.com to be directed to a bucket, it must be named www.xyz.com.
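If you prefer scripting to the console, the same step can be sketched in Python. This is a sketch, not part of the workflow described here: it assumes the boto3 AWS SDK and configured credentials, and uses the example hostname from the text.

```python
import re

def valid_website_bucket_name(hostname):
    # For CNAME-based website hosting, the bucket name must equal the
    # hostname exactly, so it must also be DNS-compatible: lowercase
    # letters, digits, dots and hyphens, 3-63 characters.
    return 3 <= len(hostname) <= 63 and re.fullmatch(r"[a-z0-9.-]+", hostname) is not None

def create_site_bucket(hostname):
    # Requires boto3 and AWS credentials; shown for illustration,
    # not executed here.
    import boto3
    boto3.client("s3").create_bucket(Bucket=hostname)

print(valid_website_bucket_name("www.xyz.com"))   # True
print(valid_website_bucket_name("My_Bucket"))     # False
```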
Then upload all your site files using the Upload button in the S3 console, or with a separate client application if you prefer. Don't forget to make the files publicly accessible, which can be done from the Upload window.
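Uploading can likewise be scripted. A sketch, again assuming boto3: the public-read ACL mirrors the console's public-access option, and the content type is set explicitly so browsers render each file correctly.

```python
import mimetypes

def content_type_for(filename):
    # S3 serves a file with whatever Content-Type it was uploaded with,
    # so guess it from the extension rather than relying on the default.
    guessed, _ = mimetypes.guess_type(filename)
    return guessed or "application/octet-stream"

def upload_public(filename, bucket):
    # Requires boto3 and AWS credentials; shown for illustration,
    # not executed here.
    import boto3
    boto3.client("s3").upload_file(
        filename, bucket, filename,
        ExtraArgs={"ACL": "public-read",
                   "ContentType": content_type_for(filename)},
    )

print(content_type_for("index.html"))  # text/html
print(content_type_for("photo.jpg"))   # image/jpeg
```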
Next, right-click the new bucket, which is listed on the left of the console, and choose Properties. In the new panel at the bottom of the window, click the Website tab, make sure Enabled is checked, then type the filenames of your index and error documents. You cannot specify separate documents for individual 4xx errors (401, 403, 404, and so on); all errors are directed to the same error page. When you are done, click the Save button.
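The Website-tab settings map onto a simple configuration structure that can also be applied programmatically. A sketch assuming boto3; note the single error document, matching the limitation described above.

```python
def website_config(index_doc, error_doc):
    # S3 accepts one index document suffix and one error document key;
    # separate pages per 4xx status cannot be configured.
    return {
        "IndexDocument": {"Suffix": index_doc},
        "ErrorDocument": {"Key": error_doc},
    }

def enable_website(bucket):
    # Requires boto3 and AWS credentials; shown for illustration,
    # not executed here.
    import boto3
    boto3.client("s3").put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration=website_config("index.html", "error.html"),
    )

print(website_config("index.html", "error.html"))
```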
Note the address listed alongside Endpoint, beneath the index and error document filenames in the same panel. Now go to your domain registrar's configuration panel and create a new CNAME record for www that specifies the S3 endpoint address; you will need to remove the http:// prefix and any trailing slashes from the endpoint address.
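That tidy-up of the endpoint address is mechanical and easy to get wrong by hand. A small sketch (the endpoint string is illustrative, in the style S3 uses):

```python
def cname_target(endpoint):
    # Strip the http:// (or https://) prefix and any trailing slashes,
    # leaving a bare hostname suitable for a CNAME record.
    for prefix in ("http://", "https://"):
        if endpoint.startswith(prefix):
            endpoint = endpoint[len(prefix):]
    return endpoint.rstrip("/")

print(cname_target("http://www.xyz.com.s3-website-us-east-1.amazonaws.com/"))
# www.xyz.com.s3-website-us-east-1.amazonaws.com
```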
Exactly how CNAME configuration is done varies from provider to provider. You will probably also have to delete the existing A-record for www.
That’s it! Once the DNS changes propagate, which can take a couple of hours or more, visitors to your site will be directed straight to the S3 bucket containing your website, which will serve your index.html file.
As an additional step, it’s a good idea to point the A-record address (without www) at a simple hosting service where you can configure an automatic redirect to the www address.
If you have a high-traffic site, Amazon’s CloudFront service might be worthwhile: it directs traffic to the nearest geographical server, avoiding latency issues. Keep in mind, however, that it costs more.
Is Amazon better than your current provider?
How does S3 compare in cost with a standard provider? I use Dreamhost’s basic web hosting package for the website I run, at an annual cost of $119. It provides unlimited bandwidth and storage, although the servers are shared, which tends to limit my site’s speed. Unlike S3, the Dreamhost price includes PHP and databases, along with numerous useful extras such as one-click installs of popular site software.
One of my static websites running on Dreamhost gets around 300 visitors a day and offers a 2MB file for download. From the Dreamhost configuration panel, I can see that the site burns through around 350MB of bandwidth every day, including weekends.
A rough calculation shows that’s about 10.65GB per month, which would cost me nothing on S3 if I signed up for the Amazon Web Services Free Usage Tier, which provides up to 15GB of data transfer monthly. I might occasionally incur a few dollars over the year because of GET requests, since the Free Usage Tier permits 20,000 GET requests per month and my 300 daily visitors each view several pages and download numerous images, each incurring a separate GET; even so, it works out cheaper.
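The arithmetic behind that estimate is easy to check. The figures come from the text above; an average month is treated as 365/12 days.

```python
daily_mb = 350                      # observed daily bandwidth
days_per_month = 365 / 12           # average month length
monthly_gb = daily_mb * days_per_month / 1000

print(round(monthly_gb, 2))         # 10.65, within the 15GB free tier
```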
A comparison with Dreamhost is unfair, because Dreamhost offers far more than simple storage, but the S3 price is very compelling if you are paying a per-GB or per-TB fee with your current provider. In addition, S3 is always going to be fast, and you will never run out of storage.