Overview
If you’re reading this, you’ll probably notice that my website looks, at the very least, a little different. That’s because, along with a new theme/template, I’m now using Jekyll to manage my site.
Rendering of responsive site from Am I Responsive
Reasons for the move
Although there was no single reason why I decided to move my site over to Jekyll, two stood out above the rest: ease of use and hosting.
1. Ease of use
When The Whisky Oriented Development organizers and I started out last year, we decided that a web presence would be important, but that our needs were so simple that we didn’t need a big hosting package, a big fancy website, or pretty much anything flashy. We just needed to be able to post details about our events, where you could get tickets, and perhaps some information about our members. WordPress would almost certainly be overkill, but I had recently come across GitHub Pages and it seemed like it would fit the bill. Best of all, the price seemed really compelling. Free.
I threw the first version of the site together in a few hours. Nothing sophisticated, just a basic site with posts and a home page linking to them. Over time we’ve enhanced the site: we added member pages, redirects, and more details about the group on the main page, and used the GitHub Pages CNAME feature to get our own custom domain: whiskydev.com.
Many of the templates also have built-in support for DISQUS, which means we wouldn’t even have to give up the ability for people to leave comments. One thing I personally loved: because the DISQUS widget loads through JavaScript, the comments aren’t part of the page content, which means spammers don’t have an incentive to comment on my pages hoping I’ll let them through so their associated links get indexed by Google. I actually switched my existing site over to DISQUS a few weeks back, and I haven’t gotten a single notification email asking me to moderate a spammy comment since!
2. Hosting
For the past 3 years, I’ve had my sites running through Vexxhost. When I first started with them I was looking to do a project using Ruby on Rails. They, like many shared hosting providers provided support for Rails, and they happened to have a crazy deal on their shared hosting at the time: $2.50/month when you paid for 3 years. Given most shared hosting providers at the time were $5+, I figured I’d give it a try.
Vexxhost wasn’t bad. Their servers seemed a little more responsive than 1&1’s, which I had used previously. Very little went wrong with them while I was there, but nothing went spectacularly either. They were just another cPanel hosting provider.
As I spun up more side projects, I really started to care that they stayed up and available, so I started using Uptime Robot to keep track of any downtime. I figured that if I was going to monitor the uptime of a side project like my Rental Map, I should at least monitor my own site too.
I like to stay on top of my sites, so I hooked Uptime Robot up to my Twitter account. To my surprise, Uptime Robot became the user I heard from the most. On good days, I would hear nothing at all. On bad days, I would hear from it a lot. For example, on July 26th, I heard from it 10 times. The site was down. Now it’s back up. Down again. And so on.
There were also the comments, as I mentioned in the ease of use section above. Most of the time, it was just spam. I eventually got to the point where I set a flag in WordPress so that any comment with http in the comment body or URL would need to be moderated. You might not be surprised by how many comments passed moderation, but I was: since I set that flag, not one comment with an actual URL has been legitimate. I was getting to the point where the only times I interacted with my site were to delete some spam comment, to update WordPress because of a new security fix, or, on one occasion, because the shared server got hacked. The vast majority of the time, my content would just sit there, statically.
This fall, when I realized it had been almost 3 years since I subscribed to Vexxhost, I got to thinking. There was no crazy renewal price; in fact, the plan I had doesn’t even exist anymore. If I wanted to stick with Vexxhost, it would cost me $10/month. For what I do with it, and the amount of traffic I get, $10 was not even conceivable. But there were alternatives. Since my content stayed static most of the time, there was no reason I couldn’t look into static hosting providers.
Static hosting
Almost any hosting provider would work for what I wanted to do, but two had compelling cases: GitHub Pages, and Amazon S3 through a feature you probably wouldn’t even find out about if you didn’t already know it existed: Hosting a Static Website on Amazon S3.
I already had experience with GitHub Pages. It’s free, which is definitely compelling, but the one thing I didn’t think I’d like was that each site is associated with a repository. Sure, I could use a robots.txt file to block crawlers from indexing specific files while still making them available to people, but could I do the same through my repo? My gut said no.
At this point, you might be asking why I’d want a file to be accessible but not indexable. The case for me is my Resume and CV. By making people go through a page to get to those files, I can tell you that I’ve had 22 visits to my CV and 11 visits to my Resume in the last 30 days. It might not be that useful, but it makes me feel good. If Google could index them directly, people would be more likely to skip the page and go straight to the file, and I would learn nothing.
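For the curious, the robots.txt approach would look something like this. The paths here are hypothetical stand-ins for wherever the PDFs actually live on your site:

```
# Ask crawlers not to index the documents themselves,
# while they remain reachable by anyone with the link.
User-agent: *
Disallow: /documents/resume.pdf
Disallow: /documents/cv.pdf
```

Keep in mind that robots.txt is only a polite request to well-behaved crawlers, not access control; the files themselves are still public.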
S3, on the other hand, provided a lot of control over what people could and couldn’t see, and even over redirecting old pages to new ones more cleanly. It also had one other major thing going for it: my friend and colleague Graham Baradoy uses it for his site and raves about how fast and cheap it is.
To S3’s benefit, the price is probably as close to free as you can get without being free. Based on my modest website, I was looking at about 16 MB of storage. At S3’s rate of $0.03/GB, even if they rounded up, I would be paying $0.01 a month to store my site. I get about 1400 pageviews a month (it varies from month to month). If each page takes 13 requests (that’s what my homepage takes), that works out to about 18200 requests a month. Clearing my cache and loading my main page through Chrome, I can also tell you that it takes 330K to load the first page. If every page took that much to load, that would be about 0.4406 GB/month of transfer. Here’s how the math breaks down on the price. I’ll assume they round up whenever it’s a fraction of a penny.
| Component | Price | Amount | Component Total |
|---|---|---|---|
| Storage | $0.03/GB | 0.016 GB | $0.01 |
| GET Requests | $0.004/10000 requests | 18200 | $0.01 |
| Transfer Bandwidth (first GB) | $0.00/GB | 0.4406 GB | $0.00 |
| **Monthly Estimated Total** | | | **$0.02** |
So, barring any major oddities, I would be paying $0.02/month, which is significantly less than the $2.50/month I had been paying. Even if one month brought a massive spike in traffic, I could probably still pay less than I was paying before.
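A quick script reproduces the table above. The rates are the ones quoted in this post (they may well have changed since), and the 330K per page is treated as KiB, which is what makes the 0.4406 GB figure come out:

```python
import math

# Rates as quoted in this post; check current S3 pricing before relying on them.
STORAGE_PER_GB = 0.03   # $ per GB-month of standard storage
GET_PER_10K = 0.004     # $ per 10,000 GET requests

def cents_up(dollars):
    """Round up to the next whole cent, as I assume S3 does for each line item."""
    return math.ceil(dollars * 100) / 100

def s3_monthly_estimate(storage_gb, pageviews, requests_per_page, kib_per_page):
    storage_cost = cents_up(storage_gb * STORAGE_PER_GB)
    get_requests = pageviews * requests_per_page
    get_cost = cents_up(get_requests / 10000 * GET_PER_10K)
    transfer_gib = pageviews * kib_per_page / 1024 ** 2   # KiB -> GiB
    # First GB of outbound transfer is free; anything beyond that isn't modeled here.
    transfer_cost = 0.0 if transfer_gib <= 1.0 else None
    return round(storage_cost + get_cost + transfer_cost, 2), round(transfer_gib, 4)

# My numbers: 16 MB stored, ~1400 pageviews/month, 13 requests and 330K per page.
total, gib = s3_monthly_estimate(0.016, 1400, 13, 330)
print(total, gib)  # 0.02 0.4406
```

Plugging in your own pageview count and page weight gives a quick sanity check on whether a traffic spike would push you past the free transfer tier.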
On top of that, if you’re not already an Amazon Web Services customer, you should really look into the AWS Free Tier, which gives you a large amount of usage free for the first year. For example, for S3, you get 5 GB of standard storage, 20000 GET requests, 2000 PUT requests, and 15 GB of data transfer each month for the first year.
This all sounded pretty good. But I still had one more trick up my sleeve.
CDN: CloudFlare
The most variable, least predictable piece of using S3 as a static host is how much people will visit your site. The more visits, the higher the number of GET requests you’ll make, and the more outgoing transfer you’ll use. This is where CloudFlare comes in.
CloudFlare’s main feature is a Content Distribution Network (CDN). Simply put, when someone requests a page on your site, rather than going to S3 and downloading it fresh each time, CloudFlare checks whether it has seen a copy of the page recently (a cached copy) and, if so, serves that instead. This reduces the number of requests made to S3 and the total bandwidth consumed by your site. As an added benefit, the files will likely be served from a server closer to the requesting user, which means they will likely load faster.
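As a rough sketch of why this matters for the bill: if a CDN answers some fraction of requests from its cache, only the misses ever reach S3. The 80% cache hit ratio below is a made-up assumption for illustration, not a measured number:

```python
def origin_load(requests_per_month, transfer_gb, cache_hit_ratio):
    """Requests and bandwidth that still reach the S3 origin when a CDN
    serves cache_hit_ratio of the traffic from its edge (hypothetical ratio)."""
    miss_ratio = 1 - cache_hit_ratio
    return requests_per_month * miss_ratio, transfer_gb * miss_ratio

# This post's monthly numbers, with an assumed 80% cache hit ratio.
misses, origin_gb = origin_load(18200, 0.4406, 0.80)
print(round(misses), round(origin_gb, 3))  # 3640 0.088
```

In other words, even a mediocre hit ratio cuts the already tiny S3 bill further, and a traffic spike mostly lands on CloudFlare instead of S3.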
Amazon has a similar service called CloudFront, but mainly because it was free, I decided to go with CloudFlare. From what I have read, with CloudFront you pay less for the transfer out of S3, but you pay $0.12–$0.25/GB for transfer out of CloudFront, depending on the requesting region. That being said, I have heard that CloudFront’s performance is better than CloudFlare’s.
Conclusion
So there you have it: after making the transition from Drupal to WordPress a few years ago, I’m now making the move from WordPress to Jekyll. The most hilarious part for me is that I’m making the transition for mostly the same reasons: speed, and reducing the maintenance-to-content-creation ratio.
To be fair, though, I’ve rarely been making new content, so any maintenance outside of the content creation cycle was too much.
This post is already getting fairly long, so I figure it’s time to call it. If you have any questions, don’t hesitate to ask them in the comments and I’ll try to answer them as best I can. I probably have enough from this experience to write another article on using Jekyll, and some useful Jekyll/S3 tools I learned about along the way. If you’d be interested, please let me know in the comments below.