For smaller or newer web development companies, opting to host in the cloud is an obvious choice. However, our company has hosted websites and web applications for many years, long before cloud hosting was a realistic option. So for us, shifting from physical hosting to cloud hosting was a significant decision. Going cloud should have been a "no brainer", but there was some journey to go through first.
Our physical hardware
We were rather comfortable with our physical hardware. Maffey.com rented a half cabinet in a good data centre from a reputable ISP (Voyager NZ). We ran extremely reliable older servers (approx 10 years old). Because these servers were inexpensive, we were able to carry many spare interchangeable parts to swap in at a moment's notice. The physical servers were so reliable we never had to replace any parts, not even a power supply.
As a nice side effect of our low cost hardware, we were able to avoid using virtual servers. Each physical server ran a single Linux instance. Our larger customers would have a private server, smaller customers would share a server. When I say this out loud today, it sounds very backwards. However, it was comfortable, and it worked.
And here lies the dilemma. The situation we had with our physical server setup was good. The setup was reliable and the cost was low. We had significant over-capacity in terms of processing power and storage. It is hard to walk away from something that works.
So why shift to cloud hosting?
It felt like it was time to reassess things. Our physical servers were old and were going to become unreliable, and there is nothing customers dislike more than unscheduled downtime. We either needed to purchase newer, more powerful hardware and learn how to do virtualization for ourselves, or take the step into cloud hosting. There were a few other aspects that pushed us towards cloud.
- Performance. After testing cloud machines at Vultr, I was well impressed with the performance. Even though on paper our physical servers had a lot more resources available than a tiny cloud server, the cloud servers running on up to date hardware performed much better.
- Perception. Even though we were happy with our physical servers, explaining our hosting situation to customers was a little tricky. Justifying our use of 10-year-old machines by over-supplying CPU, memory and storage resources is not a great look. That said, most of our customers were happy with the status quo and were hesitant to move to the cloud.
- Required effort. Upgrading our physical gear and learning how to use virtualization would require a lot of effort on our part and some new skills. Like many owner operators, I am not good at factoring my own time and effort into the total cost of sales. While it's true that physical Linux servers are relatively self-sustaining, they still demand attention when things go awry. Even small issues can become time-consuming, affecting productivity.
- Risk. Running multiple virtual servers on a single machine means we would be exposed to significant downtime and grumpy customers if that one machine failed. Our existing physical setup, with customers spread across multiple physical servers, limited this risk. However, the idea of maintaining our own physical servers running multiple virtual servers felt too exposed.
- Equipment availability. At the time of our move, ordering new servers came with a backlog of several months and second hand servers were hard to come by. As well as the wait and cost for new servers, the delay to get replacement machines and parts would be unacceptable.
Portable software stack
We stick with a generic open source software stack, which makes moving between hosting providers fairly painless. We use Ubuntu Linux because it is one of the most common flavours of Linux, widely supported and easy to work with. All of the rest of the software in our stack is installed from the standard packages supplied by Canonical, the people who make Ubuntu.
This choice makes the software stack easy to maintain. It is also portable between physical hardware, virtual environments and cloud environments.
Our vanilla software stack made it easy to move to a cloud provider.
So what held us back?
If cloud has all these benefits, and moving is rather easy, what made the decision to move to cloud hosting so hard?
Perceived Cost, it is as simple as that!
The cost of renting a physical half cabinet (20 server spaces) in a good data centre was less than renting a single whole physical server in the cloud. So if renting raw infrastructure (server space, cooling, electricity and internet connectivity) is the only consideration, jumping to cloud hosting seemed expensive, up to 20 times more expensive, depending on how you cut it.
This is where I was deceiving myself, not comparing apples to apples. Good cloud providers are not just providing infrastructure, they are providing the expertise to maintain the infrastructure. They are also regularly updating the infrastructure, so I don't have to go shopping for new hardware.
I failed to take into account the amount of my own time and energy required to maintain the physical hosting environment. Even though our physical hardware ran mostly without human intervention, it still required some maintenance. And the requirement for maintenance seemed to crop up at inconvenient times.
To accurately evaluate the cost-effectiveness, one must consider all the factors that affect the total cost of ownership (TCO) - things like scalability, time spent on server maintenance, hardware updates, and even the opportunity cost of potential downtime due to physical hardware failures. When viewed from this broader perspective, cloud hosting should provide significant value that's not immediately apparent in a surface-level comparison.
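To make this concrete, here is a minimal sketch of that TCO comparison. All the figures are hypothetical, round-number assumptions for illustration only - they are not our actual prices or hours - but the structure shows why raw rack rent alone is a misleading comparison.

```python
# Hypothetical monthly TCO comparison: self-managed physical hosting vs. cloud.
# Every number below is an illustrative assumption, not a real quote.

def physical_tco(rack_rent, admin_hours, hourly_rate,
                 hardware_cost, hardware_life_years, downtime_cost):
    """Monthly TCO for self-managed physical hosting.

    Hardware purchase cost is amortised over its expected life,
    and the owner's maintenance time is costed at an hourly rate.
    """
    hardware_monthly = hardware_cost / (hardware_life_years * 12)
    return rack_rent + admin_hours * hourly_rate + hardware_monthly + downtime_cost

def cloud_tco(instance_rent, instance_count, admin_hours, hourly_rate):
    """Monthly TCO for cloud hosting.

    The provider absorbs hardware refresh and most maintenance,
    so only instance rent and a little admin time remain.
    """
    return instance_rent * instance_count + admin_hours * hourly_rate

# Assumed figures: $400/month rack rent, 10 hours/month of my time at $80/hour,
# $6000 of hardware amortised over 5 years, $100/month expected downtime cost.
physical = physical_tco(rack_rent=400, admin_hours=10, hourly_rate=80,
                        hardware_cost=6000, hardware_life_years=5, downtime_cost=100)

# Assumed figures: 12 small cloud instances at $40/month, 2 hours/month admin.
cloud = cloud_tco(instance_rent=40, instance_count=12, admin_hours=2, hourly_rate=80)

print(f"physical: ${physical:.0f}/month, cloud: ${cloud:.0f}/month")
```

With these assumptions the rack rent ($400) is indeed cheaper than the cloud instance rent ($480), yet the full monthly TCO comes out well in the cloud's favour once time, hardware amortisation and downtime are counted.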
Choice of cloud providers.
At the time of writing, there are three large cloud providers: AWS, Azure and Google.
There are also three second-tier providers: Digital Ocean, Vultr and Linode.
We opted for a second-tier provider, mainly because they are far simpler to work with. Trying to understand the dashboard and options for AWS and Azure is a special set of skills on its own. The control panels for Digital Ocean, Vultr and Linode are much simpler. And the pricing structure for the second-tier providers is easier to understand and often lower cost.
Digital Ocean, at the time, was not available in Australasia. We chose Vultr over Linode because the Vultr servers seemed much faster.
This was the easy bit. After deciding to move, actually shifting customers to the new hosting environment was rather easy.
A nice benefit of the virtual world was dealing with smaller customers whose applications run on older platforms. In the physical server world, these few customers became an inconvenience, as they needed a whole server kept on just for them. In the virtual server world, it is easy to leave these customers on a virtual machine running older versions of our software stack.
To help keep costs down, we virtualized development and backup servers and host these on premise. On-premise hosting is even lower cost than hosting in the data centre. On premise is far less reliable than data centre or cloud hosting, but this is perfectly acceptable for the less important servers.
This move should have been a no brainer.
By combining on-premise hosting for less important servers with carefully sized cloud servers, rather than just over-sizing everything, our hosting bill is now less than it was in the physical hosting world, again eliminating my main fear of a cost blowout.
A few months after we finished shifting, one of Vultr's physical servers failed and rebooted. I received an automatic email from Vultr saying they were looking at it, and 10 minutes later it was fixed. This one experience sums up the key benefit of cloud hosting that I overlooked early on: when there are hardware issues, someone far more expert than I am will deal with them. That is worth every cent.