AWS – The 10,000-pound gorilla
We know that Amazon Web Services (AWS), Amazon's cloud, has its sights set on the enterprise market and is looking to make a splash. It has a lot of our competitors on the run. Amazon is a big name, and cloud adoption is a good thing, but I am here to tell you that AWS is not for everyone. Here is why: roughly 90% of enterprise workloads are better suited to a dedicated server. In fact, dedicated server infrastructure wins on both price and performance, typically costing about one-third of what it costs to run similar workloads on AWS.
To give you an example, let’s take a look at one of our most popular mid-level dedicated server solutions:
*1 TB of persistent EBS storage required, assuming you want to keep configurations and data through maintenance, etc.
Yes, you read the chart right: a 3.3x differential in monthly price. Why, then, would you pay a hefty premium for EC2? AWS is focused on flexibility, scale, and the other elastic features that make cloud computing great. Despite its continued price cuts, AWS was not built with cost effectiveness for constant workloads in mind. Think of it like a hotel: if you want to live somewhere 365 days a year, a hotel is not the most economical or practical option; a house or an apartment is the better alternative. In this way, computing and housing are very similar. Roughly 90% of IT workloads running right now are largely predictable and constant.
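The hotel analogy can be made concrete with a back-of-the-envelope calculation. The prices below are hypothetical placeholders, not actual quotes from us or from AWS; they are simply chosen so the ratio lands near the 3.3x shown in the chart:

```python
# Back-of-the-envelope monthly cost comparison for a constant,
# 24x7 workload. All prices are hypothetical placeholders.

HOURS_PER_MONTH = 730  # average hours in a month

# Hypothetical dedicated server: one flat monthly fee, storage included.
dedicated_monthly = 300.00

# Hypothetical on-demand cloud instance of comparable capacity.
instance_hourly = 1.20   # per-hour instance price (assumed)
ebs_gb_month = 0.10      # per-GB-month persistent storage (assumed)
storage_gb = 1000        # 1 TB persistent storage, per the chart footnote

cloud_monthly = instance_hourly * HOURS_PER_MONTH + ebs_gb_month * storage_gb

print(f"Dedicated: ${dedicated_monthly:,.2f}/month")
print(f"Cloud:     ${cloud_monthly:,.2f}/month")
print(f"Differential: {cloud_monthly / dedicated_monthly:.1f}x")
```

The key point is structural, not the specific numbers: a constant workload pays the hourly rate for every hour of the month, so the elastic premium never gets a chance to pay for itself.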
AWS (and cloud platforms such as CloudStack and OpenStack) is, and always has been, about horizontal scalability, which is the right solution only in a few very specific scenarios:
- Need for quick scalability – Think of a company that anticipates a boom in audience, or perhaps is caught off guard by one. Recently, the Bitcoin exchange MtGox became a victim of its own success when its servers were unable to keep up with surging demand. MtGox runs its own infrastructure and apparently has significant high-performance needs. This is exactly the kind of situation where a rapid, perhaps even automated, scale-out could have helped avert an outage.
- Flexible workloads – Let’s say you are starting your own Netflix. When planning your infrastructure you have to look at your network, disk and CPU utilization throughout – not only the day, but also the week and even seasonal schedules. It’s not much of a stretch to assume that viewership will increase across the board – over spring break, in the evenings, on the weekends, etc. You need the flexibility to turn systems up or down in relation to your workload. That’s exactly why Netflix lives on AWS. It’s also why Zynga, which has a similar spikiness footprint, retains capacity with AWS in a hybrid deployment.
- Applications built for scale-out infrastructure – At the heart of a successful enterprise scale-out operation is a distributed software application. Whether out of the box or built in-house, these applications have to be flexible in recognizing additional computing resources with as little reconfiguration as possible.
- Cost not factoring into the decision – If all of these requirements are met and operational costs are not an issue at this particular stage, then it may indeed make sense to look beyond dedicated to a cloud-only solution. But when is budget not a concern? In the big picture, that is the case only a minority of the time.
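One way to frame the decision across all four scenarios is a break-even analysis: flat-rate dedicated capacity versus elastic capacity billed by the hour, as a function of how many hours per month you actually need the machine running. The prices here are assumptions for illustration only:

```python
# Hypothetical break-even analysis: flat-rate dedicated vs hourly cloud.
# Both prices are assumed for illustration, not real quotes.

dedicated_monthly = 300.00   # flat monthly price (assumed)
cloud_hourly = 1.20          # per-hour on-demand price (assumed)
HOURS_PER_MONTH = 730

breakeven_hours = dedicated_monthly / cloud_hourly
breakeven_util = breakeven_hours / HOURS_PER_MONTH

print(f"Cloud wins only below {breakeven_hours:.0f} hours/month "
      f"({breakeven_util:.0%} utilization)")
```

Under these assumptions, hourly billing is cheaper only when the machine is needed about a third of the time or less. A constant workload runs at 100% of the month, far above that break-even point, which is exactly why elastic pricing stops paying off for predictable work.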
A gorilla can’t outrun a cheetah
Customization is king, and the AWS cloud is too big to customize. Customers have custom requirements, and often open questions, around storage, security, privacy, SLAs, automated scaling, and backup and disaster recovery capabilities. The overwhelming majority of computing workloads still require customization.
“Not everything will move to the cloud as there are many business processes, data sets and workflows that require specific hardware or proprietary solutions that can’t take advantage of cloud economics.” – James Staten, Forrester Research, Playbook on Cloud Computing
Most workloads are consistent.
Most workloads are relatively consistent and predictable. Those with high-performance requirements should not be competing for I/O on shared infrastructure somewhere. Real-world data processing does not always scale linearly, so paying for that capability when it is not even a requirement is simply unnecessary. In the majority of cases traffic demands are predictable, so the cost of rapid scalability doesn't add up either.
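One well-known reason data processing fails to scale linearly is Amdahl's law: any serial portion of the work caps the benefit of adding machines. A quick sketch:

```python
# Amdahl's law: why horizontal scale-out yields diminishing returns
# whenever part of a workload is inherently serial.

def speedup(parallel_fraction: float, nodes: int) -> float:
    """Maximum speedup with `nodes` machines when only
    `parallel_fraction` of the work can be distributed."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / nodes)

# Even with 90% of the work parallelizable, 16 nodes deliver well
# under a 16x speedup, and 64 nodes barely improve on that.
for n in (2, 4, 16, 64):
    print(f"{n:>3} nodes: {speedup(0.9, n):.1f}x")
```

With a 90% parallel workload, 16 nodes yield about a 6.4x speedup and 64 nodes only about 8.8x, yet an elastic platform bills for every one of those nodes. That is the capability you are paying a premium for when linear scaling is not actually on the table.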
When part of a complete strategy, dedicated servers are still the kings of the savanna
Dedicated servers are important enterprise workhorses. For years dedicated servers have delivered reliable and cost-effective computing to many organizations. The word 'dedicated' makes it very clear: dedicated systems are specifically built and tuned for a single customer's application and the users who rely on them.
As always, let me know your thoughts, and if you have any questions by commenting below or chatting with us now.