Calculating Total Cost Of AWS Deployments Is Easier Said Than Done

May 14, 2015

A version of this article appeared on TechTarget SearchAWS as Calculating the true cost of AWS application development

Cloud services are an easy sell: the continuously declining prices, frictionless setup, low cost of entry, offloaded admin headaches and inherent scalability are a compelling combination. It’s equally hard to argue with the long-term economic advantage of warehouse-scale computing, in which cloud giants like Amazon, Google and Microsoft deploy servers by the thousand, turning them into disposable units of computation and storage: metaphorically treating servers like cattle, not pets. The scale and intense competition have translated into dramatic price declines for cloud infrastructure, with one IaaS price analysis showing that “the average price drop for the base level services included in this survey from the initial 2012 snapshot to today was 95%: what might have cost $0.35 – $0.70 an hour in 2012 is more likely to cost $0.10 – $0.30 today.” Another study finds that the hourly cost of AWS EC2 instances has dropped 56% in the last two years.

However, shopping for cloud application platforms is more like pricing a car with an extensive option sheet and complex lease financing plans than buying a movie download. The base price, whether in VMs per hour or GB per month, is a loss leader to snag drive-by shoppers, and one can easily get sticker shock after pricing in all the necessary details. Below, I’ll describe how to avoid nasty surprises when reviewing your first cloud invoice; given usage trends, it’s advice more organizations should heed.

Source: ScienceLogic Blog

According to a recent RightScale survey, AWS remains the most popular public cloud service, although Azure is closing the gap. Its maturity, mindshare and rich set of application services make it the default choice for many, but before assuming AWS is the best target for cloud application deployment, it’s wise to build a more accurate model of the underlying application and its various service and capacity requirements, then run it through a complete price analysis. Although tracking by Cloud Spectator and Strategic Blue typically finds AWS instance pricing in the middle of the pack, the cloud cost equation has too many degrees of freedom to allow easy summarization by a single number. One attempted metric is the Cloud Price Index from 451 Research, which measures the average hourly price for a typical Web application including compute, storage, relational and NoSQL databases and network traffic. Its aggregate from over ten vendors finds the typical Web application costs $1.70 per hour, or around $15,000 a year, a figure that 451 Research finds can be cut almost in half in a best-case scenario with longer-term service commitments. Much like car fuel-efficiency claims: your mileage may vary.

Source: RightScale 2015 State of the Cloud Report

The most accurate, albeit most time-consuming, option for understanding cloud costs is to model an application’s service requirements and run the mix through a spreadsheet using one of the various online price-comparison sites. Besides the calculators available from each vendor, we found four good tools to assist with cloud price shopping:

The most sophisticated and automated analysis tool is PlanForCloud, which supports complex application configurations using an arbitrary mix of compute servers, databases, storage and data transfer. Using the other sites requires manually building a spreadsheet with the various resource types and plugging in pricing information by hand.

Example Pricing Exercise

The effort required to build an accurate cost model is obviously a function of the application’s complexity. Developers trying to ballpark a new design should turn to the AWS Reference Architectures, 16 datasheets that include a high-level schematic and basic description of prototypical designs for deployments ranging from legacy batch processing to online gaming. One problem with using the reference designs for price comparisons is that they employ many of the AWS platform services, like CloudFront (global CDN), Elastic Load Balancing, DNS or DynamoDB (NoSQL), that might not be available or have close equivalents on other clouds, so it’s best to stick with the core infrastructure.

Source: author

By way of example, we’ve built a simple three-tier Web application with the following configuration (details below):

  • 3 front-end Web servers (medium Linux)
  • 2 mid-tier application servers (large Linux)
  • 2 SQL databases (large MySQL, multi-zone)
  • 1 TB object store (S3 on AWS)
  • 1 TB block store (EBS on AWS)

Starting with AWS instance types, we then duplicated the configuration as closely as possible on Azure, Google and Rackspace. Assuming 24×7 usage and month-to-month pricing (no reserved instances), and adding estimates for data traffic to and from external users and between each tier of the design, we ran each configuration through the PlanForCloud calculator. The following chart (online here) summarizes the results:

Source: author

Going through this exercise demonstrates several things, chiefly that cloud services aren’t uniform, making it almost impossible to clone a particular configuration from one provider to another. Second, there are significant price differences for roughly comparable servers. For example, a large SQL DB with 100 GB of local storage runs $248 per month on AWS versus $176 on Azure. In contrast, a medium Web front end goes for $52 monthly on AWS versus $119 on Azure and almost $600 on Google.
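The spreadsheet exercise behind these figures is simple arithmetic: multiply each resource’s rate by its count and by the hours (or gigabytes) in a month, then sum. A minimal sketch of that cost model follows; every hourly and per-GB rate below is an illustrative placeholder, not a quote from any provider’s price list.

```python
# Sketch of the spreadsheet-style cloud cost model described above.
# All rates are illustrative placeholders, not actual vendor prices.

HOURS_PER_MONTH = 730  # average hours in a month for 24x7 operation

servers = [
    # (description, count, hourly_rate_usd) -- hypothetical rates
    ("medium Linux web server",   3, 0.070),
    ("large Linux app server",    2, 0.140),
    ("large MySQL DB, multi-AZ",  2, 0.340),
]

storage = [
    # (description, size_gb, monthly_rate_per_gb_usd) -- hypothetical rates
    ("object store (S3-like)", 1024, 0.030),
    ("block store (EBS-like)", 1024, 0.100),
]

def monthly_cost(servers, storage):
    """Total monthly cost: server hours plus per-GB storage charges."""
    compute = sum(count * rate * HOURS_PER_MONTH
                  for _, count, rate in servers)
    disks = sum(size_gb * rate for _, size_gb, rate in storage)
    return compute + disks

print(f"estimated monthly total: ${monthly_cost(servers, storage):,.2f}")
```

Swapping in each provider’s actual rates for the same resource mix reproduces the kind of side-by-side comparison shown in the chart; the model deliberately omits data-transfer charges, which the full PlanForCloud run includes.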

The analysis is also overly simplified, assuming hourly pricing for systems operating 24×7, which completely neutralizes the dynamic pricing and scalability benefits of the cloud. Furthermore, anyone in that scenario could save a substantial amount of money (generally 60% or more on AWS) by using reserved instances with an annual lease (see CloudVertical’s comparison table for details). Complicating the calculus even further is the fact that Google offers sub-hour pricing with lower rates for sustained use. For example, using an instance for 100% of the billing cycle as we assumed nets an automatically applied 30% discount. Google’s model is much more attractive for highly variable workloads, while AWS is cheaper for sustained or reserved applications. Finally, we’ve discovered that PlanForCloud’s instance choices and prices aren’t always up to date, for example, it only includes a subset of the available RDS instances, meaning users should always double-check figures on the vendor’s own site.
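Google’s sustained-use arithmetic is worth spelling out. As publicly documented at the time, each successive quarter of the month is billed at a lower incremental rate (100%, 80%, 60% and 40% of the base price), which is how full-month usage nets the 30% discount mentioned above. A sketch of that tiered calculation (the tier boundaries and rates here are my reading of the 2015 scheme; verify against Google’s current price list):

```python
# Sketch of Google's 2015 sustained-use discount: each successive
# quarter of the month is billed at a lower incremental rate.
# Tier rates reflect my reading of the published 2015 scheme.

INCREMENTAL_RATES = [1.00, 0.80, 0.60, 0.40]  # one per quarter-month tier

def effective_fraction(usage_fraction):
    """Fraction of the full-month base price billed for an instance
    that runs `usage_fraction` (0.0 to 1.0) of the month."""
    billed = 0.0
    for i, rate in enumerate(INCREMENTAL_RATES):
        tier_start = i * 0.25
        # Portion of this quarter-month tier actually consumed
        used_in_tier = min(max(usage_fraction - tier_start, 0.0), 0.25)
        billed += used_in_tier * rate
    return billed

# Running all month bills 0.25*(1.0+0.8+0.6+0.4) = 0.70 of base:
# the automatically applied 30% discount cited above.
full_month = effective_fraction(1.0)
```

Note how the discount only kicks in beyond the first quarter of the month, which is why Google’s model favors sustained rather than bursty use of any single instance, even though its sub-hour billing favors short-lived workloads overall.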

AWS Still the Best Default Choice

Unfortunately, cloud pricing is like some Facebook relationships: it’s complicated. Storage is straightforward, but mapping different workloads to the most appropriate server instance isn’t a science. AWS has the richest service offerings, both in variety of instance types and in higher-level application services, so it’s a great place to start your cloud shopping. It probably isn’t the cheapest for any particular application, but it’s also not the most expensive. Still, it pays to shop around, and those doing some up-front planning may find cheaper and more appropriate cloud alternatives.


AWS Server Details

AWS Configuration Summary Source: PlanForCloud