
At this stage of the game, the many value propositions of cloud IaaS services have become self-evident: a computing revolution that has been realized in almost every conceivable way. In particular, the beauty of abstracting away the toil of infrastructure management is a clear boon for the enterprise - and in this sense the cloud has delivered on its promise.

However, at the same time, the faith we place in the magic of "it just works" also presents an opportunity for improved transparency into the nuts and bolts of these amazing platforms that many of us now consider as basic utilities for the modern world, worthy of their place alongside water, electricity, and animated GIFs.

What exactly is an AWS ECU, and how does it compare to the Xeon E3 in my on-premises server? How does a Google n1-standard-2 compare to an Azure DSv2 in terms of price vs performance? From a financial perspective, does it make better sense to spin up one larger instance or a couple of smaller ones? How much faster are CPU-optimized instances compared to standard ones? Is performance constant, or is it affected by events such as major retail holidays? Which cloud vendor took the biggest performance hit when they patched Meltdown?

We don't know!

But these are the types of questions that CloudSmart hopes to answer. Each week our software automatically spins up a range of VM instance types on Amazon Web Services, Google Cloud, and Microsoft Azure (more vendors to come!), then runs popular benchmarking tools designed to test CPU, disk I/O, memory, and GPU performance.
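To give a flavour of what that automation involves, here is a minimal Python sketch of the AWS leg of such a pipeline using boto3. The AMI ID, instance types, and the benchmark baked into the user data are illustrative placeholders, not our production tooling:

import boto3

# Illustrative only: a trimmed-down weekly benchmark launcher.
INSTANCE_TYPES = ["t2.micro", "m4.large", "c4.xlarge"]  # sample of the fleet
AMI_ID = "ami-xxxxxxxx"                                 # placeholder for a CentOS 7.4 image

# User data runs once at boot: install a CPU benchmark, run it, then
# (in the real pipeline) ship the results somewhere for collection.
USER_DATA = """#!/bin/bash
yum -y install sysbench
sysbench cpu run > /tmp/cpu_result.txt
# ...upload /tmp/cpu_result.txt to a results bucket...
"""

ec2 = boto3.resource("ec2", region_name="us-east-1")

for instance_type in INSTANCE_TYPES:
    ec2.create_instances(
        ImageId=AMI_ID,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "weekly-benchmark"}],
        }],
    )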

Here's the deal... We'll publish the latest data right here for you, all in black and white, throw in the most recent pricing information, and top it off with comparative results from well-known on-premises hardware configurations. From there you can draw your own conclusions, and we'll chime in occasionally when we think we've found something interesting. If this has piqued your curiosity, then bookmark this page and come back anytime you need a fresh serving of digital insight.

Results shown: 2018-01-01 to 2018-04-01, all benchmarks and instance types.

Notes:

Operating Systems: CentOS 7.4 was used for all benchmarks except the TensorFlow benchmarks focused on GPU performance, which were run on Ubuntu 16.04

Benchmarks: for the curious - information on how the specific benchmarking tools were acquired & run can be found here

Data Normalization: To do a price vs performance comparison on benchmarks like C-ray where lower is better, the data needs to be normalized so that higher is better. The formula used to do this was:
Range = Max + Min
Normalized = (1 - ((100 / Range) * Value * 0.01)) * Range
which simplifies to Normalized = Max + Min - Value, i.e. each result is reflected within its original range.

Where:
Max = the largest value in the current data set before normalizing
Min = the smallest value in the current data set before normalizing
Value = a given value in the current data set that this formula is applied to
Normalized = The normalized result of a given value in the current dataset
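To illustrate, here is a small Python sketch of that reflection applied to a hypothetical set of C-ray render times (the numbers are made up for the example):

# Reflect "lower is better" results so that higher is better, using
# Normalized = (1 - ((100 / Range) * Value * 0.01)) * Range,
# which is equivalent to Normalized = Max + Min - Value.

def normalize(values):
    rng = max(values) + min(values)  # "Range" in the formula above
    return [(1 - ((100 / rng) * v * 0.01)) * rng for v in values]

# Hypothetical C-ray render times in seconds (lower is better)
render_times = [42.0, 18.5, 27.3]
print(normalize(render_times))  # -> roughly [18.5, 42.0, 33.2] (higher is better)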

Hardware Specifications: the hardware specifications for the Dell PowerEdge R230 comparison server can be found here

Per Hour Pricing for the Dell PowerEdge R230 is based on these assumptions:
You are hosting the server yourself. In reality, if you were leasing the server from a hosting company you'd likely pay a higher rate of at least $0.36 USD p/h
An initial purchase price of $2,000 USD divided over a useful lifespan of 5 years ($0.046 p/h)
A single 250W power supply with the machine running on average at 50% capacity, a Power Usage Effectiveness of 1.7, and an average electricity price of $0.11 per kWh ($0.023 p/h)
A real estate cost of $10 per square foot per month with a footprint of 8 square feet ($0.11 p/h)

This gives a total operating cost of $0.179 USD p/h
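If you'd like to check our working, the arithmetic reduces to a few lines of Python. The hour conversions (43,800 hours in 5 years, roughly 730 hours in a month) and the reading of the real estate cost as per month are our assumptions:

# Reproduce the Dell PowerEdge R230 per-hour operating cost from the
# assumptions above. Assumes the $10/sq ft real estate cost is per month.

HOURS_PER_YEAR = 365 * 24               # 8,760
HOURS_PER_MONTH = HOURS_PER_YEAR / 12   # ~730

purchase = 2000 / (5 * HOURS_PER_YEAR)          # ~$0.046 p/h
power = (0.250 * 0.5 * 1.7) * 0.11              # 250 W at 50% load, PUE 1.7, $0.11/kWh -> ~$0.023 p/h
real_estate = (10 * 8) / HOURS_PER_MONTH        # 8 sq ft at $10/sq ft/month -> ~$0.11 p/h

total = purchase + power + real_estate
print(f"${total:.3f} p/h")                      # ~$0.179 p/h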

GPU benchmarks: are performed using NVIDIA's CUDA Toolkit and Google's TensorFlow benchmarking scripts.