Exploiting Hardware Differences in Cloud Data Centers
Every large cloud vendor operates data centers across the world, each containing thousands of disks, CPUs, RAM chips and networking devices, and all of this equipment is of differing vintage. Even if all data centers start off with identical equipment (seldom the case), over time they are subject to differing schedules of hardware maintenance and component replacement. With the passage of time, even the most strictly managed data centers tend to become heterogeneous.
The end result is that performance differences begin to appear between cloud instances. Since cloud usage is typically charged by the hour, higher-performing instances can yield significant savings. The question is how to find the clusters that are faster and more efficient.
Recent research at Aalto University, Finland, and Deutsche Telekom Laboratories, Germany, focused on understanding hardware variation within the data centers of a single vendor and studied the resulting performance differences.
The researchers used standard operating system commands to obtain the CPU details of their instances and verified the output using other system calls (a minimal sketch of one such check appears after the table below). After considerable effort, they determined that the vendor's fleet contained the following CPU types:
S. No. | CPU Name | % availability in 2011 | % availability in 2012
1 | E5507 | 58% | 40%
2 | E5430 | 29% | 17%
3 | E5645 | 5% | 42%
4 | 2218HE | 4% | 1%
5 | 270 | 4% | 0%
The trend is clear: the vendor is gradually phasing out the processors at serial numbers 1, 2, 4 and 5 in the table above and switching over to the E5645 (serial number 3).
When the CPU mix was broken down by availability zone, in some zones the newer E5645 processor was barely present (under 10%), while in others its share had risen to nearly 90%. Unsurprisingly, the newest data centers had the most modern processors. When the throughput of the different processors was compared, the differences were plain: the newest CPUs delivered 1.25 times the throughput of the next-best ones, and more than 1.6 times that of the oldest CPUs.
Armed with these results, the research team calculated the cost implications. If a large task required 100 server instances running for a year, switching those instances from the oldest processors to the newest (a 1.6-times performance advantage) would deliver a net saving of $40,664. For a small business, a saving of that size is very significant.
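A rough sketch of that arithmetic is below. The hourly rate is an assumed, illustrative figure (the study's exact pricing inputs are not reproduced here), and with it the calculation lands in the same ballpark as the quoted $40,664:

```python
# Rough sketch of the savings arithmetic. HOURLY_RATE is an assumed,
# illustrative figure; the study's exact pricing inputs are not shown here.
HOURS_PER_YEAR = 365 * 24   # 8,760 hours
INSTANCES = 100             # size of the hypothetical job
SPEEDUP = 1.6               # newest CPU vs. oldest CPU
HOURLY_RATE = 0.12          # assumed $ per instance-hour (illustrative)

old_hours = INSTANCES * HOURS_PER_YEAR   # instance-hours on the old CPUs
new_hours = old_hours / SPEEDUP          # same work finished 1.6x faster
savings = (old_hours - new_hours) * HOURLY_RATE

print(f"Instance-hours saved: {old_hours - new_hours:,.0f}")
print(f"Approximate savings:  ${savings:,.0f}")
```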
Users who take care to select data centers with newer infrastructure automatically get better processors and other hardware; one way to act on this is sketched below. In times like these, when every dollar counts, this research offers companies a blueprint for further reducing expenditure.
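As a purely illustrative sketch of such a selection strategy, the loop below keeps only instances that report the newer CPU. The launch_instance, terminate_instance and get_cpu_model helpers are hypothetical placeholders standing in for whatever provisioning API and CPU check a given vendor and setup provide; they are not real library calls:

```python
# Hypothetical selection loop: keep only instances backed by a preferred CPU.
# launch_instance(), terminate_instance() and get_cpu_model() are placeholders
# for a real provisioning API and CPU check, passed in by the caller.
PREFERRED = "E5645"

def acquire_fast_instances(needed, launch_instance, terminate_instance, get_cpu_model):
    """Launch instances until `needed` of them report the preferred CPU."""
    kept = []
    while len(kept) < needed:
        instance = launch_instance()
        if PREFERRED in get_cpu_model(instance):
            kept.append(instance)          # newer CPU: keep it
        else:
            terminate_instance(instance)   # older CPU: release and retry
    return kept
```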
Be Part of Our Cloud Conversation
Our articles are written to provide you with tools and information to meet your IT and cloud solution needs. Join us on Facebook and Twitter.
About the Guest Author:
Sanjay Srivastava has been active in computing infrastructure and has participated in major projects on cloud computing, networking, VoIP and in creation of applications running over distributed databases. Due to a military background, his focus has always been on stability and availability of infrastructure. Sanjay was the Director of Information Technology in a major enterprise and managed the transition from legacy software to fully networked operations using private cloud infrastructure. He now writes extensively on cloud computing and networking and is about to move to his farm in Central India where he plans to use cloud computing and modern technology to improve the lives of rural folk in India.