The Case for Migrating From a Dedicated Server Environment to the Cloud
In my previous post I discussed various aspects of server load and its implications. The crux of the problem is the difference in perspective between the server administrator and the operations manager looking after the website. As a result, the operations manager keeps complaining about the poor performance of the website, while the system administrator maintains that everything is fine and couldn’t be better.
Why this difference in perspective?
Let’s look at the systems administrator’s point of view. As long as the server is up and running, he claims there is no problem on his side. The parameters he monitors are probably server load and response time. Beyond this, he has no reason to suspect anything.
The operations manager looks at different aspects of website performance. He watches the download speed of web pages, worries about slow online transactions, and is bothered when users complain that they cannot reach the site.
Now, when the operations manager complains to the server administrator, the two are clearly talking about different aspects of performance. In reality, once the server is loaded beyond a certain level, website performance starts to deteriorate. The server administrator simply does not see this.
Now, the only solution is to add another dedicated server when the load reaches a certain point. If you run a dedicated server in a traditional data center, this means doubling your expenditure on hardware. The management of many organizations will never buy that argument, so the website hobbles along without delivering the desired performance. This can be a disastrous situation.
The cloud as an alternative
This is where the system administrator and the operations manager can find a solution together. IaaS (Infrastructure as a Service) is an elegant answer to both their problems. Here, you can specify under what load conditions a server instance should be added. This has several advantages. Your website performance remains stable at all times. If traffic suddenly increases beyond a certain level and begins to affect performance, additional resources are added automatically. When demand on the servers drops, those extra resources are released again. This elasticity is what keeps the website highly available.
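To make the idea concrete, here is a minimal sketch of the kind of threshold-based scale-out and scale-in decision described above. It is illustrative only: the metric source, the thresholds, and the set_desired_capacity call are hypothetical placeholders standing in for whatever monitoring and provisioning APIs your IaaS provider actually exposes.

```python
# A minimal sketch of a threshold-based auto-scaling decision.
# The metric source and the provisioning call are placeholders; a real
# deployment would use the cloud provider's monitoring and scaling APIs.

import random
import time

SCALE_OUT_CPU = 70.0   # add an instance when average CPU rises above this (%)
SCALE_IN_CPU = 30.0    # remove an instance when average CPU falls below this (%)
MIN_INSTANCES = 2      # never shrink below the baseline capacity
MAX_INSTANCES = 10     # cap spending by limiting the fleet size


def average_cpu_utilization() -> float:
    """Placeholder: return the fleet-wide average CPU utilization (%)."""
    return random.uniform(10, 90)   # stand-in for a real monitoring query


def set_desired_capacity(count: int) -> None:
    """Placeholder: ask the IaaS provider to run `count` instances."""
    print(f"desired capacity -> {count}")


def scaling_loop(iterations: int, poll_seconds: float = 1.0) -> None:
    """Poll the CPU metric and grow or shrink the fleet within the limits."""
    desired = MIN_INSTANCES
    for _ in range(iterations):
        cpu = average_cpu_utilization()
        if cpu > SCALE_OUT_CPU and desired < MAX_INSTANCES:
            desired += 1            # traffic spike: add a server instance
            set_desired_capacity(desired)
        elif cpu < SCALE_IN_CPU and desired > MIN_INSTANCES:
            desired -= 1            # demand dropped: release the extra instance
            set_desired_capacity(desired)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    scaling_loop(iterations=10, poll_seconds=0.5)
```

In a real cloud setup you would not write this loop yourself; you would declare the same thresholds and minimum/maximum capacity in the provider's auto scaling configuration and let the platform enforce them.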
Auto scaling features in a cloud environment
Every cloud service provider has its own flavor of auto scaling, but in short, the feature allows resources to expand or shrink automatically according to demand. As I explained earlier, there is no direct way to estimate server load from traffic alone; you have to look at CPU load and memory utilization. The real difficulty in server performance is handling spikes and sudden surges in load, which are hard to provision for in advance. Load balancing becomes essential under these circumstances, and the cloud computing environment once again scores high on this count: distributing load evenly between server instances can absorb sudden spikes in traffic. You can learn more about load balancing here.
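As a rough illustration of what a load balancer does, the snippet below rotates incoming requests across a pool of server instances in round-robin fashion. The instance addresses are made-up examples; in practice a managed load balancer from your cloud provider handles this for you, together with health checks and failover.

```python
# A rough sketch of round-robin load distribution across server instances.
# The instance addresses are made-up examples; a managed load balancer
# normally does this for you.

from itertools import cycle

INSTANCES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # example backend servers
_next_backend = cycle(INSTANCES)


def route_request(request_id: int) -> str:
    """Send each incoming request to the next backend in turn."""
    backend = next(_next_backend)
    print(f"request {request_id} -> {backend}")
    return backend


if __name__ == "__main__":
    for i in range(7):     # a small burst of requests is spread evenly
        route_request(i)
```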
Conclusion
Cloud computing can ensure that the performance of your online operations is not compromised by fluctuating load. The solution is both elegant and economical.
Be Part of Our Cloud Conversation
Our articles are written to provide you with tools and information to meet your IT and cloud solution needs. Join us on Facebook and Twitter.
About the Guest Author:
Sankarambadi Srinivasan, ‘Srini’, is a maverick writer, technopreneur, geek and online marketing enthusiast rolled into one. He began his career as a Naval weapon specialist. Later, he sold his maiden venture and became head of an offshore Database administration company in Mumbai. He moved on as Chief Technology Officer of one of the largest online entities, where he led consolidation of 300 online servers and introduced several Web 2.0 initiatives. He holds a Master’s degree in Electronics and Telecommunication.