Understanding the results
In the previous part of this paper we discussed the essential differences between cloud-based applications and on-premises ones when it comes to testing. A major difference lies in the degree of control the cloud service provider gives you over your test environment. If you require tests at a deeper level, the answer in all probability lies in writing a program that exercises your application the way a user would. Such a program can check your application at regular intervals to ensure that it is functioning as expected. Any aberrations can be detected and administrators alerted in near real time.
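Such a periodic user-mimicking check can be sketched as a small loop. This is a minimal illustration, not a full monitoring agent; `check_fn` and `alert_fn` are hypothetical stand-ins for whatever exercises your application and notifies your administrators:

```python
import time

def monitor(check_fn, interval_seconds, alert_fn, max_cycles=None):
    """Call check_fn at regular intervals and alert on any failure.

    check_fn  -- hypothetical callable that exercises the application
                 like a user would; returns True on success
    alert_fn  -- hypothetical callable that notifies an administrator
    max_cycles -- stop after this many checks (None = run forever)
    """
    cycle = 0
    while max_cycles is None or cycle < max_cycles:
        if not check_fn():
            alert_fn("application check failed")
        cycle += 1
        if max_cycles is None or cycle < max_cycles:
            time.sleep(interval_seconds)
```

In a real deployment the loop would run indefinitely with an interval of, say, 60 seconds; `max_cycles` is only there to make the sketch easy to try out.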
In most other cases, test results may need deeper analysis, since subtle error conditions can give rise to situations that are not so easily detected. This analysis can take the following forms –
Simple Boolean checks – this is the simplest method possible. It checks whether your application is responding at all. You can compare it to the ‘ping’ command used to check networks: if you get a response, it confirms that the application is online and the intervening network is working.
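A minimal sketch of such a Boolean check, using only the Python standard library; the URL is a placeholder for your application's endpoint:

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_alive(url, timeout=5):
    """Boolean check: True if the application answers at all,
    False on any connection failure or error status."""
    try:
        with urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 400
    except (URLError, OSError):
        return False
```

Like ‘ping’, this tells you only that something answered, not that the answer was correct; the deeper checks below build on it.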
Response threshold – under normal conditions of use, your application is expected to respond within a certain number of milliseconds. In the cloud, you expect this time to be maintained fairly rigidly, because when the application load exceeds a set value a new instance should be created automatically. If the response time exceeds the limits you set, it indicates a problem that needs to be investigated.
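A threshold check only needs a timer around the call. A sketch, where `call_fn` is a hypothetical callable that performs one request against your application:

```python
import time

def exceeds_threshold(call_fn, threshold_ms):
    """Time one call and report whether it breached the limit.

    Returns (elapsed_ms, breached) so the caller can both log the
    measurement and decide whether to raise an alert.
    """
    start = time.perf_counter()
    call_fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms, elapsed_ms > threshold_ms
```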
Check for consistency of response – quite often, a set of answers from an application should correlate with each other. For example, the number of items in your shopping cart must match the number on the invoice. A discrepancy could indicate a serious error that needs to be corrected straightaway.
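Taking the shopping-cart example, a consistency check can compare item counts from the two responses. The line items here are illustrative; a real check would parse them out of the application's actual responses:

```python
from collections import Counter

def cart_matches_invoice(cart_lines, invoice_lines):
    """Compare item quantities between cart and invoice.

    Counter ignores ordering, so only the items and their counts
    have to agree, not the sequence they were listed in.
    """
    return Counter(cart_lines) == Counter(invoice_lines)
```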
Checking statistically – while there is value in checking spot parameters such as the response time at a given instant, it is the statistical value that indicates the long-term trend. Good monitoring programs can maintain statistical means and compare the spot value against them to determine the size of the variation. This makes the checking more intelligent.
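One common way to implement this, sketched here as an assumption rather than any particular product's method, is to keep a history of response times and flag spot values that drift too many standard deviations from the running mean:

```python
import statistics

class ResponseStats:
    """Keep a history of response times and flag anomalous spot values."""

    def __init__(self, max_deviations=3.0):
        self.samples = []
        self.max_deviations = max_deviations

    def record(self, value):
        self.samples.append(value)

    def is_anomalous(self, spot_value):
        """True when spot_value is too many standard deviations
        away from the mean of the recorded history."""
        if len(self.samples) < 2:
            return False  # not enough history to judge yet
        mean = statistics.mean(self.samples)
        stdev = statistics.stdev(self.samples)
        if stdev == 0:
            return spot_value != mean
        return abs(spot_value - mean) / stdev > self.max_deviations
```

A production system would typically use a bounded window or exponential smoothing rather than an ever-growing list, but the comparison against the mean is the same idea.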
Triggering responses – based on how you have programmed your testing environment, you can have it trigger a response to critical events. Since cloud-based systems are natively well suited to connecting with mobile and smartphone users, one response can be to send an email or an SMS to an administrator. This ensures that the error condition is handled faster.
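The dispatch logic can be kept independent of any particular mail or SMS gateway by injecting the senders. Both `send_email` and `send_sms` below are hypothetical callables standing in for whatever gateway your platform provides:

```python
def alert_administrator(message, send_email, send_sms, critical=False):
    """Trigger responses to an event.

    send_email / send_sms -- hypothetical callables wrapping your
    actual mail and SMS gateways. Email goes out for every alert;
    SMS is reserved for critical events to avoid alarm fatigue.
    """
    send_email(message)
    if critical:
        send_sms(message)
```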
Adaptive systems – the more capable monitoring programs come with machine learning and artificial intelligence. They are able to interpret results and, based on this learning, adjust their future interpretation. They can also draw inferences and initiate deeper checking if required. For example, if a program is producing output faster than anticipated, it could mean that a module it depends on is not actually doing a computation but is returning a default value.
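The "faster than anticipated" inference in the example can be captured even without machine learning: a response far *below* the historical mean is as suspicious as one far above it. A minimal sketch of that rule, under the assumption that response-time history is available in milliseconds:

```python
import statistics

def suspiciously_fast(history_ms, spot_ms, max_deviations=3.0):
    """True when a response is so much faster than the historical
    mean that a deeper check is warranted, e.g. a dependency may be
    returning a default value instead of computing."""
    if len(history_ms) < 2:
        return False  # not enough history to judge yet
    mean = statistics.mean(history_ms)
    stdev = statistics.stdev(history_ms)
    if stdev == 0:
        return spot_ms < mean
    return (mean - spot_ms) / stdev > max_deviations
```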
Managing corrections – many monitoring programs can follow through on a fault until it is finally resolved. The system works as both a monitoring device and a corrections manager to ensure that the error condition detected is not overlooked.
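A corrections manager of this kind is essentially a register of open faults that only shrinks when someone explicitly resolves an entry. A minimal sketch, with illustrative fault identifiers:

```python
class FaultTracker:
    """Track detected faults until each one is explicitly resolved,
    so that no error condition is overlooked."""

    def __init__(self):
        self._open = {}

    def report(self, fault_id, description):
        # First report wins; repeated detections don't overwrite it
        self._open.setdefault(fault_id, description)

    def resolve(self, fault_id):
        self._open.pop(fault_id, None)

    def outstanding(self):
        """All faults still awaiting correction."""
        return dict(self._open)
```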
Multiple warnings – monitoring programs can use multiple methods to attract the attention of administrators to critical problems. A background script decides which problems are critical and which ones aren’t. It is also able to decide on the individuals to be alerted based on the classification of the problem. As mentioned earlier, cloud-based systems are tightly integrated with mobile phone systems and use this capability very effectively to generate appropriate warnings.
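The classification script described above can be reduced to a keyword-to-severity mapping and a severity-to-contacts mapping. All the rule names and contact lists here are illustrative assumptions:

```python
def route_alert(problem, severity_rules, contacts):
    """Decide whom to alert based on the problem's classification.

    severity_rules -- maps a keyword found in the problem text to a
                      severity level (illustrative scheme)
    contacts       -- maps a severity level to the people to notify
    """
    severity = "low"
    for keyword, level in severity_rules.items():
        if keyword in problem:
            severity = level
            break
    return contacts.get(severity, contacts["low"])
```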
As is evident from the sections above, cloud-based application monitoring systems have evolved into extremely capable tools that can take over a number of managerial tasks themselves. This allows faster response to emerging problem areas and lets administrators correct critical issues much earlier than would otherwise be possible. These monitoring systems are also being used to determine when an application is experiencing light loads, so that computationally heavy tasks can be scheduled for those periods. This kind of optimization can ultimately reduce costs by distributing load over time and ensuring that fewer processors are hired.
About the Guest Author:
Sanjay Srivastava has been active in computing infrastructure and has participated in major projects on cloud computing, networking, VoIP and in creation of applications running over distributed databases. Due to a military background, his focus has always been on stability and availability of infrastructure. Sanjay was the Director of Information Technology in a major enterprise and managed the transition from legacy software to fully networked operations using private cloud infrastructure. He now writes extensively on cloud computing and networking and is about to move to his farm in Central India where he plans to use cloud computing and modern technology to improve the lives of rural folk in India.