Introduction
The cloud computing paradigm is an internet-based model composed of extensible and scalable computing entities that require minimal management effort and service-provider interaction (Mell & Grance, 2011). Cloud computing promotes availability, scalability, reliability and portability in a computing system (Siegel & Perdue, 2012). The anytime-anywhere access to resources provided by the cloud computing environment, combined with the storage capacity on offer, has improved the quality of service for end-users to a great extent (Khaire, 2017). Likewise, cloud computing benefits markets and enterprises by reducing initial investment and capital expenditure while promoting industrial specialization and resource utilization (Khaire, 2017).
Cloud computing embodies the notion of “pay for what you use” and “infinite availability”, that is, use as much as you want. Such a service is viable only when the backend of the system is robust, proficient and flexible. One of the factors driving the efficiency of a cloud computing backend is its virtualized environment (Kumar & Charu, 2015). The virtualized infrastructure of cloud computing offers virtual versions of operating system, server, storage and network resources to cloud users, thereby increasing efficiency, throughput and overall cost-effectiveness (Vaezi & Zhang, 2017). Microsoft Azure and Amazon Web Services (AWS) are the most popular and scalable cloud computing services offered to cloud users (Kotas et al., 2018). Therefore, even though they are different, cloud computing and virtualization share a close connection and a common bond. To sustain service to cloud users, the cloud service provider has to maintain the standard of quality service without fail. For efficient performance, the main concern of any cloud computing system is balancing the workload shared among its virtualized components. Proper scheduling and load balancing improve response time, processing time, resource utilization, overall execution time, throughput, scalability and associated overheads (Deepa & Cheelu, 2017). Enhancing scheduling and balancing techniques improves the quality of cloud service, an area that attracts the attention of much development and research work throughout the globe.
Performance is an essential property of any cloud computing system. It determines the functional efficacy of the system and the improvements necessary to harness its performance (Jacob & Raj, 2019). Cloud performance can be achieved through parallel computing, load balancing and job scheduling (Khaire, 2017), and should be complete, efficient and guaranteed. This paper delves into the ambit of scheduling, load allocation and balancing techniques. Scheduling in cloud computing is a set of policies that regulate which task of the computer system is taken up next. Load balancing, on the other hand, is a process that ensures no node of the system remains idle while others are over-utilized. Load balancing can be static or dynamic in nature. In static load balancing, prior knowledge of each node’s specification, such as memory, bandwidth or processing elements, is required, and using this information the load is distributed accordingly at compile time (Deepa & Cheelu, 2017). In dynamic load balancing, by contrast, the load distribution happens at run time: decisions to shift load between heavily and lightly loaded nodes are made dynamically (Deepa & Cheelu, 2017). These load balancing properties help to achieve prioritization and efficient allocation of resources, thereby contributing to cloud service and performance.
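The static/dynamic distinction can be sketched in a few lines of Python. This is a minimal illustration with hypothetical node capacities and task costs, not the hybrid algorithm evaluated in this paper: the static balancer fixes its assignment in advance from node capacities alone, while the dynamic balancer consults the actual load on each node as tasks arrive.

```python
from itertools import cycle

def static_balance(tasks, capacities):
    """Static: the assignment is fixed in advance from node capacities
    alone (weighted round-robin); actual task costs are never consulted."""
    order = [i for i, cap in enumerate(capacities) for _ in range(cap)]
    nodes = cycle(order)
    return [next(nodes) for _ in tasks]

def dynamic_balance(tasks, capacities):
    """Dynamic: each task goes, at run time, to the node with the lowest
    current load-to-capacity ratio."""
    load = [0.0] * len(capacities)
    assignment = []
    for cost in tasks:
        node = min(range(len(capacities)), key=lambda i: load[i] / capacities[i])
        load[node] += cost
        assignment.append(node)
    return assignment

# Hypothetical workload: six tasks of varying cost on two nodes,
# where node 0 has twice the capacity of node 1.
tasks = [5, 1, 8, 2, 7, 3]
capacities = [2, 1]
print(static_balance(tasks, capacities))   # fixed 0,0,1 pattern regardless of cost
print(dynamic_balance(tasks, capacities))  # reacts to the costly tasks 8 and 7
```

Note that the static assignment repeats the same 0,0,1 pattern however unevenly the task costs fall, whereas the dynamic assignment steers later tasks away from whichever node the large tasks have already burdened.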
This paper assesses the performance of a cloud computing system with respect to response time (RT) and the virtual machines’ CPU utilization. The analysis is performed on sets of one hundred (100) and five hundred (500) heterogeneous cloudlets as they are allocated from the data centre to the virtual machines using the CloudSim simulator. The performance is evaluated by incorporating a hybridization of scheduling, allocation and load balancing algorithms under various environmental setups. The major contributions of this work are as follows: