A Proficient Approach for Load Balancing in Cloud Computing: Join Minimum Loaded Queue

Minakshi Sharma, Rajneesh Kumar, Anurag Jain
Copyright: © 2020 | Pages: 25
DOI: 10.4018/IJISMD.2020010102

Abstract

Cloud load balancing sustains services in the cloud environment while maintaining quality of service (QoS) parameters. An efficient load balancing algorithm should optimize these QoS parameters, which results in efficient scheduling. Most existing load balancing algorithms consider either response time or resource utilization constraints, but an efficient algorithm must consider both perspectives: that of the user and that of the cloud service provider. This article presents a load balancing strategy that allocates tasks to virtualized resources so as to maximize resource utilization with minimum response time. The proposed approach, join minimum loaded queue (JMLQ), is based on the existing join idle queue (JIQ) model, modified by replacing the idle servers in the I-queues with servers that have one task in their execution list. Simulation results in CloudSim verify that the proposed approach maximizes resource utilization and reduces response time in comparison to its other variants.
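To make the dispatch rule concrete, the sketch below captures the core JMLQ idea as stated in the abstract: servers report to the dispatcher's I-queue when their load falls to one task in execution, where JIQ would wait for them to go fully idle. Everything else here, including the class names, the single dispatcher, the assumption that servers at or below the threshold sit in the I-queue, and the random fallback when the I-queue is empty, is an illustrative assumption rather than the authors' implementation.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Random;

// Minimal sketch of the JMLQ dispatch rule. The reporting threshold is the
// paper's defining change from JIQ: servers with ONE task in execution are
// kept in the I-queue, where JIQ keeps only idle servers (threshold 0).
public class JmlqSketch {
    static class Server {
        final int id;
        int tasksInExecution = 0;
        Server(int id) { this.id = id; }
    }

    static final int REPORT_THRESHOLD = 1; // JIQ would use 0 (idle)

    final Deque<Server> iQueue = new ArrayDeque<>(); // lightly loaded servers
    final List<Server> servers = new ArrayList<>();
    final Random fallback = new Random(1);           // assumption: random fallback

    JmlqSketch(int n) {
        for (int i = 0; i < n; i++) servers.add(new Server(i));
        iQueue.addAll(servers); // assumption: idle servers also qualify initially
    }

    // Dispatch a job: take a server from the I-queue if one is available,
    // otherwise fall back to a random server (an assumption of this sketch).
    Server dispatch() {
        Server s = iQueue.pollFirst();
        if (s == null) s = servers.get(fallback.nextInt(servers.size()));
        s.tasksInExecution++;
        // Invariant: a server sits in the I-queue iff its load is at most the
        // threshold, so re-register it at the back if it still qualifies.
        if (s.tasksInExecution <= REPORT_THRESHOLD) iQueue.addLast(s);
        return s;
    }

    // Job completion: a server whose load drops back to the threshold sends a
    // pull message, i.e., re-registers itself with the dispatcher.
    void complete(Server s) {
        s.tasksInExecution--;
        if (s.tasksInExecution == REPORT_THRESHOLD) iQueue.addLast(s);
    }

    public static void main(String[] args) {
        JmlqSketch lb = new JmlqSketch(4);
        Server first = lb.dispatch();
        Server second = lb.dispatch();
        System.out.println("job 1 -> server " + first.id + ", job 2 -> server " + second.id);
        lb.complete(first); // first drops back to zero tasks; already registered
    }
}
```

Plausibly, lowering the reporting condition from "idle" to "one task in execution" keeps the I-queue populated under heavier load, which would be consistent with the reduced response time the abstract reports.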
Article Preview

Introduction

Load balancing in the cloud environment is the process of distributing the workload and computing resources among the servers available on the cloud network. A cloud load balancer manages workload demands by spreading them across numerous computer networks or servers. The role of the load balancer grows in significance as Internet traffic rises rapidly with surging cloud applications. Cisco's annual report forecasts global data center traffic to reach 19.5 zettabytes (ZB) per year by 2021, up from 6.0 ZB per year in 2016 (ETICO, 2018). These predictions imply that the workload on servers is growing so fast that it overloads them, particularly servers dedicated to popular web applications, so load balancing in such an environment raises new challenges. In a cloud network that consists of thousands of servers at the front end alone, the capability to scale in and out with the elasticity of demand is highly desirable. Hardware load balancers are a poor fit for the cloud environment because of their inability to scale with this elasticity of demand. Most of them come with standard overprovisioning requirements, and when resource utilization is low, additional manpower is still needed to configure and maintain the devices. Thus even the best hardware load balancers tend to increase an enterprise's total cost of ownership, which is undesirable in the cloud environment. These limitations encouraged the development of distributed software load balancers that can scale elastically to meet consumer demand in the cloud.

For large-scale cloud networks, scheduling user requests onto suitable cloud resources is considered an NP-hard problem (Mishra, Parida & Sahoo, 2018). To achieve scalability, a distributed load balancing algorithm can be designed around distributed dispatchers, where each dispatcher handles jobs independently and only a fraction of the jobs arrives at any particular dispatcher. In such a load balancing scenario, messages can flow in two directions: from dispatchers to servers (push messages) and from servers to dispatchers (pull messages). Under a push-based policy, the dispatcher participates actively: on the arrival of a job it sends probing messages to the servers and waits for their responses, which feed back the information needed for a dispatching decision, e.g., queue lengths; the job is then dispatched to a particular server according to a dispatching distribution, and the message exchange is bidirectional. Under a pull-based policy, the dispatcher participates passively and listens for server responses: the decision to dispatch a job to a particular resource is governed by the dispatcher based on the pull messages sent from the servers (Zhou, Wu, Tan, Sun, & Shroff, 2017). The dispatcher stores the IDs of the servers that satisfy a specific condition at a particular time slot t; only pull messages pass from the servers to the dispatcher, so the message exchange is unidirectional. Figure 1 shows the system model for general load balancing under these two scenarios, and a toy sketch of both policies follows the figure.

Figure 1. A system model for general load balancing for a cluster of parallel servers
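The two message directions can be contrasted in a few lines of code. The sketch below is illustrative only: the probe count of two servers for the push policy, the idle-reporting condition for the pull policy, and the random fallback when no server has reported are assumptions of this example, not details from the article.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Random;

// A toy contrast of the two message directions described above. The probe
// count d = 2 for the push policy and the "report when idle" condition for
// the pull policy are illustrative assumptions.
public class DispatchPolicies {
    static class Server {
        final int id;
        int queueLength = 0;
        Server(int id) { this.id = id; }
    }

    static final Random RNG = new Random(42);

    // Push-based: the dispatcher actively probes d servers (sampled with
    // replacement) for their queue lengths, then picks the shortest queue.
    // Probes go out and feedback comes in: the exchange is bidirectional.
    static Server pushDispatch(List<Server> servers, int d) {
        Server best = null;
        for (int i = 0; i < d; i++) {
            Server probed = servers.get(RNG.nextInt(servers.size()));
            if (best == null || probed.queueLength < best.queueLength) best = probed;
        }
        best.queueLength++;
        return best;
    }

    // Pull-based: servers satisfying a condition (here: idle) have already
    // reported their IDs; the dispatcher just consumes one. Only pull
    // messages flow, so the exchange is unidirectional.
    static Server pullDispatch(Deque<Server> reported, List<Server> all) {
        Server s = reported.pollFirst();
        if (s == null) s = all.get(RNG.nextInt(all.size())); // fallback assumption
        s.queueLength++;
        return s;
    }

    public static void main(String[] args) {
        List<Server> servers = new ArrayList<>();
        for (int i = 0; i < 8; i++) servers.add(new Server(i));

        Server viaPush = pushDispatch(servers, 2);
        System.out.println("push policy sent the job to server " + viaPush.id);

        Deque<Server> idleReports = new ArrayDeque<>(servers); // pull messages received
        Server viaPull = pullDispatch(idleReports, servers);
        System.out.println("pull policy sent the job to server " + viaPull.id);
    }
}
```

The practical difference is where the messaging cost lands: the push policy pays a probing round trip on every job arrival, while the pull policy moves that traffic off the arrival path, which is why JIQ-style designs, including the JMLQ approach proposed here, scale well with the number of servers.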
