Deployment of Load Balancing in Cloud Computing: Industrial Application and Benefits
Binay Kumar Pandey, A. P. Mukundan, Vinay Kumar Nassa, Digvijay Pandey, A. Shaji George, A. Shahul Hameed, Pankaj Dadheech
Copyright: © 2024 |Pages: 13
DOI: 10.4018/979-8-3693-1335-0.ch017

Abstract

Cloud computing combines virtualization, distributed computing, networking, software, and web services. A cloud consists of clients, datacenters, and servers, and offers fault tolerance, high availability, scalability, flexibility, low user overhead, a low cost of ownership, on-demand services, and more. Meeting these expectations requires a powerful load balancing algorithm. The load in question may concern memory, CPU capacity, latency, or network traffic. Load balancing distributes demand so that no node of a distributed system is overloaded while others sit idle, optimising resource use and job response time. Load balancers ensure that processors and network nodes carry comparable workloads; balancing methods may be initiated by the sender, by the recipient, or symmetrically by both. Using the divisible load scheduling theorem, a load balancing method can optimise throughput and latency for clouds of different sizes.
Chapter Preview

1. Introduction

Cloud computing combines virtualization, distributed computing, networking, software, and web services. A cloud consists of clients, datacenters, and servers, and offers fault tolerance, high availability, scalability, flexibility, low user overhead, a low cost of ownership, on-demand services, and more. Meeting these expectations requires a powerful load balancing algorithm. The load in question may concern memory, CPU capacity, latency, or network traffic. Load balancing distributes demand so that no node of a distributed system is overloaded while others sit idle, optimizing resource use and job response time. Load balancers ensure that processors and network nodes carry comparable workloads (Pandey, B. K., et al., 2022). Balancing methods may be initiated by the sender, by the recipient, or symmetrically by both. Using the divisible load scheduling theorem, a load balancing method can optimize throughput and latency for clouds of different sizes (Bessant, Y. A., et al., 2023).

The phrase “cloud computing” refers to a collection of practices spanning virtualization, distributed computing, networking, software, and online service provisioning. Clients, a centralized data storage facility, and a large number of servers spread around the globe are among the components that together make up a cloud. It offers on-demand service provisioning, fault tolerance, high availability, scalability, flexibility, reduced overhead for users, and a lower total cost of ownership, along with a wide range of other benefits (Anthony T. Velte, et al., 2010).

In view of these concerns, constructing a reliable load balancing algorithm is a matter of urgency. The load may affect memory capacity, latency, or the processing capability of the CPU. The term “load balancing” refers to distributing the workload across the nodes of a distributed system in order to improve resource utilization and task response time, and to avoid the situation in which some nodes are heavily occupied while others are idle or doing very little work (George, A. H., et al., 2021). Load balancing ensures that all processors in a system, or all nodes in a network, are handling approximately the same amount of work at any instant, which can be verified by monitoring the total quantity of work completed by each processor or node. A balancing exchange may be initiated by the sender or by the recipient; alternatively, it may take the symmetric form, in which sender and recipient take turns initiating the exchange (Parthiban, K., et al., 2021).
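The idea of keeping every node at roughly the same workload can be sketched in code. The greedy least-loaded heuristic below is one simple illustration of this principle, not the chapter's specific algorithm; the function name, task representation, and node count are assumptions made for the example.

```python
import heapq

def balance(tasks, node_count):
    """Assign each task to the currently least-loaded node (greedy
    least-loaded heuristic, one simple form of load balancing).

    tasks: list of task costs (e.g., CPU time or memory demand).
    Returns the resulting total load on each node.
    """
    # Min-heap of (current_load, node_id) so the lightest node is on top.
    heap = [(0, n) for n in range(node_count)]
    heapq.heapify(heap)
    loads = [0] * node_count
    # Placing the largest tasks first keeps the final loads more even.
    for cost in sorted(tasks, reverse=True):
        load, node = heapq.heappop(heap)
        loads[node] = load + cost
        heapq.heappush(heap, (load + cost, node))
    return loads

print(balance([5, 3, 8, 2, 7, 4], 3))  # → [10, 10, 9]
```

No node ends up substantially busier than the others, which is exactly the condition a load balancer monitors for.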

The fundamental purpose of this project is to design an efficient load balancing method built on the divisible load scheduling theorem. This makes it possible to maximize or minimize a variety of performance factors (for example, throughput and latency) for clouds of varied sizes, with the virtual topologies determined by the demand imposed by the application.
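Divisible load theory rests on a simple optimality condition: an arbitrarily divisible workload should be split among nodes in proportion to their processing speeds, so that all nodes finish at the same instant. The following minimal sketch illustrates that condition under assumed units (work units and work-units-per-second speeds); it is an illustration of the underlying theorem, not the chapter's implementation.

```python
def split_divisible_load(total_load, speeds):
    """Split a divisible workload across nodes in proportion to their
    processing speeds so that every node finishes simultaneously --
    the optimality condition of divisible load theory.

    total_load: total work units to distribute.
    speeds: work units each node processes per second (illustrative).
    Returns (per-node shares, common finish time).
    """
    total_speed = sum(speeds)
    # Each node's share is proportional to its speed.
    shares = [total_load * s / total_speed for s in speeds]
    # Every node then takes the same time: share / speed.
    finish_time = total_load / total_speed
    return shares, finish_time

shares, t = split_divisible_load(120, [4, 2, 6])
# shares = [40.0, 20.0, 60.0]; each node finishes at t = 10.0
```

A node twice as fast receives twice the work, so no node idles while another still computes; this is the sense in which the split optimizes both throughput and completion latency.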

The term “cloud computing” refers to a type of on-demand service, also known as “utility computing,” in which clients are offered shared resources, information, software, and other equipment in accordance with their requirements at a given point in time. The entire internet can be conceptualized as a cloud, which is one reason the term appears so often in discussions of internet services. Cloud computing can yield savings not only in operational costs but also in capital costs, which can be a significant benefit (Kumar, M. S., et al., 2021).

Figure 1. A cloud is used in network diagrams to depict the Internet
