Task Allocation Model Based on Hierarchical Clustering and Impact of Different Distance Measures on the Performance

Harendra Kumar, Isha Tyagi
Copyright: © 2020 | Pages: 29
DOI: 10.4018/IJFSA.2020100105

Abstract

This article presents a new strategy for the problem of task clustering and allocation in very large distributed real-time systems, in which software is organized hierarchically and hardware may span various shared or dedicated links. Execution and communication times are treated as fuzzy numbers. Existing strategies for task clustering and allocation are based on either executability or communication. The analytical model in this study proceeds in two stages: cluster formation and cluster allocation. A modified hierarchical clustering (MHC) algorithm is derived to group highly communicating tasks, and a further algorithm is developed to allocate the task clusters onto suitable processors so as to achieve optimal fuzzy response time and fuzzy system cost. Yang's and Hamming's distances are used to demonstrate the impact of the distance measure on the performance of the proposed model.
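The cluster-formation stage described above can be illustrated with a minimal sketch. This is not the article's MHC algorithm: it assumes an ordinary (crisp) symmetric inter-task communication matrix `comm` and a simple greedy agglomerative rule that repeatedly merges the two clusters exchanging the most data, stopping when `k` clusters remain; the hypothetical names `cluster_tasks` and `inter_comm` are illustrative only.

```python
# Illustrative sketch of communication-driven hierarchical task clustering.
# Assumptions (not from the article): crisp communication costs, a fixed
# target cluster count k, and inter-cluster communication defined as the
# sum of the pairwise task communication costs.

def cluster_tasks(comm, k):
    """Greedily merge the pair of clusters with the highest mutual
    communication until only k clusters remain."""
    n = len(comm)
    clusters = [{i} for i in range(n)]

    def inter_comm(a, b):
        # Total communication volume between two clusters of tasks.
        return sum(comm[i][j] for i in a for j in b)

    while len(clusters) > k:
        # Pick the pair of clusters exchanging the most data.
        best_a, best_b = max(
            ((a, b) for i, a in enumerate(clusters)
                    for b in clusters[i + 1:]),
            key=lambda ab: inter_comm(*ab),
        )
        clusters.remove(best_a)
        clusters.remove(best_b)
        clusters.append(best_a | best_b)
    return clusters

# Four tasks; tasks 0-1 and 2-3 communicate heavily, so they
# end up grouped together when asking for two clusters.
comm = [
    [0, 9, 1, 0],
    [9, 0, 0, 1],
    [1, 0, 0, 8],
    [0, 1, 8, 0],
]
print(cluster_tasks(comm, 2))
```

In the article's setting the communication costs would instead be fuzzy numbers compared via a distance measure (Yang's or Hamming's), which is exactly where the choice of distance affects which clusters are merged first.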

1. Introduction

Task scheduling schemes are usually categorized as ‘static’ or ‘dynamic.’ Several approaches have been published for solving static task allocation. Ma and Lee (1982) presented a task allocation model based on zero-one programming and the branch-and-bound method in distributed computing. Sarje and Sagar (1991) and Elsadek and Wells (1999) developed heuristic models for distributed systems. Attiya and Hamam (2006) gave an allocation model based on a cost function for optimizing reliability. Daoud and Kharma (2008) addressed static task scheduling, which permits the use of sophisticated scheduling algorithms to build high-quality task schedules without introducing run-time scheduling overheads. Yadav et al. (2011) presented a model focused on performance measures, attaining optimal cost and reliability by allocating tasks onto heterogeneous processors. Kumar et al. (2013) introduced a model that treats costs as fuzzy numbers to obtain the optimal task allocation. Wang et al. (2013) discussed a scheduling heuristic to reduce energy consumption and studied the relationship between task execution time and energy consumption. Singh and Garg (2014) derived a systematic task allocation model to reduce a program’s Parallel Processing Cost (PPC) with the aim of raising the overall system’s throughput.

Several approaches to dynamic task allocation have also been analyzed in the past. For distributed computing systems (DCS), Rotithor (1994) presented a taxonomy of dynamic task scheduling built on state estimation and decision making. Srinivasan and Baruah (2002) presented a deadline-based scheme for scheduling periodic tasks on multiple processors. Augonnet et al. (2011) developed several strategies that can be selected seamlessly at run time and analyzed their efficiency. Li et al. (2014) designed an algorithm to effectively exploit the heterogeneity of wireless sensor networks and to run various applications concurrently. Kumar et al. (2016) offered a model to schedule tasks dynamically for optimal reliability and cost. Janati et al. (2017) introduced an approach adopting two methods, imitation learning and a genetic algorithm, to solve the task allocation problem (TAP) for arbitrary tasks and robots. Jana et al. (2018) offered a Modified Particle Swarm Optimization (MPSO) procedure focused on two essential parameters: the ratio of successful execution and the average scheduling length. Karuppan et al. (2018) planned a priority-based max-min scheduling algorithm that aims to achieve a lower makespan and maximized throughput.

Usually, load balancing gives parallel and distributed systems the ability to avoid situations in which some of the system’s resources are overloaded while others remain idle or underloaded. It is well understood that heavily overloading a portion of the resources can substantially degrade overall system performance. Zhao and Huang (2009) proposed COMPARE_AND_BALANCE, a distributed load-balancing algorithm based on sampling, to reach a balanced solution. Liu et al. (2010) described the LBVS strategy in detail and gave directions for applying LBVS with iRODS. Kansal and Chana (2012) discussed and compared cloud-computing load-balancing techniques on several parameters, such as associated overhead, performance, and scalability, as used in distinct techniques. Abdullah et al. (2017) offered a hierarchical dynamic grid model for load balancing with an efficient methodology for selecting master-replica resources in a computational grid. Li et al. (2019) studied an efficient hybrid scheme for load balancing molecular dynamics simulations (MDS) on heterogeneous supercomputers by combining the particle swarm optimization (PSO) algorithm and the genetic algorithm (GA).
