Maxmin Data Range Heuristic-Based Initial Centroid Method of Partitional Clustering for Big Data Mining


Kamlesh Kumar Pandey, Diwakar Shukla
Copyright: © 2022 | Pages: 22
DOI: 10.4018/IJIRR.289954

Abstract

Centroid-based clustering algorithms depend on the number of clusters, the initial centroids, the distance measure, and the statistical approach to central tendency. The initial centroid initialization algorithm determines convergence speed, computing efficiency, execution time, scalability, memory utilization, and overall performance in big data clustering. Various researchers have proposed cluster initialization techniques; some reduce the number of iterations at the cost of cluster quality, while others improve cluster quality at the cost of many iterations. For these reasons, this study proposes the Maxmin Data Range Heuristic (MDRH) method for initial centroid selection in K-Means (KM) clustering, which reduces execution time and iterations and improves cluster quality for big data clustering. The proposed MDRH method is compared against the classical KM and KM++ algorithms on four real datasets and achieves better effectiveness and efficiency under the RS, DB, CH, SC, IS, and CT quantitative measures.
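The preview does not reproduce the MDRH pseudocode, but the general idea of seeding K-Means from the spread of the data can be illustrated with a minimal sketch. The Python code below is an illustrative assumption, not the published MDRH procedure: it simply spaces k initial centroids evenly across the per-feature minimum-maximum range and then runs standard Lloyd iterations; the function names and the even-spacing rule are hypothetical.

    import numpy as np

    def range_based_init(X, k):
        """Place k initial centroids evenly across the per-feature min-max range.
        Illustrative only: this is NOT the published MDRH procedure."""
        lo, hi = X.min(axis=0), X.max(axis=0)
        fractions = (np.arange(k) + 0.5) / k          # k evenly spaced positions in [0, 1]
        return lo + fractions[:, None] * (hi - lo)    # one centroid per position

    def kmeans(X, k, max_iter=100, tol=1e-6):
        """Standard Lloyd's iterations starting from range-based centroids."""
        centroids = range_based_init(X, k)
        for _ in range(max_iter):
            # assign each point to its nearest centroid (Euclidean distance)
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # recompute each centroid as the mean of its assigned points
            new_centroids = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
            if np.linalg.norm(new_centroids - centroids) < tol:
                break
            centroids = new_centroids
        return labels, centroids

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in (0, 3, 6)])
        labels, centroids = kmeans(X, k=3)
        print(centroids)

A deterministic, data-range-driven seeding of this kind avoids the random restarts of classical KM, which is consistent with the reduction in iterations and execution time claimed in the abstract, although the exact heuristic used by MDRH may differ.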

Introduction

The rapid development of digital technologies has produced enormous amounts of data in different formats at high speed, with social media as a prominent example. In September 2019 (Viens, 2019), Facebook had 2.4 billion monthly active users who sent 41.6 million messages through Messenger per minute, YouTube had 2 billion users who watched 4.5 million videos per minute, Instagram had 1 billion users of which 347,222 scrolled their feeds per minute, and Twitter had 330 million users of which 87,500 tweeted per minute. These figures illustrate how much data users now generate, and how quickly. Digital technologies have changed the scale, format, and speed of data production, and as a result the nature of conventional data has changed into big data. The volume, variety, and velocity characteristics define the complex framework of big data: volume refers to scaling into Terabytes and Petabytes, variety refers to heterogeneous data formats generated by heterogeneous data sources, and velocity refers to the speed of data production and data analysis. Data volume is the foundation of big data, defining the massive data set. On the basis of existing research (Hariri et al., 2019; Lee, 2017; Elgendy & Elragal, 2014), this paper summarizes the volume, variety, and velocity characteristics as "volume depends upon variety, and variety depends upon velocity."

Recent researchers have suggested additional characteristics of big data: value (Oracle), veracity (IBM), variability (SAS), and visualization. Value refers to extracting valuable information from the massive volume using the constant attributes of big data that support the decision system (Hariri et al., 2019; Sivarajah et al., 2017). Veracity refers to the quality of the data, in terms of trustworthiness and accuracy, during data analysis, data storage and management, and across heterogeneous sources. Variability refers to data structure, meaning, and behavior that change over time due to rapid growth. Veracity determines the accuracy of the decision-making system (Elgendy & Elragal, 2014; Tabesh et al., 2019), and variability is used in sentiment analysis (Gandomi & Haider, 2015; Sivarajah et al., 2017). The visualization characteristic presents knowledge according to user expectations in pictorial or graphical form, such as tables, graphs, pictures, and statistical charts. This paper summarizes the value, veracity, variability, and visualization characteristics of big data as "veracity validates accuracy on the basis of variety, value identifies predicted value based on volume and variety, variability selects specific analysis tools based on volume and variety, and visualization presents results and problems based on volume, variety, and velocity."

Classical data mining algorithms use a centralized data source, whereas big data mining algorithms use distributed sources, centralized sources, or a mixture of multiple sources. Multiple-source mining of big data can be grouped into four categories: pattern analysis, classification, clustering, and fusion (Wang et al., 2018). Clustering is the default data mining approach that labels data items without any prior knowledge, on the basis of data similarity (Jain, 2010); for this reason, clustering is known as unsupervised learning. Data similarity is defined by distance measures, such that distances and variance within a cluster are minimized while distances between clusters are maximized. Classical clustering algorithms face various challenges due to data volume, variety, and velocity; data volume in particular raises computational cost, speed, efficiency, and scalability challenges (Khondoker, 2018; Maheswari & Ramakrishnan, 2019). Big data clustering focuses on scaling up, speeding up, and optimizing computational cost and resources without affecting cluster quality, and its design depends on whether a single-machine or multiple-machine execution environment is used (Khondoker, 2018).
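As a minimal sketch of the within-cluster versus between-cluster criterion mentioned above, assuming squared Euclidean distance and NumPy, the helper below computes the within-cluster sum of squares (to be minimized) and the between-cluster sum of squares (to be maximized); the function name within_between is hypothetical.

    import numpy as np

    def within_between(X, labels, centroids):
        """Within-cluster sum of squares (small is good) and between-cluster
        sum of squares (large is good), under squared Euclidean distance."""
        overall_mean = X.mean(axis=0)
        wcss = sum(
            np.sum((X[labels == j] - centroids[j]) ** 2)
            for j in range(len(centroids))
        )
        bcss = sum(
            np.count_nonzero(labels == j) * np.sum((centroids[j] - overall_mean) ** 2)
            for j in range(len(centroids))
        )
        return wcss, bcss

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in (0.0, 4.0)])
        labels = np.repeat([0, 1], 50)                                      # known grouping
        centroids = np.array([X[labels == j].mean(axis=0) for j in (0, 1)])
        print(within_between(X, labels, centroids))                         # low WCSS, high BCSS

Quality indices such as RS, DB, CH, and SC, which the abstract uses for evaluation, are built from exactly these kinds of within-cluster and between-cluster quantities.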
