1. Introduction
Facial-emotion-based interaction between humans and computers has recently attracted considerable attention. The human face plays a key role in conveying both emotion and identity in social life (Wu, et al., 2013; Cruz, et al., 2014; Li, et al., 2013; Valstar, et al., 2012). Humans typically change their emotions from time to time according to their physical or mental circumstances. Despite the wide variety of emotions humans express, psychology has defined six facial expressions as universal and as a basis for the others: happiness, surprise, sadness, fear, anger, and disgust (Jing, et al., 2015; Loo, 2014; Wang, 2013; Mariooryad & Busso, 2013). These emotions are exhibited through movements of the facial muscles, with the eyes, eyebrows, nose, and mouth serving as the basic facial features. Although humans are skilled at identifying the facial emotions of familiar persons, they become confused when dealing with a large number of unknown faces. In addition, changes in the visual stimulus caused by varying conditions such as aging, the environment, and other natural factors (spectacles, mustaches, hairstyles, beards, etc.) introduce further uncertainty into manual facial emotion detection (Sariyanidi, et al., 2015; Wang, S. 2010).
Facial emotions play a crucial role in many research areas, and a vast number of techniques have therefore been developed to date (Zhong, et al., 2015; Yang & Bhanu, 2012; Chiranjeevi, et al., 2015; Eleftheriadis, et al., 2015). These approaches are generally categorized as feature-based or appearance-based (Das and Sharma, 2018). Feature-based approaches are an active research topic: they extract features relevant to facial emotion from the geometrical relationships of facial characteristics such as the chin, eyes, nose, and mouth (Balas, et al., 2015; Kumar & Garg, 2018). However, most facial feature extraction methods depend on the accuracy of face detection and are therefore not reliable enough for practical applications. The illumination of the input image can also degrade most face-recognition systems, and numerous algorithms have been developed to address this issue. Face detection systems require large memory and high computational speed to overcome limitations such as pose variation, feature occlusion, and facial expression variability (Kim & Bien, 2008; Rieffe & Wiefferink, 2017).
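As a minimal sketch of the feature-based idea described above, the snippet below derives a small feature vector from the geometric relationships between facial landmarks. The landmark coordinates, names, and the chosen distance ratios are all illustrative assumptions; in practice the coordinates would come from a facial landmark detector.

```python
import math

# Hypothetical landmark coordinates (x, y) for one face; in a real system
# these would be produced by a facial landmark detector.
landmarks = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip": (50.0, 60.0),
    "mouth_left": (38.0, 80.0),
    "mouth_right": (62.0, 80.0),
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def geometric_features(lm):
    """Build a feature vector of distance ratios.

    Normalizing each distance by the inter-ocular distance makes the
    features invariant to face scale.
    """
    eye_dist = dist(lm["left_eye"], lm["right_eye"])
    mouth_center = ((lm["mouth_left"][0] + lm["mouth_right"][0]) / 2,
                    (lm["mouth_left"][1] + lm["mouth_right"][1]) / 2)
    mouth_width = dist(lm["mouth_left"], lm["mouth_right"])
    nose_to_mouth = dist(lm["nose_tip"], mouth_center)
    return [mouth_width / eye_dist, nose_to_mouth / eye_dist]

features = geometric_features(landmarks)
```

Ratios such as mouth width relative to eye distance change with expressions (e.g., a smile widens the mouth), which is why geometric features of this kind can discriminate between emotions.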
The traditional face recognition system based on PCA is a simple approach to both face recognition and data compression (Bisandu, et al., 2019), but it is sensitive to lighting conditions. With the advancement of machine learning techniques, learning-based methods have been gaining popularity in the literature (Rustam, et al., 2021; Elezaj, et al., 2021; Kalpana and Mohana, 2018). LDA is one of the popular linear projection techniques, efficiently mapping high-dimensional samples into a low-dimensional space (Yankouskaya, et al., 2017; Halder, 2013; Happy & Routray, 2015). The major drawback of LDA is the small-sample-size problem, which arises when the number of training samples is small relative to the dimensionality of the data. AdaBoost, on the other hand, reduces computational time and increases detection speed, but it is inefficient at finding an optimal threshold from a small sample set. SVM offers advantages in both detection rate and false-detection rate, yet its error rate cannot always be minimized. Thus, there is a need to develop an automatic emotion detection system with the aid of real-time face detection algorithms.
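The PCA-based projection mentioned above can be sketched in a few lines: center the samples, take the eigenvectors of the covariance matrix with the largest eigenvalues, and project onto them. The toy data below is hypothetical (real eigenface pipelines operate on flattened face images), and this is only a minimal illustration, not a full recognition system.

```python
import numpy as np

# Toy data: 6 samples with 4 pixel-like features each (hypothetical values).
X = np.array([
    [2.0, 4.1, 1.9, 0.5],
    [2.2, 3.9, 2.1, 0.4],
    [5.0, 1.0, 4.8, 2.0],
    [5.1, 1.2, 5.2, 2.1],
    [3.5, 2.5, 3.4, 1.2],
    [3.6, 2.6, 3.6, 1.3],
])

def pca_project(X, k):
    """Project samples onto the top-k principal components.

    Eigenface-style reduction: center the data, then keep the
    eigenvectors of the covariance matrix with the largest eigenvalues.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :k]            # top-k components, largest first
    return Xc @ top

Z = pca_project(X, k=2)   # 4-D samples reduced to 2-D
```

Because PCA chooses directions of maximal variance regardless of class labels, strong illumination changes dominate the leading components; LDA, by contrast, uses class labels to maximize between-class separation, which motivates the comparison drawn in the text.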