1. Introduction
Sparse representation based techniques have achieved enormous success, particularly in signal reconstruction (Lan, Ye, Zhang, Zhou, & Yuen, 2020; Z. Zhang, Xu, Yang, Li, & Zhang, 2015), video analysis, and image classification, as demonstrated by the experimental results reported in (Bayrakdar, Yucedag, Simsek, & Dogru, 2020; C.-P. Wei, Y.-W. Chao, Y.-R. Yeh, & Y.-C. F. Wang, 2013; Zhan, Liu, Gou, & Wang, 2016). Dictionary learning, in turn, aims to learn a good dictionary from training samples so that signals can be well represented; the quality of the learned dictionary is therefore critical to achieving an efficient sparse representation. Studies have shown that dictionary learning and sparse representation together form an effective mathematical model for data representation, achieving state-of-the-art performance in fields such as pattern recognition, machine learning, and computer vision (Ji, Hooshyar, Kim, & Lim, 2019; Z. Zhang, Xu, et al., 2015).
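As a toy illustration of the sparse representation model described above (not part of the cited works), a signal y is approximated as y ≈ Dx, where D is an over-complete dictionary and x is sparse. The sketch below assumes a random Gaussian dictionary with unit-norm atoms and uses Orthogonal Matching Pursuit, a standard greedy solver for the sparse coding step:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-sparse code x with y ~ D x."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the selected atoms by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D @ x
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]          # a 2-sparse ground-truth code
y = D @ x_true
x_hat = omp(D, y, k=2)
print("reconstruction error:", np.linalg.norm(y - D @ x_hat))
```

This is only a minimal sketch of the sparse coding step; dictionary learning methods such as K-SVD additionally alternate this step with an update of D itself.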
The dictionary can be obtained in one of two ways: either all the training samples are used directly as the dictionary to code the test samples (e.g., Locality-constrained Linear Coding (LLC) (J. Wang et al., 2010a)), or a dictionary is learned from the training set for sparse representation (e.g., K-SVD (Q. Zhang & Li, 2010) and Fisher Discriminative Dictionary Learning (FDDL) (Iqbal, Nait-Meziane, Seghouane, & Abed-Meraim, 2020; Zheng & Tao, 2015)). In addition, group-based sparse coding, cast as a rank-minimization problem, has been used to estimate the sparse coefficients of each group (Zha et al., 2016). Methods that adopt the first strategy use the training samples themselves as the dictionary. Although they show good classification performance, such a dictionary may not represent the samples effectively, because the original training samples may carry noise, and the discriminative information hidden in the training samples may not be fully exploited. The second category is also not ideally suited to recognition, because it only requires that the dictionary represent the training samples well under a strict sparsity constraint. The LSDL approach addresses these issues by incorporating a locality constraint into the objective function of dictionary learning (DL), which ensures that the learned over-complete dictionary is more representative. A further problem with traditional sparse representation methods is that they cannot produce consistent results when the input features come from the same category.
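The first strategy above can be sketched with a toy, synthetic example (the data, class templates, and noise level are all illustrative assumptions, not from the cited works): each class's raw training samples serve as its dictionary, and a test sample is assigned to the class whose dictionary reconstructs it with the smallest residual, in the spirit of sparse-representation-based classification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two classes, each drawn around its own random template direction.
d = 30
templates = rng.standard_normal((2, d))

def sample(c, n):
    # n noisy samples of class c (illustrative noise level 0.1)
    return templates[c] + 0.1 * rng.standard_normal((n, d))

train = {c: sample(c, 10).T for c in (0, 1)}   # d x 10 dictionary per class
y = sample(0, 1).ravel()                        # a test sample from class 0

def residual(D, y):
    # Reconstruction residual of y over dictionary D (least-squares fit).
    coeffs, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.linalg.norm(y - D @ coeffs)

# Assign y to the class whose training-sample dictionary fits it best.
pred = min((0, 1), key=lambda c: residual(train[c], y))
print("predicted class:", pred)
```

As the surrounding text notes, this strategy inherits any noise present in the training samples, which is one motivation for learning a dictionary instead (the second strategy).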