A Classification Learning Research based on Discriminative Knowledge-Leverage Transfer


Ding Xiong, Lu Yan
Copyright: © 2018 | Pages: 17
DOI: 10.4018/IJACI.2018100104

Abstract

Current transfer learning models study the source data for future target inference under the prevailing view that the whole source dataset should be used to explore the shared knowledge structure. In real scenes, however, human resources are limited, and the source-domain data collected as a whole cannot all be assumed to be associated with the target domain. This article proposes a generalized empirical risk minimization model (GERM) with discriminative knowledge-leverage (KL). The empirical risk minimization (ERM) principle is extended to the transfer learning setting, and a theoretical upper bound of the generalized ERM is given for practical discriminative transfer learning. The model automatically selects the subset of the source-domain data that is actually associated with the target domain, so it can work when only part of the source-domain knowledge is available and thus avoids the negative transfer caused by using the whole source-domain dataset in the real scene. Simulation results show that the proposed algorithm outperforms traditional transfer learning algorithms on both simulated and real data sets.
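For intuition only, the sketch below shows one way the ERM objective can be extended with per-instance leverage weights on the source data; the notation (weights v_j, trade-off λ, regularizer Ω) is an illustrative assumption and is not the exact objective or bound derived in the article.

\[
\hat{f} \;=\; \arg\min_{f \in \mathcal{H}} \; \frac{1}{n_t} \sum_{i=1}^{n_t} \ell\!\left(f(x_i^t), y_i^t\right)
\qquad \text{(standard ERM on the target domain)}
\]
\[
\min_{f \in \mathcal{H},\; v \in [0,1]^{n_s}} \;
\frac{1}{n_t} \sum_{i=1}^{n_t} \ell\!\left(f(x_i^t), y_i^t\right)
\;+\; \frac{\lambda}{n_s} \sum_{j=1}^{n_s} v_j \, \ell\!\left(f(x_j^s), y_j^s\right)
\;+\; \Omega(v)
\]

In this reading, the weights v_j play the role of discriminative knowledge leverage: source instances associated with the target domain receive large weights, while unrelated instances are driven toward zero, which is one way to interpret the automatic subset selection described in the abstract.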
Article Preview

1. Introduction

The American psychologist Anderson proposed the adaptive control of thought (ACT) theory (Anderson, 2010), in which human cognition is divided into procedural cognition and declarative cognition, and the cognitive process is divided into two stages: first, procedural cognition rises to declarative cognition; then declarative cognition migrates between tasks and creates new procedural cognition in the new task. For a new task that lacks procedural cognition, even when people have learned only some of its characteristics, established cognition selectively reuses old-task knowledge stored in the brain to identify and learn the new task and to build its procedural cognition. At the declarative level, the brain retrieves the old tasks associated with some characteristics of the new task and reasons from them to obtain richer and more specific awareness (Anderson, 2010). As the example in Figure 1 shows, if the source-domain task has already been mastered at the procedural level, then when the new task of identifying chickens is encountered, the shape of the chicken and other characteristics allow it to be rapidly recognized as belonging to the same group of animals as birds. The same applies to the identification of cats.

Figure 1.

Two examples of using related knowledge about birds and dogs while learning the target objects chickens and cats


Since its birth, machine learning has been imitating the human cognitive process, and there is no doubt that the development of cognitive psychology has contributed to the development of machine learning. In the traditional machine learning framework, the learning task is to learn a classification model from sufficient given training data and then use this model to classify and predict the test documents. Machine learning algorithms face two key problems in current Web mining research.

First, it is very difficult to obtain large amounts of training data in some emerging areas. Web applications develop very quickly, and many new areas keep emerging, from traditional news to web pages, pictures, blogs, podcasts and so on. Traditional machine learning requires a large amount of labeled training data for each area, which costs considerable manpower and material resources; without enough labeled data, much research and many related applications cannot be carried out.

Second, traditional machine learning assumes that the training data and the test data follow the same distribution. In many cases this assumption does not hold; a common situation is that the training data have expired. This often requires re-labeling a large amount of training data to meet the training needs, but labeling new data is very expensive and requires a lot of manpower and material resources. From another point of view, if we already have a lot of training data drawn from a different distribution, it is also very wasteful to discard it completely. How to use such data rationally is the main problem of transfer learning. Transfer learning transfers knowledge from existing data to help future learning: its goal is to use the knowledge learned in one environment to help the learning tasks in a new environment, so it does not make the same-distribution assumption of traditional machine learning. Existing work on transfer learning can be divided into three categories: instance-based transfer learning in homogeneous space (Dai, Yang et al., 2007); feature-based transfer learning in homogeneous space, such as the CoCC algorithm (Dai, Xue et al., 2007), the TPLSA algorithm (Xue et al., 2008), the spectral analysis algorithm (Ling et al., 2008), and the self-learning algorithm (Dai et al., 2008); and transfer learning in heterogeneous space (Dai et al., 2008; Ling et al., 2008).
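As a toy illustration of the instance-based category mentioned above (and not of the cited algorithms or of the GERM model proposed in this article), the Python sketch below reuses source-domain examples through per-instance weights and shrinks the weight of source points that a classifier trained on the combined data keeps misclassifying; the halving rule and the use of scikit-learn's LogisticRegression are assumptions made purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

def instance_weighted_transfer(Xs, ys, Xt, yt, rounds=5):
    # Source instances start with full weight; weights shrink over rounds
    # whenever the current model misclassifies them, on the assumption that
    # persistently misclassified source points are unrelated to the target.
    ws = np.ones(len(Xs))
    wt = np.ones(len(Xt))
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X, y, sample_weight=np.concatenate([ws, wt]))
        wrong = clf.predict(Xs) != ys
        ws[wrong] *= 0.5   # illustrative halving rule, not a cited algorithm
    return clf, ws

# Usage on synthetic data: the source distribution is shifted away from
# the target distribution, so only part of it is worth leveraging.
rng = np.random.default_rng(0)
Xs = rng.normal(loc=0.5, size=(200, 2)); ys = (Xs.sum(axis=1) > 1.0).astype(int)
Xt = rng.normal(loc=0.0, size=(40, 2));  yt = (Xt.sum(axis=1) > 0.0).astype(int)
clf, ws = instance_weighted_transfer(Xs, ys, Xt, yt)
print("average retained source weight:", ws.mean())

The down-weighting here is only a stand-in for the discriminative selection of source data that the article formalizes through its generalized ERM bound.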
