Ranking with Genetics: Improving Document Ranking With Genetic and Optimization Algorithms

Lawrence Master
Copyright: © 2020 |Pages: 15
DOI: 10.4018/IJIRR.2020070102

Abstract

There are many applications for ranking, including page searching, question answering, recommender systems, sentiment analysis, and collaborative filtering, to name a few. In the past several years, machine learning and information retrieval techniques have been used to develop ranking algorithms, and several listwise approaches to learning to rank have been developed. We propose two new methods, GeneticListMLE++ and GeneticListNet++, which build on the original ListMLE and ListNet algorithms. Our methods substantially improve on the original ListMLE and ListNet ranking approaches by incorporating genetic optimization of hyperparameters, a nonlinear neural network ranking model, and a regularization technique.
Article Preview

1. Introduction

Ranking is used in many fields such as information retrieval, social science, and natural language systems. Commercial web search engines utilize ranking to locate the most relevant documents in response to queries. E-commerce sites employ ranking to suggest products that users may want to consider purchasing. In all of the above scenarios, the ranking problem is reduced to arranging objects in order of importance to an end user based on the user's input, which is typically text-based (Langville & Meyer, 2012). Contrary to the common belief that ranking is simple sorting, the ranking problem in information retrieval poses numerous challenges. The most significant difficulty is that most objects do not have an obvious or trivial attribute to sort by. For instance, web pages have no clear attribute to sort by with respect to a user's query. It is also difficult to directly quantify or model why and how a user judges a document relevant, since relevance is partly subjective. A further major challenge is handling varied and unexpected user input while still achieving consistent performance. Finally, the system must remain scalable as the dataset to rank and search grows.

1.1 Existing Ranking Techniques

Simple ranking techniques are based solely on isolated features such as Boolean (Lashkari et al., 2009), Vector Space (Lee et al., 1997), Latent Semantic Indexing (Kettani & Newby, 2010), BM25 (Sari & Adriani, 2014), Language Model for Information Retrieval (Jin et al., 2002), PageRank (Page et al., 1999), and HITS (Ding et al., 2002). Boolean ranking (Lashkari et al., 2009) reduces both the query and the documents to sets of words and then applies Boolean logical operators to determine document rank. The Vector Space model (Lee et al., 1997) converts queries and documents into vectors and sorts documents by the cosine similarity between the query vector and each document vector; the vector components are typically term weights reflecting both local and global context. Latent Semantic Indexing (Kettani & Newby, 2010) constructs a term-document matrix, which is then dimensionally reduced; a similarity metric such as cosine similarity is then computed between the reduced representations of the query and each document. BM25 (Sari & Adriani, 2014) calculates ranking with a bag-of-words approach, comparing the terms in a query with the terms in a document. Language Model for Information Retrieval (Jin et al., 2002) inverts the typical approach used in probabilistic information retrieval and ranks documents by the probability that a language model of each document would produce the query. PageRank (Page et al., 1999) and HITS (Ding et al., 2002) are both link-based ranking features that measure a page's value from its incoming and outgoing hyperlinks. However, none of these features is sophisticated enough in isolation to capture the intricacies of most applications, such as modern-day web search.
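To make the bag-of-words comparison concrete, the following is a minimal sketch of the classic BM25 scoring formula described above. The corpus, query, and parameter values (k1 = 1.5, b = 0.75 are common defaults) are illustrative assumptions, not drawn from the article:

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.5, b=0.75):
    """Rank tokenized documents against a tokenized query with BM25.

    Returns document indices sorted by descending BM25 score.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency of each distinct query term.
    df = {q: sum(1 for d in docs if q in d) for q in set(query)}
    # Smoothed inverse document frequency (non-negative variant).
    idf = {q: math.log((N - n + 0.5) / (n + 0.5) + 1) for q, n in df.items()}

    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for q in query:
            f = tf[q]
            # k1 controls term-frequency saturation; b controls
            # document-length normalization relative to avgdl.
            score += idf[q] * f * (k1 + 1) / (f + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return sorted(range(N), key=lambda i: scores[i], reverse=True)

# Toy corpus for illustration only.
docs = [
    "the cat sat on the mat".split(),
    "ranking documents with bm25".split(),
    "bm25 is a bag of words ranking function".split(),
]
ranking = bm25_rank("bm25 ranking".split(), docs)
# Documents 1 and 2 both match; the shorter one ranks higher,
# and document 0 (no query terms) ranks last.
```

Note how length normalization favors the shorter of two documents with equal term frequencies, one of the behaviors that distinguishes BM25 from raw term counting.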
