An End-to-End Efficient Lucene-Based Framework of Document/Information Retrieval

Alaidine Ben Ayed, Ismaïl Biskri, Jean-Guy Meunier
Copyright: © 2022 |Pages: 14
DOI: 10.4018/IJIRR.289950

Abstract

In the big data era of the Industry 4.0 revolution, improving the efficiency of document/information retrieval frameworks to handle the ever-growing volume of text data in an increasingly digital world is a must. This article describes a two-stage document/information retrieval system. First, a Lucene-based document retrieval tool is implemented, and two query expansion techniques, one using a comparable corpus (Wikipedia) and one using word embeddings, are proposed and tested. Second, a retention-fidelity summarization protocol is run on top of the retrieved documents to create a short, accurate, and fluent extract of a single longer retrieved document (or of a set of top retrieved documents). The obtained results show that word embeddings are an excellent way to achieve higher precision rates and retrieve more accurate documents. Moreover, the obtained summaries satisfy the retention and fidelity criteria of relevant summaries.

Introduction

Document Retrieval (DR) is defined as the process of matching stated user queries against a set of free-text records (Anwar, 2010). Nowadays, massive and highly varied data is being generated at an unprecedented rate. In this context, the big data era has overturned classical DR challenges, and more attention is being devoted to innovative indexing and searching routines. Document retrieval systems generally perform two basic operations: 1) indexing, the process of representing data in a condensed format, and 2) querying, the process of interrogating the DR system to retrieve appropriate data. The first operation does not involve end users and is generally performed offline. The second comprises numerous processing operations, ranging from filtering, searching, and mapping to ranking the returned indexes.
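The two operations can be sketched with a toy inverted index: an offline indexing pass that maps terms to the documents containing them, and an online query pass that filters, scores, and ranks. This is a minimal illustration only; Lucene's actual index structures and scoring are far more elaborate, and the documents and TF-IDF scoring below are made up for the example.

```python
from collections import Counter, defaultdict
import math

def build_index(docs):
    """Indexing (offline): map each term to the documents and term counts containing it."""
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        for term, tf in Counter(text.lower().split()).items():
            index[term][doc_id] = tf
    return index

def search(index, query, n_docs):
    """Querying (online): filter postings, score with TF-IDF, rank descending."""
    scores = Counter()
    for term in query.lower().split():
        postings = index.get(term, {})
        if not postings:
            continue
        idf = math.log(n_docs / len(postings))  # rarer terms weigh more
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * idf
    return [doc_id for doc_id, _ in scores.most_common()]

docs = {
    "d1": "information retrieval with lucene",
    "d2": "big data indexing and retrieval",
    "d3": "word embeddings for query expansion",
}
index = build_index(docs)
print(search(index, "query expansion", len(docs)))  # ['d3']
```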

Document retrieval frameworks are built upon the cluster hypothesis (Fiana & Oren, 2013). Identifying the cluster of documents pertinent to a straightforward user query is an easy task; finding the set of clusters appropriate to complex queries is much harder (Tombros et al., 2002; Liu & Croft, 2006). Retrieval performance drops if the most accurate documents are not presented at the top of the returned indexes. Proposing new query-specific cluster-ranking strategies has been a hot research topic for many years (Leuski, 2001), and the suggested solutions are based on comparing cluster representations against the query (Liu & Croft, 2004; Liu & Croft, 2008). Some document retrieval frameworks make use of extra features, including inter-cluster and cluster-document similarities (Kurland & Lee, 2006; Kurland & Domshlak, 2008; Kurland & Krikon, 2011). Query expansion (QE) is another way to heighten the accuracy of document retrieval systems (Hiteshwar & Akshay, 2019).
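The embedding-based expansion idea can be sketched as follows: each query term is augmented with its nearest neighbours in a word-vector space, measured by cosine similarity. The three-dimensional vectors below are invented for illustration; a real system would load pretrained embeddings (e.g. word2vec or GloVe) with hundreds of dimensions.

```python
import math

# Hypothetical toy word vectors; real embeddings would be pretrained.
vectors = {
    "car":        [0.90, 0.10, 0.00],
    "vehicle":    [0.85, 0.15, 0.05],
    "automobile": [0.88, 0.12, 0.02],
    "banana":     [0.00, 0.90, 0.40],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand(query_terms, k=2):
    """Augment the query with the k nearest neighbours of each known term."""
    expanded = list(query_terms)
    for term in query_terms:
        if term not in vectors:
            continue
        neighbours = sorted(
            (w for w in vectors if w != term),
            key=lambda w: cosine(vectors[term], vectors[w]),
            reverse=True,
        )
        expanded.extend(neighbours[:k])
    return expanded

print(expand(["car"], k=2))  # ['car', 'automobile', 'vehicle']
```

The expanded query is then submitted to the retrieval engine in place of the original one, which increases the chance of matching relevant documents that use different wording.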

The first query expansion attempts date back to the early 1960s; the main objective is to improve retrieval performance. In this context, QE was used as a procedure for literature indexing and searching (Maron & Kuhns, 1960). User feedback was employed in (Rocchio, 1971) to expand queries. (Jones, 1971) and (Van, 1977) suggested a collection-based term co-occurrence QE protocol, while (Jardine & Van, 1971) and (Minker, 1972) introduced a cluster-based one. The techniques mentioned above led to satisfactory results; nevertheless, they were tested on small corpora with straightforward user queries. Researchers noticed a considerable loss in retrieval precision when these techniques were evaluated on the larger corpora handled by public search engines, first implemented in 1990 (Salton & Buckley, 1990; Harman, 1992). Consequently, query expansion has remained a hot research topic, notably in an ever-growing big data world. Precision and recall are the standard measures of document retrieval accuracy (Sagayam et al., 2012). The first refers to the percentage of retrieved records that are relevant, while the second refers to the percentage of relevant records that are retrieved. Notice also that the document retrieval research community uses TRECEVAL, a standard tool for evaluating ad hoc retrieval runs given the returned documents and a refereed collection of relevance judgments.
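The two measures can be computed directly from a ranked result list and a ground-truth set of relevant document ids, as in this small sketch (the document ids are invented for the example):

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved records that are relevant.
    Recall: fraction of relevant records that are retrieved."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = ["d1", "d2", "d3", "d4"]   # what the system returned
relevant = {"d2", "d4", "d7"}          # ground-truth judgments
p, r = precision_recall(retrieved, relevant)
print(p, r)  # precision 0.5, recall 2/3
```

TRECEVAL reports these figures (plus ranked variants such as mean average precision) from exactly this kind of input: a run file of returned documents and a qrels file of relevance judgments.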
