2015
Journal Articles
Julie Ayter; Adrian Chifu; Sébastien Déjean; Cecile Desclaux; Josiane Mothe
Statistical analysis to establish the importance of information retrieval parameters Journal Article
In: Journal of Universal Computer Science, vol. 21, no. 13, pp. 1767, 2015.
Abstract | Links | BibTeX | Tags: Information Retrieval, IR System Parameter, Query Clustering, Query Difficulty, Random Forest
@article{ayter2015statistical,
title = {Statistical analysis to establish the importance of information retrieval parameters},
author = {Julie Ayter and Adrian Chifu and Sébastien Déjean and Cecile Desclaux and Josiane Mothe},
url = {https://hal.archives-ouvertes.fr/hal-01592043/document},
year = {2015},
date = {2015-12-01},
urldate = {2015-12-01},
journal = {Journal of Universal Computer Science},
volume = {21},
number = {13},
pages = {1767},
abstract = {Search engines are based on models to index documents, match queries to documents, and rank documents. Research in Information Retrieval (IR) aims at defining these models and their parameters in order to optimize the results. Using benchmark collections, it has been shown that there is no single best system configuration that works for every query; rather, performance varies from one query to another. It would be interesting if a meta-system could decide which system configuration should process a new query by learning from the context of previous queries. This paper reports a deep analysis considering more than 80,000 search engine configurations applied to 100 queries and the corresponding performance. The goal of the analysis is to identify which configuration responds best to a certain type of query. We considered two approaches to define query types: the first is post-evaluation, based on clustering queries according to the performance measured with Average Precision, while the second is pre-evaluation, using query features (including query difficulty predictors) to cluster queries. Globally, we identified two parameters that should be optimized: the retrieving model and the TrecQueryTags process. Such results could be expected, as these two parameters are major components of the IR process. However, our work leads to two main conclusions: (1) based on the post-evaluation approach, we found that the retrieving model is the most influential parameter for easy queries, while the TrecQueryTags process is the most influential for hard queries; (2) for pre-evaluation, current query features do not allow queries to be clustered in a way that reveals differences in the influential parameters.},
key = {Information Retrieval, query difficulty, query clustering, IR system parameters, Random Forest},
keywords = {Information Retrieval, IR System Parameter, Query Clustering, Query Difficulty, Random Forest},
pubstate = {published},
tppubtype = {article}
}
Search engines are based on models to index documents, match queries to documents, and rank documents. Research in Information Retrieval (IR) aims at defining these models and their parameters in order to optimize the results. Using benchmark collections, it has been shown that there is no single best system configuration that works for every query; rather, performance varies from one query to another. It would be interesting if a meta-system could decide which system configuration should process a new query by learning from the context of previous queries. This paper reports a deep analysis considering more than 80,000 search engine configurations applied to 100 queries and the corresponding performance. The goal of the analysis is to identify which configuration responds best to a certain type of query. We considered two approaches to define query types: the first is post-evaluation, based on clustering queries according to the performance measured with Average Precision, while the second is pre-evaluation, using query features (including query difficulty predictors) to cluster queries. Globally, we identified two parameters that should be optimized: the retrieving model and the TrecQueryTags process. Such results could be expected, as these two parameters are major components of the IR process. However, our work leads to two main conclusions: (1) based on the post-evaluation approach, we found that the retrieving model is the most influential parameter for easy queries, while the TrecQueryTags process is the most influential for hard queries; (2) for pre-evaluation, current query features do not allow queries to be clustered in a way that reveals differences in the influential parameters.
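The sketch below is a minimal illustration of the kind of analysis the abstract describes, not the authors' actual pipeline: it clusters queries by mean Average Precision (the post-evaluation view) and then uses random-forest feature importances to estimate which system parameters matter most in each cluster. All parameter names, values, and the data itself are hypothetical stand-ins for the real run/AP matrix used in the paper.

```python
# Illustrative sketch only: synthetic data and hypothetical parameter names,
# mimicking the post-evaluation analysis (cluster queries by AP, then rank
# system parameters by random-forest importance within each cluster).
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_runs = 5000  # stand-in for the >80,000 configuration/query runs

# One row per (configuration, query) run, with its Average Precision score.
runs = pd.DataFrame({
    "retrieval_model": rng.choice(["BM25", "LM_Dirichlet", "TF_IDF"], n_runs),
    "trec_query_tags": rng.choice(["title", "title_desc", "title_desc_narr"], n_runs),
    "stemmer": rng.choice(["none", "porter", "krovetz"], n_runs),
    "query_id": rng.integers(1, 101, n_runs),
})
runs["ap"] = rng.beta(2, 5, n_runs)  # placeholder AP values

# Post-evaluation step: cluster queries into two groups by their mean AP
# (a rough proxy for "easy" vs "hard" queries).
query_perf = runs.groupby("query_id")["ap"].mean().to_frame()
query_perf["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(query_perf)

# For each query cluster, fit a random forest predicting AP from the system
# parameters and aggregate the importances back to the parameter level.
param_cols = ["retrieval_model", "trec_query_tags", "stemmer"]
X_all = pd.get_dummies(runs[param_cols])
for cluster_id, query_ids in query_perf.groupby("cluster").groups.items():
    mask = runs["query_id"].isin(query_ids)
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X_all[mask], runs.loc[mask, "ap"])
    importances = pd.Series(rf.feature_importances_, index=X_all.columns)
    by_param = {
        col: importances[[c for c in importances.index if c.startswith(col + "_")]].sum()
        for col in param_cols
    }
    print(f"query cluster {cluster_id}:")
    print(pd.Series(by_param).sort_values(ascending=False).round(3), "\n")
```

With real run data, the per-cluster importance rankings are what would distinguish, for example, a cluster where the retrieving model dominates from one where the TrecQueryTags process does.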