Performance prediction and evaluation in recommender systems: An information retrieval perspective
- Bellogin, Alejandro
- Pablo Castells Azpilicueta (Supervisor)
- Iván Cantador (Co-supervisor)
University of defence: Universidad Autónoma de Madrid
Date of defence: 30 November 2012
- Alberto Suárez González (Chair)
- Gonzalo Martínez-Muñoz (Secretary)
- Jun Wang (Committee member)
- Lourdes Araujo (Committee member)
- Arjen de Vries (Committee member)
Type: Thesis
Abstract
Personalised recommender systems aim to help users access and retrieve relevant information or items from large collections, by automatically finding and suggesting products or services of likely interest based on observed evidence of the users' preferences. For many reasons, user preferences are difficult to guess, and therefore recommender systems show considerable variance in their success at estimating the user's tastes and interests. In such a scenario, self-predicting the chances that a recommendation is accurate, before actually submitting it to a user, becomes an interesting capability from many perspectives. Performance prediction has been studied in the context of search engines in the Information Retrieval field, but there is little, if any, prior research on this problem in the recommendation domain.

This thesis investigates the definition and formalisation of performance prediction methods for recommender systems. Specifically, we study adaptations of search performance predictors from the Information Retrieval field, and propose new predictors based on theories and models from Information Theory and Social Graph Theory. We show the instantiation of information-theoretical performance prediction methods on both rating and access log data, and the application of social-based predictors to social network structures.

Recommendation performance prediction is a relevant problem per se, because of its many potential applications. Thus, we primarily evaluate the quality of the proposed solutions in terms of the correlation between the predicted and the observed performance on test data. This assessment requires a clear recommender evaluation methodology against which the predictions can be contrasted. Given that the evaluation of recommender systems is, to a significant extent, still an open area, the thesis addresses the evaluation methodology as part of the researched problem. We analyse how variations in the evaluation procedure may alter the apparent behaviour of performance predictors, and we propose approaches to avoid misleading observations.

In addition to the stand-alone assessment of the proposed predictors, we investigate the use of the predictive capability in the context of one of its common applications, namely the dynamic adjustment of hybrid methods combining several recommenders. We study approaches where the combination leans towards the algorithm that is predicted to perform best in each case, aiming to enhance the performance of the resulting hybrid configuration.

The thesis reports positive empirical evidence confirming both a significant predictive power for the proposed methods in different experiments, and consistent improvements in the performance of dynamic hybrid recommenders employing the proposed predictors.
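The following is a minimal sketch, in Python, of the core ideas above: a clarity-style predictor computed from rating data (KL divergence between a user's smoothed preference distribution and the background distribution), a linear dynamic hybrid driven by that prediction, and the correlation used to assess predictor quality. Function names, the smoothing weight, and all numbers are illustrative assumptions, not the thesis's actual implementation.

```python
import math

def user_clarity(user_ratings, background, smoothing=0.9):
    """Clarity-style predictor on rating data: KL divergence between a
    user's smoothed item-preference distribution and the background
    (collection) distribution. Higher values indicate more focused
    tastes, hypothesised to be easier to recommend for. The 0.9
    Jelinek-Mercer weight is an illustrative choice."""
    total = sum(user_ratings.values())
    clarity = 0.0
    for item, p_bg in background.items():
        p_user = user_ratings.get(item, 0.0) / total if total else 0.0
        p = smoothing * p_user + (1.0 - smoothing) * p_bg
        if p > 0.0 and p_bg > 0.0:
            clarity += p * math.log2(p / p_bg)
    return clarity

def dynamic_hybrid(score_a, score_b, weight):
    """Dynamic linear hybrid: lean towards recommender A in proportion
    to the (normalised) predicted performance for this user."""
    return weight * score_a + (1.0 - weight) * score_b

def pearson(xs, ys):
    """Correlation between per-user predicted and observed performance,
    the primary quality measure for a predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Toy usage with made-up data: predict, combine, and evaluate.
background = {"i1": 0.5, "i2": 0.3, "i3": 0.2}   # collection model
alice = {"i1": 4.0, "i2": 1.0}                   # one user's ratings
w = max(0.0, min(1.0, user_clarity(alice, background)))  # clamp to [0, 1]
combined = dynamic_hybrid(3.8, 2.9, w)           # two recommenders' scores
print(pearson([0.3, 0.7, 0.5], [0.2, 0.9, 0.4]))  # predicted vs observed
```

In the thesis, such predictor values would be computed per user and correlated against the observed per-user accuracy on held-out test data; the thesis additionally covers predictors defined on access logs and social network structures, which this sketch does not attempt to reproduce.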