Interpretability in Recommender Systems

As recommender systems are widely deployed in commercial applications, there is increasing demand to ensure that a recommendation model behaves in alignment with both users’ and developers’ expectations. The central theme of my research is interpretability in recommender systems, aiming at “designing effective learning algorithms to incorporate desired properties of interpretability, as well as evaluating the helpfulness of interpretability in recommender systems.” My work aims to answer three research questions. First, what kinds of information can be used to interpret user preferences? User-generated data such as reviews and also-viewed/also-bought items, as well as item metadata such as text and images, are frequently used to interpret user preferences. Second, how does a model learn to capture desired properties of interpretability from data? Each kind of data has its own characteristics, and a tailored algorithm is required to uncover the user preferences underlying it. Finally, which metrics should be used to evaluate the effectiveness of interpretability? Designing proper evaluation methods enables fair comparison between existing works, leading to the exploration and development of more advanced models.
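
To make the first and third questions more concrete, the following is a minimal sketch of one common setup (not the method proposed in this work): mining aspect mentions from a user’s reviews to form a human-readable preference profile, and evaluating a generated explanation by checking whether the aspects it cites overlap with aspects the user actually wrote about. The aspect lexicon, example data, and function names are all hypothetical.

```python
import re
from collections import Counter

# Hypothetical aspect lexicon: each interpretable aspect is backed by keywords.
ASPECT_KEYWORDS = {
    "battery": {"battery", "charge", "charging"},
    "screen": {"screen", "display", "resolution"},
    "price": {"price", "cheap", "expensive", "value"},
}

def aspect_profile(reviews):
    """Count how often each aspect is mentioned across a user's reviews.

    The normalized counts act as a simple, interpretable preference profile:
    aspects the user writes about most are assumed to matter most to them.
    """
    counts = Counter()
    for review in reviews:
        tokens = set(re.findall(r"[a-z]+", review.lower()))
        for aspect, keywords in ASPECT_KEYWORDS.items():
            if tokens & keywords:
                counts[aspect] += 1
    total = sum(counts.values()) or 1
    return {aspect: n / total for aspect, n in counts.items()}

def explanation_precision_recall(explained_aspects, profile):
    """Compare the aspects cited in an explanation against the user's profile.

    A crude offline proxy for explanation quality: precision is the fraction
    of cited aspects the user actually cares about; recall is the fraction of
    the user's aspects that the explanation covers.
    """
    cited = set(explained_aspects)
    relevant = set(profile)
    if not cited or not relevant:
        return 0.0, 0.0
    overlap = cited & relevant
    return len(overlap) / len(cited), len(overlap) / len(relevant)

if __name__ == "__main__":
    reviews = [
        "The battery lasts two days and charging is fast.",
        "Great value for the price, though the screen is dim.",
    ]
    profile = aspect_profile(reviews)
    print("Interpretable profile:", profile)
    # Suppose a model explains a recommendation by citing these aspects.
    precision, recall = explanation_precision_recall(["battery", "camera"], profile)
    print(f"Explanation precision={precision:.2f}, recall={recall:.2f}")
```

This toy example only illustrates the shape of the problem; in practice the interpretation signals come from learned models (e.g., review-aware or attention-based recommenders) rather than keyword matching, and evaluation must also account for ranking quality and user-perceived helpfulness.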