Efficient Exploration of Gradient Space for Online Learning to Rank
Abstract: Online learning to rank (OL2R) optimizes the utility of returned search results based on implicit feedback gathered directly from users. To improve their estimates, OL2R algorithms examine one or more exploratory gradient directions and update the current ranker if a proposed one is preferred by users via an interleaved test. However, most OL2R algorithms sample uniformly from the entire parameter space for gradient estimation, considering neither historical comparisons nor the feature distribution of the current query's candidate ranking documents. These limitations cause the algorithms to repeatedly explore less promising directions, or to propose directions that cannot be differentiated by any feedback in the given query.
We accelerate the online learning process through efficient exploration of the gradient space. Our algorithm, named Null Space Gradient Descent, reduces the exploration space to the null space of recent poorly performing gradients. This prevents the algorithm from repeatedly exploring directions that have been discouraged by the most recent interactions with users. To improve the sensitivity of the resulting interleaved test, we selectively construct candidate rankers to maximize the chance that they can be differentiated by the candidate ranking documents in the current query, and we use historically difficult queries to identify the best ranker when ties occur. Extensive experimental comparisons with state-of-the-art OL2R algorithms on several public benchmarks confirm the effectiveness of the proposed algorithm, especially its fast learning convergence and promising ranking quality at an early stage.
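The central step above, restricting exploration to directions orthogonal to recently discouraged gradients, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the SVD-based null-space construction is one standard way (assumed here) to obtain an orthonormal basis orthogonal to the stacked bad gradients.

```python
import numpy as np

def null_space_basis(bad_gradients, tol=1e-10):
    """Orthonormal basis for the null space of the stacked
    poorly performing gradient directions (rows of a k x d matrix)."""
    G = np.atleast_2d(bad_gradients)
    _, s, vt = np.linalg.svd(G)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # columns span the null space of G

def sample_null_space_direction(bad_gradients, rng):
    """Sample a unit exploration direction orthogonal to all
    recently discouraged gradients."""
    B = null_space_basis(bad_gradients)
    z = rng.standard_normal(B.shape[1])  # random coefficients in the null space
    u = B @ z
    return u / np.linalg.norm(u)

rng = np.random.default_rng(0)
# Two hypothetical discouraged directions in a 3-dimensional parameter space.
bad = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
u = sample_null_space_direction(bad, rng)
# u is a unit vector orthogonal to both rows of `bad`
```

A proposed ranker would then be formed by perturbing the current weights along `u`, so exploration never retraces a direction the latest user feedback has already rejected.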
Committee Members: Hongning Wang (Advisor), Alfred Weaver (Chair), Quanquan Gu, Farzad Farnoud