DyVo: Dynamic Vocabularies for Learned Sparse Retrieval with Entities
Thong Nguyen, Shubham Chatterjee, Sean MacAvaney, Iain Mackie, Jeff Dalton, Andrew Yates
2024-10-17

Summary
This paper introduces DyVo, a new method for improving the performance of Learned Sparse Retrieval (LSR) models by using dynamic vocabularies that include entities from Wikipedia.
What's the problem?
Current LSR models inherit the word-piece vocabularies of pre-trained transformers, so they often split entity names (like people or places) into fragments that carry little meaning. This fragmentation hurts retrieval accuracy and limits the model's ability to handle current events or knowledge that was not part of its original training data.
What's the solution?
To address this, the authors developed DyVo, which extends the LSR vocabulary with Wikipedia entities and concepts. An entity retrieval component first identifies candidate entities relevant to a query or document; a Dynamic Vocabulary (DyVo) head then assigns each candidate an importance weight using existing entity embeddings. These entity weights are merged with the standard word-piece weights to form joint sparse representations that can be indexed and retrieved with an inverted index. Across three entity-rich document ranking datasets, DyVo substantially outperformed state-of-the-art baselines.
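The pipeline described above (retrieve candidate entities, weight them with the DyVo head, merge entity weights with word-piece weights, then score with an inverted-index-style dot product) can be sketched roughly as follows. This is a minimal illustration under assumed names, shapes, and scoring details, not the authors' implementation:

```python
import numpy as np

# Illustrative sketch of a DyVo-style joint sparse representation.
# All identifiers, sizes, and the ReLU-style weighting are assumptions.

WORDPIECE_VOCAB = 30_000  # e.g. a BERT-like word-piece vocabulary size


def dyvo_head(hidden, entity_embeddings, candidate_ids):
    """Score only retrieved candidate entities (the 'dynamic vocabulary'):
    weight_e = max(0, hidden . emb_e), so irrelevant entities get zero."""
    scores = entity_embeddings[candidate_ids] @ hidden
    return {eid: max(0.0, float(s)) for eid, s in zip(candidate_ids, scores)}


def joint_representation(wordpiece_weights, entity_weights):
    """Merge word-piece and entity weights into one sparse vector,
    offsetting entity ids past the word-piece vocabulary."""
    joint = dict(wordpiece_weights)
    for eid, w in entity_weights.items():
        if w > 0:  # keep the representation sparse
            joint[WORDPIECE_VOCAB + eid] = w
    return joint


def score(query_rep, doc_rep):
    """Dot product over shared non-zero dimensions, as an inverted
    index would compute it."""
    return sum(w * doc_rep[d] for d, w in query_rep.items() if d in doc_rep)
```

Because entity weights live in dimensions disjoint from the word-piece dimensions, the joint vector drops into a standard inverted index unchanged; only the candidate entities returned by the retrieval component are ever scored, which keeps the effective vocabulary dynamic per query.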
Why it matters?
This research matters because it improves how search and information retrieval systems work in domains where recognizing specific entities is crucial. By making LSR models more accurate on entity-centric queries and easier to keep current with evolving knowledge, DyVo can enhance the user experience in search engines, recommendation systems, and knowledge bases.
Abstract
Learned Sparse Retrieval (LSR) models use vocabularies from pre-trained transformers, which often split entities into nonsensical fragments. Splitting entities can reduce retrieval accuracy and limit the model's ability to incorporate up-to-date world knowledge not included in the training data. In this work, we enhance the LSR vocabulary with Wikipedia concepts and entities, enabling the model to resolve ambiguities more effectively and stay current with evolving knowledge. Central to our approach is a Dynamic Vocabulary (DyVo) head, which leverages existing entity embeddings and an entity retrieval component that identifies entities relevant to a query or document. We use the DyVo head to generate entity weights, which are then merged with word piece weights to create joint representations for efficient indexing and retrieval using an inverted index. In experiments across three entity-rich document ranking datasets, the resulting DyVo model substantially outperforms state-of-the-art baselines.