Know When to Fuse: Investigating Non-English Hybrid Retrieval in the Legal Domain
Antoine Louis, Gijs van Dijck, Gerasimos Spanakis
2024-09-04

Summary
This paper presents a study of hybrid search methods for retrieving legal information in French, exploring how combining different retrieval models can improve results.
What's the problem?
Most research on hybrid search focuses on English and evaluates only a handful of retrieval methods, which may not transfer well to specialized fields like law. This gap makes it hard to know how best to retrieve legal information in other languages, particularly French.
What's the solution?
The authors investigate how combining various retrieval models can enhance search performance in the legal domain for French-language documents. They test these hybrid methods in both zero-shot scenarios (where the models haven't been specifically trained on the task) and in-domain settings (where the models are fine-tuned on relevant legal data). Their findings show that combining different models generally leads to better results when no in-domain training is done, but once models are trained on legal data, the single best model often performs better unless the fused scores are weighted with carefully tuned coefficients (illustrated in the sketch below).
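To make the idea of tuned score fusion concrete, here is a minimal sketch (not code from the paper) of linearly interpolating min-max-normalized scores from a lexical and a dense retriever. The function names, example scores, and the weight `alpha` are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

def minmax_normalize(scores: dict[str, float]) -> dict[str, float]:
    """Rescale one retriever's scores to [0, 1] so they are comparable across systems."""
    values = np.array(list(scores.values()))
    lo, hi = values.min(), values.max()
    if hi == lo:
        return {doc: 0.0 for doc in scores}
    return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

def weighted_fusion(lexical: dict[str, float],
                    dense: dict[str, float],
                    alpha: float = 0.5) -> list[tuple[str, float]]:
    """Linearly interpolate normalized scores from two retrievers.

    alpha close to 1.0 favors the lexical system; close to 0.0 favors the dense one.
    Documents missing from one ranked list receive a score of 0 from that system.
    """
    lex = minmax_normalize(lexical)
    den = minmax_normalize(dense)
    docs = set(lex) | set(den)
    fused = {d: alpha * lex.get(d, 0.0) + (1 - alpha) * den.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical BM25 and dense-retriever scores for a single query.
bm25_scores = {"doc1": 12.3, "doc2": 9.8, "doc3": 4.1}
dense_scores = {"doc2": 0.82, "doc3": 0.79, "doc4": 0.55}
print(weighted_fusion(bm25_scores, dense_scores, alpha=0.6))
```

In practice, the paper's in-domain result suggests that the interpolation weight matters: picking `alpha` by validation rather than defaulting to an equal split is what keeps fusion competitive with the best single system.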
Why it matters?
This research is important because it expands our understanding of how hybrid search methods apply in non-English contexts, particularly in specialized fields like law. By improving retrieval of French legal information, it can help legal professionals find the documents they need more efficiently and accurately.
Abstract
Hybrid search has emerged as an effective strategy to offset the limitations of different matching paradigms, especially in out-of-domain contexts where notable improvements in retrieval quality have been observed. However, existing research predominantly focuses on a limited set of retrieval methods, evaluated in pairs on domain-general datasets exclusively in English. In this work, we study the efficacy of hybrid search across a variety of prominent retrieval models within the unexplored field of law in the French language, assessing both zero-shot and in-domain scenarios. Our findings reveal that in a zero-shot context, fusing different domain-general models consistently enhances performance compared to using a standalone model, regardless of the fusion method. Surprisingly, when models are trained in-domain, we find that fusion generally diminishes performance relative to using the best single system, unless fusing scores with carefully tuned weights. These novel insights, among others, expand the applicability of prior findings across a new field and language, and contribute to a deeper understanding of hybrid search in non-English specialized domains.
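For contrast with weighted score fusion, rank-based methods such as reciprocal rank fusion (RRF) combine systems without inspecting raw scores at all, which is one reason the abstract can compare fusion "regardless of the fusion method." The sketch below is a generic RRF implementation, not code from the paper; the constant k = 60 and the example rankings are conventional placeholders.

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    """Fuse several ranked lists by summing 1 / (k + rank) for each document.

    Rank-based fusion needs no score normalization, which makes it a common
    baseline when combining lexical and dense retrievers.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Ranked lists from two hypothetical retrievers for one query.
bm25_ranking = ["doc1", "doc2", "doc3"]
dense_ranking = ["doc2", "doc4", "doc1"]
print(reciprocal_rank_fusion([bm25_ranking, dense_ranking]))
```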