Exploring Rewriting Approaches for Different Conversational Tasks
Md Mehrab Tanjim, Ryan A. Rossi, Mike Rimer, Xiang Chen, Sungchul Kim, Vaishnavi Muppala, Tong Yu, Zhengmian Hu, Ritwik Sinha, Wei Zhang, Iftikhar Ahamath Burhanuddin, Franck Dernoncourt
2025-03-06
Summary
This paper investigates different ways to improve how AI assistants understand and answer questions in conversations, focusing on two methods: rewriting and fusion.
What's the problem?
AI assistants sometimes struggle to understand questions in a conversation because a question may depend on earlier parts of the chat. Different types of AI assistants may need different ways to handle this problem.
What's the solution?
The researchers tested two methods, rewriting and fusion, on different types of AI tasks. They found that for regular question answering, rewriting works best, but for tasks where the AI needs to create charts or tables, fusion works better. They tested these methods on both short and long conversations to confirm the results hold.
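To make the two strategies concrete, here is a minimal sketch (not the paper's implementation) of how rewriting and fusion differ structurally. All names (`rewrite_query`, `fuse_query`, the `llm` callable) are hypothetical for illustration: rewriting asks a model to turn the latest question into a standalone one, while fusion simply combines recent conversation turns with the question as the generation input.

```python
# Illustrative sketch of the two strategies; `llm` is any callable
# that maps a prompt string to a completion string.

def rewrite_query(llm, history, question):
    """Rewriting: ask the model to produce a single standalone question
    that resolves references to earlier turns."""
    prompt = (
        "Rewrite the final question so it is self-contained.\n"
        "Conversation:\n" + "\n".join(history) +
        f"\nFinal question: {question}\nStandalone question:"
    )
    return llm(prompt)

def fuse_query(history, question, max_turns=3):
    """Fusion: keep the question as-is and fuse a window of recent
    turns directly into the downstream generation input."""
    context = history[-max_turns:]
    return "\n".join(context + [question])

# Example with a stub "LLM" that just echoes its prompt's last line.
stub_llm = lambda p: p.splitlines()[-1]
history = ["User: Show sales by region.", "Assistant: Here is the chart."]
print(fuse_query(history, "Now break it down by quarter."))
```

The paper's finding is that the right choice between these two shapes depends on the downstream task: rewriting for text-based question answering, fusion for visualization and table generation.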
Why it matters?
This matters because it helps make AI assistants better at understanding conversations and giving more accurate answers. By knowing which method works best for different types of tasks, developers can create smarter AI assistants that handle a wider range of questions and tasks more effectively.
Abstract
Conversational assistants often require a question rewriting algorithm that leverages a subset of past interactions to provide a more meaningful (accurate) answer to the user's question or request. However, the exact rewriting approach may often depend on the use case and application-specific tasks supported by the conversational assistant, among other constraints. In this paper, we systematically investigate two different approaches, denoted as rewriting and fusion, on two fundamentally different generation tasks, including a text-to-text generation task and a multimodal generative task that takes as input text and generates a visualization or data table that answers the user's question. Our results indicate that the specific rewriting or fusion approach highly depends on the underlying use case and generative task. In particular, we find that for a conversational question-answering assistant, the query rewriting approach performs best, whereas for a data analysis assistant that generates visualizations and data tables based on the user's conversation with the assistant, the fusion approach works best. Notably, we explore two datasets for the data analysis assistant use case, for short and long conversations, and we find that query fusion always performs better, whereas for the conversational text-based question-answering, the query rewrite approach performs best.