LiveMind: Low-latency Large Language Models with Simultaneous Inference

Chuangtao Chen, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Ulf Schlichtmann, Bing Li

2024-06-21

Summary

This paper presents LiveMind, a new framework designed to make large language models (LLMs) faster and more efficient by allowing them to process incomplete user inputs while they are still being typed.

What's the problem?

Traditional methods for using LLMs require users to finish typing their entire input before the model can start generating a response. This waiting time leads to slow responses, making interactions with AI feel less natural and more frustrating, especially for long prompts.

What's the solution?

The researchers developed the LiveMind framework, which allows LLMs to begin processing user input even while it is incomplete. By reallocating computation to the time when users are still typing, LiveMind significantly reduces the time it takes for the model to respond once the prompt is finished. The results showed that this method cuts response times by an average of 59% on the MMLU-Pro dataset while maintaining comparable accuracy. Additionally, by using a large model for inference during typing and a smaller model for generating the final output, they achieved an even greater latency reduction (68% on average) along with a 5.5% accuracy improvement over the small-model baseline.
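To make the idea concrete, here is a minimal sketch of a LiveMind-style controller. It is not the paper's implementation; all names (`LiveMindController`, `toy_infer`, etc.) are illustrative, and the expensive model call is replaced by a stub. The key point it shows is the reallocation of work: reasoning happens on each completed sentence as it streams in, so only a short final step remains when the user finishes typing.

```python
# Hypothetical sketch of simultaneous inference over a streaming prompt.
# The real system manages prompt visibility for an actual LLM; here the
# model call is a stub so the control flow is runnable on its own.

def segment_complete(buffer: str) -> bool:
    """Treat sentence-ending punctuation as a complete segment."""
    return buffer.rstrip().endswith((".", "?", "!"))

class LiveMindController:
    def __init__(self, infer_fn):
        self.infer_fn = infer_fn  # expensive reasoning step (stubbed below)
        self.notes = []           # cached intermediate inferences
        self.buffer = ""

    def on_user_text(self, chunk: str) -> None:
        """Called as the user types; infer on each completed sentence."""
        self.buffer += chunk
        if segment_complete(self.buffer):
            # Reallocate work to the typing phase: reason over the
            # partial prompt now instead of waiting for the full input.
            self.notes.append(self.infer_fn(self.buffer, self.notes))
            self.buffer = ""

    def finalize(self) -> str:
        """When input ends, only a short final step is on the critical path."""
        if self.buffer:
            self.notes.append(self.infer_fn(self.buffer, self.notes))
            self.buffer = ""
        return " | ".join(self.notes)

# Stub standing in for an LLM call over the visible portion of the prompt.
def toy_infer(segment: str, prior_notes: list) -> str:
    return f"thought({segment.strip()})"

ctrl = LiveMindController(toy_infer)
for chunk in ["The capital ", "of France?"]:
    ctrl.on_user_text(chunk)
answer = ctrl.finalize()
```

Because the intermediate notes are computed while the user is still typing, the perceived latency after the final keystroke shrinks to the cost of the last step alone.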

Why it matters?

This research is important because it enhances how we interact with AI systems, making them more responsive and efficient. By reducing latency, LiveMind paves the way for smoother and more engaging conversations with AI, similar to how humans communicate in real-time. This improvement could lead to better applications in customer service, virtual assistants, and other areas where quick and accurate responses are essential.

Abstract

In this paper, we introduce a novel low-latency inference framework for large language models (LLMs) that enables them to perform inference with incomplete prompts. By reallocating computational processes to the prompt input phase, we achieve a substantial reduction in latency, thereby significantly enhancing the interactive experience for users of LLMs. The framework adeptly manages the visibility of the streaming prompt to the model, allowing it to infer from incomplete prompts or await additional input. Compared with traditional inference methods that utilize complete prompts, our approach demonstrates an average reduction of 59% in response latency on the MMLU-Pro dataset, while maintaining comparable accuracy. Additionally, our framework facilitates collaborative inference and output across different models. By employing an LLM for inference and a small language model (SLM) for output, we achieve an average 68% reduction in response latency, alongside a 5.5% improvement in accuracy on the MMLU-Pro dataset compared with the SLM baseline. For long prompts exceeding 20 sentences, the response latency can be reduced by up to 93%.
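The LLM+SLM collaboration described above can be sketched as a simple role split. This is an assumption-laden illustration, not the paper's code: both models are stubs, and the function names are made up. What it captures is that the slow model's reasoning pass overlaps with typing, so only the fast model's output pass remains on the critical path at submit time.

```python
# Hypothetical sketch of collaborative inference: a large model reasons
# over the partial prompt while the user types; a small model verbalizes
# the final reply. Both model calls are stubs.

def large_model_reason(prompt_so_far: str) -> str:
    # Stands in for the expensive LLM reasoning pass (runs during typing).
    return f"reasoning about: {prompt_so_far}"

def small_model_answer(full_prompt: str, cached_reasoning: str) -> str:
    # Stands in for the fast SLM that only verbalizes the final answer.
    return f"answer({full_prompt}) using [{cached_reasoning}]"

# While the user is still typing: the LLM works on the partial prompt.
cached = large_model_reason("What is 2 + 2")

# At submit time: only the cheap SLM call adds to perceived latency.
reply = small_model_answer("What is 2 + 2?", cached)
```

This division is why the combined setup can be both faster than the LLM alone and more accurate than the SLM alone: the SLM's output is conditioned on reasoning it could not have produced by itself.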