EgoSpeak: Learning When to Speak for Egocentric Conversational Agents in the Wild
Junhyeok Kim, Min Soo Kim, Jiwan Chung, Jungbin Cho, Jisoo Kim, Sungwoong Kim, Gyeongbo Sim, Youngjae Yu
2025-02-24
Summary
This paper introduces EgoSpeak, a new AI system that helps conversational agents decide the best time to start speaking during real-world conversations by analyzing video from a first-person perspective.
What's the problem?
AI conversational agents often struggle to know when to speak in dynamic, real-world situations. Current systems rely on simplified setups or basic audio cues, which don't work well in complex conversations where people interrupt or overlap each other.
What's the solution?
The researchers created EgoSpeak, which processes video from the speaker's own point of view, along with audio, to pick up cues like body language and gaze direction and predict when the agent should start talking. They also introduced YT-Conversation, a dataset of real-world conversational videos from YouTube for pretraining the system, and showed that EgoSpeak outperforms older methods at deciding when to speak.
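The core idea, deciding frame by frame whether to start speaking based on a sliding window of recent cues, can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual model: the `SpeechInitiationPredictor` class, its toy scoring rule, and the 0.5 threshold are all invented stand-ins for EgoSpeak's learned streaming network.

```python
from collections import deque

class SpeechInitiationPredictor:
    """Toy sketch of an online speak/wait decision loop.

    EgoSpeak uses a learned model over streaming egocentric video;
    here a hand-written scorer stands in so the control flow runs.
    """

    def __init__(self, context_len=4, threshold=0.5):
        # Sliding window over recent frames (the paper notes that
        # context length matters for deciding when to speak).
        self.context = deque(maxlen=context_len)
        self.threshold = threshold

    def score(self, visual_cue, audio_silence):
        # Invented stand-in for the learned model: favor speaking when
        # the partner shows an addressing cue (e.g. gaze) and the
        # audio channel has gone silent. Both inputs are in [0, 1].
        return 0.7 * visual_cue + 0.3 * audio_silence

    def step(self, visual_cue, audio_silence):
        """Consume one frame's features; return True to start speaking."""
        self.context.append(self.score(visual_cue, audio_silence))
        # Averaging over the window smooths single-frame noise.
        prob = sum(self.context) / len(self.context)
        return prob >= self.threshold

predictor = SpeechInitiationPredictor(context_len=4)
# Partner talking with no gaze cue at first, then turning to us
# as the audio falls silent.
decisions = [predictor.step(v, a) for v, a in
             [(0.0, 0.0), (0.1, 0.0), (0.9, 1.0), (1.0, 1.0)]]
print(decisions)  # speak only once the cues accumulate
```

Running the loop on this made-up cue sequence, the predictor stays silent while the partner is talking and only fires after strong visual and silence cues fill the context window, which mirrors the streaming, untrimmed-video setting the paper targets.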
Why it matters?
This matters because it makes AI more natural and human-like in conversations, improving its ability to interact in real-world settings. EgoSpeak could be used in social robots or virtual assistants, making them more engaging and effective in tasks like customer service or personal assistance.
Abstract
Predicting when to initiate speech in real-world environments remains a fundamental challenge for conversational agents. We introduce EgoSpeak, a novel framework for real-time speech initiation prediction in egocentric streaming video. By modeling the conversation from the speaker's first-person viewpoint, EgoSpeak is tailored for human-like interactions in which a conversational agent must continuously observe its environment and dynamically decide when to talk. Our approach bridges the gap between simplified experimental setups and complex natural conversations by integrating four key capabilities: (1) first-person perspective, (2) RGB processing, (3) online processing, and (4) untrimmed video processing. We also present YT-Conversation, a diverse collection of in-the-wild conversational videos from YouTube, as a resource for large-scale pretraining. Experiments on EasyCom and Ego4D demonstrate that EgoSpeak outperforms random and silence-based baselines in real time. Our results also highlight the importance of multimodal input and context length in effectively deciding when to speak.