F-HOI: Toward Fine-grained Semantic-Aligned 3D Human-Object Interactions
Jie Yang, Xuesong Niu, Nan Jiang, Ruimao Zhang, Siyuan Huang
2024-07-24

Summary
This paper presents F-HOI, a new approach to understanding human-object interactions (HOIs) in 3D environments. Rather than treating an interaction as one undivided event, it builds a more detailed and accurate representation by breaking the interaction down into a sequence of intermediate states, each paired with its own description.
What's the problem?
Current datasets and models for 3D human-object interactions typically align a single global description with an entire interaction sequence, so they miss what happens between the different stages of the interaction. Without these intermediate states, models struggle to learn, predict, and represent complex actions in a realistic way.
What's the solution?
To solve this problem, the authors introduce a new dataset called Semantic-HOI, which contains over 20,000 paired interaction states, each annotated with a fine-grained description of the state and of the body movements that occur between consecutive states. They also build F-HOI, a model trained on this dataset to learn from these fine-grained descriptions. F-HOI is designed to handle various tasks related to human-object interactions by using multimodal instructions that combine text and visual information, allowing it to understand and generate more accurate representations of HOIs across different contexts.
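To make the idea of state-level annotation concrete, here is a minimal sketch of what one paired-state record could look like. The field names, pose formats, and file paths are illustrative assumptions made for this summary, not the released Semantic-HOI schema.

# Hypothetical example of one paired-state record; all fields are placeholders.
record = {
    "interaction": "lifting a box from the floor onto a table",
    "state_t": {
        "image": "frames/lift_box/0012.png",   # view of the current state (assumed path)
        "human_pose": [0.0] * 72,              # e.g. SMPL-style body pose parameters (placeholder)
        "object_pose": [0.0] * 7,              # object translation + rotation (placeholder)
        "description": "The person squats with both hands gripping the sides of the box.",
    },
    "state_t1": {
        "image": "frames/lift_box/0024.png",
        "human_pose": [0.0] * 72,
        "object_pose": [0.0] * 7,
        "description": "The person stands upright, holding the box at waist height.",
    },
    "transition": "The person straightens the legs and back, raising the box from floor to waist level.",
}

A dataset of such records lets a model be supervised at each state and at each transition, rather than only at the level of the whole sequence.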
Why it matters?
This research is significant because it improves how AI systems understand and represent human-object interactions, which is crucial for applications like robotics, virtual reality, and animation. By providing a more detailed understanding of these interactions, F-HOI can help create more realistic simulations and improve the performance of AI in tasks that involve human and object dynamics.
Abstract
Existing 3D human-object interaction (HOI) datasets and models simply align global descriptions with the long HOI sequence, while lacking a detailed understanding of intermediate states and the transitions between states. In this paper, we argue that fine-grained semantic alignment, which utilizes state-level descriptions, offers a promising paradigm for learning semantically rich HOI representations. To achieve this, we introduce Semantic-HOI, a new dataset comprising over 20K paired HOI states with fine-grained descriptions for each HOI state and the body movements that happen between two consecutive states. Leveraging the proposed dataset, we design three state-level HOI tasks to accomplish fine-grained semantic alignment within the HOI sequence. Additionally, we propose a unified model called F-HOI, designed to leverage multimodal instructions and empower the Multi-modal Large Language Model to efficiently handle diverse HOI tasks. F-HOI offers multiple advantages: (1) It employs a unified task formulation that supports the use of versatile multimodal inputs. (2) It maintains consistency in HOI across 2D, 3D, and linguistic spaces. (3) It utilizes fine-grained textual supervision for direct optimization, avoiding intricate modeling of HOI states. Extensive experiments reveal that F-HOI effectively aligns HOI states with fine-grained semantic descriptions, adeptly tackling understanding, reasoning, generation, and reconstruction tasks.
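As one way to picture the unified task formulation the abstract describes, the sketch below phrases a single state-level task (generating the next HOI state from the current state and a movement description) as a text instruction for a multimodal model. The prompt wording, function name, and the idea of requesting pose parameters as text output are assumptions made for illustration, not F-HOI's actual prompt templates or decoding interface.

# Illustrative sketch of a state-level instruction; not F-HOI's real prompt format.
def build_next_state_instruction(current_state_desc: str, movement_desc: str) -> str:
    """Compose a text instruction asking a multimodal LLM to produce the next HOI state."""
    return (
        "You are given the current human-object interaction state and the body "
        "movement that follows.\n"
        f"Current state: {current_state_desc}\n"
        f"Movement: {movement_desc}\n"
        "Predict the next interaction state as human and object pose parameters."
    )

prompt = build_next_state_instruction(
    "The person squats with both hands gripping the sides of the box.",
    "The person straightens the legs and back, lifting the box to waist height.",
)
print(prompt)

Because the supervision is textual and state-level, an instruction pattern like this can plausibly be adapted to the understanding, reasoning, and reconstruction tasks as well, by changing which parts of a state are given as input and which are requested as output.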