Shoe Style-Invariant and Ground-Aware Learning for Dense Foot Contact Estimation

Daniel Sungho Jung, Kyoung Mu Lee

2025-12-03

Summary

This paper focuses on figuring out exactly where a person's feet touch the ground from a single image, which is important for understanding how people move and interact with their surroundings.

What's the problem?

Current methods for detecting foot contact aren't very detailed: they typically predict contact only at a few joints and approximate it by assuming the foot stops moving (a zero-velocity constraint) when it touches the ground. It's also hard for a model to predict *where* a foot is touching from just a picture, because shoes come in many different styles and the ground often looks very plain, making it difficult to extract useful features.

What's the solution?

The researchers created a framework called FECO (FEet COntact estimation) that predicts dense foot contact from a single image. To handle the variety of shoes, they use adversarial training that forces the model to ignore differences in shoe appearance when estimating contact. To deal with the plain-looking ground, they add a ground feature extractor that captures ground properties from spatial context, helping the model determine where the foot is touching.
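Style-invariant adversarial training of this kind is commonly implemented with a gradient reversal trick: a style classifier tries to recognize the shoe from the contact features, while the reversed gradient pushes the feature extractor to make that impossible. Since the paper's code is not yet released, the sketch below is a generic NumPy illustration of that trick; the function names and the `lambda_` scaling factor are assumptions for illustration, not FECO's actual implementation.

```python
import numpy as np

def grl_forward(features):
    """Forward pass is the identity: the (hypothetical) shoe-style
    classifier sees the contact features unchanged."""
    return features

def grl_backward(grad_from_style_classifier, lambda_=1.0):
    """Backward pass negates (and scales) the gradient, so the feature
    extractor is trained to *confuse* the style classifier, yielding
    shoe style-invariant features."""
    return -lambda_ * grad_from_style_classifier

# Toy demonstration: features flow through unchanged in the forward pass...
f = np.array([0.2, -1.3, 0.7])
assert np.allclose(grl_forward(f), f)

# ...but the style-classification gradient is flipped (and here halved)
# before it reaches the contact-feature extractor.
g = np.array([0.5, 0.1, -0.4])
assert np.allclose(grl_backward(g, lambda_=0.5), [-0.25, -0.05, 0.2])
```

In a full training loop, the reversed gradient would be added to the contact-estimation gradient, so one backbone serves both objectives at once.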

Why it matters?

This research is important because accurately knowing where feet are touching the ground allows for more realistic and detailed computer models of human movement, which could be useful in areas like robotics, animation, and understanding how people walk and balance.

Abstract

Foot contact plays a critical role in human interaction with the world, and thus exploring foot contact can advance our understanding of human movement and physical interaction. Despite its importance, existing methods often approximate foot contact using a zero-velocity constraint and focus on joint-level contact, failing to capture the detailed interaction between the foot and the world. Dense estimation of foot contact is crucial for accurately modeling this interaction, yet predicting dense foot contact from a single RGB image remains largely underexplored. There are two main challenges for learning dense foot contact estimation. First, shoes exhibit highly diverse appearances, making it difficult for models to generalize across different styles. Second, ground often has a monotonous appearance, making it difficult to extract informative features. To tackle these issues, we present a FEet COntact estimation (FECO) framework that learns dense foot contact with shoe style-invariant and ground-aware learning. To overcome the challenge of shoe appearance diversity, our approach incorporates shoe style adversarial training that enforces shoe style-invariant features for contact estimation. To effectively utilize ground information, we introduce a ground feature extractor that captures ground properties based on spatial context. As a result, our proposed method achieves robust foot contact estimation regardless of shoe appearance and effectively leverages ground information. Code will be released.