Sharingan: Extract User Action Sequence from Desktop Recordings

Yanting Chen, Yi Ren, Xiaoting Qin, Jue Zhang, Kehong Yuan, Lu Han, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, Qi Zhang

2024-11-15

Summary

This paper introduces Sharingan, a system that applies Vision-Language Models (VLMs) to extract user action sequences from desktop recordings.

What's the problem?

While desktop recordings are a rich source of data on user behavior, extracting meaningful action sequences from these videos is challenging. Despite rapid advances in Vision-Language Models (VLMs), they had not previously been applied to this task, leaving a gap in understanding user interactions captured in recorded videos.

What's the solution?

The authors propose two VLM-based methods for extracting user actions: the Direct Frame-Based Approach (DF) and the Differential Frame-Based Approach (DiffF). DF feeds sampled frames directly into a VLM to generate the action sequence, while DiffF first detects changes between consecutive frames using computer vision techniques and passes those explicit differences to the VLM as well. Evaluated on a basic self-curated dataset and a more advanced benchmark adapted from prior work, the DF approach identified user actions with 70% to 80% accuracy, and adding explicit UI changes in DiffF actually degraded performance, making DF the more reliable option. A rough sketch of the two pipelines is shown below.
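The paper's code is not reproduced here; the following is a minimal illustrative sketch of what the two pipelines could look like, assuming OpenCV for video handling. The sampling rate, diff threshold, prompt wording, and the `query_vlm` placeholder are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of DF and DiffF pipelines (not the authors' code).
# Assumes OpenCV for video I/O; `query_vlm` is a hypothetical stand-in
# for a real multimodal VLM API call.
import cv2


def sample_frames(video_path, every_n=30):
    """Read a desktop recording and keep every n-th frame."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames


def frame_differences(frames, threshold=25):
    """DiffF-style step: find changed regions between consecutive frames."""
    regions = []
    for prev, curr in zip(frames, frames[1:]):
        gray_prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        gray_curr = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray_prev, gray_curr)
        _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        regions.append([cv2.boundingRect(c) for c in contours])
    return regions  # one list of (x, y, w, h) boxes per frame pair


def query_vlm(frames, prompt):
    """Hypothetical VLM call; replace with an actual multimodal API."""
    raise NotImplementedError


def extract_actions_df(video_path):
    """DF: hand sampled frames straight to the VLM."""
    frames = sample_frames(video_path)
    return query_vlm(frames, "List the user actions shown in these frames.")


def extract_actions_difff(video_path):
    """DiffF: describe explicit UI changes alongside the frames."""
    frames = sample_frames(video_path)
    changes = frame_differences(frames)
    prompt = (f"Changed regions per frame pair: {changes}. "
              "Infer the user action sequence.")
    return query_vlm(frames, prompt)
```

Under this reading, DiffF hands the VLM extra structured evidence about what changed on screen; the paper's finding is that this extra signal can hurt rather than help, which is why DF comes out as the more dependable approach.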

Why it matters?

This research matters because it is the first application of VLMs to extracting user action sequences from desktop recordings. The proposed methods and benchmarks offer tools and insights that can deepen our understanding of user behavior and support more reliable process automation across applications.

Abstract

Video recordings of user activities, particularly desktop recordings, offer a rich source of data for understanding user behaviors and automating processes. However, despite advancements in Vision-Language Models (VLMs) and their increasing use in video analysis, extracting user actions from desktop recordings remains an underexplored area. This paper addresses this gap by proposing two novel VLM-based methods for user action extraction: the Direct Frame-Based Approach (DF), which inputs sampled frames directly into VLMs, and the Differential Frame-Based Approach (DiffF), which incorporates explicit frame differences detected via computer vision techniques. We evaluate these methods using a basic self-curated dataset and an advanced benchmark adapted from prior work. Our results show that the DF approach achieves an accuracy of 70% to 80% in identifying user actions, with the extracted action sequences being re-playable through Robotic Process Automation. We find that while VLMs show potential, incorporating explicit UI changes can degrade performance, making the DF approach more reliable. This work represents the first application of VLMs for extracting user action sequences from desktop recordings, contributing new methods, benchmarks, and insights for future research.
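The abstract notes that the extracted action sequences are re-playable through Robotic Process Automation. As an illustration of what such replay could look like, the sketch below assumes a simple JSON action schema (click, type, hotkey) and the pyautogui library; the paper's actual action format is not given in this summary, so every field name here is hypothetical.

```python
# Hypothetical replay of an extracted action sequence via RPA.
# The action schema below is illustrative only; the paper's actual
# format is not specified in this summary.
import json
import time

import pyautogui  # common Python GUI-automation (RPA) library

actions_json = """
[
  {"type": "click",  "x": 412, "y": 180},
  {"type": "type",   "text": "quarterly report"},
  {"type": "hotkey", "keys": ["ctrl", "s"]}
]
"""


def replay(actions, delay=0.5):
    """Replay a list of action dicts with a short pause between steps."""
    for action in actions:
        if action["type"] == "click":
            pyautogui.click(x=action["x"], y=action["y"])
        elif action["type"] == "type":
            pyautogui.write(action["text"], interval=0.05)
        elif action["type"] == "hotkey":
            pyautogui.hotkey(*action["keys"])
        time.sleep(delay)  # let the UI settle before the next action


if __name__ == "__main__":
    replay(json.loads(actions_json))
```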