Temporal Segmentation of Creative Live Streams

ACM Conference on Human Factors in Computing Systems (CHI)

Publication date: April 23, 2020

Ailie Fraser, Joy Kim, Hijung Valentina Shin, Joel Brandt, Mira Dontcheva

Many artists broadcast their creative process through live streaming platforms like Twitch and YouTube, and people often watch archives of these broadcasts later for learning and inspiration. Unfortunately, because live stream videos are often multiple hours long and hard to skim and browse, few can leverage the wealth of knowledge hidden in these archives. We present an approach for automatic temporal segmentation of creative live stream videos. Using an audio transcript and a log of software usage, the system segments the video into sections that the artist can optionally label with meaningful titles. We evaluate this approach by gathering feedback from expert streamers and comparing automatic segmentations to those made by viewers. We find that, while there is no one "correct" way to segment a live stream, our automatic method performs similarly to viewers, and streamers find it useful for navigating their streams after making slight adjustments and adding section titles.
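The paper's approach fuses an audio transcript with a software-usage log to place section boundaries. As a loose illustration of the usage-log side only (a hypothetical sketch, not the paper's actual algorithm; the `ToolEvent` type, `propose_boundaries` function, and the 30-second gap threshold are all assumptions for this example), one might propose a boundary wherever the active tool changes after a period of inactivity:

```python
# Hypothetical sketch: propose section boundaries in a creative live
# stream from a software-usage log. NOT the paper's method -- just a
# stand-in heuristic to illustrate the kind of signal involved.

from dataclasses import dataclass

@dataclass
class ToolEvent:
    time: float   # seconds into the stream
    tool: str     # e.g. "brush", "eraser", "text"

def propose_boundaries(events, min_gap=30.0):
    """Return timestamps where a new section plausibly begins.

    A boundary is proposed when the tool changes AND at least
    `min_gap` seconds have passed since the previous event.
    """
    boundaries = []
    for prev, cur in zip(events, events[1:]):
        if cur.tool != prev.tool and cur.time - prev.time >= min_gap:
            boundaries.append(cur.time)
    return boundaries

log = [
    ToolEvent(0, "brush"),
    ToolEvent(20, "brush"),
    ToolEvent(90, "eraser"),   # tool change after a 70 s gap -> boundary
    ToolEvent(95, "eraser"),
    ToolEvent(100, "text"),    # tool change but only a 5 s gap -> no boundary
]
print(propose_boundaries(log))  # -> [90]
```

In the paper itself, such usage-log cues are combined with the audio transcript, and streamers can then adjust boundaries and add titles.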


Research Area: Human Computer Interaction