dc.contributor.author | Wu, Hui-Yin | en_US |
dc.contributor.author | Santarra, Trevor | en_US |
dc.contributor.author | Leece, Michael | en_US |
dc.contributor.author | Vargas, Rolando | en_US |
dc.contributor.author | Jhala, Arnav | en_US |
dc.contributor.editor | Christie, Marc and Wu, Hui-Yin and Li, Tsai-Yen and Gandhi, Vineet | en_US |
dc.date.accessioned | 2020-05-24T13:14:09Z | |
dc.date.available | 2020-05-24T13:14:09Z | |
dc.date.issued | 2020 | |
dc.identifier.isbn | 978-3-03868-127-4 | |
dc.identifier.issn | 2411-9733 | |
dc.identifier.uri | https://doi.org/10.2312/wiced.20201131 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/wiced20201131 | |
dc.description.abstract | Joint attention refers to the shared focal points of attention of occupants in a space. In this work, we introduce a computational definition of joint attention for the automated editing of meetings in multi-camera environments from the AMI corpus. Using extracted head pose and individual headset amplitude as features, we developed three editing methods: (1) a naive audio-based method that selects the camera using only the headset input, (2) a rule-based edit that selects cameras at a fixed pacing using pose data, and (3) an editing algorithm using joint attention learned from both pose and audio data with an LSTM (Long Short-Term Memory) network, trained on expert edits. The methods are evaluated qualitatively against the human edit, and quantitatively in a user study with 22 participants. Results indicate that LSTM-trained joint attention produces edits comparable to the expert edit, offering a wider range of camera views than the audio-based method while being more generalizable than the rule-based method. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | smart conferencing | |
dc.subject | automated video editing | |
dc.subject | joint attention | |
dc.subject | LSTM | |
dc.title | Joint Attention for Automated Video Editing | en_US |
dc.description.seriesinformation | Workshop on Intelligent Cinematography and Editing | |
dc.description.sectionheaders | Afternoon Session | |
dc.identifier.doi | 10.2312/wiced.20201131 | |
dc.identifier.pages | 37-37 | |