Kiwon Yun
Researcher at Stony Brook University
Publications - 11
Citations - 721
Kiwon Yun is an academic researcher at Stony Brook University whose work focuses on gaze analysis and gesture recognition. He has an h-index of 7 and has co-authored 11 publications receiving 605 citations. Previous affiliations of Kiwon Yun include Yonsei University.
Papers
Proceedings Article
Two-person interaction detection using body-pose features and multiple instance learning
TL;DR: A complex human activity dataset depicting two-person interactions is created, including synchronized video, depth, and motion capture data, and techniques related to Multiple Instance Learning (MIL) are explored; the MIL-based classifier outperforms SVMs when the sequences extend temporally around the interaction of interest.
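In the MIL setting described above, labels are attached to bags of instances (e.g., a sequence of per-frame features) rather than to individual instances. A minimal sketch of the idea, not the paper's actual classifier; the linear scoring weights `w`, `b` are hypothetical stand-ins for a learned model:

```python
import numpy as np

def bag_score(instances, w, b):
    """Score a bag as the max over its instance scores (a common MIL pooling)."""
    return max(float(x @ w + b) for x in instances)

def predict_bag(instances, w, b, threshold=0.0):
    """A bag is labeled positive if at least one instance scores above threshold."""
    return bag_score(instances, w, b) > threshold

# Toy example with 2-D instance features and hypothetical learned weights.
w = np.array([1.0, -1.0])
b = 0.0
positive_bag = [np.array([2.0, 0.5]), np.array([-1.0, 0.0])]   # one positive instance
negative_bag = [np.array([-1.0, 1.0]), np.array([-0.5, 0.5])]  # all instances negative
print(predict_bag(positive_bag, w, b))  # True
print(predict_bag(negative_bag, w, b))  # False
```

Max pooling over instances captures why MIL suits temporally extended sequences: only some frames need to show the interaction for the whole bag to be classified positive.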
Proceedings Article
Studying Relationships between Human Gaze, Description, and Computer Vision
TL;DR: This paper conducts experiments to better understand the relationship between images, the eye movements people make while viewing images, and how people construct natural language to describe images in the context of two commonly used computer vision datasets.
Journal Article
Exploring the role of gaze behavior and object detection in scene understanding
TL;DR: It is proposed that a person's gaze behavior while freely viewing a scene contains an abundance of information, not only about their intent and what they consider to be important in the scene, but also about the scene's content.
Proceedings Article
Design and evaluation of a foveated video streaming service for commodity client devices
TL;DR: A multi-resolution video coding approach that is scalable: the video can be pre-coded into a small number of copies for a given set of resolutions, and the design is matched to the error performance of an eye tracker built from commodity webcams.
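The core of foveated streaming as summarized above is serving high resolution only near the gaze point and lower resolutions in the periphery, chosen from a small set of pre-coded copies. A minimal sketch under assumed values; the resolution ladder and distance thresholds are illustrative, not from the paper:

```python
def pick_resolution(tile_center, gaze, resolutions=(1080, 720, 360)):
    """Pick a pre-coded resolution for a video tile based on its pixel
    distance from the estimated gaze point.  Thresholds are hypothetical
    and would be tuned to the eye tracker's error in practice."""
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < 200:           # foveal region: full resolution
        return resolutions[0]
    elif dist < 500:         # parafoveal region: medium resolution
        return resolutions[1]
    return resolutions[2]    # periphery: low resolution

print(pick_resolution((960, 540), (960, 540)))   # tile under the gaze -> 1080
print(pick_resolution((100, 100), (960, 540)))   # far-away tile -> 360
```

Widening the thresholds trades bandwidth for robustness to gaze-estimation error, which is why matching them to the webcam tracker's accuracy matters.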
Book Chapter
Action Detection with Improved Dense Trajectories and Sliding Window
TL;DR: An action/interaction detection system based on improved dense trajectories, multiple visual descriptors, and a bag-of-features representation; temporal localization relies on a non-overlapping temporal sliding window.
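The non-overlapping temporal sliding window above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: `frame_scores` stands in for the output of the bag-of-features classifier, and the window length and threshold are assumed values:

```python
def detect_windows(frame_scores, window=30, threshold=0.5):
    """Slide a non-overlapping temporal window over per-frame scores and
    report (start, end) frame ranges whose mean score exceeds the threshold."""
    detections = []
    for start in range(0, len(frame_scores), window):  # stride == window: no overlap
        chunk = frame_scores[start:start + window]
        if sum(chunk) / len(chunk) > threshold:
            detections.append((start, start + len(chunk)))
    return detections

# Toy score track: 30 background frames, 30 action frames, 30 background frames.
scores = [0.1] * 30 + [0.9] * 30 + [0.2] * 30
print(detect_windows(scores))  # [(30, 60)]
```

Because the stride equals the window length, each frame is scored exactly once, which keeps detection cheap at the cost of coarser temporal boundaries than an overlapping scheme.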