Kelvin Cheng
Researcher at Rakuten
Publications - 42
Citations - 343
Kelvin Cheng is an academic researcher from Rakuten. The author has contributed to research on topics including augmented reality and computer science. The author has an h-index of 7, co-authored 35 publications, and received 276 citations. Previous affiliations of Kelvin Cheng include the National University of Singapore and the University of Sydney.
Papers
Direct interaction with large-scale display systems using infrared laser tracking devices
Kelvin Cheng, Kevin Pulo +1 more
TL;DR: This work introduces hotspots (regions surrounding objects of interest) and gestures (movements made with the laser pointer that trigger an action, similar to those found in modern web browsers), implemented using an infrared laser pointer and an infrared tracking device.
Proceedings ArticleDOI
PinchPad: performance of touch-based gestures while grasping devices
TL;DR: This paper designs generic interactions for discrete, continuous, and combined gesture commands that are executed without hand-eye control, because the performing fingers are hidden behind a grasped device; the thumb can always be used as a proprioceptive reference for guiding finger movements.
Proceedings ArticleDOI
A Neural Network for Detailed Human Depth Estimation From a Single Image
TL;DR: This paper separates the depth map into a smooth base shape and a residual detail shape, and designs a network with two branches to regress them respectively, capturing geometry details such as cloth wrinkles.
Proceedings ArticleDOI
Supporting interaction and collaboration on large displays using tablet devices
TL;DR: An efficient management and navigation interface is proposed, built on an interactive world-in-miniature view and multi-touch gestures: users can intuitively manage the views on their tablets, navigate between different areas of the workspace, or enlarge areas for a closer look.
Proceedings ArticleDOI
Does proprioception guide back-of-device pointing as well as vision?
TL;DR: The results show that users do not require complex techniques to visualize finger position on the rear of the device; visual feedback does not affect performance parameters such as effectiveness, perceived performance, or the number of trials needed to select a target.