Author
Yaron Yanai
Other affiliations: AMIT, Omek Interactive
Bio: Yaron Yanai is an academic researcher from Intel. The author has contributed to research in topics: Object (computer science) & User interface. The author has an h-index of 10, co-authored 13 publications receiving 408 citations. Previous affiliations of Yaron Yanai include AMIT & Omek Interactive.
Papers
Patent
22 Jun 2012
TL;DR: In this article, a system and method for close-range object tracking using depth images of a user's hands and fingers or other objects is described, permitting the user to interact with an object displayed on a screen.
Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor (110). Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen (155), by using the positions and movements of his hands and fingers or other objects.
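As a rough illustration of the first step such a close-range tracker implies, isolating candidate hand pixels from a depth frame, here is a minimal Python sketch; the thresholds and function name are illustrative, not taken from the patent:

```python
import numpy as np

def segment_close_range(depth_mm: np.ndarray,
                        near: float = 200.0,
                        far: float = 600.0) -> np.ndarray:
    """Return a boolean mask of pixels within the close-range depth band.

    Pixels at depth 0 are invalid sensor readings and are excluded.
    """
    valid = depth_mm > 0
    return valid & (depth_mm >= near) & (depth_mm <= far)

# Toy 3x3 depth frame (millimetres); only the centre pixel is in range.
frame = np.array([[0.0, 1000.0, 150.0],
                  [700.0, 400.0, 650.0],
                  [0.0, 100.0, 900.0]])
mask = segment_close_range(frame)
```

A real pipeline would follow this with connected-component analysis and model fitting, but banding the depth range is the usual starting point for close-range interaction.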
110 citations
Patent
04 Apr 2013
TL;DR: In this paper, a tracking module processes depth data of a user performing movements, for example movements of the user's hand and fingers; the tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user on a 3D display.
Abstract: Systems and methods for combining three-dimensional tracking of a user's movements with a three-dimensional user interface display are described. A tracking module processes depth data of a user performing movements, for example, movements of the user's hand and fingers. The tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user using a three-dimensional display. Also displayed are one or more virtual objects with which the user can interact. In some embodiments, the interaction of the user with the virtual objects controls an electronic device.
84 citations
15 Dec 2010
TL;DR: Animating in-game avatars using real-time motion capture data is highly appealing and is becoming more widespread now that consumer-priced depth sensors are accessible to researchers and developers.
Abstract: Animating in-game avatars using real-time motion capture data is highly appealing and is becoming more widespread now that consumer-priced depth sensors [Microsoft Kinect 2010] are accessible to researchers and developers. Depth sensors allow for a cheap and robust motion capture solution which can be naturally adapted to games. However, in spite of the many advantages of using real-time motion capture data for animating avatars in games, there are two major challenges. The first is due to the limitations of current tracking techniques in producing smooth, noise-free, accurate animation in real time. The second, and more acute, problem stems from the fact that in most games the movements of the animated avatar are expected to be more expressive than the player's actual movements. In such cases, one would like to visually enhance the player's motion to display exaggerated or even supernatural motions.
66 citations
Patent
18 Feb 2016
TL;DR: In this article, techniques for 3D analysis of a scene, including detection, segmentation, and registration of objects within the scene, are described; the analysis results can be used to implement augmented reality operations, including the removal and insertion of objects and the generation of blueprints.
Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
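The step of projecting and accumulating depth pixels into a global coordinate system described above is a standard pinhole back-projection. A minimal Python/NumPy sketch, assuming a 4x4 camera-to-world pose matrix and conventional intrinsics (fx, fy, cx, cy) — none of which are specified in the patent — might look like:

```python
import numpy as np

def backproject(depth_mm, fx, fy, cx, cy, pose):
    """Lift each valid depth pixel to a 3D point in the global frame.

    `pose` is a 4x4 camera-to-world transform; intrinsics follow the
    standard pinhole model. Returns an (N, 3) array of points.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.ravel()
    valid = z > 0
    u, v, z = u.ravel()[valid], v.ravel()[valid], z[valid]
    # Pinhole model: pixel (u, v) at depth z maps to camera-space (x, y, z).
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
    # Apply the camera-to-world pose to land in the global coordinate system.
    return (pose @ pts_cam.T).T[:, :3]
```

Accumulating the returned points over many frames, each with its own pose, yields the kind of global 3D reconstruction the abstract refers to.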
35 citations
Patent
15 Dec 2015
TL;DR: In this paper, techniques for generating synthetic 3D object image variations for training recognition systems are described, including an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model.
Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
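As a toy illustration of the per-variation parameters such a pipeline varies (pose, translation, illumination, simulated camera effects), here is a small Python sketch; every name and range below is invented for illustration and not taken from the patent:

```python
import random

def sample_variation(seed=None):
    """Draw one synthetic-render parameter set covering the circuits the
    abstract names: pose adjustment, translation, illumination, and
    simulated camera effects. All ranges are illustrative."""
    rng = random.Random(seed)
    return {
        "yaw_deg":           rng.uniform(-180.0, 180.0),  # object pose
        "pitch_deg":         rng.uniform(-30.0, 30.0),
        "tx_m":              rng.uniform(-0.2, 0.2),      # translation
        "light_intensity":   rng.uniform(0.5, 1.5),       # illumination
        "camera_noise_sigma": rng.uniform(0.0, 0.02),     # simulated camera
    }
```

Each sampled dictionary would drive one render of the 3D model against a generated background, producing a labelled color/depth pair for training.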
29 citations
Cited by
TL;DR: The calibration of the Kinect sensor is discussed, and an analysis of the accuracy and resolution of its depth data is provided, based on a mathematical model of depth measurement from disparity.
Abstract: Consumer-grade range cameras such as the Kinect sensor have the potential to be used in mapping applications where accuracy requirements are less strict. To realize this potential, insight into the geometric quality of the data acquired by the sensor is essential. In this paper we discuss the calibration of the Kinect sensor, and provide an analysis of the accuracy and resolution of its depth data. Based on a mathematical model of depth measurement from disparity, a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimeters up to about 4 cm at the maximum range of the sensor. The quality of the data is also found to be influenced by the low resolution of the depth measurements.
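The growth of random depth error with distance follows from the disparity model z = f·b/d: propagating a disparity noise σ_d through that relation gives σ_z ≈ z²·σ_d/(f·b), i.e. quadratic growth. A small Python sketch, using commonly quoted Kinect-style values for the focal length and baseline and a disparity-noise value chosen so the numbers roughly match the millimetres-to-~4 cm range reported (all three constants are assumptions, not figures from the paper):

```python
def depth_random_error(z_m, f_px=585.6, baseline_m=0.075, sigma_d_px=0.07):
    """Theoretical random error of disparity-based depth, in metres.

    From z = f*b/d, first-order error propagation of disparity noise
    sigma_d gives sigma_z ~ z**2 * sigma_d / (f * b): the error grows
    quadratically with distance from the sensor.
    """
    return z_m ** 2 * sigma_d_px / (f_px * baseline_m)
```

With these assumed constants the model predicts roughly 1.6 mm of random error at 1 m and about 4 cm at 5 m, consistent in shape with the experimental trend described in the abstract.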
1,671 citations
Patent
24 Mar 2015
TL;DR: In this article, a mobile terminal is described that includes a display configured to display information; a short-range communication module configured to exchange a signal with an external control device; and a controller configured to receive the signal, determine a context at the time the signal is received, and perform a corresponding operation in that context.
Abstract: A mobile terminal including a display configured to display information; a short range communication module configured to exchange a signal with an external control device; and a controller configured to receive the signal from the external control device, determine a context at a timing point of receiving the signal, and control an operation corresponding to the signal to be performed in the determined context. Further, the operation includes at least one of activation/deactivation of the display, an activation/deactivation of a lock mode of the mobile terminal, a volume operation and a camera operation.
242 citations
Patent
03 Sep 2014
TL;DR: In this article, an electronic device coupleable to a display screen is described; it includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z0 plane, while controlling operation of the device.
Abstract: An electronic device coupleable to a display screen includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z0 plane, while controlling operation of the device. Subtle gestures include hand movements commenced in a dynamically resizable and relocatable interaction zone. Preferably (x,y,z) locations in the interaction zone are mapped to two-dimensional display screen locations. Detected user hand movements can signal the device that an interaction is occurring in gesture mode. Device response includes presenting GUI on the display screen, creating user feedback including haptic feedback. User three-dimensional interaction can manipulate displayed virtual objects, including releasing such objects. User hand gesture trajectory clues enable the device to anticipate probable user intent and to appropriately update display screen renderings.
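The mapping of interaction-zone locations to two-dimensional display coordinates mentioned in the abstract can be sketched as a simple normalisation against the zone's current bounds, which also accommodates the zone being resized or relocated; the function and zone representation below are illustrative, not from the patent:

```python
def zone_to_screen(x, y, zone, screen_w, screen_h):
    """Map an (x, y) position inside a rectangular interaction zone to
    pixel coordinates by normalising against the zone's current bounds.

    `zone` is (x0, y0, x1, y1) in the same units as x and y; because the
    mapping is relative to these bounds, resizing or relocating the zone
    needs no other change.
    """
    x0, y0, x1, y1 = zone
    u = (x - x0) / (x1 - x0) * screen_w
    v = (y - y0) / (y1 - y0) * screen_h
    return int(round(u)), int(round(v))
```

A full implementation would also use the z coordinate (distance from the z0 plane) to distinguish hover from engagement, which this 2D sketch omits.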
152 citations
Patent
10 Jul 2013
TL;DR: Flexible hinge and removable attachment techniques are described in this article, where a flexible hinge is configured to communicatively and physically couple an input device to a computing device and may implement functionality such as a support layer and a minimum bend radius.
Abstract: Flexible hinge and removable attachment techniques are described. In one or more implementations, a flexible hinge is configured to communicatively and physically couple an input device to a computing device and may implement functionality such as a support layer and minimum bend radius. The input device may also include functionality to promote a secure physical connection between the input device and the computing device. One example of this includes use of one or more protrusions that are configured to be removed from respective cavities of the computing device along a particular axis but mechanically bind along other axes. Other techniques include use of a laminate structure to form a connection portion of the input device.
143 citations
TL;DR: In this article, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analyzed, and machine learning algorithms were used to build systems for automatically discriminating between four emotional states, two levels of arousal and two levels of valence.
Abstract: The increasing number of people playing games on touch-screen mobile phones raises the question of whether touch behaviors reflect players’ emotional states. This prospect would not only be a valuable evaluation indicator for game designers, but also for real-time personalization of the game experience. Psychology studies on acted touch behavior show the existence of discriminative affective profiles. In this article, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analyzed. Machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal and two levels of valence. Accuracy reached between 69% and 77% for the four emotional states, and higher results (~89%) were obtained for discriminating between two levels of arousal and two levels of valence. We conclude by discussing the factors relevant to the generalization of the results to applications other than games.
140 citations