Author
Shahar Fleishman
Other affiliations: Omek Interactive
Bio: Shahar Fleishman is an academic researcher from Intel. The author has contributed to research in topics: 3D reconstruction & Gesture recognition. The author has an h-index of 9 and has co-authored 12 publications receiving 289 citations. Previous affiliations of Shahar Fleishman include Omek Interactive.
Papers
Patent•
22 Jun 2012
TL;DR: In this article, a system and method for close range object tracking is described, in which depth images of a user's hands and fingers or other objects are used to let the user interact with an object displayed on a screen.
Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor (110). Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, thus permitting the user to interact with an object displayed on a screen (155), by using the positions and movements of his hands and fingers or other objects.
110 citations
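For a concrete picture, here is a minimal sketch of the close-range tracking idea: segment pixels in a near-depth band and follow the blob's centroid between frames. The range bounds and the simple centroid tracker are illustrative assumptions, not the patented method.

```python
# Minimal sketch of close-range tracking, assuming a depth frame given
# as a NumPy array of per-pixel distances in millimetres.
import numpy as np

NEAR_MM, FAR_MM = 100, 600  # assumed close-range band for hands/fingers

def track_hand(depth_frame: np.ndarray, prev_centroid=None):
    """Return the centroid of close-range pixels and its motion delta."""
    mask = (depth_frame > NEAR_MM) & (depth_frame < FAR_MM)
    if not mask.any():
        return None, None  # object of interest not in view
    ys, xs = np.nonzero(mask)
    centroid = np.array([xs.mean(), ys.mean()])
    delta = centroid - prev_centroid if prev_centroid is not None else None
    return centroid, delta  # delta can drive on-screen interaction
```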
Patent•
18 Feb 2016
TL;DR: In this article, techniques for 3D analysis of a scene including detection, segmentation, and registration of objects within the scene are described, which can be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints.
Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
35 citations
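The accumulation step described in the abstract can be sketched as follows: back-project each depth pixel with the camera intrinsics, then transform it by the frame's pose into the global coordinate system. The intrinsics (fx, fy, cx, cy) and the 4x4 pose matrix are assumed inputs; the names are illustrative.

```python
# Sketch of projecting and accumulating depth pixels into a global
# coordinate system, given a depth image and the frame's camera pose.
import numpy as np

def accumulate(depth: np.ndarray, pose: np.ndarray,
               fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(float).ravel()
    valid = z > 0  # skip pixels with no depth reading
    # Back-project pixel (u, v, z) into camera-space 3D points.
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]
    # Rigid transform into the shared global frame.
    return (pose @ pts_cam.T).T[:, :3]
```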
Patent•
31 Jul 2013
TL;DR: In this paper, a system and method for adjusting the parameters of a camera based upon the elements in an imaged scene are described, where the frame rate at which the camera captures images can be adjusted based upon whether the object of interest appears in the camera's field of view.
Abstract: A system and method for adjusting the parameters of a camera based upon the elements in an imaged scene are described. The frame rate at which the camera captures images can be adjusted based upon whether the object of interest appears in the camera's field of view, to reduce the camera's power consumption. The exposure time can be set based on the distance of an object from the camera to improve the quality of the acquired camera data.
24 citations
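A hedged sketch of the two adjustments the abstract describes, with the thresholds and the linear exposure model chosen purely for illustration:

```python
# Illustrative control logic: drop the frame rate when no object of
# interest is in view (saving power), and scale exposure with distance.
IDLE_FPS, ACTIVE_FPS = 5, 30

def choose_frame_rate(object_in_view: bool) -> int:
    return ACTIVE_FPS if object_in_view else IDLE_FPS

def choose_exposure_ms(distance_mm: float,
                       base_ms: float = 4.0, ref_mm: float = 500.0) -> float:
    # Farther objects return less light, so allow longer exposure,
    # clamped to keep motion blur acceptable.
    return min(base_ms * distance_mm / ref_mm, 33.0)
```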
Patent•
13 Jul 2016
TL;DR: In this paper, context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene is presented; the method also includes detecting objects based on the 3D reconstruction, the camera pose, and the image frames.
Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, based on the 3D reconstruction, the camera pose and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
19 citations
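The grouping-and-fusion step might look roughly like this: cluster instances of one object class by feature similarity, then concatenate their point clouds (assumed here to already be in a shared global frame). The cosine-similarity threshold is an assumed parameter, not a value from the patent.

```python
# Sketch of grouping object instances by feature similarity and
# combining their point clouds into fused objects.
import numpy as np

def fuse_instances(features: list[np.ndarray],
                   clouds: list[np.ndarray],
                   sim_thresh: float = 0.9) -> list[np.ndarray]:
    fused, used = [], set()
    for i, f_i in enumerate(features):
        if i in used:
            continue
        group = [clouds[i]]
        for j in range(i + 1, len(features)):
            if j in used:
                continue
            sim = f_i @ features[j] / (
                np.linalg.norm(f_i) * np.linalg.norm(features[j]))
            if sim >= sim_thresh:  # similar enough: same group
                group.append(clouds[j])
                used.add(j)
        fused.append(np.concatenate(group, axis=0))  # fused point cloud
    return fused
```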
Patent•
11 Dec 2014
TL;DR: In this article, a feature vector including invariant features associated with an area of interest within an image of an object is generated, and a component label is provided based on an application of a machine learning classifier to the feature vector.
Abstract: Techniques related to labeling component parts and detecting component properties in imaging data are discussed. Such techniques may include generating a feature vector including invariant features associated with an area of interest within an image of an object such as an image of a hand and providing a component label such as a hand part label for the area of interest based on an application of a machine learning classifier to the feature vector.
18 citations
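As a rough illustration of the labeling step, a classifier maps a feature vector of invariant features to a hand-part label. scikit-learn's random forest and the synthetic training data below are stand-ins for illustration, not the classifier or data from the patent.

```python
# Sketch of component labeling: invariant feature vector -> part label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

PART_LABELS = ["palm", "thumb", "index", "middle", "ring", "pinky"]

# Placeholder training data; a real system would use labeled features
# extracted from depth or color images of hands.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(600, 16))
y_train = rng.integers(0, len(PART_LABELS), size=600)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)

def label_area(feature_vector: np.ndarray) -> str:
    """Apply the classifier to one area-of-interest feature vector."""
    return PART_LABELS[clf.predict(feature_vector.reshape(1, -1))[0]]
```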
Cited by
Patent•
03 Sep 2014
TL;DR: In this article, an electronic device coupleable to a display screen includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z₀ plane.
Abstract: An electronic device coupleable to a display screen includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z₀ plane, while controlling operation of the device. Subtle gestures include hand movements commenced in a dynamically resizable and relocatable interaction zone. Preferably (x,y,z) locations in the interaction zone are mapped to two-dimensional display screen locations. Detected user hand movements can signal the device that an interaction is occurring in gesture mode. Device response includes presenting a GUI on the display screen and creating user feedback, including haptic feedback. User three-dimensional interaction can manipulate displayed virtual objects, including releasing such objects. User hand gesture trajectory clues enable the device to anticipate probable user intent and to appropriately update display screen renderings.
152 citations
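The mapping of (x,y,z) interaction-zone locations to two-dimensional screen locations can be sketched as a normalize-and-scale; the zone bounds and screen size below are assumed example values:

```python
# Sketch: normalize a hand position inside a (resizable) interaction
# zone, then scale it to display pixels.
import numpy as np

zone_min = np.array([-150.0, -100.0, 200.0])  # zone corner, mm (assumed)
zone_max = np.array([150.0, 100.0, 500.0])
screen_w, screen_h = 1920, 1080

def zone_to_screen(p: np.ndarray) -> tuple[int, int]:
    n = (p - zone_min) / (zone_max - zone_min)  # normalize to [0, 1]
    n = np.clip(n, 0.0, 1.0)                    # stay inside the zone
    return int(n[0] * (screen_w - 1)), int(n[1] * (screen_h - 1))
```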
Patent•
19 Jul 2011
TL;DR: In this paper, an image of a 3D object is displayed on a 2D interactive surface, input is received and interpreted for manipulating the 3D object, and rotation control handles indicating available rotation directions are displayed.
Abstract: Computerized methods and interactive input systems for manipulation of 3D objects are disclosed. An image of a 3D object is displayed on a 2D interactive surface, and input is received and interpreted for manipulating the 3D object. When the 3D object is selected, rotation control handles indicating available rotation directions are displayed. In one embodiment, the method comprises capturing images of a 3D input space, recognizing at least one object in the images, and comparing the recognized objects in the images to determine a difference therebetween based on a difference threshold. Depending on the outcome of the comparison, the recognized objects are merged and associated with digital content, or only one of the recognized objects is maintained and associated with digital content.
123 citations
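The comparison step in the second embodiment can be illustrated with a simple distance-against-threshold check; the feature distance and the merge rule here are assumptions for the sketch, not the disclosed method:

```python
# Sketch: compare two recognized objects (as feature vectors) against a
# difference threshold, then either merge them or keep just one.
import numpy as np

def reconcile(obj_a: np.ndarray, obj_b: np.ndarray,
              diff_threshold: float = 0.2) -> list[np.ndarray]:
    diff = np.linalg.norm(obj_a - obj_b)
    if diff < diff_threshold:
        return [(obj_a + obj_b) / 2]  # similar enough: merge the two
    return [obj_a]                    # otherwise maintain only one
```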
Patent•
03 Sep 2014
TL;DR: In this paper, a pair of two-dimensional cameras are used to acquire information for user gestures made with an unadorned user object in an interaction zone responsive to viewing displayed imagery, with which the user can interact.
Abstract: User wearable eye glasses include a pair of two-dimensional cameras that optically acquire information for user gestures made with an unadorned user object in an interaction zone responsive to viewing displayed imagery, with which the user can interact. Glasses systems intelligently signal-process and map acquired optical information to rapidly ascertain a sparse (x,y,z) set of locations adequate to identify user gestures. The displayed imagery can be created by glasses systems and presented with a virtual on-glasses display, or can be created and/or viewed off-glasses. In some embodiments the user can see local views directly, but augmented with imagery showing internet-provided tags identifying and/or providing information as to viewed objects. On-glasses systems can communicate wirelessly with cloud servers and with off-glasses systems that the user can carry in a pocket or purse.
122 citations
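Recovering a sparse (x, y, z) location from the glasses' two cameras reduces, for rectified views, to standard stereo triangulation (z = f·B/d). The focal length, baseline, and principal point below are assumed example values, not parameters from the patent:

```python
# Sketch: triangulate one 3D point from a matched pixel pair in a
# rectified stereo camera pair.
def triangulate(u_left: float, u_right: float, v: float,
                f_px: float = 700.0, baseline_m: float = 0.06,
                cx: float = 640.0, cy: float = 360.0):
    d = u_left - u_right       # disparity in pixels
    if d <= 0:
        return None            # no valid correspondence
    z = f_px * baseline_m / d  # depth from disparity
    x = (u_left - cx) * z / f_px
    y = (v - cy) * z / f_px
    return (x, y, z)
```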
Patent•
15 Feb 2011
TL;DR: In this paper, the authors propose an architecture that combines multiple depth cameras and multiple projectors to cover a specified space (e.g., a room) and allows the development of a multi-dimensional model of objects in the space, as well as the ability to project graphics in a controlled fashion on the same objects.
Abstract: Architecture that combines multiple depth cameras and multiple projectors to cover a specified space (e.g., a room). The cameras and projectors are calibrated, allowing the development of a multi-dimensional (e.g., 3D) model of the objects in the space, as well as the ability to project graphics in a controlled fashion on the same objects. The architecture incorporates the depth data from all depth cameras, as well as color information, into a unified multi-dimensional model in combination with calibrated projectors. In order to provide visual continuity when transferring objects between different locations in the space, the user's body can provide a canvas on which to project this interaction. As the user moves body parts in the space, without any other object, the body parts can serve as temporary “screens” for “in-transit” data.
108 citations
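Given the unified model and a calibrated projector, lighting a model point (e.g., on a user's hand serving as an "in-transit" screen) amounts to projecting it through the projector's extrinsics and intrinsics. The matrices here are assumed inputs in a minimal sketch:

```python
# Sketch: map a 3D point in the unified model frame to the projector
# pixel that would illuminate it, using the projector's calibration.
import numpy as np

def model_point_to_projector_pixel(p_world: np.ndarray,
                                   extrinsics: np.ndarray,  # 4x4 world->projector
                                   K: np.ndarray            # 3x3 intrinsics
                                   ) -> tuple[int, int]:
    p_h = np.append(p_world, 1.0)        # homogeneous coordinates
    p_proj = (extrinsics @ p_h)[:3]      # into the projector's frame
    uvw = K @ p_proj                     # pinhole projection
    return int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
```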