Author

兼司 岡

Bio: 兼司 岡 is an academic researcher. The author has contributed to research on the topics of Interface (computing) and Luminance, has an h-index of 3, and has co-authored 6 publications receiving 197 citations.

Papers
Journal Article
TL;DR: A fast and robust method for tracking a user's hand and multiple fingertips, together with gesture recognition based on measured fingertip trajectories, for augmented desk interface systems; the approach is particularly advantageous for human-computer interaction (HCI).

170 citations

Patent
12 Apr 2004
TL;DR: In this paper, an animation editing system is presented in which gestures of the hands and fingers execute the editing operations, addressing the shortcomings of conventional animation editing with a mouse or keyboard.
Abstract: PROBLEM TO BE SOLVED: To provide an animation editing system in which gestures of the hands and fingers execute the editing operations. SOLUTION: A user 160 performs each animation-editing operation through hand and finger gestures on an animation editing screen projected onto a screen 150 by a projector 130 connected to an image-projecting computer 110. A camera 140 connected to an image-processing computer 120 captures the gestures, and the image-processing computer 120 recognizes the positions of the hands and fingers. The image-projecting computer 110 recognizes the meanings of the gestures from the positions of the user's 160 hands and fingers, executes the corresponding image editing processing, and projects the updated animation editing screen onto the screen 150. This interface avoids the problems of conventional animation editing systems operated with a mouse or the like. COPYRIGHT: (C)2006,JPO&NCIPI
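The capture-recognize-edit-reproject cycle described above can be sketched as a simple dispatch loop. This is an illustrative sketch only; the gesture labels, the `classify_gesture` stub, and the action names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of the gesture-driven editing loop: recognize hand
# positions per frame, map them to a gesture, dispatch an editing action.
# All names here are illustrative, not from the patent.

EDIT_ACTIONS = {
    "pinch": "select_object",
    "drag": "move_object",
}

def classify_gesture(hand_positions):
    """Map recognized fingertip positions to a gesture label (stub).

    A real system would match fingertip trajectories against templates;
    here we only branch on the number of detected fingertips."""
    if len(hand_positions) >= 2:
        return "pinch"
    return "drag"

def editing_loop(frames):
    """One pass of the capture -> recognize -> edit -> re-project cycle."""
    for hand_positions in frames:      # stand-in for recognized camera frames
        gesture = classify_gesture(hand_positions)
        action = EDIT_ACTIONS.get(gesture)
        if action:
            yield action               # the editor would execute the action
                                       # and re-project the updated screen
```

For example, a two-fingertip frame followed by a one-fingertip frame yields `["select_object", "move_object"]`.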

17 citations

Patent
20 Jul 2010
TL;DR: In this article, a pupil detection method is proposed which can stably detect a pupil by actively using information regarding the reflected image of the cornea, even if a large part of the pupil is hidden by that reflected image.
Abstract: PROBLEM TO BE SOLVED: To provide a pupil detection device and a pupil detection method which can stably detect a pupil by actively using information regarding the reflected image of the cornea, even if a large part of the pupil is hidden by the corneal reflection. SOLUTION: The pupil detection device 100 comprises: a circumferential state evaluation unit 105 which sets a plurality of line segments, each having one end at a reference point in the reflected image of the cornea and having a predetermined length, and calculates a luminance evaluation value on the basis of the luminance of each pixel in each line segment and a reference luminance; a pupil center straight line calculation unit 106 which, on the basis of the luminance evaluation values, identifies from among the line segments a pupil center straight line passing through the center of the pupil image; and a pupil searching unit 107 which detects the pupil image on the basis of the luminance state around the pupil center straight line.
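The segment-evaluation idea can be sketched as follows: cast segments at several angles from a reference point, score each by how dark its pixels are relative to a reference luminance, and keep the darkest as the candidate pupil-center line. The sampling scheme, the mean-deviation score, and the angle sweep are assumptions for illustration; the patent does not specify them.

```python
import math

# Illustrative sketch of the line-segment luminance evaluation described
# above. The scoring rule (mean deviation from a reference luminance) and
# the fixed angle sweep are assumptions, not the patented method.

def segment_pixels(image, origin, angle, length):
    """Sample pixel luminances along a segment starting at `origin`."""
    ox, oy = origin
    pixels = []
    for t in range(length):
        x = int(round(ox + t * math.cos(angle)))
        y = int(round(oy + t * math.sin(angle)))
        if 0 <= y < len(image) and 0 <= x < len(image[0]):
            pixels.append(image[y][x])
    return pixels

def luminance_score(pixels, reference_luminance):
    """Lower mean luminance relative to the reference suggests the dark pupil."""
    if not pixels:
        return float("inf")
    return sum(p - reference_luminance for p in pixels) / len(pixels)

def pupil_center_line(image, origin, length, reference_luminance, n_angles=16):
    """Return the angle of the darkest segment: a stand-in for the
    'pupil center straight line' selection of the abstract."""
    angles = [2 * math.pi * k / n_angles for k in range(n_angles)]
    return min(angles, key=lambda a: luminance_score(
        segment_pixels(image, origin, a, length), reference_luminance))
```

On a synthetic image where the dark pupil region extends to the right of the reference point, the function selects the segment at angle 0.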

4 citations

Patent
02 Mar 2009
TL;DR: In this article, an image capturing device is presented which is provided with a camera unit 3 that captures images of the same subject through two optical systems, a face part detection unit, and an exposure control value determination unit.
Abstract: PROBLEM TO BE SOLVED: To provide an image capturing device capable of accurately measuring the distances to face parts. SOLUTION: The image capturing device 1 is provided with a camera unit 3 which captures images of the same subject by means of two optical systems, a face part detection unit 9 which detects a plurality of face parts that compose a face included in each of the images captured by the camera unit 3, a face part luminance calculation unit 10 which calculates the luminance values of the detected plurality of face parts, and an exposure control value determination unit 12 which finds the exposure control value of the camera unit on the basis of the luminance values of the plurality of face parts. A distance measurement unit 17 of the image capturing device 1 measures the distances to the face parts on the basis of the images captured by the camera unit 3 using the exposure control value. COPYRIGHT: (C)2010,JPO&INPIT
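One plausible reading of the exposure-control step is a proportional rule: average the luminances of the detected face parts and scale the exposure so that average approaches a target level. The target value and the proportional rule are assumptions for illustration; the patent only states that the control value is derived from the face-part luminances.

```python
# Hedged sketch of deriving an exposure control value from face-part
# luminances. The mid-gray target and the proportional scaling rule are
# assumptions, not taken from the patent.

TARGET_LUMINANCE = 118  # assumed mid-gray target on a 0-255 scale

def face_part_luminances(face_parts):
    """Mean luminance of each detected face-part region (lists of pixels)."""
    return [sum(region) / len(region) for region in face_parts]

def exposure_control_value(luminances, current_exposure):
    """Scale the exposure so the mean face-part luminance approaches the target."""
    mean_lum = sum(luminances) / len(luminances)
    return current_exposure * TARGET_LUMINANCE / mean_lum
```

For example, if the face parts average a luminance of 59 at the current exposure, the rule doubles the exposure to bring them near the target.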

3 citations

Patent
27 Sep 2010
TL;DR: In this article, the authors propose a line-of-sight estimation system that is capable of obtaining high-precision line-of-sight estimation results with suppressed errors and without delay, even if a large error of the sort stemming from pupil misdetection is included in the line-of-sight measurement results.
Abstract: PROBLEM TO BE SOLVED: To provide a line-of-sight estimation apparatus capable of obtaining high-precision line-of-sight estimation results with suppressed errors and without delay, even if a large error of the sort stemming from pupil misdetection is included in the line-of-sight measurement results. SOLUTION: The line-of-sight estimation apparatus 200 comprises: an image inputting section 201 that captures and takes in an image of a person; a line-of-sight measurement section 202 that measures the direction of the line of sight at the present time on the basis of the taken image; a line-of-sight measurement result storing section 211 that stores line-of-sight measurement results obtained in the past; a representative value extracting section 212 that extracts a representative value of the past line-of-sight measurement results; and a line-of-sight determining section 213 that determines whether the difference between the representative value of the past measurements and the measurement at the present time is below a predetermined threshold, and accordingly selects either the past line-of-sight measurement result or the measurement at the present time as the line-of-sight estimation result.
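The selection rule above can be sketched in a few lines: compare the current gaze measurement against a representative value of recent measurements and fall back to the past value when the jump exceeds a threshold. The choice of median as the representative value, the threshold, and the window size are assumptions for illustration.

```python
from statistics import median

# Sketch of the selection rule in the abstract: if the current measurement
# jumps too far from a representative value of recent history, treat it as
# a likely misdetection and keep the past value instead. The median, the
# 10-degree threshold, and the 10-sample window are assumptions.

THRESHOLD_DEG = 10.0  # assumed outlier threshold in degrees

def estimate_gaze(history, current):
    """Return (gaze estimate, updated history) for one measurement step."""
    if not history:
        return current, [current]
    representative = median(history)
    if abs(current - representative) < THRESHOLD_DEG:
        estimate = current          # plausible measurement: trust it
    else:
        estimate = representative   # likely misdetection: keep past value
    return estimate, (history + [current])[-10:]  # bounded window
```

For instance, with recent measurements of 5, 6, and 7 degrees, a new reading of 8 degrees is accepted, while a jump to 40 degrees is replaced by the 6-degree representative value.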

2 citations


Cited by
Journal ArticleDOI
TL;DR: A literature review on the second research direction, which aims to capture the real 3D motion of the hand, a very challenging problem in the context of HCI.

901 citations

Proceedings ArticleDOI
27 Jun 2004
TL;DR: A method is presented for robust tracking in highly cluttered environments that makes effective use of 3D depth sensing technology, resulting in illumination-invariant tracking.
Abstract: A method is presented for robust tracking in highly cluttered environments. The method makes effective use of 3D depth sensing technology, resulting in illumination-invariant tracking. A few applications using tracking are presented including face tracking and hand tracking.

507 citations

Proceedings ArticleDOI
15 Jun 2000
TL;DR: A multi-PC/camera system that can perform 3D reconstruction and ellipsoid fitting of moving humans in real time; using a simple and user-friendly interface, the user can display and observe, in real time and from any viewpoint, the 3D models of the moving human body.
Abstract: We present a multi-PC/camera system that can perform 3D reconstruction and ellipsoid fitting of moving humans in real time. The system consists of five cameras. Each camera is connected to a PC which locally extracts the silhouette of the moving person in the image captured by the camera. The five silhouette images are then sent, via a local network, to a host computer to perform 3D voxel-based reconstruction by an algorithm called SPOT. Ellipsoids are then used to fit the reconstructed data. Using a simple and user-friendly interface, the user can display and observe, in real time and from any viewpoint, the 3D models of the moving human body. At a rate higher than 15 frames per second, the system is able to capture, non-intrusively, a sequence of human motions.
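The general idea behind silhouette-based voxel reconstruction (the family SPOT belongs to) can be sketched as voxel carving: a voxel survives only if its projection falls inside every camera's silhouette. This is a generic sketch, not a reproduction of the SPOT algorithm; the projection functions and silhouette layout are illustrative.

```python
# Minimal sketch of silhouette-based voxel carving, the general idea behind
# voxel-based visual-hull reconstruction. Each "camera" is modeled as a
# (projection function, binary silhouette) pair; these are assumptions for
# illustration, not the SPOT algorithm itself.

def in_silhouette(pixel, silhouette):
    """True if the projected pixel lands on a foreground (1) silhouette cell."""
    x, y = pixel
    return (0 <= y < len(silhouette) and 0 <= x < len(silhouette[0])
            and silhouette[y][x] == 1)

def carve_voxels(voxels, cameras):
    """Keep only voxels whose projection lies inside every silhouette."""
    reconstructed = []
    for voxel in voxels:
        if all(in_silhouette(project(voxel), sil) for project, sil in cameras):
            reconstructed.append(voxel)
    return reconstructed
```

With simple orthographic projections (a top view dropping z and a front view dropping y), voxels outside any silhouette are carved away, leaving an approximation of the visual hull.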

447 citations

Patent
04 Mar 2008
TL;DR: In this article, a user's gesture is recognized from first and second images, an interaction command corresponding to the recognized user gesture is determined, and, based on the determined interaction command, an image object displayed in a user interface is manipulated.
Abstract: Enhanced image viewing, in which a user's gesture is recognized from first and second images, an interaction command corresponding to the recognized user's gesture is determined, and, based on the determined interaction command, an image object displayed in a user interface is manipulated.

375 citations

Proceedings ArticleDOI
13 Oct 2004
TL;DR: By segmenting the hand regions from the video images and then augmenting them transparently into a graphical interface, the Visual Touchpad provides a compelling direct manipulation experience without the need for more expensive tabletop displays or touch-screens, and with significantly less self-occlusion.
Abstract: This paper presents the Visual Touchpad, a low-cost vision-based input device that allows for fluid two-handed interactions with desktop PCs, laptops, public kiosks, or large wall displays. Two downward-pointing cameras are attached above a planar surface, and a stereo hand tracking system provides the 3D positions of a user's fingertips on and above the plane. Thus the planar surface can be used as a multi-point touch-sensitive device, but with the added ability to also detect hand gestures hovering above the surface. Additionally, the hand tracker not only provides positional information for the fingertips but also finger orientations. A variety of one and two-handed multi-finger gestural interaction techniques are then presented that exploit the affordances of the hand tracker. Further, by segmenting the hand regions from the video images and then augmenting them transparently into a graphical interface, our system provides a compelling direct manipulation experience without the need for more expensive tabletop displays or touch-screens, and with significantly less self-occlusion.

343 citations