Author

Francis Macdougall

Bio: Francis Macdougall is an academic researcher from Qualcomm. The author has contributed to research in topics: Gesture & Gesture recognition. The author has an h-index of 15 and has co-authored 26 publications receiving 2,580 citations.

Papers
Patent
24 Jul 2001
TL;DR: In this paper, a method of using stereo vision to interface with a computer is described, which includes capturing a stereo image, processing the stereo image to determine position information of an object in the image, and communicating the position information to the computer.
Abstract: A method of using stereo vision to interface with a computer is provided. The method includes capturing a stereo image, and processing the stereo image to determine position information of an object in the stereo image. The object is controlled by a user. The method also includes communicating the position information to the computer to allow the user to interact with a computer application.
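
As a rough sketch of the idea, assuming a calibrated, rectified stereo pair (pinhole model, parallel cameras), the position of a user-controlled object can be triangulated from a pixel match between the two views; the focal length, baseline, and principal point below are illustrative values, not anything specified in the patent:

```python
import numpy as np

def position_from_stereo(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """Triangulate a 3-D position from a point matched in both views
    of a rectified stereo pair."""
    disparity = x_left - x_right             # pixels; larger means closer
    z = focal_px * baseline_m / disparity    # depth along the optical axis
    x = (x_left - cx) * z / focal_px         # back-project to metric X
    y_m = (y - cy) * z / focal_px            # back-project to metric Y
    return np.array([x, y_m, z])

# e.g. a fingertip seen at x=700 px in the left view and x=660 px in the right
print(position_from_stereo(700, 660, 420, focal_px=800, baseline_m=0.06,
                           cx=640, cy=360))   # -> [0.09 0.09 1.2]
```

The resulting coordinates are what would be communicated to the computer application as position information.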

838 citations

Patent
03 Oct 2001
TL;DR: In this paper, a multiple camera tracking system for interfacing with an application program running on a computer is presented, which includes two or more video cameras that are arranged to provide different viewpoints of a region of interest and are operable to produce a series of video images.
Abstract: A multiple camera tracking system for interfacing with an application program running on a computer is provided. The tracking system includes two or more video cameras arranged to provide different viewpoints of a region of interest, and are operable to produce a series of video images. A processor is operable to receive the series of video images and detect objects appearing in the region of interest. The processor executes a process to generate a background data set from the video images, generate an image data set for each received video image, compare each image data set to the background data set to produce a difference map for each image data set, detect a relative position of an object of interest within each difference map, and produce an absolute position of the object of interest from the relative positions of the object of interest and map the absolute position to a position indicator associated with the application program.
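
The per-view pipeline the abstract lists maps naturally onto a few small functions; a minimal sketch follows, assuming grayscale numpy frames, with the threshold and the averaging-based fusion chosen for illustration rather than taken from the patent:

```python
import numpy as np

def difference_map(frame, background, threshold=25):
    """Compare an image data set to the background data set."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold                      # per-pixel foreground mask

def relative_position(mask):
    """Relative position of the object of interest in one difference map."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                              # nothing detected in this view
    h, w = mask.shape
    return xs.mean() / w, ys.mean() / h          # centroid, normalised to [0, 1]

def absolute_position(per_view_positions):
    """Fuse per-view relative positions into one absolute position
    (averaged here; a real system would triangulate across viewpoints)."""
    pts = [p for p in per_view_positions if p is not None]
    return tuple(np.mean(pts, axis=0)) if pts else None
```

The fused position would then be mapped onto the application's position indicator, e.g. a cursor.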

578 citations

Patent
14 Sep 2009
TL;DR: In this paper, an element is initially displayed on an interactive touch-screen display device with an initial orientation relative to the device, and the user is determined to be interacting with the element displayed on the device.
Abstract: An element is initially displayed on an interactive touch-screen display device with an initial orientation relative to the interactive touch-screen display device. One or more images of a user of the interactive touch-screen display device are captured. The user is determined to be interacting with the element displayed on the interactive touch-screen display device. In addition, an orientation of the user relative to the interactive touch-screen display device is determined based on at least one captured image of the user of the interactive touch-screen display device. Thereafter, in response to determining that the user is interacting with the displayed element, the initial orientation of the displayed element relative to the interactive touch-screen display device is automatically adjusted based on the determined orientation of the user relative to the interactive touch-screen display device.
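
As a simple illustration of the adjustment step, suppose a hypothetical detector returns the user's two eye positions in a captured image; the element can then be rotated to match the implied angle. The detector and the eye-based angle estimate are assumptions of this sketch, not details from the patent:

```python
import math

def user_angle_deg(eye_left, eye_right):
    """Estimate the user's orientation relative to the screen from two
    eye positions (x, y) found in a captured image."""
    dx = eye_right[0] - eye_left[0]
    dy = eye_right[1] - eye_left[1]
    return math.degrees(math.atan2(dy, dx))

class DisplayedElement:
    def __init__(self, orientation_deg=0.0):
        self.orientation_deg = orientation_deg          # initial orientation

    def on_user_interaction(self, eye_left, eye_right):
        # automatically rotate the element to face the interacting user
        self.orientation_deg = user_angle_deg(eye_left, eye_right)

el = DisplayedElement()
el.on_user_interaction((300, 200), (360, 230))   # user tilted ~27 degrees
print(round(el.orientation_deg, 1))              # -> 26.6
```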

283 citations

Patent
27 Feb 2008
TL;DR: In this article, a representation of a user moves with respect to a graphical user interface based on the user's input; a gesture of the user is recognized by a gesture recognition system, and, based on the recognized gesture, the display of the graphical user interface is altered and an application control is output.
Abstract: A representation of a user can move with respect to a graphical user interface based on input of a user. The graphical user interface comprises a central region and interaction elements disposed outside of the central region. The interaction elements are not shown until the representation of the user is aligned with the central region. A gesture of the user is recognized, and, based on the recognized gesture, the display of the graphical user interface is altered and an application control is outputted.
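
A toy sketch of the reveal-on-alignment behaviour follows: interaction elements stay hidden until the user's representation enters the central region, and a recognized gesture then yields a control output. The region geometry and the "engage" gesture name are invented for illustration:

```python
class GestureUI:
    def __init__(self, central_region):
        self.central = central_region                 # (x, y, width, height)
        self.elements_visible = False

    def in_central(self, px, py):
        x, y, w, h = self.central
        return x <= px < x + w and y <= py < y + h

    def update(self, user_x, user_y, gesture=None):
        # interaction elements are shown only while the user's
        # representation is aligned with the central region
        self.elements_visible = self.in_central(user_x, user_y)
        if self.elements_visible and gesture == "engage":
            return "application-control"              # hypothetical output
        return None

ui = GestureUI((200, 150, 240, 180))
print(ui.update(300, 200, gesture="engage"))          # -> application-control
```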

259 citations

Patent
23 Jun 2008
TL;DR: In this paper, a control including radially disposed interaction elements is output, where at least a portion of the interaction elements are associated with clusters of characters; when an interaction element is selected, the characters associated with it are disposed radially in relation to the selected interaction element.
Abstract: Enhanced character input using recognized gestures, in which a user's first and second gestures are recognized, and a control including radially disposed interaction elements is output. At least a portion of the interaction elements are associated with clusters of characters. When an interaction element is selected, the characters associated with the selected interaction element are disposed radially in relation to the selected interaction element. Using the control, the interaction element and a character associated with the selected interaction element are selected based on the user's recognized first and second gestures, respectively, and the selected character is output.
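
The two-step selection can be sketched as two passes over a radial layout: the first gesture's direction picks a cluster, the cluster's characters are re-laid out radially, and the second gesture picks one character. The cluster contents and the dot-product matching below are illustrative assumptions:

```python
import math

CLUSTERS = ["ABC", "DEF", "GHI", "JKL", "MNO", "PQR", "STU", "VWX", "YZ"]

def radial_layout(items, radius=1.0):
    """Place items evenly on a circle around the control's centre."""
    step = 2 * math.pi / len(items)
    return {it: (radius * math.cos(i * step), radius * math.sin(i * step))
            for i, it in enumerate(items)}

def pick(layout, direction):
    """Select the element whose position best matches a gesture direction."""
    dx, dy = direction
    return max(layout, key=lambda k: layout[k][0] * dx + layout[k][1] * dy)

cluster = pick(radial_layout(CLUSTERS), (1.0, 0.0))    # first gesture: right
char = pick(radial_layout(list(cluster)), (0.0, 1.0))  # second gesture: up
print(cluster, char)                                   # -> ABC B
```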

217 citations


Cited by
Patent
Jong Hwan Kim1
13 Mar 2015
TL;DR: In this article, a mobile terminal is presented that includes a body; a touchscreen provided on the front and extending to a side of the body, configured to display content; and a controller configured to detect when one side of the body comes into contact with one side of an external terminal, and to display a first area on the touchscreen corresponding to the contact area between the body and the external terminal along with a second area including the content.
Abstract: A mobile terminal including a body; a touchscreen provided on the front and extending to a side of the body and configured to display content; and a controller configured to detect when one side of the body comes into contact with one side of an external terminal, display a first area on the touchscreen corresponding to a contact area of the body and the external terminal and a second area including the content, receive an input moving the content displayed in the second area to the first area, display the content in the first area, and share the content in the first area with the external terminal.
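
The flow reads naturally as a pair of event handlers; here is a rough sketch in which the contact detection, the drag input, and the transport to the peer (send_to_peer) are all placeholder assumptions:

```python
class ShareScreen:
    def __init__(self, send_to_peer):
        self.send_to_peer = send_to_peer   # placeholder transport to the peer
        self.first_area = None             # contact area, shown on contact
        self.second_area = []              # content currently displayed

    def on_edge_contact(self, edge):
        # one side of the body touched one side of the external terminal
        self.first_area = edge             # display the first (contact) area

    def on_move(self, content):
        # the user drags content from the second area into the first area
        if self.first_area is not None and content in self.second_area:
            self.second_area.remove(content)
            self.send_to_peer(content)     # share with the external terminal

s = ShareScreen(send_to_peer=print)
s.second_area = ["photo.jpg"]
s.on_edge_contact("right")
s.on_move("photo.jpg")                     # -> photo.jpg
```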

1,441 citations

Patent
01 Dec 2003
TL;DR: In this article, a perceptual user interface system includes a tracking component that detects object characteristics of at least one of a plurality of objects within a scene, and tracks the respective object.
Abstract: Architecture for implementing a perceptual user interface. The architecture comprises alternative modalities for controlling computer application programs and manipulating on-screen objects through hand gestures or a combination of hand gestures and verbal commands. The perceptual user interface system includes a tracking component that detects object characteristics of at least one of a plurality of objects within a scene, and tracks the respective object. Detection of object characteristics is based at least in part upon image comparison of a plurality of images relative to a coarse mapping of the images. A seeding component iteratively seeds the tracking component with object hypotheses based upon the presence of the object characteristics and the image comparison. A filtering component selectively removes the tracked object from the object hypotheses and/or at least one object hypothesis from the set of object hypotheses based upon predetermined removal criteria.
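
The seed/track/filter loop can be condensed into a few lines; in the sketch below, detect_candidates stands in for the image-comparison-based detection and still_valid for the predetermined removal criteria, both assumptions of this illustration:

```python
def track(frames, detect_candidates, still_valid):
    """Seed object hypotheses from detections, then filter them each frame."""
    hypotheses = []
    for frame in frames:
        hypotheses.extend(detect_candidates(frame))      # seeding component
        hypotheses = [h for h in hypotheses
                      if still_valid(h, frame)]          # filtering component
        yield list(hypotheses)                           # tracked objects so far

# toy run: each frame is a list of detections; a hypothesis survives
# only while its object is still present in the frame
for tracked in track([["hand"], [], ["hand", "face"]],
                     detect_candidates=lambda f: list(f),
                     still_valid=lambda h, f: h in f):
    print(tracked)   # -> ['hand'], [], ['hand', 'face']
```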

876 citations

Patent
18 Feb 2003
TL;DR: In this article, three-dimensional position information is used to identify the gesture created by a body part of interest, based on the shape of the body part and its position and orientation.
Abstract: Three-dimensional position information is used to identify the gesture created by a body part of interest. At one or more instances of an interval, the posture of a body part is recognized, based on the shape of the body part and its position and orientation. The postures of the body part over the one or more instances in the interval are recognized as a combined gesture. The gesture is classified to determine an input into a related electronic device.
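
As a much-simplified sketch of the combine-and-classify step, assume the per-instant postures have already been recognized as labels; a lookup table then maps a posture sequence to a gesture and the input it produces. Both the labels and the table are invented for illustration:

```python
def classify_gesture(postures, gesture_table):
    """Combine the postures recognised at each instant of the interval
    into one gesture, then look up the input it maps to."""
    return gesture_table.get(tuple(postures), "unknown")

table = {("open", "fist"): "grab",
         ("fist", "open"): "release",
         ("point", "point"): "select"}
print(classify_gesture(["open", "fist"], table))   # -> grab
```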

773 citations

Patent
04 Jun 2002
TL;DR: In this paper, an interactive video display system is presented in which a camera detects an object in an interactive area located in front of the display screen and captures three-dimensional information about the object.
Abstract: An interactive video display system. A display screen displays a visual image for presentation to a user. A camera detects an object in an interactive area located in front of the display screen, and is operable to capture three-dimensional information about the object. A computer system directs the display screen to change the visual image in response to the object.
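
The three parts of the system suggest a simple sense-and-respond loop; in this sketch the per-pixel depth frames, the 1.5 m interactive-area cutoff, and the render callback are all assumptions made for illustration:

```python
import numpy as np

def run_display(depth_frames, render, near_m=1.5):
    """Change the visual image whenever the camera's 3-D data shows an
    object inside the interactive area in front of the screen."""
    for depth in depth_frames:                   # depth in metres, per pixel
        present = bool((depth < near_m).any())   # object within the area?
        render(object_present=present)           # update the visual image

frames = [np.full((4, 4), 3.0), np.full((4, 4), 1.0)]   # far, then near
run_display(frames, render=lambda object_present: print(object_present))
# -> False, then True
```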

760 citations