scispace - formally typeset
Patent

Gestures for touch sensitive input devices

TL;DR: In this patent, the authors describe a system for processing touch inputs that reads data from a multipoint sensing device, such as a multipoint touch screen, and identifies at least one multipoint gesture based on the data from the multipoint sensing device.
Abstract: Methods and systems for processing touch inputs are disclosed. The invention in one respect includes reading data from a multipoint sensing device such as a multipoint touch screen where the data pertains to touch input with respect to the multipoint sensing device, and identifying at least one multipoint gesture based on the data from the multipoint sensing device.
Citations
Patent
06 Sep 2007
TL;DR: In this patent, a computer-implemented method for use in conjunction with a computing device with a touch screen display comprises: detecting one or more finger contacts with the touch screen display, applying one or more heuristics to the finger contacts to determine a command for the device, and processing the command.
Abstract: A computer-implemented method for use in conjunction with a computing device with a touch screen display comprises: detecting one or more finger contacts with the touch screen display, applying one or more heuristics to the one or more finger contacts to determine a command for the device, and processing the command. The one or more heuristics comprise: a heuristic for determining that the one or more finger contacts correspond to a one- dimensional vertical screen scrolling command, a heuristic for determining that the one or more finger contacts correspond to a two-dimensional screen translation command, and a heuristic for determining that the one or more finger contacts correspond to a command to transition from displaying a respective item in a set of items to displaying a next item in the set of items.

2,167 citations
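The heuristic dispatch described in the abstract can be sketched as a small classifier over a finger-drag's direction and speed. The angle thresholds and speed cutoff below are illustrative assumptions, not values from the patent:

```python
import math

def classify_gesture(dx, dy, speed):
    """Classify a single-finger drag into one of the three example
    commands from the abstract. Thresholds are hypothetical."""
    angle = math.degrees(math.atan2(abs(dy), abs(dx)))  # 0 = horizontal, 90 = vertical
    if angle >= 63:                    # mostly vertical movement
        return "vertical-scroll"
    if angle <= 27 and speed > 300:    # fast, mostly horizontal swipe
        return "next-item"
    return "2d-translate"              # anything in between
```

A real implementation would apply such heuristics continuously as contacts move, not once per completed stroke.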

Patent
03 Mar 2006
TL;DR: In this patent, a multi-functional hand-held device capable of configuring user inputs based on how the device is to be used is presented, along with a user-configurable GUI for each of the device's multiple functions.
Abstract: Disclosed herein is a multi-functional hand-held device capable of configuring user inputs based on how the device is to be used. Preferably, the multifunctional hand-held device has at most only a few physical buttons, keys, or switches so that its display size can be substantially increased. The multifunctional hand-held device also incorporates a variety of input mechanisms, including touch sensitive screens, touch sensitive housings, display actuators, audio input, etc. The device also incorporates a user-configurable GUI for each of the multiple functions of the devices.

1,844 citations

Patent
11 Jan 2011
TL;DR: In this patent, an intelligent automated assistant system engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions.
Abstract: An intelligent automated assistant system engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions. The system can be implemented using any of a number of different platforms, such as the web, email, smartphone, and the like, or any combination thereof. In one embodiment, the system is based on sets of interrelated domains and tasks, and employs additional functionally powered by external services with which the system can interact.

1,462 citations

Patent
Jong Hwan Kim1
13 Mar 2015
TL;DR: In this patent, a mobile terminal is presented that includes a body; a touchscreen provided on the front and extending to a side of the body and configured to display content; and a controller configured to detect when one side of the body comes into contact with a side of an external terminal, and to display a first area on the touchscreen corresponding to the contact area between the body and the external terminal and a second area including the content.
Abstract: A mobile terminal including a body; a touchscreen provided on the front and extending to a side of the body and configured to display content; and a controller configured to detect when one side of the body comes into contact with one side of an external terminal, display a first area on the touchscreen corresponding to a contact area of the body and the external terminal and a second area including the content, receive an input moving the content displayed in the second area to the first area, display the content in the first area, and share the content in the first area with the external terminal.

1,441 citations

Patent
19 Jul 2005
TL;DR: In this patent, a user interface method is presented that detects a touch, determines a user interface mode when the touch is detected, and activates one or more GUI elements based on the user interface mode in response to the detected touch.
Abstract: A user interface method is disclosed. The method includes detecting a touch and then determining a user interface mode when a touch is detected. The method further includes activating one or more GUI elements based on the user interface mode and in response to the detected touch.

1,390 citations
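The detect-then-activate flow in this abstract amounts to a mode-based dispatch. The mode names and GUI elements below are hypothetical, since the patent does not name specific ones:

```python
# Hypothetical mode-to-widget table; the patent does not enumerate
# specific modes or GUI elements.
MODE_ELEMENTS = {
    "music":      ["scroll-wheel", "volume-slider"],
    "navigation": ["zoom-buttons", "compass"],
}

def on_touch(current_application):
    """On a detected touch, determine the UI mode from context, then
    return the GUI elements to activate for that mode."""
    mode = "music" if current_application == "player" else "navigation"
    return MODE_ELEMENTS[mode]
```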

References
Patent
05 Oct 2010
TL;DR: In this patent, the authors propose a two-layer electrode structure in which the sense electrodes have branches extending in the first direction part of the way towards each adjacent sense electrode, so that end portions of the branches of adjacent sense electrodes co-extend with each other.
Abstract: A capacitive position sensor has a two-layer electrode structure. Drive electrodes extend in a first direction on a first plane on one side of a substrate. Sense electrodes extend in a second direction on a second plane on the other side of the substrate so that the sense electrodes cross the drive electrodes at a plurality of intersections which collectively form a position sensing array. The sense electrodes are provided with branches extending in the first direction part of the way towards each adjacent sense electrode, so that end portions of the branches of adjacent sense electrodes co-extend with each other in the first direction, separated by a distance sufficiently small that capacitive coupling to the drive electrode adjacent to the co-extending portion is reduced. Providing sense electrode branches allows a sensor to be made which has a greater extent in the first direction for a given number of sense channels, since the co-extending portions provide an interpolating effect. The number of sense electrode branches per drive electrode can be increased, which allows a sensor to be made which has an even greater extent in the first direction without having to increase the number of sense channels.

764 citations
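The "interpolating effect" between sense channels is commonly exploited with a weighted centroid over per-channel signal strengths. This is a generic sketch of that idea, not the patent's specific method:

```python
def interpolate_position(signals, pitch=1.0):
    """Estimate a touch position from per-channel capacitance deltas
    using a weighted centroid (a common interpolation technique, used
    here only as an illustration). `pitch` is the channel spacing."""
    total = sum(signals)
    if total == 0:
        return None  # no touch detected
    return pitch * sum(i * s for i, s in enumerate(signals)) / total
```

A touch centered between channels 1 and 2 of a four-channel array, for example, yields a position of 1.5 channel pitches.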

Patent
01 Jul 2002

734 citations

Patent
14 Oct 1998
TL;DR: In this patent, a system and method are presented for recognizing gestures made by a moving subject within an image and performing an operation based on the semantic meaning of the gesture.
Abstract: A system and method are disclosed for providing a gesture recognition system for recognizing gestures made by a moving subject within an image and performing an operation based on the semantic meaning of the gesture. A subject, such as a human being, enters the viewing field of a camera connected to a computer and performs a gesture, such as flapping of the arms. The gesture is then examined by the system one image frame at a time. Positional data is derived from the input frames and compared to data representing gestures already known to the system. The comparisons are done in real-time and the system can be trained to better recognize known gestures or to recognize new gestures. A frame of the input image containing the subject is obtained after a background image model has been created. An input frame is used to derive a frame data set that contains particular coordinates of the subject at a given moment in time. This series of frame data sets is examined to determine whether it conveys a gesture that is known to the system. If the subject gesture is recognizable to the system, an operation based on the semantic meaning of the gesture can be performed by a computer.

712 citations
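The frame-by-frame pipeline in the abstract (derive per-frame coordinate data sets, then compare the series against known gestures) can be sketched as follows. The mean point-wise distance used here is a deliberately simple stand-in for the patent's comparison step; a real system might use dynamic time warping or a hidden Markov model:

```python
def gesture_distance(seq_a, seq_b):
    """Mean point-wise distance between two equal-length sequences of
    frame data sets, each frame reduced to one (x, y) coordinate."""
    total = 0.0
    for (ax, ay), (bx, by) in zip(seq_a, seq_b):
        total += ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    return total / len(seq_a)

def recognize(input_seq, known_gestures, threshold=10.0):
    """Compare the series of frame data sets against each known
    gesture template; return the best match under the threshold,
    or None if the gesture is not recognizable."""
    best_name, best_dist = None, threshold
    for name, template in known_gestures.items():
        d = gesture_distance(input_seq, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name
```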

Proceedings ArticleDOI
01 Jun 1992
TL;DR: The video presents a two-phase interaction technique that combines gesture and direct manipulation, yielding a powerful interaction with the advantages of both.
Abstract: A gesture, as the term is used here, is a hand-made mark used to give a command to a computer. The attributes of the gesture (its location, size, extent, orientation, and dynamic properties) can be mapped to parameters of the command. An operation, operands, and parameters can all be communicated simultaneously with a single, intuitive, easily drawn gesture. This makes gesturing an attractive interaction technique. Typically, a gestural interaction is completed (e.g. the stylus is lifted) before the gesture is classified, its attributes computed, and the intended command performed. There is no opportunity for the interactive manipulation of parameters in the presence of application feedback that is typical of drag operations in direct manipulation interfaces. This lack of continuous feedback during the interaction makes the use of gestures awkward for tasks that require such feedback. The video presents a two-phase interaction technique that combines gesture and direct manipulation. A two-phase interaction begins with a gesture, which is recognized during the interaction (e.g. while the stylus is still touching the writing surface). After recognition, the application is informed and the interaction continues, allowing the user to manipulate parameters interactively. The result is a powerful interaction which combines the advantages of gesturing and direct manipulation.

700 citations
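The two phases described above — recognize mid-stroke, then hand the rest of the stroke to direct manipulation — can be sketched as a small state machine. The state names and recognizer interface are illustrative assumptions:

```python
class TwoPhaseInteraction:
    """Minimal sketch of the two-phase technique: the stroke is
    classified while the stylus is still down, and subsequent motion
    manipulates the recognized command's parameters interactively."""

    def __init__(self, recognizer):
        self.recognizer = recognizer  # maps a point list to a command name or None
        self.points = []
        self.command = None           # set once the gesture is recognized

    def stylus_move(self, x, y):
        if self.command is None:
            # Phase 1: still gesturing; try to recognize mid-stroke.
            self.points.append((x, y))
            self.command = self.recognizer(self.points)
            return ("gesturing", None)
        # Phase 2: direct manipulation with continuous feedback.
        return ("manipulating", (self.command, (x, y)))
```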

Patent
13 Sep 2002
TL;DR: In this patent, a gesture recognition system includes elements for detecting and generating a signal corresponding to a number of markers arranged on an object, elements for processing the signal from the detecting elements, and elements for detecting the positions of the markers in the signal.
Abstract: A gesture recognition system includes: elements for detecting and generating a signal corresponding to a number of markers arranged on an object, elements for processing the signal from the detecting elements, and elements for detecting the positions of the markers in the signal. The markers are divided into a first and a second set, the first set constituting a reference position, and the system comprises elements for detecting movement of the second set of markers and registering it as a valid movement with respect to the reference position.

697 citations
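Measuring the second marker set's movement relative to the first set's reference position can be sketched by expressing the moving marker in the reference centroid's frame each step, so motion of the whole object cancels out. The helper names and validity threshold are hypothetical:

```python
def centroid(markers):
    """Mean position of a set of (x, y) markers."""
    xs = [x for x, _ in markers]
    ys = [y for _, y in markers]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def movement_in_reference_frame(ref_prev, ref_now, pt_prev, pt_now):
    """Displacement of a second-set marker expressed relative to the
    reference centroid in each frame, so whole-object motion cancels."""
    rpx, rpy = centroid(ref_prev)
    rnx, rny = centroid(ref_now)
    return ((pt_now[0] - rnx) - (pt_prev[0] - rpx),
            (pt_now[1] - rny) - (pt_prev[1] - rpy))

def is_valid_movement(delta, min_distance=5.0):
    """Treat only sufficiently large relative motion as a valid
    movement; the threshold is an assumption, not from the patent."""
    return (delta[0] ** 2 + delta[1] ** 2) ** 0.5 >= min_distance
```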