Topic
Stylus
About: Stylus is a research topic. Over its lifetime, 8009 publications have been published within this topic, receiving 111335 citations.
Papers published on a yearly basis
Papers
•
21 Mar 2007
TL;DR: A haptic device provides indirect haptic feedback and virtual texture sensations to a user by modulation of friction of a touch surface of the device in response to one or more sensed parameters and/or time, as discussed by the authors.
Abstract: A haptic device provides indirect haptic feedback and virtual texture sensations to a user by modulation of friction of a touch surface of the device in response to one or more sensed parameters and/or time. The sensed parameters can include, but are not limited to, sensed position of the user's finger, derivatives of sensed finger position such as velocity and/or acceleration, sensed finger pressure, and/or sensed direction of motion of the finger. The touch surface is adapted to be touched by a user's bare finger, thumb or other appendage and/or by an instrument such as a stylus held by the user.
340 citations
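The friction-modulation scheme described in the abstract above can be sketched in Python. The function name, parameters, and the sinusoidal texture profile below are illustrative assumptions, not the patent's actual control law; the sketch only shows the idea of mapping sensed finger position and velocity to a friction level:

```python
import math

def friction_command(x_mm: float, velocity_mm_s: float,
                     texture_period_mm: float = 2.0,
                     base_friction: float = 0.2,
                     depth: float = 0.6) -> float:
    """Map sensed finger position/velocity to a friction level in [0, 1].

    A spatial sinusoid tied to finger position renders a virtual grating;
    modulation is gated off when the finger is nearly still, since a
    friction change is only perceived during motion.
    """
    if abs(velocity_mm_s) < 1.0:  # finger effectively stationary
        return base_friction
    phase = 2 * math.pi * x_mm / texture_period_mm
    level = base_friction + depth * 0.5 * (1 + math.sin(phase))
    return max(0.0, min(1.0, level))
```

Derivatives such as acceleration, finger pressure, or direction of motion could be added as further inputs in the same way.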
•
23 Sep 1997
TL;DR: In this article, a touch pad is made flexible, so that the stylus pressure compresses an insulating layer between a conductive reference layer and a matrix of conductive traces.
Abstract: The present invention provides a touch pad which can detect a simple stylus using a change in capacitance. The touch pad is made flexible, so that stylus pressure compresses an insulating layer between a conductive reference layer and a matrix of conductive traces. The compression causes a change in capacitance, due either to the two capacitive conductive surfaces moving closer to each other, or to a variation in the dielectric value of the insulating layer as its material replaces the air in air gaps.
318 citations
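The capacitance change described above follows from the parallel-plate formula C = eps0 * eps_r * A / d: shrinking the gap d raises C. A minimal numeric sketch, where the cell area and gap values are made-up illustrative numbers rather than figures from the patent:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2: float, gap_m: float, eps_r: float = 1.0) -> float:
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# A hypothetical 5 mm x 5 mm sensing cell with a 100 um air gap:
c_rest = plate_capacitance(25e-6, 100e-6)
# Stylus pressure compresses the gap to 80 um, so capacitance rises
# in proportion to 100/80 = 1.25:
c_pressed = plate_capacitance(25e-6, 80e-6)
```

The second effect the abstract mentions, insulator material displacing air in the gaps, would show up here as an increase in `eps_r` rather than a decrease in `gap_m`.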
•
PARC
TL;DR: This work presents the design and preliminary analysis of an approach to stylus touch-typing using an alphabet of unistrokes, which are letters specially designed to be used with a stylus, which have the following advantages over ordinary printing.
Abstract: One of the attractive features of keyboards is that they support novice as well as expert users. Novice users enter text using “hunt-and-peck”; experts use touch-typing. Although it takes time to learn touch-typing, there is a large payoff in faster operation. In contrast to keyboards, pen-based computers have only a novice mode for text entry, in which users print text to a character recognizer. An electronic pen (or stylus) would be more attractive as an input device if it supported expert users with some analogue of touch-typing. We present the design and preliminary analysis of an approach to stylus touch-typing using an alphabet of unistrokes, which are letters specially designed to be used with a stylus. Unistrokes have the following advantages over ordinary printing: they are faster to write, less prone to recognition error, and can be entered in an “eyes-free” manner that requires very little screen real estate.
303 citations
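A minimal sketch of single-stroke recognition in the spirit of the work above. The direction-quantization scheme and the tiny template table are invented here for illustration; they are not the actual Unistrokes alphabet or recognizer:

```python
def stroke_directions(points):
    """Quantize a stroke (list of (x, y) points, screen coordinates
    with y increasing downward) into a run-length-collapsed string
    of compass directions, e.g. 'ES' for right-then-down."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            d = 'E' if dx > 0 else 'W'
        else:
            d = 'S' if dy > 0 else 'N'
        if not dirs or dirs[-1] != d:
            dirs.append(d)
    return ''.join(dirs)

# Hypothetical template set mapping one stroke shape to one letter.
TEMPLATES = {'E': 'i', 'W': 't', 'S': 'l'}  # illustrative only

def recognize(points):
    return TEMPLATES.get(stroke_directions(points), '?')
```

Because each letter is a single stroke distinguished by its direction profile rather than its drawn shape, recognition of this kind needs no screen feedback, which is what makes “eyes-free” entry plausible.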
•
30 Jul 2002
TL;DR: Butler et al. present context menus for a computing device receiving user input via a stylus: icons representing actions performable on an object are displayed in a context menu for that object.
Abstract: Aspects of the present invention provide context menus useful in, e.g., a computing device receiving user input via a stylus. Icons representing actions performable on an object are displayed in a context menu for the object. Additional aspects of the invention include cascading menus that also minimize hand and/or wrist motion, as well as placement of menus based on user handedness and/or stylus orientation.
300 citations
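The handedness-aware menu placement mentioned in the abstract above can be sketched as follows. The function, its parameters, and the clamping policy are assumptions for illustration, not the patent's method:

```python
def menu_position(tap_x, tap_y, menu_w, menu_h, screen_w, screen_h,
                  right_handed=True):
    """Place a context menu beside the tap point so the user's hand
    does not occlude it: open to the left of the tap for right-handed
    users, to the right for left-handed users, clamped to stay
    fully on-screen."""
    x = tap_x - menu_w if right_handed else tap_x
    x = max(0, min(x, screen_w - menu_w))
    y = max(0, min(tap_y, screen_h - menu_h))
    return x, y
```

Stylus orientation (tilt azimuth) could substitute for an explicit handedness setting as the input to the same placement decision.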
•
10 Dec 2003
TL;DR: In this paper, a technique is presented for disambiguating speech input for multimodal systems using a combination of speech and visual I/O interfaces: the user is shown a set of possible matches on a visual display and/or via speech output.
Abstract: A technique is disclosed for disambiguating speech input (202) for multimodal systems by using a combination of speech and visual I/O interfaces. When the user's speech input is not recognized with sufficiently high confidence, the user is presented with a set of possible matches (210) using a visual display and/or speech output. The user then selects (212) the intended input from the list of matches via one or more available input mechanisms (e.g., stylus, buttons, keyboard, mouse, or speech input). These techniques involve the combined use of speech and visual interfaces to correctly identify the user's speech input. The techniques disclosed herein may be utilized in computing devices such as PDAs, cellphones, desktop and laptop computers, tablet PCs, etc.
292 citations
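The confidence-gated flow above can be sketched with a few lines of Python. The function name, threshold value, and callback shape are illustrative assumptions standing in for the recognizer's N-best output and the multimodal selection UI:

```python
def disambiguate(n_best, confidence_threshold=0.8, choose=None):
    """Accept the top hypothesis if its confidence is high enough;
    otherwise fall back to showing the N-best list and letting the
    user pick.

    n_best: list of (text, confidence) pairs sorted best-first.
    choose: callback standing in for the stylus/button/keyboard/
            speech selection UI; it receives the candidate texts
            and returns the one the user chose.
    """
    text, conf = n_best[0]
    if conf >= confidence_threshold:
        return text
    candidates = [t for t, _ in n_best]
    return choose(candidates)
```

In a real system the `choose` step is where the stylus matters: tapping an entry in the displayed match list is typically faster and less error-prone than re-speaking the utterance.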