HAL Id: hal-01609943
https://hal.archives-ouvertes.fr/hal-01609943
Submitted on 4 Oct 2017
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
WhichFingers: Identifying Fingers on Touch Surfaces
and Keyboards using Vibration Sensors
Damien Masson, Alix Goguey, Sylvain Malacria, Géry Casiez
To cite this version:
Damien Masson, Alix Goguey, Sylvain Malacria, Géry Casiez. WhichFingers: Identifying Fingers on Touch Surfaces and Keyboards using Vibration Sensors. UIST 2017 - 30th ACM Symposium on User Interface Software and Technology, Oct 2017, Québec, Canada. 8 p., 10.1145/3126594.3126619. hal-01609943

WhichFingers: Identifying Fingers on Touch Surfaces and
Keyboards using Vibration Sensors
Damien Masson¹, Alix Goguey²,³, Sylvain Malacria³ & Géry Casiez¹
¹Université de Lille, France, ²University of Saskatchewan, Canada, ³Inria, Lille, France
damien.masson@etudiant.univ-lille1.fr, alix.goguey@usask.ca, sylvain.malacria@inria.fr, gery.casiez@univ-lille1.fr
ABSTRACT
HCI researchers lack low-latency and robust systems to support the design and development of interaction techniques using finger identification. We developed a low-cost prototype using piezo-based vibration sensors attached to each finger. By combining the events from an input device with the information from the vibration sensors, we demonstrate how to achieve low-latency and robust finger identification. Our prototype was evaluated in a controlled experiment, using two keyboards and a touchpad, showing single-touch recognition rates of 98.2% for the keyboard and 99.7% for the touchpad, and 94.7% for two simultaneous touches. These results were confirmed in an additional laboratory-style experiment with ecologically valid tasks. Last, we present new interaction techniques made possible using this technology.
Author Keywords
finger identification; touch interaction; vibration sensor.
ACM Classification Keywords
H.5.2 Information interfaces (e.g. HCI): User interfaces
INTRODUCTION AND RELATED WORK
Finger identification, associating specific fingers and hands with touch contacts on a device, is seeing rising interest, given the increased input vocabulary it provides [12, 14, 15, 33]. By executing similar physical actions, but with different fingers, the user can input different commands to the system. However, no technology efficiently supports finger identification yet, and as a result, researchers have explored different methods. Gupta et al. recently used finger identification to improve typing speed on miniature screens using two fingers [15] and to help multitasking on smartphones [14]. Gupta et al. acknowledge they explored multiple techniques to identify index and middle fingers, using optical markers, color markers, Leap Motion, IMU and muscle sensing, before settling on an IR photo-resistor mounted under the index finger, which introduces some constraints. In [33], Zheng and Vogel propose to augment keyboard shortcuts by differentiating which finger presses the trigger key. Their prototype requires the use of a green keyboard cover and laptop skin together with a reflector that directs the webcam light path to the keyboard. After complex image processing, they identify any of the ten fingers and two postures (opened and closed hand) within 77 ms, but the accuracy of their technique was not reported.
Other prototypes developed in the literature are mainly based on contact geometry [1, 5, 9, 18, 29, 31], RGB and depth cameras [4, 11, 13, 17, 24, 30, 33], and fingerprints [16, 27]. Geometry-based techniques need no extra hardware but constrain users' movements, as they require fingers to be put at predefined places [9] or at least three fingers in contact to infer their identity from geometrical relationships [1, 5, 9, 18, 29, 31]. The reliability of these prototypes is seldom reported. Au and Tai achieved a 98.6% finger identification rate when all 5 fingers of the hand are in contact with a surface [1]. Wagner et al. also rely on geometrical features to achieve up to 98.4% accuracy, but recognize only nine 3-finger chords [29]. RGB cameras have been used to recognize specific hand postures involving different fingers [17, 21]. Depth cameras can help to segment the hand from the background [24], but as soon as the hand can interact more freely, fingers need to be equipped with color rings to improve the robustness of detection [11, 13, 30]. Leap Motion has been investigated, but it requires users to first open the hand so that the algorithm identifies the 5 fingers, and then fold the fingers so that the finger selected for interaction is the furthest away from the others [4]. In addition, it does not work well in front of a screen due to high reflection. Success rates have only been reported in [24], achieving 97.7% for 9 hand poses that require holding the hand still for 10 seconds. Overall, camera-based techniques require good lighting conditions, careful camera positioning and pre-calibration, and are subject to occlusions.
Sugiura et al. used a fingerprint scanner to identify single fingers [27]. In Fiberio [16], the authors designed a touchscreen able to identify users by recognizing their fingerprints. They asked participants to lay their index, middle and ring fingers flat on the surface, one after the other, for 400 ms each. The image captured by the touchscreen could be used to correctly identify the finger, and thus the user, in 98.7% of all cases. Pushing the idea further, the system might be able to identify fingers given that each fingerprint is unique, although it requires users to hold a finger flat on the surface for at least 400 ms and takes significant time to process. In addition, it requires specific expensive hardware that is difficult to adapt to mobile scenarios. Finally, it remains unclear whether fingers could be identified reliably when users interact with their fingertip, which exposes only a partial fingerprint.
Several other techniques have been explored for finger identification. For instance, gloves augmented with fiduciary markers on an FTIR touchscreen can be used [22], but this cannot be applied to capacitive surfaces. Gupta et al. used time-of-flight IR cameras attached to two fingers to measure the distance from the screen and achieved a 99.5% success rate for single touches only [15]. Benko et al. used electromyogram sensors placed on the forearm and achieved a 90% success rate for single finger identification among 5 fingers with a 300 ms latency [2]. Other works attached Gametrak strings to each finger to track their 3D positions [10], or used RFID tags attached to each finger with fake fingernails or gloves on custom touchscreens integrating RFID antennas [19, 28].
Fukumoto et al. developed FingeRing [7, 8], a wearable chord keyboard using accelerometers mounted on each finger, with which users can input chord sequences by tapping their fingers on any surface. Performance was not measured, but when reimplementing the technique, we found that the use of accelerometers capped the recognition accuracy due to false positives when moving the fingers.
When interaction techniques are introduced with a system to identify fingers, most of them bind a command to a single finger [2, 4, 14, 22, 27, 28, 30, 33]. Other applications explore the use of common two-finger gestures (e.g. 2-finger swipes) [2, 13]. However, when it comes to chords with more than two fingers, they use sequential construction of the chord (e.g. in [9, 11, 13, 14, 20, 29]), which would be captured as successive identifications of 1-finger chords.
In summary, none of the previously introduced solutions is concurrently robust, low-latency, cross-device and easy to replicate. We argue that finger identification needs to be reliable, fast and able to recognize any finger chord combination. If finger identification is slow, the added latency simply outweighs the benefits of using such technology. If it is unreliable, users are likely to wait for feedback from the system to make sure their fingers are correctly recognized before interacting, slowing down the interaction and reducing the possible benefits of using finger identification. Although we believe that future generations of multi-touch devices will embed a non-invasive, mature and reliable finger identification technology, using capacitive sensing [6, 32] or 3D ultrasound biometric sensors [25], researchers need a robust environment to explore the use of this information and propose useful applications today.
In this paper, we propose WhichFingers, a low-cost device enabling real-time finger identification using piezo-based vibration sensors. Our device is the first real-time solution that requires no calibration and works not only on any touch surface but also on keyboards, and it supports cross-device use. We are also the first to evaluate a finger-identification prototype in realistic and diverse scenarios. After describing the hardware, we present its evaluation demonstrating its robustness. Last, we present new interaction techniques made possible by our device prototype.

Figure 1: Our device prototype (left) using vibration sensors attached to the fingers using elastic rings. Example of stimuli (right) used during the controlled experiment.
HARDWARE DEVICE
The WhichFingers prototype consists of five Minisense 100 vibration sensors [26], one attached to each finger (see Fig. 1, left). The sensors use a flexible PVDF piezoelectric polymer film loaded by a mass to offer high sensitivity for detecting contact vibrations. They produce a voltage as large as 90 V depending on the intensity of the shock or vibration. Our underlying assumption is that the finger that contacts a touch surface or a keyboard key produces a higher response on its sensor compared to the other fingers and can thus be identified.
The five sensors are plugged into a micro-controller through a custom-designed shield that connects each sensor to the ground and to an analog input of the board. A 1 MΩ resistor connected in parallel with each sensor acts as a high-pass filter.
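For intuition (this derivation is ours, not the paper's): a piezo film behaves as a charge source with an internal capacitance C, so the load resistor R forms a first-order high-pass filter. Assuming a film capacitance on the order of 250 pF, a ballpark value the paper does not give:

    % First-order high-pass cutoff formed by the piezo capacitance C
    % and the 1 MOhm load R. C = 250 pF is an assumed ballpark value.
    f_c = \frac{1}{2\pi R C}
        = \frac{1}{2\pi \cdot 1\,\mathrm{M\Omega} \cdot 250\,\mathrm{pF}}
        \approx 640\,\mathrm{Hz}

A cutoff in this range suppresses slow flexing of the film while passing the broadband transient produced by a tap.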
We developed a device prototype using an Arduino Leonardo board that reads the voltage of each sensor and sends the raw values to the host computer at 1000 Hz using Raw HID over USB (Fig. 1). In total, the device costs less than $35.
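The firmware is not listed in the paper; a minimal sketch of the sampling loop might look as follows. The pin assignments are assumed, and plain Serial stands in for the Raw HID transport the prototype actually uses:

    // Hypothetical firmware sketch for the Arduino Leonardo: sample five
    // analog inputs at 1000 Hz and stream the raw values to the host.
    // The real prototype sends Raw HID reports; Serial is a simplified
    // stand-in here.
    const uint8_t SENSOR_PINS[5] = {A0, A1, A2, A3, A4}; // assumed wiring
    const unsigned long PERIOD_US = 1000;                // 1000 Hz

    unsigned long nextSample;

    void setup() {
      Serial.begin(115200);
      nextSample = micros();
    }

    void loop() {
      if ((long)(micros() - nextSample) >= 0) {  // wrap-safe schedule check
        nextSample += PERIOD_US;
        uint16_t values[5];
        for (int i = 0; i < 5; i++)
          values[i] = analogRead(SENSOR_PINS[i]);  // 10-bit reading, 0..1023
        Serial.write((const uint8_t*)values, sizeof(values)); // 10-byte frame
      }
    }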
The device is attached on the back of the hand using a fingerless glove. Sensors are glued to elastic rings. To put on the device, users first attach the fingerless glove and then attach each ring to the corresponding finger. Neither the sensors nor the elastic rings disturb interaction with a touch surface or keyboard, as the finger pulp remains free.
PROCESSING SOFTWARE
The software that processes the data transmitted by the hardware device has two main components: a low-level interaction monitor and a simple signal processing algorithm.

On desktop, the low-level interaction monitor detects touch and key events on the touchpad and keyboard. It was developed in C++ with the Qt framework. It monitors raw touchpad inputs using the I/O Kit and Apple's private multitouch API, and key events using the Quartz Event Services API. On mobile Android devices, the low-level interaction monitor detects touch events using native code. Note that WhichFingers can be used with other touch-based devices, as long as a low-level interaction monitor can be implemented.
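The paper does not publish this interface, but the porting requirement can be made concrete with a minimal sketch; the types and names below are our assumptions, not the authors' code:

    // Hypothetical C++ interface sketched from the description above: any
    // device can feed WhichFingers as long as a platform-specific monitor
    // can deliver timestamped low-level events.
    #include <cstdint>

    struct InputEvent {
      uint64_t timestampMs;  // host time at which the event was reported
      double x;              // contact x position (touch only); unused for keys
      bool isTouch;          // touch contact vs. key press
    };

    class LowLevelMonitor {
    public:
      virtual ~LowLevelMonitor() = default;
      // Invoked for every touch-down or key-down the platform reports;
      // implementations wrap I/O Kit, Quartz Event Services, Android
      // native input, etc.
      virtual void onEvent(const InputEvent& e) = 0;
    };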
To identify which finger has been used to perform an operation, we use a simple algorithm that examines which vibration sensor produced the highest voltage right before the event occurred. As the software receives the values of the vibration sensors at a frequency of 1000 Hz, which is much higher than the frequency at which low-level events are detected (< 120 Hz), the algorithm first retrieves, for an event detected at time t, all the vibration values from time t − 32 ms to t − 8 ms, which was empirically determined as the best timeframe for 1000 Hz. We estimated the latency between the time the finger contacts the touchpad or presses a keyboard key and the time the event is reported in our application to be around 30 ms, while our wired version reports the information to the host computer within 1 ms, which helps explain why using this time window works best [3]. Our algorithm then declares the vibration sensor that produced the highest voltage over the timeframe as the finger that produced the input event. On touch surfaces, if two (or more) contacts occur within less than 30 ms, the two (or more) highest voltages over the overlapping time frames are stored. Fingers are then disambiguated using the x position of the contacts: the leftmost contact is associated with the leftmost finger and the second contact is associated with the remaining finger.
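A compact sketch of this window-maximum rule, as we reconstruct it from the description above (buffer sizes and names are our own, not the authors' code):

    // Reconstruction sketch: keep a short history of 1000 Hz sensor frames
    // and, for an input event at time t, pick the sensor with the highest
    // voltage in the window [t-32 ms, t-8 ms].
    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <deque>
    #include <iterator>

    struct Frame {
      uint64_t tMs;               // host timestamp of the sample
      std::array<uint16_t, 5> v;  // raw values, thumb..little
    };

    class FingerIdentifier {
      std::deque<Frame> history_;  // ~100 ms of samples is plenty
    public:
      void addFrame(const Frame& f) {
        history_.push_back(f);
        while (!history_.empty() && f.tMs - history_.front().tMs > 100)
          history_.pop_front();
      }

      // Returns 0..4 (thumb..little) for an event reported at time tMs.
      int identify(uint64_t tMs) const {
        std::array<uint32_t, 5> peak{};  // max voltage per sensor in window
        for (const Frame& f : history_) {
          if (f.tMs + 32 < tMs || f.tMs + 8 > tMs) continue;  // outside window
          for (int i = 0; i < 5; i++)
            peak[i] = std::max<uint32_t>(peak[i], f.v[i]);
        }
        return (int)std::distance(peak.begin(),
                                  std::max_element(peak.begin(), peak.end()));
      }
    };

For two near-simultaneous contacts, the two largest per-sensor peaks would be kept and assigned to contacts left-to-right by x position, as described above.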
Our processing software handles all touch and key events and
receives the raw HID events from WhichFingers.
CONTROLLED EXPERIMENT
We conducted a controlled experiment targeted at evaluating the robustness of the prototype when participants were asked to type keys on two different keyboards, as well as to contact and perform slides on a touchpad, with different combinations of fingers.
Method, procedures and tasks
In this experiment, we asked 20 participants (mean age 27.7, all right-handed, 2 females, computer science university staff and students), equipped with our wired prototype, to interact with a keyboard or a touchpad in reaction to a visual stimulus. Vibration sensors were positioned on the second phalanx, with the Leonardo board mounted on the back of the hand. The goal was to evaluate performance in the following scenario: no finger is already in contact and our algorithm has to determine which of the five fingers enters in contact. In addition, we included 2-finger chords: tapping with two fingers simultaneously on the touchpad (from the user's perspective) when no fingers are already in contact. Considering our algorithm uses the highest voltage within time windows, simultaneous contacts reduce the chances of correctly identifying fingers.
The participant sat in front of a desktop computer and the prototype was equipped on the hand they reported using to operate a pointing device. They were then invited to interact either with a KEYBOARD or a TOUCHPAD in response to a visual stimulus. The stimulus displayed an image of a hand with a circular overlay on the finger or chord they had to use (Fig. 1, right). It also displayed the name of the finger or chord in plain text. Participants then had to operate the corresponding device with the requested finger or chord. Once the interaction had been performed, there was a 400 ms delay before displaying the next stimulus, to avoid participants' anticipation.
For the keyboard part of the experiment, participants had to perform two different types of operations on the keyboard: a TYPE, which corresponds to typing a key starting with the finger not in contact with the key, and a PUSH, which corresponds to pressing a key with the finger first positioned on it. We also used 2 different keyboards: an Apple Magic Keyboard, which has LOW travel distance keys, and a Hewlett-Packard KU-0316, which has HIGH travel distance keys. Only the 5 individual FINGERS were tested in this part. The participant could press any letter key of the keyboard. If more than one key was pressed, the trial had to be repeated.
For the touchpad part of the experiment, participants had to perform two different types of operations: a TAP, which corresponds to tapping the touchpad, and a SLIDE, which corresponds to performing a quick unidirectional movement in any direction right after the finger enters in contact with the touchpad. There was no requirement on the direction of the movement, so that we had no prior knowledge when doing the recognition. Participants could touch the touchpad at any location, but had to repeat the trial if more than the required number of touches was detected. The 5 individual FINGERS as well as the 10 possible 2-finger CHORDS were tested. Participants used an external Apple Magic Trackpad.
To avoid influencing participants, no particular instructions
on the hand position were given and no feedback informed
the participants on the detected finger(s) after each trial.
For each trial, we logged the expected finger or chord and the actual finger or chord detected by the simple algorithm. We video recorded all the sessions, which we manually annotated with the actual fingers used by the participants. Raw data was displayed over the video for easier gesture labeling.
Design
Half of the participants started with the keyboard part of the experiment, the other half with the touchpad part.

The keyboard part of the experiment used a 2 × 2 × 5 within-subject design with factors keyboard (LOW or HIGH travel distance keys), operation (TYPE or PUSH), and contacts (all 5 FINGERS of the dominant hand). Each combination of these factors was repeated 5 times, for a total of 20 × 2 × 2 × 5 × 5 = 2000 trials across all participants. Orders of keyboard and operation were counter-balanced across participants.
The touchpad part of the experiment used a 2 × 15 within-subject design with factors operation (TAP or SLIDE) and contacts (5 FINGERS + 10 CHORDS)¹. Each combination of these factors was repeated 5 times, resulting in a total of 20 × 2 × 15 × 5 = 3000 trials across all participants. Order of operation was counter-balanced across participants.

The order of contacts was randomized with the only constraint that a contact cannot appear more than twice in a row.

¹ CONTACTS can also be replaced by the number of contacts (1 or 2 fingers), which can also be considered as a factor.
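The paper does not say how this constrained order was generated; one simple way to satisfy the constraint is a rejection-style shuffle, a sketch of which follows (our code, not the authors'):

    // Sketch: randomize the trial order such that no contact appears more
    // than twice in a row. Shuffle, then re-shuffle the whole list whenever
    // three identical contacts end up adjacent.
    #include <algorithm>
    #include <random>
    #include <vector>

    std::vector<int> constrainedOrder(std::vector<int> trials,
                                      std::mt19937& rng) {
      for (bool ok = false; !ok; ) {
        std::shuffle(trials.begin(), trials.end(), rng);
        ok = true;
        for (size_t i = 2; i < trials.size(); i++)
          if (trials[i] == trials[i-1] && trials[i] == trials[i-2]) {
            ok = false;
            break;
          }
      }
      return trials;
    }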
Keyboard        All operations     TYPE               PUSH
All keyboards   98.2% (1965/2000)  98.3% (983/1000)   98.2% (982/1000)
LOW             99.1% (991/1000)   98.4% (492/500)    99.8% (499/500)
HIGH            97.4% (974/1000)   98.2% (491/500)    96.6% (483/500)

Touchpad        All operations     TAP                SLIDE
CONTACTS        96.3% (2890/3000)  96.9% (1454/1500)  95.7% (1436/1500)
FINGERS         99.7% (997/1000)   99.6% (498/500)    99.8% (499/500)
CHORDS/SETS     94.7% (1893/2000)  95.6% (956/1000)   93.7% (937/1000)

Table 1: Detailed recognition rates of WhichFingers for the keyboards and touchpad.
Results
In the subsequent analysis, we used SPSS for the ANOVAs. Mauchly tests indicated the assumption of sphericity was not violated in any of our analyses.
For the keyboard part, a repeated measures ANOVA only found a significant main effect of keyboard (F(1,19) = 7.5, p < 0.02, ηp² = 0.28) on recognition rate. Overall, WhichFingers successfully identified 99.1% of fingers on the LOW keyboard and 97.4% on the HIGH keyboard (see Table 1 for details).
For the touchpad part, a repeated measures ANOVA only found significant main effects of contacts (F(14,266) = 8.3, p < 0.0001, ηp² = 0.30) and number of contacts (F(1,19) = 30.4, p < 0.001, ηp² = 0.62) on recognition rate. Overall, WhichFingers successfully identified 96.3% of the contacts: 99.7% of the FINGERS, 94.7% of the CHORDS (right set of fingers and correct contact-finger assignment) and 94.7% of the SETS (right set of fingers and wrong contact-finger assignment). Pairwise comparisons between the different contacts showed many significant differences between them (significant differences are highlighted in Fig. 2). The Middle-Little chord (83%) performs significantly worse than all other chords except Thumb-Little. Thumb-Little, with 90.5%, is the second worst chord and performs significantly worse than the other ones except Index-Little, Middle-Little and Thumb-Ring.
EXPERIMENT WITHOUT CONSTRAINT ON FINGERS
In the first experiment, we systematically investigated the recognition rates of fingers and chords. We also wanted to evaluate WhichFingers in tasks representative of daily activities on a desktop computer, while not constraining the fingers or chords used by the participants.
Method
In this experiment, we asked 12 participants (mean age 28.3, all right-handed, 1 female, computer science university staff and students) to complete 4 tasks with either the touchpad or the keyboard on a desktop computer. See also the accompanying video for task demonstrations. Participants used the same trackpad as in the first experiment and the Apple Magic Keyboard.
Figure 2: Mean success rates and 95% CI for the touchpad per contact. TAP and SLIDE are merged. The horizontal lines represent the significant differences between contacts: for a given line, the square contact is significantly different from the dot contacts.
Touchpad. Participants were equipped with the device on their dominant hand and had to perform 3 distinct tasks using the touchpad: a docking task, a scrolling task and a pointing task. For the docking task, the interface displayed a geometric shape that the participant had to scale, rotate and position in its similarly shaped dock by only using gestures on the touchpad. The transformation initially applied to the shape was generated randomly. The only available operations were 2-finger rotate gestures for rotating the shape, 2-finger pinch-and-expand gestures for scaling the shape, and dragging for positioning the shape. For the scrolling task, the interface displayed at the top of a window a target word that the user had to acquire. The target was contained in a 4254-line, alphabetically-ordered list that was displayed in a 29-line-high viewport. The participants had to scroll the list using 2-finger scrolling gestures only and acquired the target by clicking on it. For the pointing task, participants had to click on a 1.4 cm wide circular target randomly positioned 28.2 cm away. After selection, the participant had to lift all fingers from the touchpad to display the next target. Each of these tasks consisted of 12 trials. In total, these tasks lasted about 6 minutes. During these tasks, we collected the data retrieved from the device, as well as the current task and all the low-level inputs performed on the touchpad (contact point information, button clicks).
Keyboard. Participants were equipped with the device on their left hand and had to perform a text entry and formatting task using the keyboard only, in Apple Pages with its toolbar hidden. The view was configured in full screen, with the leftmost part displaying Apple Pages and the rightmost part displaying a non-editable version of the text participants had to type and format, as well as the formatting command names and their corresponding keyboard shortcuts below the text, so participants did not have to learn and memorize them. Participants had to use the keyboard only. Text had to be selected using a combination of the shift modifier and the arrow keys. Commands had to be selected using their corresponding keyboard shortcuts, which had been modified so participants could perform them using the left hand only. They were allowed to use both hands for typing text and selecting commands. We equipped the wired version of the device on participants' left hands because we expected participants to issue keyboard

Frequently Asked Questions
Q1. What are the contributions in "WhichFingers: Identifying Fingers on Touch Surfaces and Keyboards using Vibration Sensors"?

By combining the events from an input device with the information from the vibration sensors, the authors demonstrate how to achieve low-latency and robust finger identification. Last, the authors present new interaction techniques made possible using this technology.

Scrolling long documents is often performed with two fingers on a touchpad and can result in numerous motor actions in order to position the viewport on a given page.

Synchronization between the device and the host computer is a critical factor, as the success rate can be affected by jitter on the latency of the communication.

WhichFingers is mainly targeted at HCI researchers who want to explore interaction and conduct performance studies with techniques leveraging finger identification.

A common feature in current OSes is to resize a window to one half of the available screen real estate (most of the time by dragging it to one side of the screen).

The user can first roughly position the cursor at the desired position using absolute pointing, before using relative pointing to precisely select a target.

For the second experiment, where participants mainly used chords they are used to performing (e.g. thumb+index, index+middle), the lower success rates can be explained by the use of rotational movements in the experiment: when the time elapsed between two contacts is less than 30 ms, their algorithm uses the x position of the contacts, which can be oversimplistic in such situations, for instance when a user starts a rotate gesture with inverted thumb and index fingers in order to increase the range of movement.

Although WhichFingers is easily reproducible and software examples are available, the current version of the system still requires designers to implement ad-hoc applications that fuse low-level data from both their device and the operating system events (e.g. key events or touch events) and tune the timeframe window parameters.

The authors logged a total of 3,346 gestures on the touchpad (1,419 one-finger contacts and 1,927 two-finger contacts) and 3,017 on the keyboard (2,557 one-key presses and 460 two-key presses).

Their prototype could be improved by using less rigid wires to prevent such problems, or by moving the sensor to the third phalanx near the palm instead of the second phalanx.

The authors propose to use the index and middle fingers for relative scrolling and the middle and ring fingers for absolute scrolling, by mapping the whole document to the touchpad height.
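As an illustration of that last mapping (the paper's implementation is not shown; all names here are hypothetical), the dual-mode scrolling technique reduces to a dispatch on the identified chord:

    // Hypothetical sketch of the dual-mode scrolling technique: drags with
    // index+middle scroll relatively, drags with middle+ring scroll
    // absolutely by mapping the whole document to the touchpad height.
    enum Finger { Thumb, Index, Middle, Ring, Little };

    struct DualModeScroller {
      double docHeight;   // total scrollable document height
      double padHeight;   // physical touchpad height (same units as yOnPad)
      double offset = 0;  // current scroll offset into the document

      // Called for each two-finger drag sample with the identified fingers.
      void onTwoFingerDrag(Finger a, Finger b, double dy, double yOnPad) {
        if (a == Index && b == Middle) {
          offset += dy;                                // relative scrolling
        } else if (a == Middle && b == Ring) {
          offset = (yOnPad / padHeight) * docHeight;   // absolute scrolling
        }
      }
    };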