Author

J. Unwin

Bio: J. Unwin is an academic researcher from the University of Pennsylvania. The author has contributed to research on the topics of haptic technology and Delaunay triangulation, has an h-index of 1, and has co-authored one publication receiving 54 citations.

Papers
Proceedings ArticleDOI
14 Apr 2013
TL;DR: Presents a new method for creating haptic texture models from data recorded during natural, unconstrained motions with a new haptic recording device, along with a new spectral metric for judging the perceptual match of the models, used to evaluate the effectiveness and consistency of the segmenting and modeling approach.
Abstract: If you pick up a tool and drag its tip across a table, a rock, or a swatch of fabric, you are able to feel variations in the textures even though you are not directly touching them. These vibrations are characteristic of the material and the motions made when interacting with the surface. This paper presents a new method for creating haptic texture models from data recorded during natural and unconstrained motions using a new haptic recording device. The recorded vibration data is parsed into short segments that represent the feel of the surface at the associated tool force and speed. We create a low-order auto-regressive (AR) model for each data segment and construct a Delaunay triangulation of models in force-speed space for each surface. During texture rendering, we stably interpolate between these models using barycentric coordinates and drive the interpolated model with white noise to output synthetic vibrations. Our methods were validated through application to data recorded by eight human subjects and the experimenter interacting with six textures. We present a new spectral metric for determining perceptual match of the models in order to evaluate the effectiveness and consistency of the segmenting and modeling approach. Multidimensional scaling (MDS) on the pairwise differences in the synthesized vibrations shows that the 54 created texture models cluster by texture in a two-dimensional perceptual space.
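The core rendering step in this abstract is the interpolation of AR models stored in a Delaunay triangulation over force-speed space. The following is a minimal sketch of that step, not the authors' code: the `ARModel` class, the direct blending of AR coefficients, and the helper names are illustrative assumptions (the paper's "stable" interpolation may use a different model parameterization).

```python
# Sketch: barycentric interpolation of AR texture models in force-speed
# space, with the blended model driven by white noise (illustrative only).
import numpy as np
from scipy.spatial import Delaunay

class ARModel:
    def __init__(self, coeffs, noise_var):
        self.coeffs = np.asarray(coeffs)   # a_1..a_p of an order-p AR model
        self.noise_var = noise_var         # variance of the driving white noise

def barycentric_weights(tri, point):
    """Vertex indices and barycentric coordinates of `point` in its simplex."""
    s = int(tri.find_simplex(np.asarray(point)))
    if s < 0:
        raise ValueError("query outside the convex hull of recorded models")
    T = tri.transform[s]                   # affine map to barycentric coords
    b = T[:2].dot(np.asarray(point) - T[2])
    return tri.simplices[s], np.append(b, 1.0 - b.sum())

def synthesize(models, tri, force, speed, n_samples, rng=np.random.default_rng()):
    """Blend the three enclosing AR models and excite them with white noise."""
    verts, w = barycentric_weights(tri, (force, speed))
    coeffs = sum(wi * models[v].coeffs for v, wi in zip(verts, w))
    var = sum(wi * models[v].noise_var for v, wi in zip(verts, w))
    p = len(coeffs)
    out = np.zeros(n_samples + p)
    for n in range(p, len(out)):
        # AR recursion: x[n] = sum_k a_k * x[n-k] + white noise
        out[n] = coeffs @ out[n - p:n][::-1] + rng.normal(0.0, np.sqrt(var))
    return out[p:]
```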

61 citations


Cited by
Proceedings ArticleDOI
19 Apr 2018
TL;DR: Haptic Revolver is a handheld virtual reality controller that renders fingertip haptics for virtual surfaces using an actuated wheel that raises and lowers beneath the finger to simulate contact.
Abstract: We present Haptic Revolver, a handheld virtual reality controller that renders fingertip haptics when interacting with virtual surfaces. Haptic Revolver's core haptic element is an actuated wheel that raises and lowers underneath the finger to render contact with a virtual surface. As the user's finger moves along the surface of an object, the controller spins the wheel to render shear forces and motion under the fingertip. The wheel is interchangeable and can contain physical textures, shapes, edges, or active elements to provide different sensations to the user. Because the controller is spatially tracked, these physical features can be spatially registered with the geometry of the virtual environment and rendered on-demand. We evaluated Haptic Revolver in two studies to understand how wheel speed and direction impact perceived realism. We also report qualitative feedback from users who explored three application scenarios with our controller.
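The control idea described here, raising the wheel on virtual contact and spinning it so its rim motion matches the finger's tangential motion, can be sketched as below. This is not Haptic Revolver's implementation; the wheel radius, the proportional mapping, and all names are assumptions for illustration.

```python
# Illustrative sketch: command the wheel so its surface velocity matches the
# finger's tangential velocity along the wheel's rolling direction.
import numpy as np

WHEEL_RADIUS_M = 0.015  # hypothetical wheel radius

def wheel_commands(finger_vel, surface_normal, wheel_axis, in_contact):
    """Return (raise_wheel, angular_velocity_rad_s) for one control tick."""
    if not in_contact:
        return False, 0.0
    # Rolling direction is perpendicular to both wheel axis and surface normal.
    rolling_dir = np.cross(wheel_axis, surface_normal)
    rolling_dir /= np.linalg.norm(rolling_dir)
    v_tangential = float(np.dot(finger_vel, rolling_dir))
    # Match the wheel rim's surface speed to the finger's tangential speed.
    return True, v_tangential / WHEEL_RADIUS_M

raise_wheel, omega = wheel_commands(
    finger_vel=np.array([0.05, 0.0, 0.0]),      # finger moving 5 cm/s along +x
    surface_normal=np.array([0.0, 0.0, 1.0]),
    wheel_axis=np.array([0.0, 1.0, 0.0]),
    in_contact=True)
```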

180 citations

Journal ArticleDOI
TL;DR: This paper presents a set of methods for creating a haptic texture model from tool-surface interaction data recorded by a human in a natural and unconstrained manner, and uses these texture model sets to render synthetic vibration signals in real time as a user interacts with the TexturePad system.
Abstract: Texture gives real objects an important perceptual dimension that is largely missing from virtual haptic interactions due to limitations of standard modeling and rendering approaches. This paper presents a set of methods for creating a haptic texture model from tool-surface interaction data recorded by a human in a natural and unconstrained manner. The recorded high-frequency tool acceleration signal, which varies as a function of normal force and scanning speed, is segmented and modeled as a piecewise autoregressive (AR) model. Each AR model is labeled with the source segment's median force and speed values and stored in a Delaunay triangulation to create a model set for a given texture. We use these texture model sets to render synthetic vibration signals in real time as a user interacts with our TexturePad system, which includes a Wacom tablet and a stylus augmented with a Haptuator. We ran a human-subject study with two sets of ten participants to evaluate the realism of our virtual textures and the strengths and weaknesses of this approach. The results indicated that our virtual textures accurately capture and recreate the roughness of real textures, but other modeling and rendering approaches are required to completely match surface hardness and slipperiness.
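The modeling side of this pipeline, fitting a piecewise AR model per segment, labeling it with the segment's median force and speed, and storing the labels in a Delaunay triangulation, can be sketched as follows. The Yule-Walker fit and the AR order are illustrative assumptions, not the paper's exact choices.

```python
# Sketch: per-segment AR fitting and Delaunay storage of texture models.
import numpy as np
from scipy.linalg import toeplitz
from scipy.spatial import Delaunay

def fit_ar(accel_segment, order=30):
    """Yule-Walker fit: AR coefficients and driving-noise variance."""
    x = accel_segment - accel_segment.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocorrelation
    a = np.linalg.solve(toeplitz(r[:order]), r[1:order + 1])
    return a, r[0] - a @ r[1:order + 1]

def build_model_set(segments):
    """segments: list of (accel, forces, speeds) arrays for one texture."""
    labels, models = [], []
    for accel, forces, speeds in segments:
        labels.append((np.median(forces), np.median(speeds)))  # segment label
        models.append(fit_ar(accel))
    return Delaunay(np.asarray(labels)), models
```

At render time, the triangulation built here is queried with the user's instantaneous force and speed, as in the interpolation sketch above.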

140 citations

Journal ArticleDOI
TL;DR: The proposed subset of six features, selected from the described sound, image, friction-force, and acceleration features, achieves a classification accuracy of 74 percent in the authors' experiments when combined with a Naive Bayes classifier.
Abstract: When a tool is tapped on or dragged over an object surface, vibrations are induced in the tool, which can be captured using acceleration sensors. The tool-surface interaction additionally creates audible sound waves, which can be recorded using microphones. Features extracted from camera images provide additional information about the surfaces. We present an approach for tool-mediated surface classification that combines these signals and demonstrate that the proposed method is robust against variable scan-time parameters. We examine freehand recordings of 69 textured surfaces recorded by different users and propose a classification system that uses perception-related features, such as hardness, roughness, and friction; selected features adapted from speech recognition, such as modified cepstral coefficients applied to our acceleration signals; and surface texture-related image features. We focus on mitigating the effect of variable contact force and exploration velocity conditions on these features as a prerequisite for a robust machine-learning-based approach for surface classification. The proposed system works without explicit scan force and velocity measurements. Experimental results show that our proposed approach allows for successful classification of textured surfaces under variable freehand movement conditions, exerted by different human operators. The proposed subset of six features, selected from the described sound, image, friction force, and acceleration features, leads to a classification accuracy of 74 percent in our experiments when combined with a Naive Bayes classifier.
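The classification stage described here pairs a six-dimensional feature vector with a Naive Bayes classifier. A minimal sketch follows; the feature extraction itself (hardness, roughness, friction, cepstral, and image features) is the paper's contribution and is replaced with placeholder data here.

```python
# Sketch: Gaussian Naive Bayes over six features per recording, with
# cross-validated accuracy. The random data stands in for real features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def surface_classification_accuracy(features, labels):
    """features: (n_recordings, 6) array; labels: surface IDs."""
    return cross_val_score(GaussianNB(), features, labels, cv=10).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(690, 6))          # placeholder for extracted features
y = np.repeat(np.arange(69), 10)       # 69 surfaces, 10 recordings each
print(f"mean CV accuracy: {surface_classification_accuracy(X, y):.2f}")
# Near chance (~1/69) on random features; the paper reports 74% on real ones.
```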

112 citations

Journal ArticleDOI
01 Feb 2019
TL;DR: In this article, the authors present the fundamentals and state of the art in haptic codec design for the Tactile Internet and discuss how limitations of the human haptic perception system can be exploited for efficient perceptual coding of kinesthetic and tactile information.
Abstract: The Tactile Internet will enable users to physically explore remote environments and to make their skills available across distances. An important technological aspect in this context is the acquisition, compression, transmission, and display of haptic information. In this paper, we present the fundamentals and state of the art in haptic codec design for the Tactile Internet. The discussion covers both kinesthetic data reduction and tactile signal compression approaches. We put a special focus on how limitations of the human haptic perception system can be exploited for efficient perceptual coding of kinesthetic and tactile information. Further aspects addressed in this paper are the multiplexing of audio and video with haptic information and the quality evaluation of haptic communication solutions. Finally, we describe the current status of the ongoing IEEE standardization activity P1918.1.1 which has the ambition to standardize the first set of codecs for kinesthetic and tactile information exchange across communication networks.
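One way limitations of human haptic perception are exploited for kinesthetic data reduction in this literature is perceptual deadband coding: a sample is transmitted only when it differs from the last transmitted value by more than a just-noticeable difference, modeled as a Weber fraction. The sketch below illustrates the idea; the 10 percent threshold is an assumption, not a value from this paper.

```python
# Sketch: perceptual deadband coding for a 1-D kinesthetic signal.
import numpy as np

def deadband_encode(samples, weber_fraction=0.10):
    """Return (indices, values) of samples that exceed the perceptual deadband."""
    kept_i, kept_v = [0], [samples[0]]
    for i, x in enumerate(samples[1:], start=1):
        ref = kept_v[-1]
        # Transmit only if the change is perceptually noticeable.
        if abs(x - ref) > weber_fraction * abs(ref):
            kept_i.append(i)
            kept_v.append(x)
    return np.array(kept_i), np.array(kept_v)

forces = 5.0 + 0.2 * np.sin(np.linspace(0, 20, 1000))  # slowly varying force
idx, vals = deadband_encode(forces)
print(f"transmitted {len(idx)} of {len(forces)} samples")
```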

104 citations

Proceedings ArticleDOI
20 Mar 2014
TL;DR: Introduces the Penn Haptic Texture Toolkit (HaTT) and a method for resampling its texture models so they can be rendered at sampling rates other than the 10 kHz used when recording data, increasing the toolkit's adaptability and utility.
Abstract: This paper introduces the Penn Haptic Texture Toolkit (HaTT), a publicly available repository of haptic texture models for use by the research community. HaTT includes 100 haptic texture and friction models, the recorded data from which the models were made, images of the textures, and the code and methods necessary to render these textures using an impedance-type haptic interface such as a SensAble Phantom Omni. This paper reviews our previously developed methods for modeling haptic virtual textures, describes our technique for modeling Coulomb friction between a tooltip and a surface, discusses the adaptation of our rendering methods for display using an impedance-type haptic device, and provides an overview of the information included in the toolkit. Each texture and friction model was based on a ten-second recording of the force, speed, and high-frequency acceleration experienced by a handheld tool moved by an experimenter against the surface in a natural manner. We modeled each texture's recorded acceleration signal as a piecewise autoregressive (AR) process and stored the individual AR models in a Delaunay triangulation as a function of the force and speed used when recording the data. To increase the adaptability and utility of HaTT, we developed a method for resampling the texture models so they can be rendered at a sampling rate other than the 10 kHz used when recording data. Measurements of the user's instantaneous normal force and tangential speed are used to synthesize texture vibrations in real time. These vibrations are transformed into a texture force vector that is added to the friction and normal force vectors for display to the user.
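The per-tick force combination described at the end of this abstract, normal force plus Coulomb friction plus a texture force derived from the synthesized vibration, can be sketched as below. The friction coefficient, the `next_vibration_sample` stand-in for the AR synthesis, and the choice to apply the texture force along the surface normal are illustrative assumptions, not HaTT's exact rendering code.

```python
# Sketch: combining normal, friction, and texture forces for one servo tick.
import numpy as np

def render_tick(f_normal_mag, v_tangential, surface_normal, mu,
                next_vibration_sample):
    n = surface_normal / np.linalg.norm(surface_normal)
    f_normal = f_normal_mag * n
    speed = np.linalg.norm(v_tangential)
    # Coulomb friction opposes the tangential motion of the tooltip.
    f_friction = (-mu * f_normal_mag * v_tangential / speed
                  if speed > 1e-9 else np.zeros(3))
    # Texture vibration synthesized from the interpolated AR model,
    # driven by the user's instantaneous normal force and tangential speed.
    a = next_vibration_sample(f_normal_mag, speed)   # scalar vibration sample
    f_texture = a * n                                # illustrative direction
    return f_normal + f_friction + f_texture
```

In HaTT itself, the vibration sample comes from the piecewise AR models stored in the Delaunay triangulation, driven in real time by the measured force and speed as the abstract describes.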

101 citations