Proceedings ArticleDOI

One hundred data-driven haptic texture models and open-source methods for rendering on 3D objects

TL;DR: Introduces the Penn Haptic Texture Toolkit (HaTT) and a method for resampling its texture models so they can be rendered at sampling rates other than the 10 kHz used when recording the data, increasing the adaptability and utility of HaTT.
Abstract: This paper introduces the Penn Haptic Texture Toolkit (HaTT), a publicly available repository of haptic texture models for use by the research community. HaTT includes 100 haptic texture and friction models, the recorded data from which the models were made, images of the textures, and the code and methods necessary to render these textures using an impedance-type haptic interface such as a SensAble Phantom Omni. This paper reviews our previously developed methods for modeling haptic virtual textures, describes our technique for modeling Coulomb friction between a tooltip and a surface, discusses the adaptation of our rendering methods for display using an impedance-type haptic device, and provides an overview of the information included in the toolkit. Each texture and friction model was based on a ten-second recording of the force, speed, and high-frequency acceleration experienced by a handheld tool moved by an experimenter against the surface in a natural manner. We modeled each texture's recorded acceleration signal as a piecewise autoregressive (AR) process and stored the individual AR models in a Delaunay triangulation as a function of the force and speed used when recording the data. To increase the adaptability and utility of HaTT, we developed a method for resampling the texture models so they can be rendered at a sampling rate other than the 10 kHz used when recording data. Measurements of the user's instantaneous normal force and tangential speed are used to synthesize texture vibrations in real time. These vibrations are transformed into a texture force vector that is added to the friction and normal force vectors for display to the user.
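The synthesis step described above, driving a stored AR model with a white-noise excitation signal to produce the texture vibration, can be sketched as follows. This is a minimal illustration: the AR coefficients and noise level are placeholder values, not parameters from the HaTT models.

```python
import numpy as np

def synthesize_ar_vibration(coeffs, noise_std, n_samples, seed=None):
    """Drive an autoregressive (AR) model with white-noise excitation to
    produce a synthetic vibration signal.  coeffs[0] weights the most
    recent past sample.  Coefficients here are illustrative, not HaTT data."""
    rng = np.random.default_rng(seed)
    p = len(coeffs)
    x = np.zeros(n_samples + p)       # leading zeros serve as initial history
    e = rng.normal(0.0, noise_std, n_samples)
    for t in range(n_samples):
        past = x[t:t + p][::-1]       # most recent sample first
        x[t + p] = np.dot(coeffs, past) + e[t]
    return x[p:]

# Stable AR(2) example: complex poles with modulus sqrt(0.7) < 1.
vib = synthesize_ar_vibration([1.5, -0.7], noise_std=0.01, n_samples=1000, seed=0)
```

In the toolkit itself the model (and hence the coefficients) changes continuously with the measured force and speed; this sketch shows only a single fixed model.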
Citations
Proceedings ArticleDOI
02 May 2017
TL;DR: This work explores how to add haptics to walls and other heavy objects in virtual reality: when the user tries to push such an object, electrical muscle stimulation creates a counter force that pulls the user's arm backwards, all in a wearable form factor.
Abstract: We explore how to add haptics to walls and other heavy objects in virtual reality. When a user tries to push such an object, our system actuates the user's shoulder, arm, and wrist muscles by means of electrical muscle stimulation, creating a counter force that pulls the user's arm backwards. Our device accomplishes this in a wearable form factor. In our first user study, participants wearing a head-mounted display interacted with objects provided with different types of EMS effects. The repulsion design (visualized as an electrical field) and the soft design (visualized as a magnetic field) received high scores on "prevented me from passing through" as well as "realistic". In a second study, we demonstrate the effectiveness of our approach by letting participants explore a virtual world in which all objects provide haptic EMS effects, including walls, gates, sliders, boxes, and projectiles.

221 citations


Cites background from "One hundred data-driven haptic text..."

  • ...It allows conveying the texture of objects [8]....


  • ...There has been a good amount of progress towards simulating the haptic qualities of lightweight objects, such as contact with surfaces [17] or textures [8]....


Journal ArticleDOI
TL;DR: This paper presents a set of methods for creating a haptic texture model from tool-surface interaction data recorded by a human in a natural and unconstrained manner and uses these texture model sets to render synthetic vibration signals in real time as a user interacts with the TexturePad system.
Abstract: Texture gives real objects an important perceptual dimension that is largely missing from virtual haptic interactions due to limitations of standard modeling and rendering approaches. This paper presents a set of methods for creating a haptic texture model from tool-surface interaction data recorded by a human in a natural and unconstrained manner. The recorded high-frequency tool acceleration signal, which varies as a function of normal force and scanning speed, is segmented and modeled as a piecewise autoregressive (AR) model. Each AR model is labeled with the source segment's median force and speed values and stored in a Delaunay triangulation to create a model set for a given texture. We use these texture model sets to render synthetic vibration signals in real time as a user interacts with our TexturePad system, which includes a Wacom tablet and a stylus augmented with a Haptuator. We ran a human-subject study with two sets of ten participants to evaluate the realism of our virtual textures and the strengths and weaknesses of this approach. The results indicated that our virtual textures accurately capture and recreate the roughness of real textures, but other modeling and rendering approaches are required to completely match surface hardness and slipperiness.
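The Delaunay-based model storage described in both this paper and HaTT can be illustrated with SciPy: model labels sit at (force, speed) vertices, and a query point blends the enclosing triangle's models with barycentric weights. The vertex coordinates and the single scalar parameter interpolated below are hypothetical stand-ins for a full AR model set.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical (force [N], speed [mm/s]) labels for stored models, and a
# scalar model parameter at each vertex (e.g. noise variance); not real data.
points = np.array([[0.5, 20.0], [1.0, 60.0], [2.0, 30.0], [1.5, 90.0]])
params = np.array([0.01, 0.03, 0.02, 0.05])

tri = Delaunay(points)

def interpolate_param(force, speed):
    """Locate the triangle containing (force, speed) and blend the
    vertex parameters with barycentric weights."""
    q = np.array([force, speed])
    simplex = int(tri.find_simplex(q))
    if simplex < 0:
        # Outside the convex hull: fall back to the nearest stored model.
        return float(params[np.argmin(np.linalg.norm(points - q, axis=1))])
    verts = tri.simplices[simplex]
    b = tri.transform[simplex, :2] @ (q - tri.transform[simplex, 2])
    w = np.append(b, 1.0 - b.sum())   # barycentric weights, sum to 1
    return float(w @ params[verts])
```

In the actual rendering methods the interpolation is applied to the AR model representation rather than a single scalar, but the triangulation lookup is the same idea.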

140 citations

Journal ArticleDOI
TL;DR: The proposed subset of six features, selected from the described sound, image, friction force, and acceleration features, leads to a classification accuracy of 74 percent in the authors' experiments when combined with a Naive Bayes classifier.
Abstract: When a tool is tapped on or dragged over an object surface, vibrations are induced in the tool, which can be captured using acceleration sensors. The tool-surface interaction additionally creates audible sound waves, which can be recorded using microphones. Features extracted from camera images provide additional information about the surfaces. We present an approach for tool-mediated surface classification that combines these signals and demonstrate that the proposed method is robust against variable scan-time parameters. We examine freehand recordings of 69 textured surfaces recorded by different users and propose a classification system that uses perception-related features, such as hardness, roughness, and friction; selected features adapted from speech recognition, such as modified cepstral coefficients applied to our acceleration signals; and surface texture-related image features. We focus on mitigating the effect of variable contact force and exploration velocity conditions on these features as a prerequisite for a robust machine-learning-based approach for surface classification. The proposed system works without explicit scan force and velocity measurements. Experimental results show that our proposed approach allows for successful classification of textured surfaces under variable freehand movement conditions, exerted by different human operators. The proposed subset of six features, selected from the described sound, image, friction force, and acceleration features, leads to a classification accuracy of 74 percent in our experiments when combined with a Naive Bayes classifier.
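As a rough sketch of the final classification stage, a Gaussian Naive Bayes classifier can be written directly in NumPy. The six features below are synthetic stand-ins for the paper's selected sound, image, friction-force, and acceleration features; real recordings would of course behave differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for six perception-related features across 3 surfaces.
n_per_class, n_features, n_classes = 50, 6, 3
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Gaussian Naive Bayes: per class, model each feature as an independent Gaussian.
means = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
vars_ = np.array([X[y == c].var(axis=0) for c in range(n_classes)])

def predict(samples):
    # Log-likelihood of each sample under each class-conditional Gaussian;
    # uniform priors, independence across features (the "naive" assumption).
    ll = -0.5 * (np.log(2 * np.pi * vars_)[None]
                 + (samples[:, None, :] - means[None]) ** 2 / vars_[None]).sum(-1)
    return ll.argmax(axis=1)

acc = (predict(X) == y).mean()
```

The paper's 74 percent accuracy refers to 69 real surfaces under variable freehand conditions; this toy example is only meant to show the classifier's structure.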

112 citations


Cites methods from "One hundred data-driven haptic text..."

  • ...We decided not to rely on the publicly available and extensive haptic database in [20], as it contains only acceleration data recorded while a tool moved across different surfaces....


Journal ArticleDOI
01 Feb 2019
TL;DR: In this article, the authors present the fundamentals and state of the art in haptic codec design for the Tactile Internet and discuss how limitations of the human haptic perception system can be exploited for efficient perceptual coding of kinesthetic and tactile information.
Abstract: The Tactile Internet will enable users to physically explore remote environments and to make their skills available across distances. An important technological aspect in this context is the acquisition, compression, transmission, and display of haptic information. In this paper, we present the fundamentals and state of the art in haptic codec design for the Tactile Internet. The discussion covers both kinesthetic data reduction and tactile signal compression approaches. We put a special focus on how limitations of the human haptic perception system can be exploited for efficient perceptual coding of kinesthetic and tactile information. Further aspects addressed in this paper are the multiplexing of audio and video with haptic information and the quality evaluation of haptic communication solutions. Finally, we describe the current status of the ongoing IEEE standardization activity P1918.1.1 which has the ambition to standardize the first set of codecs for kinesthetic and tactile information exchange across communication networks.
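A core kinesthetic data-reduction idea covered by such codecs is perceptual deadband coding: a sample is transmitted only when it deviates from the last transmitted value by more than a Weber-fraction threshold, exploiting the limited resolution of human haptic perception. A minimal one-dimensional sketch, with an illustrative deadband parameter k:

```python
def deadband_compress(samples, k=0.1):
    """Transmit a sample only when it deviates from the last transmitted
    value by more than a perceptual deadband (Weber fraction k).
    Returns (index, value) pairs for the transmitted packets."""
    sent = []
    last = None
    for i, s in enumerate(samples):
        if last is None or abs(s - last) > k * abs(last):
            sent.append((i, s))
            last = s
    return sent

# 6 force samples reduced to 3 packets; untransmitted samples would be
# reconstructed at the receiver by holding (or predicting) the last value.
packets = deadband_compress([1.0, 1.05, 1.08, 1.3, 1.32, 1.0], k=0.1)
```

Practical codecs extend this with predictive models, multi-dimensional deadzones, and handling of the stability issues that arise over delayed networks, all discussed in the paper.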

104 citations

Journal ArticleDOI
01 Apr 2019
TL;DR: This paper surveys the paradigm shifts in haptic display over the past 30 years, classified into three stages (desktop haptics, surface haptics, and wearable haptics), and addresses the importance of understanding human haptic perception for designing effective haptic devices.
Abstract: Immersion, interaction, and imagination are three features of virtual reality (VR). Existing VR systems offer fairly realistic visual and auditory feedback but remain poor at haptic feedback, through which humans perceive the abundant physical properties of the real world. Haptic display is an interface that enables bilateral signal communication between human and computer, greatly enhancing the immersion and interaction of VR systems. This paper surveys the paradigm shifts in haptic display over the past 30 years, classified into three stages: desktop haptics, surface haptics, and wearable haptics. The driving forces, key technologies, and typical applications of each stage are critically reviewed. Looking toward future high-fidelity VR interaction, research challenges are highlighted concerning handheld haptic devices, multimodal haptic devices, and high-fidelity haptic rendering. Finally, the importance of understanding human haptic perception for designing effective haptic devices is addressed.

98 citations

References
Journal ArticleDOI
01 May 1971

7,355 citations


"One hundred data-driven haptic text..." refers methods in this paper

  • ...We model the acceleration signal using a fundamental model structure from time series analysis and speech processing [3]....


Book ChapterDOI
01 Jan 2011
TL;DR: This chapter discusses the Boost C++ API, a peer-reviewed C++ class library which implements many interesting and useful data structures and algorithms, and the use of Boost smart pointers, Boost asynchronous IO, and IO Streams.
Abstract: In this chapter we discuss the Boost C++ API. Boost is a peer-reviewed C++ class library which implements many interesting and useful data structures and algorithms. In particular we discuss the use of Boost smart pointers, Boost asynchronous IO, and IO Streams. Boost also implements many data structures which are not present in the C++ standard library (e.g. bimap). Boost Graph Library (BGL) is presented with the help of real-life example. We compare Boost multi-threading and memory pool performance to APR. We discuss the integration of Python with C++ using Boost. We conclude the chapter with a discussion of Boost Generic Image Processing Library.

440 citations


"One hundred data-driven haptic text..." refers methods in this paper

  • ...In addition, the Boost Random Number Library is needed to generate the excitation signals [9], and the GIMP Toolkit [1] is needed for the texture selection menu....


Journal ArticleDOI
TL;DR: This article considers the problem of modeling a class of nonstationary time series using piecewise autoregressive (AR) processes, and the minimum description length principle is applied to compare various segmented AR fits to the data.
Abstract: This article considers the problem of modeling a class of nonstationary time series using piecewise autoregressive (AR) processes. The number and locations of the piecewise AR segments, as well as the orders of the respective AR processes, are assumed unknown. The minimum description length principle is applied to compare various segmented AR fits to the data. The goal is to find the “best” combination of the number of segments, the lengths of the segments, and the orders of the piecewise AR processes. Such a “best” combination is implicitly defined as the optimizer of an objective function, and a genetic algorithm is implemented to solve this difficult optimization problem. Numerical results from simulation experiments and real data analyses show that the procedure has excellent empirical properties. The segmentation of multivariate time series is also considered. Assuming that the true underlying model is a segmented autoregression, this procedure is shown to be consistent for estimating the location of...
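The building block of such piecewise-AR fitting is the least-squares fit of a single candidate segment, scored by a code length that MDL compares across segmentations. The sketch below uses a simplified two-part code length, not Auto-PARM's exact criterion, and omits the genetic-algorithm search over breakpoints.

```python
import numpy as np

def fit_ar_least_squares(x, p):
    """Least-squares AR(p) fit for one candidate segment: regress each
    sample on its p predecessors.  Returns the coefficients (lag 1 first)
    and a simplified two-part MDL score (model cost + data cost)."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    sigma2 = resid.var()
    n = len(y)
    # Schematic MDL: parameter-encoding cost grows with log(n),
    # data cost with the log residual variance.
    mdl = 0.5 * (p + 2) * np.log(n) + 0.5 * n * np.log(sigma2)
    return coeffs, mdl

# Demo: recover known AR(2) coefficients from simulated data.
rng = np.random.default_rng(1)
x = np.zeros(2000)
e = rng.normal(0.0, 0.1, 2000)
for t in range(2, 2000):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + e[t]
coeffs, mdl = fit_ar_least_squares(x, 2)
```

A segmentation procedure would evaluate this score for every candidate combination of breakpoints and AR orders and keep the minimizer, which is the difficult optimization the article solves with a genetic algorithm.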

418 citations


"One hundred data-driven haptic text..." refers methods in this paper

  • ...We accomplished this segmentation using the Auto-PARM segmenting algorithm presented in [8]....


Journal ArticleDOI
TL;DR: The DynTex database of high-quality dynamic texture videos is presented and a scheme for the manual annotation of the sequences based on a detailed analysis of the physical processes underlying the dynamic textures is proposed.

337 citations


"One hundred data-driven haptic text..." refers methods in this paper

  • ...The DynTex database contains sequences of dynamic texture videos [18]....


Journal ArticleDOI

232 citations


"One hundred data-driven haptic text..." refers background in this paper

  • ...Since humans cannot discriminate the direction of vibration [2], it does not matter in which direction we display the texture forces....


  • ...This mapping was motivated by the fact that humans cannot discern the direction of high-frequency vibrations [2]....
