Journal ArticleDOI

Design and Analysis of Predictive Sampling of Haptic Signals

TL;DR: This article identifies adaptive sampling strategies for haptic signals using classifiers based on level crossings and Weber's law as well as random forests over a variety of causal signal features, and finds that the level-crossing sampler is superior.
Abstract: In this article, we identify adaptive sampling strategies for haptic signals. Our approach relies on experiments wherein we record the response of several users to haptic stimuli. We then learn different classifiers to predict the user response based on a variety of causal signal features. The classifiers that have good prediction accuracy serve as candidates to be used in adaptive sampling. We compare the resultant adaptive samplers based on their rate-distortion tradeoff using synthetic as well as natural data. For our experiments, we use a haptic device with a maximum force level of 3 N and 10 users. Each user is subjected to several piecewise constant haptic signals and is required to click a button whenever a change in the signal is perceived. For classification, we use not only classifiers based on level crossings and Weber's law but also random forests built on a variety of causal signal features. The random forest typically yields the best prediction accuracy, and a study of variable importance suggests that the level-crossing and Weber classifier features are the most dominant. The classifiers based on level crossings and Weber's law have good accuracy (more than 90%) and are only marginally inferior to random forests. The level-crossing classifier consistently outperforms the one based on Weber's law, even though the gap is small. Given their simple parametric form, the level-crossing and Weber's law-based classifiers are good candidates for adaptive sampling. We study their rate-distortion performance and find that the level-crossing sampler is superior. For example, for haptic signals obtained while exploring various rendered objects, at an average sampling rate of 10 samples per second, the level-crossing adaptive sampler has a mean square error about 3 dB lower than that of the Weber sampler.
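The two parametric rules named in the abstract lend themselves to a compact illustration. The sketch below implements one plausible reading of each rule (level crossing: keep a sample whenever the signal crosses a level of a uniform grid with spacing delta; Weber: keep a sample whenever the relative change with respect to the last kept sample exceeds a fraction c), reconstructs with a sample-and-hold, and reports rate and MSE. The synthetic signal, delta = 0.3 N, and c = 0.15 are made-up values for illustration, not parameters taken from the paper.

```python
import numpy as np

def level_crossing_sampler(signal, delta):
    """Keep a sample whenever the signal crosses one of the uniformly
    spaced levels k*delta since the last kept sample."""
    kept = [0]                       # always keep the first sample
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if int(np.floor(x / delta)) != int(np.floor(last / delta)):
            kept.append(i)
            last = x
    return kept

def weber_sampler(signal, c):
    """Keep a sample whenever the relative change w.r.t. the last kept
    sample exceeds the Weber fraction c."""
    kept = [0]
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last) >= c * abs(last):   # Weber's-law deadband
            kept.append(i)
            last = x
    return kept

def rate_distortion(signal, kept, fs):
    """Sample-and-hold reconstruction from the kept indices, then
    report (samples per second, MSE in dB)."""
    recon = np.zeros_like(signal)
    for a, b in zip(kept, kept[1:] + [len(signal)]):
        recon[a:b] = signal[a]
    mse = np.mean((signal - recon) ** 2)
    rate = len(kept) * fs / len(signal)
    return rate, 10 * np.log10(mse + 1e-12)

# Toy piecewise-constant force signal (values in newtons), fs = 1 kHz, 10 s.
fs = 1000
rng = np.random.default_rng(0)
signal = np.repeat(rng.uniform(0.2, 3.0, size=20), fs // 2)

for name, kept in [("level crossing", level_crossing_sampler(signal, delta=0.3)),
                   ("Weber",          weber_sampler(signal, c=0.15))]:
    rate, mse_db = rate_distortion(signal, kept, fs)
    print(f"{name:>14}: {rate:5.1f} samples/s, MSE = {mse_db:6.1f} dB")
```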
Citations
Journal ArticleDOI
TL;DR: This survey focuses on how the fifth generation of mobile networks will allow haptic applications to come to life, in combination with the required haptic data communication protocols, bilateral teleoperation control schemes, and haptic data processing.
Abstract: Touch is currently seen as the modality that will complement audition and vision as a third media stream over the Internet in a variety of future haptic applications which will allow full immersion and that will, in many ways, impact society. Nevertheless, the high requirements of these applications demand networks that allow ultra-reliable and low-latency communication for the challenging task of delivering the quality of service needed to keep the user's quality of experience at optimal levels. In this survey, we list, discuss, and evaluate methodologies and technologies of the necessary infrastructure for haptic communication. Furthermore, we focus on how the fifth generation of mobile networks will allow haptic applications to come to life, in combination with the haptic data communication protocols, bilateral teleoperation control schemes, and haptic data processing needed. Finally, we state the lessons learned throughout the surveyed research material along with the future challenges and infer our conclusions.

179 citations


Cites methods from "Design and Analysis of Predictive S..."

  • ...Nonetheless, in [91] it is stated that the level crossings sampler outperforms the sampling method based on Weber’s law....

Journal ArticleDOI
01 Feb 2019
TL;DR: In this article, the authors present the fundamentals and state of the art in haptic codec design for the Tactile Internet and discuss how limitations of the human haptic perception system can be exploited for efficient perceptual coding of kinesthetic and tactile information.
Abstract: The Tactile Internet will enable users to physically explore remote environments and to make their skills available across distances. An important technological aspect in this context is the acquisition, compression, transmission, and display of haptic information. In this paper, we present the fundamentals and state of the art in haptic codec design for the Tactile Internet. The discussion covers both kinesthetic data reduction and tactile signal compression approaches. We put a special focus on how limitations of the human haptic perception system can be exploited for efficient perceptual coding of kinesthetic and tactile information. Further aspects addressed in this paper are the multiplexing of audio and video with haptic information and the quality evaluation of haptic communication solutions. Finally, we describe the current status of the ongoing IEEE standardization activity P1918.1.1 which has the ambition to standardize the first set of codecs for kinesthetic and tactile information exchange across communication networks.
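Kinesthetic data reduction of the kind surveyed above typically transmits a sample only when it leaves a perceptual deadband around the last transmitted value and lets the receiver hold (or extrapolate) that value in between. The sketch below is a minimal encoder/decoder pair along those lines; the Weber fraction of 0.1 and the data structures are assumptions for illustration and do not correspond to any specific P1918.1.1 codec.

```python
from dataclasses import dataclass

# Perceptual deadband parameter (Weber fraction). The value 0.1 is an
# illustrative assumption, not a figure taken from the paper.
DEADBAND = 0.1

@dataclass
class Sample:
    t: float      # timestamp in seconds
    force: float  # kinesthetic sample in newtons

def deadband_encoder(samples):
    """Yield only those samples that leave the perceptual deadband
    around the last transmitted value (kinesthetic data reduction)."""
    last = None
    for s in samples:
        if last is None or abs(s.force - last.force) > DEADBAND * abs(last.force):
            last = s
            yield s                      # this sample goes on the wire

def zero_order_hold(received, t_query):
    """Receiver-side reconstruction: hold the most recent received value.
    A linear extrapolator could be substituted here."""
    value = 0.0
    for s in received:
        if s.t > t_query:
            break
        value = s.force
    return value

# Usage sketch: transmit the reduced stream, reconstruct at any query time.
stream = [Sample(t=i * 0.001, force=1.0 + 0.002 * i) for i in range(1000)]
sent = list(deadband_encoder(stream))
print(len(sent), "of", len(stream), "samples sent;",
      "f(0.5 s) ≈", zero_order_hold(sent, 0.5))
```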

104 citations

Proceedings ArticleDOI
12 Dec 2013
TL;DR: This paper describes in detail the frame structure used in HoIP, the rationale behind the design choices, and the implementation of HoIP at the transmitter and receiver, and reports processing delay measurements made using real haptic devices and HAPI (an open-source library used to interface with the haptic device).
Abstract: Applications such as telesurgery need low-latency communication of haptic data between remote nodes. For the stability of the control loop, a typical round-trip delay target is 5 ms or lower. In this paper, we specify an application layer protocol - Haptics over Internet Protocol (HoIP) - which is designed to allow such low-latency haptic data transmission without sacrificing the quality of haptic perception. The key ingredients of HoIP include adaptive sampling strategies for the digitization of the haptic signal, the use of a multithreaded architecture at the transmitter and receiver to minimize processing delay, and the use of an existing UDP implementation. We describe in detail the frame structure used in HoIP and the rationale behind our design choices, present the implementation details of HoIP at the transmitter and receiver, and report processing delay measurements made using real haptic devices and HAPI (an open-source library used to interface with the haptic device). The HoIP software is written entirely in C++, and our results show that in the worst case the transmitter- and receiver-side processing delays are less than 0.6 ms.
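The published HoIP frame layout is not reproduced on this page, so the sketch below only illustrates the kind of low-overhead framing over UDP that the abstract describes: a hypothetical frame carrying a sequence number, timestamp, force sample, and the adaptive-sampling threshold mentioned in the excerpts further down. All field names, sizes, and the port number are assumptions, and the sketch is in Python rather than the protocol's C++ implementation.

```python
import socket
import struct
import time

# Hypothetical frame layout (not the published HoIP frame): network byte
# order, uint32 sequence number, float64 timestamp, float32 force sample,
# float32 adaptive-sampling threshold advertised to the peer.
FRAME_FMT = "!Idff"
FRAME_SIZE = struct.calcsize(FRAME_FMT)   # 20 bytes

def send_sample(sock, addr, seq, force, threshold):
    """Pack one haptic sample into a datagram and send it over UDP."""
    frame = struct.pack(FRAME_FMT, seq, time.time(), force, threshold)
    sock.sendto(frame, addr)

def recv_sample(sock):
    """Block for one datagram and unpack it into its fields."""
    data, _ = sock.recvfrom(FRAME_SIZE)
    return struct.unpack(FRAME_FMT, data)

# Usage sketch: a transmitter would call send_sample() only for samples
# selected by the adaptive sampler, keeping the packet rate low.
if __name__ == "__main__":
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sample(tx, ("127.0.0.1", 9999), seq=0, force=1.25, threshold=0.1)
```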

13 citations


Cites background from "Design and Analysis of Predictive S..."

  • ...12, as discussed in [13], [19], and linear extrapolator at both ends....

  • ...These constants can vary significantly depending upon the user and the haptic device [19]....

  • ...22, as discussed in [13], [19], and sample-hold extrapolator at both ends....

  • ...We refer to [13], [19] for a study of adaptive sampling of haptic signals....

Proceedings ArticleDOI
16 Apr 2015
TL;DR: HoIP, a low-latency application layer protocol that enables haptic, audio, and video data transmission over a network between two remotely connected nodes, performs well in keeping latencies under the QoS thresholds.
Abstract: Telehaptics applications are usually characterized by a strict round-trip haptic data latency requirement of less than 30 ms. In this paper, we present Haptics over Internet Protocol (HoIP) - a low-latency application layer protocol that enables haptic, audio, and video data transmission over a network between two remotely connected nodes. The evaluation of the protocol is carried out through a set of three experiments, each with distinct objectives. First, a haptic-audio-visual (HAV) interactive application, in which two remotely located operators communicate via haptic, auditory, and visual media, evaluates the Quality of Service (QoS) violations introduced by the protocol. Second, a haptic sawing experiment assesses the impact of HoIP and network delays in telehaptics applications, taking a typical telesurgical activity as an example. Third, a telepottery system determines the protocol's ability to reproduce a real-time interactive user experience with a remote virtual object in the presence of perceptual data compression and reconstruction techniques. Our experiments reveal that the transmission scheduling of multimedia packets performs well in keeping the latencies well under the QoS thresholds.

12 citations


Cites background from "Design and Analysis of Predictive S..."

  • ...In order to take advantages of the dynamics of the human perception [14], the sampling thresholds (the field Threshold in Figure 1) are periodically communicated, as a part of the header, so that the other node can refrain from sending unnecessary packets....

References
Journal ArticleDOI
01 Oct 2001
TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the splitting, and the ideas are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to AdaBoost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 1996, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation, and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
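As a concrete illustration of the construction described above, the sketch below grows a forest in which each tree is trained on a bootstrap sample and considers a random subset of features at every split, then reads off an out-of-bag error estimate and a variable-importance ranking. It uses scikit-learn on synthetic data; note that scikit-learn's feature_importances_ are impurity-based rather than Breiman's permutation-based internal estimates, and all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic two-class problem: 10 features, only 4 of them informative.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           n_redundant=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree is grown on a bootstrap sample and considers a random subset
# of features (max_features) at every split, as in Breiman's construction.
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                oob_score=True, random_state=0)
forest.fit(X_train, y_train)

print("test accuracy :", forest.score(X_test, y_test))
print("OOB estimate  :", forest.oob_score_)          # internal error estimate
for rank, idx in enumerate(np.argsort(forest.feature_importances_)[::-1], 1):
    print(f"importance rank {rank}: feature {idx}")
```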

79,257 citations

Journal ArticleDOI
TL;DR: This article gives an introduction to the subject of classification and regression trees by reviewing some widely available algorithms and comparing their capabilities, strengths, and weaknesses in two examples.
Abstract: Classification and regression trees are machine-learning methods for constructing prediction models from data. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. As a result, the partitioning can be represented graphically as a decision tree. Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost. Regression trees are for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values. This article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities, strengths, and weaknesses in two examples. © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011, 1, 14–23. DOI: 10.1002/widm.8

16,974 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Proceedings Article
Ron Kohavi
20 Aug 1995
TL;DR: The results indicate that for real-world datasets similar to the authors', the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
Abstract: We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment, over half a million runs of C4.5 and a Naive Bayes algorithm, to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether or not the folds are stratified; for the bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
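The recommendation above is straightforward to apply. The sketch below compares two placeholder classifiers with ten-fold stratified cross-validation using scikit-learn; the dataset and models are illustrative assumptions, not those of the original experiment.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

# Placeholder dataset; in practice this would be the real problem's data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Ten-fold *stratified* cross-validation, as recommended above: each fold
# preserves the class proportions of the full dataset.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, model in [("naive Bayes", GaussianNB()),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name:>13}: mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```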

11,185 citations