scispace - formally typeset
Topic

Gaussian process

About: Gaussian process is a research topic. Over the lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.
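As a quick illustration of the topic itself: a Gaussian process evaluated at finitely many inputs is simply a multivariate normal whose covariance matrix is built from a kernel. A minimal NumPy sketch (the kernel choice and hyperparameters here are illustrative, not taken from any paper below):

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 50)

# finite-dimensional marginal of the GP: a multivariate normal with
# kernel-built covariance; the jitter term keeps the Cholesky stable
K = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))
samples = rng.multivariate_normal(np.zeros(len(x)), K, size=3)  # 3 sample paths
```

Each row of `samples` is one smooth random function drawn from the GP prior; the lengthscale controls how quickly the sampled functions wiggle.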


Papers
Journal ArticleDOI
TL;DR: A novel statistical approach for automatic vehicle detection based on local features located within three significant subregions of the image; computational costs are reduced by eliminating the need for an ICA residual image reconstruction process and by computing the likelihood with a weighted Gaussian mixture model.
Abstract: This paper develops a novel statistical approach for automatic vehicle detection based on local features that are located within three significant subregions of the image. In the detection process, each subregion is projected onto its associated eigenspace and independent basis space to generate a principal components analysis (PCA) weight vector and an independent component analysis (ICA) coefficient vector, respectively. A likelihood evaluation process is then performed based on the estimated joint probability of the projection weight vectors and the coefficient vectors of the subregions with position information. The use of subregion position information minimizes the risk of false acceptances, whereas the use of PCA to model the low-frequency components of the eigenspace and ICA to model the high-frequency components of the residual space improves the tolerance of the detection process toward variations in the illumination conditions and vehicle pose. The use of local features not only renders the system more robust toward partial occlusions but also reduces the computational overhead. The computational costs are further reduced by eliminating the requirement for an ICA residual image reconstruction process and by computing the likelihood probability using a weighted Gaussian mixture model, whose parameters and weights are iteratively estimated using an expectation-maximization algorithm.
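The weighted Gaussian mixture likelihood with EM-estimated parameters and weights mentioned above can be sketched in a few lines. This is a toy 1-D illustration of the EM updates only; the paper's actual model operates on subregion projection vectors and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy 1-D data from two clusters (a stand-in for projection-coefficient features)
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 300)])

# initialise a 2-component Gaussian mixture
w = np.array([0.5, 0.5])    # mixture weights
mu = np.array([-1.0, 1.0])  # component means
var = np.array([1.0, 1.0])  # component variances

def gauss(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

for _ in range(50):
    # E-step: responsibility of each component for each point
    r = w * gauss(data[:, None], mu, var)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    n_k = r.sum(axis=0)
    w = n_k / len(data)
    mu = (r * data[:, None]).sum(axis=0) / n_k
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / n_k
```

After convergence, the weighted mixture density `sum_k w[k] * gauss(x, mu[k], var[k])` plays the role of the likelihood evaluated during detection.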

136 citations

Journal ArticleDOI
TL;DR: In this paper, a dynamic kernel partial least squares (DKPLS) technique is proposed to model nonlinearities and to capture the dynamics in the data; it is based on a kernel transformation of the source features to allow nonlinear modeling and on concatenation of the previous and next frames to model the dynamics.
Abstract: A drawback of many voice conversion algorithms is that they rely on linear models and/or require a lot of tuning. In addition, many of them ignore the inherent time-dependency between speech features. To address these issues, we propose a dynamic kernel partial least squares (DKPLS) technique that models nonlinearities and captures the dynamics in the data. The method is based on a kernel transformation of the source features to allow nonlinear modeling and on concatenation of the previous and next frames to model the dynamics. Partial least squares regression is used to find a conversion function that does not overfit the data. The resulting DKPLS algorithm is simple and efficient and does not require extensive tuning. Existing statistical methods proposed for voice conversion are able to produce good similarity between the original and the converted target voices, but the quality is usually degraded. The experiments conducted on a variety of conversion pairs show that DKPLS, being a statistical method, enables successful identity conversion while achieving a major improvement in the quality scores compared to the state-of-the-art Gaussian mixture-based model. In addition to enabling better spectral feature transformation, quality is further improved when aperiodicity and binary voicing values are converted using DKPLS with auxiliary information from spectral features.
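The core DKPLS ingredients described above (frame concatenation for dynamics, a kernel transform of the source features, and PLS regression) can be sketched as follows. This is a simplified NumPy illustration with toy data and a NIPALS-style PLS solver, not the authors' implementation; all names, reference-point choices, and hyperparameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy "source" and "target" feature sequences (stand-ins for speech spectra)
T, d = 200, 4
src = rng.normal(size=(T, d))
tgt = src @ rng.normal(size=(d, d)) + 0.1 * rng.normal(size=(T, d))

# 1) dynamics: concatenate previous, current, and next frames
X = np.hstack([np.roll(src, 1, axis=0), src, np.roll(src, -1, axis=0)])[1:-1]
Y = tgt[1:-1]

# 2) kernel transform: Gaussian kernel against a subset of reference frames
refs = X[::10]
sq = ((X[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
Kx = np.exp(-sq / (2 * X.shape[1]))

# 3) PLS regression (NIPALS), keeping a small number of latent components
def pls_fit(X, Y, n_comp=5, n_iter=100):
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    W, P, C = [], [], []
    for _ in range(n_comp):
        u = Yc[:, 0]
        for _ in range(n_iter):
            w = Xc.T @ u
            w /= np.linalg.norm(w)
            t = Xc @ w
            c = Yc.T @ t / (t @ t)
            u = Yc @ c / (c @ c)
        p = Xc.T @ t / (t @ t)
        Xc = Xc - np.outer(t, p)   # deflate X
        Yc = Yc - np.outer(t, c)   # deflate Y
        W.append(w); P.append(p); C.append(c)
    W, P, C = map(np.column_stack, (W, P, C))
    B = W @ np.linalg.solve(P.T @ W, C.T)  # regression coefficients
    return B, X.mean(0), Y.mean(0)

B, xm, ym = pls_fit(Kx, Y)
pred = (Kx - xm) @ B + ym  # converted (toy) target features
```

Keeping only a few latent components is what guards against overfitting here, which is the role the abstract assigns to PLS.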

136 citations

01 Jan 2005
TL;DR: An overview of techniques for nonlinear filtering under a wide variety of conditions on the nonlinearities and on the noise is presented, and a general Bayesian approach to filtering is developed that is applicable to all linear or nonlinear stochastic systems.
Abstract: Nonlinear filtering is the process of estimating and tracking the state of a nonlinear stochastic system from non-Gaussian noisy observation data. In this technical memorandum, we present an overview of techniques for nonlinear filtering for a wide variety of conditions on the nonlinearities and on the noise. We begin with the development of a general Bayesian approach to filtering that is applicable to all linear or nonlinear stochastic systems. We show how Bayesian filtering requires integration over probability density functions that cannot be accomplished in closed form for the general nonlinear, non-Gaussian multivariate system, so approximations are required. Next, we address the special case where both the dynamic and observation models are nonlinear but the noises are additive and Gaussian. The extended Kalman filter (EKF) has been the standard technique usually applied here. For severe nonlinearities, however, the EKF can be unstable and perform poorly. We show how to use the analytical expression for Gaussian densities to generate integral expressions for the mean and covariance matrices needed for the Kalman filter, which include the nonlinearities directly inside the integrals. Several numerical techniques are presented that give approximate solutions for these integrals, including Gauss-Hermite quadrature, the unscented filter, and Monte Carlo approximations. We then show how these numerically generated integral solutions can be used in a Kalman filter so as to avoid the direct evaluation of the Jacobian matrix associated with the extended Kalman filter. For all filters, step-by-step block diagrams are used to illustrate the recursive implementation of each filter. To solve the fully nonlinear case, when the noise may be non-additive or non-Gaussian, we present several versions of particle filters that use importance sampling.
Particle filters can be subdivided into two categories: those that re-use particles and require resampling to prevent divergence, and those that do not re-use particles and therefore require no resampling. For the first category, we show how the use of importance sampling, combined with particle re-use at each iteration, leads to the sequential importance sampling (SIS) particle filter and its special case, the bootstrap particle filter. The requirement for resampling is outlined and an efficient resampling scheme is presented. For the second category, we discuss a generic importance sampling particle filter and then add specific implementations, including the Gaussian particle filter and combination particle filters that pair the Gaussian particle filter with the Gauss-Hermite, unscented, or Monte Carlo Kalman filters developed above to specify a Gaussian importance density. When either the dynamic or observation models are linear, we show how the Rao-Blackwell simplifications can be applied to any of the filters presented to reduce computational costs. We then present results for two nonlinear tracking examples, one with additive Gaussian noise and one with non-Gaussian embedded noise. For each example, we apply the appropriate nonlinear filters and compare performance results.
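The bootstrap particle filter mentioned above can be sketched on a standard scalar nonlinear benchmark model. This is a generic illustration, not code from the memorandum; the model and noise settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# scalar nonlinear state-space model (a common benchmark):
#   x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1 + x_{t-1}^2) + process noise
#   y_t = x_t^2 / 20 + observation noise
T, N = 50, 500
x_true = np.zeros(T)
y = np.zeros(T)
x = 0.0
for t in range(T):
    x = 0.5 * x + 25 * x / (1 + x ** 2) + rng.normal(0, np.sqrt(10))
    x_true[t] = x
    y[t] = x ** 2 / 20 + rng.normal(0, 1)

# bootstrap particle filter: propagate particles through the dynamics,
# weight them by the observation likelihood, then resample to prevent
# weight degeneracy
particles = rng.normal(0, 2, N)
est = np.zeros(T)
for t in range(T):
    particles = (0.5 * particles + 25 * particles / (1 + particles ** 2)
                 + rng.normal(0, np.sqrt(10), N))
    logw = -0.5 * (y[t] - particles ** 2 / 20) ** 2   # Gaussian obs, var 1
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles                            # weighted state estimate
    particles = particles[rng.choice(N, N, p=w)]      # multinomial resampling
```

Because the dynamics serve as the proposal, the importance weights reduce to the observation likelihood, which is exactly what makes the bootstrap filter the simplest member of the SIS family.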

136 citations

Journal ArticleDOI
TL;DR: A practical method for automatically imposing restrictions on the extent of the nugget effect is introduced; the restriction is achieved by means of a penalty term added to the likelihood function, which controls the amount of unexplainable variability in the computer model.
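The idea of penalizing the nugget in the likelihood can be sketched as follows. This toy example adds a linear penalty term to a GP negative log marginal likelihood and selects the nugget by grid search; the penalty form, kernel, and all hyperparameters are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(4 * x) + 0.05 * rng.normal(size=30)  # nearly deterministic response
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.2 ** 2)

def penalized_nll(nugget, lam=5.0):
    """Negative log marginal likelihood plus a penalty on the nugget size."""
    Kn = K + nugget * np.eye(len(x))
    _, logdet = np.linalg.slogdet(Kn)
    nll = 0.5 * (y @ np.linalg.solve(Kn, y) + logdet)
    return nll + lam * nugget  # the penalty discourages a large nugget

# grid search over nugget values; the penalty keeps the chosen value small
grid = np.logspace(-6, 0, 60)
best = grid[np.argmin([penalized_nll(g) for g in grid])]
```

The penalty weight plays the role the TL;DR describes: it caps how much variability the model is allowed to write off as unexplainable noise rather than signal.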

136 citations

Journal ArticleDOI
TL;DR: In this article, the Gaussian Process Motion Planner (GPMP) is proposed to solve continuous-time motion planning problems as probabilistic inference; its extension, GPMP2, combines GP representations of trajectories with fast, structure-exploiting inference on a factor graph via numerical optimization.
Abstract: We introduce a novel formulation of motion planning, for continuous-time trajectories, as probabilistic inference. We first show how smooth continuous-time trajectories can be represented by a small number of states using sparse Gaussian process (GP) models. We next develop an efficient gradient-based optimization algorithm that exploits this sparsity and GP interpolation. We call this algorithm the Gaussian Process Motion Planner (GPMP). We then detail how motion planning problems can be formulated as probabilistic inference on a factor graph. This forms the basis for GPMP2, a very efficient algorithm that combines GP representations of trajectories with fast, structure-exploiting inference via numerical optimization. Finally, we extend GPMP2 to an incremental algorithm, iGPMP2, that can efficiently replan when conditions change. We benchmark our algorithms against several sampling-based and trajectory optimization-based motion planning algorithms on planning problems in multiple environments. Our evaluation reveals that GPMP2 is several times faster than previous algorithms while retaining robustness. We also benchmark iGPMP2 on replanning problems, and show that it can find successful solutions in a fraction of the time required by GPMP2 to replan from scratch.
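The first idea above, representing a smooth continuous-time trajectory with a small number of support states and GP interpolation, can be sketched with ordinary GP regression. This is a minimal 1-D illustration, not the sparse GP prior or factor-graph machinery used by GPMP/GPMP2; the kernel and waypoints are assumptions:

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential covariance between two sets of times."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# a handful of support states: times and 1-D positions along a trajectory
t_sup = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
x_sup = np.array([0.0, 0.8, 1.0, 0.6, 0.0])

# dense query times: the GP posterior mean interpolates a smooth,
# continuous-time trajectory from just five states
t_q = np.linspace(0.0, 2.0, 101)
K = rbf(t_sup, t_sup) + 1e-6 * np.eye(len(t_sup))  # jitter for stability
traj = rbf(t_q, t_sup) @ np.linalg.solve(K, x_sup)
```

Only the five support states would need to be optimized by a planner; the interpolated trajectory between them comes for free from the GP, which is the source of the sparsity GPMP exploits.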

136 citations


Network Information

Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (87% related)
Optimization problem: 96.4K papers, 2.1M citations (85% related)
Artificial neural network: 207K papers, 4.5M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (82% related)
Deep learning: 79.8K papers, 2.1M citations (82% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978