Journal ArticleDOI

Pattern Recognition and Machine Learning

01 Aug 2007-Technometrics (Taylor & Francis)-Vol. 49, Iss: 3, pp 366-366
Abstract: (2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.
Citations
Journal ArticleDOI
TL;DR: This paper shows that a better option is available for fully-observed VBMF--the global solution can be analytically computed--and that the global optimal solution of empirical VBMF (where hyperparameters are also learned from data) can also be analytically computed.
Abstract: The variational Bayesian (VB) approximation is known to be a promising approach to Bayesian estimation, when the rigorous calculation of the Bayes posterior is intractable. The VB approximation has been successfully applied to matrix factorization (MF), offering automatic dimensionality selection for principal component analysis. Generally, finding the VB solution is a nonconvex problem, and most methods rely on a local search algorithm derived through a standard procedure for the VB approximation. In this paper, we show that a better option is available for fully-observed VBMF--the global solution can be analytically computed. More specifically, the global solution is a reweighted SVD of the observed matrix, and each weight can be obtained by solving a quartic equation with its coefficients being functions of the observed singular value. We further show that the global optimal solution of empirical VBMF (where hyperparameters are also learned from data) can also be analytically computed. We illustrate the usefulness of our results through experiments in multivariate analysis.
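
To make the structure of this result concrete, here is a minimal sketch of the reweighted-SVD form the abstract describes. The shrinkage function below is a placeholder: in the paper, each weight is the root of a quartic equation in the observed singular value, and those coefficients are not reproduced here.

```python
# Sketch of the "reweighted SVD" structure described above (not the paper's
# exact solution): the true VBMF weights solve a quartic in each singular
# value; a simple soft-threshold stands in here as a placeholder shrinkage.
import numpy as np

def reweighted_svd(V, shrink):
    """Return U diag(w(s)) Vt, applying a per-singular-value weight."""
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    w = np.array([shrink(sv) for sv in s])   # paper: root of a quartic in sv
    return U @ np.diag(w) @ Vt

# Placeholder shrinkage (assumption, not the analytic VBMF weight).
soft_threshold = lambda sv, tau=1.0: max(sv - tau, 0.0)

V = np.random.randn(20, 15)
V_hat = reweighted_svd(V, soft_threshold)
```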

129 citations


Cites background or methods from "Pattern Recognition and Machine Lea..."

  • ...The variational Bayesian (VB) method (Attias, 1999; Bishop, 2006), which approximates the posterior distributions by decomposable distributions, has also been applied to MF (Bishop, 1999; Lim and Teh, 2007; Ilin and Raiko, 2010)....

    [...]

  • ...(6) This constraint breaks the entanglement between the parameter matrices A and B, and leads to a computationally-tractable iterative algorithm, called the iterated conditional modes (ICM) algorithm (Besag, 1986; Bishop, 2006)....

    [...]

  • ...In practice, the VBMF solution is computed by the iterated conditional modes (ICM) algorithm (Besag, 1986; Bishop, 2006), where the mean and the covariance of the posterior distributions are iteratively updated until convergence (Lim and Teh, 2007; Ilin and Raiko, 2010)....

    [...]
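
The excerpts above repeatedly refer to the iterated conditional modes (ICM) pattern of alternating updates. A simplified stand-in, ridge-regularized alternating least squares without the posterior covariances that full VBMF tracks, illustrates the iteration structure:

```python
# Illustration of the alternating (ICM-style) update pattern the excerpts
# describe: with one factor fixed, the other has a closed-form update, and
# the two are iterated until convergence. This is a simplified stand-in for
# the full VB updates, not the paper's algorithm.
import numpy as np

def icm_mf(V, rank, lam=0.1, iters=50):
    m, n = V.shape
    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((m, rank)), rng.standard_normal((n, rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        A = V @ B @ np.linalg.inv(B.T @ B + I)     # update A with B fixed
        B = V.T @ A @ np.linalg.inv(A.T @ A + I)   # update B with A fixed
    return A, B

V = np.random.default_rng(1).standard_normal((30, 20))
A, B = icm_mf(V, rank=5)
print(np.linalg.norm(V - A @ B.T))   # reconstruction error after fitting
```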

Journal ArticleDOI
TL;DR: This study finds support in German readers' eye fixations for two distinct difficulty metrics: surprisal, which reflects the change in probabilities across syntactic analyses as new words are integrated; and retrieval, which quantifies comprehension difficulty in terms of working memory constraints.
Abstract: Eye fixation durations during normal reading correlate with processing difficulty, but the specific cognitive mechanisms reflected in these measures are not well understood. This study finds support in German readers' eye fixations for two distinct difficulty metrics: surprisal, which reflects the change in probabilities across syntactic analyses as new words are integrated; and retrieval, which quantifies comprehension difficulty in terms of working memory constraints. We examine the predictions of both metrics using a family of dependency parsers indexed by an upper limit on the number of candidate syntactic analyses they retain at successive words. Surprisal models all fixation measures and regression probability. By contrast, retrieval does not model any measure in serial processing. As more candidate analyses are considered in parallel at each word, retrieval can account for the same measures as surprisal. This pattern suggests an important role for ranked parallelism in theories of sentence comprehension.
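
Surprisal, the first of the two metrics, is the negative log probability of a word given its preceding context. A toy illustration, with bigram probabilities invented purely for demonstration:

```python
# Toy illustration of surprisal: the negative log of a word's conditional
# probability given the preceding context. Bigram values are made up.
import math

bigram_p = {("the", "dog"): 0.2, ("the", "verdict"): 0.01}

def surprisal(prev, word, p=bigram_p):
    return -math.log2(p[(prev, word)])     # in bits

print(surprisal("the", "dog"))      # ~2.32 bits: expected continuation
print(surprisal("the", "verdict"))  # ~6.64 bits: surprising continuation
```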

129 citations


Cites methods from "Pattern Recognition and Machine Lea..."

  • ...We use a maximum-entropy classifier to estimate the probability of a transition given a state (Bishop, 2006, p. 198)....

    [...]
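
The maximum-entropy classifier mentioned in the excerpt above is a log-linear (multinomial logistic) model. A minimal sketch, with feature and weight dimensions invented for illustration:

```python
# A minimal maximum-entropy (multinomial logistic) classifier of the kind
# the excerpt describes: P(transition | state) = softmax(W @ features).
import numpy as np

def maxent_probs(W, x):
    """P(class | x) under a log-linear model with weight matrix W."""
    scores = W @ x
    scores -= scores.max()              # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

W = np.random.randn(3, 5)   # 3 transitions, 5 state features (made up)
x = np.random.randn(5)
print(maxent_probs(W, x))   # sums to 1 over candidate transitions
```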

Journal Article
TL;DR: The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction (RAPTOR), has required no manual intervention on the difficult datasets tested and has provided sub-pixel alignment as good as manual alignment by an expert user.
Abstract: Markov Random Field Based Automatic Image Alignment for Electron Tomography. Farshid Moussavi* and Fernando Amat* (Department of Electrical Engineering, Stanford University; farshid1@stanford.edu, famat@stanford.edu), Luis R. Comolli and Kenneth H. Downing (Life Sciences Division, Lawrence Berkeley National Laboratory; lrcomolli@lbl.gov, khdowning@lbl.gov), Gal Elidan (Department of Computer Science, Stanford University; galel@cs.stanford.edu), and Mark Horowitz (Department of Electrical Engineering, Stanford University; horowitz@stanford.edu). *These authors contributed equally to this work. Introduction: Cryo electron tomography (cryo-ET) is the primary method for obtaining 3D reconstructions of intact bacteria, viruses, and complex molecular machines ([7], [2]). It first flash-freezes a specimen in a thin layer of ice, then rotates the ice sheet in a transmission electron microscope (TEM), recording images of different projections through the sample. The resulting images are aligned and then back-projected to form the desired 3D model. The typical resolution of a biological electron microscope is on the order of 1 nm per pixel, which means that small imprecisions in the microscope's stage or lenses can cause large alignment errors. To enable high-precision alignment, biologists add a small number of spherical gold beads to the sample before it is frozen. These beads generate high-contrast dots in the image that can be tracked across projections. Each gold bead can be seen as a marker with a fixed location in 3D, which provides the reference points to bring all the images to a common frame, as in the classical structure-from-motion problem. A high-accuracy alignment is critical to obtain a high-resolution tomogram (usually on the order of 5-15 nm resolution). While some methods try to automate the task of tracking markers and aligning the images ([8], [4]), they require user intervention if the SNR of the image becomes too low. Unfortunately, cryo-ET often has poor SNR, since the samples are relatively thick (for TEM) and the restricted electron dose usually results in projections with SNR under 0 dB. This paper shows that formulating this problem as a maximum-likelihood estimation task yields an approach that can automatically align cryo-ET datasets with high precision using inference in graphical models. This approach has been packaged into publicly available software called RAPTOR (Robust Alignment and Projection Estimation for Tomographic Reconstruction). [1] presents an extended version of the results reported in this abstract.
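
As a toy illustration of the bead-based alignment the abstract describes, the sketch below projects known 3D bead positions at a tilt angle and recovers an unknown per-image translation by least squares. The real RAPTOR system also estimates rotations and handles unreliable bead detections via inference in a graphical model; the angles, noise levels, and pure-translation model here are all invented for illustration.

```python
# Toy bead-based alignment: gold beads with known 3D positions are projected
# at a known tilt angle, and the unknown per-image 2D translation is
# recovered by least squares (closed form for a pure-translation model).
import numpy as np

def project(beads, tilt_deg):
    """Parallel projection of Nx3 bead coordinates after tilting about y."""
    t = np.deg2rad(tilt_deg)
    R = np.array([[np.cos(t), 0, np.sin(t)],
                  [0,         1, 0        ]])
    return beads @ R.T                      # Nx2 image coordinates

rng = np.random.default_rng(0)
beads = rng.uniform(-100, 100, size=(10, 3))
true_shift = np.array([3.2, -1.7])
observed = (project(beads, tilt_deg=30.0) + true_shift
            + 0.1 * rng.standard_normal((10, 2)))

# Shift estimate: mean residual between observed and predicted projections.
est_shift = (observed - project(beads, 30.0)).mean(axis=0)
print(est_shift)   # close to true_shift
```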

129 citations


Cites background from "Pattern Recognition and Machine Lea..."

  • ...For a detailed description of LBP the reader is referred to (Yedidia et al., 2000, 2005; Kschischang et al., 2001; Bishop, 2006)....

    [...]

  • ...For a thorough treatment, the reader is referred to the many references on the subject (e.g. Yedidia et al., 2005; Bishop, 2006)....

    [...]

Journal ArticleDOI
TL;DR: This article reviews and provides a categorization of wearable sensors useful for capturing biometric signals, and analyses the computational cost of the different signal processing techniques, an important practical factor in constrained devices such as wearables.
Abstract: The growing popularity of wearable devices is leading to new ways to interact with the environment, with other smart devices, and with other people. Wearables equipped with an array of sensors are able to capture the owner’s physiological and behavioural traits, thus are well suited for biometric authentication to control other devices or access digital services. However, wearable biometrics have substantial differences from traditional biometrics for computer systems, such as fingerprints, eye features, or voice. In this article, we discuss these differences and analyse how researchers are approaching the wearable biometrics field. We review and provide a categorization of wearable sensors useful for capturing biometric signals. We analyse the computational cost of the different signal processing techniques, an important practical factor in constrained devices such as wearables. Finally, we review and classify the most recent proposals in the field of wearable biometrics in terms of the structure of the biometric system proposed, their experimental setup, and their results. We also present a critique of experimental issues such as evaluation and feasibility aspects, and offer some final thoughts on research directions that need attention in future work.

129 citations

Journal ArticleDOI
TL;DR: Experimental results show that a random forest regression model trained on the proposed DOG features corresponds closely to the HVS and remains robust when tested on available databases.
Abstract: Objective image quality assessment (IQA) plays an important role in the development of multimedia applications. Predictions of an IQA metric should be consistent with human perception. The release of the newest IQA database (TID2013) challenges most of the widely used quality metrics (e.g., peak signal-to-noise ratio and the structural similarity index). We propose a new methodology to build the metric model using a regression approach. The new IQA score is a nonlinear combination of features extracted from several difference-of-Gaussian (DOG) frequency bands, which mimics the human visual system (HVS). Experimental results show that a random forest regression model trained on the proposed DOG features corresponds closely to the HVS and remains robust when tested on available databases.
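
To illustrate the pipeline the abstract describes, here is a hedged sketch: difference-of-Gaussian band responses pooled into a feature vector for a random forest regressor. The band sigmas, pooling statistics, and synthetic training data are all assumptions, not the paper's exact feature design.

```python
# Sketch of the DOG-feature pipeline described above: band-pass responses
# from differences of Gaussian blurs, pooled into simple statistics and fed
# to a random forest regressor. Parameters are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestRegressor

def dog_features(img, sigmas=(1, 2, 4, 8)):
    blurred = [gaussian_filter(img, s) for s in sigmas]
    bands = [b1 - b2 for b1, b2 in zip(blurred, blurred[1:])]
    # Pool each band to mean absolute response and standard deviation.
    return np.array([stat for b in bands
                     for stat in (np.abs(b).mean(), b.std())])

# Train on (image, subjective score) pairs; random data stands in here.
rng = np.random.default_rng(0)
X = np.stack([dog_features(rng.standard_normal((32, 32))) for _ in range(50)])
y = rng.uniform(0, 100, size=50)            # synthetic mean opinion scores
model = RandomForestRegressor(n_estimators=100).fit(X, y)
print(model.predict(X[:3]))
```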

129 citations