Journal ArticleDOI

Pattern Recognition and Machine Learning

01 Aug 2007-Technometrics (Taylor & Francis)-Vol. 49, Iss: 3, pp 366-366
TL;DR: This book covers a broad range of topics for regular factorial designs, presents all of the material in a very mathematical fashion, and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.
Abstract: (2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.
Citations
Journal Article
TL;DR: It is a fair statement that the recent advances brought forward by Deep Learning reflect a new era in Machine Learning that revolutionized many domains of signal and information processing.
Abstract: Modeling data via artificial neural networks (ANN) is not a new concept. Most of the underlying techniques have been known since the 1940s. It has to be pointed out, though, that a series of recent advances in how the networks are trained and utilized forms the foundation of today's Deep Learning ecosystem. It is a fair statement that the recent advances brought forward by Deep Learning reflect a new era in Machine Learning (ML) that has revolutionized many domains of signal and information processing. This holds true beyond the commonly discussed speech and object recognition applications, branching into computer vision, natural language processing, information retrieval, and related fields.

117 citations

Journal ArticleDOI
TL;DR: This paper proposes a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance and shows that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms.
Abstract: Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures, which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features’ ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied to any specific form of loss function, as is typical for existing cross-modal hashing methods; rather, we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.
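As a rough illustration of the idea, the sketch below uses a Winner-Take-All-style rank hash as a simplified stand-in for the paper's ranking-based hash functions: each code entry records which element of a small window of projected features is largest, a rank-order statistic that is scale-invariant. The projection matrices `W_img` and `W_txt` are random placeholders for the jointly learned linear subspaces, and all dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_hash(X, W, m=4):
    """Winner-take-all style rank hash: split each projection into
    non-overlapping windows of size m and record the index of the
    largest element in each window (a rank-order statistic)."""
    P = X @ W                      # project into a linear subspace
    P = P.reshape(len(X), -1, m)   # windows of size m
    return P.argmax(axis=2)        # one code symbol per window

def hamming(a, b):
    """Hamming distance between two code vectors."""
    return int((a != b).sum())

# toy setup: two modalities (e.g. 64-d image features, 32-d text
# features) mapped into a common code space of 8 windows * 4 dims;
# in the paper these projections are learned jointly, here they are
# random placeholders
W_img = rng.normal(size=(64, 32))
W_txt = rng.normal(size=(32, 32))
img = rng.normal(size=(5, 64))
txt = rng.normal(size=(5, 32))

codes_img = rank_hash(img, W_img)
codes_txt = rank_hash(txt, W_txt)
d = hamming(codes_img[0], codes_txt[0])   # cross-modal distance, 0..8
print(d)
```

Because the codes are rank-based, rescaling either modality's features leaves the codes, and hence the Hamming distances, unchanged.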

117 citations


Additional excerpts

  • ...More details of this standard ensemble learning method can be found in [43]....


Proceedings ArticleDOI
09 Jan 2012
TL;DR: Experiments on several visual classification tasks show that the proposed embedding into a Reproducing Kernel Hilbert Space via a Riemannian pseudo kernel obtains considerable improvements in discrimination accuracy.
Abstract: A convenient way of analysing Riemannian manifolds is to embed them in Euclidean spaces, with the embedding typically obtained by flattening the manifold via tangent spaces. This general approach is not free of drawbacks. For example, only distances between points to the tangent pole are equal to true geodesic distances. This is restrictive and may lead to inaccurate modelling. Instead of using tangent spaces, we propose embedding into the Reproducing Kernel Hilbert Space by introducing a Riemannian pseudo kernel. We furthermore propose to recast a locality preserving projection technique from Euclidean spaces to Riemannian manifolds, in order to demonstrate the benefits of the embedding. Experiments on several visual classification tasks (gesture recognition, person re-identification and texture classification) show that in comparison to tangent-based processing and state-of-the-art methods (such as tensor canonical correlation analysis), the proposed approach obtains considerable improvements in discrimination accuracy.
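A minimal sketch of the kind of Riemannian pseudo kernel described above, under the assumption that the manifold points are symmetric positive definite (SPD) matrices (e.g. covariance descriptors) and the metric is the log-Euclidean distance. The Gaussian form and the gamma value are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(X, Y):
    """Distance between SPD matrices under the log-Euclidean metric:
    the Frobenius norm of the difference of matrix logarithms."""
    return np.linalg.norm(spd_log(X) - spd_log(Y), 'fro')

def pseudo_kernel(mats, gamma=0.1):
    """Gaussian-type pseudo kernel on the SPD manifold; 'pseudo'
    because positive definiteness of the Gram matrix is not
    guaranteed for every Riemannian metric."""
    n = len(mats)
    K = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            d = log_euclidean_dist(mats[i], mats[j])
            K[i, j] = K[j, i] = np.exp(-gamma * d ** 2)
    return K

# toy SPD matrices, e.g. region covariance descriptors
rng = np.random.default_rng(1)
mats = []
for _ in range(4):
    A = rng.normal(size=(3, 3))
    mats.append(A @ A.T + 3 * np.eye(3))  # A A^T + 3I is guaranteed SPD
K = pseudo_kernel(mats)
print(np.round(K, 3))
```

With the Gram matrix `K` in hand, any kernel method (kernel discriminant analysis, kernelized locality preserving projections, etc.) can operate on the manifold-valued data without flattening it through a single tangent space.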

116 citations


Cites methods from "Pattern Recognition and Machine Lea..."

  • ...V_i = A^T K_i and classification methods such as Nearest Neighbours or Support Vector Machines [2] can be employed to label X_q....


  • ...Similarly, gallery points X_i are represented by r-dimensional vectors V_i = A^T K_i and classification methods such as Nearest Neighbours or Support Vector Machines [2] can be employed to label X_q....

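The classification step quoted in the excerpts, representing gallery points as V_i = A^T K_i and labelling a query by nearest neighbours, can be sketched as follows. Here `A` is a random placeholder for the learned projection (in the paper it comes from the locality preserving projection step), and the two-class gallery is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def kernel_vec(X, x, gamma=0.5):
    """Kernel vector K_i of a point x against the gallery X
    (Gaussian kernel, illustrative choice)."""
    return np.exp(-gamma * ((X - x) ** 2).sum(axis=1))

# toy gallery: two well-separated classes in 2-D
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)),
               rng.normal(4.0, 1.0, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

# A stands in for the learned r-dimensional projection (r = 5 here)
A = rng.normal(size=(40, 5))

# gallery representations V_i = A^T K_i
V = np.array([A.T @ kernel_vec(X, x) for x in X])

def classify(x_q):
    """Project the query as A^T k(x_q), then take the
    1-nearest-neighbour label in the projected space."""
    v_q = A.T @ kernel_vec(X, x_q)
    return int(labels[np.argmin(np.linalg.norm(V - v_q, axis=1))])

print(classify(np.array([4.2, 3.8])))  # query near the second cluster
```

An SVM trained on the rows of `V` would slot in for the nearest-neighbour rule without changing the representation step.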

Proceedings ArticleDOI
14 Jun 2009
TL;DR: It is shown that, except for different approaches to regularization, Kernelized LSTD (KLSTD) is equivalent to a model-based approach that uses kernelized regression to find an approximate reward and transition model, and that Gaussian Process Temporal Difference learning (GPTD) returns a mean value function that is equivalent to these other approaches.
Abstract: A recent surge in research in kernelized approaches to reinforcement learning has sought to bring the benefits of kernelized machine learning techniques to reinforcement learning. Kernelized reinforcement learning techniques are fairly new and different authors have approached the topic with different assumptions and goals. Neither a unifying view nor an understanding of the pros and cons of different approaches has yet emerged. In this paper, we offer a unifying view of the different approaches to kernelized value function approximation for reinforcement learning. We show that, except for different approaches to regularization, Kernelized LSTD (KLSTD) is equivalent to a model-based approach that uses kernelized regression to find an approximate reward and transition model, and that Gaussian Process Temporal Difference learning (GPTD) returns a mean value function that is equivalent to these other approaches. We also discuss the relationship between our model-based approach and the earlier Gaussian Processes in Reinforcement Learning (GPRL). Finally, we decompose the Bellman error into the sum of transition error and reward error terms, and demonstrate through experiments that this decomposition can be helpful in choosing regularization parameters.

116 citations


Cites methods from "Pattern Recognition and Machine Lea..."

  • ...…least-squares regression is re-derived using the kernel trick, we arrive at the dual (kernelized) form of linear least-squares regression (Bishop, 2006), y(x) = k(x)^T (K + S)^{-1} t, (1) where t represents the target values of the sampled points, and k(x) is a column vector with elements…...


  • ...If regularized least-squares regression is re-derived using the kernel trick, we arrive at the dual (kernelized) form of linear least-squares regression (Bishop, 2006),...

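The dual-form equation cited from Bishop (2006), y(x) = k(x)^T (K + S)^{-1} t, is ordinary kernel ridge regression and can be sketched directly. The kernel, the toy data, and the choice of regularizer S = lam * I below are illustrative assumptions, not specifics from the citing paper.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel, broadcasting over arrays."""
    return np.exp(-gamma * (a - b) ** 2)

rng = np.random.default_rng(2)
x_train = np.linspace(0.0, 2.0 * np.pi, 30)
t = np.sin(x_train) + 0.1 * rng.normal(size=30)   # noisy targets

lam = 0.1                                         # S = lam * I
K = rbf(x_train[:, None], x_train[None, :])       # Gram matrix K
alpha = np.linalg.solve(K + lam * np.eye(30), t)  # (K + S)^{-1} t

def predict(x):
    """Dual-form prediction y(x) = k(x)^T (K + S)^{-1} t."""
    return rbf(x, x_train) @ alpha

print(float(predict(np.pi / 2)))  # close to sin(pi/2) = 1
```

In the KLSTD setting, the same solve is applied with temporal-difference targets instead of supervised targets, which is what makes the equivalence argument in the paper possible.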

Journal ArticleDOI
TL;DR: A combination of yeast genetics, synthetic genetic array analysis, and high-throughput screening reveals that sumoylation of Mcm21p promotes disassembly of the mitotic spindle.
Abstract: We describe the application of a novel screening approach that combines automated yeast genetics, synthetic genetic array (SGA) analysis, and a high-content screening (HCS) system to examine mitotic spindle morphogenesis. We measured numerous spindle and cellular morphological parameters in thousands of single mutants and corresponding sensitized double mutants lacking genes known to be involved in spindle function. We focused on a subset of genes that appear to define a highly conserved mitotic spindle disassembly pathway, which is known to involve Ipl1p, the yeast aurora B kinase, as well as the cell cycle regulatory networks mitotic exit network (MEN) and fourteen early anaphase release (FEAR). We also dissected the function of the kinetochore protein Mcm21p, showing that sumoylation of Mcm21p regulates the enrichment of Ipl1p and other chromosomal passenger proteins to the spindle midzone to mediate spindle disassembly. Although we focused on spindle disassembly in a proof-of-principle study, our integrated HCS-SGA method can be applied to virtually any pathway, making it a powerful means for identifying specific cellular functions.

116 citations


Cites methods from "Pattern Recognition and Machine Lea..."

  • ...To identify mutants whose morphological profiles differed from wild type, we used a mixture of Gaussian models to learn the probability density function of the control based on the four features (Bishop, 2006)....

  • ...Data analysis To identify mutants whose morphological profiles differed from wild type, we used a mixture of Gaussian models to learn the probability density function of the control based on the four features (Bishop, 2006)....
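The density-based screening step described in the excerpts can be sketched as follows. For brevity this fits the one-component special case of a mixture of Gaussians (a full mixture would be fitted with EM, e.g. via scikit-learn's GaussianMixture), and the "morphological features" are synthetic stand-ins for the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy wild-type control population: four morphological features per cell
control = rng.normal(size=(500, 4))

# single-Gaussian density fit (one-component special case of the
# mixture-of-Gaussians approach)
mu = control.mean(axis=0)
cov = np.cov(control, rowvar=False)
cov_inv = np.linalg.inv(cov)
logdet = np.linalg.slogdet(cov)[1]

def log_density(x):
    """Log pdf of the fitted 4-D Gaussian at feature vector x."""
    d = x - mu
    return -0.5 * (4 * np.log(2 * np.pi) + logdet + d @ cov_inv @ d)

# flag profiles falling below the 1st percentile of control log-density
threshold = np.percentile([log_density(c) for c in control], 1)

mutant = np.array([4.0, 4.0, 4.0, 4.0])   # clearly atypical profile
print(log_density(mutant) < threshold)     # flagged as unlike wild type
```

The same recipe scales to a screen: fit the density on controls once, then score every mutant profile against the fixed threshold.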