
Showing papers by "Patrick Haffner published in 2014"


Patent
14 Aug 2014
TL;DR: In this article, the authors present systems, methods and non-transitory computer-readable media for performing speech recognition across different applications or environments without model customization or prior knowledge of the domain of the received speech.
Abstract: Disclosed herein are systems, methods and non-transitory computer-readable media for performing speech recognition across different applications or environments without model customization or prior knowledge of the domain of the received speech. The disclosure includes recognizing received speech with a collection of domain-specific speech recognizers, determining a speech recognition confidence for each of the speech recognition outputs, selecting speech recognition candidates based on a respective speech recognition confidence for each speech recognition output, and combining selected speech recognition candidates to generate text based on the combination.
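The pipeline above (recognize with several domain-specific recognizers, threshold by confidence, combine the survivors) can be illustrated with a toy sketch. This is not the patented method: it stands in for the combination step with a crude confidence-weighted word vote over equal-length 1-best strings, and all function names, data, and the 0.5 threshold are invented.

```python
from collections import defaultdict

def combine_candidates(outputs, threshold=0.5):
    """Crude confidence-weighted word vote over equal-length hypotheses.

    outputs: list of (text, confidence) pairs, one per domain-specific
    recognizer. Candidates below the confidence threshold are discarded;
    the rest vote per word position, each vote weighted by confidence.
    Returns the combined text, or None if no candidate qualifies.
    """
    candidates = [(text.split(), conf) for text, conf in outputs
                  if conf >= threshold]
    if not candidates:
        return None
    result = []
    for i in range(len(candidates[0][0])):
        votes = defaultdict(float)
        for words, conf in candidates:
            votes[words[i]] += conf
        result.append(max(votes, key=votes.get))
    return " ".join(result)

outputs = [
    ("call customer service", 0.9),   # telephony-domain recognizer
    ("call customer surface", 0.6),   # general-domain recognizer
    ("hall customer service", 0.7),   # dictation-domain recognizer
]
print(combine_candidates(outputs))  # prints: call customer service
```

Even with no single recognizer fully correct, the weighted vote recovers the right transcript because errors from different domains tend not to coincide.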

25 citations


Patent
21 Jul 2014
TL;DR: In this article, the authors propose a method for performing translations from a source language to a target language. The method comprises receiving a source phrase, generating a target bag of words based on a global lexical selection that loosely couples source words/phrases with target words/phrases, and reconstructing a target phrase or sentence by considering all permutations of words with a conditional probability greater than a threshold.
Abstract: Disclosed are systems, methods, and computer-readable media for performing translations from a source language to a target language. The method comprises receiving a source phrase, generating a target bag of words based on a global lexical selection of words that loosely couples the source words/phrases and target words/phrases, and reconstructing a target phrase or sentence by considering all permutations of words with a conditional probability greater than a threshold.
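The two stages — global lexical selection into an unordered target bag, then reordering by permutation search — can be sketched as follows. The lexicon, the toy bigram scores used to rank permutations, and all thresholds are invented for illustration; the actual method is defined in the patent.

```python
from itertools import permutations

LEXICON = {             # hypothetical p(target word | source word)
    "chat": {"cat": 0.9, "chat": 0.1},
    "noir": {"black": 0.8, "dark": 0.2},
    "le":   {"the": 0.95},
}
BIGRAM = {              # toy target-language bigram scores
    ("<s>", "the"): 0.9, ("the", "black"): 0.8, ("black", "cat"): 0.9,
    ("the", "cat"): 0.3, ("cat", "black"): 0.1,
}

def translate(source_words, lex_threshold=0.5):
    # Stage 1: global lexical selection -> unordered bag of target words
    # whose lexical probability clears the threshold.
    bag = [t for s in source_words
           for t, p in LEXICON.get(s, {}).items() if p >= lex_threshold]

    # Stage 2: reconstruct word order by scoring every permutation of
    # the bag with the bigram model and keeping the best one.
    def score(seq):
        total, prev = 1.0, "<s>"
        for w in seq:
            total *= BIGRAM.get((prev, w), 0.01)  # small back-off score
            prev = w
        return total

    return " ".join(max(permutations(bag), key=score))

print(translate(["le", "chat", "noir"]))  # prints: the black cat
```

Exhaustive permutation search is only feasible for tiny bags; the appeal of the bag-of-words decomposition is that lexical choice and word order are handled by separate, simpler models.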

17 citations


Posted Content
TL;DR: This work considers combining two techniques, parallelism and Nesterov's acceleration, to design faster algorithms for L1-regularized loss, and simplifies BOOM, a variant of gradient descent, and proposes an efficient accelerated version of Shotgun, improving the convergence rate.
Abstract: The growing amount of high dimensional data in different machine learning applications requires more efficient and scalable optimization algorithms. In this work, we consider combining two techniques, parallelism and Nesterov's acceleration, to design faster algorithms for L1-regularized loss. We first simplify BOOM, a variant of gradient descent, and study it in a unified framework, which allows us to not only propose a refined measurement of sparsity to improve BOOM, but also show that BOOM is provably slower than FISTA. Moving on to parallel coordinate descent methods, we then propose an efficient accelerated version of Shotgun, improving the convergence rate from $O(1/t)$ to $O(1/t^2)$. Our algorithm enjoys a concise form and analysis compared to previous work, and also allows one to study several related works in a unified way.
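For readers unfamiliar with the ingredients named above, a minimal sketch of Nesterov-accelerated proximal gradient (FISTA-style) on a scalar L1-regularized least-squares problem is shown below. This is textbook FISTA, not the paper's parallel algorithm, and the constants are chosen purely for illustration.

```python
def soft_threshold(v, t):
    """Proximal operator of t*|x| (the L1 shrinkage step)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def fista_1d(target=3.0, lam=1.0, step=1.0, iters=50):
    """Minimize 0.5*(x - target)**2 + lam*|x| with FISTA.

    The closed-form minimizer is soft_threshold(target, lam), i.e.
    target - lam = 2.0 for these constants.
    """
    x_prev, y, t = 0.0, 0.0, 1.0
    for _ in range(iters):
        # Gradient step on the smooth part, then L1 shrinkage.
        x = soft_threshold(y - step * (y - target), step * lam)
        # Nesterov momentum: extrapolate past the new iterate.
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x

print(fista_1d())  # prints 2.0
```

The momentum sequence `t` is what lifts plain proximal gradient from an $O(1/t)$ to an $O(1/t^2)$ convergence rate; the paper's contribution is obtaining the same acceleration for parallel coordinate descent (Shotgun).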

9 citations


Patent
04 Nov 2014
TL;DR: In this paper, a method and apparatus for protecting mail servers in networks such as packet networks are disclosed; the method selectively limits connections to the mail server from a plurality of source nodes based on a spam index associated with each source node.
Abstract: A method and apparatus for providing protection for mail servers in networks such as packet networks are disclosed. For example, the present method detects that a mail server is reaching its processing limit. The method then selectively limits connections to the mail server from a plurality of source nodes based on a spam index associated with each of the source nodes.
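The gating idea — admit everything while the server has headroom, then turn away high-spam-index sources as load approaches the limit — can be sketched as below. The admission rule, the 0.8 high-water mark, and all names are invented stand-ins, not the patented mechanism.

```python
def admit_connection(load, capacity, spam_index, high_water=0.8):
    """Return True if a connection from this source should be accepted.

    load / capacity: current and maximum concurrent connections.
    spam_index: per-source score in [0, 1]; higher means more spam
    has been observed from that source node.
    Below the high-water utilization everything is admitted; above it,
    only sources whose spam index fits in the remaining headroom pass.
    """
    utilization = load / capacity
    if utilization < high_water:
        return True
    headroom = max(0.0, 1.0 - utilization)  # shrinks as load grows
    return spam_index < headroom

print(admit_connection(load=50, capacity=100, spam_index=0.9))   # True
print(admit_connection(load=95, capacity=100, spam_index=0.9))   # False
print(admit_connection(load=95, capacity=100, spam_index=0.02))  # True
```

The effect is a graceful degradation: under pressure the server sheds the likeliest spam sources first while still serving low-spam-index senders.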

2 citations


Journal ArticleDOI
TL;DR: The experimental results show that with the proposed LSRK, the authors can achieve significant and consistent topic spotting performance gains over the n-gram rational kernels.
Abstract: In this work, we propose latent semantic rational kernels (LSRK) for topic spotting on conversational speech. Rather than mapping the input weighted finite-state transducers (WFSTs) onto a high dimensional n-gram feature space as in n-gram rational kernels, the proposed LSRK maps the WFSTs onto a latent semantic space. With the proposed LSRK, all available external knowledge and techniques can be flexibly integrated into a unified WFST-based framework to boost topic spotting performance. We show how to generalize the LSRK using tf-idf weighting, latent semantic analysis, WordNet and probabilistic topic models. To validate the proposed LSRK framework, we conduct topic spotting experiments on two datasets, Switchboard and the AT&T HMIHY0300 initial collection. The experimental results show that with the proposed LSRK, we can achieve significant and consistent topic spotting performance gains over the n-gram rational kernels.
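To make the tf-idf weighting ingredient concrete: the actual LSRK operates on WFST lattices, but the same weighting can be illustrated on plain 1-best strings, scoring an utterance against topic descriptions by tf-idf cosine similarity. All topic data and names here are invented examples, not the paper's datasets.

```python
import math
from collections import Counter

TOPICS = {  # hypothetical topic descriptions (stand-ins for real data)
    "billing": "charge bill payment account balance refund",
    "repair":  "line noise repair technician outage service",
}

def tfidf(doc_tokens, docs):
    """tf-idf weights for one token list, given a collection of docs."""
    n = len(docs)
    out = {}
    for term, tf in Counter(doc_tokens).items():
        df = sum(1 for d in docs if term in d)
        out[term] = tf * math.log((1 + n) / (1 + df))
    return out

def spot_topic(utterance):
    """Return the topic whose tf-idf vector is closest to the utterance."""
    docs = [set(text.split()) for text in TOPICS.values()]
    query = tfidf(utterance.split(), docs)

    def cosine(vec_a, vec_b):
        dot = sum(vec_a.get(k, 0.0) * vec_b.get(k, 0.0) for k in vec_a)
        na = math.sqrt(sum(v * v for v in vec_a.values()))
        nb = math.sqrt(sum(v * v for v in vec_b.values()))
        return dot / (na * nb) if na and nb else 0.0

    scores = {name: cosine(query, tfidf(text.split(), docs))
              for name, text in TOPICS.items()}
    return max(scores, key=scores.get)

print(spot_topic("i have a question about a charge on my bill"))
```

The LSRK's advantage over this sketch is that it computes such weighted matches over entire recognition lattices inside the WFST framework, rather than committing to a single 1-best transcript first.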

2 citations