
Showing papers by "Jonathan J. Dennis" published in 2015


Journal ArticleDOI
TL;DR: Results suggest that antibiotics can be combined with phages to stimulate increased phage production and/or activity and thus improve the efficacy of bacterial killing.
Abstract: The Burkholderia cepacia complex (Bcc) is a group of at least 18 species of Gram-negative opportunistic pathogens that can cause chronic lung infection in cystic fibrosis (CF) patients. Bcc organisms possess high levels of innate antimicrobial resistance, and alternative therapeutic strategies are urgently needed. One proposed alternative treatment is phage therapy, the therapeutic application of bacterial viruses (or bacteriophages). Recently, some phages have been observed to form larger plaques in the presence of sublethal concentrations of certain antibiotics; this effect has been termed phage-antibiotic synergy (PAS). Those reports suggest that some antibiotics stimulate increased production of phages under certain conditions. The aim of this study is to examine PAS in phages that infect Burkholderia cenocepacia strains C6433 and K56-2. Bcc phages KS12 and KS14 were tested for PAS, using 6 antibiotics representing 4 different drug classes. Of the antibiotics tested, the most pronounced effects were observed for meropenem, ciprofloxacin, and tetracycline. When grown with subinhibitory concentrations of these three antibiotics, cells developed a chain-like arrangement, an elongated morphology, and a clustered arrangement, respectively. When treated with progressively higher antibiotic concentrations, both the sizes of plaques and phage titers increased, up to a maximum. B. cenocepacia K56-2-infected Galleria mellonella larvae treated with phage KS12 and low-dose meropenem demonstrated increased survival over controls treated with KS12 or antibiotic alone. These results suggest that antibiotics can be combined with phages to stimulate increased phage production and/or activity and thus improve the efficacy of bacterial killing.

152 citations


Journal ArticleDOI
TL;DR: The isolation and characterization of phages able to infect two completely different species of bacteria is an exciting discovery, as phages typically can only infect related bacterial species, and rarely infect bacteria across taxonomic families, let alone across taxonomic orders.
Abstract: A rapid worldwide increase in the number of human infections caused by the extremely antibiotic resistant bacterium Stenotrophomonas maltophilia is prompting alarm. One potential treatment solution to the current antibiotic resistance dilemma is “phage therapy”, the clinical application of bacteriophages to selectively kill bacteria. Towards that end, phages DLP1 and DLP2 (vB_SmaS-DLP_1 and vB_SmaS-DLP_2, respectively) were isolated against S. maltophilia strain D1585. Host range analysis for each phage was conducted using 27 clinical S. maltophilia isolates and 11 Pseudomonas aeruginosa strains. Both phages exhibit unusually broad host ranges capable of infecting bacteria across taxonomic orders. Transmission electron microscopy of the phage DLP1 and DLP2 morphology reveals that they belong to the Siphoviridae family of bacteriophages. Restriction fragment length polymorphism analysis and complete genome sequencing and analysis indicate that phages DLP1 and DLP2 are closely related but different phages, sharing 96.7 % identity over 97.2 % of their genomes. These two phages are also related to P. aeruginosa phages vB_Pae-Kakheti_25 (PA25), PA73, and vB_PaeS_SCH_Ab26 (Ab26) and more distantly related to Burkholderia cepacia complex phage KL1, which together make up a taxonomic sub-family. Phages DLP1 and DLP2 exhibited significant differences in host ranges and growth kinetics. The isolation and characterization of phages able to infect two completely different species of bacteria is an exciting discovery, as phages typically can only infect related bacterial species, and rarely infect bacteria across taxonomic families, let alone across taxonomic orders.

44 citations


Proceedings ArticleDOI
01 Dec 2015
TL;DR: The main components of the system are a front-end processing system consisting of a distributed beam-forming algorithm that performs adaptive weighting and channel elimination, a speech dereverberation approach using a maximum-kurtosis criterion, and a robust voice activity detection module based on the sub-harmonic ratio.
Abstract: In this paper, we introduce the system developed at the Institute for Infocomm Research (I2R) for the ASpIRE (Automatic Speech recognition In Reverberant Environments) challenge. The main components of the system are a front-end processing system consisting of a distributed beam-forming algorithm that performs adaptive weighting and channel elimination, a speech dereverberation approach using a maximum-kurtosis criterion, and a robust voice activity detection (VAD) module based on the sub-harmonic ratio (SHR). The acoustic back-end consists of a multi-conditional Deep Neural Network (DNN) model that uses speaker-adapted features, combined with a decoding strategy that performs semi-supervised DNN model adaptation using weighted labels generated by the first-pass decoding output. On the single-microphone evaluation, our system achieved a word error rate (WER) of 44.8%. With the incorporation of beamforming on the multi-microphone evaluation, our system achieved an improvement in WER of over 6%, giving the best evaluation result of 38.5%.
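
The abstract gives no implementation detail for the front end, but a minimal sketch of the kind of weighted delay-and-sum beamforming with channel elimination it describes might look as follows. The correlation threshold, weighting scheme, and function name are illustrative assumptions, not the authors' actual system.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def weighted_delay_and_sum(channels, corr_threshold=0.3):
    """Toy weighted delay-and-sum beamformer with channel elimination.

    channels: (n_mics, n_samples) array; channel 0 is taken as the reference.
    Each channel is delay-compensated against the reference, weighted by its
    normalised cross-correlation peak, and dropped entirely if that peak
    falls below corr_threshold (the "channel elimination" step).
    """
    ref = channels[0] - channels[0].mean()
    n = channels.shape[1]
    out = np.zeros(n)
    total_weight = 0.0
    for chan in channels:
        sig = chan - chan.mean()
        xc = correlate(sig, ref, mode="full")
        lags = correlation_lags(len(sig), len(ref), mode="full")
        lag = lags[np.argmax(xc)]
        peak = xc.max() / (np.linalg.norm(sig) * np.linalg.norm(ref) + 1e-12)
        if peak < corr_threshold:
            continue                      # eliminate poorly correlated channels
        out += peak * np.roll(sig, -lag)  # align to the reference, weight by peak
        total_weight += peak
    return out / max(total_weight, 1e-12)
```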

15 citations


Proceedings ArticleDOI
19 Apr 2015
TL;DR: The proposed method simultaneously enhances the sparsity of the sound event spectrogram, producing a representation which is robust against noise, and maximises the discriminability of the spike coding input in terms of its temporal information, which is important for sound event classification.
Abstract: This paper proposes a novel biologically inspired method for sound event classification which combines spike coding with a spiking neural network (SNN). Our spike coding extracts keypoints that represent the local maxima components of the sound spectrogram and are encoded based on their local time-frequency information; hence both location and spectral information are extracted. We then design a modified tempotron SNN that, unlike the original tempotron, allows the network to learn the temporal distributions of the spike coding input, in an analogous way to the generalized Hough transform. The proposed method simultaneously enhances the sparsity of the sound event spectrogram, producing a representation which is robust against noise, and maximises the discriminability of the spike coding input in terms of its temporal information, which is important for sound event classification. Experimental results on a large dataset of 50 environmental sound events show the superiority of both the spike coding versus the raw spectrogram and the SNN versus conventional cross-entropy neural networks.
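
As an illustration of the keypoint idea described above, the sketch below extracts local maxima from a log-magnitude spectrogram; the neighbourhood size and amplitude floor are assumed values for illustration, not parameters taken from the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def spectrogram_keypoints(spec, size=5, min_db=-40.0):
    """Extract keypoints as local maxima of a log-magnitude spectrogram.

    spec: 2-D array (frequency bins x time frames) of log magnitudes in dB.
    A bin is a keypoint if it equals the maximum of its size x size
    neighbourhood and exceeds a floor of min_db.
    Returns a list of (freq_bin, frame, value) tuples.
    """
    local_max = maximum_filter(spec, size=size) == spec
    peaks = local_max & (spec > min_db)
    freqs, frames = np.nonzero(peaks)
    return [(f, t, spec[f, t]) for f, t in zip(freqs, frames)]
```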

10 citations


Journal ArticleDOI
TL;DR: A neural network, with features derived from the probabilistic Hough voting step of the Generalized Hough Transform (GHT), is proposed to implement an improved version of the GHT in which the network output represents the conventional target class posteriors.
Abstract: While typical hybrid neural network architectures for automatic speech recognition (ASR) use a context window of frame-based features, this may not be the best approach to capture the wider temporal context, which contains phonetic and linguistic information that is equally important. In this paper, we introduce a system that integrates both the spectral and geometrical shape information from the acoustic spectrum, inspired by research in the field of machine vision. In particular, we focus on the Generalized Hough Transform (GHT), which is a sophisticated technique that can model the geometrical distribution of speech information over the wider temporal context. To integrate the GHT as part of a hybrid-ASR system, we propose to use a neural network, with features derived from the probabilistic Hough voting step of the GHT, to implement an improved version of the GHT where the output of the network represents the conventional target class posteriors. A major advantage of our approach is that each step of the GHT is highly interpretable, particularly compared to deep neural network (DNN) systems which are commonly treated as powerful black-box classifiers that give little insight into how the output is achieved. Experiments are carried out on two speech pattern classification tasks. The first is the TIMIT phoneme classification, which demonstrates the performance of the approach on a standard ASR task. The second is a spoken word recognition challenge, which highlights the flexibility of the approach to capture phonetic information within a longer temporal context.
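
For readers unfamiliar with the GHT, a minimal voting step over the time axis could be sketched as below; the R-table structure and weighting follow generic GHT conventions and only approximate the paper's probabilistic Hough voting.

```python
import numpy as np

def hough_vote(keypoints, r_table, n_frames):
    """Minimal Generalized Hough Transform voting over the time axis.

    keypoints: iterable of (feature_id, frame) pairs from the test utterance.
    r_table:   dict mapping feature_id -> list of (time_offset, weight) entries
               learned from training examples (offset of the keypoint from the
               pattern's reference frame).
    Returns a 1-D accumulator over candidate reference-frame positions.
    """
    acc = np.zeros(n_frames)
    for feat, frame in keypoints:
        for offset, weight in r_table.get(feat, []):
            ref = frame - offset
            if 0 <= ref < n_frames:
                acc[ref] += weight   # probabilistic vote for this reference position
    return acc
```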

3 citations


Book ChapterDOI
20 Sep 2015
TL;DR: Experimental results show the superiority of the proposed method on challenging SEC tasks in which signals are corrupted by severe noise and distortion.
Abstract: Sound Event Classification (SEC) aims to understand real-life events using sound information. A major problem of SEC is that it has to deal with uncontrolled environmental conditions, leading to extremely high levels of noise, reverberation, overlapping, attenuation and distortion. As a result, some parts of the captured signals could be masked out or completely missing. In this paper, we propose a novel missing feature classification method by utilizing a missing feature kernel in the classification optimization machine. The proposed method first transforms audio segments into the Subband Power Distribution (SPD), a novel image representation where the pure signal’s area is separable. A novel masking approach is then proposed to separate the SPD into reliable and non-reliable parts. Next, a missing feature kernel (MFK), in the form of probabilistic distances on the intersection between reliable areas of the SPD images, is developed and integrated into the SVM optimization framework. Experimental results show the superiority of the proposed method on challenging SEC tasks in which signals are corrupted by severe noise and distortion.
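
A rough sketch of how an SPD image and a missing feature kernel over the intersection of reliable regions might be computed is shown below; the bin range, per-subband normalisation, and RBF-style distance are assumptions for illustration rather than the paper's exact formulation.

```python
import numpy as np

def spd_image(spec_db, n_power_bins=40, db_range=(-80.0, 0.0)):
    """Subband Power Distribution (SPD) image: for each frequency subband, a
    normalised histogram of the dB power values it takes across all frames.

    spec_db: (n_subbands, n_frames) log-power spectrogram in dB.
    Returns an (n_subbands, n_power_bins) matrix.
    """
    edges = np.linspace(db_range[0], db_range[1], n_power_bins + 1)
    spd = np.zeros((spec_db.shape[0], n_power_bins))
    for b in range(spec_db.shape[0]):
        hist, _ = np.histogram(spec_db[b], bins=edges)
        spd[b] = hist / max(hist.sum(), 1)
    return spd

def missing_feature_kernel(spd_a, mask_a, spd_b, mask_b, gamma=10.0):
    """Toy missing feature kernel: an RBF-style similarity evaluated only on
    the intersection of the two images' reliable regions (boolean masks)."""
    both = mask_a & mask_b
    if not both.any():
        return 0.0
    return float(np.exp(-gamma * np.sum((spd_a[both] - spd_b[both]) ** 2)))
```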

1 citation


01 Jan 2015
TL;DR: The system developed at the Institute for Infocomm Research for the English ASR task of the IWSLT 2015 evaluation campaign includes harmonic-modelling-based automatic segmentation and conventional MFCC feature extraction; a recurrent neural network language model is trained and used for rescoring to further improve performance.
Abstract: In this paper, we introduce the system developed at the Institute for Infocomm Research (I2R) for the English ASR task within the IWSLT 2015 evaluation campaign. The front-end module of our system includes a harmonic-modelling-based automatic segmentation and the conventional MFCC feature extraction. The back-end module consists of an auxiliary GMM-HMM training stage that provides the speaker adaptive transform (SAT) and the initial forced alignment, followed by discriminatively trained DNN acoustic modelling. A multi-stage decoding strategy is employed with semi-supervised DNN adaptation, which uses weighted labels generated by the previous-pass decoding output to update the trained DNN models. Finally, a recurrent neural network (RNN) language model is trained and used for rescoring to further improve performance. Our system achieved 8.4% WER on the tst2013 development set, which is better than the official results on the same set reported in the previous evaluation. For this year's tst2015 test set, we obtained 7.7% WER.
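
As a sketch of the rescoring step described above, the snippet below re-ranks an n-best list by interpolating the first-pass LM score with an external RNN LM score; the score combination, weights, and function names are illustrative assumptions, not the system's actual configuration.

```python
def rescore_nbest(nbest, rnnlm_score, lm_weight=0.8, wip=0.0):
    """Rescore an n-best list with an external language model.

    nbest:        list of (hypothesis_words, acoustic_score, first_pass_lm_score)
                  tuples, all scores in the log domain.
    rnnlm_score:  callable returning the RNN LM log-probability of a word sequence.
    The total score interpolates the first-pass LM score with the RNN LM score,
    and the best-scoring hypothesis is returned.
    """
    best_hyp, best_score = None, float("-inf")
    for words, am_score, lm_score in nbest:
        new_lm = 0.5 * lm_score + 0.5 * rnnlm_score(words)   # simple interpolation
        total = am_score + lm_weight * new_lm + wip * len(words)
        if total > best_score:
            best_hyp, best_score = words, total
    return best_hyp, best_score
```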

1 citation


Proceedings ArticleDOI
06 Sep 2015
TL;DR: A novel spiking neural network architecture that integrates with the generalised Hough transform (GHT) framework for the task of detecting specific speech patterns such as command words and has the advantage that it does not require a voice activity detection module or an explicit noise model to reject non-target frames.
Abstract: This paper proposes a novel spiking neural network (SNN) architecture that integrates with the generalised Hough transform (GHT) framework for the task of detecting specific speech patterns such as command words. The idea is that the GHT can model the geometrical distribution of speech information over the wider temporal context, while the SNN is used to learn the discriminative prior weighting in the GHT to provide a spike output indicating a detection decision. The SNN therefore enhances the projection of the GHT from the input acoustic information into the sparse Hough accumulator space for detecting specific sound patterns. Compared with conventional neural network architectures for this task, the GHT-SNN system has the advantage that it does not require a voice activity detection module or an explicit noise model to reject non-target frames. Instead, the output of the SNN is a voltage that is trained to exceed a threshold for positive instances of the sound pattern while remaining below this threshold otherwise. Experiments are carried out on the challenging Chalearn gesture recognition task, where spoken commands must be detected against variable background noise while rejecting a range of out-of-vocabulary words.
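
To make the thresholded-voltage idea concrete, the sketch below evaluates a tempotron-style membrane potential from weighted input spikes and reports a detection when it crosses a threshold. The kernel time constants and threshold are generic tempotron-style defaults (the usual normalisation factor is omitted), not values taken from the paper.

```python
import numpy as np

def membrane_potential(spike_times, weights, t, tau=10.0, tau_s=2.5):
    """Tempotron-style membrane potential at time t (ms).

    spike_times: list of arrays; spike_times[i] holds the input spike times of
                 afferent i (only spikes at or before t contribute).
    weights:     learned synaptic efficacies, one per afferent.
    The PSP kernel is the double exponential K(s) = exp(-s/tau) - exp(-s/tau_s),
    with the normalisation factor omitted for simplicity.
    """
    v = 0.0
    for w, times in zip(weights, spike_times):
        s = t - np.asarray(times, dtype=float)
        s = s[s >= 0]                      # only past spikes contribute
        v += w * np.sum(np.exp(-s / tau) - np.exp(-s / tau_s))
    return v

def detect(spike_times, weights, eval_times, threshold=1.0):
    """Positive detection if the potential crosses the threshold at any time."""
    return any(membrane_potential(spike_times, weights, t) >= threshold
               for t in eval_times)
```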

1 citation