Showing papers by "Jonathan J. Dennis published in 2014"


Journal ArticleDOI
TL;DR: Aerosol phage therapy appears to be an effective method for treating highly antibiotic-resistant bacterial respiratory infections, including those caused by BCC bacteria.
Abstract: Phage therapy has been suggested as a potential treatment for highly antibiotic-resistant bacteria, such as the species of the Burkholderia cepacia complex (BCC). To address this hypothesis, experimental B. cenocepacia respiratory infections were established in mice using a nebulizer and a nose-only inhalation device. Following infection, the mice were treated with one of five B. cenocepacia-specific phages delivered as either an aerosol or intraperitoneal injection. The bacterial and phage titers within the lungs were assayed 2 days after treatment, and mice that received the aerosolized phage therapy demonstrated significant decreases in bacterial loads. Differences in phage activity were observed in vivo. Mice that received phage treatment by intraperitoneal injection did not demonstrate significantly reduced bacterial loads, although phage particles were isolated from their lung tissue. Based on these data, aerosol phage therapy appears to be an effective method for treating highly antibiotic-resistant bacterial respiratory infections, including those caused by BCC bacteria.

82 citations


DissertationDOI
01 Jan 2014
TL;DR: The approach taken is to interpret the sound event as a two-dimensional spectrogram image, with the two axes as the time and frequency dimensions, which enables novel methods for SER to be developed based on spectrogram image processing, inspired by techniques from the field of image processing.
Abstract: The objective of this research is to develop feature extraction and classification techniques for the task of sound event recognition (SER) in unstructured environments. Although this field is traditionally overshadowed by the popular field of automatic speech recognition (ASR), an SER system that can achieve human-like sound recognition performance opens up a range of novel application areas. These include acoustic surveillance, bio-acoustical monitoring, environmental context detection, healthcare applications and, more generally, the rich transcription of acoustic environments. The challenge in such environments is the adverse effects, such as noise, distortion and multiple sources, which are more likely to occur with distant microphones than with the close-talking microphones that are more common in ASR. In addition, the characteristics of acoustic events are less well defined than those of speech, and there is no sub-word dictionary available like the phonemes in speech. Therefore, the performance of ASR systems typically degrades dramatically in these challenging unstructured environments, and it is important to develop new methods that can perform well for this challenging task. In this thesis, the approach taken is to interpret the sound event as a two-dimensional spectrogram image, with the two axes as the time and frequency dimensions. This enables novel methods for SER to be developed based on spectrogram image processing, which are inspired by techniques from the field of image processing. The motivation for such an approach is based on finding an automatic approach to "spectrogram reading", where it is possible for humans to visually recognise the different sound event signatures in the spectrogram. The advantages of such an approach are twofold. Firstly, the sound event image representation makes it possible to naturally capture the sound information in a two-dimensional feature. This has advantages over conventional one-dimensional frame-based features, which capture only a slice of spectral information.
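
As a rough illustration of the spectrogram-image idea described above (not the thesis's exact pipeline), the short Python sketch below turns an audio clip into a log-magnitude spectrogram and rescales it like image pixel intensities; the FFT size, hop length and dynamic-range clipping are assumed values.

```python
import numpy as np
from scipy.signal import stft

def spectrogram_image(signal, sample_rate, n_fft=512, hop=256, db_range=60.0):
    """Turn a 1-D audio signal into a 2-D log-magnitude 'image' (freq x time)."""
    _, _, Z = stft(signal, fs=sample_rate, nperseg=n_fft, noverlap=n_fft - hop)
    log_mag = 20.0 * np.log10(np.abs(Z) + 1e-10)
    # Clip to a fixed dynamic range and rescale to [0, 1], like pixel intensities.
    log_mag = np.clip(log_mag, log_mag.max() - db_range, log_mag.max())
    return (log_mag - log_mag.min()) / (log_mag.max() - log_mag.min() + 1e-10)

# Example: a 1-second synthetic 1 kHz tone at 16 kHz sampling rate.
sr = 16000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 1000 * t)
img = spectrogram_image(clip, sr)
print(img.shape)  # a 2-D feature (freq bins x frames) rather than per-frame vectors
```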

62 citations


Journal ArticleDOI
TL;DR: The PglLBc O-oligosaccharyltransferase (O-OTase), encoded by the cloned gene bcal0960, was shown to be capable of transferring a heptasaccharide from the Campylobacter jejuni N-glycosylation system to a Neisseria meningitidis-derived acceptor protein in an Escherichia coli background, indicating that the enzyme has relaxed specificities for both the sugar donor and protein acceptor.
Abstract: Bacteria of the Burkholderia cepacia complex (Bcc) are pathogens of humans, plants, and animals. Burkholderia cenocepacia is one of the most common Bcc species infecting cystic fibrosis (CF) patients and its carriage is associated with poor prognosis. In this study, we characterized a general O-linked protein glycosylation system in B. cenocepacia K56-2. The PglLBc O-oligosaccharyltransferase (O-OTase), encoded by the cloned gene bcal0960, was shown to be capable of transferring a heptasaccharide from the Campylobacter jejuni N-glycosylation system to a Neisseria meningitidis-derived acceptor protein in an Escherichia coli background, indicating that the enzyme has relaxed specificities for both the sugar donor and protein acceptor. In B. cenocepacia K56-2, PglLBc is responsible for the glycosylation of 23 proteins involved in diverse cellular processes. Mass spectrometry analysis revealed that these proteins are modified with a trisaccharide HexNAc-HexNAc-Hex, which is unrelated to the O-antigen biosynthetic process. The glycosylation sites that were identified existed within regions of low complexity, rich in serine, alanine, and proline. Disruption of bcal0960 abolished glycosylation and resulted in reduced swimming motility and attenuated virulence towards both plant and insect model organisms. This study demonstrates the first example of post-translational modification in Bcc with implications for pathogenesis.

55 citations


Proceedings ArticleDOI
14 Sep 2014
TL;DR: The idea and structure behind six recent spectrogram image methods are introduced and their performance on a large database containing 50 different environmental sounds is analysed to give a standardised comparison that is not often available in sound event classification tasks.
Abstract: The time-frequency spectrogram representation of an audio signal can be visually analysed by a trained researcher to recognise any underlying sound events in a process called “spectrogram reading”. However, this has not become a popular approach for automatic classification, as the field is driven by Automatic Speech Recognition (ASR) where frame-based features are popular. As opposed to speech, sound events typically have a more distinctive time-frequency representation, with the energy concentrated in a small number of spectral components. This makes them more suitable for classification based on their visual signature, and enables inspiration to be found in techniques from the related field of image processing. Recently, there have been a range of techniques that extract image processing-inspired features from the spectrogram for sound event classification. In this paper, we introduce the idea and structure behind six recent spectrogram image methods and analyse their performance on a large database containing 50 different environmental sounds to give a standardised comparison that is not often available in sound event classification tasks.
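
The "standardised comparison" amounts to fixing the data folds and the back-end classifier while only the spectrogram image feature method varies. The sketch below illustrates that protocol in Python with a placeholder 1-NN back end and hypothetical feature-extractor callables; it is not the paper's actual methods, classifier or database.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

def compare_feature_methods(clips, labels, extractors, n_splits=5):
    """Standardised comparison: identical folds and identical classifier for
    every feature method. extractors maps a name to a callable clip -> 1-D vector."""
    labels = np.asarray(labels)
    folds = list(StratifiedKFold(n_splits=n_splits, shuffle=True,
                                 random_state=0).split(clips, labels))
    results = {}
    for name, extract in extractors.items():
        X = np.vstack([extract(c) for c in clips])
        accs = []
        for train_idx, test_idx in folds:
            clf = KNeighborsClassifier(n_neighbors=1).fit(X[train_idx], labels[train_idx])
            accs.append(clf.score(X[test_idx], labels[test_idx]))
        results[name] = float(np.mean(accs))  # mean accuracy per feature method
    return results
```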

12 citations


Proceedings ArticleDOI
04 May 2014
TL;DR: A novel robust spectrogram image method where the key is the observed sparsity of the sound spectrogram image in wavelet representations, which is modeled using Generalized Gaussian Distributions.
Abstract: In previous works, we have developed a spectrogram image feature extraction framework for robust sound event recognition. The basic idea is to extract useful information from the 2D time-frequency representation of the sound signal and build feature extraction and classification methods that remain effective under noisy conditions. In this paper, we propose a novel robust spectrogram image method whose key is the observed sparsity of the sound spectrogram image in wavelet representations, which is modeled using Generalized Gaussian Distributions (GGDs). Furthermore, a Generalized Gaussian Distribution Kullback-Leibler (GGD-KL) kernel SVM is developed to embed this probabilistic distance into the quadratic programming machinery and optimize the classification. The experimental results show the superiority of the proposed method over our previous works and the state-of-the-art in the field.
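
To make the modelling step concrete, here is a minimal Python sketch of two standard ingredients this kind of approach builds on: a moment-matching fit of a Generalized Gaussian Distribution to wavelet-subband coefficients, and the closed-form KL divergence between two GGDs plugged into an exponential kernel that an SVM can consume as a precomputed kernel. The kernel form, the symmetrisation and the parameter-search bounds are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(coeffs):
    """Moment-matching fit of a zero-mean GGD p(x) = b/(2a*Gamma(1/b)) exp(-(|x|/a)^b).
    Assumes moderately heavy-tailed data (m1^2/m2 below ~0.74). Returns (alpha, beta)."""
    coeffs = np.asarray(coeffs, dtype=float).ravel()
    m1 = np.mean(np.abs(coeffs))
    m2 = np.mean(coeffs ** 2)
    r = m1 ** 2 / m2  # matches gamma(2/b)^2 / (gamma(1/b) * gamma(3/b))
    f = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b)) - r
    beta = brentq(f, 0.05, 10.0)
    alpha = m1 * gamma(1.0 / beta) / gamma(2.0 / beta)
    return alpha, beta

def ggd_kl(p, q):
    """Closed-form KL divergence between two GGDs p = (alpha1, beta1), q = (alpha2, beta2)."""
    a1, b1 = p
    a2, b2 = q
    return (np.log((b1 * a2 * gamma(1.0 / b2)) / (b2 * a1 * gamma(1.0 / b1)))
            + (a1 / a2) ** b2 * gamma((b2 + 1.0) / b1) / gamma(1.0 / b1)
            - 1.0 / b1)

def ggd_kl_kernel(params_a, params_b, gam=0.5):
    """Exponential kernel on the symmetrised KL distance summed over subbands;
    each element of params_a/params_b is a list of per-subband (alpha, beta) fits."""
    K = np.zeros((len(params_a), len(params_b)))
    for i, pa in enumerate(params_a):
        for j, pb in enumerate(params_b):
            d = sum(ggd_kl(x, y) + ggd_kl(y, x) for x, y in zip(pa, pb))
            K[i, j] = np.exp(-gam * d)
    return K  # usable with sklearn.svm.SVC(kernel='precomputed')
```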

3 citations


01 Jan 2014
TL;DR: This working note summarizes the submission to the LifeCLEF 2014 Bird Task, which combines the outputs from a Python and a Matlab classification system to achieve a Mean Average Precision (MAP) score that is far superior to what is possible with any single classifier.
Abstract: This working note summarizes our submission to the LifeCLEF 2014 Bird Task, which combines the outputs from a Python and a Matlab classification system. The features used for both systems include Mel-Frequency Cepstral Coefficients (MFCC), time-averaged spectrograms and the provided meta-data. The Python subsystem combines a large ensemble of different classifiers with different subsets of the features, while the Matlab subsystem is an ensemble of Random Forest and Linear Discriminant Analysis (LDA) classifiers using local spectral and meta features. By combining this disparate set of features and classifiers, we managed to achieve a Mean Average Precision (MAP) score that is far superior to what is possible with any single classifier.
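
The abstract does not spell out the fusion rule, so the sketch below shows one common late-fusion scheme consistent with the description: weighted averaging of the two subsystems' per-species probability matrices, followed by ranking for a Mean Average Precision evaluation. The fusion weight and the AP helper are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_predictions(probs_python, probs_matlab, weight=0.5):
    """Late fusion of two subsystems' class-probability matrices
    (n_recordings x n_species) by weighted averaging; returns species ranked
    by fused score for each recording."""
    fused = weight * probs_python + (1.0 - weight) * probs_matlab
    return np.argsort(-fused, axis=1)

def average_precision(ranked_species, relevant):
    """AP for one recording: ranked_species is the ranked species list,
    relevant is the set of species actually present."""
    hits, score = 0, 0.0
    for rank, species in enumerate(ranked_species, start=1):
        if species in relevant:
            hits += 1
            score += hits / rank
    return score / max(len(relevant), 1)
# MAP is then the mean of average_precision over all test recordings.
```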

3 citations


Proceedings ArticleDOI
04 May 2014
TL;DR: This paper proposes an alternative to conventional MFCC or filterbank features, based on the Generalised Hough Transform (GHT), a technique commonly used in image processing for object detection.
Abstract: Despite recent advances in the use of Artificial Neural Network (ANN) architectures for automatic speech recognition (ASR), relatively little attention has been given to using feature inputs beyond MFCCs in such systems. In this paper, we propose an alternative to conventional MFCC or filterbank features, using an approach based on the Generalised Hough Transform (GHT). The GHT is a common approach used in the field of image processing for the task of object detection, where the idea is to learn the spatial distribution of a codebook of feature information relative to the location of the target class. During recognition, a simple weighted summation of the codebook activations is commonly used to detect the presence of the target classes. Here we propose to learn the weighting discriminatively in an ANN, where the aim is to optimise the static phone classification error at the output of the network. As such an ANN is common to hybrid ASR architectures, the output activations from the GHT can be considered as a novel feature for ASR. Experimental results on the TIMIT phoneme recognition task demonstrate the state-of-the-art performance of the approach.
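
As a rough sketch of the GHT voting step described above (nearest-codeword matching followed by displacement voting into an accumulator), here is a minimal one-dimensional Python version along the time axis. The codebook, the stored offsets and the uniform vote weight are placeholders; the paper's point is precisely to replace this fixed summation with weights learned discriminatively inside an ANN.

```python
import numpy as np

def ght_vote_features(local_features, positions, codebook, offsets, n_positions):
    """Cast Generalised-Hough-Transform-style votes from local spectrogram
    descriptors into a time-axis accumulator and return it as a feature vector.

    local_features : (N, d) descriptors extracted from the spectrogram
    positions      : (N,) frame index at which each descriptor was found
    codebook       : (K, d) codeword centres (e.g. learned by k-means)
    offsets        : list of K arrays of relative displacements to the target
                     location, learned for each codeword during training
    n_positions    : number of accumulator cells along the time axis
    """
    accumulator = np.zeros(n_positions)
    for feat, pos in zip(local_features, positions):
        k = np.argmin(np.linalg.norm(codebook - feat, axis=1))  # nearest codeword
        for delta in offsets[k]:
            target = int(pos + delta)
            if 0 <= target < n_positions:
                accumulator[target] += 1.0  # simple unweighted vote
    return accumulator  # the ANN then learns how to weight these activations
```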

2 citations


Proceedings ArticleDOI
01 Dec 2014
TL;DR: A feature-based approach to address the challenging task of recognising overlapping sound events from single channel audio by taking the output from the GHT and using it as a feature for classification, and demonstrating that such an approach can improve upon the previous knowledge-based scoring system.
Abstract: In this paper, we propose a feature-based approach to address the challenging task of recognising overlapping sound events from single-channel audio. Our approach is based on our previous work on Local Spectrogram Features (LSFs), where we combined a local spectral representation of the spectrogram with the Generalised Hough Transform (GHT) voting system for recognition. Here we propose to take the output from the GHT and use it as a feature for classification, and demonstrate that such an approach can improve upon the previous knowledge-based scoring system. Experiments are carried out on a challenging set of five overlapping sound events, with the addition of non-stationary background noise and volume change. The results show that the proposed system can achieve detection rates of 99% and 91% in clean and 0 dB noise conditions, respectively, which is a strong improvement over our previous work.
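
One plausible reading of "use the GHT output as a feature" is sketched below: per-class vote accumulators are summarised into a fixed-length vector, and a multi-label classifier decides which overlapping events are present, instead of thresholding raw vote scores with a hand-crafted rule. The feature summary, the classifier choice and the data shapes are assumptions for illustration only.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def vote_feature(accumulators):
    """Summarise per-class GHT accumulators (n_classes x n_cells) into a fixed
    feature vector: here the peak and the total vote mass per class (assumed choice)."""
    acc = np.asarray(accumulators, dtype=float)
    return np.concatenate([acc.max(axis=1), acc.sum(axis=1)])

# Hypothetical training data: one vote-derived feature vector per audio segment,
# with multi-label targets marking which of 5 overlapping events are present.
np.random.seed(0)
X = np.random.rand(200, 10)                       # 5 classes x (peak, mass) = 10 features
Y = (np.random.rand(200, 5) > 0.7).astype(int)    # binary presence indicators
detector = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, Y)
```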

1 citation