SciSpace (formerly Typeset)
Author

Edilson Delgado-Trejos

Bio: Edilson Delgado-Trejos is an academic researcher from the National University of Colombia at Manizales. The author has contributed to research in topics: Attractor & Noise. The author has an h-index of 10 and has co-authored 47 publications receiving 348 citations.


Papers
Journal ArticleDOI
TL;DR: Fractal type features were the most robust family of parameters (in the sense of accuracy vs. computational load) for the automatic detection of murmurs from phonocardiographic signals.
Abstract: This work presents a comparison of different approaches for the detection of murmurs from phonocardiographic signals. Taking into account the variability of the phonocardiographic signals induced by valve disorders, three families of features were analyzed: (a) time-varying & time-frequency features; (b) perceptual features; and (c) fractal features. With the aim of improving the performance of the system, its accuracy was tested using several combinations of the aforementioned families of parameters; in a second stage, the main components extracted from each family were combined. The contribution of each family of features was evaluated by means of a simple k-nearest neighbors classifier, showing that fractal features provide the best accuracy (97.17%), followed by time-varying & time-frequency features (95.28%) and perceptual features (88.7%). However, an accuracy of around 94% can be reached using just the two main features of the fractal family; therefore, considering the difficulties related to the automatic intrabeat segmentation needed for spectral and perceptual features, this scheme becomes an interesting alternative. The conclusion is that fractal-type features were the most robust family of parameters (in the sense of accuracy vs. computational load) for the automatic detection of murmurs. This work was carried out using a database that contains 164 phonocardiographic recordings (81 normal and 83 with murmurs). The database was segmented to extract 360 representative individual beats (180 per class).
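A minimal sketch of the kind of pipeline this abstract describes: one fractal-type feature per beat (here Higuchi's fractal dimension, a common choice; the paper's exact fractal features are not named in the abstract) fed to a simple k-nearest neighbors rule. The synthetic "beats" below are illustrative stand-ins, not phonocardiographic data.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D signal (one common fractal feature)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_len, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lk = 0.0
        for m in range(k):
            idx = np.arange(m, n, k)
            # normalized curve length of the subsampled series
            lk += np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k * k)
        log_len.append(np.log(lk / k))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_len, 1)  # FD = slope of log L(k) vs log(1/k)
    return slope

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
beats = [np.sin(2 * np.pi * 5 * t) + 0.02 * rng.normal(size=512) for _ in range(20)]   # "normal"
beats += [np.sin(2 * np.pi * 5 * t) + 0.40 * rng.normal(size=512) for _ in range(20)]  # "murmur-like"
labels = np.array([0] * 20 + [1] * 20)
feats = np.array([higuchi_fd(b) for b in beats])

query = np.sin(2 * np.pi * 5 * t) + 0.40 * rng.normal(size=512)
nearest = np.argsort(np.abs(feats - higuchi_fd(query)))[:3]   # 3-NN vote on one feature
print("predicted class:", np.bincount(labels[nearest]).argmax())
```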

92 citations

Journal Article
01 Jan 2011-Scopus
TL;DR: Nonlinear dynamic features are a valuable tool for the automatic detection of hypernasality; additionally, both feature selection techniques show stable and consistent results, achieving accuracy levels of up to 93.73%.
Abstract: Automatic detection of hypernasality in the voices of children with Cleft Lip and Palate (CLP) is addressed using two characterization techniques, one based on acoustic, noise, and cepstral analysis and the other based on nonlinear dynamic features. Besides characterization, two automatic feature selection techniques are implemented in order to find optimal subspaces that better discriminate between healthy and hypernasal voices. Results indicate that nonlinear dynamic features are a valuable tool for the automatic detection of hypernasality; additionally, both feature selection techniques show stable and consistent results, achieving accuracy levels of up to 93.73%. Index Terms: hypernasality, Cleft Lip and Palate, acoustic, cepstral, nonlinear dynamics.
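The abstract does not name the two selection techniques, so the following is only a generic illustration: greedy sequential forward selection scored by leave-one-out 1-NN accuracy, over a synthetic feature matrix standing in for acoustic/cepstral and nonlinear-dynamics measurements.

```python
import numpy as np

def loo_1nn_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbor classifier."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)            # never match a sample with itself
    return float(np.mean(y[d.argmin(axis=1)] == y))

def forward_selection(X, y, n_keep=3):
    """Greedily grow a feature subset, adding the best column each round."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_keep:
        score, best = max((loo_1nn_accuracy(X[:, selected + [j]], y), j)
                          for j in remaining)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)                  # healthy vs. hypernasal stand-ins
X = rng.normal(size=(100, 8))
X[:, 2] += 1.5 * y                         # plant two informative features
X[:, 5] -= 1.0 * y
print("selected feature columns:", forward_selection(X, y))
```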

41 citations

Journal ArticleDOI
TL;DR: This paper presents a review of indirect measurement with the aim of understanding the state of development in this area, as well as the current challenges and opportunities; and proposes to gather all the different designations under the term soft metrology, broadening its definition.
Abstract: Soft metrology has been defined as a set of measurement techniques and models that allow the objective quantification of properties usually determined by human perception, such as smell, sound, or taste. The development of a soft metrology system requires the measurement of physical parameters and the construction of a model to correlate them with the variables that need to be quantified. This paper presents a review of indirect measurement with the aim of understanding the state of development in this area, as well as the current challenges and opportunities, and proposes to gather all the different designations under the term soft metrology, broadening its definition. For this purpose, the literature on indirect measurement techniques and systems has been reviewed, encompassing recent as well as a few older key documents, to present a timeline of development and map out application contexts and designations. As machine learning techniques have been extensively used in indirect measurement strategies, this review highlights them, and also makes an effort to describe the state of the art regarding the determination of uncertainty. This study does not delve into developments and applications for human and social sciences, although the proposed definition considers the use that this term has had in these areas.
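As a toy illustration of the indirect-measurement idea (purely synthetic, not taken from the review): physical sensor features are regressed onto a perceptual score, and the residual spread serves as a crude proxy for the model's contribution to measurement uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sensor features (e.g., spectral band energies) and a
# perceptual target (e.g., a panel-rated sound-quality score).
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.2, size=200)

Xb = np.hstack([X, np.ones((200, 1))])      # add an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)  # ordinary least squares fit

resid = y - Xb @ w
print("model coefficients:", np.round(w, 3))
print("residual std (uncertainty proxy):", round(resid.std(ddof=Xb.shape[1]), 3))
```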

32 citations

Journal ArticleDOI
10 Apr 2019-Entropy
TL;DR: The results seem to indicate that shorter lengths than those suggested by N>>m! are sufficient for a stable PE calculation, and even very short time series can be robustly classified based on PE measurements before the stability point is reached.
Abstract: Permutation Entropy (PE) is a time series complexity measure commonly used in a variety of contexts, with medicine being the prime example. In its general form, it requires three input parameters for its calculation: time series length N, embedded dimension m, and embedded delay τ. Inappropriate choices of these parameters may potentially lead to incorrect interpretations. However, there are no specific guidelines for an optimal selection of N, m, or τ, only general recommendations such as N >> m!, τ = 1, or m = 3, …, 7. This paper deals specifically with the study of the practical implications of N >> m!, since long time series are often not available, or non-stationary, and other preliminary results suggest that low N values do not necessarily invalidate PE usefulness. Our study analyses the PE variation as a function of the series length N and embedded dimension m in the context of a diverse experimental set, both synthetic (random, spikes, or logistic model time series) and real-world (climatology, seismic, financial, or biomedical time series), and the classification performance achieved with varying N and m. The results seem to indicate that shorter lengths than those suggested by N >> m! are sufficient for a stable PE calculation, and even very short time series can be robustly classified based on PE measurements before the stability point is reached. This may be due to the fact that there are forbidden patterns in chaotic time series, not all patterns are equally informative, and differences among classes are already apparent at very short lengths.
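A minimal permutation entropy implementation to make the N, m, τ parameters concrete (a standard formulation, not the paper's own code). The demo hints at the forbidden-pattern effect the authors mention: white noise approaches the normalized maximum while the chaotic logistic map stays well below it, even at modest N.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1, normalize=True):
    """Permutation entropy with embedding dimension m and delay tau."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau               # number of embedded vectors
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    h = -(p * np.log2(p)).sum()
    return h / np.log2(factorial(m)) if normalize else h

rng = np.random.default_rng(1)
noise = rng.normal(size=500)
logistic = np.empty(500)
logistic[0] = 0.4
for i in range(1, 500):                       # chaotic logistic map, r = 4
    logistic[i] = 4.0 * logistic[i - 1] * (1.0 - logistic[i - 1])
print("PE(noise)    =", round(permutation_entropy(noise, m=5), 3))
print("PE(logistic) =", round(permutation_entropy(logistic, m=5), 3))
```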

28 citations

Journal ArticleDOI
TL;DR: This paper describes, in detail, 27 techniques that mainly focus on the smoothing or elimination of speckle noise in medical ultrasound images, and describes recent techniques in the field of machine learning focused on deep learning, which are not yet well known but greatly relevant.
Abstract: In recent years, many studies have examined filters for eliminating or reducing speckle noise, which is inherent to ultrasound images, in order to improve the metrological evaluation of their biomedical applications. In the case of medical ultrasound images, this noise can produce uncertainty in the diagnosis because details, such as limits and edges, should be preserved. Most algorithms can eliminate speckle noise, but they do not consider the conservation of these details. This paper describes, in detail, 27 techniques that mainly focus on the smoothing or elimination of speckle noise in medical ultrasound images. The aim of this study is to highlight the importance of improving such smoothing and elimination, which are directly related to several processes (such as the detection of regions of interest) described in other articles examined in this study. Furthermore, the description of this collection of techniques facilitates the implementation of evaluations and research with a more specific scope. This study initially covers several classical methods, such as spatial filtering, diffusion filtering, and wavelet filtering. Subsequently, it describes recent techniques in the field of machine learning focused on deep learning, which are not yet well known but highly relevant, along with some modern and hybrid models in the field of speckle-noise filtering. Finally, five Full-Reference (FR) distortion metrics, common in filter evaluation processes, are detailed along with a compensation methodology between FR and Non-Reference (NR) metrics, which can generate greater certainty in the classification of the filters by considering the information on their behavior in terms of perceptual quality provided by NR metrics.
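A sketch of one classical spatial filter in the family the article reviews: the Lee filter, a local linear minimum-mean-square-error estimator that smooths homogeneous regions while leaving high-variance edges largely untouched. The global noise-variance estimate below is a simplification for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7):
    """Lee speckle filter: blend each pixel between the local mean and itself."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    var = np.maximum(sq_mean - mean * mean, 0.0)  # local variance
    noise_var = var.mean()                        # crude global noise estimate
    gain = var / (var + noise_var)                # near 1 on edges, near 0 on flats
    return mean + gain * (img - mean)

# Synthetic test: a step edge corrupted by multiplicative speckle.
rng = np.random.default_rng(0)
clean = np.ones((64, 64))
clean[:, 32:] = 2.0
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
print("noisy std    :", speckled[:, :32].std().round(3))
print("filtered std :", lee_filter(speckled)[:, :32].std().round(3))
```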

28 citations


Cited by
01 Mar 1995
TL;DR: This thesis applies neural network feature selection techniques to multivariate time series data to improve prediction of a target time series and results indicate that the Stochastics and RSI indicators result in better prediction results than the moving averages.
Abstract: This thesis applies neural network feature selection techniques to multivariate time series data to improve prediction of a target time series. Two approaches to feature selection are used. First, a subset enumeration method is used to determine which financial indicators are most useful for aiding in prediction of the S&P 500 futures daily price. The candidate indicators evaluated include RSI, Stochastics, and several moving averages. Results indicate that the Stochastics and RSI indicators result in better prediction results than the moving averages. The second approach to feature selection is the calculation of individual saliency metrics. A new decision-boundary-based individual saliency metric and a classifier-independent saliency metric are developed and tested. Ruck's saliency metric, the decision-boundary-based saliency metric, and the classifier-independent saliency metric are compared for a data set consisting of the RSI and Stochastics indicators as well as delayed closing price values. The decision-boundary-based metric and the Ruck metric give similar results, but the classifier-independent metric agrees with neither of the other metrics. The nine most salient features, determined by the decision-boundary-based metric, are used to train a neural network, and the results are presented and compared to other published results.
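A hedged sketch of a derivative-based saliency metric in the spirit of Ruck's (the average magnitude of the network output's sensitivity to each input, taken over the data). The tiny tanh network and its random weights are placeholders, not the thesis's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 9, 16
W1 = rng.normal(size=(n_features, n_hidden))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_hidden, 1))

def input_gradient(x):
    """d(output)/d(x) for a one-hidden-layer tanh network."""
    h = np.tanh(x @ W1 + b1)
    # chain rule: dOut/dx_i = sum_j W1[i, j] * (1 - h_j^2) * W2[j]
    return (W1 * (1.0 - h * h)) @ W2          # shape (n_features, 1)

X = rng.normal(size=(200, n_features))        # stand-in training inputs
saliency = np.mean([np.abs(input_gradient(x)).ravel() for x in X], axis=0)
print("features ranked by saliency:", np.argsort(saliency)[::-1])
```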

1,545 citations

Book ChapterDOI
E.R. Davies
01 Jan 1990
TL;DR: This chapter introduces the subject of statistical pattern recognition (SPR) by considering how features are defined and emphasizes that the nearest neighbor algorithm achieves error rates comparable with those of an ideal Bayes’ classifier.
Abstract: This chapter introduces the subject of statistical pattern recognition (SPR). It starts by considering how features are defined and emphasizes that the nearest neighbor algorithm achieves error rates comparable with those of an ideal Bayes’ classifier. The concepts of an optimal number of features, representativeness of the training data, and the need to avoid overfitting to the training data are stressed. The chapter shows that methods such as the support vector machine and artificial neural networks are subject to these same training limitations, although each has its advantages. For neural networks, the multilayer perceptron architecture and back-propagation algorithm are described. The chapter distinguishes between supervised and unsupervised learning, demonstrating the advantages of the latter and showing how methods such as clustering and principal components analysis fit into the SPR framework. The chapter also defines the receiver operating characteristic, which allows an optimum balance between false positives and false negatives to be achieved.
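The receiver operating characteristic the chapter defines can be computed directly by sweeping a decision threshold over classifier scores; a minimal sketch with synthetic scores:

```python
import numpy as np

def roc_curve(scores, labels):
    """False/true positive rates as the decision threshold sweeps downward."""
    order = np.argsort(-scores)               # most confident positives first
    pos = labels[order].astype(bool)
    tpr = np.cumsum(pos) / pos.sum()
    fpr = np.cumsum(~pos) / (~pos).sum()
    return fpr, tpr

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(1.0, 1.0, 100),   # positives score higher
                         rng.normal(0.0, 1.0, 100)])
labels = np.concatenate([np.ones(100), np.zeros(100)])
fpr, tpr = roc_curve(scores, labels)
print("area under the curve ~", round(np.trapz(tpr, fpr), 3))
```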

1,189 citations

Journal ArticleDOI
B.B. Bauer
01 Apr 1963

897 citations

01 Jan 2014

872 citations