scispace - formally typeset
Author

V. B. Surya Prasath

Bio: V. B. Surya Prasath is an academic researcher from Cincinnati Children's Hospital Medical Center. The author has contributed to research in topics including image restoration and image segmentation, has an h-index of 19, and has co-authored 152 publications receiving 1,552 citations. Previous affiliations of V. B. Surya Prasath include the University of Cincinnati and the University of Missouri.


Papers
Journal ArticleDOI
TL;DR: New deterministic control approaches for crossover and mutation rates are defined, namely Dynamic Decreasing of High Mutation ratio / Dynamic Increasing of Low Crossover ratio (DHM/ILC) and Dynamic Increasing of Low Mutation ratio / Dynamic Decreasing of High Crossover ratio (ILM/DHC).
Abstract: The genetic algorithm (GA) is an artificial intelligence search method, under the umbrella of evolutionary computing, that mimics the process of evolution and natural selection. It is an efficient tool for solving optimization problems. The interplay among GA parameters, including the mutation rate, the crossover rate, and the population size, is vital for a successful GA search. However, each GA operator has a different influence, and that influence depends on the operator's probability; it is difficult to predefine suitable ratios for each parameter, particularly for the mutation and crossover operators. This paper first reviews various methods for choosing mutation and crossover ratios in GAs. Next, we define new deterministic control approaches for crossover and mutation rates, namely Dynamic Decreasing of High Mutation ratio / Dynamic Increasing of Low Crossover ratio (DHM/ILC) and Dynamic Increasing of Low Mutation ratio / Dynamic Decreasing of High Crossover ratio (ILM/DHC). The dynamic nature of the proposed methods changes the crossover and mutation ratios linearly as the search progresses: DHM/ILC starts with a 100% mutation ratio and a 0% crossover ratio, the mutation ratio then decreases while the crossover ratio increases, and by the end of the search the ratios are 0% for mutation and 100% for crossover. ILM/DHC works the same way in reverse. The proposed approaches were compared with two predefined parameter-tuning methods: fifty-fifty crossover/mutation ratios, and the most common approach of static ratios, such as a 0.03 mutation rate with a 0.9 crossover rate. The experiments were conducted on ten Traveling Salesman Problems (TSP).
The experiments showed the effectiveness of the proposed DHM/ILC with small population sizes, while the proposed ILM/DHC was more effective with large population sizes. In fact, both proposed dynamic methods outperformed the compared predefined methods in most cases tested.
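The linear schedules described in the abstract are simple to state in code. Below is a minimal sketch of the two proposed rate schedules (my own illustration of the stated 100%-to-0% linear behavior, not the paper's implementation), where `generation` runs from 0 to `max_generations`:

```python
def dhm_ilc_rates(generation, max_generations):
    """DHM/ILC schedule: the mutation rate decreases linearly from 1.0 to 0.0
    while the crossover rate increases linearly from 0.0 to 1.0."""
    progress = generation / max_generations
    mutation_rate = 1.0 - progress
    crossover_rate = progress
    return mutation_rate, crossover_rate

def ilm_dhc_rates(generation, max_generations):
    """ILM/DHC schedule: the mirror image -- mutation increases from 0.0
    to 1.0 while crossover decreases from 1.0 to 0.0."""
    progress = generation / max_generations
    return progress, 1.0 - progress
```

Each generation, a GA driver would query the schedule and apply the mutation and crossover operators with the returned probabilities.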

225 citations

Journal ArticleDOI
16 Dec 2019
TL;DR: In this paper, the performance of the KNN classifier is evaluated using a large number of distance measures, tested on a number of real-world data sets, with and without adding different levels of noise.
Abstract: The K-nearest neighbor (KNN) classifier is one of the simplest and most common classifiers, yet its performance competes with the most complex classifiers in the literature. The core of this classifier depends mainly on measuring the distance or similarity between the tested examples and the training examples. This raises a major question: among the large number of distance and similarity measures available, which should be used for the KNN classifier? This review attempts to answer that question by evaluating the performance (measured by accuracy, precision, and recall) of KNN with a large number of distance measures, tested on a number of real-world data sets, with and without different levels of added noise. The experimental results show that the performance of the KNN classifier depends significantly on the distance used, with large gaps between the performances of different distances. We found that a recently proposed nonconvex distance performed best on most data sets compared with the other tested distances. In addition, the performance of KNN with this top-performing distance degraded by only about 20% even as the noise level reached 90%, and the same holds for most of the distances tested. This means that the KNN classifier using any of the top 10 distances tolerates noise to a certain degree. Moreover, the results show that some distances are less affected by the added noise than others.
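To make the object of study concrete, here is a minimal distance-pluggable KNN classifier (a sketch of the standard algorithm, not the paper's experimental code); `euclidean`, `manhattan`, and `chebyshev` are three of the many distance families such evaluations compare:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k, distance):
    """Classify x by majority vote among its k nearest training examples,
    where 'distance' is any pairwise distance function."""
    dists = np.array([distance(x, xi) for xi in X_train])
    nearest = np.argsort(dists)[:k]          # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]         # most frequent neighbor label

# Three classical Minkowski-family distances.
euclidean = lambda a, b: np.sqrt(np.sum((a - b) ** 2))
manhattan = lambda a, b: np.sum(np.abs(a - b))
chebyshev = lambda a, b: np.max(np.abs(a - b))
```

Swapping the `distance` argument is exactly the experimental knob the review varies while holding the rest of the classifier fixed.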

181 citations

Journal ArticleDOI
TL;DR: Evaluates the performance of KNN with a large number of distance measures, tested on a number of real-world data sets with and without different levels of added noise, and finds that a recently proposed nonconvex distance performed best on most data sets compared with the other tested distances.
Abstract: The K-nearest neighbor (KNN) classifier is one of the simplest and most common classifiers, yet its performance competes with the most complex classifiers in the literature. The core of this classifier depends mainly on measuring the distance or similarity between the tested examples and the training examples. This raises a major question: among the large number of distance and similarity measures available, which should be used for the KNN classifier? This review attempts to answer that question by evaluating the performance (measured by accuracy, precision, and recall) of KNN with a large number of distance measures, tested on a number of real-world datasets, with and without different levels of added noise. The experimental results show that the performance of the KNN classifier depends significantly on the distance used, with large gaps between the performances of different distances. We found that a recently proposed non-convex distance performed best on most datasets compared with the other tested distances. In addition, the performance of KNN with this top-performing distance degraded by only about 20% even as the noise level reached 90%, and the same holds for most of the distances tested. This means that the KNN classifier using any of the top 10 distances tolerates noise to a certain degree. Moreover, the results show that some distances are less affected by the added noise than others.

170 citations

Journal ArticleDOI
TL;DR: Describes the available skull stripping methods and provides an exploratory review of the recent literature on them.
Abstract: High-resolution magnetic resonance (MR) brain images contain non-brain tissues such as skin, fat, muscle, neck, and eyeballs, in contrast to functional images, namely positron emission tomography (PET), single photon emission computed tomography (SPECT), and functional magnetic resonance imaging (fMRI), which usually contain relatively little non-brain tissue. The presence of these non-brain tissues is a major obstacle for automatic brain image segmentation and analysis techniques. Therefore, quantitative morphometric studies of MR brain images often require preliminary processing to isolate the brain from extra-cranial (non-brain) tissues, commonly referred to as skull stripping. This paper describes the available skull stripping methods and gives an exploratory review of the recent literature on them.
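To give a flavor of the intensity-based end of this method family, the sketch below implements Otsu's global threshold, a common first rough brain/non-brain split in simple skull stripping pipelines (an illustrative building block of my own, not any specific method from the review; real pipelines follow it with morphology and connected-component analysis):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the intensity threshold that maximizes the
    between-class variance of the histogram (tissue vs. background)."""
    hist, edges = np.histogram(np.ravel(img), bins=256)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    cum = np.cumsum(hist)                  # cumulative counts (class 0)
    cum_mean = np.cumsum(hist * centers)   # cumulative intensity sums
    best_t, best_var = centers[0], -1.0
    for i in range(1, 256):
        w0 = cum[i - 1] / total            # class-0 weight
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = cum_mean[i - 1] / cum[i - 1]                            # class-0 mean
        m1 = (cum_mean[-1] - cum_mean[i - 1]) / (total - cum[i - 1]) # class-1 mean
        var = w0 * w1 * (m0 - m1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i - 1]
    return best_t
```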

161 citations

Journal ArticleDOI
TL;DR: Reviews important Poisson noise removal methods with the goal of providing an apt choice of denoising method for CT and X-ray images.
Abstract: In medical imaging systems, denoising is one of the important image processing tasks. Automatic noise removal improves the quality of diagnosis and requires careful treatment of the obtained imagery. Computed tomography (CT) and X-ray imaging systems use X radiation to capture images, and the results are usually corrupted by noise following a Poisson distribution. Due to the importance of Poisson noise removal in medical imaging, many state-of-the-art methods have been studied in the image processing literature. These include methods based on total variation (TV) regularization, wavelets, principal component analysis, machine learning, etc. In this work, we review the following important Poisson removal methods: the method based on the modified TV model, the adaptive TV method, the adaptive non-local total variation method, the method based on the higher-order natural image prior model, the Poisson-reducing bilateral filter, the PURE-LET method, and the variance-stabilizing-transform-based methods. We focus on a methodology overview along with accuracy, execution time, and advantage/disadvantage assessments. The goal of this paper is to provide an apt choice of denoising method for CT and X-ray images. The integration of several high-quality denoising methods into image processing software for medical imaging systems will always be an excellent option and will help further image analysis for computer-aided diagnosis.
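As an example of the variance-stabilizing-transform family mentioned in the abstract, the classical Anscombe transform maps Poisson counts to data with approximately unit-variance Gaussian noise, after which any Gaussian denoiser can be applied and the result mapped back. A minimal sketch (the simple algebraic inverse shown here is biased at very low counts; practical methods use an exact unbiased inverse):

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: Poisson-distributed counts -> approximately
    Gaussian data with (roughly) unit variance."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Direct algebraic inverse of the Anscombe transform."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0
```

A VST-based denoiser is then `inverse_anscombe(gaussian_denoise(anscombe(noisy_image)))` for any Gaussian denoiser of choice.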

80 citations


Cited by
Journal ArticleDOI
TL;DR: Discusses recent advances in genetic algorithms, presenting the well-known algorithms and their implementations with their pros and cons, with the aim of facilitating new researchers.
Abstract: In this paper, recent advances in genetic algorithms are analyzed. The genetic algorithms of greatest interest in the research community are selected for analysis. This review gives new researchers a wider view of genetic algorithms. The well-known algorithms and their implementations are presented with their pros and cons. The genetic operators and their uses are discussed with the aim of facilitating new researchers. The different research domains involved in genetic algorithms are covered, and future research directions in the areas of genetic operators, fitness functions, and hybrid algorithms are discussed. This structured review will be helpful for research and graduate teaching.

1,271 citations

Journal ArticleDOI
TL;DR: Treats the approximation problem for a given function, including the existence and uniqueness of best approximations and the properties of approximation operators.
Abstract (table of contents): Preface
1. The approximation problem and existence of best approximations
2. The uniqueness of best approximations
3. Approximation operators and some approximating functions
4. Polynomial interpolation
5. Divided differences
6. The uniform convergence of polynomial approximations
7. The theory of minimax approximation
8. The exchange algorithm
9. The convergence of the exchange algorithm
10. Rational approximation by the exchange algorithm
11. Least squares approximation
12. Properties of orthogonal polynomials
13. Approximation of periodic functions
14. The theory of best L1 approximation
15. An example of L1 approximation and the discrete case
16. The order of convergence of polynomial approximations
17. The uniform boundedness theorem
18. Interpolation by piecewise polynomials
19. B-splines
20. Convergence properties of spline approximations
21. Knot positions and the calculation of spline approximations
22. The Peano kernel theorem
23. Natural and perfect splines
24. Optimal interpolation
Appendices. Index.
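To give a concrete taste of the material on polynomial interpolation and divided differences (chapters 4 and 5), here is a minimal sketch of Newton's divided-difference interpolation (my own illustration, not code from the book):

```python
import numpy as np

def divided_differences(x, y):
    """Return Newton coefficients c such that
    p(t) = c[0] + c[1](t - x0) + c[2](t - x0)(t - x1) + ...
    interpolates the data points (x, y)."""
    x = np.asarray(x, dtype=float)
    coef = np.array(y, dtype=float)
    for j in range(1, len(x)):
        # j-th order divided differences, computed in place.
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:-j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form polynomial at t by Horner-style nesting."""
    x_nodes = np.asarray(x_nodes, dtype=float)
    result = coef[-1]
    for c, xn in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xn) + c
    return result
```

For instance, interpolating y = t² at nodes 0, 1, 2 reproduces t² exactly, since a quadratic is determined by three points.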

841 citations

01 Jan 2016
Magnetic Resonance Imaging: Physical Principles and Sequence Design (textbook entry; no abstract available).

695 citations

Journal ArticleDOI
TL;DR: This paper first introduces the basic concepts of image segmentation, then explains different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue.
Abstract: Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain’s anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation.
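As a toy instance of the clustering family of segmentation methods such reviews cover, the sketch below runs 1-D k-means on voxel intensities (my own illustration, e.g. splitting a T1 image into roughly CSF, grey matter, and white matter classes); real pipelines add the preprocessing steps named above plus spatial regularization and atlas priors:

```python
import numpy as np

def segment_intensities(image, k=3, iters=25):
    """Toy intensity-only k-means segmentation into k tissue classes.
    Returns per-voxel labels and the final class centers."""
    x = np.asarray(image, dtype=float).ravel()
    # Spread the initial centers across the intensity range via percentiles.
    centers = np.percentile(x, np.linspace(10, 90, k))
    for _ in range(iters):
        # Assign each voxel to the nearest center, then recompute centers.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels.reshape(np.shape(image)), centers
```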

513 citations

Journal ArticleDOI
TL;DR: Extends an open-source framework for classification of AD using CNNs and T1-weighted MRI, and finds that more than half of the surveyed papers may have suffered from data leakage and thus reported biased performance.

346 citations