
Showing papers on "Thresholding published in 2010"


Journal ArticleDOI
TL;DR: With this modification, empirical evidence suggests that the algorithm is faster than many other state-of-the-art approaches while showing similar performance, and the modified algorithm retains theoretical performance guarantees similar to the original algorithm.
Abstract: Sparse signal models are used in many signal processing applications. The task of estimating the sparsest coefficient vector in these models is a combinatorial problem, and efficient, often suboptimal, strategies have to be used. Fortunately, under certain conditions on the model, several algorithms can be shown to efficiently calculate near-optimal solutions. In this paper, we study one of these methods, the so-called Iterative Hard Thresholding algorithm. While this method has strong theoretical performance guarantees whenever certain theoretical properties hold, empirical studies show that the algorithm's performance degrades significantly whenever the conditions fail. What is more, in this regime the algorithm also often fails to converge. As we are here interested in applying the method to real-world problems, in which it is generally not known whether the theoretical conditions are satisfied, we suggest a simple modification that guarantees the convergence of the method even in this regime. With this modification, empirical evidence suggests that the algorithm is faster than many other state-of-the-art approaches while showing similar performance. What is more, the modified algorithm retains theoretical performance guarantees similar to those of the original algorithm.

504 citations
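The iterative hard thresholding recursion the paper builds on is simple enough to sketch. The following is a minimal illustration of the basic (unmodified) algorithm, not the paper's convergence-guaranteed variant; the fixed step size `mu` shown is one common choice, and the function name is illustrative:

```python
import numpy as np

def iht(A, y, k, iters=100):
    """Basic Iterative Hard Thresholding:
    x <- H_k(x + mu * A^T (y - A x)), where H_k keeps the k largest entries."""
    n = A.shape[1]
    x = np.zeros(n)
    mu = 1.0 / np.linalg.norm(A, 2) ** 2  # fixed step size (one common choice)
    for _ in range(iters):
        x = x + mu * (A.T @ (y - A @ x))
        # hard thresholding: zero all but the k largest-magnitude entries
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    return x
```

The paper's modification replaces this fixed step size with an adaptively chosen one, which is what guarantees convergence even when the usual theoretical conditions fail.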


Journal ArticleDOI
TL;DR: Under a very mild condition on the sparsity and on the dictionary characteristics, it is shown that the probability of recovery failure decays exponentially in the number of channels, demonstrating that most of the time, multichannel sparse recovery is indeed superior to single channel methods.
Abstract: This paper considers recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relaxation based on a mixed matrix norm. Typically, worst-case analysis is carried out in order to analyze conditions under which the algorithms are able to recover any jointly sparse set of vectors. However, such an approach is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Previous work considered an average-case analysis of thresholding and SOMP by imposing a probability model on the measured signals. Here, the main focus is on analysis of convex relaxation techniques. In particular, the mixed l2,1-norm approach to multichannel recovery is investigated. Under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, it is shown that the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single-channel methods. The probability bounds are valid and meaningful even for a small number of signals. Using the tools developed to analyze the convex relaxation technique, previous bounds for thresholding and SOMP recovery are also tightened.

365 citations


Journal ArticleDOI
TL;DR: Several image segmentation approaches have been proposed and used in the clinical setting including thresholding, edge detection, region growing, clustering, stochastic models, deformable models, classifiers and several other approaches.
Abstract: Historically, anatomical CT and MR images were used to delineate the gross tumour volumes (GTVs) for radiotherapy treatment planning. The capabilities offered by modern radiation therapy units and the widespread availability of combined PET/CT scanners stimulated the development of biological PET imaging-guided radiation therapy treatment planning with the aim to produce highly conformal radiation dose distribution to the tumour. One of the most difficult issues facing PET-based treatment planning is the accurate delineation of target regions from typical blurred and noisy functional images. The major problems encountered are image segmentation and imperfect system response function. Image segmentation is defined as the process of classifying the voxels of an image into a set of distinct classes. The difficulty in PET image segmentation is compounded by the low spatial resolution and high noise characteristics of PET images. Despite the difficulties and known limitations, several image segmentation approaches have been proposed and used in the clinical setting including thresholding, edge detection, region growing, clustering, stochastic models, deformable models, classifiers and several other approaches. A detailed description of the various approaches proposed in the literature is reviewed. Moreover, we also briefly discuss some important considerations and limitations of the widely used techniques to guide practitioners in the field of radiation oncology. The strategies followed for validation and comparative assessment of various PET segmentation approaches are described. Future opportunities and the current challenges facing the adoption of PET-guided delineation of target volumes and its role in basic and clinical research are also addressed.

357 citations


Journal ArticleDOI
TL;DR: The SumThreshold method is a new method formed from a combination of existing techniques, including a new way of thresholding, that is fast, robust, does not need a data model before it can be executed and works in almost all configurations with its default parameters.
Abstract: We describe and compare several post-correlation radio frequency interference (RFI) classification methods. As data sizes of observations grow with new and improved telescopes, the need for completely automated, robust methods for RFI mitigation is pressing. We investigated several classification methods and find that, for the data sets we used, the most accurate among them is the SumThreshold method. This is a new method formed from a combination of existing techniques, including a new way of thresholding. This iterative method estimates the astronomical signal by carrying out a surface fit in the time-frequency plane. With a theoretical accuracy of 95 per cent recognition and an approximately 0.1 per cent false probability rate in simple simulated cases, the method is in practice as good as the human eye in finding RFI. In addition, it is fast, robust, does not need a data model before it can be executed and works in almost all configurations with its default parameters. The method has been compared using simulated data with several other mitigation techniques, including one based upon the singular value decomposition of the time-frequency matrix, and has shown better results than the rest.

307 citations
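The core idea of SumThreshold — flagging runs of samples whose combined level exceeds a window-size-dependent threshold — can be sketched in a few lines. This is a simplified illustration under assumed parameter names (`chi1` for the base threshold, `rho` for the per-octave threshold decay), not the authors' full implementation with its iterative surface fitting:

```python
import numpy as np

def sumthreshold(x, chi1, rho=1.5, max_window=16):
    """Flag samples where any window of length M has mean above chi_M,
    with chi_M = chi1 / rho**log2(M) decreasing as the window grows."""
    flags = np.zeros(len(x), dtype=bool)
    M = 1
    while M <= max_window:
        chi = chi1 / rho ** np.log2(M)
        for i in range(len(x) - M + 1):
            window = x[i:i + M]
            # already-flagged samples are replaced by the threshold value
            vals = np.where(flags[i:i + M], chi, window)
            if vals.mean() > chi:
                flags[i:i + M] = True
        M *= 2
    return flags
```

Replacing flagged samples by the threshold inside later windows is what lets weak, extended interference accumulate over larger windows without a single strong sample dominating.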


Journal ArticleDOI
TL;DR: This work provides insight on the advantages and drawbacks of l1 relaxation techniques such as BPDN and the Dantzig selector, as opposed to greedy approaches such as OMP and thresholding and provides theoretical performance guarantees for three sparse estimation algorithms.
Abstract: We consider the problem of estimating a deterministic sparse vector x0 from underdetermined measurements A x0 + w, where w represents white Gaussian noise and A is a given deterministic dictionary. We provide theoretical performance guarantees for three sparse estimation algorithms: basis pursuit denoising (BPDN), orthogonal matching pursuit (OMP), and thresholding. The performance of these techniques is quantified as the l2 distance between the estimate and the true value of x0. We demonstrate that, with high probability, the analyzed algorithms come close to the behavior of the oracle estimator, which knows the locations of the nonzero elements in x0. Our results are non-asymptotic and are based only on the coherence of A, so that they are applicable to arbitrary dictionaries. This provides insight on the advantages and drawbacks of l1 relaxation techniques such as BPDN and the Dantzig selector, as opposed to greedy approaches such as OMP and thresholding.

262 citations
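Of the three algorithms analyzed, thresholding is the simplest to state: correlate the dictionary with the measurements, keep the k strongest atoms, and solve least squares on that support. A minimal sketch (the function name and interface are illustrative):

```python
import numpy as np

def threshold_estimate(A, y, k):
    """Thresholding estimator: select the k columns of A most correlated
    with y, then least-squares fit on that support."""
    corr = np.abs(A.T @ y)
    support = np.argsort(corr)[-k:]          # k largest correlations
    x = np.zeros(A.shape[1])
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x
```

One pass over the dictionary makes this far cheaper than OMP or BPDN, which is exactly the trade-off against recovery guarantees that the paper quantifies.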


Posted Content
TL;DR: A thresholding based iterative procedure for outlier detection (Θ-IPOD) based on hard thresholding correctly identifies outliers on some hard test problems and is much faster than iteratively reweighted least squares for large data, because each iteration costs at most O(np) (and sometimes much less), avoiding an O(np^2) least squares estimate.
Abstract: This paper studies the outlier detection problem from the point of view of penalized regressions. Our regression model adds one mean shift parameter for each of the $n$ data points. We then apply a regularization favoring a sparse vector of mean shift parameters. The usual $L_1$ penalty yields a convex criterion, but we find that it fails to deliver a robust estimator. The $L_1$ penalty corresponds to soft thresholding. We introduce a thresholding (denoted by $\Theta$) based iterative procedure for outlier detection ($\Theta$-IPOD). A version based on hard thresholding correctly identifies outliers on some hard test problems. We find that $\Theta$-IPOD is much faster than iteratively reweighted least squares for large data because each iteration costs at most $O(np)$ (and sometimes much less), avoiding an $O(np^2)$ least squares estimate. We describe the connection between $\Theta$-IPOD and $M$-estimators. Our proposed method has one tuning parameter with which to both identify outliers and estimate regression coefficients. A data-dependent choice can be made based on BIC. The tuned $\Theta$-IPOD shows outstanding performance in identifying outliers in various situations in comparison to other existing approaches. This methodology extends to high-dimensional modeling with $p\gg n$, if both the coefficient vector and the outlier pattern are sparse.

230 citations
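The distinction between the soft thresholding implied by the L1 penalty and the hard thresholding that Θ-IPOD favors is easy to make concrete. A minimal sketch of the two operators:

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft thresholding: the proximal map of the L1 penalty.
    Shrinks every entry toward zero by lam."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def hard_threshold(z, lam):
    """Hard thresholding: keep entries whose magnitude exceeds lam,
    unchanged; zero the rest."""
    return np.where(np.abs(z) > lam, z, 0.0)
```

The paper's point is visible here: soft thresholding biases every surviving entry (shrinking large mean-shift estimates), while hard thresholding leaves them intact, which is what makes the hard-thresholding variant robust.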


Journal ArticleDOI
TL;DR: The results lead us to conclude that the best methods are those that are normalized with respect to illumination, such as RGB or Ohta Normalized, and there is no improvement in the use of Hue Saturation Intensity (HSI)-like spaces.
Abstract: This paper presents a quantitative comparison of several segmentation methods (including new ones) that have successfully been used in traffic sign recognition. The methods presented can be classified into color-space thresholding, edge detection, and chromatic/achromatic decomposition. Our support vector machine (SVM) segmentation method and speed enhancement using a lookup table (LUT) have also been tested. The best algorithm will be the one that yields the best global results throughout the whole recognition process, which comprises three stages: 1) segmentation; 2) detection; and 3) recognition. Thus, an evaluation method, which consists of applying the entire recognition system to a set of images with at least one traffic sign, is attempted while changing the segmentation method used. This way, it is possible to observe modifications in performance due to the kind of segmentation used. The results lead us to conclude that the best methods are those that are normalized with respect to illumination, such as RGB or Ohta Normalized, and there is no improvement in the use of Hue Saturation Intensity (HSI)-like spaces. In addition, an LUT with a reduction in the less-significant bits, such as that proposed here, improves speed while maintaining quality. SVMs used in color segmentation give good results, but some improvements are needed when applied to achromatic colors.

202 citations


Journal ArticleDOI
TL;DR: By preserving the fast convergence rate of particle swarm optimization (PSO), the quantum-behaved PSO employing the cooperative method (CQPSO) is proposed to save computation time and to conquer the curse of dimensionality.
Abstract: Multilevel thresholding is one of the most popular image segmentation techniques, but many multilevel thresholding algorithms are time-consuming. In this paper, by preserving the fast convergence rate of particle swarm optimization (PSO), the quantum-behaved PSO employing the cooperative method (CQPSO) is proposed to save computation time and to overcome the curse of dimensionality. Maximization of the measure of separability on the basis of the between-class variance method (often called the Otsu method), which is a popular thresholding technique, is employed to evaluate the performance of the proposed method. The experimental results show that, compared with existing population-based thresholding methods, the proposed PSO algorithm achieves more effective and efficient results. It also shortens the computation time of the traditional Otsu method. Therefore, it can be applied in complex image processing tasks such as automatic target recognition.

190 citations
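The between-class variance criterion (the Otsu objective) that the CQPSO optimizes can be written down directly; for a single threshold an exhaustive search over the histogram is trivial, and it is only in the multilevel case that population-based optimizers pay off. A single-level sketch for reference:

```python
import numpy as np

def otsu_threshold(hist):
    """Single-level Otsu: choose the threshold maximizing the
    between-class variance w0*w1*(mu0 - mu1)^2 of the histogram."""
    p = hist.astype(float) / hist.sum()
    bins = np.arange(len(p))
    best_t, best_var = 0, -1.0
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (bins[:t] * p[:t]).sum() / w0  # class means
        mu1 = (bins[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

With m thresholds the search space grows combinatorially (roughly 256 choose m for 8-bit images), which is the "curse of dimensionality" the cooperative quantum-behaved PSO is designed to handle.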


Journal ArticleDOI
TL;DR: A novel method to segment retinal blood vessels that overcomes the variations in contrast of large and thin vessels, using adaptive local thresholding to produce a binary image and then extracting large connected components as large vessels.
Abstract: The morphological changes of the retinal blood vessels in retinal images are important indicators for diseases like diabetes, hypertension and glaucoma. Thus the accurate segmentation of blood vessels is of diagnostic value. In this paper, we present a novel method for segmenting retinal blood vessels that overcomes the variations in contrast between large and thin vessels. This method uses adaptive local thresholding to produce a binary image, then extracts large connected components as large vessels. The residual fragments in the binary image, including some thin vessel segments (or pixels), are classified by a Support Vector Machine (SVM). Tracking growth is then applied to the thin vessel segments to form the whole vascular network. The proposed algorithm is tested on the DRIVE database; the average sensitivity is over 77% while the average accuracy reaches 93.2%. In summary, we distinguish large vessels by adaptive local thresholding, exploiting their good contrast, and then identify thin vessel segments with poor contrast by SVM, which are lengthened by tracking. The proposed method avoids heavy computation and manual intervention.

172 citations
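Adaptive local thresholding of the kind used in the first stage compares each pixel against a statistic of its neighbourhood rather than a single global cutoff, which is what copes with the contrast variation across the image. A deliberately naive sketch (the paper does not specify the window size or local statistic, so the `radius` and local-mean rule here are assumptions):

```python
import numpy as np

def adaptive_local_threshold(img, radius=2, offset=0.0):
    """Binary mask: a pixel is foreground if it exceeds the mean of its
    (2*radius+1)-square neighbourhood by more than offset."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            out[i, j] = img[i, j] > img[i0:i1, j0:j1].mean() + offset
    return out
```

A practical implementation would compute the local means with an integral image or a separable box filter instead of the explicit double loop.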


Journal ArticleDOI
15 Jun 2010-Geoderma
TL;DR: In this paper, the variability of the outcomes of a set of automatic thresholding algorithms, applied to portions of the test images, was also investigated, and the experimental results indicated that experts rely on very different approaches to threshold images of soils, and that there is considerable observer influence associated with this thresholding.

171 citations


Book
12 Feb 2010
TL;DR: This textbook provides a comprehensive introduction to the theories, techniques and applications of image fusion and is aimed at advanced undergraduate and first-year graduate students in electrical engineering and computer science.
Abstract: This textbook provides a comprehensive introduction to the theories, techniques and applications of image fusion. It is aimed at advanced undergraduate and first-year graduate students in electrical engineering and computer science. It should also be useful to practicing engineers who wish to learn the concepts of image fusion and use them in real-life applications. The book is intended to be self-contained. No previous knowledge of image fusion is assumed, although some familiarity with elementary image processing and the basic tools of linear algebra is recommended. The book may also be used as a supplementary text for a course on advanced image processing. Apart from two preliminary chapters, the book is divided into three parts. Part I deals with the conceptual theories and ideas which underlie image fusion. Particular emphasis is given to the concept of a common representational framework and includes detailed discussions on the techniques of image registration, radiometric calibration and semantic equivalence. Part II deals with a wide range of techniques and algorithms which are in common use in image fusion. Among the topics considered are: sub-space transformations, multi-resolution analysis, wavelets, ensemble learning, bagging, boosting, color spaces, image thresholding, Markov random fields, image similarity measures and the expectation-maximization algorithm. Together Parts I and II form an integrated and comprehensive overview of image fusion. Part III deals with applications. In it several real-life examples of image fusion are examined in detail, including panchromatic sharpening, ensemble color image segmentation and the Simultaneous Truth and Performance algorithm of Warfield et al. The book is accompanied by a webpage from which supplementary material may be obtained. This includes support for course instructors and links to relevant MATLAB code.

Journal ArticleDOI
TL;DR: The proposed method uses intensity thresholding followed by removal of narrow connections to obtain a brain mask and is best used in conjunction with HWA as the errors produced by the two approaches often occur at different locations and cancel out when their masks are combined.

Journal ArticleDOI
TL;DR: Initial testing suggests that the multispectral MR image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions, is highly reproducible and accurate with the potential to be applied to routinely collected clinical MRI data.
Abstract: Objective Brain tissue segmentation by conventional threshold-based techniques may have limited accuracy and repeatability in older subjects. We present a new multispectral magnetic resonance (MR) image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions (WMLs).

Journal ArticleDOI
TL;DR: This paper presents a novel approach for automated segmentation of the vasculature in retinal images that uses the intensity information from red and green channels of the same retinal image to correct non-uniform illumination in color fundus images.
Abstract: Performing the segmentation of vasculature in the retinal images having pathology is a challenging problem. This paper presents a novel approach for automated segmentation of the vasculature in retinal images. The approach uses the intensity information from red and green channels of the same retinal image to correct non-uniform illumination in color fundus images. Matched filtering is utilized to enhance the contrast of blood vessels against the background. The enhanced blood vessels are then segmented by employing spatially weighted fuzzy c-means clustering based thresholding which can well maintain the spatial structure of the vascular tree segments. The proposed method's performance is evaluated on publicly available DRIVE and STARE databases of manually labeled images. On the DRIVE and STARE databases, it achieves an area under the receiver operating characteristic curve of 0.9518 and 0.9602 respectively, being superior to those presented by state-of-the-art unsupervised approaches and comparable to those obtained with the supervised methods.

Journal ArticleDOI
TL;DR: The test results of the automated algorithms indicate that using multispectral MRI improves prostate cancer segmentation performance when compared to single MR images, a result similar to the human reader studies that were performed before.
Abstract: Purpose: Magnetic resonance imaging (MRI) has been proposed as a promising alternative to transrectal ultrasound for the detection and localization of prostate cancer, and fusing the information from multispectral MR images is currently an active research area. In this study, the goal is to develop automated methods that combine the pharmacokinetic parameters derived from dynamic contrast enhanced (DCE) MRI with quantitative T2 MRI and diffusion weighted imaging (DWI), in contrast to most of the studies, which were performed with human readers. The main advantages of the automated methods are that observer variability is removed and easily reproducible results can be efficiently obtained when the methods are applied to test data. The goal is also to compare the performance of automated supervised and unsupervised methods for prostate cancer localization with multispectral MRI. Methods: The authors use multispectral MRI data from 20 patients with biopsy-confirmed prostate cancer, and the image set consists of parameters derived from T2, DWI, and DCE-MRI. The authors utilize large margin classifiers for prostate cancer segmentation and compare them to an unsupervised method the authors have previously developed. The authors also develop thresholding schemes to tune support vector machines (SVMs) and their probabilistic counterparts, relevance vector machines (RVMs), for improved performance with respect to a selected criterion. Moreover, the authors apply a thresholding method to make the unsupervised fuzzy Markov random fields method fully automatic. Results: The authors have developed a supervised machine learning method that performs better than the previously developed unsupervised method and, additionally, have found that there is no significant difference between the SVM and RVM segmentation results. The results also show that the proposed methods for threshold selection can be used to tune the automated segmentation methods to optimize results for certain criteria such as accuracy or sensitivity. The test results of the automated algorithms indicate that using multispectral MRI improves prostate cancer segmentation performance when compared to single MR images, a result similar to the human reader studies that were performed before. Conclusions: The automated methods presented here can help diagnose and detect prostate cancer and improve segmentation results. For that purpose, multispectral MRI provides better information about cancer and normal regions in the prostate when compared to methods that use single MRI techniques; thus, the different MRI measurements provide complementary information in the automated methods. Moreover, the use of supervised algorithms in such automated methods remains a good alternative to the use of unsupervised algorithms.

Journal ArticleDOI
TL;DR: Extensive simulation results clearly show that the proposed unsupervised change-detection method not only consistently provides more accurate detection of small changes but also demonstrates attractive robustness against noise interference under various noise types and noise levels.
Abstract: In this paper, an unsupervised change-detection method for multitemporal satellite images is proposed. The algorithm exploits the inherent multiscale structure of the dual-tree complex wavelet transform (DT-CWT) to individually decompose each input image into one low-pass subband and six directional high-pass subbands at each scale. To avoid the illumination-variation issues possibly incurred in the low-pass subband, only the DT-CWT coefficient difference resulting from the six high-pass subbands of the two satellite images under comparison is analyzed in order to decide whether each subband pixel intensity has changed. This binary decision is based on an unsupervised threshold derived from a mixture statistical model, with the goal of minimizing the total error probability of change detection. A binary change-detection mask is thus formed for each subband, and all the produced subband masks are merged by using both intrascale fusion and interscale fusion to yield the final change-detection mask. For the performance evaluation, the proposed DT-CWT-based unsupervised change-detection method is applied to both noise-free and noisy images. Extensive simulation results clearly show that the proposed algorithm not only consistently provides more accurate detection of small changes but also demonstrates attractive robustness against noise interference under various noise types and noise levels.
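Stripped of the multiscale DT-CWT machinery, unsupervised change detection reduces to thresholding a difference image. The sketch below is a single-scale, wavelet-free analogue that substitutes a crude mean-plus-two-sigma threshold for the paper's mixture-model decision; it illustrates only the thresholding step, not the subband decomposition or fusion:

```python
import numpy as np

def change_mask(img1, img2, t=None):
    """Unsupervised change detection on an absolute-difference image.
    If no threshold is given, use mean + 2*std of the difference
    (a simple stand-in for a mixture-model-derived threshold)."""
    diff = np.abs(img1.astype(float) - img2.astype(float))
    if t is None:
        t = diff.mean() + 2.0 * diff.std()
    return diff > t
```

The paper's advantage over such a pixel-domain difference is precisely that the directional high-pass subbands ignore slow illumination changes that this sketch would flag.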

Journal ArticleDOI
TL;DR: This work proposes an automatic, fast, robust and accurate method for the segmentation of bone using 3D adaptive thresholding that can achieve sub-voxel accuracy very rapidly; the computation time can be further reduced by optimizing the iterative convergence process.

Proceedings ArticleDOI
08 Mar 2010
TL;DR: An active vision system for the automatic detection of falls and the recognition of several postures for elderly homecare applications using a wall-mounted Time-Of-Flight camera with high performances in terms of efficiency and reliability on a large real dataset.
Abstract: The paper presents an active vision system for the automatic detection of falls and the recognition of several postures for elderly homecare applications. A wall-mounted Time-Of-Flight camera provides accurate measurements of the acquired scene in all illumination conditions, allowing the reliable detection of critical events. Preliminarily, an off-line calibration procedure estimates the external camera parameters automatically without landmarks, calibration patterns or user intervention. The calibration procedure searches for different planes in the scene selecting the one that accomplishes the floor plane constraints. Subsequently, the moving regions are detected in real-time by applying a Bayesian segmentation to the whole 3D points cloud. The distance of the 3D human centroid from the floor plane is evaluated by using the previously defined calibration parameters and the corresponding trend is used as feature in a thresholding-based clustering for fall detection. The fall detection shows high performances in terms of efficiency and reliability on a large real dataset in which almost one half of events are falls acquired in different conditions. The posture recognition is carried out by using both the 3D human centroid distance from the floor plane and the orientation of the body spine estimated by applying a topological approach to the range images. Experimental results on synthetic data validate the correctness of the proposed posture recognition approach.

Journal ArticleDOI
TL;DR: A new approach to real-time human detection through processing video captured by a thermal infrared camera mounted on the autonomous mobile platform mSecurit^T^M is introduced and optical flow or image difference will emphasize the foreground hot spot areas obtained at the initial human candidates' detection.

Journal ArticleDOI
TL;DR: This work proposes an automatic and robust technique for threshold selection based on edge detection which uses gradient masks which are defined as regions of interest for the determination of threshold values.

Journal ArticleDOI
TL;DR: The experimental results manifest that the proposed MEHBMOT algorithm can search for multiple thresholds which are very close to the optimal ones examined by the exhaustive search method, and the segmentation results using the MEHBMOT algorithm are the best while its computation time is relatively low.

Journal ArticleDOI
TL;DR: A segmentation method based on nonlinear filtering and colour image thresholding and an efficient inpainting method that eliminates the negative effect of specular highlights on other image analysis algorithms and also gives a visually pleasing result are proposed.
Abstract: Minimally invasive medical procedures have become increasingly common in today's healthcare practice. Images taken during such procedures largely show tissues of human organs, such as the mucosa of the gastrointestinal tract. These surfaces usually have a glossy appearance showing specular highlights. For many visual analysis algorithms, these distinct and bright visual features can become a significant source of error. In this article, we propose two methods to address this problem: (a) a segmentation method based on nonlinear filtering and colour image thresholding and (b) an efficient inpainting method. The inpainting algorithm eliminates the negative effect of specular highlights on other image analysis algorithms and also gives a visually pleasing result. The methods compare favourably to the existing approaches reported for endoscopic imaging. Furthermore, in contrast to the existing approaches, the proposed segmentation method is applicable to the widely used sequential RGB image acquisition systems.
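A crude stand-in for the colour-thresholding stage: flag a pixel as a specular highlight when all of its colour channels are near saturation. The threshold value and the min-over-channels rule are illustrative assumptions, not the authors' nonlinear-filtering pipeline:

```python
import numpy as np

def specular_mask(img, t=0.85):
    """Flag candidate specular highlights in an RGB image with values
    in [0, 1]: a pixel qualifies only if its *dimmest* channel is
    above t, i.e. the pixel is bright and nearly achromatic."""
    return img.min(axis=2) > t
```

Requiring the minimum channel to be high rejects bright but saturated-colour regions, which is the basic intuition behind combining brightness and chromaticity cues before inpainting the detected regions.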

Journal ArticleDOI
TL;DR: This work presents a method to localize a thin surgical tool such as a biopsy needle or a microelectrode in a 3-D ultrasound image using thresholding and model fitting using random sample consensus for robust localization of the axis.
Abstract: Ultrasound guidance is used for many surgical interventions such as biopsy and electrode insertion. We present a method to localize a thin surgical tool such as a biopsy needle or a microelectrode in a 3-D ultrasound image. The proposed method starts with thresholding and model fitting using random sample consensus for robust localization of the axis. Subsequent local optimization refines its position. Two different tool image models are presented: one is simple and fast and the second uses learned a priori information about the tool's voxel intensities and the background. Finally, the tip of the tool is localized by finding an intensity drop along the axis. The simulation study shows that our algorithm can localize the tool at nearly real-time speed, even using a MATLAB implementation, with accuracy better than 1 mm. In an experimental comparison with several alternative localization methods, our method appears to be the fastest and the most robust one. We also show the results on real 3-D ultrasound data from a PVA cryogel phantom, turkey breast, and breast biopsy.
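The RANSAC axis-localization step can be sketched compactly: repeatedly sample two candidate points from the thresholded voxels, form the line through them, and keep the hypothesis with the most inliers. A minimal 3-D version (the tolerance, iteration count, and interface are illustrative, and the subsequent local refinement and tip detection are omitted):

```python
import numpy as np

def ransac_line(points, iters=200, tol=1.0, rng=None):
    """Robustly find the inlier set of a 3-D line through `points`
    (an (N, 3) array) by two-point RANSAC sampling."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = None
    for _ in range(iters):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        d = b - a
        n = np.linalg.norm(d)
        if n == 0:
            continue
        d = d / n
        # perpendicular distance of every point to the candidate axis
        r = points - a
        dist = np.linalg.norm(r - np.outer(r @ d, d), axis=1)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

A least-squares line fit on the returned inliers would then give the refined axis on which the intensity-drop search for the tip is carried out.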

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A simple, fast, and effective method to detect defects on textured surfaces is presented, based on the Phase Only Transform, which corresponds to the Discrete Fourier Transform (DFT) normalized by the magnitude.
Abstract: We present a simple, fast, and effective method to detect defects on textured surfaces. Our method is unsupervised and contains no learning stage or information on the texture being inspected. The new method is based on the Phase Only Transform (PHOT), which corresponds to the Discrete Fourier Transform (DFT) normalized by the magnitude. The PHOT removes any regularities, at arbitrary scales, from the image while preserving only irregular patterns considered to represent defects. The localization is obtained by the inverse transform followed by adaptive thresholding using a simple standard statistical method. The main computational requirement is thus to apply the DFT to the input image. The new method is also easy to implement in a few lines of code. Despite its simplicity, the method is shown to be effective and generic, as tested on various inputs, requiring only one parameter for sensitivity. We provide theoretical justification based on a simple model and show results on various kinds of patterns. We also discuss some limitations.
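The PHOT really is a few lines: take the 2-D DFT, divide out the magnitude, invert, and standardize the result before thresholding. A minimal sketch (`eps` guards against division by zero and is an implementation detail, not from the paper):

```python
import numpy as np

def phot(img, eps=1e-8):
    """Phase Only Transform: DFT normalized by its magnitude, inverted,
    then standardized. Regular texture is flattened; defects stand out."""
    F = np.fft.fft2(img)
    out = np.real(np.fft.ifft2(F / (np.abs(F) + eps)))
    # express each pixel as a z-score for simple statistical thresholding
    return (out - out.mean()) / out.std()
```

Thresholding the absolute z-scores (e.g. |z| above a user-chosen sensitivity) then yields the defect mask, which matches the "one parameter for sensitivity" claim above.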

Journal ArticleDOI
TL;DR: In this article, the affine-like system of functions known as the shearlet system is applied to obtain a highly effective reconstruction algorithm which provides near-optimal rate of convergence in estimating a large class of images from noisy Radon data.

Journal ArticleDOI
TL;DR: In this paper, a new multilevel MCET algorithm based on the technology of the honey bee mating optimization (HBMO) is proposed, which can efficiently search for multiple thresholds which are very close to the optimal ones examined by the exhaustive search method.
Abstract: Image thresholding is an important technique for image processing and pattern recognition, and many thresholding techniques have been proposed in the literature. Among them, minimum cross entropy thresholding (MCET) has been widely applied. In this paper, a new multilevel MCET algorithm based on honey bee mating optimization (HBMO) is proposed. Three other methods, namely exhaustive search, particle swarm optimization (PSO), and quantum particle swarm optimization (QPSO), are also implemented for comparison with the proposed method. The experimental results show that the proposed HBMO-based MCET algorithm can efficiently search for multiple thresholds which are very close to the optimal ones found by exhaustive search. In comparison with the other two thresholding methods, the segmentation results of the HBMO-based MCET algorithm are the best. Furthermore, the HBMO-based MCET algorithm converges rapidly, and the results validate its efficiency.
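The minimum cross entropy criterion itself is cheap to evaluate for a single threshold; an exhaustive single-level search (following the usual Li-and-Lee-style formulation, with gray levels shifted by one to avoid log 0) looks like this. It is the multilevel case, where the search space explodes, that makes HBMO-style optimizers attractive:

```python
import numpy as np

def mcet_threshold(hist):
    """Single-level minimum cross entropy threshold: minimize
    -sum_{i<t} i*h(i)*log(mu0) - sum_{i>=t} i*h(i)*log(mu1),
    where mu0, mu1 are the intensity-weighted class means."""
    g = np.arange(1, len(hist) + 1, dtype=float)  # gray levels shifted by 1
    h = hist.astype(float)
    best_t, best_val = 1, np.inf
    for t in range(1, len(hist)):
        w0, w1 = h[:t].sum(), h[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (g[:t] * h[:t]).sum() / w0
        mu1 = (g[t:] * h[t:]).sum() / w1
        val = -(g[:t] * h[:t]).sum() * np.log(mu0) \
              - (g[t:] * h[t:]).sum() * np.log(mu1)
        if val < best_val:
            best_val, best_t = val, t
    return best_t
```

For m thresholds the exhaustive search above becomes an m-dimensional scan, which is exactly what the HBMO, PSO, and QPSO variants replace with population-based search.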

Journal ArticleDOI
TL;DR: This paper proposes a new approach for the segmentation of both near-end and far-end intima-media regions of the common carotid artery in ultrasound images that requires minimal user interaction and is able to segment the near- end wall in arteries with large, hypoechogenic and irregular plaques, issues usually not considered previously.

Journal ArticleDOI
01 Jul 2010
TL;DR: This paper formulates the problem of distinguishing changed from unchanged pixels in multitemporal remote sensing images as a minimum enclosing ball (MEB) problem with changed pixels as target class and uses both target and outlier samples for defining the MEB.
Abstract: This paper formulates the problem of distinguishing changed from unchanged pixels in multitemporal remote sensing images as a minimum enclosing ball (MEB) problem with changed pixels as target class. The definition of the sphere-shaped decision boundary with minimal volume that embraces changed pixels is approached in the context of the support vector formalism adopting a support vector domain description (SVDD) one-class classifier. SVDD maps the data into a high dimensional feature space where the spherical support of the high dimensional distribution of changed pixels is computed. Unlike the standard SVDD, the proposed formulation of the SVDD uses both target and outlier samples for defining the MEB, and is included here in an unsupervised scheme for change detection. To this purpose, nearly certain training examples for the classes of both targets (i.e., changed pixels) and outliers (i.e., unchanged pixels) are identified by thresholding the magnitude of the spectral change vectors. Experimental results obtained on two different multitemporal and multispectral remote sensing images demonstrate the effectiveness of the proposed method.

Journal ArticleDOI
01 Oct 2010-Micron
TL;DR: An automated approach to leukocyte recognition using fuzzy divergence and modified thresholding techniques is introduced, and it is found that Cauchy leads to better segmentation as compared to the others.

Journal ArticleDOI
TL;DR: A novel approach for automated dark-spot detection using synthetic aperture radar (SAR) intensity imagery is presented, making use of a spatial density feature to differentiate between dark spots and the background.