Book Chapter

A Study on Different Edge Detection Techniques in Digital Image Processing

TL;DR: The main objective is to study the theory of edge detection for image segmentation using various computing approaches.
Abstract: Image segmentation is one of the fundamental problems in image processing. In digital image processing, there are many image segmentation techniques; one of the most important is edge detection, particularly for natural image segmentation. An edge is one of the basic features of an image, and edge detection can be used as a fundamental tool for image segmentation. Edge detection methods transform original images into edge images by exploiting the changes of grey tones in the image. Image edges carry rich information that is very significant for obtaining image characteristics, recognizing objects, and analyzing the image. In a gray scale image, an edge is a local feature that, within a neighborhood, separates two regions, in each of which the gray level is more or less uniform, with different values on the two sides of the edge. In this paper, the main objective is to study the theory of edge detection for image segmentation using various computing approaches.
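
The definition above, an edge as a local change in gray level between two nearly uniform regions, maps directly onto gradient-based detection. Below is a minimal sketch assuming NumPy and SciPy; the Sobel operator, the synthetic step image, and the threshold are illustrative choices, not taken from the paper.

    import numpy as np
    from scipy import ndimage

    def sobel_edge_map(gray, threshold=0.2):
        """Mark pixels where the local change in gray level is large."""
        gx = ndimage.sobel(gray, axis=1)   # horizontal gradient
        gy = ndimage.sobel(gray, axis=0)   # vertical gradient
        magnitude = np.hypot(gx, gy)       # strength of the local gray-tone change
        return magnitude > threshold * magnitude.max()

    # Tiny synthetic example: a dark region next to a bright region.
    image = np.zeros((8, 8))
    image[:, 4:] = 1.0                     # step edge between columns 3 and 4
    print(sobel_edge_map(image).astype(int))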
Citations
Proceedings Article
01 Aug 2017
TL;DR: Experimental results clearly show the superiority of the proposed NN-NSGA-II model with different features, evaluated using various performance metrics such as accuracy, precision, recall and F-measure.
Abstract: Automated, efficient and accurate classification of skin diseases using digital images of skin is very important for bio-medical image analysis. Various techniques have already been developed by many researchers. In this work, a technique based on a meta-heuristic supported artificial neural network has been proposed to classify images. Here, 3 common skin diseases have been considered, namely angioma, basal cell carcinoma and lentigo simplex. Images have been obtained from the International Skin Imaging Collaboration (ISIC) dataset. A popular multi-objective optimization method called the Non-dominated Sorting Genetic Algorithm-II is employed to train the ANN (NN-NSGA-II). Different features have been extracted to train the classifier. A comparison has been made between the proposed model and two other popular meta-heuristic based classifiers, namely NN-PSO (ANN trained with Particle Swarm Optimization) and NN-GA (ANN trained with Genetic Algorithm). The results have been evaluated using various performance metrics such as accuracy, precision, recall and F-measure. Experimental results clearly show the superiority of the proposed NN-NSGA-II model with different features.
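
A brief note on the evaluation: the comparison with NN-PSO and NN-GA rests on standard classification metrics. The sketch below shows how accuracy, precision, recall and F-measure are typically computed one-vs-rest; the class names are taken from the abstract, but the toy predictions are invented for illustration.

    import numpy as np

    def classification_metrics(y_true, y_pred, positive_class):
        """Accuracy, precision, recall and F-measure for one class (one-vs-rest)."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_pred == positive_class) & (y_true == positive_class))
        fp = np.sum((y_pred == positive_class) & (y_true != positive_class))
        fn = np.sum((y_pred != positive_class) & (y_true == positive_class))
        accuracy = np.mean(y_pred == y_true)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f_measure = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return accuracy, precision, recall, f_measure

    # Hypothetical labels for the three diseases named in the abstract.
    truth = ["angioma", "lentigo simplex", "basal cell carcinoma", "angioma"]
    pred  = ["angioma", "basal cell carcinoma", "basal cell carcinoma", "angioma"]
    print(classification_metrics(truth, pred, positive_class="angioma"))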

39 citations


Cites background from "A Study on Different Edge Detection..."

  • ...Different contour based methods can be used in the analysis of biomedical images [7]....


Book Chapter
01 Jan 2020
TL;DR: The use of EMD to correctly identify infected pneumonia lungs from normal non-infected lungs is shown.
Abstract: Pneumonia is a common lung infection in which an individual’s alveoli fill up with fluid and form a cloudy-like structure. Pneumonia is of two types, (a) bacterial and (b) viral, but X-rays of both types show a very similar pattern. Accurate identification, along with the extent to which the person is infected, is still a challenge for doctors. In this paper, the use of EMD to correctly identify infected pneumonia lungs from normal non-infected lungs is shown. EMD, also known as Earth Mover’s Distance, is a distance between two probability distributions over some region D. First, we preprocessed the images so that they contain only the lungs, then applied re-scaling, rotation, and normalization of intensity to obtain a set of lung X-rays of uniform size and shape, and finally calculated the EMD and compared the results.
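
For 1-D distributions such as gray-level histograms, the Earth Mover's Distance coincides with the Wasserstein-1 distance, which gives a compact way to sketch the comparison described above. This is an assumed simplification using SciPy, not the authors' exact pipeline; the synthetic images stand in for the preprocessed lung X-rays.

    import numpy as np
    from scipy.stats import wasserstein_distance

    def intensity_emd(img_a, img_b, bins=64):
        """1-D EMD between the gray-level distributions of two images in [0, 1]."""
        centers = (np.arange(bins) + 0.5) / bins
        ha, _ = np.histogram(img_a, bins=bins, range=(0, 1), density=True)
        hb, _ = np.histogram(img_b, bins=bins, range=(0, 1), density=True)
        return wasserstein_distance(centers, centers, u_weights=ha, v_weights=hb)

    # Synthetic stand-ins: a darker "normal" image and a brighter "cloudy" one.
    rng = np.random.default_rng(0)
    normal = rng.uniform(0.0, 0.6, size=(128, 128))
    cloudy = rng.uniform(0.3, 0.9, size=(128, 128))
    print(intensity_emd(normal, cloudy))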

35 citations

Book Chapter
01 Jan 2018
TL;DR: The main goal of this chapter is to give a comprehensive study of multiobjective optimization techniques in biomedical image analysis problems, consolidating some of the recent works along with future directions.
Abstract: Multiobjective optimization methods in image analysis have been one of the active research domains in recent years. These methods are used for the decision-making process in image segmentation. Multiobjective techniques are popular and suitable models for many difficult optimization problems. In various practical problems, different objectives have to be considered, and most of these problems have objectives that are conflicting in nature. Hence, a single objective cannot be optimized or prioritized in isolation, because doing so can adversely affect the other objectives and produce undesired results in terms of those objectives. The main goal of this chapter is to give a comprehensive study of multiobjective optimization techniques in biomedical image analysis problems. The different models are categorized depending on relevant features, for example, the optimization methods employed, the formulation of the problem, the category of data, and the domain of application. This study mainly focuses on multiobjective optimization techniques that can be used to analyze digital images, especially biomedical images. Here, some of the problems and challenges related to images are diagnosed and analyzed with multiple objectives. It is a comprehensive study that consolidates some of the recent works along with future directions.
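
The notion of conflicting objectives is made precise by Pareto dominance, the comparison rule underlying methods such as NSGA-II discussed in this chapter. A tiny sketch, assuming all objectives are to be minimized; the example objective pairs are invented.

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (all objectives minimized):
        a is no worse than b everywhere and strictly better somewhere."""
        no_worse = all(x <= y for x, y in zip(a, b))
        strictly_better = any(x < y for x, y in zip(a, b))
        return no_worse and strictly_better

    # Two conflicting objectives, e.g. (segmentation error, computation time).
    print(dominates((0.10, 2.0), (0.15, 3.0)))  # True: better on both objectives
    print(dominates((0.10, 4.0), (0.15, 3.0)))  # False: a trade-off, neither dominates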

28 citations

Book Chapter
01 Jan 2020
TL;DR: In this chapter, a comprehensive overview of deep learning-assisted biomedical image analysis methods is presented, which can help researchers understand the recent developments and drawbacks of present systems.
Abstract: Biomedical image analysis methods are gradually shifting towards computer-aided solutions from manual investigations to save time and improve the quality of the diagnosis. Deep learning-assisted biomedical image analysis is one of the major and active research areas. Several researchers are working in this domain because deep learning-assisted computer-aided diagnostic solutions are well known for their efficiency. In this chapter, a comprehensive overview of deep learning-assisted biomedical image analysis methods is presented. This chapter can help researchers understand the recent developments and drawbacks of the present systems. The discussion is made from the perspective of computer vision, pattern recognition, and artificial intelligence. This chapter can also help identify future research directions to exploit the benefits of deep learning techniques for biomedical image analysis.

28 citations

Proceedings Article
01 Feb 2019
TL;DR: A contrast optimization method based on a well-known metaheuristic technique, the genetic algorithm with elitism, is used to enhance biomedical images for better analysis; the obtained results illustrate the efficiency of the proposed algorithm.
Abstract: Biomedical image analysis is one of the most challenging and inevitable parts of computer-aided diagnostic systems. Automated image analysis can detect various diseases without human intervention. Computer vision and artificial intelligence can sometimes outperform human diagnostic power and can reveal hidden information in biomedical images. In the field of health care, accurate results are required within a stipulated amount of time, but to increase accuracy, proper preprocessing with sophisticated algorithms is needed. A low-quality image can affect the processing algorithm, which can lead to poor results; therefore, sophisticated preprocessing methods are required to get reliable results. Contrast is one of the most important parameters of any image, and poor contrast may cause several problems for computer vision algorithms. Conventional algorithms for contrast adjustment may not be suitable for many purposes, and can sometimes generate images that lose critical information. In this work, a contrast optimization method based on a well-known metaheuristic technique, the genetic algorithm with elitism, is used to enhance biomedical images for better analysis. A new kernel has been proposed to detect the edges. The obtained results illustrate the efficiency of the proposed algorithm.
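
The sketch below illustrates the general shape of a genetic algorithm with elitism applied to contrast adjustment, assuming NumPy. The parametric stretch, the variance-based fitness, and all hyperparameters are assumptions for illustration; the paper's actual kernel and formulation are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(42)

    def stretch(img, alpha, beta):
        """Simple parametric contrast mapping (illustrative, not the paper's)."""
        return np.clip(alpha * (img - 0.5) + 0.5 + beta, 0.0, 1.0)

    def fitness(img, params):
        """Output variance as a crude contrast measure (an assumed proxy)."""
        return stretch(img, *params).var()

    def ga_contrast(img, pop_size=20, generations=30, elite=2):
        # Each individual is (alpha, beta): gain in [0.5, 3], bias in [-0.2, 0.2].
        pop = np.column_stack([rng.uniform(0.5, 3.0, pop_size),
                               rng.uniform(-0.2, 0.2, pop_size)])
        for _ in range(generations):
            scores = np.array([fitness(img, p) for p in pop])
            pop = pop[np.argsort(scores)[::-1]]          # best individuals first
            elites = pop[:elite].copy()                  # elitism: keep the best unchanged
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - elite:
                a, b = parents[rng.integers(len(parents), size=2)]
                children.append((a + b) / 2 + rng.normal(0, 0.05, 2))  # crossover + mutation
            pop = np.vstack([elites] + children)
        return max(pop, key=lambda p: fitness(img, p))

    low_contrast = rng.uniform(0.4, 0.6, size=(64, 64))  # synthetic dull image
    best = ga_contrast(low_contrast)
    print("best (alpha, beta):", best, "contrast:", fitness(low_contrast, best))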

26 citations


Cites background from "A Study on Different Edge Detection..."

  • ...In general, some highly intensive edges can be observed in the contrast optimized images [32], [33]....


References
Journal Article
TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
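
The optimal detector described above is, in practice, the Canny detector: smooth with a Gaussian, take the gradient magnitude, keep only local maxima, and link edges with hysteresis thresholds. A short example assuming scikit-image is available; the sample image and sigma values are illustrative.

    from skimage import data, feature

    # sigma sets the operator scale: larger values trade localization for detection.
    image = data.camera().astype(float) / 255.0   # built-in sample gray-scale image
    fine = feature.canny(image, sigma=1.0)         # more detail, more noise
    coarse = feature.canny(image, sigma=3.0)       # fewer, more reliable edges
    print("edge pixels at sigma=1:", int(fine.sum()), "at sigma=3:", int(coarse.sum()))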

28,073 citations

Journal Article
TL;DR: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced; the diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing.
Abstract: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
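
A compact sketch of the diffusion scheme described above, assuming NumPy: the conduction coefficient is a decreasing function of the local gradient, so smoothing acts within regions but nearly stops across edges. The explicit four-neighbour update, the exponential conduction function, and the parameters are standard textbook choices, not necessarily the exact ones from the paper.

    import numpy as np

    def anisotropic_diffusion(img, iterations=20, kappa=0.1, lam=0.2):
        """Explicit Perona-Malik scheme with four-neighbour differences."""
        u = img.astype(float).copy()
        for _ in range(iterations):
            # Differences to the four neighbours (periodic border via roll; fine for a sketch).
            n = np.roll(u, 1, axis=0) - u
            s = np.roll(u, -1, axis=0) - u
            e = np.roll(u, -1, axis=1) - u
            w = np.roll(u, 1, axis=1) - u
            # Conduction ~1 in flat regions (smooth), ~0 across strong edges (preserve).
            gn, gs = np.exp(-(n / kappa) ** 2), np.exp(-(s / kappa) ** 2)
            ge, gw = np.exp(-(e / kappa) ** 2), np.exp(-(w / kappa) ** 2)
            u += lam * (gn * n + gs * s + ge * e + gw * w)
        return u

    # Noisy step edge: diffusion smooths the flat sides while the edge stays sharp.
    img = np.zeros((32, 32))
    img[:, 16:] = 1.0
    img += np.random.default_rng(0).normal(0, 0.1, img.shape)
    out = anisotropic_diffusion(img)
    print("edge contrast after diffusion:", round(float(out[:, 16].mean() - out[:, 15].mean()), 3))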

12,560 citations

Journal Article
Robert M. Haralick
TL;DR: The facet model is used to accomplish step edge detection; in a comparison of operators, the zero crossing of second directional derivative operator is found to be the best performer, next is the Prewitt gradient operator, and the Marr-Hildreth zero crossing of the Laplacian operator performs the worst.
Abstract: We use the facet model to accomplish step edge detection. The essence of the facet model is that any analysis made on the basis of the pixel values in some neighborhood has its final authoritative interpretation relative to the underlying gray tone intensity surface of which the neighborhood pixel values are observed noisy samples. With regard to edge detection, we define an edge to occur in a pixel if and only if there is some point in the pixel's area having a negatively sloped zero crossing of the second directional derivative taken in the direction of a nonzero gradient at the pixel's center. Thus, to determine whether or not a pixel should be marked as a step edge pixel, its underlying gray tone intensity surface must be estimated on the basis of the pixels in its neighborhood. For this, we use a functional form consisting of a linear combination of the tensor products of discrete orthogonal polynomials of up to degree three. The appropriate directional derivatives are easily computed from this kind of a function. Upon comparing the performance of this zero crossing of second directional derivative operator with the Prewitt gradient operator and the Marr-Hildreth zero crossing of the Laplacian operator, we find that it is the best performer; next is the Prewitt gradient operator. The Marr-Hildreth zero crossing of the Laplacian operator performs the worst.
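
The facet-model operator itself requires fitting cubic polynomial surfaces in each neighbourhood, which is beyond a short snippet, but the two baselines it is compared against are easy to sketch with SciPy. The threshold, sigma and synthetic image below are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def prewitt_edges(gray, threshold=0.25):
        """Prewitt gradient operator: threshold the gradient magnitude."""
        gx = ndimage.prewitt(gray, axis=1)
        gy = ndimage.prewitt(gray, axis=0)
        mag = np.hypot(gx, gy)
        return mag > threshold * mag.max()

    def marr_hildreth_edges(gray, sigma=2.0):
        """Marr-Hildreth: zero crossings of the Laplacian of a Gaussian-smoothed image."""
        log = ndimage.gaussian_laplace(gray, sigma=sigma)
        sign = log > 0
        zc = np.zeros_like(sign)
        # A pixel is a zero crossing if its sign differs from a neighbour's.
        zc[:-1, :] |= sign[:-1, :] != sign[1:, :]
        zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]
        return zc

    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0                  # bright square on a dark background
    print(int(prewitt_edges(img).sum()), int(marr_hildreth_edges(img).sum()))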

1,130 citations

Journal Article
TL;DR: A class of algorithms is described which enables computer-quantized images to be decomposed into constituents reflecting the structure of the images, viewed as the morphological precursor to a higher-level syntactic analysis.

590 citations

Journal Article
TL;DR: It is shown that "edge focusing", i.e., a coarse-to-fine tracking in a continuous manner, combines high positional accuracy with good noise reduction, which is of vital interest in several applications.
Abstract: Edge detection in a gray-scale image at a fine resolution typically yields noise and unnecessary detail, whereas edge detection at a coarse resolution distorts edge contours. We show that "edge focusing", i.e., a coarse-to-fine tracking in a continuous manner, combines high positional accuracy with good noise reduction. This is of vital interest in several applications. Junctions of different kinds are in this way restored with high precision, which is a basic requirement when performing (projective) geometric analysis of an image for the purpose of restoring the three-dimensional scene. Segmentation of a scene using geometric clues like parallelism, etc., is also facilitated by the algorithm, since unnecessary detail has been filtered away. There are indications that an extension of the focusing algorithm can classify edges, to some extent, into the categories diffuse and nondiffuse (for example diffuse illumination edges). The edge focusing algorithm contains two parameters, namely the coarseness of the resolution in the blurred image from where we start the focusing procedure, and a threshold on the gradient magnitude at this coarse level. The latter parameter seems less critical for the behavior of the algorithm and is not present in the focusing part, i.e., at finer resolutions. The step length of the scale parameter in the focusing scheme has been chosen so that edge elements do not move more than one pixel per focusing step.
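
A rough sketch of the coarse-to-fine idea, assuming NumPy and SciPy: start from thresholded gradient magnitudes at a coarse Gaussian scale (the two parameters mentioned in the abstract), then step sigma down and keep, at each finer scale, only candidates within one pixel of the edges tracked so far. The per-scale detector and step sizes here are simplified stand-ins, not the original algorithm.

    import numpy as np
    from scipy import ndimage

    def edge_focusing(gray, sigma_start=4.0, sigma_step=0.5, grad_threshold=0.1):
        """Illustrative coarse-to-fine edge focusing (simplified)."""
        def gradient_magnitude(sig):
            gx = ndimage.gaussian_filter(gray, sig, order=(0, 1))
            gy = ndimage.gaussian_filter(gray, sig, order=(1, 0))
            return np.hypot(gx, gy)

        mag = gradient_magnitude(sigma_start)
        tracked = mag > grad_threshold * mag.max()       # threshold used only at the coarse level
        sigma = sigma_start - sigma_step
        while sigma >= 0.5:
            mag = gradient_magnitude(sigma)
            candidates = mag > 0.5 * mag.max()
            near_previous = ndimage.binary_dilation(tracked)   # one-pixel neighbourhood
            tracked = candidates & near_previous
            sigma -= sigma_step
        return tracked

    img = np.zeros((48, 48))
    img[:, 24:] = 1.0
    img += np.random.default_rng(1).normal(0, 0.05, img.shape)
    print("focused edge pixels:", int(edge_focusing(img).sum()))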

498 citations