Proceedings ArticleDOI

Edge Detectors Based Anisotropic Diffusion for Enhancement of Digital Images

16 Dec 2008-pp 33-38
TL;DR: A new enhancement scheme for noisy digital images uses inhomogeneous anisotropic diffusion driven by the edge indicator provided by well-known edge detection methods; the addition of a fidelity term enables the scheme to remove noise while preserving edges.
Abstract: Using edge detection techniques, we propose a new enhancement scheme for noisy digital images. It uses an inhomogeneous anisotropic diffusion scheme driven by the edge indicator provided by well-known edge detection methods. The addition of a fidelity term enables the proposed scheme to remove noise while preserving edges. The method is general in the sense that it can be incorporated into any of the nonlinear anisotropic diffusion methods. Numerical results show the promise of this hybrid technique on real and noisy images.
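As a rough illustration of such a scheme, the sketch below performs explicit time-stepping of u_t = div(g∇u) + λ(I - u), where g is a fixed edge-indicator field and λ weights the fidelity term; the edge indicator, parameter values, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def edge_indicator(noisy, sigma=2.0):
    """Illustrative edge indicator: close to 0 on strong edges, close to 1
    in flat regions, built from a smoothed gradient magnitude."""
    grad_mag = ndimage.gaussian_gradient_magnitude(noisy, sigma)
    k = grad_mag.mean() + 2 * grad_mag.std()        # crude contrast scale
    return 1.0 / (1.0 + (grad_mag / (k + 1e-12)) ** 2)

def diffuse_with_fidelity(noisy, n_iter=100, dt=0.1, lam=0.05):
    """Explicit scheme for u_t = div(g * grad u) + lam * (I - u)."""
    u = noisy.astype(float).copy()
    g = edge_indicator(noisy)                       # fixed during iterations
    for _ in range(n_iter):
        # forward differences for the gradient (periodic boundaries for brevity)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        # divergence of g * grad(u) via backward differences
        div = (g * ux - np.roll(g * ux, 1, axis=1)
               + g * uy - np.roll(g * uy, 1, axis=0))
        u += dt * (div + lam * (noisy - u))         # diffusion + fidelity
    return u
```

With this construction, diffusion is suppressed where g is small (near detected edges), while the fidelity term keeps the evolving image close to the observed image I.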
Citations
Journal ArticleDOI
TL;DR: An inhomogeneous partial differential equation that includes a separate edge detection part to control smoothing in and around possible discontinuities is studied under the framework of anisotropic diffusion.
Abstract: We study an inhomogeneous partial differential equation which includes a separate edge detection part to control smoothing in and around possible discontinuities, under the framework of anisotropic diffusion. By incorporating edges found at multiple scales via an adaptive edge detector-based indicator function, the proposed scheme removes noise while respecting salient boundaries. We create a smooth transition region around probable edges found and reduce the diffusion rate near it by a gradient-based diffusion coefficient. In contrast to the previous anisotropic diffusion schemes, we prove the well-posedness of our scheme in the space of bounded variation. The proposed scheme is general in the sense that it can be used with any of the existing diffusion equations. Numerical simulations on noisy images show the advantages of our scheme when compared to other related schemes.
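One way to realize the multi-scale, edge-detector-based indicator described above is sketched below: binary edge responses at several scales are combined and blurred to create a smooth transition region in which the diffusion rate is reduced. The thresholding rule, scales, and parameter names are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np
from scipy import ndimage

def multiscale_edge_indicator(image, sigmas=(1.0, 2.0, 4.0), width=2.0):
    """Illustrative multi-scale edge indicator in [0, 1]: ~0 near edges
    found at any scale, ~1 far from edges, with a smooth transition."""
    edges = np.zeros(image.shape, dtype=bool)
    for s in sigmas:
        gm = ndimage.gaussian_gradient_magnitude(image, s)
        edges |= gm > (gm.mean() + 2 * gm.std())   # crude per-scale threshold
    # blur the binary edge mask to create a transition region around edges
    soft = ndimage.gaussian_filter(edges.astype(float), width)
    soft /= soft.max() + 1e-12
    return 1.0 - soft      # low values slow diffusion near probable edges
```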

31 citations

Journal ArticleDOI
01 Aug 2015
TL;DR: A rough set theory (RST) based approach is used to derive a pixel-level edge map and class labels, which in turn are used to improve the performance of bilateral filters that are extensively applied to denoise brain MR images.
Abstract: Highlights: (i) Soft computing, specifically rough set theory (RST) based class and edge information, is used in a bilateral framework for the medical image denoising problem. (ii) An RST-based framework is proposed to supply prior information for denoising. (iii) The proposal restrains the conventional bilateral filter from over-smoothing regions near boundaries. (iv) It extends our conference paper with a wide range of noise levels and different MR image modalities, including real human brain MR images. (v) Performance has been compared with state-of-the-art methods and found to be satisfactory. A study of bilateral filters for denoising reveals that the more informative the filter is, the better the expected result. However, obtaining precise information about an image in the presence of noise is a difficult task. In the current work, a rough set theory (RST) based approach is used to derive a pixel-level edge map and class labels, which in turn are used to improve the performance of bilateral filters. RST handles the uncertainty present in the data even under noise. The basic structure of the existing bilateral filter is not changed much; rather, it is boosted by prior information derived from the rough edge map and rough class labels. The filter is extensively applied to denoise brain MR images. The results are compared with those of state-of-the-art approaches. The experiments have been performed on two real (normal and pathologically disordered) human MR databases. The performance of the proposed filter is found to be better in terms of benchmark metrics.
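For context, the baseline bilateral filter that this work augments with rough edge and class information can be sketched as follows; the window radius and sigma values are illustrative.

```python
import numpy as np

def bilateral_filter(image, radius=2, sigma_spatial=2.0, sigma_range=0.1):
    """Naive bilateral filter: each pixel is a weighted average of its
    neighbours, weighted by spatial distance and intensity difference."""
    img = image.astype(float)
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_spatial ** 2))
            w_r = np.exp(-((shifted - img) ** 2) / (2 * sigma_range ** 2))
            w = w_s * w_r
            out += w * shifted
            norm += w
    return out / norm
```

In the paper's setting, the rough edge map and class labels act as additional prior information on top of this baseline rather than as a change to its basic structure.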

19 citations

Book ChapterDOI
01 Jan 2016
TL;DR: A novel telegraph total variation PDE model is proposed that uses the image structure tensor as an edge detector to control the smoothing process and preserve more detail.
Abstract: To address the issues of edge blur and uncertainty in parameter selection during image filtering, a novel telegraph total variation PDE model based on an edge detector is proposed. We use the image structure tensor as an edge detector to control the smoothing process and keep more detail features. The proposed model takes advantage of both the telegraph and total variation models, making it edge preserving and robust to noise. Experimental results illustrate the effectiveness of the proposed model and demonstrate that our algorithm competes favorably with state-of-the-art approaches in terms of producing better denoising results.
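A sketch of a structure-tensor-based edge measure of the kind referred to above, assuming the standard construction J_rho = G_rho * (∇u_sigma ∇u_sigmaᵀ); the specific eigenvalue combination used as the edge indicator here is one common choice, not necessarily the one in the chapter.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_edge_measure(image, sigma=1.0, rho=2.0):
    """Structure tensor J = G_rho * (grad u_sigma  grad u_sigma^T); the
    difference of its eigenvalues is large on edges, small in flat areas."""
    ux = ndimage.gaussian_filter(image, sigma, order=(0, 1))   # d/dx
    uy = ndimage.gaussian_filter(image, sigma, order=(1, 0))   # d/dy
    j11 = ndimage.gaussian_filter(ux * ux, rho)
    j12 = ndimage.gaussian_filter(ux * uy, rho)
    j22 = ndimage.gaussian_filter(uy * uy, rho)
    # eigenvalues of the 2x2 symmetric tensor at every pixel
    tmp = np.sqrt((j11 - j22) ** 2 + 4 * j12 ** 2)
    lam1 = 0.5 * (j11 + j22 + tmp)
    lam2 = 0.5 * (j11 + j22 - tmp)
    return lam1 - lam2     # high across edges, low in flat regions
```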

12 citations

Proceedings ArticleDOI
16 Dec 2012
TL;DR: A weighting function is introduced that controls the impact of the existing bilateral filter for denoising, conditioned by a Rough Edge Map and Rough Class Label; it handles the imprecision of edge and class labels and preserves them by controlling the bilateral filter more effectively.
Abstract: A new denoising filter is proposed for human brain MR images. The proposed filter is based on the notion of the existing bilateral filter, whose objective is to obtain a noise-free smooth image while keeping edges and other features intact. We have introduced a weighting function that controls the impact of the existing bilateral filter for denoising. It is conditioned by a Rough Edge Map (REM) and Rough Class Label (RCL). The presence of noise makes it difficult to obtain precise edge and class-label information. Rough set techniques are expected to assign rough (imprecise) class labels and edge labels to the pixels in the given image. The weighting function is thus expected to handle the imprecision of edge and class labels and thereby preserve both by controlling the bilateral filter more efficiently. The filter is extensively applied to brain MR images. The current proposal is compared with some state-of-the-art approaches using different image quality measures and is found to be efficient in most cases.
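The idea of conditioning the bilateral weights on edge and class information can be sketched by one extra multiplicative term in the kernel; edge_term and class_term below stand in for the REM and RCL values, which the paper derives via rough sets, and the parameterization is hypothetical.

```python
import numpy as np

def conditioned_weight(center_val, neighbour_val, dx, dy,
                       edge_term, class_term,
                       sigma_spatial=2.0, sigma_range=0.1):
    """Bilateral weight with an extra multiplicative term conditioned on
    (hypothetical) rough edge / class information for the neighbour pixel."""
    w_spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_spatial ** 2))
    w_range = np.exp(-((neighbour_val - center_val) ** 2)
                     / (2 * sigma_range ** 2))
    # edge_term, class_term in [0, 1]: small values suppress averaging across
    # probable edges or across pixels assigned to a different rough class
    return w_spatial * w_range * edge_term * class_term
```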

9 citations


Cites methods from "Edge Detectors Based Anisotropic Di..."

  • ...Further development has been made by incorporating robust statistics [3], probability based approach [1] and adaptive smoothing edge map [14]....


Book ChapterDOI
18 Dec 2014
TL;DR: A fuzzy diffusion coefficient is proposed that takes into account local pixel variability for better denoising and selective smoothing of edges, and improves over traditional filters in terms of structural similarity and signal-to-noise ratio.
Abstract: Nonlinear anisotropic diffusion is widely used in image processing and computer vision for various problems. One of the basic and important problems is that of restoring noisy images, and diffusion filters are an important class of denoising methods. In this work, we propose a fuzzy diffusion coefficient which takes into account local pixel variability for better denoising and selective smoothing of edges. By using smoothed gradients along with the fuzzy diffusion coefficient function, we obtain edge-preserving restoration of noisy images. Experimental results on standard test images and real medical data illustrate that the proposed fuzzy diffusion improves over traditional filters in terms of structural similarity and signal-to-noise ratio.
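A sketch of a diffusion coefficient that combines a smoothed gradient with a membership value for local pixel variability, in the spirit described above; the membership function, neighbourhood size, and constants are illustrative assumptions rather than the coefficient proposed in the paper.

```python
import numpy as np
from scipy import ndimage

def fuzzy_diffusion_coefficient(u, sigma=1.5, k=None):
    """Illustrative 'fuzzy' diffusivity: combine a smoothed gradient magnitude
    with a local-variability membership so that highly variable (likely edge)
    pixels diffuse less and homogeneous pixels diffuse more."""
    grad_s = ndimage.gaussian_gradient_magnitude(u, sigma)   # smoothed gradient
    local_var = ndimage.generic_filter(u, np.var, size=3)    # local variability
    if k is None:
        k = grad_s.mean() + grad_s.std()
    # crude membership of "homogeneous region" derived from local variance
    mu_flat = np.clip(1.0 - local_var / (3 * local_var.mean() + 1e-12), 0.0, 1.0)
    g = 1.0 / (1.0 + (grad_s / (k + 1e-12)) ** 2)            # PM-type diffusivity
    return mu_flat * g     # small near edges / variable areas, ~1 in flat areas
```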

8 citations

References
Journal ArticleDOI
TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
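The "simple approximate implementation" mentioned in the abstract, marking edges at maxima of the gradient magnitude of a Gaussian-smoothed image, can be sketched as follows; a crude 3x3 local-maximum test stands in for true non-maximum suppression along the gradient direction, and the threshold is an assumption.

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude_edges(image, sigma=1.5, threshold=None):
    """Mark edge candidates where the gradient magnitude of the
    Gaussian-smoothed image is locally maximal and above a threshold."""
    gm = ndimage.gaussian_gradient_magnitude(image, sigma)
    if threshold is None:
        threshold = gm.mean() + gm.std()
    # crude local-maximum test over a 3x3 neighbourhood (not true
    # non-maximum suppression along the gradient direction)
    local_max = gm == ndimage.maximum_filter(gm, size=3)
    return local_max & (gm > threshold)
```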

28,073 citations

Journal ArticleDOI
TL;DR: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced, with the diffusion coefficient chosen to vary spatially so as to encourage intraregion smoothing rather than interregion smoothing.
Abstract: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
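For reference, the two diffusivity functions commonly associated with this scheme and one explicit four-neighbour update step look roughly like this; K and the step size are user-chosen, and the code is a sketch rather than the authors' implementation.

```python
import numpy as np

def pm_g_exp(s, K):      # Perona-Malik diffusivity, exponential form
    return np.exp(-(s / K) ** 2)

def pm_g_frac(s, K):     # Perona-Malik diffusivity, rational form
    return 1.0 / (1.0 + (s / K) ** 2)

def pm_step(u, K=10.0, dt=0.2, g=pm_g_frac):
    """One explicit Perona-Malik update using the four nearest neighbours."""
    dN = np.roll(u, -1, axis=0) - u     # differences to N, S, E, W neighbours
    dS = np.roll(u,  1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u,  1, axis=1) - u
    return u + dt * (g(np.abs(dN), K) * dN + g(np.abs(dS), K) * dS
                     + g(np.abs(dE), K) * dE + g(np.abs(dW), K) * dW)
```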

12,560 citations


"Edge Detectors Based Anisotropic Di..." refers background or methods in this paper

  • ...To alleviate this we introduce a more stable and robust edge indicator function using the edge detectors as discussed in [7]....


  • ...Anisotropic diffusion paradigm for gray images was started by Perona-Malik [1] in 1990’s....



  • ...By using a pixel-wise adaptive scheme we overcome the ambiguities of the original gradient based models like the Perona-Malik PDE....


  • ...To remedy this Perona-Malik (PM) in [1] introduced an edge indicator function g for reducing the diffusion near the edges via the nonlinear partial differential equation (PDE): u_t = div(g(|∇u|)∇u) (2) with a decreasing function g : R+ → R+ (g(0) = 1, lim_{s→∞} g(s) = 0) with the initial condition u(x, 0) = I(x)....


Book ChapterDOI
01 Jan 1987
TL;DR: Scale-space filtering is a method that describes signals qualitatively, managing the ambiguity of scale in an organized and natural way.
Abstract: The extrema in a signal and its first few derivatives provide a useful general-purpose qualitative description for many kinds of signals. A fundamental problem in computing such descriptions is scale: a derivative must be taken over some neighborhood, but there is seldom a principled basis for choosing its size. Scale-space filtering is a method that describes signals qualitatively, managing the ambiguity of scale in an organized and natural way. The signal is first expanded by convolution with Gaussian masks over a continuum of sizes. This "scale-space" image is then collapsed, using its qualitative structure, into a tree providing a concise but complete qualitative description covering all scales of observation. The description is further refined by applying a stability criterion, to identify events that persist over large changes in scale.
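A minimal sketch of the construction described above for a 1-D signal: convolve with Gaussians over a range of widths and record the extrema at each scale (the tree-building and stability criterion are omitted); the scale values and names are illustrative.

```python
import numpy as np
from scipy import ndimage

def scale_space_extrema(signal, sigmas=np.geomspace(1.0, 32.0, 20)):
    """Convolve a 1-D signal with Gaussians of increasing width and record,
    for each scale, where the first derivative changes sign (the extrema)."""
    extrema_per_scale = []
    for s in sigmas:
        smoothed = ndimage.gaussian_filter1d(signal.astype(float), s)
        d = np.diff(smoothed)
        extrema = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
        extrema_per_scale.append(extrema)
    return list(zip(sigmas, extrema_per_scale))
```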

3,008 citations


"Edge Detectors Based Anisotropic Di..." refers background in this paper

  • ...Assume that I is a noisy version of the true image u with noise field n of known mean and variance....


Journal ArticleDOI
TL;DR: It is shown that any image can be embedded in a one-parameter family of derived images (with resolution as the parameter) in essentially only one unique way, if the constraint that no spurious detail should be generated when the resolution is diminished is applied.
Abstract: In practice the relevant details of images exist only over a restricted range of scale. Hence it is important to study the dependence of image structure on the level of resolution. It seems clear enough that visual perception treats images on several levels of resolution simultaneously and that this fact must be important for the study of perception. However, no applicable mathematically formulated theory to deal with such problems appears to exist. In this paper it is shown that any image can be embedded in a one-parameter family of derived images (with resolution as the parameter) in essentially only one unique way if the constraint that no spurious detail should be generated when the resolution is diminished is applied. The structure of this family is governed by the well-known diffusion equation (a parabolic, linear, partial differential equation of the second order). As such the structure fits into existing theories that treat the front end of the visual system as a continuous stack of homogeneous layers, characterized by iterated local processing schemes. When resolution is decreased the image becomes less articulated because the extrema ("light and dark blobs") disappear one after the other. This erosion of structure is a simple process that is similar in every case. As a result any image can be described as a juxtaposed and nested set of light and dark blobs, wherein each blob has a limited range of resolution in which it manifests itself. The structure of the family of derived images permits a derivation of the sampling density required to sample the image at multiple scales of resolution. (ABSTRACT TRUNCATED AT 250 WORDS)
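In formulas (a standard fact about linear scale-space, not spelled out in this summary): the one-parameter family is generated by the heat equation u_t = Δu with u(x, 0) = I(x), whose solution at time t is a Gaussian blur of the original image, u(·, t) = G_σ * I with σ = √(2t). Decreasing the resolution therefore corresponds to moving to larger t along this single diffusion process.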

2,641 citations


"Edge Detectors Based Anisotropic Di..." refers methods in this paper

  • ...This had its original idea of a scale based image filtering based on Gaussian kernel as noted by Koenderink [2] and extended further as a scale-space methodology by Witkin [3]....


Journal ArticleDOI
TL;DR: In this article, a new version of the Perona and Malik theory for edge detection and image restoration is proposed, which keeps all the improvements of the original model and avoids its drawbacks.
Abstract: A new version of the Perona and Malik theory for edge detection and image restoration is proposed. This new version keeps all the improvements of the original model and avoids its drawbacks: it is proved to be stable in the presence of noise, with existence and uniqueness results. Numerical experiments on natural images are presented.
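The key modification in this regularized version, as it is usually stated in the literature, is to evaluate the edge-stopping function on a Gaussian-presmoothed gradient: u_t = div(g(|∇(G_σ * u)|)∇u) with u(x, 0) = I(x). Because g no longer reacts to noise-scale oscillations, existence, uniqueness, and stability can be proved.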

2,565 citations


"Edge Detectors Based Anisotropic Di..." refers background in this paper

  • ...|) can be used for edge identification, this also alleviates the theoretical instability [6] associated with the PM equation (2)....
