Journal ISSN: 1524-0703

# Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing

Elsevier BV

About: Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing is an academic journal published by Elsevier BV. The journal publishes mainly in the areas of image processing and polygon meshes. It has the ISSN identifier 1524-0703. Over its lifetime, the journal has published 1261 papers, which have received 86645 citations.

Topics: Image processing, Polygon mesh, Surface (mathematics), Rendering (computer graphics), Edge detection

##### Papers published on a yearly basis

##### Papers



01 Mar 1985 - Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing

TL;DR: Two methods of entropic thresholding proposed by Pun (Signal Process. 2, 1980, 223–237; Comput. Graphics Image Process. 16, 1981, 210–239) have been carefully and critically examined, and a new method with a sound theoretical foundation is proposed.

Abstract: Two methods of entropic thresholding proposed by Pun (Signal Process. 2, 1980, 223–237; Comput. Graphics Image Process. 16, 1981, 210–239) have been carefully and critically examined. A new method with a sound theoretical foundation is proposed. Examples are given on a number of real and artificially generated histograms.

3,551 citations


01 Sep 1987 - Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing

TL;DR: It is concluded that clipped ahe should become a method of choice in medical imaging and probably also in other areas of digital imaging, and that clipped ahe can be made adequately fast to be routinely applied in the normal display sequence.

Abstract: Adaptive histogram equalization (ahe) is a contrast enhancement method designed to be broadly applicable and having demonstrated effectiveness. However, slow speed and the overenhancement of noise it produces in relatively homogeneous regions are two problems. We report algorithms designed to overcome these and other concerns. These algorithms include interpolated ahe, to speed up the method on general purpose computers; a version of interpolated ahe designed to run in a few seconds on feedback processors; a version of full ahe designed to run in under one second on custom VLSI hardware; weighted ahe, designed to improve the quality of the result by emphasizing pixels' contribution to the histogram in relation to their nearness to the result pixel; and clipped ahe, designed to overcome the problem of overenhancement of noise contrast. We conclude that clipped ahe should become a method of choice in medical imaging and probably also in other areas of digital imaging, and that clipped ahe can be made adequately fast to be routinely applied in the normal display sequence.

3,041 citations
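The noise-limiting idea behind clipped ahe can be illustrated on a single contextual region: clip each histogram count at a limit, redistribute the clipped mass, and equalize with the modified histogram. A rough sketch (parameter names are illustrative, and the paper's tiled and interpolated versions add machinery around this core), assuming a non-constant tile of 8-bit values:

```python
import numpy as np

def clipped_equalize(tile, clip_limit=40, nbins=256):
    """Histogram-equalize one tile after clipping bin counts at clip_limit,
    which bounds the slope of the mapping and hence noise amplification."""
    hist, _ = np.histogram(tile, bins=nbins, range=(0, nbins))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // nbins  # redistribute excess
    cdf = np.cumsum(hist).astype(float)
    denom = max(cdf[-1] - cdf[0], 1.0)       # guard against a degenerate tile
    mapping = (cdf - cdf[0]) / denom * (nbins - 1)
    return np.rint(mapping[tile.astype(int)]).astype(np.uint8)
```

A low clip limit flattens the mapping toward the identity in homogeneous regions, which is exactly where plain ahe overenhances noise.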


01 Feb 1988 - Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing

TL;DR: This paper presents a survey of thresholding techniques and evaluates the performance of some automatic global thresholding methods using criterion functions such as uniformity and shape measures.

Abstract: In digital image processing, thresholding is a well-known technique for image segmentation. Because of its wide applicability to other areas of digital image processing, quite a number of thresholding methods have been proposed over the years. In this paper, we present a survey of thresholding techniques and update the earlier survey work by Weszka (Comput. Graphics Image Process. 7, 1978, 259–265) and Fu and Mui (Pattern Recognit. 13, 1981, 3–16). We attempt to evaluate the performance of some automatic global thresholding methods using criterion functions such as uniformity and shape measures. The evaluation is based on some real-world images.

2,771 citations
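One criterion function of the kind mentioned, a uniformity measure, scores a threshold by how homogeneous the two resulting classes are. The survey's exact formula is not reproduced here, so this is an illustrative variant:

```python
import numpy as np

def uniformity(image, t):
    """1 minus the normalized sum of within-class variances for threshold t;
    scores closer to 1 mean more homogeneous foreground and background.
    Assumes a non-constant image so the normalizer is positive."""
    image = np.asarray(image, dtype=float)
    fg, bg = image[image > t], image[image <= t]
    within = sum(r.var() * r.size for r in (fg, bg) if r.size)
    worst = image.size * (image.max() - image.min()) ** 2 / 4.0
    return 1.0 - within / worst
```

A threshold that separates the gray-level modes leaves both classes tight, so it scores higher than one placed inside a mode.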


01 Jan 1987 - Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing

TL;DR: A neural network architecture for the learning of recognition categories is derived which circumvents the noise, saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.

Abstract: A neural network architecture for the learning of recognition categories is derived. Real-time network dynamics are completely characterized through mathematical analysis and computer simulations. The architecture self-organizes and self-stabilizes its recognition codes in response to arbitrary orderings of arbitrarily many and arbitrarily complex binary input patterns. Top-down attentional and matching mechanisms are critical in self-stabilizing the code learning process. The architecture embodies a parallel search scheme which updates itself adaptively as the learning process unfolds. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes without any search. Thus recognition time does not grow as a function of code complexity. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. These invariant properties emerge in the form of learned critical feature patterns, or prototypes. The architecture possesses a context-sensitive self-scaling property which enables its emergent critical feature patterns to form. They detect and remember statistically predictive configurations of featural elements which are derived from the set of all input patterns that are ever experienced. Four types of attentional process (priming, gain control, vigilance, and intermodal competition) are mechanistically characterized. Top-down priming and gain control are needed for code matching and self-stabilization. Attentional vigilance determines how fine the learned categories will be. If vigilance increases due to an environmental disconfirmation, then the system automatically searches for and learns finer recognition categories.
A new nonlinear matching law (the ⅔ Rule) and new nonlinear associative laws (the Weber Law Rule, the Associative Decay Rule, and the Template Learning Rule) are needed to achieve these properties. All the rules describe emergent properties of parallel network interactions. The architecture circumvents the noise, saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.

2,462 citations
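The search-then-resonate behaviour described above (try categories in order of activation, accept the first that passes the vigilance test, otherwise recruit a new category) can be sketched for binary patterns roughly as follows. This is a drastically simplified, non-dynamical caricature of the architecture, with illustrative parameter names:

```python
import numpy as np

def art_sketch(patterns, vigilance=0.7, beta=1e-6):
    """Categorize binary patterns ART-style: each input resonates with the
    best-matching prototype that passes the vigilance test (and refines it
    by intersection), or else founds a new category."""
    prototypes, labels = [], []
    for x in patterns:
        x = np.asarray(x, dtype=bool)
        # order categories by a simple choice function: overlap / prototype size
        order = sorted(range(len(prototypes)),
                       key=lambda j: -(prototypes[j] & x).sum()
                                      / (beta + prototypes[j].sum()))
        for j in order:
            match = (prototypes[j] & x).sum() / max(x.sum(), 1)
            if match >= vigilance:                 # resonance: accept category j
                prototypes[j] = prototypes[j] & x  # learn: keep shared features
                labels.append(j)
                break
        else:                                      # search exhausted: new category
            prototypes.append(x.copy())
            labels.append(len(prototypes) - 1)
    return labels, prototypes
```

Raising `vigilance` forces finer categories, mirroring the abstract's point that a vigilance increase triggers search for finer recognition codes.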


01 Apr 1985 - Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing

TL;DR: Two border-following algorithms are proposed for the topological analysis of digitized binary images: the first determines the surroundness relations among the borders of a binary image, and the second follows only the outermost borders.

Abstract: Two border following algorithms are proposed for the topological analysis of digitized binary images. The first one determines the surroundness relations among the borders of a binary image. Since the outer borders and the hole borders have a one-to-one correspondence to the connected components of 1-pixels and to the holes, respectively, the proposed algorithm yields a representation of a binary image, from which one can extract some sort of features without reconstructing the image. The second algorithm, which is a modified version of the first, follows only the outermost borders (i.e., the outer borders which are not surrounded by holes). These algorithms can be effectively used in component counting, shrinking, and topological structural analysis of binary images, when a sequential digital computer is used.

2,303 citations
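The paper's algorithms also assign border labels and record the surroundness hierarchy; the basic act of following one border can be illustrated with plain Moore-neighbour tracing (a simpler relative, not the paper's exact procedure). The sketch assumes `start` is the first 1-pixel found by a row-major scan, so the border is entered from the west:

```python
import numpy as np

def follow_border(img, start):
    """Trace the outer border containing `start` by clockwise
    Moore-neighbour following, returning the border pixels in order."""
    # 8-neighbour offsets, clockwise starting from "up"
    nbrs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    border, cur = [start], start
    prev_dir = 6  # index of the backtrack direction: we entered from the west
    while True:
        for k in range(8):
            d = (prev_dir + 1 + k) % 8   # resume clockwise after the backtrack
            y, x = cur[0] + nbrs[d][0], cur[1] + nbrs[d][1]
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1] and img[y, x]:
                prev_dir = (d + 4) % 8   # opposite direction points back at cur
                cur = (y, x)
                break
        else:
            return border                # isolated pixel: border is just start
        if cur == start:
            return border                # closed the loop
        border.append(cur)
```

OpenCV's `cv2.findContours` implements the full border-following analysis of this paper (Suzuki and Abe), including the surroundness hierarchy of outer and hole borders.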