Showing papers in "Pattern Recognition Letters in 1990"
••
TL;DR: This work proposes a new method for curve detection that has the advantages of small storage, high speed, an infinite parameter space and arbitrarily high resolution; preliminary experiments have shown the new method to be quite effective.
1,080 citations
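The storage and resolution advantages come from replacing the fixed, quantized accumulator array of the classical Hough transform with a dynamically built table, in the spirit of the randomized Hough transform. A minimal sketch for straight lines (the function name, trial count, and rounding precision are illustrative choices, not the paper's):

```python
import random
from collections import Counter

def rht_line(points, trials=2000, digits=2):
    """Randomized curve-detection sketch: sample point pairs, solve the
    line parameters exactly, and score them in a dynamic table (a dict),
    so no fixed-resolution accumulator array is ever allocated."""
    scores = Counter()
    for _ in range(trials):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue  # minimal sketch: skip vertical lines
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        scores[(round(slope, digits), round(intercept, digits))] += 1
    return scores.most_common(1)[0][0]
```

Because parameters are solved exactly from point samples, the table only ever holds parameter values that actually occur, which is what permits the unbounded parameter space.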
••
TL;DR: The gray value distribution of the runs is proposed to be used to define two new features, viz., low gray level run emphasis (LGRE) and high gray level run emphasis (HGRE).
443 citations
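These two features weight each run by the inverse square or the square of its gray level. A sketch using the standard definitions, with a single scan line standing in for a full 2-D run-length analysis (real implementations accumulate runs over several directions):

```python
import numpy as np

def run_length_matrix(row, levels):
    """Run-length matrix of one scan line: P[g, r-1] counts runs of
    gray level g (0-based) with length r. One row keeps the sketch
    short; a full implementation scans the 2-D image."""
    P = np.zeros((levels, len(row)))
    i = 0
    while i < len(row):
        j = i
        while j < len(row) and row[j] == row[i]:
            j += 1
        P[row[i], j - i - 1] += 1
        i = j
    return P

def lgre_hgre(P):
    """Low / high gray-level run emphasis: weight each run by 1/g^2 or
    g^2 (gray levels counted from 1), normalized by the run count."""
    g = np.arange(1, P.shape[0] + 1)[:, None]
    n_runs = P.sum()
    lgre = (P / g ** 2).sum() / n_runs
    hgre = (P * g ** 2).sum() / n_runs
    return lgre, hgre
```

LGRE is large when dark runs dominate the image; HGRE is large when bright runs dominate.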
••
TL;DR: It is shown that as the temperature approaches zero, the algorithm becomes the basic ISODATA algorithm and the method is independent of the initial choice of cluster means.
393 citations
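The temperature-controlled behaviour described here can be sketched with a deterministic-annealing style of clustering, where soft memberships exp(-d²/T) harden into nearest-mean (ISODATA-style) assignments as T approaches zero. This is only an illustration of the annealing idea, not the paper's exact algorithm; the schedule constants are arbitrary:

```python
import numpy as np

def annealed_clustering(X, k, T0=10.0, cool=0.8, steps=60, seed=0):
    """Annealed clustering sketch: memberships are softmax weights
    exp(-d^2/T); lowering T sharpens them toward hard assignments,
    reducing sensitivity to the initial cluster means."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=k, replace=False)]
    T = T0
    for _ in range(steps):
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        d2 -= d2.min(axis=1, keepdims=True)       # stabilize the exponent
        u = np.exp(-d2 / T)
        u /= u.sum(axis=1, keepdims=True)         # soft memberships per point
        means = (u.T @ X) / (u.sum(axis=0)[:, None] + 1e-12)
        T = max(T * cool, 1e-6)
    return means
```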
••
TL;DR: It is demonstrated that the root cause of this shortcoming is the subsampling introduced at the higher levels of the pyramid, and that multi-resolution algorithms in general have an inherent difficulty in analyzing elongated objects and ensuring connectivity.
139 citations
••
TL;DR: This paper examines feature classification based on local energy detection and shows that local energy measures are intrinsically capable of making this classification because of the use of odd and even filters.
127 citations
••
TL;DR: The transform space obtained by this algorithm contains less extraneous data and more significant maxima, thus making it easier to extract the desired parameters from it.
102 citations
••
TL;DR: A new minimization technique, tree annealing, which finds the global minimum, is presented, along with experimental results for histograms with two and three modes.
89 citations
••
TL;DR: The variation in the membership function is seen to be restricted by bound functions, making the segmentation method more flexible yet still effective; the method can be viewed as a weighted moving-average technique, with greyness ambiguity serving as the weights.
88 citations
••
TL;DR: A model for genetic algorithms with semantic nets is derived, in which the relationships between concepts are depicted as a semantic net; predicates between pairs of concepts are resolved by averaging the combined predicate values of the objects attached to the concept at the tail of the arc representing the predicate.
81 citations
••
TL;DR: A novel fast method for calculating an approximate solution of the eikonal equation is proposed, along with the construction of a matte 3-D surface that, when illuminated perpendicularly and imaged by an eye or camera, yields grey values that render the image.
77 citations
••
TL;DR: A statistical property called agreement is recommended as an indication of periodic structure in cooccurrence histograms, measured by a κ statistic.
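The κ statistic referred to here is Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal computation from a square contingency table (a sketch of the statistic itself, not of the paper's full periodicity test):

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square contingency table: the diagonal mass
    (observed agreement) corrected for chance agreement derived from
    the row and column marginals."""
    c = np.asarray(table, dtype=float)
    n = c.sum()
    p_obs = np.trace(c) / n
    p_exp = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)
```

κ is 1 for perfect agreement and 0 when the agreement is exactly what the marginals would produce by chance.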
••
TL;DR: The proposed algorithm for providing both fuzzy and nonfuzzy segmentation based on these measures is found to be successful even for the input images containing multiple objects or an elongated object, where the existing fuzzy compactness based algorithm fails.
••
TL;DR: A new transportable image processing environment is presented, which consists of a standard C interpreter, a reconfigurable window manager, a command expander, a library handler, and an image processing library (AIM).
••
TL;DR: It is proved that R induces a complete compact distributive lattice over the set of B's (and hence the set of d(B)'s) in 2-D.
••
TL;DR: The algorithm does not need iterative visual interaction or prior knowledge of image statistics in order to select the transformation function for optimal enhancement, and a quantitative measure for evaluating enhancement quality is provided based on fuzzy geometry.
••
TL;DR: A method is presented for depth recovery through the analysis of scene sharpness across changing focus positions, modeling a defocused image as the application of a low-pass filter to a properly focused image of the same scene.
••
TL;DR: The Graham scan is shown to be a fundamental backtracking technique in computational geometry; it was originally designed to compute the convex hull of a set of points in the plane and has since found application in several different contexts.
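The backtracking step is the scan's defining move: whenever the next point makes the last accepted point a non-left turn, that point is popped. A sketch of the idea, using the monotone-chain variant (sorting by coordinates rather than by polar angle):

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); > 0 for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Convex hull via the Graham-scan backtracking idea: march through
    sorted points, popping any point that breaks convexity."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()  # backtrack: h[-1] cannot be a hull vertex
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]  # counter-clockwise hull
```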
••
TL;DR: A theorem is proved to show that a super-knight's distance is a metric if and only if the underlying super-knight's move is well-behaved in a specific sense.
••
TL;DR: A modification of ESPTA, called MESPTA, is presented, which preserves the connectivity of the image while maintaining its 3-D shape.
••
TL;DR: To describe luminance changes in images by means of their contours, a number of parameters (luminance, contrast, sharpness, width) are computed at each contour point, allowing a faithful reconstruction of the original image.
••
TL;DR: This definition is based on the curvature maxima computed along the object's contours, and some principles of this mathematical model are used to implement a new method of parametrisable skeletonization of binary images.
••
TL;DR: This paper presents an adaptive contrast enhancement technique based on a hierarchical image representation that adaptively stretches local image contrast at all levels of resolution, resulting in an overall contrast enhancement of the recombined image.
••
TL;DR: A new hierarchical image description based on a morphological skeleton representation that combines the construction of the skeleton itself and the determination of the hierarchical relation between its components is presented.
••
TL;DR: The first algorithm to handle a defined class of curved-surface objects: it automatically computes the partition of the Gaussian sphere, and thereby the aspect graph, for solids of revolution defined as Right, Circular, Straight, Homogeneous Generalized Cylinders.
••
TL;DR: A method is described for calibrating a colour video camera using the Macbeth colour chart under fixed lighting conditions in the L*u*v* colour space, and should be useful for comparing and analysing colour images.
••
TL;DR: An efficient algorithm is proposed for computing the rank orders of pel gray levels over running windows in a 2-D image array that may be used for min, max or median filtering as well as for image transformations involving rank orders.
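The key idea is to update an ordered record of the window's gray levels as the window slides, instead of re-sorting every window from scratch. A 1-D sketch for median filtering (the paper's algorithm works over 2-D windows and supports arbitrary rank orders; one scan line keeps the sketch short):

```python
import bisect

def running_median(row, w):
    """Median over a running window of width w along one scan line,
    maintained by deleting the outgoing pel and inserting the incoming
    pel into a sorted buffer."""
    buf = sorted(row[:w])
    out = [buf[w // 2]]
    for i in range(w, len(row)):
        buf.pop(bisect.bisect_left(buf, row[i - w]))  # drop outgoing pel
        bisect.insort(buf, row[i])                    # insert incoming pel
        out.append(buf[w // 2])
    return out
```

Replacing the index `w // 2` with `0` or `w - 1` gives min or max filtering from the same buffer.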
••
TL;DR: It is shown that the Euler characteristic of a binary digital image—the number of components minus the number of holes (components of 0's surrounded by 1's)—is locally computable; in fact, it can be computed by counting the numbers of occurrences of various local patterns of 1's in the image.
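The locally-computable claim can be checked directly with Gray's quad-count formula, which obtains the Euler number purely from counts of 2×2 local patterns (a sketch; the function name and the zero-padding convention are mine):

```python
import numpy as np

def euler_number(img, connectivity=4):
    """Euler characteristic (components minus holes) of a binary image
    from 2x2 pattern counts alone, via Gray's quad-count formula."""
    a = np.pad(np.asarray(img, dtype=int), 1)
    s = a[:-1, :-1] + a[:-1, 1:] + a[1:, :-1] + a[1:, 1:]  # 2x2 window sums
    q1 = np.count_nonzero(s == 1)                          # exactly one 1
    q3 = np.count_nonzero(s == 3)                          # exactly three 1's
    qd = np.count_nonzero((s == 2) & (a[:-1, :-1] == a[1:, 1:]))  # diagonal pairs
    sign = 1 if connectivity == 4 else -1  # 8-connectivity flips the QD term
    return (q1 - q3 + sign * 2 * qd) // 4
```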
••
TL;DR: A technique for colour discrimination based on template matching with tristimulus colour fractions that overcomes problems of shading and can be used to produce histograms or colour-bin descriptions of images for direct use in recognition or error detection.
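Tristimulus colour fractions normalize each channel by the total intensity, which is why uniform shading drops out. A minimal sketch of the normalization (the black-pixel convention is an assumption of mine):

```python
def tristimulus_fractions(r, g, b):
    """Tristimulus colour fractions: each channel divided by the total,
    so scaling all channels by the same factor (uniform shading)
    leaves the fractions unchanged."""
    total = r + g + b
    if total == 0:
        return (1 / 3, 1 / 3, 1 / 3)  # convention for pure black
    return (r / total, g / total, b / total)
```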
••
TL;DR: Two new fuzzy cluster validity functionals (minimum and mean hard tendencies) are presented, based on the analysis of the hard tendency of the fuzzy classification generated by the fuzzy c-means algorithm, using the bootstrap technique.
••
TL;DR: It is shown that the branch and bound algorithm guarantees the optimal feature subset without evaluating all possible feature subsets, if the criterion function used satisfies the ‘monotonicity’ property.
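Under the monotonicity property (J(S) ≤ J(T) whenever S ⊆ T), any branch whose criterion value already falls below the best complete subset can be discarded without enumerating its descendants. A tiny sketch (the criterion J and the feature weights in the usage are illustrative):

```python
def branch_and_bound(n, d, J):
    """Optimal d-of-n feature subset under a monotone criterion J
    (J(S) <= J(T) whenever S is a subset of T). Features are dropped
    in increasing index order so every subset is visited at most once."""
    best = [float('-inf'), None]
    def search(subset, next_drop):
        if J(subset) <= best[0]:
            return  # prune: deleting more features can only lower J
        if len(subset) == d:
            best[:] = [J(subset), subset]
            return
        for f in range(next_drop, n):
            if f in subset:
                search(subset - {f}, f + 1)
    search(frozenset(range(n)), 0)
    return best[1]
```

A monotone criterion such as a sum of non-negative per-feature scores guarantees the returned subset is globally optimal despite the pruning.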