
Showing papers on "Feature extraction published in 1987"


Proceedings ArticleDOI
R.K. Lenz, R. Tsai
01 Mar 1987
TL;DR: This paper describes techniques for calibrating certain intrinsic camera parameters for machine vision and reports accuracy and reproducibility of the calibrated parameters, as well as the improvement in actual 3D measurement due to center calibration.
Abstract: This paper describes techniques for calibrating certain intrinsic camera parameters for machine vision. The parameters to be calibrated are the horizontal scale factor, i.e. the factor that relates the sensor element spacing of a discrete array camera to the picture element spacing after sampling by the image acquisition circuitry, and the image center, i.e. the intersection of the optical axis with the camera sensor. The scale factor calibration uses a 1D-FFT and is accurate and efficient. It also permits the use of only one coplanar set of calibration points for general camera calibration. Three groups of techniques for center calibration are presented: Group I requires using a laser and a four-degree of freedom adjustment of its orientation, but is simplest in concept, and is accurate and reproducible. Group II is simple to perform, but is less accurate than the other two. The most general Group III is accurate and efficient, but requires accurate image feature extraction of calibration points with known 3D coordinates. A feasible setup is described. Results of real experiments are presented and compared with theoretical predictions. Accuracy and reproducibility of the calibrated parameters are reported, as well as the improvement in actual 3D measurement due to center calibration.
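The scale-factor step lends itself to a compact illustration. Below is a hypothetical sketch (not the authors' code) of estimating the horizontal scale factor from one scan line of a vertical-stripe calibration target via a 1D FFT; the function and parameter names are illustrative.

```python
# Hypothetical sketch of 1D-FFT scale-factor estimation (names are illustrative).
import numpy as np

def estimate_scale_factor(row, true_freq_per_sensel):
    """Estimate the horizontal scale factor from one scan line of a
    vertical-stripe calibration target.

    row                  -- 1D array of pixel intensities
    true_freq_per_sensel -- stripe frequency in cycles per sensor element
    """
    row = row - row.mean()                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(row))
    freqs = np.fft.rfftfreq(row.size)      # cycles per *pixel*
    peak = freqs[np.argmax(spectrum)]      # dominant stripe frequency
    # scale factor = ratio of apparent (pixel) to true (sensel) frequency
    return peak / true_freq_per_sensel

# Example: a synthetic scan line sampled with a scale factor of ~1.05
x = np.arange(512)
row = 1.0 + 0.5 * np.cos(2 * np.pi * 0.1 * 1.05 * x)
print(estimate_scale_factor(row, true_freq_per_sensel=0.1))  # ~1.05
```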

248 citations


Patent
30 Jun 1987
TL;DR: In this article, a method and system is presented for detecting and displaying abnormal anatomic regions in a digital X-ray image, wherein a single-projection digital X-ray image is processed to obtain signal-enhanced image data with a maximized signal-to-noise ratio (SNR) and signal-suppressed image data with a suppressed SNR.
Abstract: A method and system for detecting and displaying abnormal anatomic regions existing in a digital X-ray image, wherein a single projection digital X-ray image is processed to obtain signal-enhanced image data with a maximum signal-to-noise ratio (SNR) and is also processed to obtain signal-suppressed image data with a suppressed SNR. Then, difference image data are formed by subtraction of the signal-suppressed image data from the signal-enhanced image data to remove low-frequency structured anatomic background, which is basically the same in both the signal-suppressed and signal-enhanced image data. Once the structured background is removed, feature extraction is performed. For the detection of lung nodules, pixel thresholding is performed, followed by circularity and/or size testing of contiguous pixels surviving thresholding. Threshold levels are varied, and the effect of varying the threshold on circularity and size is used to detect nodules. For the detection of mammographic microcalcifications, pixel thresholding and contiguous pixel area thresholding are performed. Clusters of suspected abnormalities are then detected.
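A rough sketch of the detection pipeline described, using Gaussian filters as stand-ins for the patent's signal-enhancing and signal-suppressing processes; all filter sizes and thresholds are placeholders.

```python
# Illustrative sketch of the difference-image nodule search; filter sizes
# and thresholds are placeholders, not the patent's actual values.
import numpy as np
from scipy import ndimage

def detect_nodules(image, sigma_enhance=2.0, sigma_suppress=8.0,
                   thresholds=(0.5, 0.6, 0.7), min_area=20, min_circ=0.6):
    enhanced = ndimage.gaussian_filter(image, sigma_enhance)     # signal-enhanced
    suppressed = ndimage.gaussian_filter(image, sigma_suppress)  # signal-suppressed
    diff = enhanced - suppressed      # structured background cancels out

    candidates = []
    for t in thresholds:              # vary the threshold level
        mask = diff > t * diff.max()
        labels, n = ndimage.label(mask)
        for i in range(1, n + 1):
            region = labels == i
            area = region.sum()
            if area < min_area:
                continue
            # crude perimeter estimate: boundary pixels removed by erosion
            perimeter = area - ndimage.binary_erosion(region).sum()
            circularity = 4 * np.pi * area / max(perimeter, 1) ** 2
            if circularity >= min_circ:
                candidates.append((t, ndimage.center_of_mass(region)))
    return candidates
```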

209 citations


Journal ArticleDOI
TL;DR: Computer vision algorithms that recognize and locate partially occluded objects using a generate-test paradigm that iteratively generates and tests hypotheses for compatibility with the scene until it identifies all the scene objects.
Abstract: We present computer vision algorithms that recognize and locate partially occluded objects. The scene may contain unknown objects that may touch or overlap giving rise to partial occlusion. The algorithms revolve around a generate-test paradigm. The paradigm iteratively generates and tests hypotheses for compatibility with the scene until it identifies all the scene objects. Polygon representations of the object's boundary guide the hypothesis generation scheme. Choosing the polygon representation turns out to have powerful consequences in all phases of hypothesis generation and verification. Special vertices of the polygon called "corners" help detect and locate the model in the scene. Polygon moment calculations lead to estimates of the dissimilarity between scene and model corners, and determine the model corner location in the scene. An association graph represents the matches and compatibility constraints. Extraction of the largest set of mutually compatible matches from the association graph forms a model hypothesis. Using a coordinate transform that maps the model onto the scene, the hypothesis gives the proposed model's location and orientation. Hypothesis verification requires checking for region consistency. The union of two polygons and other polygon operations combine to measure the consistency of the hypothesis with the scene. Experimental results give examples of all phases of recognizing and locating the objects.
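The hypothesis-generation step reduces to finding the largest set of mutually compatible corner matches in the association graph, i.e. a maximum clique. A brute-force sketch under that reading (suitable only for small graphs):

```python
# Minimal sketch of the association-graph step: nodes are (scene corner,
# model corner) matches; edges join pairwise-compatible matches; the largest
# mutually compatible set is a maximum clique. Brute force, small graphs only.
from itertools import combinations

def largest_compatible_set(matches, compatible):
    """matches    -- list of (scene_corner, model_corner) pairs
    compatible -- function(match_a, match_b) -> bool"""
    for size in range(len(matches), 0, -1):
        for subset in combinations(matches, size):
            if all(compatible(a, b) for a, b in combinations(subset, 2)):
                return list(subset)    # a model hypothesis
    return []
```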

148 citations


Proceedings ArticleDOI
12 Oct 1987
TL;DR: A primary advantage of this on-line learning algorithm is that the number of mistakes that it makes is relatively little affected by the presence of large numbers of irrelevant attributes in the examples.
Abstract: Valiant and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss on-line learning of these functions. In on-line learning, the learner responds to each example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in the on-line setting is the number of mistakes the learner makes. For suitable classes of functions, on-line learning algorithms are available that make a bounded number of mistakes, with the bound independent of the number of examples seen by the learner. We present one such algorithm, which learns disjunctive Boolean functions, and variants of the algorithm for learning other classes of Boolean functions. The algorithm can be expressed as a linear-threshold algorithm. A primary advantage of this algorithm is that the number of mistakes that it makes is relatively little affected by the presence of large numbers of irrelevant attributes in the examples; we show that the number of mistakes grows only logarithmically with the number of irrelevant attributes. At the same time, the algorithm is computationally time and space efficient.
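The linear-threshold algorithm described corresponds to what became known as Littlestone's Winnow. A minimal sketch for learning monotone disjunctions with multiplicative updates (parameter values are illustrative):

```python
# A minimal sketch of the linear-threshold scheme described (a Winnow-style
# learner for monotone disjunctions): multiplicative updates keep the mistake
# bound logarithmic in the number of irrelevant attributes.
def winnow_learn(examples, n, alpha=2.0):
    """examples -- iterable of (x, y) with x a 0/1 tuple of length n,
                   y the correct 0/1 label
    n        -- number of Boolean attributes"""
    w = [1.0] * n
    theta = n / 2                    # fixed threshold
    mistakes = 0
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != y:                # update only on mistakes
            mistakes += 1
            if y == 1:               # promotion: scale up active weights
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            else:                    # demotion: scale down active weights
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes
```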

130 citations


Journal ArticleDOI
TL;DR: In this paper, a digital image processing system is described to facilitate objective inspection and classification of cereal grains using a charge-coupled device (CCD) video camera interfaced to a custom-built data-acquisition system.

96 citations


Journal ArticleDOI
TL;DR: A new approach to texture feature extraction from a cooccurrence matrix is presented; using Brodatz's textures, the proposed features are evaluated and compared with those suggested by Conners et al. (1984).
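For context, a sketch of the standard construction such features start from: a normalized cooccurrence matrix and a few classical statistics computed from it. The paper's specific proposed features are not reproduced here.

```python
# Sketch of texture features from a gray-level cooccurrence matrix; only the
# standard construction is shown, not the paper's proposed features.
import numpy as np

def cooccurrence(img, dx=1, dy=0, levels=16):
    # quantize gray levels into the requested number of bins
    q = (img.astype(float) * levels / (img.max() + 1)).astype(int)
    C = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            C[q[y, x], q[y + dy, x + dx]] += 1
    return C / C.sum()               # normalize to a joint probability

def texture_features(P):
    i, j = np.indices(P.shape)
    return {"energy":   (P ** 2).sum(),
            "contrast": ((i - j) ** 2 * P).sum(),
            "entropy":  -(P[P > 0] * np.log(P[P > 0])).sum()}
```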

62 citations


Journal ArticleDOI
01 Jan 1987
TL;DR: A two-pass relaxation method is developed for matching features extracted from successive depth maps, based on the principle of conservation of distance and angle between features during rigid motion.
Abstract: The motion of a three-dimensional object is determined from a sequence of stereo images by extracting three-dimensional features, establishing correspondences between these features, and finally, computing the rigid motion parameters. Three-dimensional features are extracted from the depth map of a scene. A two-pass relaxation method is developed for matching features extracted from successive depth maps. In each iteration, geometrical relationships between a feature and its neighbors in one map are compared to those between a candidate in the other map and its neighbors to update the matching probability of the candidate. The comparison of the geometrical relationship is based on the principle of conservation of distance and angle between features during rigid motion. The use of three-dimensional features allows one to find the rotation and translation components of motion separately via solving linear equations. Experimental results using several sets of real data are presented to illustrate results and difficulties.
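One pass of the relaxation update might look like the following toy sketch: a candidate match gains support when neighbouring features can be matched with conserved inter-feature distances, as rigid motion requires. The tolerance and support rule are illustrative, not the authors' exact formulation.

```python
# Toy sketch of one relaxation pass over matching probabilities.
import numpy as np

def relaxation_pass(P, feats1, feats2, tol=0.05):
    """P      -- current matching probabilities, shape (n1, n2)
    feats1 -- (n1, 3) 3D feature positions in the first depth map
    feats2 -- (n2, 3) positions in the second depth map"""
    n1, n2 = P.shape
    Q = np.zeros_like(P)
    for i in range(n1):
        for j in range(n2):
            support = 0.0
            for k in range(n1):          # neighbours of feature i
                if k == i:
                    continue
                d1 = np.linalg.norm(feats1[i] - feats1[k])
                best = 0.0               # best-supporting match of neighbour k
                for l in range(n2):
                    if l == j:
                        continue
                    d2 = np.linalg.norm(feats2[j] - feats2[l])
                    # conservation of distance under rigid motion
                    if abs(d1 - d2) < tol * max(d1, 1e-9):
                        best = max(best, P[k, l])
                support += best
            Q[i, j] = P[i, j] * (1.0 + support)
    return Q / (Q.sum(axis=1, keepdims=True) + 1e-12)  # renormalize per feature
```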

58 citations


Journal ArticleDOI
TL;DR: Algorithms for computing image transforms and features such as projections along linear patterns, convex hull approximations, Hough transform for line detection, diameter, moments, and principal components suitable for implementation in image analysis pipeline architectures are presented.
Abstract: In this correspondence, some image transforms and features such as projections along linear patterns, convex hull approximations, Hough transform for line detection, diameter, moments, and principal components will be considered. Specifically, we present algorithms for computing these features which are suitable for implementation in image analysis pipeline architectures. In particular, random access memories and other dedicated hardware components which may be found in the implementation of classical techniques are no longer needed in our algorithms. The effectiveness of our approach is demonstrated by running some of the new algorithms in conventional short-pipelines for image analysis. In related papers, we have shown a pipeline architecture organization called PPPE (Parallel Pipeline Projection Engine), which unleashes the power of projection-based computer vision, image processing, and computer graphics. In the present correspondence, we deal with just a few of the many algorithms which can be supported in PPPE. These algorithms illustrate the use of the Radon transform as a tool for image analysis.
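A crude sketch of projection-based line detection in the spirit described; scipy's rotate performs the resampling here, whereas the paper computes the projections in a pipeline without random-access memory.

```python
# Sketch of projection-based line detection (a crude Radon transform).
import numpy as np
from scipy import ndimage

def radon_projections(img, angles):
    """Return one summed projection per angle."""
    return {a: ndimage.rotate(img, a, reshape=False, order=1).sum(axis=0)
            for a in angles}

def strongest_line(img, angles=range(0, 180, 2)):
    proj = radon_projections(img, angles)
    # the brightest projection bin locates the dominant line
    best = max(((a, p.argmax(), p.max()) for a, p in proj.items()),
               key=lambda t: t[2])
    return best       # (angle, offset, strength)
```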

43 citations


Proceedings ArticleDOI
01 Jan 1987
TL;DR: An efficient distributed processing scheme has been developed for visual road boundary tracking by 'VaMoRs', a testbed vehicle for autonomous mobility and computer vision, and the system structure and the techniques applied for real-time scene analysis are presented along with experimental results.
Abstract: An efficient distributed processing scheme has been developed for visual road boundary tracking by 'VaMoRs', a testbed vehicle for autonomous mobility and computer vision. Ongoing work described here is directed to improving the robustness of the road boundary detection process in the presence of shadows, ill-defined edges and other disturbing real-world effects. The system structure and the techniques applied for real-time scene analysis are presented along with experimental results. All subfunctions of road boundary detection for vehicle guidance, such as edge extraction, feature aggregation and camera pointing control, are executed in parallel by an onboard multiprocessor system. On the image processing level, local oriented edge extraction is performed in multiple 'windows', tightly controlled from a hierarchically higher, model-based level. The interpretation process, involving a geometric road model and the observer's relative position to the road boundaries, is capable of coping with ambiguity in measurement data. By using only selected measurements to update the model parameters, even high noise levels can be dealt with and misleading edges rejected.
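The windowed, model-guided edge extraction could be sketched as follows: only edges whose gradient orientation agrees with the boundary orientation predicted by the geometric road model are kept, which is how misleading edges can be rejected. Window placement, thresholds and tolerances are placeholders.

```python
# Illustrative sketch of windowed, model-guided oriented edge extraction.
import numpy as np
from scipy import ndimage

def oriented_edges(img, window, predicted_angle, ang_tol=15.0, mag_thr=30.0):
    y0, y1, x0, x1 = window           # window placed by the model-based level
    patch = img[y0:y1, x0:x1].astype(float)
    gy = ndimage.sobel(patch, axis=0)
    gx = ndimage.sobel(patch, axis=1)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0    # edge orientation
    # keep strong edges whose orientation matches the model prediction
    consistent = (mag > mag_thr) & \
                 (np.abs((ang - predicted_angle + 90) % 180 - 90) < ang_tol)
    return np.argwhere(consistent) + (y0, x0)       # edge pixel coordinates
```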

41 citations


Proceedings ArticleDOI
01 Jan 1987
TL;DR: A heterogeneous algebra is defined which is capable of expressing all image-to-image transformations that can be defined in terms of finite algorithmic procedures and provides a common mathematical environment for image processing algorithm development, comparison, performance characterization, and optimization.
Abstract: Current image processing algorithm development is not based on an efficient mathematical structure that is specifically designed for image manipulation, feature extraction and analysis. Vast increases in image processing activities in such areas as robotics, medicine, and expert computer vision systems have resulted in an immense proliferation of different operations and architectures that all too often perform similar or identical tasks. Due to this ever-increasing diversity of image processing architectures and languages, several attempts have been made to develop a unified algebraic approach to image processing. However, these attempts have been only partially successful. In this paper, we define a heterogeneous algebra (in the sense of G. Birkhoff) which is capable of expressing all image-to-image transformations that can be defined in terms of finite algorithmic procedures. Conversely, for any image-to-image transformation defined as a finite sequence of terms in the image algebra, there is a structured program scheme that computes the transformation. Consequently, this algebra provides a common mathematical environment for image processing algorithm development, comparison, performance characterization, and optimization.
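A toy rendering of the idea, not the algebra's formal definition: images as value arrays, with pointwise operations and a generalized template (neighbourhood) operation that compose into image-to-image transformations.

```python
# Toy flavour of a heterogeneous image algebra: a handful of operations
# composing into image-to-image transformations.
import numpy as np
from scipy import ndimage

def pointwise(op, *imgs):             # pointwise operations on images
    return op(*imgs)

def template_op(img, template, reduce_op=np.max):
    """Generalized template operation: combine each neighbourhood with a
    template and reduce; max-reduce gives a dilation-like operator."""
    def local(values):
        return reduce_op(values + template.ravel())
    return ndimage.generic_filter(img.astype(float), local,
                                  size=template.shape)

# Example compositions in the algebra
img = np.random.rand(64, 64)
blur = ndimage.uniform_filter(img, 3)
sharp = pointwise(lambda a, b: 2 * a - b, img, blur)            # unsharp mask
grad = pointwise(np.subtract, template_op(img, np.zeros((3, 3))), img)  # dilation residue
```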

23 citations


Proceedings ArticleDOI
01 Jun 1987
TL;DR: In this article, the authors describe an expository manner ongoing research concerned with the identification and extraction of topographic features relevant to automated navigation algorithms for an autonomous underwater vehicle, presented within the framework of the extremal point topography network (EPTN), an idea going back to Arthur Cayley and J. Clerk Maxwell.
Abstract: We describe in an expository manner ongoing research concerned with the identification and extraction of topographic features relevant to automated navigation algorithms for an autonomous underwater vehicle. These features are presented within the framework of the extremal point topography network (EPTN), an idea going back to Arthur Cayley and J. Clerk Maxwell. The computational problems addressed here are the reconstruction of the surface terrain from irregularly spaced bathymetric data and the subsequent extraction of the EPTN. While clearly no single best method exists for this latter step, we present here a description of several methods we have tried with some success. The data used for this research are for a selected area of Lake Winnipesaukee, New Hampshire.

Journal Article
TL;DR: Different stages in the analysis procedure are discussed in this paper and illustrated by examples from the author's experience of cell analysis studies, ranging from single-cell classification experiments to a test of a complete automated system under realistic conditions on a set of 397 cell specimens.
Abstract: The usefulness of various feature sets for discriminating between different cell populations cannot be assessed without considering the entire automated cytology system, from specimen preparation through scanning, cell search, cell segmentation, artifact rejection and feature extraction to object classification and multivariate data analysis methods. These different stages in the analysis procedure are discussed in this paper and illustrated by examples from the author's experience of cell analysis studies, ranging from single-cell classification experiments to a test of a complete automated system under realistic conditions on a set of 397 cell specimens.

01 May 1987
TL;DR: The General Motors Research Laboratories has developed an image processing system that automatically analyzes the size distributions in fuel spray video images and can distinguish nonspherical anomalies from droplets, which allows sizing of droplets near the spray nozzle.
Abstract: An image processing system was developed which automatically analyzes the size distributions in fuel spray video images. Images are generated by using pulsed laser light to freeze droplet motion in the spray sample volume under study. This coherent illumination source produces images which contain droplet diffraction patterns representing the droplets' degree of focus. The analysis is performed by extracting feature data describing droplet diffraction patterns in the images. This allows the system to distinguish droplets from image anomalies and measure only those droplets considered in focus. Unique features of the system are the totally automated analysis and droplet feature measurement from the grayscale image. The feature extraction and image restoration algorithms used in the system are described. Preliminary performance data is also given for two experiments. One experiment gives a comparison between a synthesized distribution measured manually and automatically. The second experiment compares a real spray distribution measured using current methods against the automatic system.

Proceedings ArticleDOI
23 Dec 1987
TL;DR: This work presents several experimental results of applying mathematical morphology techniques to real and synthetic range imagery, both for noise removal and for feature extraction and object recognition.
Abstract: Although little known, mathematical morphology (MM) offers great potential in the areas of image enhancement, feature extraction, and object recognition. MM has the intrinsic ability to quantitatively analyze object shapes in both 2 and 3 dimensions. Using MM to extract features and recognize objects in range imagery seems particularly appropriate since range data is a natural source of shape information. We present several experimental results of applying MM techniques to real and synthetic range imagery, both for noise removal and feature extraction.
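A sketch of the two uses reported, with scipy's grey-scale morphology standing in for the authors' operators; structuring-element sizes are placeholders.

```python
# Sketch: grey-scale opening/closing to remove range noise, and a top-hat
# residue to pull out small shape features. Sizes are placeholders.
import numpy as np
from scipy import ndimage

def mm_denoise(range_img, size=3):
    # opening then closing suppresses spike noise in the depth values
    opened = ndimage.grey_opening(range_img, size=size)
    return ndimage.grey_closing(opened, size=size)

def mm_features(range_img, size=9):
    # top-hat: original minus opening keeps structures smaller than the
    # structuring element -- a simple shape-based feature detector
    return range_img - ndimage.grey_opening(range_img, size=size)
```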

Journal Article
TL;DR: The system promises a cost-effective solution to the automatic monitoring of turning movements by road traffic, and accuracies of 90 to 95 per cent have been achieved on simple networks such as T-junctions.
Abstract: This paper shows how, with suitable reduction of image data and appropriate feature extraction, it is possible to track several vehicles concurrently through a road network of almost arbitrary complexity. The solution is believed to be cost-effective, using off-the-shelf technology and readily available computing power. The processing falls into 2 parts: (1) feature highlighting and extraction, carried out by the RAPAC image processing system; and (2) vehicle tracking, carried out with a 5MHZ 8086-based microcomputer. The system promises a cost-effective solution to the automatic monitoring of turning movements by road traffic. With this system accuracies of 90 to 95 per cent have been achieved on simple networks such as T-junctions.


Proceedings ArticleDOI
14 Oct 1987
TL;DR: A method for the automatic recognition of defects in wood has been developed and implemented on the Visual Interpretation System for Technical Applications (VISTA) and works in real time.
Abstract: A method for the automatic recognition of defects in wood has been developed and implemented on the Visual Interpretation System for Technical Applications (VISTA). VISTA hardware modules for the computationally complex algorithms are available or under development. By means of these modules the method works in real time.

Journal ArticleDOI
TL;DR: An attempt to apply AI techniques is introduced, and the organization of an expert image analysis system for chromosome classification is described, mainly at a conceptual level, which adopts a hierarchical hypothesize-and-verify paradigm.
Abstract: Automation of chromosome analysis has long been considered as a very difficult task. Efforts to computerize some or all of the procedures using various conventional pattern recognition techniques have had only limited success. In this paper the previous work in this domain is briefly reviewed, with a discussion of the limitations of the existing approaches. An attempt to apply AI techniques is introduced, and the organization of an expert image analysis system for chromosome classification is described, mainly at a conceptual level. Based on the proposed architecture, the low-level processes (segmentation, feature extraction) and the high-level processes (classifications or interpretations) can be carried out in a knowledge-guided fashion with a combinational use of image processing and pattern recognition knowledge, as well as expert chromosome classification knowledge embedded in a rule-based structure. A knowledge-based chromosome image analysis scheme is presented which adopts a hierarchical hypothesize-and-verify paradigm. Example rules are given to illustrate how they can be used in this scheme.

Proceedings ArticleDOI
01 Mar 1987
TL;DR: The image preprocessor developed reduces the time ratio of feature extraction to model-based analysis to about 1/10, recognizes partially visible or overlapping industrial workpieces, and detects their locations and orientations.
Abstract: A special-purpose image preprocessor for the visual system of assembly robots has been developed. The main function unit is composed of look-up tables, exploiting the advantages of semiconductor memory: large-scale integration, high speed and low price. More than one unit may be operated in parallel, since the design is based on the standard IEEE 796 bus. The operation time of the preprocessor in line segment extraction is usually 200 ms per 500 segments, though it varies with the complexity of the scene image. The gray-scale visual system, supported by the model-based analysis program using the extracted line segments, recognizes partially visible or overlapping industrial workpieces and detects their locations and orientations. In a recognition test using plastic workpieces, the recognition time was about 9 seconds for five pieces. In most conventional model-based vision systems, the feature extraction time is much longer than that of the model-based analysis. The image preprocessor we have developed reduces the time ratio of feature extraction to model-based analysis to about 1/10.

Journal ArticleDOI
TL;DR: An optical technique for finding the centroids of nonoverlapping objects in a scene, thus locating the objects while preserving the underlying advantage of matched-filtering approaches to pattern recognition and allowing general feature extraction without prior scene segmentation into individual objects.
Abstract: We present an optical technique for finding the centroids of nonoverlapping objects in a scene, thus locating the objects and preserving the underlying advantage of matched filtering approaches to pattern recognition. One is then free to extract any feature desired at these centroid locations rather than restricted to the matched filter test statistic. Furthermore, this allows general feature extraction avoiding prior scene segmentation into individual objects. The technique can also be used for tracking the motion of rigid or nonrigid objects. It consists of cross-correlating the input f(x,y) with a windowed version of the function x + iy and detecting the zeros of the magnitude of the resulting correlation. At these points the x and y first moments vanish. The window is selected based on the size and separation of the objects in a scene. Experimental verification as well as restrictions are also presented.
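A direct sketch of the technique as stated: correlate the scene with a windowed x + iy kernel and look for zeros of the correlation magnitude, where the local first moments vanish. The window radius must be chosen from the object size and separation, as the paper notes.

```python
# Sketch of the centroid-locating correlation with a windowed (x + iy) kernel.
import numpy as np
from scipy.signal import fftconvolve

def centroid_map(scene, win=31):
    r = win // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = (x + 1j * y) * (np.hypot(x, y) <= r)   # windowed x + iy
    # cross-correlation = convolution with the flipped, conjugated kernel
    corr = fftconvolve(scene.astype(float), np.conj(kernel[::-1, ::-1]),
                       mode='same')
    return np.abs(corr)   # near-zero values mark object centroids

# e.g. centroids = np.argwhere(centroid_map(img) < eps) for a suitable eps
```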

Book ChapterDOI
01 Jan 1987
TL;DR: This chapter deals with the problem of extracting features from two-dimensional image data and three-dimensional features from range data.
Abstract: This chapter deals with the problem of extracting features from two-dimensional image data. Extraction of three-dimensional features from range data is described in Chap. 10.

Journal ArticleDOI
TL;DR: A strategy for recognizing partially occluded parts that supposes that the objects have been previously separated by mechanical devices such as shakers, bowl-feeders, conveyors, or other special purpose arrangements is discussed.
Abstract: Recognizing partially occluded objects requires off-line modeling and planning, and runtime recognition. After deriving a customized method, we compare recognition with and without off-line planning.

Proceedings ArticleDOI
13 Oct 1987
TL;DR: A method for converting paper-written electrocardiograms to one-dimensional (1-D) signals for archival storage on floppy disk is presented; the algorithms may also be useful in robotic vision.
Abstract: A method for converting paper-written electrocardiograms to one dimensional (1-D) signals for archival storage on floppy disk is presented here. Appropriate image processing techniques were employed to remove the background noise inherent to ECG recorder charts and to reconstruct the ECG waveform. The entire process consists of (1) digitization of paper-written ECGs with an image processing system via a TV camera; (2) image preprocessing, including histogram filtering and binary image generation; (3) ECG feature extraction and ECG wave tracing, and (4) transmission of the processed ECG data to IBM-PC compatible floppy disks for storage and retrieval. The algorithms employed here may also be used in the recognition of paper-written EEG or EMG and may be useful in robotic vision.
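Steps (2) and (3) might be sketched as below, with a global threshold standing in for the histogram filtering and a per-column ink centroid recovering the 1-D waveform; threshold values are illustrative.

```python
# Hedged sketch of binarization and wave tracing on a scanned ECG chart.
import numpy as np

def trace_ecg(gray, ink_thresh=100):
    binary = gray < ink_thresh          # dark ink on light chart paper
    h, w = gray.shape
    signal = np.full(w, np.nan)
    for x in range(w):
        ys = np.flatnonzero(binary[:, x])
        if ys.size:                     # take the ink centroid per column
            signal[x] = ys.mean()
    # interpolate columns where the trace was broken
    good = ~np.isnan(signal)
    signal = np.interp(np.arange(w), np.flatnonzero(good), signal[good])
    return h - signal                   # flip so larger = higher amplitude
```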

Journal ArticleDOI
TL;DR: A distributed parameter system (DPS) framework and the concept of weak solutions are used to develop image motion estimation algorithms which represent a region-oriented approach to image motion analysis which is theoretically justifiable, computationally advantageous, and leads to interesting extensions.
Abstract: Basic research into the modeling and analysis of image sequence dynamics is presented. A distributed parameter system (DPS) framework and the concept of weak solutions are used to develop image motion estimation algorithms. These algorithms represent a region-oriented (as opposed to point-by-point) approach to image motion analysis which is theoretically justifiable, computationally advantageous, and leads to interesting extensions. Particularly noteworthy is the use of weak solution-based motion features to obtain static image structural information and multiple object motion estimates. Experimental results confirm the validity and accuracy of the approach. Future research topics are described.

Journal ArticleDOI
01 May 1987
TL;DR: A pattern recognition system is designed to solve the model characterization problem of distributed systems and is assessed by using the leaving-one-out method on simulated data.
Abstract: A pattern recognition system is designed to solve the model characterization problem of distributed systems. The characterization problem in the pattern recognition domain is the mapping of the observation and input data of the distributed system into one of the designated classes. Mathematical structures which are likely to represent the system are designated as classes from the a priori information of the system under consideration. A feature extractor and a classifier are designed which are tailored for this problem. The pattern recognition system is assessed by using the leaving-one-out method on simulated data.
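The leaving-one-out assessment works as sketched below: each sample is classified by a system trained on all remaining samples, and the error rate estimates performance. The nearest-mean classifier here is a placeholder, not the paper's design.

```python
# Sketch of the leaving-one-out method with a placeholder nearest-mean rule.
import numpy as np

def leave_one_out_error(X, y):
    """X -- (n, d) feature vectors from the feature extractor
    y -- (n,) class labels"""
    errors = 0
    for i in range(len(X)):
        train = np.arange(len(X)) != i          # hold sample i out
        classes = np.unique(y[train])
        means = np.array([X[train][y[train] == c].mean(axis=0)
                          for c in classes])
        pred = classes[np.argmin(((means - X[i]) ** 2).sum(axis=1))]
        errors += pred != y[i]
    return errors / len(X)
```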

Proceedings ArticleDOI
01 Jan 1987
TL;DR: In this article, a method is proposed to provide enhancement of the intelligibility of speech which has been contaminated by additive noise, based on the assumption that intelligibility enhancement can be achieved by making use of information directly related to speech intelligibility.
Abstract: In this paper a method is proposed to provide enhancement of the intelligibility of speech which has been contaminated by additive noise. The method is based on the assumption that intelligibility enhancement can be achieved by making use of information directly related to speech intelligibility. A structure which provides the means to implement the method is presented and a discussion of some preliminary results obtained with the method is included.

Book ChapterDOI
01 Jan 1987
TL;DR: In this article, image segmentation, feature extraction and classification methods which screen and diagnose blood malignancies are presented. But, these methods require higher scanning densities and higher optical magnification than used in screening normal blood cells.
Abstract: This paper outlines image segmentation, feature extraction and classification methods which screen and diagnose blood malignancies. New algorithms had to be developed because analyzing blood malignancies requires higher scanning densities and higher optical magnification than used in screening normal blood cells. The cell image segmentation method combines color differences, equidistance isograms, geometric operations, and a cell model. The algorithm always starts with the largest color differences and successively detects less certain areas. This eliminates the need for contour-following algorithms. The feature extraction combines geometric parameters with texture and color. “Classification And Regression Tree” statistical software tests the classification power of the cell markers extracted by the image processing. The feature distributions from the tested blood cell population correlate directly with the specific blood malignancies.

Journal ArticleDOI
Przytula, Hansen
TL;DR: The Systolic/Cellular System, a host plus programmable coprocessor with a separate program memory, is designed for large classes of linear algebraic and cellular operations used in signal processing; its software-development tool is organized as a spreadsheet processor.
Abstract: We designed the Systolic/Cellular System for large classes of linear algebraic and cellular operations that are used in signal processing. It consists of a host and a programmable controller with a separate program memory (see Figure 1). The input data and the programs for the coprocessor are loaded from the host into the array memory and the program memory, respectively. During normally unusable times (the clock cycles), data is allowed to flow through cells transparently, and a network of East, West, North, and South buses allows data to travel as far as 50 cells away; many systolic-array architectures only allow direct data transfer between nearest-neighbor processing elements. DSP-software development is often a time-consuming and arduous task, and Motorola's new systolic-array architecture has already reduced this effort: algorithms are implemented directly in the architecture. Every cell is initialized at least once after each power-up; from that point on, the cell performs the same operation every processing cycle, taking its input data from the output of an assigned "neighbor" cell. This pre-set operation makes it possible to assign the function of each cell directly from a signal-flow diagram, essentially eliminating sequential software. The software-development tool is written in Pascal for an IBM PC; its basic concept is a spreadsheet processor, in which algorithms are entered, copied, and replicated just as in any spreadsheet tool, and can be tested, simulated, and debugged with the same tool. Architecture simulations have reconfirmed that this systolic design performs best on algorithms with strong locality of signal flow. The date when the architecture will be usable is drawing nearer as logic simulations approach completion.
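The pre-set cell behaviour described can be mimicked in a toy simulation: after initialization, each cell applies one fixed operation per cycle to the previous output of its assigned neighbour, so the algorithm is determined by the wiring rather than by sequential software.

```python
# Toy simulation of lockstep systolic cells with fixed per-cell operations.
def run_systolic(ops, stream, cycles):
    """ops    -- one fixed function per cell; cell i reads cell i-1
    stream -- iterator feeding cell 0; returns outputs of the last cell."""
    n = len(ops)
    state = [0] * n
    out = []
    for _ in range(cycles):
        prev = state[:]                     # all cells fire in lockstep
        state[0] = ops[0](next(stream, 0))
        for i in range(1, n):
            state[i] = ops[i](prev[i - 1])
        out.append(state[-1])
    return out
```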

01 Jan 1987
TL;DR: A new on-line unsupervised feature extraction method for high-dimensional remotely sensed image data compaction is presented and can be utilized to solve the problem of data redundancy in scene representation by satellite-borne high resolution multispectral sensors.
Abstract: A new on-line unsupervised feature extraction method for high-dimensional remotely sensed image data compaction is presented. This method can be utilized to solve the problem of data redundancy in scene representation by satellite-borne high resolution multispectral sensors. The algorithm first partitions the observation space into an exhaustive set of disjoint objects. Then, pixels that belong to an object are characterized by an object feature. Finally, the set of object features is used for data transmission and classification. The example results show that the performance with the compacted features provides a slight improvement in classification accuracy rather than any degradation. Also, the information extraction method does not need to be preceded by a data decompaction.
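The compaction idea can be sketched as follows, with a simple k-means partition standing in for the paper's on-line partitioning rule: every pixel of an object is represented by one object feature, its mean spectrum.

```python
# Sketch of object-based compaction of a multispectral cube; k-means is a
# stand-in for the paper's actual partitioning step.
import numpy as np

def compact(image, n_objects=16, iters=10):
    """image -- (H, W, B) multispectral cube; returns labels and features."""
    H, W, B = image.shape
    pixels = image.reshape(-1, B).astype(float)
    centers = pixels[np.random.choice(len(pixels), n_objects, replace=False)]
    for _ in range(iters):
        d = ((pixels[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_objects):
            if (labels == k).any():
                centers[k] = pixels[labels == k].mean(0)
    # transmit per-pixel labels plus one object feature per object
    return labels.reshape(H, W), centers
```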

Journal ArticleDOI
TL;DR: It is shown that great simplicity is obtained by identifying and eliminating the least desirable feature from the original feature space, using J-divergence as a measure of the discrimination between the classes.
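Assuming Gaussian class models (a common setting for J-divergence), the backward-elimination step might look like this sketch; mu1, S1, mu2, S2 are per-class mean vectors and covariance matrices as numpy arrays, and the names are illustrative.

```python
# Sketch of J-divergence-based backward feature elimination, assuming two
# Gaussian classes. Not the paper's exact procedure.
import numpy as np

def j_divergence(mu1, S1, mu2, S2):
    d = (mu1 - mu2).reshape(-1, 1)
    S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
    n = len(mu1)
    return 0.5 * (d.T @ (S1i + S2i) @ d).item() \
         + 0.5 * np.trace(S1i @ S2 + S2i @ S1 - 2 * np.eye(n))

def eliminate_worst_feature(mu1, S1, mu2, S2):
    all_feats = range(len(mu1))
    scores = []
    for drop in all_feats:
        idx = [i for i in all_feats if i != drop]
        scores.append((j_divergence(mu1[idx], S1[np.ix_(idx, idx)],
                                    mu2[idx], S2[np.ix_(idx, idx)]), drop))
    # the feature whose removal leaves the highest divergence is least needed
    return max(scores)[1]
```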