
Showing papers on "Feature (computer vision)" published in 1993


Posted Content
01 Jan 1993
TL;DR: In this article, the author uses Adaptive Simulated Annealing (ASA) code to demonstrate how simulated quenching (SQ) can be much faster than simulated annealing (SA) without sacrificing accuracy.
Abstract: Simulated annealing (SA) presents an optimization technique with several striking positive and negative features. Perhaps its most salient feature, the statistical promise of delivering an optimal solution, is in current practice often spurned in favor of modified, faster algorithms known as "simulated quenching" (SQ). Using the author's Adaptive Simulated Annealing (ASA) code, some examples are given which demonstrate how SQ can be much faster than SA without sacrificing accuracy.

777 citations
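
The trade-off described above comes down to the cooling schedule: convergence guarantees for Boltzmann-style annealing require a slow logarithmic temperature decay, while quenching substitutes a fast exponential one. The Python sketch below contrasts the two on a toy multimodal function; the objective, schedules, and all constants are invented for illustration and are unrelated to the ASA code itself.

```python
import math
import random

def anneal(f, x0, schedule, steps=20000, step_size=0.5, seed=1):
    """Minimize f; `schedule(k)` gives the temperature at step k."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(1, steps + 1):
        T = schedule(k)
        cand = x + rng.gauss(0.0, step_size)
        fc = f(cand)
        # Metropolis rule: always accept improvements, sometimes accept uphill moves.
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Toy multimodal objective (invented); global minimum near x = -0.5.
f = lambda x: x * x + 10.0 * math.sin(3.0 * x) + 10.0

sa = anneal(f, 8.0, lambda k: 10.0 / math.log(k + 1))  # slow log schedule (true SA)
sq = anneal(f, 8.0, lambda k: 10.0 * 0.999 ** k)       # fast exponential quench (SQ)
print("SA:", sa, "\nSQ:", sq)
```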


Journal ArticleDOI
TL;DR: The area of texture segmentation has undergone tremendous growth in recent years as discussed by the authors, and there has been a great deal of activity both in the refinement of previously known approaches and in the development of completely new techniques.
Abstract: The area of texture segmentation has undergone tremendous growth in recent years. There has been a great deal of activity both in the refinement of previously known approaches and in the development of completely new techniques. Although a wide variety of methodologies have been applied to this problem, there is a particularly strong concentration in the development of feature-based approaches and on the search for appropriate texture features. In this paper, we present a survey of current texture segmentation and feature extraction methods. Our emphasis is on techniques developed since 1980, particularly those with promise for unsupervised applications.

726 citations


ReportDOI
TL;DR: In this paper, the authors introduce a class of statistical tests for the hypothesis that some feature that is present in each of several variables is common to them; features are data properties such as serial correlation, trends, seasonality, heteroscedasticity, autoregressive conditional heteroscedasticity, and excess kurtosis.
Abstract: This article introduces a class of statistical tests for the hypothesis that some feature that is present in each of several variables is common to them. Features are data properties such as serial correlation, trends, seasonality, heteroscedasticity, autoregressive conditional heteroscedasticity, and excess kurtosis. A feature is detected by a hypothesis test taking no feature as the null, and a common feature is detected by a test that finds linear combinations of variables with no feature. Often, a test statistic with an exact asymptotic critical value can be obtained that is simply a test of overidentifying restrictions in an instrumental-variables regression. The article concludes with a test for a common international business cycle.

550 citations
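
As a toy numerical illustration of the common-feature idea (not the article's instrumental-variables test), the Python sketch below builds two series that inherit serial correlation from a shared AR(1) factor, then searches for the linear combination that removes it; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Shared AR(1) factor gives both series the "feature" (serial correlation).
common = np.zeros(T)
for t in range(1, T):
    common[t] = 0.8 * common[t - 1] + rng.normal()

y1 = common + rng.normal(size=T)          # factor loading 1
y2 = 0.5 * common + rng.normal(size=T)    # factor loading 0.5

def acf1(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

# Grid-search the combination y1 - delta*y2 for minimal serial correlation.
deltas = np.linspace(-5, 5, 2001)
scores = [abs(acf1(y1 - d * y2)) for d in deltas]
best = deltas[int(np.argmin(scores))]
print(f"co-feature combination: y1 - {best:.2f}*y2, |acf1| = {min(scores):.4f}")
# The true cancelling delta is 2.0 (= 1/0.5); a near-zero acf1 there indicates
# that serial correlation is common to y1 and y2.
```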


Patent
08 Sep 1993
TL;DR: In this article, a 3D human interface apparatus is presented that uses motion recognition based on dynamic image processing, in which the motion of an operator-operated object serving as the imaging target can be recognized accurately and stably.
Abstract: A 3D human interface apparatus using motion recognition based on dynamic image processing, in which the motion of an operator-operated object serving as the imaging target can be recognized accurately and stably. The apparatus includes: an image input unit for entering a plurality of time-series images of an object moved by the operator in a motion representing a command; a feature point extraction unit for extracting at least four feature points, including at least three reference feature points and one fiducial feature point, on the object from each of the images; a motion recognition unit for recognizing the motion of the object by calculating motion parameters, according to an affine transformation determined from changes of positions of the reference feature points on the images, and a virtual parallax for the fiducial feature point expressing a difference between an actual position change on the images and a virtual position change according to the affine transformation; and a command input unit for inputting the command indicated by the motion of the object recognized by the motion recognition unit.

365 citations
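
The patent's core computation can be sketched compactly: an affine transformation is determined from three reference feature points, and the fiducial point's virtual parallax is the gap between its actual motion and the affine prediction. The numpy example below illustrates that decomposition with made-up coordinates; it is a sketch of the idea, not the patented implementation.

```python
import numpy as np

def affine_from_triplet(p, q):
    """Solve for A (2x2) and t (2,) with q_i = A @ p_i + t, given three point pairs."""
    M, b = [], []
    for (x, y), (u, v) in zip(p, q):
        M.append([x, y, 0, 0, 1, 0]); b.append(u)
        M.append([0, 0, x, y, 0, 1]); b.append(v)
    a11, a12, a21, a22, tx, ty = np.linalg.solve(np.array(M, float), np.array(b, float))
    return np.array([[a11, a12], [a21, a22]]), np.array([tx, ty])

# Three reference feature points in frame 1 and their positions in frame 2.
ref1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
ref2 = np.array([[0.1, 0.2], [1.1, 0.3], [-0.1, 1.2]])
A, t = affine_from_triplet(ref1, ref2)

# Fiducial point: virtual parallax = actual motion minus affine-predicted motion.
fid1 = np.array([0.5, 0.5])
fid2_actual = np.array([0.75, 0.80])
fid2_virtual = A @ fid1 + t
print("virtual parallax:", fid2_actual - fid2_virtual)  # nonzero -> depth motion
```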


Journal ArticleDOI
TL;DR: In this article, an automatic feature recognizer decomposes the total volume to be machined into volumetric features that satisfy stringent conditions for manufacturability, and correspond to operations typically performed in 3-axis machining centers.
Abstract: Discusses an automatic feature recognizer that decomposes the total volume to be machined into volumetric features that satisfy stringent conditions for manufacturability, and correspond to operations typically performed in 3-axis machining centers. Unlike most of the previous research, the approach is based on general techniques for dealing with features with intersecting volumes. Feature interactions are represented explicitly in the recognizer's output, to facilitate spatial reasoning in subsequent planning stages. A generate-and-test strategy is used. OPS-5 production rules generate hints or clues for the existence of features, and post them on a blackboard. The clues are assessed, and those judged promising are processed to ensure that they correspond to actual features, and to gather information for process planning. Computational geometry techniques are used to produce the largest volumetric feature compatible with the available data. The feature's accessibility, and its interactions with others are analyzed. The validity tests ensure that the proposed features are accessible, do not intrude into the desired part, and satisfy other machinability conditions. The process continues until it produces a complete decomposition of the volume to be machined into fully-specified features.

347 citations


Book
01 Dec 1993
TL;DR: The potential benefits of Monte Carlo approaches such as simulated annealing and genetic algorithms are described and compared to facilitate the planning of future research on feature selection.
Abstract: We review recent research on methods for selecting features for multidimensional pattern classification. These methods include nonmonotonicity-tolerant branch-and-bound search and beam search. We describe the potential benefits of Monte Carlo approaches such as simulated annealing and genetic algorithms. We compare these methods to facilitate the planning of future research on feature selection.

320 citations
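
Of the Monte Carlo approaches the review compares, a genetic algorithm is the easiest to sketch. The toy Python below evolves bit-mask feature subsets under an invented fitness function (features 0, 2, and 5 are "informative", with a small cost per selected feature); it is illustrative only, and far simpler than the nonmonotonicity-tolerant searches discussed in the book.

```python
import random

def ga_feature_select(n_features, fitness, pop=30, gens=40, p_mut=0.05, seed=0):
    """Tiny genetic algorithm over bit-mask feature subsets (illustrative only)."""
    rng = random.Random(seed)
    population = [[rng.random() < 0.5 for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop // 2]                          # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)                # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Invented fitness: reward features 0, 2, 5; penalize subset size slightly.
def fitness(mask):
    gain = sum(1.0 for i in (0, 2, 5) if mask[i])
    return gain - 0.1 * sum(mask)

print("selected feature mask:", ga_feature_select(8, fitness))
```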


PatentDOI
TL;DR: A computer-based engineering design system for designing a part, a tool to make the part, and the process for making the part, by accessing the plurality of feature templates in memory to locate one or more primitive objects that perform the one or more predetermined functions.

296 citations


Patent
22 Oct 1993
TL;DR: In this paper, an integrated visual defect detection and classification system is presented, which includes adaptive defect detection, image labeling, defect feature measures, and a knowledge based inference shell/engine for classification based on fuzzy logic.
Abstract: An integrated visual defect detection and classification system. The invention includes adaptive defect detection and image labeling, defect feature measures, and a knowledge based inference shell/engine for classification based on fuzzy logic. The combination of these elements comprises a method and system for providing detection and analysis of product defects in many application domains, such as semiconductor and electronic packaging manufacturing.

276 citations
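
A minimal sketch of the fuzzy-logic classification stage might look like the following Python, where defect feature measures (area, aspect ratio) are mapped through trapezoidal memberships and combined with min/max rules. The feature names, membership breakpoints, and rules are all invented for illustration, not taken from the patent.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership function."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Fuzzy sets over two hypothetical defect feature measures.
memberships = {
    "small_area": lambda v: trapezoid(v, 0, 0, 20, 60),
    "large_area": lambda v: trapezoid(v, 40, 80, 1e9, 1e9 + 1),
    "elongated":  lambda v: trapezoid(v, 2.0, 4.0, 1e9, 1e9 + 1),
    "compact":    lambda v: trapezoid(v, 0, 0, 1.5, 3.0),
}

def classify(area, aspect_ratio):
    """Min implements rule AND; the class with the strongest firing rule wins."""
    rules = {
        "scratch":  min(memberships["small_area"](area), memberships["elongated"](aspect_ratio)),
        "particle": min(memberships["small_area"](area), memberships["compact"](aspect_ratio)),
        "blob":     memberships["large_area"](area),
    }
    return max(rules, key=rules.get), rules

label, scores = classify(area=15.0, aspect_ratio=6.0)
print(label, scores)   # expected: "scratch" fires strongest
```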


Journal ArticleDOI
TL;DR: Examples of feature interactions and taxonomies of their causes are developed to improve understanding of the problem's scope and to provide a benchmark for analyzing the coverage of a proposed approach to solving the problem.
Abstract: It is argued that the goal of the intelligent network (IN) is to accelerate the introduction of new telecommunications features in a multisupplier competitive environment. One major roadblock to fulfilling such requirements is the feature interaction problem: a new feature may interact with existing features in some undesirable ways, resulting in adverse behavior. Feature interaction examples are categorized by their causes, since problems arising from the same cause may have the same solution. These examples and taxonomies are developed to improve understanding of the problem's scope and to provide a benchmark for analyzing the coverage of a proposed approach to solving the problem. Informal definitions of the concepts of feature, service, and feature interaction are presented. Prototypical examples of feature interactions and their categorization by causes are also presented. Possible approaches to the problem are discussed.

270 citations


Journal ArticleDOI
TL;DR: The conclusion is that feature-based design is still in its infancy, and that more research is needed for better support of the design process and better integration with manufacturing, although major advances have already been made.

266 citations


Proceedings Article
01 Jun 1993
TL;DR: This work applies Genetic Programming to the development of a processing tree for the classification of features extracted from images: measurements from a set of input nodes are weighted and combined through linear and nonlinear operations to form an output response.
Abstract: We apply Genetic Programming (GP) to the development of a processing tree for the classification of features extracted from images: measurements from a set of input nodes are weighted and combined through linear and nonlinear operations to form an output response. No constraints are placed upon size, shape, or order of processing within the network. This network is used to classify feature vectors extracted from IR imagery into target/nontarget categories using a database of 2000 training samples. Performance is tested against a separate database of 7000 samples. This represents a significant scaling up from the problems to which GP has been applied to date. Two experiments are performed: in the first set, we input classical "statistical" image features and minimize misclassification of target and non-target samples. In the second set of experiments, GP is allowed to form its own feature set from primitive intensity measurements. For purposes of comparison, the same training and test sets are used to train two other adaptive classifier systems, the binary tree classifier and the backpropagation neural network. The GP network achieves higher performance with reduced computational requirements. The contributions of GP "schemata," or subtrees, to the performance of generated trees are examined.
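
A minimal sketch of the kind of processing tree GP evolves: leaves read input features or constants, and internal nodes apply linear and nonlinear operations to form an output response. The Python below shows only tree representation, random generation, and evaluation; crossover, mutation, and the IR training data are omitted, and all constants are invented.

```python
import math
import random

# A "processing tree": ("x", i) reads feature i, ("const", v) is a constant,
# and internal nodes apply an operation to their evaluated children.
def evaluate(node, x):
    op, *args = node
    if op == "x":
        return x[args[0]]
    if op == "const":
        return args[0]
    vals = [evaluate(a, x) for a in args]
    if op == "add":  return vals[0] + vals[1]
    if op == "mul":  return vals[0] * vals[1]
    if op == "tanh": return math.tanh(vals[0])
    raise ValueError(op)

def random_tree(n_features, depth, rng):
    if depth == 0 or rng.random() < 0.3:
        return ("x", rng.randrange(n_features)) if rng.random() < 0.7 \
               else ("const", rng.uniform(-1, 1))
    op = rng.choice(["add", "mul", "tanh"])
    n_args = 1 if op == "tanh" else 2
    return (op, *[random_tree(n_features, depth - 1, rng) for _ in range(n_args)])

rng = random.Random(3)
tree = random_tree(n_features=4, depth=3, rng=rng)
x = [0.2, -1.0, 0.5, 0.9]
print(tree)
print("response:", evaluate(tree, x))  # threshold this for target/non-target
```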

Journal ArticleDOI
TL;DR: Experimental results suggest that message reception is influenced by the interplay of two musical properties: attention-gaining value and music-message congruency.
Abstract: Music is an increasingly prominent and expensive feature of broadcast ads, yet its effects on message reception are controversial. The authors propose and test a contingency that may help resolve t...

Book ChapterDOI
05 Apr 1993
TL;DR: The paper is related to one of the aspects of learning from examples, namely learning how to identify the class of objects a given object instance belongs to, and a method of generating a sequence of features allowing such identification is presented.
Abstract: The paper is related to one of the aspects of learning from examples, namely learning how to identify the class of objects a given object instance belongs to. In the paper, a method of generating a sequence of features allowing such identification is presented. In this approach, examples are represented in the form of an attribute-value table with binary values of attributes. The main assumption is that one feature sequence is determined for all possible object instances; that is, the next feature in the order does not depend on values of the previous features. An algorithm is given that generates a sequence under these conditions. The theoretical background of the proposed method is rough sets theory, and some generalizations of this theory are introduced in the paper. Finally, a discussion of the presented approach is provided, the results of the functioning of the proposed algorithm are summarized, and directions for further research are indicated.
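
The paper's algorithm is grounded in rough sets theory; as a loose, hypothetical stand-in, the Python sketch below orders binary attributes greedily by how many cross-class object pairs each one still discerns, a discernibility-flavoured heuristic rather than the authors' method. The toy attribute-value table is invented.

```python
from itertools import combinations

# Binary attribute-value table: each row is (attribute values, class label).
table = [
    ((0, 0, 1), "A"),
    ((0, 1, 1), "A"),
    ((1, 0, 0), "B"),
    ((1, 1, 0), "B"),
    ((0, 1, 0), "B"),
]

def feature_sequence(table):
    """Greedily order attributes by how many not-yet-discerned cross-class
    pairs each one separates (a rough-set-flavoured heuristic)."""
    n_attr = len(table[0][0])
    pending = {(i, j) for i, j in combinations(range(len(table)), 2)
               if table[i][1] != table[j][1]}
    order = []
    while pending and len(order) < n_attr:
        def gain(a):
            return sum(1 for i, j in pending if table[i][0][a] != table[j][0][a])
        best = max((a for a in range(n_attr) if a not in order), key=gain)
        if gain(best) == 0:
            break   # remaining attributes discern nothing further
        order.append(best)
        pending = {(i, j) for i, j in pending
                   if table[i][0][best] == table[j][0][best]}
    return order

print("feature sequence:", feature_sequence(table))
```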

Journal ArticleDOI
TL;DR: The paper presents a knowledge-based approach to automatic classification and tissue labeling of 2D magnetic resonance (MR) images of the human brain that provides an accurate, complete labeling of all normal tissues in the absence of large amounts of data nonuniformity.
Abstract: Presents a knowledge-based approach to automatic classification and tissue labeling of 2D magnetic resonance (MR) images of the human brain. The system consists of 2 components: an unsupervised clustering algorithm and an expert system. MR brain data is initially segmented by the unsupervised algorithm, then the expert system locates a landmark tissue or cluster and analyzes it by matching it with a model or searching in it for an expected feature. The landmark tissue location and its analysis are repeated until a tumor is found or all tissues are labeled. The knowledge base contains information on cluster distribution in feature space and tissue models. Since tissue shapes are irregular, their models and matching are specially designed: 1) qualitative tissue models are defined for brain tissues such as white matter; 2) default reasoning is used to match a model with an MR image; that is, if there is no mismatch between a model and an image, they are taken as matched. The system has been tested with 53 slices of MR images acquired at different times by 2 different scanners. It accurately identifies abnormal slices and provides a partial labeling of the tissues. It provides an accurate complete labeling of all normal tissues in the absence of large amounts of data nonuniformity, as verified by radiologists. Thus the system can be used to provide automatic screening of slices for abnormality. It also provides a first step toward the complete description of abnormal images for use in automatic tumor volume determination.
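
A highly simplified Python sketch of the two-stage idea, with plain k-means standing in for the unsupervised clustering and a couple of hand-written intensity rules standing in for the expert system; the synthetic two-feature "MR" data and the thresholds are invented.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: the unsupervised first stage."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels, centers

# Synthetic two-feature "MR" intensities (values invented).
rng = np.random.default_rng(1)
csf   = rng.normal([0.2, 0.9], 0.05, (100, 2))
gray  = rng.normal([0.5, 0.5], 0.05, (100, 2))
white = rng.normal([0.8, 0.3], 0.05, (100, 2))
X = np.vstack([csf, gray, white])

labels, centers = kmeans(X, k=3)

# "Expert system" stand-in: name each cluster from its mean feature vector.
def tissue_rule(center):
    t1, _ = center
    if t1 < 0.35: return "CSF"
    if t1 > 0.65: return "white matter"
    return "gray matter"

for c, center in enumerate(centers):
    print(f"cluster {c}: center={center.round(2)} -> {tissue_rule(center)}")
```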

Journal ArticleDOI
TL;DR: The problems of defining features, services, and feature interactions in telecommunication systems are discussed and it is seen that no full approach exists; they are all partial solutions.
Abstract: The problems of defining features, services, and feature interactions in telecommunication systems are discussed. Several ways of classifying feature interactions are surveyed. Existing approaches for solving the feature-interaction problem are reviewed. It is seen that no full approach exists; they are all partial solutions. The approaches are divided into three classes: avoidance, detection, and resolution. Avoidance looks at ways to prevent undesired feature interactions. Detection assumes that feature interactions will be present, and determines methods for identifying and locating them. Resolution assumes that feature interactions will be present and detected, and looks at mechanisms for minimizing their potential adverse effects.

Journal ArticleDOI
TL;DR: A comparative performance analysis of the two algorithms establishes some important theoretical properties of adaptive spectral detectors and leads to practical guidelines for applying the algorithms to multispectral sensor data.
Abstract: The fully adaptive hypothesis testing algorithm developed by I.S. Reed and X. Yu (1990) for detecting low-contrast objects of unknown spectral features in a nonstationary background is extended to the case in which the relative spectral signatures of objects can be specified in advance. The resulting background-adaptive algorithm is analyzed and shown to achieve robust spectral feature discrimination with a constant false-alarm rate (CFAR) performance. A comparative performance analysis of the two algorithms establishes some important theoretical properties of adaptive spectral detectors and leads to practical guidelines for applying the algorithms to multispectral sensor data. The adaptive detection of man-made artifacts in a natural background is demonstrated by processing multiband infrared imagery collected by the Thermal Infrared Multispectral Scanner (TIMS) instrument.
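
A standard adaptive-matched-filter statistic captures the flavour of background-adaptive detection with a known spectral signature: whiten by the estimated background covariance, then project each pixel onto the whitened signature. The numpy sketch below uses synthetic multiband data with an implanted signature; it illustrates the general detector family, not the exact extension described above.

```python
import numpy as np

def adaptive_matched_filter(pixels, signature):
    """Estimate the background mean and covariance from the data, whiten, and
    project each pixel onto the signature (an AMF-style statistic)."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))   # regularize
    s = signature - mu
    X = pixels - mu
    return (X @ cov_inv @ s) / np.sqrt(s @ cov_inv @ s)

rng = np.random.default_rng(0)
n_bands = 5
background = rng.multivariate_normal(np.full(n_bands, 10.0),
                                     np.diag(rng.uniform(0.5, 2.0, n_bands)), 2000)
signature = np.full(n_bands, 10.0) + np.array([5.0, 0.0, 4.0, 0.0, 3.0])
targets = signature + rng.normal(0, 0.3, (5, n_bands))

scores = adaptive_matched_filter(np.vstack([background, targets]), signature)
top5 = np.argsort(scores)[-5:]
print("top-5 scores are the implanted targets:", np.all(top5 >= 2000))
```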

Journal ArticleDOI
TL;DR: The classification of 3 common breast lesions, fibroadenomas, cysts, and cancers, was achieved using computerized image analysis of tumor shape in conjunction with patient age using a video camera and commercial frame grabber on a PC-based computer system.
Abstract: The classification of 3 common breast lesions, fibroadenomas, cysts, and cancers, was achieved using computerized image analysis of tumor shape in conjunction with patient age. The process involved the digitization of 69 mammographic images using a video camera and a commercial frame grabber on a PC-based computer system. An interactive segmentation procedure identified the tumor boundary using a thresholding technique which successfully segmented 57% of the lesions. Several features were chosen based on the gross and fine shape describing properties of the tumor boundaries as seen on the radiographs. Patient age was included as a significant feature in determining whether the tumor was a cyst, fibroadenoma, or cancer and was the only patient history information available for this study. The concept of a radial length measure provided a basis from which 6 of the 7 shape describing features were chosen, the seventh being tumor circularity. The feature selection process was accomplished using linear discriminant analysis and a Euclidean distance metric determined group membership. The effectiveness of the classification scheme was tested using both the apparent and the leaving-one-out test methods. The best results using the apparent test method resulted in correctly classifying 82% of the tumors segmented using the entire feature space and the highest classification rate using the leaving-one-out test method was 69% using a subset of the feature space. The results using only the shape descriptors, and excluding patient age resulted in correctly classifying 72% using the entire feature space (except age), and 51% using a subset of the feature space.
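
The radial length measure itself is easy to illustrate: distances from the tumor centroid to boundary points, normalized and summarized. The Python sketch below computes a few plausible descriptors of that kind on a smooth versus a spiculated synthetic outline; it does not reproduce the study's exact seven features, and the summary statistics here are assumptions.

```python
import numpy as np

def radial_length_features(boundary):
    """Shape features from the normalized radial length measure d(i):
    distance from the centroid to each boundary point."""
    boundary = np.asarray(boundary, float)
    d = np.linalg.norm(boundary - boundary.mean(axis=0), axis=1)
    d = d / d.max()                                       # scale invariance
    crossings = np.count_nonzero(np.diff(d > d.mean()))   # roughness proxy
    return {
        "mean_radial_length": d.mean(),
        "radial_std": d.std(),          # spiculated lesions -> higher spread
        "zero_crossings": crossings,    # oscillation about the mean radius
    }

# A circle (smooth, cyst-like) vs. a star (spiculated, cancer-like) outline.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
star = np.c_[(1 + 0.4 * np.sin(8 * theta)) * np.cos(theta),
             (1 + 0.4 * np.sin(8 * theta)) * np.sin(theta)]
print("circle:", radial_length_features(circle))
print("star:  ", radial_length_features(star))
```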

Journal ArticleDOI
TL;DR: Because feature modelling builds on solid modelling, an overview of advanced solid modelling is given, with emphasis on the concepts of parametric and constraint-based modelling.


Proceedings ArticleDOI
01 Jan 1993
TL;DR: In the proposed approach, nodes are added incrementally to a regular two-dimensional grid, which is drawable at all times, irrespective of the dimensionality of the input space, resulting in a map that explicitly represents the cluster structure of the high-dimensional input.
Abstract: Ordinary feature maps with fully connected, fixed grid topology cannot properly reflect the structure of clusters in the input space. Incremental feature map algorithms, where nodes and connections are added to or deleted from the map according to the input distribution can overcome this problem. Such algorithms have been limited to maps that can be drawn in 2-D only in the case of two-dimensional input space. In the proposed approach, nodes are added incrementally to a regular two-dimensional grid, which is drawable at all times, irrespective of the dimensionality of the input space. The process results in a map that explicitly represents the cluster structure of the high-dimensional input.
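
A Growing-Grid-style sketch conveys the mechanism: train a small self-organizing map, then repeatedly insert an interpolated column beside the unit with the largest accumulated error, so the map stays a drawable 2-D grid regardless of input dimensionality. The Python below is illustrative; the paper's precise insertion rules may differ, and all hyperparameters are invented.

```python
import numpy as np

def growing_grid(X, rows=2, cols=2, grow_steps=4, epochs=200, lr=0.1, sigma=0.8, seed=0):
    """Train a SOM, then insert an interpolated column next to the unit with
    the largest accumulated quantization error; repeat grow_steps times."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(rows, cols, X.shape[1]))
    for _ in range(grow_steps):
        err = np.zeros(W.shape[:2])
        for _ in range(epochs):
            x = X[rng.integers(len(X))]
            d = np.linalg.norm(W - x, axis=2)
            r, c = np.unravel_index(np.argmin(d), d.shape)
            err[r, c] += d[r, c]
            # Gaussian neighborhood update around the best-matching unit.
            rr, cc = np.mgrid[:W.shape[0], :W.shape[1]]
            h = np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2))
            W += lr * h[..., None] * (x - W)
        # Insert a new column (averaged neighbours) beside the worst unit.
        r, c = np.unravel_index(np.argmax(err), err.shape)
        c2 = c - 1 if c == W.shape[1] - 1 else c + 1
        j = max(c, c2)
        new = ((W[:, c] + W[:, c2]) / 2)[:, None, :]
        W = np.concatenate([W[:, :j], new, W[:, j:]], axis=1)
    return W

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.1, (100, 3)) for m in ([0, 0, 0], [1, 1, 1], [0, 1, 0])])
print("final grid shape:", growing_grid(X).shape[:2])   # grew from 2x2, still a grid
```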

Journal ArticleDOI
TL;DR: A novel feature-modelling system is described that implements a hybrid of feature-based design and feature recognition in a single framework, allowing changes to a geometric model to be recognized as new or modified features while preserving previously recognized features that remain unchanged in the geometric model.
Abstract: A novel feature-modelling system which implements a hybrid of feature-based design and feature recognition in a single framework is described. During the design process of a part, the user can modify interactively either the solid model or the feature model of the part while the system keeps the other model consistent with the changed one. This gives the user the freedom of choosing the most convenient means of expressing each required operation. The system is based on a novel feature recognizer that provides incremental feature recognition, which allows changes to a geometric model to be recognized as new or modified features while preserving previously recognized features that remain unchanged in the geometric model. Each recognizable feature type is specified by means of a feature-definition language which facilitates the addition of new feature types into the system.

Journal ArticleDOI
TL;DR: A general architecture for visual information-management systems (VIMS), which combine the strengths of both approaches, is presented, and a VIMS developed for face-image retrieval is presented to demonstrate these ideas.
Abstract: The complex nature of two-dimensional image data has presented problems for traditional information systems designed strictly for alphanumeric data. Systems aimed at effectively managing image data have generally approached the problem from two different views: They either possess a strong database component with little image understanding, or they serve as an image repository for computer vision applications, with little emphasis on the image retrieval process. A general architecture for visual information-management systems (VIMS), which combine the strengths of both approaches, is presented. The system utilizes computer vision routines for both insertion and retrieval and allows easy query-by-example specifications. The vision routines are used to segment and evaluate objects based on domain-knowledge describing the objects and their attributes. The vision system can then assign feature values to be used for similarity-measures and image retrieval. A VIMS developed for face-image retrieval is presented to demonstrate these ideas.

Proceedings ArticleDOI
19 Apr 1993
TL;DR: A shape similarity-based retrieval method for image databases is proposed that supports a variety of queries, is flexible with respect to the choice of feature and definition of similarity, and is implementable using existing multidimensional point access methods.
Abstract: A shape similarity-based retrieval method for image databases that supports a variety of queries is proposed. It is flexible with respect to the choice of feature and definition of similarity and is implementable using existing multidimensional point access methods. A prototype system that handles the problems of distortion and occlusion is described. Experiments with one specific point access method (PAM) are presented.

Proceedings ArticleDOI
20 Oct 1993
TL;DR: An adaptation of hidden Markov models (HMM) to automatic recognition of unrestricted handwritten words and many interesting details of a 50,000 vocabulary recognition system for US city names are described.
Abstract: The paper describes an adaptation of hidden Markov models (HMM) to automatic recognition of unrestricted handwritten words. Many interesting details of a 50,000 vocabulary recognition system for US city names are described. This system includes feature extraction, classification, estimation of model parameters, and word recognition. The feature extraction module transforms a binary image to a sequence of feature vectors. The classification module consists of a transformation based on linear discriminant analysis and Gaussian soft-decision vector quantizers which transform feature vectors into sets of symbols and associated likelihoods. Symbols and likelihoods form the input to both HMM training and recognition. HMM training performed in several successive steps requires only a small amount of gestalt labeled data on the level of characters for initialization. HMM recognition based on the Viterbi algorithm runs on subsets of the whole vocabulary.
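
The recognition step rests on Viterbi decoding, which is compact enough to sketch in full. The Python below finds the most likely state path of a discrete-observation HMM in log space; the toy two-state model and its probabilities are invented, not the city-name system's parameters.

```python
import numpy as np

def viterbi(log_A, log_B, log_pi, obs):
    """Most likely HMM state path for a discrete observation sequence."""
    n_states, T = log_A.shape[0], len(obs)
    delta = np.full((T, n_states), -np.inf)   # best log-prob ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A        # from-state x to-state
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(n_states)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1], float(np.max(delta[-1]))

# Toy 2-state model over a 3-symbol alphabet (probabilities invented).
A = np.log([[0.8, 0.2], [0.3, 0.7]])
B = np.log([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
pi = np.log([0.6, 0.4])
path, logp = viterbi(A, B, pi, obs=[0, 0, 2, 2, 1])
print("best state path:", path, "log-prob:", round(logp, 3))
```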

Patent
20 Apr 1993
TL;DR: An object recognition system using image processing is presented in which an area having a unique feature is extracted from an input image of an object, the extracted image is registered in a shade template memory circuit as a shade template, the input image is searched by a shade pattern matching circuit for an image similar to the registered shade template, the position of the object is determined for each template, and the speed and direction of movement of the object are determined from the positional information, as discussed by the authors.
Abstract: An object recognition system using image processing in which an area having a unique feature is extracted from an input image of an object, the extracted image is registered in a shade template memory circuit as a shade template, the input image is searched by a shade pattern matching circuit for an image similar to the registered shade template, the position of the object is determined for each template, the speed and direction of movement of the object are determined from the positional information, and the results are integrated by a separation/integration circuit, thereby recognizing the whole of the moving object.
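
The shade pattern matching stage can be illustrated with plain normalized cross-correlation, a standard grey-level template search (the patent's circuit may implement something different). The numpy sketch below registers a patch as the template and recovers its position in a noisy copy of the image.

```python
import numpy as np

def ncc_search(image, template):
    """Brute-force normalized cross-correlation: slide the shade template over
    the image and return the correlation map plus the best-match position."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    out = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            if denom > 0:
                out[r, c] = (w * t).sum() / denom
    return out, np.unravel_index(np.argmax(out), out.shape)

rng = np.random.default_rng(0)
image = rng.uniform(0, 1, (60, 80))
template = image[20:30, 40:52].copy()           # the registered shade template
_, (r, c) = ncc_search(image + rng.normal(0, 0.02, image.shape), template)
print("template found at:", (r, c))             # expected near (20, 40)
```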

Book ChapterDOI
13 Sep 1993
TL;DR: U-Matrix methods have been developed to enhance the representation of data on Feature Maps and are useful for monitoring critical processes and for the design of suitable human interfaces for process-monitoring stands.
Abstract: An application of Self-Organizing Feature Maps to the problem of process control for a chemical process is described. Very little about the nature and structure of the process can be learned from the trained Feature Map itself. A set of methods called U-Matrix methods has been developed to enhance the representation of data on Feature Maps. These methods make it possible to discover structure in the process data and to judge the quality of the learned maps. They can be used to extract knowledge for expert systems from the Feature Maps. The extracted rules were able to control the chemical process as well as a human-designed expert system. The methods presented here are particularly useful for monitoring critical processes and for the design of suitable human interfaces for process-monitoring stands.
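
The U-Matrix itself is simple to compute: each map unit is assigned the average distance between its weight vector and those of its grid neighbours, so cluster boundaries show up as ridges on an otherwise uniform-looking map. A minimal numpy sketch on a hand-made 4x4 map (invented codebook values) follows.

```python
import numpy as np

def u_matrix(weights):
    """U-Matrix for a rectangular SOM: average distance between each unit's
    weight vector and those of its 4-connected neighbours."""
    rows, cols, _ = weights.shape
    U = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dists.append(np.linalg.norm(weights[r, c] - weights[rr, cc]))
            U[r, c] = np.mean(dists)
    return U

# A hand-made 4x4 map whose left and right halves encode two regimes.
W = np.zeros((4, 4, 2))
W[:, :2] = [0.0, 0.0]      # e.g. "normal process state" codebook vectors
W[:, 2:] = [1.0, 1.0]      # e.g. "critical process state"
print(np.round(u_matrix(W), 2))   # a ridge appears between columns 1 and 2
```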

Proceedings ArticleDOI
TL;DR: Content-based retrieval is founded on neural networks; this technology allows automatic filing of images and a wide range of possible queries of the resulting database, in contrast to methods such as entering SQL keys manually for each image as it is filed and later correctly re-entering those keys to retrieve the same image.
Abstract: Content-based retrieval is founded on neural networks; this technology allows automatic filing of images and a wide range of possible queries of the resulting database. This is in contrast to methods such as entering SQL keys manually for each image as it is filed and later correctly re-entering those keys to retrieve the same image. An SQL-based approach does not take into account information that is hard to describe with text, such as sounds and images. Neural networks can be trained to translate 'noisy' or chaotic image data into simpler, more reliable feature sets. By converting the images into the level of abstraction necessary for symbolic processing, standard database indexing methods can then be applied, or used in layers of associative database neural networks directly.

Journal ArticleDOI
TL;DR: In this article, two color-image segmentation methods are described: one based on a spherical coordinate transform (SCT) of the original RGB data and one based on the principal components transform (PCT), also known as the Hotelling transform.
Abstract: Two color-image segmentation methods are described. The first is based on a spherical coordinate transform of original RGB data. The second is based on a mathematically optimal transform, the principal components transform (also known as the eigenvector, discrete Karhunen-Loeve, or Hotelling transform). These algorithms are applied to the extraction from skin tumor images of various features such as tumor border, crust, hair, scale, shiny areas, and ulcer. The results of this research will be used in the development of a computer vision system that will serve as the visual front-end of a medical expert system to automate visual feature identification for skin tumor evaluation.
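
The spherical coordinate transform maps an RGB vector to one brightness-like magnitude and two chromatic angles. Conventions vary across the literature; the numpy sketch below uses one common choice, with invented pixel values.

```python
import numpy as np

def rgb_to_sct(rgb):
    """Spherical coordinate transform of RGB: a vector-length (brightness)
    axis plus two angles carrying the chromatic information."""
    rgb = np.asarray(rgb, float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    L = np.sqrt(r * r + g * g + b * b)            # vector length (brightness)
    angle_a = np.degrees(np.arccos(np.clip(b / np.where(L > 0, L, 1), -1, 1)))
    angle_b = np.degrees(np.arctan2(g, r))
    return np.stack([L, angle_a, angle_b], axis=-1)

# Two skin-tumor-like pixels: a dark brown and a pale one (values invented).
pixels = np.array([[120, 70, 50], [220, 190, 170]], float)
print(np.round(rgb_to_sct(pixels), 1))
# Separating chroma (the two angles) from brightness (L) is what makes the
# transform useful for segmenting features such as crust or shiny areas.
```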

Journal ArticleDOI
TL;DR: It is shown that a combination of the mathematical morphology operation, opening, with a linear rotating structuring element (ROSE) and dual feature thresholding can semi-automatically segment categories of vessels in a vascular network.
Abstract: A method for measuring the spatial concentration of specific categories of vessels in a vascular network consisting of vessels of several diameters, lengths, and orientations is demonstrated. It is shown that a combination of the mathematical morphology operation, opening, with a linear rotating structuring element (ROSE) and dual feature thresholding can semi-automatically segment categories of vessels in a vascular network. Capillaries and larger vessels (arterioles and venules) are segmented here in order to assess their spatial concentrations. The ROSE algorithm generates the initial segmentation, and dual feature thresholding provides a means of eliminating the nonedge artifact pixels. The subsequent gray-scale histogram of only the edge pixels yields the correct segmentation threshold value. This image processing strategy is demonstrated on micrographs of vascular casts. By adjusting the structuring element and rotation angles, it could be applied to other network structures where a segmentation by network component categories is advantageous, but where the objects can have any orientation.
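
The ROSE idea, opening with a linear structuring element rotated through a set of angles and keeping the pixelwise maximum, can be sketched with scipy.ndimage (assuming SciPy is available); the segment length, angle step, and the synthetic vessel image below are all invented.

```python
import numpy as np
from scipy.ndimage import grey_opening

def line_footprint(length, angle_deg):
    """Boolean mask of a digital line segment through the mask centre."""
    half = length // 2
    fp = np.zeros((2 * half + 1, 2 * half + 1), bool)
    t = np.linspace(-half, half, 4 * length)
    rr = np.round(half + t * np.sin(np.radians(angle_deg))).astype(int)
    cc = np.round(half + t * np.cos(np.radians(angle_deg))).astype(int)
    fp[rr, cc] = True
    return fp

def rose_opening(image, length=11, angles=range(0, 180, 15)):
    """Open with a linear structuring element at several rotations and keep
    the pixelwise maximum, so elongated structures survive whenever *some*
    orientation fits inside them."""
    return np.max([grey_opening(image, footprint=line_footprint(length, a))
                   for a in angles], axis=0)

# Synthetic "vessel": a bright diagonal line plus isolated bright specks.
img = np.zeros((64, 64))
for i in range(10, 50):
    img[i, i] = 1.0                      # vessel-like elongated structure
img[8, 50] = img[55, 12] = 1.0           # specks (removed by the opening)
out = rose_opening(img)
print("vessel kept:", out[30, 30] > 0.5, "| speck kept:", out[8, 50] > 0.5)
```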

Journal ArticleDOI
TL;DR: Algorithms are presented for the extraction of tangent directions and curvatures of these level curves, and examples on real images show their application to directional enhancement and feature point detection.
Abstract: A digitized image is viewed as a surface over the xy-plane. The level curves of this surface provide information about edge directions and feature locations. This paper presents algorithms for the extraction of tangent directions and curvatures of these level curves. The tangent direction is determined by a least-squares minimization over the surface normals (calculated for each 2 × 2 pixel neighborhood) in an averaging window. The curvature calculation, unlike most previous work on this topic, does not require a parameterized curve, but works instead directly on the tangents across adjacent level curves. The curvature is found by fitting concentric circles to the tangent directions via least-squares minimization. The stability of these algorithms with respect to noise is studied via controlled tests on computer-generated data corrupted by simulated noise. Examples on real images are given which show application of these algorithms for directional enhancement and feature point detection.
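
The tangent estimation can be phrased as a small least-squares problem per window. The sketch below uses the equivalent structure-tensor formulation (the minor eigenvector of windowed gradient outer products) rather than the paper's explicit fit over 2 × 2 surface normals, and checks it on a concentric-circles image; SciPy's convolve is assumed available.

```python
import numpy as np
from scipy.ndimage import convolve

def level_curve_tangents(img, win=5):
    """Tangent angle of the level curves at each pixel: the direction t that
    minimizes sum((g . t)^2) over the window's gradients g, i.e. the minor
    eigenvector of the local structure tensor."""
    gy, gx = np.gradient(img.astype(float))
    k = np.ones((win, win)) / (win * win)       # box filter = windowed sums
    Jxx, Jxy, Jyy = convolve(gx * gx, k), convolve(gx * gy, k), convolve(gy * gy, k)
    # Major (gradient) orientation, then rotate 90 degrees to get the tangent.
    return 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy) + np.pi / 2

# Concentric-circles test surface: level curves are circles, so the tangent at
# a point straight below the centre should be horizontal (about 0 degrees).
y, x = np.mgrid[-32:32, -32:32]
img = np.hypot(x, y)
theta = level_curve_tangents(img)
print("tangent angle (deg):", round(np.degrees(theta[52, 32]) % 180, 1))
```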