
Showing papers on "Feature (computer vision) published in 1999"


Journal ArticleDOI
TL;DR: The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multidimensional indexing, and system design, three of the fundamental bases of content-based image retrieval.

2,197 citations


Journal ArticleDOI
TL;DR: A similarity measure is developed, based on fuzzy logic, that exhibits several features that match experimental findings in humans and is an extension to a more general domain of the feature contrast model due to Tversky (1977).
Abstract: With complex multimedia data, we see the emergence of database systems in which the fundamental operation is similarity assessment. Before database issues can be addressed, it is necessary to give a definition of similarity as an operation. We develop a similarity measure, based on fuzzy logic, that exhibits several features that match experimental findings in humans. The model is dubbed fuzzy feature contrast (FFC) and is an extension to a more general domain of the feature contrast model due to Tversky (1977). We show how the FFC model can be used to model similarity assessment from fuzzy judgment of properties, and we address the use of fuzzy measures to deal with dependencies among the properties.
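The feature contrast model that the FFC extends scores similarity as S(a,b) = θ·f(A∩B) − α·f(A−B) − β·f(B−A). A minimal sketch of the fuzzy version, using the elementwise minimum for fuzzy intersection and the bounded difference for fuzzy set difference (one common choice; the paper's exact operators and saliency function are not reproduced here):

```python
# A Tversky-style fuzzy feature contrast measure (illustrative sketch).
# Feature memberships are fuzzy values in [0, 1]; intersection is taken as
# the elementwise minimum and set difference as the bounded difference.
# The weights theta, alpha, beta are free parameters of the model.

def fuzzy_feature_contrast(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky contrast on fuzzy membership vectors a and b."""
    common = sum(min(x, y) for x, y in zip(a, b))        # f(A intersect B)
    a_only = sum(max(x - y, 0.0) for x, y in zip(a, b))  # f(A - B)
    b_only = sum(max(y - x, 0.0) for x, y in zip(a, b))  # f(B - A)
    return theta * common - alpha * a_only - beta * b_only

# Identical objects maximize similarity; asymmetric weights (alpha != beta)
# reproduce the direction-dependent judgments Tversky reported.
print(fuzzy_feature_contrast([1.0, 0.8, 0.2], [1.0, 0.8, 0.2]))
```

Setting alpha ≠ beta makes the measure asymmetric, which is exactly the property that distinguishes the contrast model from metric similarity measures.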

834 citations


Journal ArticleDOI
TL;DR: It is conjectured that texture can be characterized by the statistics of the wavelet detail coefficients, and therefore two feature sets are introduced: the wavelet histogram signatures, which capture all first-order statistics using a model-based approach, and the co-occurrence signatures, which reflect the coefficients' second-order statistics.
Abstract: We conjecture that texture can be characterized by the statistics of the wavelet detail coefficients and therefore introduce two feature sets: (1) the wavelet histogram signatures which capture all first order statistics using a model based approach and (2) the wavelet co-occurrence signatures, which reflect the coefficients' second-order statistics. The introduced feature sets outperform the traditionally used energy. Best performance is achieved by combining histogram and co-occurrence signatures.
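As a toy illustration of the two signature types (not the paper's exact feature sets), the sketch below computes one-level Haar horizontal detail coefficients directly as scaled pairwise differences, a normalized histogram as a first-order signature, and a co-occurrence matrix over quantized coefficients as a second-order signature:

```python
import numpy as np

def haar_detail_h(img):
    """One-level horizontal Haar detail coefficients of a 2-D array."""
    img = np.asarray(img, dtype=float)
    return (img[:, 0::2] - img[:, 1::2]) / 2.0

def histogram_signature(coeffs, bins=8):
    """Normalized histogram of detail coefficients (first-order statistics)."""
    hist, _ = np.histogram(coeffs, bins=bins)
    return hist / hist.sum()

def cooccurrence_signature(coeffs, levels=4):
    """Co-occurrence counts of horizontally adjacent quantized coefficients."""
    lo, hi = coeffs.min(), coeffs.max()
    q = np.clip(((coeffs - lo) / (hi - lo + 1e-12) * levels).astype(int),
                0, levels - 1)
    cm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        cm[i, j] += 1
    return cm / cm.sum()
```

Both signatures are normalized so they can be compared across images of different sizes; the paper's finding is that combining the two outperforms energy features alone.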

610 citations


Journal ArticleDOI
TL;DR: A novel classification method, called the nearest feature line (NFL), for face recognition, based on the nearest distance from the query feature point to each FL, which achieves the lowest error rate reported for the ORL face database.
Abstract: We propose a classification method, called the nearest feature line (NFL), for face recognition. Any two feature points of the same class (person) are generalized by the feature line (FL) passing through the two points. The derived FL can capture more variations of face images than the original points and thus expands the capacity of the available database. The classification is based on the nearest distance from the query feature point to each FL. With a combined face database, the NFL error rate is about 43.7-65.4% of that of the standard eigenface method. Moreover, the NFL achieves the lowest error rate reported to date for the ORL face database.
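The NFL distance described above is the ordinary point-to-line distance in feature space: the query is projected onto the line through each same-class pair, and the class owning the nearest line wins. A minimal sketch (the brute-force enumeration of all pairs shown here is the naive form, not an optimized implementation):

```python
import numpy as np

def feature_line_distance(q, x1, x2):
    """Distance from query q to the (infinite) line through x1 and x2."""
    q, x1, x2 = (np.asarray(v, dtype=float) for v in (q, x1, x2))
    d = x2 - x1
    t = np.dot(q - x1, d) / np.dot(d, d)  # projection parameter
    p = x1 + t * d                        # foot of the perpendicular
    return np.linalg.norm(q - p)

def nfl_classify(q, class_points):
    """class_points: dict mapping class label -> list of feature vectors."""
    best, best_d = None, float("inf")
    for label, pts in class_points.items():
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dist = feature_line_distance(q, pts[i], pts[j])
                if dist < best_d:
                    best, best_d = label, dist
    return best
```

Because the line extrapolates beyond its two endpoints, a class with few prototypes still covers a one-dimensional continuum of variations, which is the "expanded capacity" the abstract refers to.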

555 citations


Proceedings Article
01 May 1999
TL;DR: A new filter approach to feature selection that uses a correlation-based heuristic to evaluate the worth of feature subsets when applied as a data preprocessing step for two common machine learning algorithms.

Abstract: Feature selection is often an essential data processing step prior to applying a learning algorithm. The removal of irrelevant and redundant information often improves the performance of machine learning algorithms. There are two common approaches: a wrapper uses the intended learning algorithm itself to evaluate the usefulness of features, while a filter evaluates features according to heuristics based on general characteristics of the data. The wrapper approach is generally considered to produce better feature subsets but runs much more slowly than a filter. This paper describes a new filter approach to feature selection that uses a correlation-based heuristic to evaluate the worth of feature subsets. When applied as a data preprocessing step for two common machine learning algorithms, the new method compares favourably with the wrapper but requires much less computation.
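The correlation-based heuristic can be sketched with the standard CFS merit formula: a subset scores highly when its features correlate strongly with the class but weakly with each other. The variable names below are illustrative, and the correlation estimates themselves (which the paper computes from the data) are taken as given:

```python
from math import sqrt

def cfs_merit(k, avg_fc, avg_ff):
    """Merit of a k-feature subset under the CFS-style heuristic.

    k      -- number of features in the subset
    avg_fc -- mean feature-class correlation of the subset
    avg_ff -- mean feature-feature intercorrelation of the subset
    """
    return k * avg_fc / sqrt(k + k * (k - 1) * avg_ff)

# Redundant features lower the score even when individually predictive:
print(cfs_merit(3, 0.6, 0.1))  # weakly intercorrelated subset
print(cfs_merit(3, 0.6, 0.8))  # strongly intercorrelated subset scores lower
```

A search procedure (e.g. greedy forward selection) would evaluate candidate subsets with this merit and keep the best, which is far cheaper than retraining a learner per subset as a wrapper does.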

547 citations


Journal ArticleDOI
TL;DR: An unsupervised texture segmentation method is presented, which uses distributions of local binary patterns and pattern contrasts for measuring the similarity of adjacent image regions during the segmentation process.
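As a rough illustration of the local binary pattern features referred to above, the sketch below computes the standard 8-neighbor LBP code per pixel and a normalized pattern histogram; the paper's contrast measure and the similarity statistic used to compare adjacent regions are not reproduced here:

```python
import numpy as np

def lbp_codes(img):
    """8-bit LBP code for every interior pixel of a 2-D gray image."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                   # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        # neighbor plane shifted by (dy, dx), same shape as c
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(int) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalized LBP pattern distribution of an image region."""
    hist, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, 256))
    return hist / hist.sum()
```

Segmentation then compares the pattern distributions of adjacent regions and merges those whose distributions are sufficiently similar.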

441 citations


Journal ArticleDOI
TL;DR: An implementation of stochastic mapping that uses a delayed nearest neighbor data association strategy to initialize new features into the map, match measurements to map features, and delete out-of-date features is described.
Abstract: The task of building a map of an unknown environment and concurrently using that map to navigate is a central problem in mobile robotics research. This paper addresses the problem of how to perform concurrent mapping and localization (CML) adaptively using sonar. Stochastic mapping is a feature-based approach to CML that generalizes the extended Kalman filter to incorporate vehicle localization and environmental mapping. The authors describe an implementation of stochastic mapping that uses a delayed nearest neighbor data association strategy to initialize new features into the map, match measurements to map features, and delete out-of-date features. The authors introduce a metric for adaptive sensing that is defined in terms of Fisher information and represents the sum of the areas of the error ellipses of the vehicle and feature estimates in the map. Predicted sensor readings and expected dead-reckoning errors are used to estimate the metric for each potential action of the robot, and the action that yi...

373 citations


Patent
Yuji Takata1, Hideaki Matsuo1, Seiji Igi1, Shan Lu1, Yuji Nagashima1 
27 Sep 1999
TL;DR: In this article, a method of segmenting hand gestures is presented which automatically segments detected hand gestures into words, or apprehensible units composed of a plurality of words, without the user indicating where to segment.

Abstract: An object of the present invention is to provide a method of segmenting hand gestures which automatically segments detected hand gestures into words, or apprehensible units composed of a plurality of words, without the user indicating where to segment. Transition feature data, describing a feature of a transition gesture that is not observed during a gesture representing a word but appears when transiting from one gesture to another, is stored in advance. Thereafter, the motion of the image corresponding to the part of the body in which the transition gesture is observed is detected (step S106), the detected motion is compared with the transition feature data (step S107), and the time position where the transition gesture is observed is determined so as to segment the hand gestures (step S108).

370 citations


Proceedings Article
18 Jul 1999
TL;DR: This paper presents an ensemble feature selection approach that is based on genetic algorithms, shows improved performance over the popular and powerful ensemble approaches of AdaBoost and Bagging, and demonstrates the utility of ensemble feature selection.

Abstract: The traditional motivation behind feature selection algorithms is to find the best subset of features for a task using one particular learning algorithm. Given the recent success of ensembles, however, we investigate the notion of ensemble feature selection in this paper. This task is harder than traditional feature selection in that one not only needs to find features germane to the learning task and learning algorithm, but one also needs to find a set of feature subsets that will promote disagreement among the ensemble's classifiers. In this paper, we present an ensemble feature selection approach that is based on genetic algorithms. Our algorithm shows improved performance over the popular and powerful ensemble approaches of AdaBoost and Bagging and demonstrates the utility of ensemble feature selection.

354 citations


Journal ArticleDOI
Jing Huang1, S. Ravi Kumar1, Mandar Mitra1, Wei-Jing Zhu1, Ramin Zabih1 
TL;DR: Experimental evidence shows that the color correlogram outperforms not only the traditional color histogram method but also the recently proposed histogram refinement methods for image indexing/retrieval.
Abstract: We define a new image feature called the color correlogram and use it for image indexing and comparison. This feature distills the spatial correlation of colors and when computed efficiently, turns out to be both effective and inexpensive for content-based image retrieval. The correlogram is robust in tolerating large changes in appearance and shape caused by changes in viewing position, camera zoom, etc. Experimental evidence shows that this new feature outperforms not only the traditional color histogram method but also the recently proposed histogram refinement methods for image indexing/retrieval. We also provide a technique to cut down the storage requirement of the correlogram so that it is the same as that of histograms, with only negligible performance penalty compared to the original correlogram. We also suggest the use of color correlogram as a generic indexing tool to tackle various problems arising from image retrieval and video browsing. We adapt the correlogram to handle the problems of image subregion querying, object localization, object tracking, and cut detection. Experimental results again suggest that the color correlogram is more effective than the histogram for these applications, with insignificant additional storage or processing cost.
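The autocorrelogram variant of this feature can be sketched as follows: for each quantized color c and distance d, estimate the probability that a pixel at distance d from a pixel of color c also has color c. Only the four axis-aligned offsets are sampled here to keep the example short; the paper's exact distance set and norm are not reproduced:

```python
import numpy as np

def autocorrelogram(img, n_colors, distances=(1, 3, 5)):
    """Color autocorrelogram of a 2-D array of quantized color indices."""
    img = np.asarray(img)
    result = np.zeros((n_colors, len(distances)))
    for di, d in enumerate(distances):
        same = np.zeros(n_colors)
        total = np.zeros(n_colors)
        for dy, dx in ((0, d), (0, -d), (d, 0), (-d, 0)):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            # mask out pixels whose neighbor wrapped around the border
            valid = np.ones_like(img, dtype=bool)
            if dy > 0: valid[:dy, :] = False
            elif dy < 0: valid[dy:, :] = False
            if dx > 0: valid[:, :dx] = False
            elif dx < 0: valid[:, dx:] = False
            for c in range(n_colors):
                m = (img == c) & valid
                total[c] += m.sum()
                same[c] += (m & (shifted == c)).sum()
        result[:, di] = same / np.maximum(total, 1)
    return result
```

Unlike a histogram, which only records how much of each color is present, this table also records how spatially coherent each color is, which is what makes it robust to viewpoint and zoom changes.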

337 citations



Journal ArticleDOI
TL;DR: The multiscale behavior of gradient watershed regions is investigated, yielding a mechanism for imposing a scale-based hierarchy on the watersheds associated with intensity minima in the gradient magnitude image; this hierarchy can be used to label watershed boundaries according to their scale.
Abstract: Multiscale image analysis has been used successfully in a number of applications to classify image features according to their relative scales. As a consequence, much has been learned about the scale-space behavior of intensity extrema, edges, intensity ridges, and grey-level blobs. We investigate the multiscale behavior of gradient watershed regions. These regions are defined in terms of the gradient properties of the gradient magnitude of the original image. Boundaries of gradient watershed regions correspond to the edges of objects in an image. Multiscale analysis of intensity minima in the gradient magnitude image provides a mechanism for imposing a scale-based hierarchy on the watersheds associated with these minima. This hierarchy can be used to label watershed boundaries according to their scale. This provides valuable insight into the multiscale properties of edges in an image without following these curves through scale-space. In addition, the gradient watershed region hierarchy can be used for automatic or interactive image segmentation. By selecting subtrees of the region hierarchy, visually sensible objects in an image can be easily constructed.

Journal ArticleDOI
TL;DR: Reducing the dimensionality via a preprocessing method that takes high-dimensional feature-space properties into consideration should enable the estimation of feature-extraction parameters to be more accurate and bypass the problems imposed by small numbers of training samples.
Abstract: As the number of spectral bands of high-spectral resolution data increases, the ability to detect more detailed classes should also increase, and the classification accuracy should increase as well. Often the number of labelled samples used for supervised classification techniques is limited, thus limiting the precision with which class characteristics can be estimated. As the number of spectral bands becomes large, the limitation on performance imposed by the limited number of training samples can become severe. A number of techniques for case-specific feature extraction have been developed to reduce dimensionality without loss of class separability. Most of these techniques require the estimation of statistics at full dimensionality in order to extract relevant features for classification. If the number of training samples is not adequately large, the estimation of parameters in high-dimensional data will not be accurate enough. As a result, the estimated features may not be as effective as they could be. This suggests the need for reducing the dimensionality via a preprocessing method that takes into consideration high-dimensional feature-space properties. Such reduction should enable the estimation of feature-extraction parameters to be more accurate. Using a technique referred to as projection pursuit (PP), such an algorithm has been developed. This technique is able to bypass many of the problems of the limitation of small numbers of training samples by making the computations in a lower-dimensional space, and optimizing a function called the projection index. A current limitation of this method is that, as the number of dimensions increases, it is likely that a local maximum of the projection index will be found that does not enable one to fully exploit hyperspectral-data capabilities.

Proceedings ArticleDOI
01 Jul 1999
TL;DR: Results show the method preserves visual quality while achieving significant computational gains in areas of images with high frequency texture patterns, geometric details, and lighting variations.
Abstract: We introduce a new concept for accelerating realistic image synthesis algorithms. At the core of this procedure is a novel physical error metric that correctly predicts the perceptual threshold for detecting artifacts in scene features. Built into this metric is a computational model of the human visual system's loss of sensitivity at high background illumination levels, high spatial frequencies, and high contrast levels (visual masking). An important feature of our model is that it handles the luminance-dependent processing and spatially-dependent processing independently. This allows us to precompute the expensive spatially-dependent component, making our model extremely efficient. We illustrate the utility of our procedure with global illumination algorithms used for realistic image synthesis. The expense of global illumination computations is many orders of magnitude higher than the expense of direct illumination computations and can greatly benefit by applying our perceptually based technique. Results show our method preserves visual quality while achieving significant computational gains in areas of images with high frequency texture patterns, geometric details, and lighting variations.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A novel feature extraction method which is based on incorporating information about the nearby orientation of the feature, as well as the anisotropic classification of the local tensor, which has the effect of stabilizing the tracking.
Abstract: Tracking linear features through tensor field datasets is an open research problem with widespread utility in medical and engineering disciplines. Existing tracking methods, which consider only the preferred local diffusion direction as they propagate, fail to accurately follow features as they enter regions of local complexity. This shortcoming is a result of partial voluming; that is, voxels in these regions often contain contributions from multiple features. These combined contributions result in ambiguities when deciding local primary feature orientation based solely on the preferred diffusion direction. In this paper, we introduce a novel feature extraction method, which we term tensorline propagation. Our method resolves the above ambiguity by incorporating information about the nearby orientation of the feature, as well as the anisotropic classification of the local tensor. The nearby orientation information is added in the spirit of an advection term in a standard diffusion-based propagation technique, and has the effect of stabilizing the tracking. To demonstrate the efficacy of tensorlines, we apply this method to the neuroscience problem of tracking white-matter bundles within the brain.

Proceedings ArticleDOI
07 Jun 1999
TL;DR: A learning paradigm to incrementally train the classifiers as additional training samples become available is developed and preliminary results for feature size reduction using clustering techniques are shown.
Abstract: Grouping images into (semantically) meaningful categories using low level visual features is a challenging and important problem in content based image retrieval. Using binary Bayesian classifiers, we attempt to capture high level concepts from low level image features under the constraint that the test image does belong to one of the classes of interest. Specifically, we consider the hierarchical classification of vacation images; at the highest level, images are classified into indoor/outdoor classes, outdoor images are further classified into city/landscape classes, and finally, a subset of landscape images is classified into sunset, forest, and mountain classes. We demonstrate that a small codebook (the optimal size of codebook is selected using a modified MDL criterion) extracted from a vector quantizer can be used to estimate the class-conditional densities of the observed features needed for the Bayesian methodology. On a database of 6931 vacation photographs, our system achieved an accuracy of 90.5% for indoor vs. outdoor classification, 95.3% for city vs. landscape classification, 96.6% for sunset vs. forest and mountain classification, and 95.5% for forest vs. mountain classification. We further develop a learning paradigm to incrementally train the classifiers as additional training samples become available and also show preliminary results for feature size reduction using clustering techniques.

Journal ArticleDOI
01 Feb 1999
TL;DR: A prototype of a reverse engineering system which uses manufacturing features as geometric primitives is described, which has two advantages over current practice: the resulting models can be directly imported into feature-based CAD systems without loss of the semantics and topological information inherent in feature- based representations.
Abstract: Reverse engineering of mechanical parts requires extraction of information about an instance of a particular part sufficient to replicate the part using appropriate manufacturing techniques. This is important in a wide variety of situations, since functional CAD models are often unavailable or unusable for parts which must be duplicated or modified. Computer vision techniques applied to three-dimensional (3-D) data acquired using noncontact, 3-D position digitizers have the potential for significantly aiding the process. Serious challenges must be overcome, however, if sufficient accuracy is to be obtained and if models produced from sensed data are to be truly useful for manufacturing operations. The paper describes a prototype of a reverse engineering system which uses manufacturing features as geometric primitives. This approach has two advantages over current practice. The resulting models can be directly imported into feature-based CAD systems without loss of the semantics and topological information inherent in feature-based representations. In addition, the feature-based approach facilitates methods capable of producing highly accurate models, even when the original 3-D sensor data has substantial errors.

Journal ArticleDOI
TL;DR: The impact that features have on different phases of the life cycle is discussed, some ideas on how these phases can be improved by fully exploiting the concept of feature are provided, and topics for a research agenda in feature engineering are suggested.

Journal Article
TL;DR: The classification of medical image registration and matching transformations is presented, along with several frequently used registration methods such as landmark-based registration, feature-based methods, and mutual-information methods.

Abstract: Medical image registration is an important task in medical image processing. In this paper, we review the development of medical image registration techniques. The classification of medical image registration and matching transformations is presented. We then discuss several frequently used image registration methods, such as landmark-based registration, feature-based methods, and mutual-information methods.

01 Jan 1999
TL;DR: The hybrid tree, a multidimensional data structure for indexing high-dimensional feature spaces, is introduced; it significantly outperforms both purely DP-based and SP-based index mechanisms as well as linear scans at all dimensionalities for large-sized databases.

Abstract: Feature-based similarity search is emerging as an important search paradigm in database systems. The technique used is to map the data items as points into a high-dimensional feature space which is indexed using a multidimensional data structure. Similarity search then corresponds to a range search over the data structure. Although several data structures have been proposed for feature indexing, none of them is known to scale beyond 10-15 dimensional spaces. This paper introduces the hybrid tree, a multidimensional data structure for indexing high-dimensional feature spaces. Unlike other multidimensional data structures, the hybrid tree cannot be classified as either a pure data partitioning (DP) index structure (e.g., R-tree, SS-tree, SR-tree) or a pure space partitioning (SP) one (e.g., KDB-tree, hB-tree); rather, it combines positive aspects of the two types of index structures in a single data structure to achieve search performance more scalable to high dimensionalities than either of the above techniques (hence the name hybrid). Furthermore, unlike many data structures (e.g., distance-based index structures like the SS-tree and SR-tree), the hybrid tree can support queries based on arbitrary distance functions. Our experiments on real high-dimensional, large-size feature databases demonstrate that the hybrid tree scales well to high dimensionality and large database sizes. It significantly outperforms both purely DP-based and SP-based index mechanisms as well as linear scans at all dimensionalities for large-sized databases.

Proceedings ArticleDOI
19 May 1999
TL;DR: This work studies the problem of combining feature maps, from different visual modalities and with unrelated dynamic ranges, into a unique saliency map, and indicates that strategy (4) and its simplified, computationally efficient approximation yielded significantly better performance than (1), with up to 4-fold improvement, while preserving generality.
Abstract: Bottom-up or saliency-based visual attention allows primates to detect non-specific conspicuous targets in cluttered scenes. A classical metaphor, derived from electrophysiological and psychophysical studies, describes attention as a rapidly shiftable 'spotlight'. The model described here reproduces the attentional scanpaths of this spotlight: simple multi-scale 'feature maps' detect local spatial discontinuities in intensity, color, orientation or optical flow, and are combined into a unique 'master' or 'saliency' map. The saliency map is sequentially scanned, in order of decreasing saliency, by the focus of attention. We study the problem of combining feature maps, from different visual modalities and with unrelated dynamic ranges, into a unique saliency map. Four combination strategies are compared using three databases of natural color images: (1) simple normalized summation, (2) linear combination with learned weights, (3) global non-linear normalization followed by summation, and (4) local non-linear competition between salient locations. Performance was measured as the number of false detections before the most salient target was found. Strategy (1) always yielded poorest performance and (2) best performance, with a 3- to 8-fold improvement in time to find a salient target. However, (2) yielded specialized systems with poor generalization. Interestingly, strategy (4) and its simplified, computationally efficient approximation (3) yielded significantly better performance than (1), with up to 4-fold improvement, while preserving generality.
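Strategy (3) can be sketched as follows: each map is rescaled to a fixed range and then weighted by (M − m̄)², where M is the map's global maximum and m̄ the mean of its other local maxima, before summation. Maps with one dominant peak are promoted; maps with many comparable peaks are suppressed. The peak-detection details below are a crude stand-in for the published operator:

```python
import numpy as np

def normalize_map(fmap):
    """Rescale a feature map to [0, 1] and weight it by (M - mbar)^2."""
    fmap = np.asarray(fmap, dtype=float)
    fmap = (fmap - fmap.min()) / (fmap.max() - fmap.min() + 1e-12)
    # crude local-maximum detection on the interior (flat zeros ignored)
    interior = fmap[1:-1, 1:-1]
    neighbors = [fmap[1 + dy:fmap.shape[0] - 1 + dy,
                      1 + dx:fmap.shape[1] - 1 + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
    is_max = np.all([interior >= nb for nb in neighbors], axis=0) & (interior > 0)
    peaks = interior[is_max]
    below = peaks[peaks < peaks.max()] if peaks.size else np.array([])
    m_bar = below.mean() if below.size else 0.0
    # after rescaling, the global maximum M is 1, so the weight is (1 - mbar)^2
    return fmap * (1.0 - m_bar) ** 2

def saliency(feature_maps):
    """Combine normalized feature maps into a single saliency map."""
    return sum(normalize_map(m) for m in feature_maps)
```

The squared difference makes the suppression of multi-peak maps sharp: a map whose secondary peaks nearly match its global maximum contributes almost nothing to the combined saliency map.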

Proceedings ArticleDOI
23 Mar 1999
TL;DR: The hybrid tree as mentioned in this paper is a multidimensional data structure for indexing high-dimensional feature spaces, which combines the positive aspects of the two types of index structures into a single data structure to achieve a search performance which is more scalable to high dimensionalities than either of the above techniques.
Abstract: Feature-based similarity searching is emerging as an important search paradigm in database systems. The technique used is to map the data items as points into a high-dimensional feature space which is indexed using a multidimensional data structure. Similarity searching then corresponds to a range search over the data structure. Although several data structures have been proposed for feature indexing, none of them is known to scale beyond 10-15 dimensional spaces. This paper introduces the hybrid tree, a multidimensional data structure for indexing high-dimensional feature spaces. Unlike other multidimensional data structures, the hybrid tree cannot be classified as either a pure data partitioning (DP) index structure (such as the R-tree, SS-tree or SR-tree) or a pure space partitioning (SP) one (such as the KDB-tree or hB-tree); rather it combines the positive aspects of the two types of index structures into a single data structure to achieve a search performance which is more scalable to high dimensionalities than either of the above techniques. Furthermore, unlike many data structures (e.g. distance-based index structures like the SS-tree and SR-tree), the hybrid tree can support queries based on arbitrary distance functions. Our experiments on "real" high-dimensional large-size feature databases demonstrate that the hybrid tree scales well to high dimensionality and large database sizes. It significantly outperforms both purely DP-based and SP-based index mechanisms as well as linear scans at all dimensionalities for large-sized databases.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: The results show that the use of the color information embedded in an eigen approach improves the recognition rate when compared to the same scheme using only the luminance information.
Abstract: A common feature found in practically all technical approaches proposed for face recognition is the use of only the luminance information associated with the face image. One may wonder if this is due to the low importance of the color information in face recognition or due to other, less technical reasons, such as the unavailability of color image databases. Motivated by this reasoning, we have performed a variety of tests using a global eigen approach developed previously, which has been modified to cope with the color information. Our results show that the use of the color information embedded in an eigen approach improves the recognition rate when compared to the same scheme using only the luminance information.

Proceedings ArticleDOI
01 Jan 1999
TL;DR: This paper finds that the ASM is faster and achieves more accurate feature point location than the AAM, but the AAM gives a better match to the texture.
Abstract: Statistical models of the shape and appearance of image structures can be matched to new images using both the Active Shape Model [7] algorithm and the Active Appearance Model algorithm [2]. The former searches along profiles about the current model point positions to update the current estimate of the shape of the object. The latter samples the image data under the current instance and uses the difference between model and sample to update the appearance model parameters. In this paper we compare and contrast the two algorithms, giving the results of experiments testing their performance on two data sets, one of faces, the other of structures in MR brain sections. We find that the ASM is faster and achieves more accurate feature point location than the AAM, but the AAM gives a better match to the texture.

Journal ArticleDOI
TL;DR: It is concluded that both features and conjunctions are detected in parallel, and implications for the role of attention in visual processing are discussed.
Abstract: Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed-accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed.

Journal ArticleDOI
TL;DR: This model is an attempt to explain the results from a number of different studies in visual attention, including parallel feature searches and serial conjunction searches, variations in search slope with variations in feature contrast and individual subject differences, attentional gradients triggered by cuing, feature-driven spatial selection, split attention, inhibition of distractor locations, and flanking inhibition.
Abstract: The model presented here is an attempt to explain the results from a number of different studies in visual attention, including parallel feature searches and serial conjunction searches, variations in search slope with variations in feature contrast and individual subject differences, attentional gradients triggered by cuing, feature-driven spatial selection, split attention, inhibition of distractor locations, and flanking inhibition. The model is implemented in a neural network consisting of a hierarchy of spatial maps. Attentional gates control the flow of information from each level of the hierarchy to the next. The gates are jointly controlled by a Bottom-Up System favoring locations with unique features and a Top-Down System favoring locations with features designated as target features. Because the gating of each location depends on the features present there, the model is called FeatureGate.

Proceedings ArticleDOI
15 Sep 1999
TL;DR: A color-spatial method to include several spatial features of the colors in an image for retrieval, including area and position, which mean the zero-order and the first-order moments, respectively.
Abstract: Along with the analysis of color features in the hue, saturation and value (HSV) space, a new dividing method to quantize the color space into 36 non-uniform bins is introduced in this paper. Based on this quantization method we propose a color-spatial method to include several spatial features of the colors in an image for retrieval. These features are area and position, which mean the zero-order and the first-order moments, respectively. Experiments on an image database of 838 images show that the algorithm performs well in precision and adaptability.
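The paper's exact 36-bin non-uniform partition is not reproduced here; the sketch below shows the general recipe with hypothetical bin edges: a non-uniform HSV quantization into 12 hue × 3 saturation-value bins, plus, per color bin, the zero-order moment (area) and first-order moments (centroid position) used as spatial features:

```python
import numpy as np

def color_spatial_features(h, s, v, n_hue=12, n_sv=3):
    """h in [0, 360), s and v in [0, 1]; returns (area, cy, cx) per bin.

    The 12 x 3 = 36-bin partition and the [0.15, 0.5] edges below are
    illustrative assumptions, not the paper's published quantization.
    """
    hq = (np.asarray(h) / 360.0 * n_hue).astype(int) % n_hue
    # non-uniform saturation/value split: gray, muted, vivid (hypothetical)
    svq = np.digitize(np.asarray(s) * np.asarray(v), [0.15, 0.5])
    bins = hq * n_sv + svq
    ys, xs = np.indices(bins.shape)
    feats = np.zeros((n_hue * n_sv, 3))
    for b in range(n_hue * n_sv):
        m = bins == b
        area = m.sum()
        feats[b, 0] = area / bins.size                    # zero-order moment
        if area:
            feats[b, 1] = ys[m].mean() / bins.shape[0]    # first-order:
            feats[b, 2] = xs[m].mean() / bins.shape[1]    # centroid position
    return feats
```

Retrieval then compares these per-bin (area, position) triples between images, so two images must agree not only on how much of each color they contain but also on where it sits in the frame.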

Journal ArticleDOI
TL;DR: The performance of the PNN when used in conjunction with these feature extraction and postprocessing schemes showed the potential of this neural-network-based cloud classification system.
Abstract: The problem of cloud data classification from satellite imagery using neural networks is considered. Several image transformations such as singular value decomposition (SVD) and wavelet packet (WP) were used to extract the salient spectral and textural features attributed to satellite cloud data in both visible and infrared (IR) channels. In addition, the well-known gray-level co-occurrence matrix (GLCM) method and spectral features were examined for the sake of comparison. Two different neural-network paradigms, namely the probabilistic neural network (PNN) and the unsupervised Kohonen self-organizing feature map (SOM), were examined, and their performance was benchmarked on geostationary operational environmental satellite (GOES) 8 data. Additionally, a postprocessing scheme was developed which utilizes the contextual information in the satellite images to improve the final classification accuracy. Overall, the performance of the PNN when used in conjunction with these feature extraction and postprocessing schemes showed the potential of this neural-network-based cloud classification system.
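A PNN is essentially a Parzen-window classifier, which a few lines can capture. This is a minimal sketch, not the paper's implementation: the upstream SVD / wavelet-packet / GLCM feature extraction is assumed to have produced the feature vectors already, and `sigma` is a hypothetical smoothing parameter.

```python
import numpy as np

def pnn_classify(train_X, train_y, x, sigma=1.0):
    """Minimal Parzen-window classifier in the spirit of a probabilistic
    neural network: every training sample contributes a Gaussian kernel,
    kernels are summed per class, and the largest class sum wins."""
    classes = np.unique(train_y)
    scores = [np.exp(-((train_X[train_y == c] - x) ** 2).sum(axis=1)
                     / (2.0 * sigma ** 2)).sum()
              for c in classes]
    return classes[int(np.argmax(scores))]

# Two toy "cloud type" clusters in a 2-D feature space:
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
```

One appeal of the PNN for this task is that it needs no iterative training: adding a labeled sample just adds a kernel, at the cost of evaluating every stored sample at classification time.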

Journal ArticleDOI
01 Jan 1999
TL;DR: This paper introduces a new algorithm, called lane-finding in another domain (LANA), for detecting lane markers in images acquired from a forward-looking vehicle-mounted camera based on a novel set of frequency domain features that capture relevant information concerning the strength and orientation of spatial edges.
Abstract: This paper introduces a new algorithm, called lane-finding in another domain (LANA), for detecting lane markers in images acquired from a forward-looking vehicle-mounted camera. The method is based on a novel set of frequency domain features that capture relevant information concerning the strength and orientation of spatial edges. The frequency domain features are combined with a deformable template prior, in order to detect the lane markers of interest. Experimental results that illustrate the performance of this algorithm on images with varying lighting and environmental conditions, shadowing, lane occlusion(s), solid and dashed lines, etc. are presented. LANA detects lane markers well under a very large and varied collection of roadway images. A comparison is drawn between this frequency feature-based LANA algorithm and the spatial feature-based LOIS lane detection algorithm. This comparison is made from experimental, computational and methodological standpoints.
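The abstract's "strength and orientation of spatial edges" read off in the frequency domain can be illustrated with a simple spectral pooling scheme. This is an illustration of the idea only, not LANA's actual feature set, which the abstract does not specify; the number of orientation bands is an assumption.

```python
import numpy as np

def edge_orientation_energy(block, n_orient=4):
    """Hedged sketch of a frequency-domain edge descriptor: FFT an image
    block and pool spectral magnitude into orientation bands, so the
    output summarizes the strength and orientation of spatial edges."""
    F = np.fft.fftshift(np.fft.fft2(block - block.mean()))
    H, W = block.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Orientation of each spectral sample relative to the DC center, in [0, pi).
    theta = np.arctan2(ys - H // 2, xs - W // 2) % np.pi
    band = np.minimum((theta / np.pi * n_orient).astype(int), n_orient - 1)
    return np.array([np.abs(F[band == b]).sum() for b in range(n_orient)])

# Vertical stripes vary along x, so their energy lands in the 0-radian band:
stripes = np.zeros((8, 8)); stripes[:, ::2] = 1.0
e = edge_orientation_energy(stripes)
```

A vertically striped block concentrates its spectrum on the horizontal frequency axis, so the first band dominates; a lane-marker edge at some other angle would shift energy into the corresponding band.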

Proceedings ArticleDOI
14 Jan 1999
TL;DR: In this paper, a machine vision approach based on classical image processing techniques has been used for plant detection and identification, which is needed for weed detection, herbicide application or other efficient chemical spot spraying operations.
Abstract: Machine vision based on classical image processing techniques has the potential to be a useful tool for plant detection and identification. Plant identification is needed for weed detection, herbicide application, or other efficient chemical spot-spraying operations. The key to successful detection and identification of plants by species type is the segmentation of plants from background pixel regions. In particular, it would be beneficial to segment individual leaves from the tops of canopies as well. The segmentation process yields an edge or binary image which contains shape feature information. Results indicate that red-green-blue formats might provide the best segmentation criteria, based on models of human color perception. The binary image can also be used as a template to investigate textural features of the plant pixel region, using gray-level co-occurrence matrices. Texture features capture leaf venation, color, or additional canopy structure that might be used to identify various types of grasses or broadleaf plants.
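One common RGB-based criterion for the plant/background segmentation described above is excess green, 2G - R - B, thresholded into a binary mask. This is offered only as an illustration of the kind of red-green-blue segmentation the abstract discusses; the authors' exact rule and threshold are not given there, so both are assumptions here.

```python
import numpy as np

def plant_mask(rgb, thresh=20):
    """Excess-green segmentation: classify a pixel as plant when
    2G - R - B exceeds a threshold. The criterion and the threshold
    value are stand-ins, not the paper's own rule."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    return (2 * g - r - b) > thresh

# One leafy pixel, one soil-colored pixel:
img = np.array([[[30, 180, 40], [120, 100, 90]]], dtype=np.uint8)
mask = plant_mask(img)
```

The resulting binary image is exactly the kind of template the abstract mentions: it carries the shape information directly, and masking the original image with it restricts any co-occurrence texture analysis to plant pixels only.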