
Showing papers on "Feature extraction published in 1999"


Proceedings ArticleDOI
20 Sep 1999
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.

16,989 citations
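The nearest-neighbor indexing step described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the 4-D descriptors are made up, and the distance-ratio test is a common robustness heuristic for rejecting ambiguous matches.

```python
import math

def nearest_neighbor_matches(query_keys, model_keys, ratio=0.8):
    """Match each query descriptor to its nearest model descriptor.

    A match is kept only when the nearest neighbor is significantly
    closer than the second nearest (a common ambiguity-rejection test).
    """
    matches = []
    for qi, q in enumerate(query_keys):
        dists = sorted((math.dist(q, m), mi) for mi, m in enumerate(model_keys))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))
    return matches

# Hypothetical 4-D descriptors: one clear match, one ambiguous query.
model = [(0.0, 0.0, 1.0, 1.0), (5.0, 5.0, 0.0, 0.0), (5.1, 5.0, 0.0, 0.0)]
query = [(0.1, 0.0, 1.0, 1.0),   # close to model[0] only -> kept
         (5.05, 5.0, 0.0, 0.0)]  # nearly equidistant to model[1]/[2] -> dropped
matches = nearest_neighbor_matches(query, model)
```

Verification against a geometric model (the low-residual least-squares fit mentioned in the abstract) would then prune any remaining false matches.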


01 Apr 1999
TL;DR: This paper describes a fast, correlation-based filter algorithm that can be applied to continuous and discrete problems and performs more feature selection than ReliefF does—reducing the data dimensionality by fifty percent in most cases.
Abstract: Algorithms for feature selection fall into two broad categories: wrappers that use the learning algorithm itself to evaluate the usefulness of features and filters that evaluate features according to heuristics based on general characteristics of the data. For application to large databases, filters have proven to be more practical than wrappers because they are much faster. However, most existing filter algorithms only work with discrete classification problems. This paper describes a fast, correlation-based filter algorithm that can be applied to continuous and discrete problems. The algorithm often outperforms the well-known ReliefF attribute estimator when used as a preprocessing step for naive Bayes, instance-based learning, decision trees, locally weighted regression, and model trees. It performs more feature selection than ReliefF does—reducing the data dimensionality by fifty percent in most cases. Also, decision and model trees built from the preprocessed data are often significantly smaller.

1,653 citations
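The filter idea can be sketched in a few lines. This is a deliberately simplified version: it scores each feature only by its correlation with the target, whereas the paper's algorithm also penalizes correlation between features; the data below is hypothetical.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def filter_select(features, target, threshold=0.5):
    """Keep the indices of features whose absolute correlation with the
    target exceeds the threshold -- a minimal correlation-based filter."""
    return [i for i, f in enumerate(features)
            if abs(pearson(f, target)) > threshold]

target = [0, 0, 1, 1, 1, 0]
features = [
    [0.1, 0.0, 0.9, 1.1, 1.0, 0.2],  # tracks the target closely
    [0.5, 0.4, 0.5, 0.6, 0.4, 0.5],  # mostly noise
]
selected = filter_select(features, target)
```

Because the merit is computed from the data alone, with no learning algorithm in the loop, the filter runs in a single pass over the features, which is why filters scale to large databases better than wrappers.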


Book
31 Aug 1999
TL;DR: Covers pattern recognition, cluster analysis for object and relational data, classifier design, and image processing and computer vision.
Abstract: Pattern Recognition.- Cluster Analysis for Object Data.- Cluster Analysis for Relational Data.- Classifier Design.- Image Processing and Computer Vision.

1,133 citations


Proceedings ArticleDOI
01 Aug 1999
TL;DR: An unsupervised, near-linear time text clustering system that offers a number of algorithm choices for each phase, and a refinement to center adjustment, “vector average damping,” that further improves cluster quality.
Abstract: Clustering is a powerful technique for large-scale topic discovery from text. It involves two phases: first, feature extraction maps each document or record to a point in high-dimensional space, then clustering algorithms automatically group the points into a hierarchy of clusters. We describe an unsupervised, near-linear time text clustering system that offers a number of algorithm choices for each phase. We introduce a methodology for measuring the quality of a cluster hierarchy in terms of F-measure, and present the results of experiments comparing different algorithms. The evaluation considers some feature selection parameters (tf-idf and feature vector length) but focuses on the clustering algorithms, namely techniques from Scatter/Gather (buckshot, fractionation, and split/join) and k-means. Our experiments suggest that continuous center adjustment contributes more to cluster quality than seed selection does. It follows that using a simpler seed selection algorithm gives a better time/quality tradeoff. We describe a refinement to center adjustment, “vector average damping,” that further improves cluster quality. We also compare the near-linear time algorithms to a group average greedy agglomerative clustering algorithm to demonstrate the time/quality tradeoff quantitatively.

958 citations
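The "continuous center adjustment" the experiments single out is the standard k-means update. Below is a minimal sketch with plain replacement of centers; the paper's "vector average damping" refinement would instead blend the old center with the new average. The 2-D points and seeds are hypothetical stand-ins for document feature vectors.

```python
def kmeans(points, centers, iters=10):
    """Plain k-means: alternate point assignment and center adjustment."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Center adjustment: each center moves to its cluster's mean.
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two clumps of hypothetical feature vectors, one seed per clump.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, [(0.0, 0.0), (10.0, 10.0)])
```

The finding that center adjustment matters more than seeding suggests even crude seeds (as here) are recovered from by the update step.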


Journal ArticleDOI
TL;DR: This work presents a fingerprint classification algorithm which is able to achieve an accuracy better than previously reported in the literature, using a novel representation (FingerCode) and a two-stage classifier.
Abstract: Fingerprint classification provides an important indexing mechanism in a fingerprint database. An accurate and consistent classification can greatly reduce fingerprint matching time for a large database. We present a fingerprint classification algorithm which is able to achieve an accuracy better than previously reported in the literature. We classify fingerprints into five categories: whorl, right loop, left loop, arch, and tented arch. The algorithm uses a novel representation (FingerCode) and is based on a two-stage classifier to make a classification. It has been tested on 4000 images in the NIST-4 database. For the five-class problem, a classification accuracy of 90 percent is achieved (with a 1.8 percent rejection during the feature extraction phase). For the four-class problem (arch and tented arch combined into one class), we are able to achieve a classification accuracy of 94.8 percent (with 1.8 percent rejection). By incorporating a reject option at the classifier, the classification accuracy can be increased to 96 percent for the five-class classification task, and to 97.8 percent for the four-class classification task after a total of 32.5 percent of the images are rejected.

639 citations


Journal ArticleDOI
TL;DR: This algorithm has applications for a variety of important problems in visualization and geometrical modeling including 3D feature extraction, mesh reduction, texture mapping 3D surfaces, and computer aided design.
Abstract: This paper describes a method for partitioning 3D surface meshes into useful segments. The proposed method generalizes morphological watersheds, an image segmentation technique, to 3D surfaces. This surface segmentation uses the total curvature of the surface as an indication of region boundaries. The surface is segmented into patches, where each patch has a relatively consistent curvature throughout, and is bounded by areas of higher, or drastically different, curvature. This algorithm has applications for a variety of important problems in visualization and geometrical modeling including 3D feature extraction, mesh reduction, texture mapping 3D surfaces, and computer aided design.

638 citations
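The watershed idea generalized by this paper can be illustrated on a 2-D grid, the image-segmentation setting it comes from: each cell descends to its lowest neighbor until it reaches a local minimum, and cells sharing a minimum form one patch. This is a simplified steepest-descent sketch (the paper works on 3-D meshes with total curvature as the height function); the height field below is hypothetical.

```python
def watershed_labels(height):
    """Label each cell of a 2-D height field (e.g. a curvature map)
    by the local minimum reached via steepest descent."""
    rows, cols = len(height), len(height[0])

    def lowest_neighbor(r, c):
        best = (height[r][c], (r, c))  # ties broken by coordinates
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                best = min(best, (height[rr][cc], (rr, cc)))
        return best[1]

    def sink(r, c):
        while True:
            nr, nc = lowest_neighbor(r, c)
            if (nr, nc) == (r, c):
                return (r, c)
            r, c = nr, nc

    return [[sink(r, c) for c in range(cols)] for r in range(rows)]

# Two valleys separated by a high-curvature ridge in the middle column.
h = [[0, 5, 1],
     [0, 5, 1],
     [0, 5, 1]]
labels = watershed_labels(h)
```

In the mesh setting, the "height" at each vertex is its total curvature, so patch boundaries fall along creases of high or sharply changing curvature, exactly as the abstract describes.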


Journal ArticleDOI
TL;DR: It is shown that feature sets based upon the short-time Fourier transform, the wavelet transform, and the wavelet packet transform provide an effective representation for classification, provided that they are subject to an appropriate form of dimensionality reduction.

625 citations


Journal ArticleDOI
01 Jun 1999
TL;DR: Its efficiency comes from direct processing on gray-level data without any preprocessing, and from processing only a minimally necessary fraction of pixels in an exploratory manner, avoiding low-level image-wide operations such as thresholding, edge detection, and morphological processing.
Abstract: Algorithms are presented for rapid, automatic, robust, adaptive, and accurate tracing of retinal vasculature and analysis of intersections and crossovers. This method improves upon prior work in several ways: automatic adaptation from frame to frame without manual initialization/adjustment, with few tunable parameters; robust operation on image sequences exhibiting natural variability, poor and varying imaging conditions, including over/under-exposure, low contrast, and artifacts such as glare; does not require the vasculature to be connected, so it can handle partial views; and operation is efficient enough for use on unspecialized hardware, and amenable to deadline-driven computing, being able to produce a rapidly and monotonically improving sequence of usable partial results. Increased computation can be traded for superior tracing performance. Its efficiency comes from direct processing on gray-level data without any preprocessing, and from processing only a minimally necessary fraction of pixels in an exploratory manner, avoiding low-level image-wide operations such as thresholding, edge detection, and morphological processing. These properties make the algorithm well suited to real-time, on-line (live) processing; it is being applied to computer-assisted laser retinal surgery.

481 citations


Proceedings ArticleDOI
08 Feb 1999
TL;DR: In this paper, a nonlinear form of principal component analysis (PCA) is proposed to perform polynomial feature extraction in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible d-pixel products in images.
Abstract: A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible d-pixel products in images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.

430 citations
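The mechanics of kernel PCA can be sketched compactly: build the kernel matrix for a polynomial kernel, double-center it so the implicit features have zero mean, and extract the leading eigenvector. This sketch substitutes power iteration for a full eigendecomposition, and the two-cluster data is hypothetical.

```python
def kernel_pca_first_component(X, degree=2, iters=300):
    """Sample projections on the first kernel principal component,
    using an (inhomogeneous) polynomial kernel and power iteration."""
    n = len(X)
    kern = lambda a, b: (1 + sum(x * y for x, y in zip(a, b))) ** degree
    K = [[kern(X[i], X[j]) for j in range(n)] for i in range(n)]

    # Double-center K: zero-mean features in the implicit space.
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    Kc = [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]

    # Power iteration for the leading eigenvector of the centered matrix.
    v = [float(i + 1) for i in range(n)]  # start not orthogonal to the answer
    for _ in range(iters):
        w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v  # projections up to scale and sign

# Two hypothetical clusters; the leading component separates them.
X = [(0.0, 0.1), (0.1, 0.0), (4.0, 4.1), (4.1, 4.0)]
v = kernel_pca_first_component(X)
```

The key point of the paper is that the kernel trick avoids ever forming the (here quadratic, in general enormous) explicit feature space.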


Journal ArticleDOI
TL;DR: A segmented, and possibly multistage, principal components transformation (PCT) is proposed for efficient hyperspectral remote-sensing image classification and display and results have been obtained in terms of classification accuracy, speed, and quality of color image display using two airborne visible/infrared imaging spectrometer (AVIRIS) data sets.
Abstract: A segmented, and possibly multistage, principal components transformation (PCT) is proposed for efficient hyperspectral remote-sensing image classification and display. The scheme requires, initially, partitioning the complete set of bands into several highly correlated subgroups. After separate transformation of each subgroup, the single-band separabilities are used as a guide to carry out feature selection. The selected features can then be transformed again to achieve a satisfactory data reduction ratio and generate the three most significant components for color display. The scheme reduces the computational load significantly for feature extraction, compared with the conventional PCT. A reduced number of features will also accelerate the maximum likelihood classification process significantly, and the process will not suffer the limitations encountered by trying to use the full set of hyperspectral data when training samples are limited. Encouraging results have been obtained in terms of classification accuracy, speed, and quality of color image display using two airborne visible/infrared imaging spectrometer (AVIRIS) data sets.

408 citations
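The first step, partitioning the bands into highly correlated subgroups, can be sketched as follows. The grouping rule (start a new subgroup whenever a band's correlation with its predecessor drops below a threshold) is one plausible realization, not necessarily the paper's exact criterion, and the tiny band vectors are hypothetical.

```python
def segment_bands(bands, threshold=0.9):
    """Partition spectral bands into contiguous, highly correlated
    subgroups, each then suitable for a separate PCT."""
    def corr(x, y):  # Pearson correlation
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

    groups = [[0]]
    for i in range(1, len(bands)):
        if corr(bands[i - 1], bands[i]) >= threshold:
            groups[-1].append(i)
        else:
            groups.append([i])
    return groups

# Four hypothetical bands (pixel vectors): bands 0-1 and 2-3 correlate.
bands = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1], [8, 6, 4, 2]]
groups = segment_bands(bands)
```

Transforming each small subgroup separately is what cuts the computational load relative to a single full-dimensional PCT, and it keeps the covariance estimates better conditioned when training samples are scarce.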


Journal ArticleDOI
TL;DR: Two novel characteristics, datum point invariance and line feature matching, are presented in palmprint verification and are used to determine whether two palmprints come from the same palm.

Proceedings ArticleDOI
27 Sep 1999
TL;DR: It is shown that, in general, the discrimination effectiveness of the features increases with the amount of post-Gabor processing.
Abstract: The performance of a number of texture feature operators is evaluated. The features are all based on the local spectrum which is obtained by a bank of Gabor filters. The comparison is made using a quantitative method which is based on Fisher's criterion. It is shown that, in general, the discrimination effectiveness of the features increases with the amount of post-Gabor processing.
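The basic Gabor machinery can be sketched directly: a complex Gabor kernel (a plane wave under a Gaussian envelope) is convolved with the image, and taking the response magnitude is a first example of the "post-Gabor processing" the comparison is about. The stripe image and parameters are hypothetical; real filter banks vary frequency and orientation systematically.

```python
import math, cmath

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex 2-D Gabor kernel: plane wave times an isotropic Gaussian."""
    half = size // 2
    kern = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            env = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(env * cmath.exp(2j * math.pi * xr / wavelength))
        kern.append(row)
    return kern

def gabor_magnitude(img, kern):
    """Magnitude of the Gabor response at each valid (non-border) pixel."""
    half = len(kern) // 2
    out = []
    for r in range(half, len(img) - half):
        row = []
        for c in range(half, len(img[0]) - half):
            s = sum(kern[dy + half][dx + half] * img[r + dy][c + dx]
                    for dy in range(-half, half + 1)
                    for dx in range(-half, half + 1))
            row.append(abs(s))
        out.append(row)
    return out

# Vertical stripes respond to the horizontal-frequency filter (theta = 0)
# far more strongly than to the orthogonal one (theta = pi/2).
img = [[c % 2 for c in range(7)] for r in range(7)]
strong = gabor_magnitude(img, gabor_kernel(5, 2.0, 0.0, 1.0))
weak = gabor_magnitude(img, gabor_kernel(5, 2.0, math.pi / 2, 1.0))
```

Further post-Gabor steps evaluated in such comparisons typically smooth or nonlinearly transform these magnitudes before classification.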

Journal ArticleDOI
TL;DR: A new feature-based approach to automated image-to-image registration that combines an invariant-moment shape descriptor with improved chain-code matching to establish correspondences between the potentially matched regions detected from the two images is presented.
Abstract: A new feature-based approach to automated image-to-image registration is presented. The characteristic of this approach is that it combines an invariant-moment shape descriptor with improved chain-code matching to establish correspondences between the potentially matched regions detected from the two images. It is robust in that it overcomes the difficulties of control-point correspondence by matching the images both in the feature space, using the principle of minimum distance classifier (based on the combined criteria), and sequentially in the image space, using the rule of root mean-square error (RMSE). In image segmentation, the performance of the Laplacian of Gaussian operators is improved by introducing a new algorithm called thin and robust zero crossing. After the detected edge points are refined and sorted, regions are defined. Region correspondences are then performed by an image-matching algorithm developed in this research. The centers of gravity are then extracted from the matched regions and are used as control points. Transformation parameters are estimated based on the final matched control-point pairs. The algorithm proposed is automated, robust, and of significant value in an operational context. Experimental results using multitemporal Landsat TM imagery are presented.

Journal ArticleDOI
TL;DR: The multiscale behavior of gradient watershed regions is investigated, yielding a mechanism for imposing a scale-based hierarchy on the watersheds associated with intensity minima in the gradient magnitude image, which can be used to label watershed boundaries according to their scale.
Abstract: Multiscale image analysis has been used successfully in a number of applications to classify image features according to their relative scales. As a consequence, much has been learned about the scale-space behavior of intensity extrema, edges, intensity ridges, and grey-level blobs. We investigate the multiscale behavior of gradient watershed regions. These regions are defined in terms of the gradient properties of the gradient magnitude of the original image. Boundaries of gradient watershed regions correspond to the edges of objects in an image. Multiscale analysis of intensity minima in the gradient magnitude image provides a mechanism for imposing a scale-based hierarchy on the watersheds associated with these minima. This hierarchy can be used to label watershed boundaries according to their scale. This provides valuable insight into the multiscale properties of edges in an image without following these curves through scale-space. In addition, the gradient watershed region hierarchy can be used for automatic or interactive image segmentation. By selecting subtrees of the region hierarchy, visually sensible objects in an image can be easily constructed.

Journal ArticleDOI
TL;DR: Reducing the dimensionality via a preprocessing method that takes high-dimensional feature-space properties into consideration should enable the estimation of feature-extraction parameters to be more accurate and bypass the problems caused by the limited number of training samples.
Abstract: As the number of spectral bands of high-spectral resolution data increases, the ability to detect more detailed classes should also increase, and the classification accuracy should increase as well. Often the number of labelled samples used for supervised classification techniques is limited, thus limiting the precision with which class characteristics can be estimated. As the number of spectral bands becomes large, the limitation on performance imposed by the limited number of training samples can become severe. A number of techniques for case-specific feature extraction have been developed to reduce dimensionality without loss of class separability. Most of these techniques require the estimation of statistics at full dimensionality in order to extract relevant features for classification. If the number of training samples is not adequately large, the estimation of parameters in high-dimensional data will not be accurate enough. As a result, the estimated features may not be as effective as they could be. This suggests the need for reducing the dimensionality via a preprocessing method that takes into consideration high-dimensional feature-space properties. Such reduction should enable the estimation of feature-extraction parameters to be more accurate. Using a technique referred to as projection pursuit (PP), such an algorithm has been developed. This technique is able to bypass many of the problems of the limitation of small numbers of training samples by making the computations in a lower-dimensional space, and optimizing a function called the projection index. A current limitation of this method is that, as the number of dimensions increases, it is likely that a local maximum of the projection index will be found that does not enable one to fully exploit hyperspectral-data capabilities.

Journal ArticleDOI
TL;DR: In this paper, the key frame extraction problem is considered from a set-theoretic point of view, and systematic algorithms are derived to find a compact set of key frames that can represent a video segment for a given degree of fidelity.
Abstract: Extracting a small number of key frames that can abstract the content of video is very important for efficient browsing and retrieval in video databases. In this paper, the key frame extraction problem is considered from a set-theoretic point of view, and systematic algorithms are derived to find a compact set of key frames that can represent a video segment for a given degree of fidelity. The proposed extraction algorithms can be hierarchically applied to obtain a tree-structured key frame hierarchy that is a multilevel abstract of the video. The key frame hierarchy enables an efficient content-based retrieval by using the depth-first search scheme with pruning. Intensive experiments on a variety of video sequences are presented to demonstrate the improved performance of the proposed algorithms over the existing approaches.
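The fidelity constraint can be made concrete with a greedy sketch: scan the sequence and open a new key frame whenever the current frame is farther than a tolerance eps from every key frame chosen so far, so every frame ends up within eps of some key frame. This is an illustration of the fidelity idea, not the paper's set-theoretic algorithm; the 1-D frame features are hypothetical.

```python
def extract_key_frames(frames, eps):
    """Greedy key-frame selection under a per-frame fidelity bound."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    keys = []
    for i, f in enumerate(frames):
        if all(dist(f, frames[k]) > eps for k in keys):
            keys.append(i)
    return keys

# Hypothetical frame features with a shot change at frame 3.
frames = [(0.0,), (0.1,), (0.05,), (5.0,), (5.1,)]
keys = extract_key_frames(frames, eps=0.5)
```

Rerunning the selection with a coarser eps on the chosen key frames yields a smaller key set, which is the intuition behind the tree-structured hierarchy the paper builds.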

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A novel feature extraction method based on incorporating information about the nearby orientation of the feature, as well as the anisotropic classification of the local tensor, which has the effect of stabilizing the tracking.
Abstract: Tracking linear features through tensor field datasets is an open research problem with widespread utility in medical and engineering disciplines. Existing tracking methods, which consider only the preferred local diffusion direction as they propagate, fail to accurately follow features as they enter regions of local complexity. This shortcoming is a result of partial voluming; that is, voxels in these regions often contain contributions from multiple features. These combined contributions result in ambiguities when deciding local primary feature orientation based solely on the preferred diffusion direction. In this paper, we introduce a novel feature extraction method, which we term tensorline propagation. Our method resolves the above ambiguity by incorporating information about the nearby orientation of the feature, as well as the anisotropic classification of the local tensor. The nearby orientation information is added in the spirit of an advection term in a standard diffusion-based propagation technique, and has the effect of stabilizing the tracking. To demonstrate the efficacy of tensorlines, we apply this method to the neuroscience problem of tracking white-matter bundles within the brain.

Journal ArticleDOI
TL;DR: The algorithm proposed withstands JPEG and MPEG artifacts, even at high compression rates, and can detect and classify production effects that are difficult to detect with previous approaches.
Abstract: We describe a new approach to the detection and classification of production effects in video sequences. Our method can detect and classify a variety of effects, including cuts, fades, dissolves, wipes and captions, even in sequences involving significant motion. We detect the appearance of intensity edges that are distant from edges in the previous frame. A global motion computation is used to handle camera or object motion. The algorithm we propose withstands JPEG and MPEG artifacts, even at high compression rates. Experimental evidence demonstrates that our method can detect and classify production effects that are difficult to detect with previous approaches.

Journal ArticleDOI
TL;DR: A hidden Markov model-based approach designed to recognize off-line unconstrained handwritten words for large vocabularies; experiments show it can be successfully used for handwritten word recognition.
Abstract: Describes a hidden Markov model-based approach designed to recognize off-line unconstrained handwritten words for large vocabularies. After preprocessing, a word image is segmented into letters or pseudoletters and represented by two feature sequences of equal length, each consisting of an alternating sequence of shape-symbols and segmentation-symbols, which are both explicitly modeled. The word model is made up of the concatenation of appropriate letter models consisting of elementary HMMs and an HMM-based interpolation technique is used to optimally combine the two feature sets. Two rejection mechanisms are considered depending on whether or not the word image is guaranteed to belong to the lexicon. Experiments carried out on real-life data show that the proposed approach can be successfully used for handwritten word recognition.

Journal ArticleDOI
01 Feb 1999
TL;DR: A prototype of a reverse engineering system which uses manufacturing features as geometric primitives is described; it has two advantages over current practice: the resulting models can be directly imported into feature-based CAD systems without loss of the semantics and topological information inherent in feature-based representations.
Abstract: Reverse engineering of mechanical parts requires extraction of information about an instance of a particular part sufficient to replicate the part using appropriate manufacturing techniques. This is important in a wide variety of situations, since functional CAD models are often unavailable or unusable for parts which must be duplicated or modified. Computer vision techniques applied to three-dimensional (3-D) data acquired using noncontact, 3-D position digitizers have the potential for significantly aiding the process. Serious challenges must be overcome, however, if sufficient accuracy is to be obtained and if models produced from sensed data are to be truly useful for manufacturing operations. The paper describes a prototype of a reverse engineering system which uses manufacturing features as geometric primitives. This approach has two advantages over current practice. The resulting models can be directly imported into feature-based CAD systems without loss of the semantics and topological information inherent in feature-based representations. In addition, the feature-based approach facilitates methods capable of producing highly accurate models, even when the original 3-D sensor data has substantial errors.

Proceedings ArticleDOI
19 May 1999
TL;DR: This work studies the problem of combining feature maps, from different visual modalities and with unrelated dynamic ranges, into a unique saliency map, and indicates that strategy (4) and its simplified, computationally efficient approximation yielded significantly better performance than (1), with up to 4-fold improvement, while preserving generality.
Abstract: Bottom-up or saliency-based visual attention allows primates to detect non-specific conspicuous targets in cluttered scenes. A classical metaphor, derived from electrophysiological and psychophysical studies, describes attention as a rapidly shiftable 'spotlight'. The model described here reproduces the attentional scanpaths of this spotlight: simple multi-scale 'feature maps' detect local spatial discontinuities in intensity, color, orientation or optical flow, and are combined into a unique 'master' or 'saliency' map. The saliency map is sequentially scanned, in order of decreasing saliency, by the focus of attention. We study the problem of combining feature maps, from different visual modalities and with unrelated dynamic ranges, into a unique saliency map. Four combination strategies are compared using three databases of natural color images: (1) simple normalized summation, (2) linear combination with learned weights, (3) global non-linear normalization followed by summation, and (4) local non-linear competition between salient locations. Performance was measured as the number of false detections before the most salient target was found. Strategy (1) always yielded poorest performance and (2) best performance, with a 3- to 8-fold improvement in time to find a salient target. However, (2) yielded specialized systems with poor generalization. Interestingly, strategy (4) and its simplified, computationally efficient approximation (3) yielded significantly better performance than (1), with up to 4-fold improvement, while preserving generality. © 1999 SPIE, The International Society for Optical Engineering.
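Strategy (3) can be sketched as follows: each map is weighted by the squared difference between its global peak and the mean of its other local maxima, so a map with one strong, unique peak dominates the sum while a map with many comparable peaks is suppressed. This is a rough sketch assuming the maps are already rescaled to a common range; the two tiny maps are hypothetical.

```python
def combine_feature_maps(maps):
    """Global non-linear normalization then summation: weight each map
    by (M - mbar)**2, M its global peak, mbar the mean of its other
    local maxima."""
    def local_maxima(m):
        vals = []
        for r in range(len(m)):
            for c in range(len(m[0])):
                neigh = [m[rr][cc]
                         for rr in range(max(r - 1, 0), min(r + 2, len(m)))
                         for cc in range(max(c - 1, 0), min(c + 2, len(m[0])))
                         if (rr, cc) != (r, c)]
                if all(m[r][c] > v for v in neigh):
                    vals.append(m[r][c])
        return vals

    rows, cols = len(maps[0]), len(maps[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for m in maps:
        peaks = sorted(local_maxima(m), reverse=True)
        mbar = sum(peaks[1:]) / len(peaks[1:]) if len(peaks) > 1 else 0.0
        w = (peaks[0] - mbar) ** 2
        for r in range(rows):
            for c in range(cols):
                out[r][c] += w * m[r][c]
    return out

map_a = [[0.0] * 5 for _ in range(5)]
map_a[2][2] = 1.0                      # one unique peak -> kept
map_b = [[0.0] * 5 for _ in range(5)]
for r, c in [(0, 0), (0, 4), (4, 0), (4, 4)]:
    map_b[r][c] = 1.0                  # four equal peaks -> suppressed
sal = combine_feature_maps([map_a, map_b])
```

Strategy (4) replaces this single global weighting with iterated local competition, which the abstract reports performs slightly better still.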

Journal ArticleDOI
01 Aug 1999
TL;DR: A neural-based crowd estimation system for surveillance in complex scenes at underground station platforms is presented, based on a proposed hybrid of least-squares and global search algorithms that provides a global search characteristic and fast convergence speed.
Abstract: A neural-based crowd estimation system for surveillance in complex scenes at underground station platforms is presented. Estimation is carried out by extracting a set of significant features from sequences of images. Those feature indexes are modeled by a neural network to estimate the crowd density. The learning phase is based on our proposed hybrid of least-squares and global search algorithms, which provides a global search characteristic and fast convergence speed. Promising experimental results are obtained in terms of accuracy and real-time response capability to alert operators automatically.

Journal ArticleDOI
TL;DR: The main assumption underlying the approach is the existence of a dominant global motion that can be assigned to the background; areas that do not follow this motion indicate the presence of independently moving physical objects.
Abstract: To provide multimedia applications with new functionalities, the new video coding standard MPEG-4 relies on a content-based representation. This requires a prior decomposition of sequences into semantically meaningful, physical objects. We formulate this problem as one of separating foreground objects from the background based on motion information. For the object of interest, a 2D binary model is derived and tracked throughout the sequence. The model points consist of edge pixels detected by the Canny operator. To accommodate rotation and changes in shape of the tracked object, the model is updated every frame. These binary models then guide the actual video object plane (VOP) extraction. Thanks to our new boundary postprocessor and the excellent edge localization properties of the Canny operator, the resulting VOP contours are very accurate. Both the model initialization and update stages exploit motion information. The main assumption underlying our approach is the existence of a dominant global motion that can be assigned to the background. Areas that do not follow this background motion indicate the presence of independently moving physical objects. Two alternative methods to identify such objects are presented. The first one employs a morphological motion filter with a new filter criterion, which measures the deviation of the locally estimated optical flow from the corresponding global motion. The second method computes a change detection mask by taking the difference between consecutive frames. The first version is more suitable for sequences with little motion, whereas the second version is better at dealing with faster moving or changing objects. Experimental results demonstrate the performance of our algorithm.
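The second object-identification method, the change detection mask, is the simplest to sketch: threshold the absolute intensity difference between consecutive frames. The frames and threshold below are hypothetical, and real use would follow this with the paper's model tracking and boundary postprocessing.

```python
def change_detection_mask(prev, curr, thresh):
    """Binary change mask: 1 where the absolute intensity difference
    between consecutive frames exceeds the threshold."""
    return [[1 if abs(a - b) > thresh else 0
             for a, b in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

prev = [[10, 10, 10],
        [10, 10, 10],
        [10, 10, 10]]
curr = [[10, 10, 10],
        [10, 200, 10],   # a moving object enters the center pixel
        [10, 10, 10]]
mask = change_detection_mask(prev, curr, thresh=30)
```

As the abstract notes, this differencing variant favors fast-moving or changing objects, whereas the morphological motion filter handles sequences with little motion better.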

Journal ArticleDOI
TL;DR: A Gabor filter-based method for directly extracting fingerprint features from grey-level images without pre-processing is introduced; experiments show that the recognition rate of the k-nearest neighbour classifier using the proposed Gabor filter-based features is 97.2%.
Abstract: A Gabor filter-based method for directly extracting fingerprint features from grey-level images without pre-processing is introduced. The proposed method is more efficient and suitable than conventional methods for a small-scale fingerprint recognition system. Experimental results show that the recognition rate of the k-nearest neighbour classifier using the proposed Gabor filter-based features is 97.2%.

Proceedings ArticleDOI
15 Mar 1999
TL;DR: A new approach to content-based video indexing using hidden Markov models (HMMs), in which one feature vector is calculated for each image of the video sequence and a video model allows the classification of complex video sequences.
Abstract: This paper presents a new approach to content-based video indexing using hidden Markov models (HMMs). In this approach one feature vector is calculated for each image of the video sequence. These feature vectors are modeled and classified using HMMs. This approach has many advantages compared to other video indexing approaches. The system has automatic learning capabilities. It is trained by presenting manually indexed video sequences. To improve the system we use a video model, that allows the classification of complex video sequences. The presented approach works three times faster than real-time. We tested our system on TV broadcast news. The rate of 97.3% correctly classified frames shows the efficiency of our system.
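Classifying a sequence of per-frame feature vectors with an HMM comes down to Viterbi decoding. Below is a minimal log-space Viterbi sketch; the two states ("anchor"/"report"), the discretized observations, and all probabilities are hypothetical illustrations, not the paper's trained models.

```python
import math

def viterbi(obs, states, log_init, log_trans, log_emit):
    """Most likely state sequence for a discrete-emission HMM,
    computed in log space to avoid numerical underflow."""
    V = [{s: log_init[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        prev = V[-1]
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] + log_trans[p][s])
            col[s] = prev[best] + log_trans[best][s] + log_emit[s][o]
            ptr[s] = best
        V.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):      # backtrack through the pointers
        path.append(ptr[path[-1]])
    return path[::-1]

log = math.log
states = ["anchor", "report"]
init = {"anchor": log(0.5), "report": log(0.5)}
trans = {"anchor": {"anchor": log(0.9), "report": log(0.1)},
         "report": {"anchor": log(0.1), "report": log(0.9)}}
emit = {"anchor": {"studio": log(0.9), "field": log(0.1)},
        "report": {"studio": log(0.2), "field": log(0.8)}}
obs = ["studio", "studio", "field", "field", "studio"]
path = viterbi(obs, states, init, trans, emit)
```

Note the sticky transitions make a single "studio" frame after two "field" frames insufficient to flip the state back, which is exactly how such a video model smooths over noisy per-frame features.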

Proceedings ArticleDOI
19 Mar 1999
TL;DR: This paper presents a data mining algorithm to find association rules in 2-dimensional color images; experiments exploring the feasibility of this approach show that there is promise in image mining based on content.
Abstract: This paper focuses on data mining for knowledge discovery in image databases. We present a data mining algorithm to find association rules in 2-dimensional color images. The algorithm has four major steps: feature extraction, object identification, auxiliary image creation and object mining. Our emphasis is on data mining of image content without the use of auxiliary domain knowledge. The purpose of our experiments is to explore the feasibility of this approach. A synthetic image set containing geometric shapes was generated to test our initial algorithm implementation. Our experimental results show that there is promise in image mining based on content. We compare these results against the rules obtained from manually identifying the shapes. We analyze the reasons for discrepancies. We also suggest directions for future work.

Journal ArticleDOI
TL;DR: This paper is concerned with the automatic detection of linear features in SAR satellite data, with application to road network extraction, and uses fuzzy operators to test and compare different fusion strategies involving different fusion operators.
Abstract: This paper is concerned with the automatic detection of linear features in SAR satellite data, with application to road network extraction. After a directional prefiltering step, a morphological line detector is presented. To improve the detection performances, the results obtained on multitemporal data are fused. Different fusion strategies involving different fusion operators are then presented. Since extensions of classical set union and intersection do not lead to satisfactory results (the corresponding operators are either too indulgent or too severe), the first strategy consists of fusing the data using a compromise operator. The second strategy consists of fusing the results computed with two operators that have opposite properties, in order to obtain a final intermediate result. Thanks to the wide range of properties they provide, fuzzy operators are used to test and compare these two fusion strategies on real ERS-1 multitemporal data.
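The contrast between "indulgent" and "severe" operators can be sketched with standard fuzzy connectives, plus one compromise operator. The symmetric sum below is one well-known compromise choice used for illustration; the paper's actual operators may differ.

```python
def fuse_severe(a, b):
    """Fuzzy intersection (min t-norm): a detection must appear in
    both dates -- too severe when one image misses the road."""
    return min(a, b)

def fuse_indulgent(a, b):
    """Fuzzy union (max t-conorm): any single detection suffices --
    too indulgent, letting single-date noise through."""
    return max(a, b)

def fuse_compromise(a, b):
    """Symmetric sum: reinforces agreement in either direction and
    stays between min and max for conflicting memberships."""
    num = a * b
    return num / (num + (1 - a) * (1 - b))
```

For conflicting memberships such as 0.8 and 0.3 the compromise lands strictly between min and max, while two concordant strong detections (0.8, 0.8) reinforce to a value above either input.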

Journal ArticleDOI
TL;DR: The performance of the PNN, used in conjunction with the proposed feature extraction and postprocessing schemes, showed the potential of this neural-network-based cloud classification system.
Abstract: The problem of cloud data classification from satellite imagery using neural networks is considered. Several image transformations such as singular value decomposition (SVD) and wavelet packet (WP) were used to extract the salient spectral and textural features attributed to satellite cloud data in both visible and infrared (IR) channels. In addition, the well-known gray-level cooccurrence matrix (GLCM) method and spectral features were examined for the sake of comparison. Two different neural-network paradigms, namely the probabilistic neural network (PNN) and the unsupervised Kohonen self-organized feature map (SOM), were examined and their performance was also benchmarked on the geostationary operational environmental satellite (GOES) 8 data. Additionally, a postprocessing scheme was developed which utilizes the contextual information in the satellite images to improve the final classification accuracy. Overall, the performance of the PNN when used in conjunction with these feature extraction and postprocessing schemes showed the potential of this neural-network-based cloud classification system.

Journal ArticleDOI
01 Jan 1999
TL;DR: This paper introduces a new algorithm, called lane-finding in another domain (LANA), for detecting lane markers in images acquired from a forward-looking vehicle-mounted camera based on a novel set of frequency domain features that capture relevant information concerning the strength and orientation of spatial edges.
Abstract: This paper introduces a new algorithm, called lane-finding in another domain (LANA), for detecting lane markers in images acquired from a forward-looking vehicle-mounted camera. The method is based on a novel set of frequency domain features that capture relevant information concerning the strength and orientation of spatial edges. The frequency domain features are combined with a deformable template prior, in order to detect the lane markers of interest. Experimental results that illustrate the performance of this algorithm on images with varying lighting and environmental conditions, shadowing, lane occlusion(s), solid and dashed lines, etc. are presented. LANA detects lane markers well under a very large and varied collection of roadway images. A comparison is drawn between this frequency feature-based LANA algorithm and the spatial feature-based LOIS lane detection algorithm. This comparison is made from experimental, computational and methodological standpoints.

Journal ArticleDOI
TL;DR: A new approach is proposed which combines canonical space transformation (CST), based on canonical analysis (CA), with EST for feature extraction; it can be used to reduce data dimensionality and to optimise the class separability of different gait classes simultaneously.