Journal ArticleDOI

Pattern Recognition and Machine Learning

01 Aug 2007-Technometrics (Taylor & Francis)-Vol. 49, Iss: 3, pp 366-366
TL;DR: This book covers a broad range of topics for regular factorial designs, presents all of the material in a very mathematical fashion, and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.
Abstract: (2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.
Citations
Journal ArticleDOI
TL;DR: The experimental results show that the proposed algorithm outperforms many state-of-the-art algorithms and solves the inverse problem automatically, even when prior information on the number of clusters and the size of each cluster is unknown.

202 citations


Cites methods from "Pattern Recognition and Machine Learning"

  • ...In a probabilistic, Bayesian approach, through Graphical Models (GMs) [16], latent variables are often exploited to describe the dependencies (or joint probability distributions) between observations and parameters....

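As an illustration of the latent-variable idea in the excerpt above (a sketch, not code from the cited paper): in a mixture model the latent component assignment z is never observed and is marginalized out of the joint distribution to obtain the likelihood of the observations.

```python
import numpy as np

def mixture_loglik(x, weights, means, stds):
    """Log-likelihood of 1-D observations under a Gaussian mixture.

    The latent assignment z is marginalized out:
        p(x) = sum_z p(z) * N(x | mu_z, sigma_z)
    """
    x = np.asarray(x, dtype=float)[:, None]              # shape (n, 1)
    comp = (weights / (stds * np.sqrt(2.0 * np.pi))
            * np.exp(-0.5 * ((x - means) / stds) ** 2))  # shape (n, k)
    return float(np.log(comp.sum(axis=1)).sum())
```

For a single standard-normal component, the log-likelihood of x = 0 is -0.5·log(2π), which makes the marginalization easy to sanity-check.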

Journal ArticleDOI
TL;DR: Vehicle area networks form the backbone of future intelligent transportation systems and will be the focus of research and development for the next generation of smart cities.
Abstract: Vehicle area networks form the backbone of future intelligent transportation systems.

202 citations


Cites background from "Pattern Recognition and Machine Learning"

  • ...Here, we list a few cases where anomaly detection can be effectively deployed to enhance VAN security or safety: Driver Profiling: Machine learning and classification techniques are required to profile the driver’s behavior [5], [43]. A driver profile could be generated based on the driver’s physiological signals (for example, ECG, EEG, EOG) or vehicle information (speed, GPS routing, tire traction and stability) collected from various sensors....


Journal ArticleDOI
TL;DR: The results presented in this paper demonstrate that a computerized medical diagnosis system based on the analysis of cytological images of fine needle biopsies, characterizing these biopsies as either benign or malignant, would be effective, providing valuable, accurate diagnostic information.
Abstract: The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information.
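The pipeline above uses Otsu's method to obtain a nuclei mask. As a self-contained illustration of that one step (a sketch in plain numpy, not the authors' code), the Otsu threshold is the gray level that maximizes the between-class variance of the image histogram:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a 2-D uint8 image.

    Exhaustively searches the 256 gray levels for the split that
    maximizes between-class variance (equivalently, minimizes the
    weighted within-class variance).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue  # all mass on one side; no valid split here
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

On a synthetic bimodal image (half the pixels at 50, half at 200) the returned threshold falls between the two modes, separating background from nuclei-like foreground.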

201 citations


Cites methods from "Pattern Recognition and Machine Learning"

  • ...The Naive Bayes classifier is a probabilistic algorithm that applies Bayes’ theorem [34], [35]....


  • ...For classification we used four classification algorithms [35], [46]: k-nearest neighbor (k-NN), naive Bayes classifier (NB) using a kernel density estimate, decision tree (DT), and support vector machine (SVM) using a Gaussian radial basis function kernel....


  • ...Then five different classifiers were tested: k-nearest neighbor [33], naive Bayes classifier [34], [35], decision trees [36], support vector machines [32], and neural networks [37]....

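The naive Bayes classifier cited in these excerpts applies Bayes' theorem under the assumption that features are conditionally independent given the class. A minimal Gaussian variant as an illustration (the class name and interface are invented here; the paper used a kernel density estimate rather than a Gaussian per feature):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: each feature is modeled as an
    independent Gaussian per class, and Bayes' theorem combines the
    per-feature log-likelihoods with the class priors."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.logprior_ = np.log([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log p(c | x)  ∝  log p(c) + sum_j log N(x_j | mu_cj, var_cj)
        ll = -0.5 * (np.log(2 * np.pi * self.var_)[None]
                     + (X[:, None, :] - self.mu_[None]) ** 2
                     / self.var_[None]).sum(-1)
        return self.classes_[np.argmax(ll + self.logprior_, axis=1)]
```

On two well-separated 2-D clusters the posterior argmax recovers the correct class for points near each cluster center.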

Journal ArticleDOI
TL;DR: This letter investigates a variety of fusion techniques to blend multiple DCNN land cover classifiers into a single aggregate classifier, and uses DCNN cross-validation results for the input densities of fuzzy integrals followed by evolutionary optimization to produce state-of-the-art classification results.
Abstract: Deep convolutional neural networks (DCNNs) have recently emerged as a dominant paradigm for machine learning in a variety of domains. However, acquiring a suitably large data set for training a DCNN is often a significant challenge. This is a major issue in the remote sensing domain, where we have extremely large collections of satellite and aerial imagery but lack the rich label information that is often readily available for other image modalities. In this letter, we investigate the use of DCNNs for land-cover classification in high-resolution remote sensing imagery. To overcome the lack of massive labeled remote sensing image data sets, we employ two techniques in conjunction with DCNNs: transfer learning (TL) with fine-tuning and data augmentation tailored specifically for remote sensing imagery. TL allows one to bootstrap a DCNN while preserving the deep visual feature extraction learned over an image corpus from a different image domain. Data augmentation exploits various aspects of remote sensing imagery to dramatically expand small training image data sets and improve DCNN robustness for remote sensing image data. Here, we apply these techniques to the well-known UC Merced data set to achieve land-cover classification accuracies of 97.8 ± 2.3%, 97.6 ± 2.6%, and 98.5 ± 1.4% with CaffeNet, GoogLeNet, and ResNet, respectively.
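Data augmentation for overhead imagery can exploit the fact that aerial scenes have no preferred orientation. A sketch of one common choice (the eight dihedral variants of a patch; the exact augmentations used in the paper may differ):

```python
import numpy as np

def dihedral_augment(img):
    """Return the 8 rigid variants of an image patch: 4 rotations,
    each with and without a horizontal flip. For overhead imagery
    every variant is an equally plausible view, so this multiplies
    the training set by 8 without changing any labels."""
    variants = []
    for k in range(4):
        rot = np.rot90(img, k)        # rotate by k * 90 degrees
        variants.append(rot)
        variants.append(np.fliplr(rot))
    return variants
```

For a patch with no internal symmetry, all eight variants are distinct arrays, so a labeled set of N patches yields 8N training examples.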

201 citations


Cites methods from "Pattern Recognition and Machine Learning"

  • ...Ensemble methods, such as boosting and bagging [7], are known to produce improved results by combining multiple learning machines into a single classifier....

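Bagging, cited in the excerpt above, trains each base learner on a bootstrap resample of the data and aggregates predictions by majority vote. A toy sketch with decision stumps as base learners (illustrative only; all function names here are invented for the example):

```python
import numpy as np

def fit_stump(X, y):
    """Fit a one-feature threshold classifier (decision stump) by
    exhaustive search over features, thresholds, and polarity."""
    best = (-1.0, 0, 0.0, 1)  # (accuracy, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = (X[:, j] > t).astype(int)
            for sign in (1, 0):
                p = pred if sign else 1 - pred
                acc = (p == y).mean()
                if acc > best[0]:
                    best = (acc, j, t, sign)
    return best[1:]

def stump_predict(stump, X):
    j, t, sign = stump
    pred = (X[:, j] > t).astype(int)
    return pred if sign else 1 - pred

def bagging_fit(X, y, n_estimators=11, seed=0):
    """Bagging: each stump sees a bootstrap resample (sampling with
    replacement) of the training set."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), len(X))
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def bagging_predict(stumps, X):
    # Majority vote over the ensemble's binary predictions.
    votes = np.stack([stump_predict(s, X) for s in stumps]).mean(axis=0)
    return (votes > 0.5).astype(int)
```

The vote averages away the variance of individual resample-trained stumps, which is the mechanism by which bagging improves on a single learner.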

Proceedings ArticleDOI
TL;DR: In this article, the authors derive a new formulation that finds the best alignment between two congruent K-dimensional sets of points by selecting the best subset of eigenfunctions of the Laplacian matrix.
Abstract: Matching articulated shapes represented by voxel-sets reduces to maximal sub-graph isomorphism when each set is described by a weighted graph. Spectral graph theory can be used to map these graphs onto lower-dimensional spaces and match shapes by aligning their embeddings, by virtue of their invariance to change of pose. Classical graph isomorphism schemes that rely on the ordering of the eigenvalues to align the eigenspaces fail when handling large or noisy datasets. We derive a new formulation that finds the best alignment between two congruent K-dimensional sets of points by selecting the best subset of eigenfunctions of the Laplacian matrix. The selection is done by matching eigenfunction signatures built with histograms, and the retained set provides a smart initialization for the alignment problem, with a considerable impact on overall performance. Dense shape matching cast as graph matching then reduces to point registration of embeddings under orthogonal transformations; the registration is solved using the framework of unsupervised clustering and the EM algorithm. Maximal subset matching of non-identical shapes is handled by defining an appropriate outlier class. Experimental results on challenging examples show how the algorithm naturally handles changes of topology, shape variations, and different sampling densities.
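The spectral embedding step described above maps each shape graph into a low-dimensional space via Laplacian eigenvectors before alignment. A minimal sketch of that embedding (an illustration using the unnormalized Laplacian, not the authors' exact pipeline):

```python
import numpy as np

def laplacian_embedding(W, k):
    """Embed a graph with symmetric affinity matrix W into R^k using
    the k smallest non-trivial eigenvectors of the unnormalized
    Laplacian L = D - W. Shape matching then reduces to aligning the
    embeddings of the two graphs."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    return vecs[:, 1:k + 1], vals[1:k + 1]   # skip the trivial constant mode
```

For a connected graph the first Laplacian eigenvalue is 0 with a constant eigenvector, which is why the embedding starts from the second one; for the 4-node path graph the non-trivial eigenvalues are 2 − √2, 2, and 2 + √2.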

201 citations