
Showing papers on "Convolutional neural network published in 2000"


01 Jan 2000
TL;DR: An automated technique to optimally select the neural network architecture using the simulated annealing algorithm is investigated, based on the area A_z under the receiver operating characteristic (ROC) curve of the neural network.
Abstract: Many computer-aided diagnosis (CAD) systems use neural networks for either detection or classification of abnormalities on medical images. In this work, the authors investigate an automated technique to optimally select the neural network architecture using the simulated annealing algorithm. The optimization is based on the area A_z under the receiver operating characteristic (ROC) curve of the neural network. Studies are performed to select the architecture of a convolution neural network designed for the classification of true and false microcalcifications detected on digitized mammograms.
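The architecture selection the abstract describes can be sketched as simulated annealing over a small hyper-parameter space, with the ROC area A_z as the objective. Everything below is a hypothetical stand-in: the search space, the `evaluate_az` surrogate, and all numeric values are illustrative, whereas the actual system trains a convolution neural network on mammogram data and measures A_z on validation cases.

```python
import math
import random

# Hypothetical architecture search space (number of filters, kernel size,
# hidden units). The real paper searches CNN architecture parameters.
SPACE = {"filters": [4, 8, 12, 16], "kernel": [3, 5, 7], "hidden": [10, 20, 40]}

def evaluate_az(arch):
    """Toy surrogate for A_z (area under the ROC curve) of a trained network.
    In the real system this would train the CNN and measure A_z on a
    validation set; here we use an arbitrary smooth function that peaks
    at filters=12, kernel=5, hidden=20."""
    f, k, h = arch["filters"], arch["kernel"], arch["hidden"]
    return 0.95 - 0.001 * ((f - 12) ** 2 + 4 * (k - 5) ** 2 + 0.01 * (h - 20) ** 2)

def neighbor(arch):
    """Perturb one randomly chosen architecture parameter."""
    new = dict(arch)
    key = random.choice(list(SPACE))
    new[key] = random.choice(SPACE[key])
    return new

def anneal(steps=500, t0=0.1, cooling=0.99):
    """Simulated annealing: always accept improvements in A_z, accept
    worse architectures with Boltzmann probability at temperature t."""
    random.seed(0)
    current = {k: random.choice(v) for k, v in SPACE.items()}
    cur_az = evaluate_az(current)
    best, best_az, t = current, cur_az, t0
    for _ in range(steps):
        cand = neighbor(current)
        az = evaluate_az(cand)
        if az > cur_az or random.random() < math.exp((az - cur_az) / t):
            current, cur_az = cand, az
            if az > best_az:
                best, best_az = cand, az
        t *= cooling  # geometric cooling schedule
    return best, best_az
```

The acceptance rule lets the search escape locally optimal architectures early on, then behaves greedily as the temperature decays, which is the property the authors exploit for architecture selection.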

8 citations


Proceedings ArticleDOI
23 Jul 2000
TL;DR: In this paper, the authors investigated an automated technique to optimally select the neural network architecture using the simulated annealing algorithm, which is based on the area A_z under the receiver operating characteristic (ROC) curve of the network.
Abstract: Many computer-aided diagnosis (CAD) systems use neural networks for either detection or classification of abnormalities on medical images. In this work, the authors investigate an automated technique to optimally select the neural network architecture using the simulated annealing algorithm. The optimization is based on the area A_z under the receiver operating characteristic (ROC) curve of the neural network. Studies are performed to select the architecture of a convolution neural network designed for the classification of true and false microcalcifications detected on digitized mammograms.

1 citation


Proceedings ArticleDOI
23 Jul 2000
TL;DR: A new method is proposed to incorporate biologically inspired receptive fields in a feedforward neural network to enhance pattern recognition performance; results show a strong correlation between the neural network performance and the receptive field geometry and orientation.
Abstract: Proposes a new method to incorporate biologically inspired receptive fields in a feedforward neural network to enhance pattern recognition performance. Based on a genetic algorithm, the method determines the receptive field geometry, orientation, bias, and the number of planes per layer that maximize the pattern recognition performance of the network. The method is tested on the handwritten digit recognition problem. The basic architecture of the neural network is inspired by the Neocognitron model. Resulting network architectures were ranked based on the fitness criterion: best generalization performance on a testing set. Results show a strong correlation between the neural network performance and the receptive field geometry and orientation. Results were compared with those of a fully connected perceptron neural network that does not incorporate receptive fields. Results reached by several networks with receptive field configurations determined by the genetic algorithm outperformed those of the perceptron model.
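The search over receptive field parameters can be sketched as a small genetic algorithm. This is a toy illustration under loud assumptions: the genome encodes only (width, height, orientation), and `fitness` is a made-up surrogate, whereas the paper's fitness is the generalization accuracy of a Neocognitron-style network on handwritten digits, and its genome also covers bias and planes per layer.

```python
import random

# Allowed receptive-field orientations (degrees), a hypothetical discretization.
ORIENTATIONS = [0, 45, 90, 135]

def fitness(genome):
    """Toy surrogate fitness, peaking at a 3x5 field oriented at 90 degrees.
    The real fitness is test-set recognition performance of the trained net."""
    w, h, theta = genome
    return 1.0 - 0.05 * abs(w - 3) - 0.05 * abs(h - 5) - 0.001 * abs(theta - 90)

def random_genome():
    return (random.randint(1, 7), random.randint(1, 7), random.choice(ORIENTATIONS))

def crossover(a, b):
    """One-point crossover over the three genes."""
    cut = random.randint(1, 2)
    return a[:cut] + b[cut:]

def mutate(g, rate=0.2):
    """Occasionally resample a size gene or the orientation gene."""
    g = list(g)
    if random.random() < rate:
        g[random.randrange(2)] = random.randint(1, 7)
    if random.random() < rate:
        g[2] = random.choice(ORIENTATIONS)
    return tuple(g)

def evolve(pop_size=20, generations=40):
    random.seed(1)
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection keeps the top half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```

Because the top half of each generation survives unchanged, the best receptive-field configuration found so far is never lost, mirroring the paper's ranking of architectures by the fitness criterion.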

Book ChapterDOI
01 Jan 2000
TL;DR: Three complete recognizers have been developed, and they are labelled as modular neural network (MNN), learning vector quantization (LVQ), and convolutional neural network (CNN).
Abstract: A number of learning algorithm based automatic target recognizers have been developed. The approaches differ in how features for recognition are extracted and in the architecture of the recognizer. Features are either extracted automatically by a multilayer convolutional neural network, or chosen by the designer based on experiment and previous experience. Recognizer complexity is kept low by decomposing the learning tasks using modular components or imposing an architecture that is not fully connected. Three complete recognizers have been developed, and we label them as modular neural network (MNN), learning vector quantization (LVQ), and convolutional neural network (CNN). MNN uses modular neural networks operating on local directional variances of the image. LVQ uses the Haar wavelet decomposition of the input images as features, clusters training features into templates using the K-means algorithm, and then enhances the recognition capability of the templates using learning vector quantization. CNN operates directly on input images without any preliminary feature extraction stage. The multilayer convolutional neural network simultaneously learns features and how to classify them. The performance of the recognizers is compared by probability of recognition and computational complexity.
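The template-refinement step of the LVQ recognizer can be sketched with the standard LVQ1 update rule: the nearest prototype is pulled toward a correctly classified sample and pushed away from a misclassified one. This is a minimal stand-in, not the authors' system: it operates on raw 2-D points with hand-placed prototypes, whereas the paper initializes templates by K-means over Haar wavelet features of the input images.

```python
import random

def nearest(protos, x):
    """Index of the prototype closest to x in squared Euclidean distance."""
    return min(range(len(protos)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(protos[i][0], x)))

def lvq1(data, protos, lr=0.1, epochs=20):
    """LVQ1 refinement. data: list of (vector, label);
    protos: list of [vector, label] prototypes, updated in place."""
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            i = nearest(protos, x)
            vec, label = protos[i]
            sign = 1.0 if label == y else -1.0  # attract on a hit, repel on a miss
            protos[i][0] = [v + sign * lr * (xj - v) for v, xj in zip(vec, x)]
    return protos
```

After refinement, classification is simply nearest-prototype lookup, which is why the recognizer's run-time cost stays low compared with the full CNN, matching the paper's interest in computational complexity.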