Journal Article · DOI

Traffic Sign Recognition With Hinge Loss Trained Convolutional Neural Networks

TL;DR
The details of the model's architecture for TSR are described and a hinge loss stochastic gradient descent (HLSGD) method to train convolutional neural networks (CNNs) is suggested.
Abstract
Traffic sign recognition (TSR) is an important and challenging task for intelligent transportation systems. We describe the details of our model's architecture for TSR and suggest a hinge loss stochastic gradient descent (HLSGD) method to train convolutional neural networks (CNNs). Our CNN consists of three stages (70-110-180) with 1,162,284 trainable parameters. HLSGD is evaluated on the German Traffic Sign Recognition Benchmark (GTSRB), where it offers faster and more stable convergence and a state-of-the-art recognition rate of 99.65%. We implement a graphics processing unit (GPU) package to train several CNNs and build the final classifier as an ensemble.
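To make the training recipe concrete, below is a minimal sketch, assuming a PyTorch implementation rather than the authors' own GPU package: a three-stage CNN trained by stochastic gradient descent on a multi-class hinge loss. The 70-110-180 widths are taken from the abstract (assumed here to be feature-map counts per stage); kernel sizes, the 48x48 input resolution, the learning rate, and momentum are illustrative assumptions, not the paper's exact configuration.

# Minimal HLSGD-style sketch (assumptions noted in comments): a three-stage
# CNN trained with a multi-class hinge loss and plain SGD. Not the authors' code.
import torch
import torch.nn as nn

class ThreeStageCNN(nn.Module):
    def __init__(self, num_classes=43):                       # GTSRB has 43 classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 70, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),    # stage 1
            nn.Conv2d(70, 110, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # stage 2
            nn.Conv2d(110, 180, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2), # stage 3
        )
        self.classifier = nn.Linear(180 * 6 * 6, num_classes)  # assumes 48x48 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ThreeStageCNN()
criterion = nn.MultiMarginLoss()                               # multi-class hinge loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)                    # hinge loss on class scores
    loss.backward()                                            # SGD step on the hinge loss
    optimizer.step()
    return loss.item()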


Citations
Posted Content

Object Detection in 20 Years: A Survey

TL;DR: This paper extensively reviews 400+ papers on object detection in light of its technical evolution, spanning more than a quarter century (from the 1990s to 2019), and offers an in-depth analysis of their challenges as well as technical improvements in recent years.
Journal Article · DOI

Identification of rice diseases using deep convolutional neural networks

TL;DR: A novel rice disease identification method based on deep convolutional neural network (CNN) techniques, trained to identify 10 common rice diseases with much higher accuracy than conventional machine learning models.
Proceedings Article · DOI

Traffic-Sign Detection and Classification in the Wild

TL;DR: A large traffic-sign benchmark, created from 100,000 Tencent Street View panoramas and going beyond previous benchmarks, is presented, and it is demonstrated how a robust end-to-end convolutional neural network (CNN) can simultaneously detect and classify traffic signs.
Journal Article · DOI

Beyond Sharing Weights for Deep Domain Adaptation

TL;DR: This work introduces a two-stream architecture, where one stream operates in the source domain and the other in the target domain, and demonstrates that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings.
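As a rough illustration of the unshared-weights idea only (not the paper's exact model or losses), the sketch below builds two streams with identical architectures but separate parameters, trains the source stream with a supervised loss, and adds a simple penalty keeping corresponding weights related plus a crude mean-feature discrepancy term standing in for the paper's domain-alignment loss; the encoder, classifier, and loss weights are all assumptions.

# Two-stream sketch for domain adaptation: same architecture, unshared weights.
# All layer sizes, losses, and coefficients are illustrative placeholders.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder():
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 64), nn.ReLU(),
    )

source_stream = make_encoder()
target_stream = copy.deepcopy(source_stream)      # same init, weights NOT shared
classifier = nn.Linear(64, 10)                    # shared label classifier (assumption)

def weight_distance(a, b):
    # Penalty keeping the two streams related without forcing them to be identical.
    return sum((pa - pb).pow(2).sum() for pa, pb in zip(a.parameters(), b.parameters()))

def loss_fn(src_x, src_y, tgt_x, lam_domain=0.1, lam_weights=1e-3):
    src_feat = source_stream(src_x)
    tgt_feat = target_stream(tgt_x)
    cls_loss = F.cross_entropy(classifier(src_feat), src_y)   # supervised, source only
    # Crude domain-alignment proxy: match mean feature activations across domains.
    domain_loss = (src_feat.mean(0) - tgt_feat.mean(0)).pow(2).sum()
    return (cls_loss + lam_domain * domain_loss
            + lam_weights * weight_distance(source_stream, target_stream))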
Posted Content

Perceptual Generative Adversarial Networks for Small Object Detection

TL;DR: This work addresses the small object detection problem with a single architecture that internally lifts representations of small objects to super-resolved ones with characteristics similar to those of large objects, making them more discriminative for detection.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network with five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers ending in a 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
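For concreteness, the layer layout named in this summary can be written out roughly as below; the channel counts follow the widely known AlexNet configuration, while local response normalization, dropout, and the original two-GPU split are omitted, so this is an illustrative sketch rather than a faithful reimplementation.

# Sketch of the described layout: five conv layers (some followed by max pooling),
# three fully connected layers, and a final 1000-way softmax. Expects 227x227 inputs.
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),     # conv1 + pool
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),   # conv2 + pool
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),                      # conv3
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),                      # conv4
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),  # conv5 + pool
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),                           # fc6
    nn.Linear(4096, 4096), nn.ReLU(),                                  # fc7
    nn.Linear(4096, 1000),                                             # fc8
    nn.Softmax(dim=1),                                                 # 1000-way softmax
)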
Journal Article · DOI

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
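As a small practical aside, keypoint extraction and matching in the spirit of this paper can be tried with OpenCV's SIFT implementation; the image file names and the 0.75 ratio-test threshold below are placeholders.

# SIFT keypoint matching with OpenCV (>= 4.4, where SIFT lives in the main module).
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)           # keypoints + 128-D descriptors
kp2, desc2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc1, desc2, k=2)

# Ratio test: keep only distinctive matches.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")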
Journal Article · DOI

Gradient-based learning applied to document recognition

TL;DR: A graph transformer network (GTN), trained with gradient-based learning, is proposed; it synthesizes a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Proceedings Article · DOI

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
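A minimal sketch of computing a HOG descriptor with scikit-image is given below; the 9-orientation, 8x8-cell, 2x2-block settings mirror the configuration commonly associated with this paper, and the sample image is only a stand-in for a pedestrian detection window.

# HOG descriptor with scikit-image; parameters follow the commonly cited setting.
from skimage import color, data
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())     # placeholder image, not a detection window
descriptor = hog(
    image,
    orientations=9,                          # 9 gradient orientation bins
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
)
print(descriptor.shape)                      # flattened HOG feature vector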

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: The Scale-Invariant Feature Transform (SIFT) algorithm is a highly robust method to extract and subsequently match distinctive invariant features from images, which can then be used to reliably match objects across differing images.