Book Chapter (DOI)

Heavy Vehicle Detection Using Fine-Tuned Deep Learning

16 May 2018 · pp. 1903–1910
TL;DR: Experiments show that a vehicle's make and model can be recognized effectively from transportation images using the proposed fine-tuned detection system, which detects heavy vehicles accurately in both simple and complex scenarios compared with past vehicle detection systems.
Abstract: Heavy vehicles cause technical breakdowns and traffic jams on streets. Accidents between heavy vehicles and other road users, for example pedestrians, often result in severe injuries to the weaker party. Highway safety can be improved and congestion reduced by detecting heavy and overloaded vehicles on the highway, easing traffic for light motor vehicles such as cars and scooters. A model for heavy vehicle detection based on fine-tuned deep learning is proposed to deal with cluttered transportation scenes. The model comprises two parts, a vehicle detection model and fine-grained vehicle detection; the detection step provides data for the subsequent classification model. Experiments show that a vehicle's make and model can be recognized effectively from transportation images using this method. Experimental results demonstrate that the proposed system detects heavy vehicles accurately in both simple and complex scenarios, in comparison with past vehicle detection systems.
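The fine-tuning strategy the abstract refers to can be sketched in miniature: keep a pretrained backbone frozen and train only a fresh classification head on its features. The sketch below is illustrative, not the paper's model; the "backbone" is just a fixed random projection, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained backbone: a frozen random
# projection plus ReLU. In real fine-tuning this would be a pretrained
# CNN whose early layers are kept frozen.
W_backbone = rng.normal(size=(64, 16))

def extract_features(images):
    # Frozen during fine-tuning: no gradient reaches W_backbone.
    f = np.maximum(images @ W_backbone, 0.0)
    return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic "heavy vs. light vehicle" data, made linearly separable in
# the frozen feature space so that a new linear head suffices.
X = rng.normal(size=(200, 64))
F = extract_features(X)
v = rng.normal(size=16)
y = (F @ v > 0).astype(float)

# Fine-tuning here = training only the fresh classification head (w, b).
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = sigmoid(F @ w + b)
    w -= 0.1 * F.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

accuracy = float(np.mean((sigmoid(F @ w + b) > 0.5) == y))
```

The same parameter-freezing idea carries over directly to a real CNN, where only the final layers receive gradient updates.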
References

Proceedings Article (DOI)
03 Oct 2011
TL;DR: This work applies Convolutional Networks (ConvNets) to traffic sign classification as part of the GTSRB competition, achieving the second-best accuracy in the competition, above human performance.
Abstract: We apply Convolutional Networks (ConvNets) to the task of traffic sign classification as part of the GTSRB competition. ConvNets are biologically-inspired multi-stage architectures that automatically learn hierarchies of invariant features. While many popular vision approaches use hand-crafted features such as HOG or SIFT, ConvNets learn features at every level from data that are tuned to the task at hand. The traditional ConvNet architecture was modified by feeding 1st stage features in addition to 2nd stage features to the classifier. The system yielded the 2nd-best accuracy of 98.97% during phase I of the competition (the best entry obtained 98.98%), above the human performance of 98.81%, using 32×32 color input images. Experiments conducted after phase 1 produced a new record of 99.17% by increasing the network capacity, and by using greyscale images instead of color. Interestingly, random features still yielded competitive results (97.33%).

612 citations




Journal Article (DOI)
TL;DR: A stochastic multiclass vehicle classification system which classifies a vehicle (given its direct rear-side view) into one of four classes: sedan, pickup truck, SUV/minivan, and unknown is presented.
Abstract: Vehicle classification has evolved into a significant subject of study due to its importance in autonomous navigation, traffic analysis, surveillance and security systems, and transportation management. While numerous approaches have been introduced for this purpose, no specific study has been conducted to provide a robust and complete video-based vehicle classification system based on the rear-side view where the camera's field of view is directly behind the vehicle. In this paper, we present a stochastic multiclass vehicle classification system which classifies a vehicle (given its direct rear-side view) into one of four classes: sedan, pickup truck, SUV/minivan, and unknown. A feature set of tail light and vehicle dimensions is extracted which feeds a feature selection algorithm to define a low-dimensional feature vector. The feature vector is then processed by a hybrid dynamic Bayesian network to classify each vehicle. Results are shown on a database of 169 videos for four classes.
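The idea of a low-dimensional feature vector feeding a classifier with an "unknown" class can be sketched with a nearest-class-mean rule. The feature names and class means below are invented for illustration; the paper itself uses tail-light and dimension features with a hybrid dynamic Bayesian network, not this rule.

```python
import numpy as np

# Hypothetical 2-D feature vector per vehicle: (width/height ratio,
# tail-light separation relative to vehicle width). Class means are
# made-up values for illustration only.
class_means = {
    "sedan":       np.array([2.6, 0.70]),
    "pickup":      np.array([2.3, 0.60]),
    "suv_minivan": np.array([2.0, 0.65]),
}

def classify(feat, reject_dist=0.5):
    # Nearest-class-mean with a rejection radius: anything too far from
    # every class mean falls into the fourth class, "unknown".
    dists = {c: float(np.linalg.norm(feat - m)) for c, m in class_means.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= reject_dist else "unknown"

print(classify(np.array([2.6, 0.70])))   # close to the sedan mean
print(classify(np.array([10.0, 5.0])))   # far from everything
```

A probabilistic model such as the paper's dynamic Bayesian network replaces the hard distance threshold with likelihoods accumulated over consecutive frames.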

198 citations




Proceedings Article (DOI)
16 Oct 2015
TL;DR: A systematic framework of learning a deep CNN that addresses the challenges from two new perspectives by identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources is proposed.
Abstract: Deep convolutional neural networks (CNN) have seen tremendous success in large-scale generic object recognition. In comparison with generic object recognition, fine-grained image classification (FGIC) is much more challenging because (i) fine-grained labeled data is much more expensive to acquire (usually requiring domain expertise); (ii) there exists large intra-class and small inter-class variance. Most recent work exploiting deep CNN for image recognition with small training data adopts a simple strategy: pre-train a deep CNN on a large-scale external dataset (e.g., ImageNet) and fine-tune on the small-scale target data to fit the specific classification task. In this paper, beyond the fine-tuning strategy, we propose a systematic framework of learning a deep CNN that addresses the challenges from two new perspectives: (i) identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources (e.g., image search engines), and formulating the problem into multitask learning; (ii) a novel learning model by exploiting a regularization between the fine-grained recognition model and the hyper-class recognition model. We demonstrate the success of the proposed framework on two small-scale fine-grained datasets (Stanford Dogs and Stanford Cars) and on a large-scale car dataset that we collected.
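The multitask idea, a fine-grained head and a hyper-class head on shared features with a regularizer coupling the two, can be written down as a single loss. The coupling term below (pulling each fine-grained weight vector toward its hyper-class counterpart) is an illustrative choice, not the paper's exact regularization.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax_ce(logits, labels):
    # Mean cross-entropy for integer class labels.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -float(np.mean(logp[np.arange(len(labels)), labels]))

def multitask_loss(F, y_fine, y_hyper, Wf, Wh, M, lam):
    # Two heads on shared features F, plus a coupling regularizer that
    # keeps each fine-grained weight vector near the weights of its
    # hyper-class (M maps fine class index -> hyper-class index).
    loss_fine = softmax_ce(F @ Wf, y_fine)
    loss_hyper = softmax_ce(F @ Wh, y_hyper)
    coupling = lam * float(np.sum((Wf - Wh[:, M]) ** 2))
    return loss_fine + loss_hyper + coupling

F = rng.normal(size=(20, 8))           # shared CNN features (8-dim)
y_fine = rng.integers(0, 4, size=20)   # 4 fine-grained classes
M = np.array([0, 0, 1, 1])             # fine class -> hyper-class map
y_hyper = M[y_fine]                    # 2 easily-annotated hyper-classes
Wf = rng.normal(size=(8, 4))
Wh = rng.normal(size=(8, 2))

loss = multitask_loss(F, y_fine, y_hyper, Wf, Wh, M, lam=0.1)
```

Because hyper-class labels are cheap to collect at scale, the hyper-class head can be trained on far more data, and the coupling term transfers that signal into the fine-grained head.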

183 citations


Book Chapter (DOI)
06 Sep 2014
TL;DR: This work proposes to optimize 3D model fitting and fine-grained classification jointly, demonstrates that the method outperforms several state-of-the-art approaches, and conducts a series of analyses to explore the dependence between fine-grained classification performance and 3D models.
Abstract: 3D object modeling and fine-grained classification are often treated as separate tasks. We propose to optimize 3D model fitting and fine-grained classification jointly. Detailed 3D object representations encode more information (e.g., precise part locations and viewpoint) than traditional 2D-based approaches, and can therefore improve fine-grained classification performance. Meanwhile, the predicted class label can also improve 3D model fitting accuracy, e.g., by providing more detailed class-specific shape models. We evaluate our method on a new fine-grained 3D car dataset (FG3DCar), demonstrating our method outperforms several state-of-the-art approaches. Furthermore, we also conduct a series of analyses to explore the dependence between fine-grained classification performance and 3D models.
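The joint-optimization idea can be reduced to a toy: each class carries a shape template, "fitting" estimates a pose parameter (here just a scale) per template, and classification picks the class whose fitted template explains the observation best, so fitting and classification inform each other. The templates and classes below are invented for illustration.

```python
import numpy as np

# Toy class-specific shape templates (1-D stand-ins for 3D models).
templates = {
    "sedan": np.array([1.0, 2.0, 1.0]),
    "truck": np.array([2.0, 2.0, 3.0]),
}

def best_scale(obs, t):
    # Least-squares pose fit for a single scale parameter s:
    # argmin_s ||obs - s*t||^2 = <obs, t> / <t, t>
    return float(obs @ t / (t @ t))

def joint_classify(obs):
    # Fit each class-specific model, then classify by residual error:
    # the class label and the model fit are decided together.
    errors = {}
    for name, t in templates.items():
        s = best_scale(obs, t)
        errors[name] = float(np.sum((obs - s * t) ** 2))
    return min(errors, key=errors.get)

obs = 1.5 * np.array([2.0, 2.0, 3.0]) + 0.01   # a scaled, perturbed "truck"
print(joint_classify(obs))
```

In the paper the "pose" is a full 3D viewpoint and part configuration rather than a scalar, but the alternation between class-specific model fitting and label selection follows the same pattern.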

135 citations


Proceedings Article (DOI)
25 Oct 2012
TL;DR: This paper presents a system for vehicle detection, tracking and classification from roadside CCTV, using a combination of a vehicle silhouette and intensity-based pyramid HOG features extracted following background subtraction, classifying foreground blobs with majority voting.
Abstract: This paper presents a system for vehicle detection, tracking and classification from roadside CCTV. The system counts vehicles and separates them into four categories: car, van, bus and motorcycle (including bicycles). A new background Gaussian Mixture Model (GMM) and shadow removal method have been used to deal with sudden illumination changes and camera vibration. A Kalman filter tracks a vehicle to enable classification by majority voting over several consecutive frames, and a level set method has been used to refine the foreground blob. Extensive experiments with real world data have been undertaken to evaluate system performance. The best performance results from training a SVM (Support Vector Machine) using a combination of a vehicle silhouette and intensity-based pyramid HOG features extracted following background subtraction, classifying foreground blobs with majority voting. The evaluation results from the videos are encouraging: for a detection rate of 96.39%, the false positive rate is only 1.36% and false negative rate 4.97%. Even including challenging weather conditions, classification accuracy is 94.69%.
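The detection front end of such a pipeline, a per-pixel background model with foreground thresholding, can be sketched as follows. A running-average background is used here as a simplified stand-in for the paper's Gaussian Mixture Model, and the frames are synthetic.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # Running-average background model: a simplified stand-in for the
    # per-pixel Gaussian mixture (GMM) used in the paper.
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30.0):
    # Pixels deviating strongly from the background model are foreground.
    return np.abs(frame - bg) > thresh

rng = np.random.default_rng(2)
scene = rng.uniform(0, 50, size=(8, 8))   # static road scene (grayscale)
bg = scene.copy()

# Warm up the model on static frames with sensor noise.
for _ in range(20):
    bg = update_background(bg, scene + rng.normal(0, 1, size=(8, 8)))

# A bright "vehicle" blob enters the frame.
frame = scene.copy()
frame[2:5, 2:5] += 120.0
mask = foreground_mask(bg, frame)
```

In the full system the resulting foreground blobs are refined with shadow removal and a level set, tracked with a Kalman filter, and classified by an SVM over silhouette and pyramid-HOG features with majority voting across frames.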

120 citations

