Topic

Histogram of oriented gradients

About: Histogram of oriented gradients is a research topic. Over the lifetime, 2037 publications have been published within this topic receiving 55881 citations. The topic is also known as: HOG.
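For reference, below is a minimal sketch of computing a HOG descriptor with scikit-image. The parameter values (9 orientation bins, 8x8-pixel cells, 2x2-cell blocks) follow the common Dalal-Triggs defaults and are illustrative assumptions, not settings taken from any paper listed on this page.

```python
# Minimal HOG feature extraction with scikit-image (illustrative defaults).
from skimage import color, data
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # example grayscale image

features, hog_image = hog(
    image,
    orientations=9,             # gradient orientation bins per histogram
    pixels_per_cell=(8, 8),     # cell size over which each histogram is built
    cells_per_block=(2, 2),     # blocks used for local contrast normalization
    block_norm="L2-Hys",
    visualize=True,             # also return an image visualizing the histograms
    feature_vector=True,        # flatten the block histograms into one vector
)
print(features.shape)
```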


Papers
Journal ArticleDOI
TL;DR: A novel DL network with histogram of oriented gradient feature fusion (HOG-ShipCLSNet) is proposed for SAR ship classification, with four mechanisms designed to ensure superior classification accuracy.
Abstract: Ship classification in synthetic aperture radar (SAR) images is a fundamental and significant step in ocean surveillance. Recently, with the rise of deep learning (DL), modern abstract features from convolutional neural networks (CNNs) have hugely improved SAR ship classification accuracy. However, most existing CNN-based SAR ship classifiers overly rely on abstract features, but uncritically abandon traditional mature hand-crafted features, which may hinder further improvements in accuracy. Hence, this article proposes a novel DL network with histogram of oriented gradient (HOG) feature fusion (HOG-ShipCLSNet) for improved SAR ship classification. In HOG-ShipCLSNet, four mechanisms are proposed to ensure superior classification accuracy, that is, 1) a multiscale classification mechanism (MS-CLS-Mechanism); 2) a global self-attention mechanism (GS-ATT-Mechanism); 3) a fully connected balance mechanism (FC-BAL-Mechanism); and 4) an HOG feature fusion mechanism (HOG-FF-Mechanism). We perform sufficient ablation studies to confirm the effectiveness of these four mechanisms. Finally, our experimental results on two open SAR ship datasets (OpenSARShip and FUSAR-Ship) jointly reveal that HOG-ShipCLSNet dramatically outperforms both modern CNN-based methods and traditional hand-crafted feature methods.
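The HOG feature fusion idea can be illustrated with a short sketch: concatenate a hand-crafted HOG vector with learned CNN features before the classifier head. This is not the authors' HOG-ShipCLSNet (which also adds multiscale classification, global self-attention, and FC balancing); the tiny backbone, feature sizes, HOG dimensionality, and three-class output below are placeholder assumptions.

```python
# A minimal sketch of HOG feature fusion with CNN features, in the spirit of the
# HOG-FF-Mechanism described above. NOT the authors' HOG-ShipCLSNet architecture.
import torch
import torch.nn as nn

class HogFusionClassifier(nn.Module):
    def __init__(self, hog_dim: int, num_classes: int = 3):
        super().__init__()
        # Tiny CNN backbone standing in for the real multiscale network (assumption).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fusion head: concatenate learned features with the precomputed HOG vector.
        self.head = nn.Sequential(
            nn.Linear(32 + hog_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, sar_chip: torch.Tensor, hog_feat: torch.Tensor) -> torch.Tensor:
        cnn_feat = self.backbone(sar_chip).flatten(1)      # (B, 32)
        fused = torch.cat([cnn_feat, hog_feat], dim=1)     # (B, 32 + hog_dim)
        return self.head(fused)

# Usage with dummy data: 4 single-channel 64x64 SAR chips plus precomputed
# 1764-dim HOG vectors (the dimensionality is an assumption).
model = HogFusionClassifier(hog_dim=1764)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1764))
print(logits.shape)  # torch.Size([4, 3])
```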

38 citations

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A real-time obstacle recognition framework is introduced that alerts visually impaired/blind users to the presence of obstacles and helps them navigate safely, in indoor and outdoor environments, using a smartphone.
Abstract: In this paper we introduce a real-time obstacle recognition framework designed to alert visually impaired/blind users to the presence of obstacles and to assist them in navigating safely, in indoor and outdoor environments, using a smartphone. Static and dynamic objects are detected using interest points selected on an image grid and tracked with the multiscale Lucas-Kanade algorithm. Next, an object classification stage is applied: we incorporate the HOG (Histogram of Oriented Gradients) descriptor into the BoVW (Bag of Visual Words) retrieval framework and demonstrate how this combination can be used for obstacle classification in video streams. Experimental results on various challenging scenes demonstrate that our approach is effective on image sequences with significant camera movement, noise, and low-resolution data, and achieves high accuracy while being computationally efficient.
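A rough sketch of plugging HOG descriptors into a bag-of-visual-words pipeline is given below. The grid patch size, 50-word vocabulary, linear SVM, and random dummy frames are assumptions for illustration, not the paper's settings.

```python
# HOG descriptors quantized into a bag-of-visual-words (BoVW) representation,
# then classified with a linear SVM. Illustrative sketch only.
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def patch_hogs(image, patch=32, step=16):
    """HOG descriptor for each patch on a regular image grid (local features)."""
    descs = []
    for y in range(0, image.shape[0] - patch + 1, step):
        for x in range(0, image.shape[1] - patch + 1, step):
            descs.append(hog(image[y:y + patch, x:x + patch],
                             orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)))
    return np.array(descs)

def bovw_histogram(image, vocab: KMeans):
    """Quantize patch HOGs against the vocabulary and return a word histogram."""
    words = vocab.predict(patch_hogs(image))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Dummy data: random grayscale frames with alternating binary labels.
rng = np.random.default_rng(0)
frames = rng.random((20, 96, 96))
labels = np.arange(20) % 2            # dummy labels (obstacle / background)

vocab = KMeans(n_clusters=50, n_init=10, random_state=0)
vocab.fit(np.vstack([patch_hogs(f) for f in frames]))        # visual vocabulary
X = np.array([bovw_histogram(f, vocab) for f in frames])
clf = LinearSVC().fit(X, labels)                             # obstacle classifier
print(clf.predict(X[:3]))
```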

38 citations

Proceedings Article
01 Jan 2010
TL;DR: The results show that the 3D HOG implementation provides competitive retrieval performance, and is able to boost the performance of one of the best existing 3D object descriptors when used in a combined descriptor.
Abstract: 3D object retrieval has received much research attention during the last years. To automatically determine the similarity between 3D objects, the global descriptor approach is very popular, and many competing methods for extracting global descriptors have been proposed to date. However, no single descriptor has yet shown to outperform all other descriptors on all retrieval benchmarks or benchmark classes. Instead, combinations of different descriptors usually yield improved performance over any single method. Therefore, enhancing the set of candidate descriptors is an important prerequisite for implementing effective 3D object retrieval systems. Inspired by promising recent results from image processing, in this paper we adapt the Histogram of Oriented Gradients (HOG) 2D image descriptor to the 3D domain. We introduce a concept for transferring the HOG descriptor extraction algorithm from 2D to 3D. We provide an implementation framework for extracting 3D HOG features from 3D mesh models, and present a systematic experimental evaluation of the retrieval effectiveness of this novel 3D descriptor. The results show that our 3D HOG implementation provides competitive retrieval performance, and is able to boost the performance of one of the best existing 3D object descriptors when used in a combined descriptor.
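One way to transfer HOG from 2D to 3D can be sketched as follows: compute gradients on a voxelized model and accumulate per-cell histograms of gradient orientation (azimuth and elevation), weighted by gradient magnitude. The paper operates on mesh models and defines its own transfer concept, so the voxel grid, spherical binning, and cell size below are simplifying assumptions for illustration only.

```python
# A simplified 3D HOG-style descriptor on a voxel grid (not the paper's method).
import numpy as np

def hog3d(volume, cell=8, az_bins=8, el_bins=4):
    """Per-cell histograms of 3D gradient orientations (azimuth x elevation)."""
    gz, gy, gx = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    az = np.arctan2(gy, gx)                        # azimuth in [-pi, pi]
    el = np.arctan2(gz, np.sqrt(gx**2 + gy**2))    # elevation in [-pi/2, pi/2]
    az_idx = np.clip(((az + np.pi) / (2 * np.pi) * az_bins).astype(int), 0, az_bins - 1)
    el_idx = np.clip(((el + np.pi / 2) / np.pi * el_bins).astype(int), 0, el_bins - 1)

    nz, ny, nx = (s // cell for s in volume.shape)
    feats = []
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                sl = (slice(k * cell, (k + 1) * cell),
                      slice(j * cell, (j + 1) * cell),
                      slice(i * cell, (i + 1) * cell))
                hist = np.zeros((az_bins, el_bins))
                # Accumulate magnitude-weighted votes into the orientation bins.
                np.add.at(hist, (az_idx[sl].ravel(), el_idx[sl].ravel()), mag[sl].ravel())
                feats.append(hist.ravel() / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)                   # global 3D HOG-style descriptor

# Dummy voxelized object: a solid sphere in a 32^3 occupancy grid.
z, y, x = np.mgrid[:32, :32, :32]
sphere = ((x - 16)**2 + (y - 16)**2 + (z - 16)**2 < 100).astype(float)
print(hog3d(sphere).shape)
```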

38 citations

Journal ArticleDOI
TL;DR: This paper adopts the conventional camera approach that uses sliding windows and histogram of oriented gradients (HOG) features, and describes how the feature extraction step of the conventional approach should be modified for a theoretically correct and effective use in omnidirectional cameras.
Abstract: In this paper, we present an omnidirectional vision-based method for object detection. We first adopt the conventional camera approach that uses sliding windows and histogram of oriented gradients (HOG) features. Then, we describe how the feature extraction step of the conventional approach should be modified for theoretically correct and effective use with omnidirectional cameras. The main steps are the modification of gradient magnitudes using a Riemannian metric and the conversion of gradient orientations to form an omnidirectional sliding window. In this way, we perform object detection directly on the omnidirectional images without converting them to panoramic or perspective images. Our experiments, with synthetic and real images, compare the proposed approach with regular (unmodified) HOG computation on both omnidirectional and panoramic images. Results show that the proposed approach should be preferred.
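The conventional baseline the paper starts from, sliding-window HOG detection on a perspective image, looks roughly like the sketch below (here using OpenCV's default HOG person detector on a dummy frame). The paper's contribution, reweighting gradient magnitudes with a Riemannian metric and bending the sliding window to the omnidirectional image geometry, is not reproduced here.

```python
# Conventional sliding-window HOG detection baseline (illustrative only).
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Dummy frame; in practice this would be a perspective (or unwarped) image.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

# Sliding-window detection over an image pyramid.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h), score in zip(boxes, np.ravel(weights)):
    print(f"detection at ({x},{y}) size {w}x{h}, score {score:.2f}")
```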

38 citations

Journal ArticleDOI
TL;DR: An efficient automated method for facial expression recognition based on the histogram of oriented gradient (HOG) descriptor achieves a recognition rate higher than those of almost all other single-image- or video-based methods for facial emotion recognition.
Abstract: This article proposes an efficient automated method for facial expression recognition based on the histogram of oriented gradient (HOG) descriptor. This subject-independent method was designed to recognize six prototypical emotions. It recognizes emotions by calculating differences at the level of feature descriptors between a neutral expression and a peak expression of the observed person. The parameters of the HOG descriptor were determined using a genetic algorithm. Support vector machines (SVM) were applied during the recognition phase, with one SVM classifier trained per emotion. Each classifier was trained using difference vectors obtained by subtracting the HOG feature vectors calculated for a subject's neutral and apex emotion images. The proposed method was tested using a leave-one-subject-out validation strategy on 1232 images of 106 subjects from the Cohn-Kanade database and on 192 images of 10 subjects from the JAFFE database. A mean recognition rate of 95.64% was obtained on the Cohn-Kanade database, which is higher than the recognition rates of almost all other single-image- or video-based methods for facial emotion recognition.
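The core recipe can be sketched as follows: describe an expression by the difference between the HOG vectors of the peak and neutral face images, then train one binary SVM per emotion. The image size, fixed HOG parameters, and dummy data below are assumptions; in the paper the HOG parameters are tuned with a genetic algorithm and real Cohn-Kanade / JAFFE images are used.

```python
# HOG difference vectors (peak minus neutral) classified by one SVM per emotion.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def hog_diff(neutral_face, peak_face):
    """Difference of HOG descriptors between peak and neutral expression."""
    params = dict(orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return hog(peak_face, **params) - hog(neutral_face, **params)

# Dummy training data: (neutral, peak) grayscale face pairs with emotion labels.
rng = np.random.default_rng(1)
pairs = [(rng.random((64, 64)), rng.random((64, 64))) for _ in range(30)]
labels = np.arange(30) % len(EMOTIONS)        # 5 examples per emotion

X = np.array([hog_diff(n, p) for n, p in pairs])
# One binary SVM per emotion (one-vs-rest), as in the described setup.
classifiers = {
    e: SVC(kernel="linear").fit(X, (labels == i).astype(int))
    for i, e in enumerate(EMOTIONS)
}
# Predict by taking the emotion whose classifier gives the largest margin.
scores = {e: clf.decision_function(X[:1])[0] for e, clf in classifiers.items()}
print(max(scores, key=scores.get))
```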

38 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Convolutional neural network: 74.7K papers, 2M citations, 87% related
Deep learning: 79.8K papers, 2.1M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    56
2022    181
2021    116
2020    189
2019    179
2018    240