Topic

Histogram of oriented gradients

About: Histogram of oriented gradients is a research topic. Over its lifetime, 2,037 publications have been published on this topic, receiving 55,881 citations. The topic is also known as: HOG.
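The descriptor that gives the topic its name divides an image into small cells, accumulates a histogram of gradient orientations within each cell, and normalizes groups of cells into blocks. The snippet below is a minimal sketch of computing a HOG descriptor with scikit-image; the sample image and parameter values (9 orientation bins, 8x8-pixel cells, 2x2-cell blocks) are common defaults chosen for illustration, not values taken from the papers listed here.

```python
# Minimal HOG sketch with scikit-image; image and parameters are illustrative.
from skimage import data, color
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # any grayscale image works

features, hog_image = hog(
    image,
    orientations=9,                         # 9 orientation bins per cell
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
    visualize=True,
)
print(features.shape)                       # flattened HOG descriptor
```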


Papers
Journal ArticleDOI
TL;DR: Experimental results show that HSEOM outperforms state-of-the-art orientation-based methods (e.g., the Gabor filter, histogram of oriented gradients, and local directional code) and offers low feature dimensionality and fast implementation for a real-time finger vein recognition system.
Abstract: Finger vein images are rich in orientation and edge features. Inspired by the edge histogram descriptor proposed in MPEG-7, this paper presents an efficient orientation-based local descriptor, named histogram of salient edge orientation map (HSEOM). HSEOM is based on the fact that human vision is sensitive to edge features for image perception. For a given image, HSEOM first finds oriented edge maps according to predefined orientations using a well-known edge operator and obtains a salient edge orientation map by choosing the orientation with the maximum edge magnitude at each pixel. Then, subhistograms of the salient edge orientation map are generated from nonoverlapping submaps and concatenated to build the final HSEOM. In the experiments reported in this paper, eight oriented edge maps were used to generate the salient edge orientation map for HSEOM construction. Experimental results on the finger vein image database MMCBNU_6000 show that HSEOM outperforms state-of-the-art orientation-based methods (e.g., the Gabor filter, histogram of oriented gradients, and local directional code). Furthermore, the proposed HSEOM has the advantages of low feature dimensionality and fast implementation, making it suitable for a real-time finger vein recognition system.
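The abstract above specifies the HSEOM pipeline except for the choice of edge operator and the sub-map grid. The sketch below fills those gaps with assumptions: a Kirsch compass operator supplies the eight oriented edge maps, and a 4x4 grid of non-overlapping sub-maps is used; both are illustrative stand-ins rather than the paper's exact settings.

```python
# Hedged sketch of HSEOM: eight oriented edge maps, a per-pixel salient-orientation
# map (arg-max edge magnitude), and concatenated sub-histograms over non-overlapping
# sub-maps. Kirsch kernels and the 4x4 grid are assumptions for illustration.
import numpy as np
from scipy.ndimage import convolve

KIRSCH = [np.array(k, dtype=float) for k in (
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # N
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # NW
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # W
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # SW
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # S
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # SE
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # E
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # NE
)]

def hseom(gray, grid=(4, 4)):
    # 1) eight oriented edge maps from a compass edge operator
    responses = np.stack([np.abs(convolve(gray, k)) for k in KIRSCH])
    # 2) salient edge orientation map: orientation with maximum magnitude per pixel
    salient = responses.argmax(axis=0)
    # 3) sub-histograms over non-overlapping sub-maps, then concatenation
    h, w = salient.shape
    gy, gx = grid
    hist = []
    for by in range(gy):
        for bx in range(gx):
            block = salient[by * h // gy:(by + 1) * h // gy,
                            bx * w // gx:(bx + 1) * w // gx]
            hist.append(np.bincount(block.ravel(), minlength=8))
    return np.concatenate(hist).astype(float)

descriptor = hseom(np.random.rand(64, 128))   # placeholder for a finger vein image
print(descriptor.shape)                       # 4 * 4 sub-maps * 8 bins = (128,)
```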

18 citations

Book ChapterDOI
12 Dec 2016
TL;DR: A rebar localization algorithm that accurately locates the pixel positions of rebar within a GPR scan image, using image classification and statistical methods to find hyperbola signatures within the image.
Abstract: Automated rebar detection in images from ground-penetrating radar (GPR) is a challenging problem and difficult to perform in real-time as a result of relatively low contrast images and the size of the images. This paper presents a rebar localization algorithm, which can accurately locate the pixel locations of rebar within a GPR scan image. The proposed algorithm uses image classification and statistical methods to locate hyperbola signatures within the image. The proposed approach takes advantage of adaptive histogram equalization to increase the visual signature of rebar within the image despite low contrast. A Naive Bayes classifier is used to approximately locate rebar within the image with histogram of oriented gradients feature vectors. In addition, a histogram based method is applied to more precisely locate individual rebar in the image, and then the proposed methods are validated using existing GPR data and data collected during the course of the research for this paper.
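As a rough illustration of the coarse detection stage described above, the sketch below applies adaptive histogram equalization, extracts HOG features from fixed-size windows, and scores each window with a Naive Bayes classifier. The window size, HOG parameters, and randomly generated training patches are placeholders, and the paper's histogram-based refinement stage is omitted.

```python
# Hedged sketch of the coarse rebar-detection stage: CLAHE-style equalization,
# HOG features on fixed windows, and a Naive Bayes classifier. All parameters
# and the toy "training" patches are stand-ins for labelled GPR data.
import numpy as np
from skimage.exposure import equalize_adapthist
from skimage.feature import hog
from sklearn.naive_bayes import GaussianNB

WIN = (32, 32)   # assumed window size

def window_features(patch):
    patch = equalize_adapthist(patch)   # boost low-contrast GPR windows
    return hog(patch, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Placeholder training set: real use would crop labelled hyperbola / background
# windows from annotated GPR B-scans.
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.random(WIN)) for _ in range(40)])
y = np.array([0, 1] * 20)   # 1 = rebar hyperbola, 0 = background (dummy labels)

clf = GaussianNB().fit(X, y)

def score_scan(scan, step=16):
    """Slide a window over a GPR scan and return (row, col, P(rebar)) per window."""
    out = []
    for r in range(0, scan.shape[0] - WIN[0] + 1, step):
        for c in range(0, scan.shape[1] - WIN[1] + 1, step):
            f = window_features(scan[r:r + WIN[0], c:c + WIN[1]])
            out.append((r, c, clf.predict_proba([f])[0, 1]))
    return out
```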

18 citations

Journal ArticleDOI
27 May 2022-Sensors
TL;DR: Four multi-method diagnostic systems were developed, and all of the proposed systems achieved superior results in diagnosing endoscopic images for the early detection of lower gastrointestinal diseases.
Abstract: Every year, nearly two million people die as a result of gastrointestinal (GI) disorders, and lower gastrointestinal tract tumors are one of the leading causes of death worldwide. Early detection of the type of tumor is therefore of great importance for patient survival. Additionally, removing benign tumors in their early stages has more risks than benefits. Video endoscopy technology is essential for imaging the GI tract and identifying disorders such as bleeding, ulcers, polyps, and malignant tumors. An endoscopy video generates 5000 frames, which require extensive analysis and take a long time to review in full. Artificial intelligence techniques, which can diagnose and assist physicians in making accurate diagnostic decisions, address these challenges. In this study, the work was divided into four proposed systems, each combining more than one diagnostic method. The first proposed system utilizes artificial neural network (ANN) and feed-forward neural network (FFNN) algorithms based on hybrid features extracted by three algorithms: local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), and fuzzy color histogram (FCH). The second proposed system uses the pre-trained CNN models GoogLeNet and AlexNet, extracting deep feature maps and classifying them with high accuracy. The third proposed system uses hybrid techniques consisting of two blocks: the first block uses the CNN models (GoogLeNet and AlexNet) to extract feature maps; the second block uses the support vector machine (SVM) algorithm to classify the deep feature maps. The fourth proposed system uses ANN and FFNN based on hybrid features combining the CNN models (GoogLeNet and AlexNet) with the LBP, GLCM, and FCH algorithms. All of the proposed systems produced promising results in diagnosing endoscopic images for the early detection of lower gastrointestinal diseases; the FFNN classifier based on the hybrid features extracted by GoogLeNet, LBP, GLCM, and FCH achieved an accuracy of 99.3%, a precision of 99.2%, a sensitivity of 99%, a specificity of 100%, and an AUC of 99.87%.
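The first proposed system is the easiest to sketch with standard libraries. The snippet below extracts LBP, GLCM, and colour-histogram features and feeds them to a small feed-forward classifier; a plain HSV histogram stands in for the paper's fuzzy color histogram (FCH), scikit-learn's MLPClassifier stands in for the FFNN, and all parameter values and the toy data are illustrative assumptions.

```python
# Hedged sketch of the handcrafted-feature pipeline (LBP + GLCM + colour histogram)
# classified by a small feed-forward network. HSV histogram and MLPClassifier are
# stand-ins for FCH and the paper's FFNN; parameters are illustrative.
import numpy as np
from skimage.color import rgb2gray, rgb2hsv
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def hybrid_features(rgb):
    gray = (rgb2gray(rgb) * 255).astype(np.uint8)
    # LBP histogram (uniform patterns, radius 1, 8 neighbours)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # GLCM statistics at one distance and four angles
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.concatenate([graycoprops(glcm, p).ravel()
                                 for p in ("contrast", "homogeneity",
                                           "energy", "correlation")])
    # HSV hue histogram as a stand-in for the fuzzy colour histogram
    hsv_hist, _ = np.histogram(rgb2hsv(rgb)[..., 0], bins=16, range=(0, 1), density=True)
    return np.concatenate([lbp_hist, glcm_feats, hsv_hist])

# Toy data only; real use would load labelled endoscopy frames.
rng = np.random.default_rng(1)
X = np.stack([hybrid_features(rng.random((64, 64, 3))) for _ in range(20)])
y = np.array([0, 1] * 10)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
```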

18 citations

Journal ArticleDOI
01 Oct 2020
TL;DR: The core of this paper is template protection via a cancelable biometric scheme that does not significantly affect recognition performance; the bio-convolving approach is used to enhance the user’s privacy and ensure robustness against spoof attacks.
Abstract: Recent years have witnessed a dramatic shift in biometric identification, authentication, and security processes. Among the essential challenges facing these processes are online verification and authentication, which are difficult because of the complexity of such processes, the need for real-time personally identifiable information, and the methodology for capturing temporal information. In this paper, we present an integrated biometric recognition method that jointly recognizes face, iris, palm print, fingerprint, and ear biometrics. The proposed method integrates the extracted deep-learned features with hand-crafted ones using a fusion network. We also propose a novel convolutional neural network (CNN)-based model for deep feature extraction. In addition, several techniques are exploited to extract the hand-crafted features, such as the histogram of oriented gradients (HOG), oriented FAST and rotated BRIEF (ORB), local binary patterns (LBPs), the scale-invariant feature transform (SIFT), and speeded-up robust features (SURF). Furthermore, for dimensional consistency between the combined features, the dimensions of the hand-crafted features are reduced using independent component analysis (ICA) or principal component analysis (PCA). The core of this paper is template protection via a cancelable biometric scheme that does not significantly affect recognition performance. Specifically, we use the bio-convolving approach to enhance the user’s privacy and ensure robustness against spoof attacks. Additionally, various CNN hyper-parameters and their impact on the performance of the proposed model are studied. Our experiments on various datasets reveal that the proposed method achieves recognition accuracies of 96.69%, 95.59%, 97.34%, 96.11%, and 99.22% for face, iris, fingerprint, palm print, and ear recognition, respectively.
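To make the fusion and template-protection ideas concrete, the sketch below PCA-reduces hand-crafted HOG features, concatenates them with a placeholder deep embedding, and convolves the fused vector with a user-specific random kernel as a simplified stand-in for the paper's bio-convolving scheme. The feature dimensions, the random "deep" features, and the key length are assumptions for illustration only.

```python
# Hedged sketch of feature fusion plus a cancelable template. The "deep" embedding
# is random, and the convolution with a user key is a simplified stand-in for the
# paper's bio-convolving scheme; all dimensions are illustrative.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
images = rng.random((10, 64, 64))              # placeholder biometric images

hog_feats = np.stack([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                      for im in images])
hog_reduced = PCA(n_components=8).fit_transform(hog_feats)   # dimensional consistency
deep_feats = rng.standard_normal((10, 8))       # stand-in for CNN embeddings

fused = np.concatenate([hog_reduced, deep_feats], axis=1)

def cancelable_template(vector, user_key):
    """Convolve the fused feature vector with a user-specific key so the stored
    template can be revoked by issuing a new key (simplified bio-convolving)."""
    return np.convolve(vector, user_key, mode="same")

protected = cancelable_template(fused[0], rng.standard_normal(16))
print(protected.shape)
```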

18 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed framework for object detection and recognition in cluttered images, given a single hand-drawn example as a model, can significantly improve the accuracy of object detection.

18 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Convolutional neural network: 74.7K papers, 2M citations, 87% related
Deep learning: 79.8K papers, 2.1M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    56
2022    181
2021    116
2020    189
2019    179
2018    240