
Answers from top 6 papers

Papers (6)
Open accessProceedings ArticleDOI
27 Jun 2016
27.2K Citations
Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background.
This paper presents a robust and efficient automatic license plate recognition (ALPR) system based on the state-of-the-art YOLO object detector.
Open accessProceedings ArticleDOI
01 Aug 2018
44 Citations
In our experiments, YOLO outperforms the other state-of-the-art detector, Faster R-CNN, and our multi-projection YOLO achieves the best accuracy with low-resolution input.
Proceedings ArticleDOI
01 Oct 2017
62 Citations
A new object detection method, OYOLO (Optimized YOLO), is proposed; it is 1.18 times faster than YOLO while outperforming region-based approaches such as R-CNN in accuracy.
Open accessJournal ArticleDOI
Wei Fang, Wang Lin, Ren Peiming 
01 Jan 2020-IEEE Access
152 Citations
The object detection performance is enhanced in Tinier-YOLO by using a passthrough layer that merges feature maps from the front layers to obtain fine-grained features, which counters the negative effect of reducing the model size (a passthrough-layer sketch follows this list).
Proceedings ArticleDOI
Zhi Xu, Haochen Shi, Ning Li, Chao Xiang, Huiyu Zhou 
01 Nov 2018
42 Citations
Because one-stage detection models such as YOLO have a novel structure and great industrial application potential, this paper proposes a new detection model based on the YOLOv2 structure.
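
The passthrough idea mentioned for Tinier-YOLO is essentially a space-to-depth reorganisation: an early, high-resolution feature map is reshaped so it can be concatenated with a deeper, coarser map before the detection head. A minimal PyTorch sketch of such a layer; the tensor sizes are illustrative and not taken from the paper:

```python
import torch
import torch.nn as nn

class Passthrough(nn.Module):
    """Space-to-depth reorg: stacks stride x stride spatial neighbours into the
    channel dimension so a fine-grained early feature map matches the spatial
    size of a deeper map and can be concatenated with it."""
    def __init__(self, stride: int = 2):
        super().__init__()
        self.stride = stride

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        s = self.stride
        x = x.view(n, c, h // s, s, w // s, s)        # split H and W into s x s blocks
        x = x.permute(0, 3, 5, 1, 2, 4).contiguous()  # move the block dims next to channels
        return x.view(n, c * s * s, h // s, w // s)   # fold blocks into the channel dim

# Example: a 26x26x512 early map becomes 13x13x2048 and is concatenated with
# a 13x13x1024 deep map along the channel axis before the detection head.
early = torch.randn(1, 512, 26, 26)
deep = torch.randn(1, 1024, 13, 13)
fused = torch.cat([Passthrough(2)(early), deep], dim=1)
print(fused.shape)  # torch.Size([1, 3072, 13, 13])
```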

Related Questions

What is the current advancement in victim detection and YOLO? (5 answers)
The current advancements in object detection, particularly in victim detection, have seen significant progress through YOLO-based algorithms. Various studies have focused on enhancing YOLO models for improved victim detection performance. For instance, the RSI-YOLO algorithm introduces channel and spatial attention mechanisms to strengthen feature fusion. The YOLO-SWINF model incorporates a 3D attention module to capture temporal information, enhancing detection results while maintaining real-time processing. An auxiliary-information-enhanced YOLO model improves sensitivity and detection performance for small objects, outperforming the original YOLOv5 on challenging datasets. These advancements showcase the continuous evolution of YOLO-based algorithms in improving victim detection across applications including remote sensing, medicine, and real-time monitoring.
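
As one illustration of the channel and spatial attention mentioned for RSI-YOLO, a CBAM-style block (an assumption here, not the paper's exact module) can be sketched in PyTorch as follows:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style block: channel attention followed by spatial attention,
    applied to a feature map before it is fused with other scales."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight each channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over pooled per-pixel channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = x.mean(dim=(2, 3))
        mx = x.amax(dim=(2, 3))
        ca = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        x = x * ca.view(n, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        sa = torch.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return x * sa

feat = torch.randn(1, 256, 40, 40)
print(ChannelSpatialAttention(256)(feat).shape)  # torch.Size([1, 256, 40, 40])
```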
How good is YOLO for object detection? (5 answers)
YOLO (You Only Look Once) is a highly effective object detection algorithm that has shown significant advancements in the field of computer vision. Various versions of YOLO, such as YOLOv2, YOLOv5, and YOLO-Drone, have been proposed with improvements tailored to different applications. YOLO models have demonstrated superior detection accuracy, precision, recall, and Intersection over Union (IoU) metrics when compared to state-of-the-art detectors. Additionally, YOLO algorithms have been shown to outperform classical object detection methods in terms of detection performance, especially in scenarios involving small objects, remote sensing images, and UAV applications. Despite its strengths, YOLO algorithms may face challenges in handling noisy environments, which can impact their performance.
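
Intersection over Union, one of the metrics cited above, can be computed for a pair of axis-aligned boxes with a few lines of Python (a generic sketch, not tied to any particular YOLO implementation):

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes in (x1, y1, x2, y2) format."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 40x40 boxes overlapping in a 20x20 region: 400 / 2800 ~= 0.14
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))
```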
How can YOLO be used for joint detection and tracking? (5 answers)
YOLO can be used for joint detection and tracking by combining the YOLO object detection algorithm with other tracking algorithms. For example, Fan et al. propose a new algorithm that combines YOLO with the CSR-DCF tracking algorithm. They also add a self-attention mechanism to address the problem of losing targets during tracking in the DCF algorithm. This algorithm demonstrates excellent performance compared to traditional CSR-DCF and DCF target tracking algorithms. Another paper by Zhao et al. modifies YOLO for joint source detection and azimuth estimation in a multi-interfering underwater acoustic environment. They use a modified version of YOLO called M-YOLO, which processes the whole frequency-beam domain sample using a single-regression neural network and directly outputs the target-existence probability and spectrum azimuth. These papers demonstrate how YOLO can be adapted and combined with other algorithms for joint detection and tracking tasks.
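
A minimal sketch of this detect-then-track pattern, assuming the ultralytics package for detection and OpenCV's CSRT implementation of CSR-DCF for tracking (the checkpoint and video names are placeholders, and the papers' own pipelines differ):

```python
import cv2                      # requires the opencv-contrib-python build for TrackerCSRT
from ultralytics import YOLO    # any detector returning (x1, y1, x2, y2) boxes would do

model = YOLO("yolov8n.pt")               # generic pretrained model, a stand-in detector
cap = cv2.VideoCapture("input.mp4")      # hypothetical input video

# Detect once on the first frame and hand the box to the tracker.
ok, frame = cap.read()
boxes = model(frame)[0].boxes.xyxy.cpu().numpy()   # assumes at least one detection
x1, y1, x2, y2 = boxes[0]
tracker = cv2.TrackerCSRT_create()                 # OpenCV's CSR-DCF ("CSRT") tracker
tracker.init(frame, (int(x1), int(y1), int(x2 - x1), int(y2 - y1)))  # (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, (x, y, w, h) = tracker.update(frame)
    if ok:
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
    # Re-running the detector every N frames and re-initialising the tracker is a
    # common way to recover from drift; that is the role YOLO plays in such hybrids.
```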
How can YOLO output help with re-identification (reID) in object tracking? (5 answers)
The YOLO algorithm can support re-identification (reID) in object tracking by providing the detections needed to track and connect target objects over time. YOLO performs classification and bounding-box regression in one step, making it faster than most convolutional neural networks. Using YOLO's detection output, the Deep SORT algorithm can be applied for tracking and reID of objects. Deep SORT with Low Confidence Track Filtering (LCF) can filter out detections with low average confidence, reducing false-positive tracks. This combination of YOLO and Deep SORT with LCF has been shown to improve object tracking performance, especially in scenarios with occlusions and ID switching. YOLO's output is therefore crucial for the reID process in object tracking, enabling accurate and efficient tracking of objects over consecutive frames.
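
A simplified sketch of the filtering step, assuming YOLO-style post-NMS detections laid out as (x1, y1, x2, y2, confidence, class_id); note that the LCF approach described above filters on average track confidence, whereas this illustration filters individual detections before they reach the tracker:

```python
import numpy as np

def filter_low_confidence(detections: np.ndarray, conf_threshold: float = 0.4) -> np.ndarray:
    """Drop detections below a confidence threshold before they are handed to a tracker.

    detections: array of shape (N, 6) with rows (x1, y1, x2, y2, confidence, class_id),
    a common layout for YOLO output after NMS (an assumption for this sketch).
    """
    return detections[detections[:, 4] >= conf_threshold]

dets = np.array([
    [100, 120, 180, 260, 0.91, 0],   # confident detection -> kept
    [300, 310, 340, 360, 0.22, 0],   # low-confidence detection -> dropped
])
kept = filter_low_confidence(dets)
print(kept.shape)  # (1, 6)
# 'kept' would then be passed to a tracker such as Deep SORT, which associates
# detections across frames and assigns persistent IDs for re-identification.
```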
How does the YOLO algorithm work for detecting objects in satellite imagery? (5 answers)
The YOLO algorithm for object detection in satellite imagery works by analyzing the entire image using a convolutional neural network and predicting bounding boxes and class probabilities. It is known for its speed and performance. However, the original YOLO algorithm's performance is inadequate for detecting tiny objects in satellite videos due to low signal-to-noise ratio and smaller object sizes. To address this, an improved framework called HB-YOLO has been proposed. It replaces the universal convolution with an improved HorNet for higher-order spatial interactions, uses the BoTNet attention mechanism for fully fused features, adjusts anchors, integrates image segmentation, and incorporates the BoT-SORT algorithm for object tracking. Experimental results show improved recall, precision, F1-score, mean average precision, and object-tracking performance.
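
For reference, the standard YOLOv2/v3-style decoding of a single grid-cell prediction into an image-space box looks like the sketch below (generic, not the HB-YOLO variant; the anchor size and raw offsets are made up for illustration):

```python
import numpy as np

def decode_cell(tx, ty, tw, th, cx, cy, pw, ph, grid_size=13, img_size=416):
    """Decode one YOLOv2/v3-style prediction for a single anchor in a single cell.

    (tx, ty, tw, th) are raw network outputs; (cx, cy) is the cell's column/row;
    (pw, ph) is the anchor's width/height in pixels on the input image.
    """
    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    stride = img_size / grid_size            # pixels covered by one grid cell
    bx = (sigmoid(tx) + cx) * stride         # box centre x: cell index plus sigmoid offset
    by = (sigmoid(ty) + cy) * stride         # box centre y
    bw = pw * np.exp(tw)                     # anchor width scaled by exp of raw output
    bh = ph * np.exp(th)                     # anchor height scaled likewise
    return bx - bw / 2, by - bh / 2, bx + bw / 2, by + bh / 2   # (x1, y1, x2, y2)

# A cell at column 6, row 7 with a 116x90 px anchor and small raw offsets
# yields a box centred near the middle of a 416x416 input image.
print(decode_cell(0.1, -0.2, 0.05, 0.1, cx=6, cy=7, pw=116, ph=90))
```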
How does YOLO work? (2 answers)
YOLO (You Only Look Once) is an object detection model that has been customized and improved for various applications. YOLOv5, YOLO-F, and YOLOv3 are some of the modified versions of YOLO. YOLOv5 uses a Vision-Language distillation method to align image and text embeddings, achieving state-of-the-art accuracy in zero-shot object detection. YOLO-F simplifies the structure of the CSPBlock and replaces the neck with FPNs-SE to enhance feature extraction for flame detection, resulting in improved accuracy and real-time performance. YOLOv3 has been customized for face detection, achieving real-time, accurate detection of small faces in challenging environments. These customized versions of YOLO demonstrate the versatility and effectiveness of the YOLO framework in different domains.
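
The research variants above are not packaged libraries, but the single-pass workflow they all share can be illustrated with the off-the-shelf ultralytics API (the checkpoint and image names are placeholders):

```python
from ultralytics import YOLO  # assumes the ultralytics package; the research variants above are not part of it

# One forward pass over the whole image yields boxes, confidences, and class IDs.
model = YOLO("yolov8n.pt")            # any pretrained YOLO checkpoint
results = model("street_scene.jpg")   # hypothetical input image
for r in results:
    for box, conf, cls in zip(r.boxes.xyxy, r.boxes.conf, r.boxes.cls):
        print(f"class {int(cls)} at {box.tolist()} with confidence {float(conf):.2f}")
```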