Topic
Bounding overwatch
About: Bounding overwatch is a research topic. Over its lifetime, 966 publications have appeared within this topic, receiving 15,156 citations.
Papers
TL;DR: In this article, a novel multimoving object tracking method considering slow features and motion features is proposed, named SF and motion feature-guided multiobject tracking (SFMFMOT), which realizes continuous tracking of moving vehicles in satellite videos.
Abstract: With the development of video satellites, multimoving object tracking in satellite video has become possible and is a new, challenging task. The difficulties are mainly caused by the characteristics of satellite videos: 1) small objects; 2) low contrast between objects and background; and 3) a background in a state of continuous motion. These characteristics prevent the advanced multiobject tracking algorithms designed for natural video from playing to their strengths, resulting in vast false alarms, missed objects, ID switches, and low-confidence bounding boxes. To tackle these problems, a novel multimoving object tracking method considering slow features (SFs) and motion features is proposed in this research, named SF and motion feature-guided multiobject tracking (SFMFMOT), which realizes continuous tracking of moving vehicles in satellite videos. A nonmaximum suppression (NMS) module guided by bounding box proposals based on SFs is designed to assist the object detection stage by exploiting the sensitivity of SF analysis to changed pixels. While removing a large number of static false alarms and recovering missed objects, it improves the recall rate by increasing the confidence scores of correctly detected object bounding boxes. To improve tracking performance, a set of optimization strategies based on motion features and time-accumulation information is proposed to smooth trajectories, remove static false alarms, and eliminate duplicate bounding boxes. The proposed method is evaluated on three satellite videos and its superiority is demonstrated.
3 citations
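The SF-guided NMS module described above builds on standard greedy non-maximum suppression. As a reference point, here is a minimal sketch of plain IoU-based NMS in Python; this is not the paper's SF-guided variant, and the box format `[x1, y1, x2, y2]` and threshold value are assumptions:

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box against an array of boxes; format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    areas_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + areas_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = np.argsort(scores)[::-1]          # indices by descending score
    keep = []
    while order.size > 0:
        i = int(order[0])
        keep.append(i)
        if order.size == 1:
            break
        overlaps = iou(boxes[i], boxes[order[1:]])
        order = order[1:][overlaps < iou_thresh]
    return keep
```

The paper's contribution sits on top of a step like this: SF-based proposals re-rank and re-score the candidate boxes before suppression, rather than relying on detector confidence alone.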
01 Jan 2008
TL;DR: This paper investigates the robustness to high bit error rates of two important secure noise-resilient distance bounding protocols: the RFID protocol of Hancke and Kuhn (SECURECOMM ’05) and the noise-resilient MAD protocol of Singelee and Preneel (ESAS ’07).
Abstract: Distance bounding protocols can be employed in mutual entity authentication schemes to determine an upper bound on the distance to another entity. As these protocols are conducted over noisy wireless ad hoc channels, they should be designed to cope well with substantial bit error rates during the rapid single-bit exchanges. This paper investigates the robustness to high bit error rates of two important secure noise-resilient distance bounding protocols: the RFID protocol of Hancke and Kuhn (SECURECOMM ’05) and the noise-resilient MAD protocol of Singelee and Preneel (ESAS ’07). To satisfy the specified design criteria, the bit error rate should not exceed a particular threshold value. The results of our paper help compare both noise-resilient distance bounding protocols in scenarios where they are employed in extremely noisy environments, and assist in choosing appropriate design parameters, such as the minimal required number of fast bit exchanges.
3 citations
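The rapid single-bit exchange analysed above can be illustrated with a toy simulation. The sketch below models a Hancke-Kuhn-style exchange with the shared PRF output replaced by two shared random registers, and simulates only a legitimate prover over a noisy channel, so repeated runs estimate the false-reject rate at a given bit error rate, not any security property; all names and parameters are illustrative:

```python
import random

def hk_run(n=32, ber=0.0, max_errors=0):
    """One simplified run of a Hancke-Kuhn-style rapid bit exchange.

    Both parties share two n-bit registers r0/r1 (standing in for the
    PRF output). Per round, the verifier sends a random challenge bit
    and the prover answers with the i-th bit of the matching register;
    the channel flips each response with probability `ber`.
    """
    r0 = [random.randint(0, 1) for _ in range(n)]
    r1 = [random.randint(0, 1) for _ in range(n)]
    errors = 0
    for i in range(n):
        c = random.randint(0, 1)            # verifier's challenge bit
        resp = (r1 if c else r0)[i]         # prover's correct response
        if random.random() < ber:           # noisy channel flips the bit
            resp ^= 1
        if resp != (r1 if c else r0)[i]:    # verifier's per-round check
            errors += 1
    return errors <= max_errors             # accept iff errors within budget
```

With zero error tolerance the legitimate prover is rejected with probability 1 - (1 - p)^n for bit error rate p, which is why a tolerance threshold (and hence the BER threshold studied in the paper) is needed in noisy environments.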
TL;DR: In this paper, the authors derive the original Gini coefficient via the Lorenz curve to optimize the effectiveness-equity trade-off in a humanitarian location-allocation problem.
Abstract: Modeling equity in the allocation of scarce resources is a fast-growing concern in the humanitarian logistics field. The Gini coefficient is one of the most widely recognized measures of inequity and it was originally characterized by means of the Lorenz curve, which is a mathematical function that links the cumulative share of income to rank-ordered groups in a population. So far, humanitarian logistics models that have approached equity using the Gini coefficient do not actually optimize its original formulation, but use alternative definitions that do not necessarily replicate that original Gini measure. In this paper, we derive the original Gini coefficient via the Lorenz curve to optimize the effectiveness-equity trade-off in a humanitarian location-allocation problem. We also propose new valid inequalities based on an upper-bounding Lorenz curve to tighten the linear relaxation of our model and develop a clustering-based construction of the Lorenz curve that requires fewer additional constraints and variables than the original one. The computational study, based on the floods and landslides in Rio de Janeiro state, Brazil, reveals that while alternative Gini definitions have interesting properties, they can generate vastly different decisions compared to the original Gini coefficient. In addition, viewed from the perspective of the original Gini coefficient, these decisions can be significantly less equitable.
3 citations
TL;DR: This study proposes a new method of annotation by rectangles for IoT-based applications, called robust semi-automatic annotation, which combines speed and robustness, and develops an algorithm called RANGE-MBR that determines, from the selected points on the contour of the object, a rectangle enclosing these points in linear time.
Abstract: Object datasets used in the construction of object detectors for IoT-based applications are typically annotated with horizontal or oriented bounding rectangles. The optimality of an annotation is obtained by fulfilling two conditions: (i) the rectangle covers the whole object and (ii) the area of the rectangle is minimal. Building a large-scale object dataset requires annotators with equal manual dexterity to carry out this tedious work. When an object is horizontal, it is easy for the annotator to reach the optimal bounding box within a reasonable time. However, if the object is oriented, the annotator needs additional time to decide whether the object will be annotated with a horizontal rectangle or an oriented rectangle. Moreover, in both cases, the final decision is not based on any objective argument, and the annotation is generally not optimal. In this study, we propose a new method of annotation by rectangles, called robust semi-automatic annotation, which combines speed and robustness. Our method has two phases. The first phase invites the annotator to click on the most relevant points located on the contour of the object. The outputs of the first phase are used by an algorithm to determine a rectangle enclosing these points. To carry out the second phase, we develop an algorithm called RANGE-MBR, which determines, from the selected points on the contour of the object, a rectangle enclosing these points in linear time. The rectangle returned by RANGE-MBR always satisfies optimality condition (i). We prove that optimality condition (ii) is always satisfied for objects with isotropic shapes. For objects with anisotropic shapes, we study optimality condition (ii) by simulations.
We show that the rectangle returned by RANGE-MBR is quasi-optimal for condition (ii) and that its performance increases with dilated objects, which is the case for most objects appearing in images collected by aerial photography.
3 citations
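The details of RANGE-MBR are not reproduced above, but the simplest linear-time enclosing rectangle, the axis-aligned case, shows why condition (i) is easy to guarantee in a single pass. The sketch below is an illustration only, not the paper's algorithm (which also covers oriented rectangles):

```python
def axis_aligned_mbr(points):
    """Axis-aligned enclosing rectangle of a point set in one O(n) pass.

    Tracks the coordinate extremes, so condition (i) (full coverage of
    the clicked contour points) holds by construction. An oriented
    minimum-area rectangle would additionally require the convex hull
    (e.g. rotating calipers), which is what makes the oriented case
    harder than this one.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))  # (x1, y1, x2, y2)

print(axis_aligned_mbr([(1, 2), (3, 0), (2, 5)]))  # -> (1, 0, 3, 5)
```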
TL;DR: Zhang et al. proposed a refined feature-attentive network (RFN) to address the inaccurate localization of low-contrast text areas by existing methods.
Abstract: Detecting the marking characters of industrial metal parts remains challenging due to low visual contrast, uneven illumination, corroded character structures, and the cluttered background of metal part images. Affected by these factors, bounding boxes generated by most existing methods locate low-contrast text areas inaccurately. In this paper, we propose a refined feature-attentive network (RFN) to solve this inaccurate localization problem. Specifically, we design a parallel feature integration mechanism to construct an adaptive feature representation from multi-resolution features, which enhances the perception of multi-scale texts at each scale-specific level to generate a high-quality attention map. Then, an attentive refinement network driven by the attention map is developed to rectify the location deviation of candidate boxes. In addition, a re-scoring mechanism is designed to select the text boxes with the best rectified locations. Moreover, we construct two industrial scene text datasets comprising a total of 102,156 images and 1,948,809 text instances with various character structures and metal parts. Extensive experiments on our dataset and four public datasets demonstrate that our proposed method achieves state-of-the-art performance.
3 citations