Upsampling

About: Upsampling is the process of increasing the sampling rate or spatial resolution of a signal, image, or dataset. Over its lifetime, 2426 publications have been published within this topic, receiving 57613 citations.


Papers
Patent
28 Sep 2016
TL;DR: In this article, an unbalanced data classification method based on adaptive upsampling is proposed: the total number of positive samples to be newly generated is computed, a probability density distribution is estimated for each positive sample using the Euclidean distance as the metric, the number of new samples per positive sample is determined, and the newly generated positive points are added to the original unbalanced training set until the positive and negative classes are equal in size, yielding a balanced training set of n positive and n negative samples.
Abstract: The invention relates to an unbalanced data classification method based on adaptive upsampling. The method includes the following steps: calculating the total number of positive samples to be newly generated; calculating a probability density distribution for each positive sample, taking the Euclidean distance as the metric; determining the number of new samples to be generated for each positive sample; generating the new positive samples and adding them to the original unbalanced training set so that the positive and negative classes are equal in number, i.e., obtaining a new balanced training set containing n positive samples and n negative samples; and training on the newly balanced training set with the AdaBoost algorithm, obtaining the final classification model after T iterations. The invention improves classification performance on unbalanced datasets.

12 citations
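
As a minimal sketch of the adaptive oversampling step described above, assuming NumPy and scikit-learn: the per-sample density here is an ADASYN-style fraction of majority-class neighbors, which is one plausible reading of the Euclidean-distance-based density the abstract mentions, not the patent's exact formula.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.neighbors import NearestNeighbors

    def adaptive_upsample(X_pos, X_neg, k=5, seed=0):
        rng = np.random.default_rng(seed)
        # Step 1: total number of positives to synthesize so the classes match.
        G = len(X_neg) - len(X_pos)
        # Step 2: density per positive sample via Euclidean k-NN; the weight is
        # the fraction of negative neighbors (an ADASYN-style assumption).
        X_all = np.vstack([X_pos, X_neg])
        _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X_all).kneighbors(X_pos)
        frac_neg = (idx[:, 1:] >= len(X_pos)).mean(axis=1)  # skip self at column 0
        density = frac_neg / max(frac_neg.sum(), 1e-12)
        # Step 3: how many new samples each positive point should spawn.
        counts = np.rint(density * G).astype(int)
        # Step 4: synthesize by interpolating toward a random positive neighbor.
        kp = min(k, len(X_pos) - 1)
        _, pidx = NearestNeighbors(n_neighbors=kp + 1).fit(X_pos).kneighbors(X_pos)
        new = [X_pos[i] + rng.random() * (X_pos[rng.choice(pidx[i, 1:])] - X_pos[i])
               for i, c in enumerate(counts) for _ in range(c)]
        return np.vstack([X_pos] + new) if new else X_pos

    # Balanced set, then AdaBoost for T rounds as in the abstract:
    # X_pos_bal = adaptive_upsample(X_pos, X_neg)
    # X = np.vstack([X_pos_bal, X_neg])
    # y = np.r_[np.ones(len(X_pos_bal)), np.zeros(len(X_neg))]
    # model = AdaBoostClassifier(n_estimators=50).fit(X, y)  # T = 50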

Proceedings ArticleDOI
11 Jun 2017
TL;DR: A combined segmentation and upsampling technique that preserves the important semantic structure of the scene by means of a multilateral filter guided into regions of distinct objects in the segmented point cloud.
Abstract: We present a novel technique for fast and accurate reconstruction of depth images from 3D point clouds acquired in urban and rural driving environments. Our approach relies entirely on the sparse distance and reflectance measurements generated by a LiDAR sensor. The main contribution of this paper is a combined segmentation and upsampling technique that preserves the important semantic structure of the scene. Data from the point cloud is segmented and projected onto a virtual camera image, where a series of image processing steps is applied to reconstruct a fully sampled depth image. We achieve this by means of a multilateral filter that is guided into regions of distinct objects in the segmented point cloud. The gains of the proposed approach are thus two-fold: measurement noise in the original data is suppressed, and missing depth values are reconstructed to arbitrary resolution. Objective evaluation in an automotive application shows state-of-the-art accuracy of our reconstructed depth images. Finally, we demonstrate the qualitative value of our images by training and evaluating an RGBD pedestrian detection system. By reinforcing the RGB pixels with our reconstructed depth values in the learning stage, a significant increase in detection rates can be realized while model complexity remains comparable to the baseline.

12 citations
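
The guidance idea at the core of this approach can be sketched in a few lines. The following is a simplified, hypothetical stand-in for the paper's multilateral filter, assuming the sparse depth has already been projected onto the virtual camera image and a per-pixel segment label map is available: empty pixels are filled from spatially nearby measurements belonging to the same segment, which is what keeps object boundaries sharp.

    import numpy as np

    def guided_depth_upsample(sparse_depth, labels, radius=4, sigma_s=2.0):
        # sparse_depth: HxW array, 0 where no LiDAR return; labels: HxW segment ids.
        # Each empty pixel is filled by a Gaussian-weighted average of nearby
        # valid depths, gated to the pixel's own segment (the "guidance").
        H, W = sparse_depth.shape
        out = sparse_depth.copy()
        for y, x in zip(*np.nonzero(sparse_depth == 0)):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            patch = sparse_depth[y0:y1, x0:x1]
            valid = (patch > 0) & (labels[y0:y1, x0:x1] == labels[y, x])
            if not valid.any():
                continue  # no same-object measurement nearby; leave empty
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2)) * valid
            out[y, x] = (w * patch).sum() / w.sum()
        return out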

Journal ArticleDOI
TL;DR: This article proposes EFDNet, an efficient foreground detection algorithm based on deep spatial features extracted from an RGB input image with a VGG-16 convolutional neural network (CNN), augmented with concatenated residual (CR) blocks that recover feature information lost across successive convolution operations.
Abstract: Deep learning-based algorithms show promise in the computer vision domain, but their deployment in real-time systems is challenging due to their computational complexity, high-end hardware prerequisites, and the amount of annotated data required for training. This paper proposes an efficient foreground detection (EFDNet) algorithm based on deep spatial features extracted from an RGB input image using a VGG-16 convolutional neural network (CNN). The VGG-16 CNN is modified with concatenated residual (CR) blocks to learn better global contextual features and to recover feature information lost across successive convolution operations. A new upsampling network is designed using bilinear interpolation sandwiched between 3×3 convolutions to upsample and refine feature maps for pixel-wise prediction; this helps propagate loss errors through the upsampling network during backpropagation. Experiments show EFDNet outperforming top-ranked foreground detection algorithms. EFDNet trains faster on low-end hardware and produces promising results with as few as 50 training frames with binary ground truth.

12 citations
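
The abstract's "bilinear interpolation sandwiched between 3×3 convolutions" maps onto a small PyTorch module. This is a sketch under assumed channel widths and activation choices, not EFDNet's actual layer configuration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UpsampleBlock(nn.Module):
        # Bilinear upsampling sandwiched between 3x3 convolutions; the
        # surrounding convs carry learnable weights, so loss gradients
        # propagate through the upsampling path during backpropagation.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.pre = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            self.post = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

        def forward(self, x):
            x = F.relu(self.pre(x))
            # Bilinear interpolation doubles the spatial resolution.
            x = F.interpolate(x, scale_factor=2, mode="bilinear",
                              align_corners=False)
            return F.relu(self.post(x))

    # e.g. refining 512-channel VGG-16 features toward a pixel-wise mask:
    # up = UpsampleBlock(512, 256)
    # y = up(torch.randn(1, 512, 14, 14))  # -> shape (1, 256, 28, 28)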

Proceedings ArticleDOI
01 Sep 2013
TL;DR: A new dedicated dynamic super-resolution method that can accurately super-resolve a depth sequence containing one or more moving objects without strong constraints on their shape or motion, clearly outperforming existing super-resolution techniques, which perform poorly on depth data.
Abstract: We enhance the resolution of depth videos acquired with low-resolution time-of-flight cameras. To that end, we propose a new dedicated dynamic super-resolution method that can accurately super-resolve a depth sequence containing one or multiple moving objects without strong constraints on their shape or motion. It thereby clearly outperforms existing super-resolution techniques, which perform poorly on depth data and are either restricted to global motions or imprecise because of an implicit estimation of motion. The proposed approach is based on a new data model that leads to a robust registration of all depth frames after a dense upsampling. The textureless nature of depth images allows sequences with multiple moving objects to be handled robustly, as confirmed by our experiments.

12 citations
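
The pipeline of dense upsampling followed by registration and fusion can be illustrated with a deliberately simplified sketch, assuming NumPy and SciPy. Here registration is a single global translation found by phase correlation and fusion is a temporal median, whereas the paper registers densely and handles multiple locally moving objects:

    import numpy as np
    from scipy.ndimage import zoom, shift

    def dynamic_depth_sr(frames, factor=4):
        # Densely upsample every low-res depth frame, register each to the
        # reference (last) frame, and fuse with a robust temporal median.
        up = [zoom(f, factor, order=1) for f in frames]  # dense bilinear upsampling
        ref = up[-1]
        aligned = []
        for f in up:
            # Phase correlation for an integer-pixel global translation.
            F1, F2 = np.fft.fft2(ref), np.fft.fft2(f)
            cps = F1 * np.conj(F2)
            r = np.fft.ifft2(cps / (np.abs(cps) + 1e-9))
            dy, dx = np.unravel_index(np.argmax(np.abs(r)), r.shape)
            dy = dy - r.shape[0] if dy > r.shape[0] // 2 else dy
            dx = dx - r.shape[1] if dx > r.shape[1] // 2 else dx
            aligned.append(shift(f, (dy, dx), order=0))
        return np.median(np.stack(aligned), axis=0)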

Journal ArticleDOI
TL;DR: An efficient supervised foreground detection (SFDNet) algorithm based on atrous deep spatial features that outperforms high-ranked foreground detection algorithms on three standard databases.
Abstract: Camera-based surveillance systems largely perform an intrusion detection task for sensitive areas. The task may seem trivial but is quite challenging due to environmental changes and object behaviors such as night-time conditions, sunlight, IR cameras, camouflage, and static foreground objects. Convolutional neural network based algorithms have shown promise in dealing with these challenges, but they focus exclusively on accuracy. This article proposes an efficient supervised foreground detection (SFDNet) algorithm based on atrous deep spatial features. The features are extracted using atrous convolution kernels that enlarge the field of view of a kernel mask, thereby encoding rich contextual features without increasing the number of parameters. The network further benefits from a residual dense block strategy that mixes mid- and high-level features to retain foreground information lost in low-resolution high-level features. The extracted features are expanded using a novel pyramid upsampling network: feature maps are upsampled with bilinear interpolation and passed through a 3×3 convolution kernel, and the expanded maps are concatenated with the corresponding mid- and low-level feature maps from the atrous feature extractor for further refinement. SFDNet showed better performance than high-ranked foreground detection algorithms on three standard databases. A testing demo can be found at https://drive.google.com/file/d/1z_zEj9Yp7GZeM2gSIwYKvSzQlxMAiarw/view?usp=sharing .

12 citations
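
One stage of the pyramid upsampling described above can be sketched in PyTorch. The channel widths and the placement of the dilated (atrous) convolution are assumptions for illustration, not SFDNet's published configuration; note a dilated 3×3 kernel has the same parameter count as an ordinary one, which is the abstract's point about enlarging the field of view for free:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PyramidUpBlock(nn.Module):
        # Bilinear upsample -> 3x3 conv -> concatenation with a skip feature
        # map from the atrous extractor -> dilated 3x3 fusion conv.
        def __init__(self, in_ch, skip_ch, out_ch, dilation=2):
            super().__init__()
            self.refine = nn.Conv2d(in_ch, out_ch, 3, padding=1)
            self.fuse = nn.Conv2d(out_ch + skip_ch, out_ch, 3,
                                  padding=dilation, dilation=dilation)

        def forward(self, x, skip):
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear",
                              align_corners=False)
            x = F.relu(self.refine(x))
            return F.relu(self.fuse(torch.cat([x, skip], dim=1)))

    # e.g. a high-level 256-channel map fused with a 128-channel mid-level skip:
    # block = PyramidUpBlock(256, 128, 128)
    # y = block(torch.randn(1, 256, 30, 40), torch.randn(1, 128, 60, 80))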


Network Information
Related Topics (5)
Convolutional neural network
74.7K papers, 2M citations
90% related
Image segmentation
79.6K papers, 1.8M citations
90% related
Feature extraction
111.8K papers, 2.1M citations
89% related
Deep learning
79.8K papers, 2.1M citations
88% related
Feature (computer vision)
128.2K papers, 1.7M citations
87% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    469
2022    859
2021    330
2020    322
2019    298
2018    236