scispace - formally typeset
Topic

Upsampling

About: Upsampling is a research topic. Over the lifetime, 2426 publications have been published within this topic receiving 57613 citations.


Papers
Proceedings ArticleDOI
09 Jun 1994
TL;DR: The Frequency domain Replication and Downsampling (FReD) algorithm is discussed, which enables the acquisition of data at normal spotlight-mode rates and does not require the computation of FFTs any larger than those required for normal spotlight-mode processing.
Abstract: Migration processing exactly accounts for the wavefront curvature over the imaged scene. It is therefore capable of forming high-resolution SAR images when the data are acquired over a large synthetic aperture collection angle. Because migration processing requires phase compensation to a line corresponding to the nominal SAR flight path, the phase history is chirped over a very large bandwidth, requiring a very high sample rate to prevent aliasing in the frequency spectrum. The sample rate is determined by the size of the synthetic aperture collection angle. When migration processing is applied to a spotlight-mode SAR, this sampling rate can be much higher than that required for normal spotlight-mode processing, in which the phase history is motion compensated to scene center and the sample rate is determined by the spot size. Higher sampling rates result in large FFTs and may cause range ambiguity problems. ERIM has pursued the development of a variation on migration processing, which we call the Frequency domain Replication and Downsampling (FReD) algorithm; it enables the acquisition of data at normal spotlight-mode rates and does not require the computation of FFTs any larger than those required for normal spotlight-mode processing. The FReD algorithm is based on the fact that when a discrete, aliased spectrum is replicated a sufficient number of times, the resultant spectrum contains the desired signal spectrum; subsequent processing steps extract the desired portion of the spectrum to form an image. The algorithm we call FReD is also discussed in two articles by Prati et al. This paper reviews migration processing, discusses the FReD algorithm, and presents expressions for the number of operations required for its implementation.
Migration-processed, spotlight-mode SAR imagery derived from airborne collected data and demonstrating the utility of FReD is presented. © (1994) COPYRIGHT SPIE--The International Society for Optical Engineering.
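The replication property that FReD exploits can be seen in a toy NumPy sketch (this is only the underlying DFT identity, not the full FReD algorithm): inserting zeros between the samples of a discrete signal tiles its spectrum, so a replicated spectrum necessarily contains the original signal spectrum as one of its copies.

```python
import numpy as np

# Toy illustration of the replication idea behind FReD (not the full
# algorithm): zero-insertion upsampling of a discrete signal replicates
# its DFT. A replicated, aliased spectrum therefore contains the desired
# signal spectrum, which later steps can extract without a larger FFT.
rng = np.random.default_rng(0)
N, R = 64, 4                      # original length, replication factor
x = rng.standard_normal(N)

# Zero-insertion upsampling: y has length N*R, nonzero every R-th sample
y = np.zeros(N * R)
y[::R] = x

# The DFT of y equals the DFT of x tiled R times
assert np.allclose(np.fft.fft(y), np.tile(np.fft.fft(x), R))
```

The identity holds exactly: Y[k] = X[k mod N], so each of the R spectral replicas is a verbatim copy of the original spectrum.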

13 citations

Proceedings ArticleDOI
TL;DR: A partial differential equation (PDE) based approach is proposed to perform the interpolation and to upsample the 3D point cloud onto a uniform grid.
Abstract: Airborne laser scanning light detection and ranging (LiDAR) systems are used for remote sensing of topology and bathymetry. The most common data collection technique used in LiDAR systems employs linear mode scanning. The resulting scanning data form a non-uniformly sampled 3D point cloud. To interpret and further process the 3D point cloud data, these raw data are usually converted to digital elevation models (DEMs). In order to obtain DEMs in a uniform and upsampled raster format, the elevation information from the available non-uniform 3D point cloud data is mapped onto uniform grid points. After the mapping is done, the grid points with missing elevation information are filled by using interpolation techniques. In this paper, a partial differential equation (PDE) based approach is proposed to perform the interpolation and to upsample the 3D point cloud onto a uniform grid. Due to the desirable effects of using higher-order PDEs, smoothness is maintained over homogeneous regions, while sharp edge information in the scene is well preserved. The proposed algorithm reduces the draping effects near the edges of distinctive objects in the scene; such draping effects are commonly associated with existing point cloud rendering algorithms. Simulation results are presented to illustrate the advantages of the proposed algorithm.
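A minimal sketch of PDE-based grid filling, using a second-order (Laplace) iteration as a stand-in for the higher-order PDEs the paper advocates: known elevations are held fixed while missing cells relax toward the average of their neighbors. The function name and grid setup are illustrative, not from the paper.

```python
import numpy as np

def pde_fill(grid, known_mask, iters=3000):
    """Fill missing DEM cells by Jacobi iteration toward a solution of
    Laplace's equation, holding known elevations fixed. This is a
    second-order sketch; the paper uses higher-order PDEs for better
    edge preservation."""
    z = np.where(known_mask, grid, grid[known_mask].mean())
    for _ in range(iters):
        # Average of the four neighbors (wrap-around only touches cells
        # that are reset from known_mask below)
        avg = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                      np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z = np.where(known_mask, grid, avg)
    return z

# Example: recover a tilted plane (harmonic, so exactly representable)
h = w = 16
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
truth = xx.astype(float)
mask = np.zeros((h, w), dtype=bool)
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True  # boundary known
filled = pde_fill(np.where(mask, truth, 0.0), mask)
```

Because a linear ramp is harmonic, the iteration recovers it essentially exactly from boundary samples alone; real point clouds would supply scattered known cells instead of just the boundary.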

13 citations

Journal ArticleDOI
TL;DR: Inspired by unsharp masking, a classical edge-enhancement technique that requires only a single coefficient, a new and simplified formulation of the guided filter is proposed, which enjoys a filtering prior from a low-pass filter.
Abstract: The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering by means of an additional guidance image. Where classical guided filters transfer structures using hand-designed functions, recent guided filters have been considerably advanced through parametric learning of deep networks. The state-of-the-art leverages deep networks to estimate the two core coefficients of the guided filter. In this work, we posit that simultaneously estimating both coefficients is suboptimal, resulting in halo artifacts and structure inconsistencies. Inspired by unsharp masking, a classical technique for edge enhancement that requires only a single coefficient, we propose a new and simplified formulation of the guided filter. Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient. Based on our proposed formulation, we introduce a successive guided filtering network, which provides multiple filtering results from a single network, allowing for a trade-off between accuracy and efficiency. Extensive ablations, comparisons and analysis show the effectiveness and efficiency of our formulation and network, resulting in state-of-the-art results across filtering tasks like upsampling, denoising, and cross-modality filtering. Code is available at \url{this https URL}.
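The classical single-coefficient structure the paper builds on can be sketched in one dimension (this is plain unsharp masking, not the paper's learned guided filter; the function name and parameters are illustrative): a low-pass prior plus a single coefficient controlling how much high-frequency detail is added back.

```python
import numpy as np

def unsharp_mask(signal, radius=2, amount=1.0):
    """Classical unsharp masking: out = low + amount * (signal - low).
    A low-pass filter supplies the prior; the single coefficient
    `amount` controls the explicit structure transfer."""
    k = 2 * radius + 1
    low = np.convolve(signal, np.ones(k) / k, mode="same")  # box blur
    return low + amount * (signal - low)
```

With amount=1 the signal passes through unchanged; with amount>1 a step edge overshoots, which is the halo behavior the paper's single-coefficient formulation is designed to control rather than suffer.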

13 citations

Patent
05 Oct 2007
TL;DR: In this paper, a video encoding method, in which a video signal consisting of two or more signal elements is targeted for encoding, includes a step of setting a downsampling ratio for a specific signal element in a frame in accordance with the characteristics of the frame.

Abstract: A video encoding method, in which a video signal consisting of two or more signal elements is targeted for encoding, includes a step of setting a downsampling ratio for a specific signal element in a frame, in accordance with the characteristics of the frame; and a step of generating a target video signal to be encoded, by subjecting the specific signal element in the frame to downsampling in accordance with the set downsampling ratio. The frame may be divided into partial areas in accordance with localized characteristics in the frame, and a downsampling ratio for a specific signal element in these partial areas may be set in accordance with the characteristics of each partial area.
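The best-known instance of downsampling one signal element while leaving another at full resolution is chroma subsampling; a minimal sketch, assuming a (H, W, 3) Y'UV frame and a fixed ratio (the patent additionally adapts the ratio to frame or per-area characteristics):

```python
import numpy as np

def downsample_chroma(frame_yuv, ratio=2):
    """Keep the luma element (Y) at full resolution and downsample the
    chroma elements (U, V) by `ratio` in each dimension, 4:2:0-style.
    Shapes and names are illustrative, not from the patent."""
    y = frame_yuv[:, :, 0]            # full resolution
    u = frame_yuv[::ratio, ::ratio, 1]  # decimated
    v = frame_yuv[::ratio, ::ratio, 2]  # decimated
    return y, u, v
```

An adaptive version would pick `ratio` per frame or per partial area, e.g. using less chroma resolution in flat regions.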

13 citations

Journal ArticleDOI
TL;DR: A model is proposed, which applies an image super-resolution method to an algorithm that classifies emotions from facial expressions, and it is shown that the results obtained improve the reliability and validity of the emotional analysis.
Abstract: Image upsampling and noise removal are important tasks in digital image processing. Single-image upsampling and denoising influence the quality of the resulting images. Image upsampling is known as super-resolution (SR) and referred to as the restoration of a higher-resolution image from a given low-resolution image. In facial expression analysis, the resolution of the original image directly affects the reliability and validity of the emotional analysis. Hence, optimization of the resolution of the extracted original image during emotion analysis is important. In this study, a model is proposed, which applies an image super-resolution method to an algorithm that classifies emotions from facial expressions.
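The simplest baseline for the super-resolution step described above is interpolation-based upsampling; a minimal bilinear sketch (the paper's actual SR method is learned, and this function is illustrative only):

```python
import numpy as np

def bilinear_upsample(img, scale=2):
    """Upsample a 2-D grayscale image by `scale` using bilinear
    interpolation with edge clamping -- a classical stand-in for the
    learned super-resolution the paper applies before emotion analysis."""
    h, w = img.shape
    # Centers of the output pixels mapped back into input coordinates
    ys = (np.arange(h * scale) + 0.5) / scale - 0.5
    xs = (np.arange(w * scale) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None]
    wx = np.clip(xs - x0, 0, 1)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

A constant image stays constant under this interpolation, and the output is scale-times larger in each dimension; a learned SR model would replace this function while keeping the same interface in the pipeline.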

13 citations


Network Information
Related Topics (5)
Convolutional neural network
74.7K papers, 2M citations
90% related
Image segmentation
79.6K papers, 1.8M citations
90% related
Feature extraction
111.8K papers, 2.1M citations
89% related
Deep learning
79.8K papers, 2.1M citations
88% related
Feature (computer vision)
128.2K papers, 1.7M citations
87% related
Performance
Metrics
No. of papers in the topic in previous years
Year   Papers
2023   469
2022   859
2021   330
2020   322
2019   298
2018   236