
Showing papers on "Upsampling published in 2013"


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This work formulates a convex optimization problem using higher-order regularization for depth image upsampling, and derives a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second.
Abstract: In this work we present a novel method for the challenging problem of depth image upsampling. Modern depth cameras such as Kinect or Time-of-Flight cameras deliver dense, high-quality depth measurements but are limited in their lateral resolution. To overcome this limitation we formulate a convex optimization problem using higher-order regularization for depth image upsampling. In this optimization, an anisotropic diffusion tensor, calculated from a high-resolution intensity image, is used to guide the upsampling. We derive a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second. We show that this novel upsampling clearly outperforms state-of-the-art approaches in terms of speed and accuracy on the widely used Middlebury 2007 datasets. Furthermore, we introduce novel datasets with highly accurate ground truth which, for the first time, enable benchmarking of depth upsampling methods using real sensor data.

538 citations
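As a rough illustration of guidance-weighted diffusion for depth upsampling (not the paper's anisotropic-tensor primal-dual solver; the function name and the scalar edge-stopping weight are hypothetical stand-ins), a minimal NumPy sketch might look like:

```python
import numpy as np

def guided_diffusion_upsample(depth_lr, guide_hr, iters=200, beta=10.0):
    """Upsample a low-res depth map by diffusing a nearest-neighbor
    initialization on the high-res grid, with conductivities derived from
    the high-res guide image (a scalar simplification of an anisotropic
    diffusion tensor)."""
    scale = guide_hr.shape[0] // depth_lr.shape[0]
    # Nearest-neighbor initialization on the high-resolution grid.
    d = np.repeat(np.repeat(depth_lr, scale, 0), scale, 1).astype(float)
    # Edge-stopping weights: small conductivity across guide-image edges.
    gy, gx = np.gradient(guide_hr.astype(float))
    w = np.exp(-beta * np.hypot(gx, gy))
    for _ in range(iters):
        # Weighted average of 4-neighbors; strong guide edges block diffusion.
        num = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
               np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d = (1 - w) * d + w * num / 4.0
    return d
```

With a flat guide the weights are all one and the update reduces to plain smoothing; near guide edges the weights approach zero and depth discontinuities are preserved.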


Proceedings ArticleDOI
23 Jun 2013
TL;DR: A novel approximation algorithm is developed whose complexity grows linearly with the image size, achieving real-time performance; the method is also well suited to upsampling depth images using binary edge maps, an important sensor fusion application.
Abstract: We propose an algorithm utilizing geodesic distances to upsample a low-resolution depth image using a registered high-resolution color image. Specifically, it computes depth for each pixel in the high-resolution image using geodesic paths to the pixels whose depths are known from the low-resolution one. Though this is closely related to the all-pairs shortest-path problem, which has O(n² log n) complexity, we develop a novel approximation algorithm whose complexity grows linearly with the image size and achieves real-time performance. We compare our algorithm with the state of the art on the benchmark dataset and show that our approach provides more accurate depth upsampling with fewer artifacts. In addition, we show that the proposed algorithm is well suited for upsampling depth images using binary edge maps, an important sensor fusion application.

249 citations
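The geodesic idea above can be sketched with a plain Dijkstra search: each high-res pixel takes the depth of its geodesically nearest low-res sample, where the path cost mixes spatial distance and color difference. This is only an illustrative stand-in (the paper's contribution is a linear-time approximation; names and the cost weighting are assumptions):

```python
import heapq
import numpy as np

def geodesic_upsample(depth_lr, color_hr, scale, lam=1.0):
    """Assign each high-res pixel the depth of the seed pixel reached by the
    cheapest geodesic path, using exact Dijkstra on a 4-connected grid."""
    H, W = color_hr.shape
    dist = np.full((H, W), np.inf)
    depth = np.zeros((H, W))
    heap = []
    # Seed the known low-res depth samples onto the high-res grid.
    for i in range(depth_lr.shape[0]):
        for j in range(depth_lr.shape[1]):
            y, x = i * scale, j * scale
            dist[y, x] = 0.0
            depth[y, x] = depth_lr[i, j]
            heapq.heappush(heap, (0.0, y, x))
    while heap:
        d0, y, x = heapq.heappop(heap)
        if d0 > dist[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                # Step cost grows when crossing color edges in the guide.
                step = 1.0 + lam * abs(float(color_hr[ny, nx]) -
                                       float(color_hr[y, x]))
                nd = d0 + step
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    depth[ny, nx] = depth[y, x]
                    heapq.heappush(heap, (nd, ny, nx))
    return depth
```

Because color edges inflate the path cost, depth labels do not leak across object boundaries in the guide image.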


Journal ArticleDOI
TL;DR: This paper presents, for the first time, a unified blind method for multi-image super-resolution (MISR or SR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) of low-resolution images degraded by linear space-invariant blur, aliasing, and additive white Gaussian noise.
Abstract: This paper presents, for the first time, a unified blind method for multi-image super-resolution (MISR or SR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) of low-resolution (LR) images degraded by linear space-invariant (LSI) blur, aliasing, and additive white Gaussian noise (AWGN). The proposed approach is based on alternating minimization (AM) of a new cost function with respect to the unknown high-resolution (HR) image and blurs. The regularization term for the HR image is based upon the Huber-Markov random field (HMRF) model, which is a type of variational integral that exploits the piecewise smooth nature of the HR image. The blur estimation process is supported by an edge-emphasizing smoothing operation, which improves the quality of blur estimates by enhancing strong soft edges toward step edges, while filtering out weak structures. The parameters are updated gradually so that the number of salient edges used for blur estimation increases at each iteration. For better performance, the blur estimation is done in the filter domain rather than the pixel domain, i.e., using the gradients of the LR and HR images. The regularization term for the blur is Gaussian (L2 norm), which allows for fast noniterative optimization in the frequency domain. We accelerate the processing time of SR reconstruction by separating the upsampling and registration processes from the optimization procedure. Simulation results on both synthetic and real-life images (from a novel computational imager) confirm the robustness and effectiveness of the proposed method.

94 citations


Journal ArticleDOI
TL;DR: A new upsampling method that synergistically combines the median and bilateral filters, and thus better preserves depth edges and is more robust to noise.
Abstract: We present a new upsampling method to enhance the spatial resolution of depth images. Given a low-resolution depth image from an active depth sensor and a potentially high-resolution color image from a passive RGB camera, we formulate upsampling as an adaptive cost aggregation problem and solve it using the bilateral filter. The formulation synergistically combines the median and bilateral filters, and thus better preserves depth edges and is more robust to noise. Numerical and visual evaluations on a total of 37 Middlebury data sets demonstrate the effectiveness of our method. A real-time high-resolution depth capturing system is also developed using a commercial active depth sensor based on the proposed upsampling method.

87 citations
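The bilateral half of the combination above can be sketched as standard joint bilateral upsampling: each high-res depth value is a weighted average of nearby low-res samples, weighted by spatial distance and by color similarity in the high-res guide. A basic sketch assuming a grayscale guide and hypothetical parameter choices (the paper additionally blends in a median-style aggregation, omitted here):

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale,
                             sigma_s=2.0, sigma_r=20.0, radius=3):
    """Joint bilateral upsampling of a low-res depth map guided by a
    high-res grayscale image."""
    H, W = color_hr.shape
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale  # position on the low-res grid
            acc = norm = 0.0
            for i in range(max(0, int(cy) - radius), min(h, int(cy) + radius + 1)):
                for j in range(max(0, int(cx) - radius), min(w, int(cx) + radius + 1)):
                    # Spatial weight on the low-res grid.
                    ws = np.exp(-((i - cy) ** 2 + (j - cx) ** 2) / (2 * sigma_s ** 2))
                    # Range weight from the high-res guide image.
                    dc = float(color_hr[y, x]) - \
                        float(color_hr[min(i * scale, H - 1), min(j * scale, W - 1)])
                    wr = np.exp(-dc ** 2 / (2 * sigma_r ** 2))
                    acc += ws * wr * depth_lr[i, j]
                    norm += ws * wr
            out[y, x] = acc / norm
    return out
```

The range weight suppresses contributions from depth samples whose guide color differs from the target pixel, which is what preserves edges.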


Reference BookDOI
01 Jan 2013
TL;DR: This reference book surveys image denoising, restoration, and super-resolution, including transform domain-based learning of an initial HR estimate and techniques for reducing the computational cost of NLM-based methods.
Abstract (table of contents):
- Image Denoising: Past, Present, and Future (X. Li): Historical Review of Image Denoising; First Episode: Local Wiener Filtering; Second Episode: Understanding Transient Events; Third Generation: Understanding Nonlocal Similarity; Conclusions and Perspectives
- Fundamentals of Image Restoration (B.K. Gunturk): Linear Shift-Invariant Degradation Model; Image Restoration Methods; Blind Image Restoration; Other Methods of Image Restoration; Super Resolution Image Restoration; Regularization Parameter Estimation; Beyond the Linear Shift-Invariant Imaging Model
- Restoration in the Presence of Unknown Spatially Varying Blur (M. Sorel and F. Sroubek): Blur Models; Space-Variant Super Resolution
- Image Denoising and Restoration Based on Nonlocal Means (P. van Beek, Y. Su, and J. Yang): Image Denoising Based on the Nonlocal Means; Image Deblurring Using Nonlocal Means Regularization; Recent Nonlocal and Sparse Modeling Methods; Reducing Computational Cost of NLM-Based Methods
- Sparsity-Regularized Image Restoration: Locality and Convexity Revisited (W. Dong and X. Li): Historical Review of Sparse Representations; From Local to Nonlocal Sparse Representations; From Convex to Nonconvex Optimization Algorithms; Reproducible Experimental Results; Conclusions and Connections
- Resolution Enhancement Using Prior Information (H.M. Shieh, C.L. Byrne, and M.A. Fiddy): Fourier Transform Estimation and Minimum L2-Norm Solution; Minimum Weighted L2-Norm Solution; Solution Sparsity and Data Sampling; Minimum L1-Norm and Minimum Weighted L1-Norm Solutions; Modification with Nonuniform Weights; Summary and Conclusions
- Transform Domain-Based Learning for Super Resolution Restoration (P.P. Gajjar, M.V. Joshi, and K.P. Upla): Introduction to Super Resolution; Related Work; Description of the Proposed Approach; Transform Domain-Based Learning of the Initial HR Estimate; Experimental Results; Conclusions and Future Research Work
- Super Resolution for Multispectral Image Classification (F. Li, X. Jia, D. Fraser, and A. Lambert): Methodology; Experimental Results
- Color Image Restoration Using Vector Filtering Operators (R. Lukac): Color Imaging Basics; Color Space Conversions; Color Image Filtering; Color Image Quality Evaluation
- Document Image Restoration and Analysis as Separation of Mixtures of Patterns: From Linear to Nonlinear Models (A. Tonazzini, I. Gerace, and F. Martinelli): Linear Instantaneous Data Model; Linear Convolutional Data Model; Nonlinear Convolutional Data Model for the Recto-Verso Case; Conclusions and Future Prospects
- Correction of Spatially Varying Image and Video Motion Blur Using a Hybrid Camera (Y.-W. Tai and M.S. Brown): Related Work; Hybrid Camera System; Optimization Framework; Deblurring of Moving Objects; Temporal Upsampling; Results and Comparisons

72 citations


Book ChapterDOI
13 Dec 2013
TL;DR: The proposed algorithm is quantitatively evaluated on the Middlebury stereo dataset and is applied to inpaint Kinect data and upsample LiDAR range data, showing that the algorithm is competitive.
Abstract: In this paper, we propose to conduct inpainting and upsampling for defective depth maps when aligned color images are given. These tasks are referred to as the guided depth enhancement problem. We formulate the problem based on the heat diffusion framework. The pixels with known depth values are treated as the heat sources and the depth enhancement is performed via diffusing the depth from these sources to unknown regions. The diffusion conductivity is designed in terms of the guidance color image so that a linear anisotropic diffusion problem is formed. We further cast the steady state of this diffusion into the well-known random walk model, by which the enhancement is achieved efficiently by solving a sparse linear system. The proposed algorithm is quantitatively evaluated on the Middlebury stereo dataset and is applied to inpaint Kinect data and upsample LiDAR range data. Comparisons to the commonly used bilateral filter and Markov Random Field based methods are also presented, showing that our algorithm is competitive.

65 citations
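The steady state of such a diffusion reduces to one sparse linear solve: known pixels act as Dirichlet sources, and each unknown depth equals the conductivity-weighted average of its neighbors. A scalar-conductivity sketch with SciPy (names are illustrative; the paper's conductivity design and random-walk derivation are richer):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def diffuse_depth(depth, known, guide, beta=10.0):
    """Guided depth enhancement as a steady-state diffusion: solve the
    sparse linear system whose conductivities decay across guide edges."""
    H, W = depth.shape
    idx = np.arange(H * W).reshape(H, W)
    rows, cols, vals = [], [], []
    b = np.zeros(H * W)
    for y in range(H):
        for x in range(W):
            p = idx[y, x]
            if known[y, x]:
                # Dirichlet source: pin the known depth value.
                rows.append(p); cols.append(p); vals.append(1.0)
                b[p] = depth[y, x]
                continue
            wsum = 0.0
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    # Conductivity shrinks across guide-image edges.
                    w = np.exp(-beta * abs(float(guide[ny, nx]) -
                                           float(guide[y, x])))
                    rows.append(p); cols.append(idx[ny, nx]); vals.append(-w)
                    wsum += w
            rows.append(p); cols.append(p); vals.append(wsum)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(H * W, H * W))
    return spla.spsolve(A, b).reshape(H, W)
```

With a flat guide this is plain harmonic interpolation between the known pixels; guide edges make the interpolation anisotropic.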


Patent
24 Apr 2013
TL;DR: Multiple phased-array coils are used: data in the central region of k-space is collected at high density while the peripheral k-space data is randomly downsampled according to a Gaussian distribution; each coil's k-space data is Fourier-transformed into image space and linearly fitted using the coil sensitivity information to form a reconstructed spin-density image, and the frequency-domain signal after parallel-MRI (pMRI) downsampling is used to obtain a reference image.
Abstract: The invention provides an image processing method based on sparse-sampling magnetic resonance imaging. Multiple phased-array coils are used: data in the central region of k-space is collected at high density, while k-space data on the periphery of the central region is randomly downsampled according to a Gaussian distribution. The k-space data collected by each coil is Fourier-transformed into image space, and the image-space signals are linearly fitted according to each coil's sensitivity information to form a reconstructed spin-density image p. The frequency-domain signal after parallel-MRI (pMRI) downsampling is used to obtain a reference image. Information based on the local noise variance is then used to weight and compute a regularization term, achieving accurate image reconstruction.

34 citations


Proceedings ArticleDOI
24 Oct 2013
TL;DR: This work uses circulant structures to present a new framework for multiresolution analysis and processing of graph signals, and designs two-channel, critically-sampled, perfect-reconstruction, orthogonal lattice-filter structures to process signals defined oncirculant graphs.
Abstract: We use circulant structures to present a new framework for multiresolution analysis and processing of graph signals. Among the essential features of circulant graphs is that they accommodate fundamental signal processing operations, such as linear shift-invariant filtering, downsampling, upsampling, and reconstruction-features that offer substantial advantage. We design two-channel, critically-sampled, perfect-reconstruction, orthogonal lattice-filter structures to process signals defined on circulant graphs. To extend our reach to noncirculant graphs, we present a method to decompose a connected, undirected graph into a combination of circulant graphs. To evaluate our proposed framework, we offer examples of synthetic and real-world graph signal data and their multiscale decompositions.

31 citations
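On a circulant graph, shift-invariant filtering is circular convolution, which the DFT diagonalizes, and downsampling keeps every other node. These base operations can be sketched in a few lines (illustrative only; the paper builds critically-sampled two-channel lattice filters on top of them):

```python
import numpy as np

def circulant_filter(signal, h):
    """Linear shift-invariant filtering on a circulant graph via the DFT,
    followed by 2x downsampling (keep every other node)."""
    N = len(signal)
    H = np.fft.fft(h, N)  # frequency response of the filter taps, zero-padded
    y = np.real(np.fft.ifft(np.fft.fft(signal) * H))  # circular convolution
    return y, y[::2]      # filtered signal and its downsampling
```

Diagonalizing in the DFT basis is exactly what makes circulant graphs convenient: filtering costs O(N log N) and perfect-reconstruction conditions become per-frequency constraints.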


Journal ArticleDOI
TL;DR: Experimental results show that the proposed techniques significantly reduce the bit rate while achieving a better quality of the synthesized view in terms of both objective and subjective measures.
Abstract: This paper proposes efficient techniques to compress a depth video by taking into account coding artifacts, spatial resolution, and dynamic range of the depth data. Due to abrupt signal changes on object boundaries, a depth video compressed by conventional video coding standards often introduces serious coding artifacts over object boundaries, which severely affect the quality of a synthesized view. We suppress the coding artifacts by proposing an efficient postprocessing method based on a weighted mode filtering and utilizing it as an in-loop filter. In addition, the proposed filter is also tailored to efficiently reconstruct the depth video from the reduced spatial resolution and the low dynamic range. The down/upsampling coding approaches for the spatial resolution and the dynamic range are used together with the proposed filter in order to further reduce the bit rate. We verify the proposed techniques by applying them to an efficient compression of multiview-plus-depth data, which has emerged as an efficient data representation for 3-D video. Experimental results show that the proposed techniques significantly reduce the bit rate while achieving a better quality of the synthesized view in terms of both objective and subjective measures.

30 citations


Patent
27 Sep 2013
TL;DR: The upsampling filter information and the encoded enhancement layer pictures may be sent in an output video bitstream by an encoder; the filter is selected by an algorithm that determines whether knowledge of a category related to the video sequence exists.
Abstract: Systems, methods, and instrumentalities are disclosed for adaptive upsampling for multi-layer video coding. A method of communicating video data may involve applying an upsampling filter to a video sequence to create encoded enhancement layer pictures. The upsampling filter may be applied at a sequence level of the video sequence to create the enhancement layer bitstream. The upsampling filter may be selected from a plurality of candidate upsampling filters, for example, by determining whether knowledge of a category related to the video sequence exists and selecting a candidate upsampling filter that is designed for the category related to the video sequence. Upsampling filter information may be encoded. The encoded upsampling information may comprise a plurality of coefficients of the upsampling filter. The encoded upsampling filter information and the encoded enhancement layer pictures may be sent in an output video bitstream. The method may be performed, for example, by an encoder.

30 citations


Proceedings ArticleDOI
01 Nov 2013
TL;DR: Wang et al. propose a joint trilateral filtering (JTF) algorithm for solving depth map super-resolution problems, which utilizes and preserves edge information from the associated high-resolution (HR) image by taking spatial and range information of local pixels into account.
Abstract: Depth map super-resolution is an emerging topic due to the increasing needs and applications using RGB-D sensors. Together with the color image, the corresponding range data provides additional information and makes visual analysis tasks more tractable. However, since the depth maps captured by such sensors are typically of limited resolution, it is preferable to enhance their resolution for improved recognition. In this paper, we present a novel joint trilateral filtering (JTF) algorithm for solving depth map super-resolution (SR) problems. Inspired by bilateral filtering, our JTF utilizes and preserves edge information from the associated high-resolution (HR) image by taking spatial and range information of local pixels into account. Our proposed method further integrates local gradient information of the depth map when synthesizing its HR output, which alleviates textural artifacts like edge discontinuities. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over prior depth map upsampling works.

Proceedings ArticleDOI
26 May 2013
TL;DR: This paper presents a learning-based depth map super-resolution framework that solves an MRF labeling optimization problem and exhibits the capability of preserving the edges of range data while suppressing texture-copying artifacts due to color discontinuities.
Abstract: The use of time-of-flight sensors enables the recording of full-frame depth maps at video frame rate, which benefits a variety of 3D image or video processing applications. However, such depth maps are typically corrupted by noise and of limited resolution. In this paper, we present a learning-based depth map super-resolution framework by solving an MRF labeling optimization problem. With the captured depth map and the associated high-resolution color image, our proposed method exhibits the capability of preserving the edges of range data while suppressing the artifacts of texture copying due to color discontinuities. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over prior depth map upsampling works.

Journal ArticleDOI
01 Nov 2013
TL;DR: A Boundary Element Method (BEM) for rendering diffusion curve images with smooth interpolation and gradient constraints, which generates a solved boundary element image representation that is compact and offers advantages in scenarios where solved image representations are transmitted to devices for rendering and where PDE solving at the device is undesirable due to time or processing constraints.
Abstract: There is currently significant interest in freeform, curve-based authoring of graphic images. In particular, "diffusion curves" facilitate graphic image creation by allowing an image designer to specify naturalistic images by drawing curves and setting colour values along either side of those curves. Recently, extensions to diffusion curves based on the biharmonic equation have been proposed which provide smooth interpolation through specified colour values and allow image designers to specify colour gradient constraints at curves. We present a Boundary Element Method (BEM) for rendering diffusion curve images with smooth interpolation and gradient constraints, which generates a solved boundary element image representation. The diffusion curve image can be evaluated from the solved representation using a novel and efficient line-by-line approach. We also describe "curve-aware" upsampling, in which a full resolution diffusion curve image can be upsampled from a lower resolution image using formula-evaluated corrections near curves. The BEM solved image representation is compact. It therefore offers advantages in scenarios where solved image representations are transmitted to devices for rendering and where PDE solving at the device is undesirable due to time or processing constraints.

Journal ArticleDOI
TL;DR: A systematic evaluation of eight standard interpolation techniques for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner.
Abstract: Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and the joint histogram were used for qualitative assessment of the method.
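The evaluation protocol (downsample, upsample back with several interpolators, compare to the original) can be sketched with SciPy's spline-based zoom. This is a miniature stand-in under assumed names; the paper's study covers eight interpolators, four cost functions, and full MRI volumes:

```python
import numpy as np
from scipy import ndimage

def eval_interp(hr, factor=2, orders=(0, 1, 3)):
    """Downsample an image, upsample back with several spline orders, and
    report MSE and PSNR against the original."""
    results = {}
    for order in orders:
        lr = ndimage.zoom(hr, 1 / factor, order=order)   # emulate low-res data
        rec = ndimage.zoom(lr, factor, order=order)      # upsample back
        rec = rec[:hr.shape[0], :hr.shape[1]]            # guard against off-by-one
        mse = float(np.mean((hr - rec) ** 2))
        psnr = 10 * np.log10(hr.max() ** 2 / mse) if mse > 0 else np.inf
        results[order] = (mse, psnr)
    return results
```

On smooth data, higher spline orders recover the original more faithfully; on noisy data the ranking can change, which is why a systematic evaluation is worthwhile.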

Patent
01 Feb 2013
TL;DR: In this article, a joint geodesic upsampling to a high-resolution image was proposed to obtain a higher-resolution depth image by interpolating the depths in the low resolution depth image using the geodeic distance map, which can be obtained by any type of depth sensor.
Abstract: A resolution of a low resolution depth image is increased by applying joint geodesic upsampling to a high resolution image to obtain a geodesic distance map. Depths in the low resolution depth image are interpolated using the geodesic distance map to obtain a high resolution depth image. The high resolution image can be a gray scale or color image, or a binary boundary map. The low resolution depth image can be acquired by any type of depth sensor.

Journal ArticleDOI
TL;DR: Novel non-linear methods to down- and upsample depth maps are presented, and a bitrate comparison of synthesized views, including texture and depth map bitstreams, is made against a conventional linear resampling algorithm.
Abstract: Depth-enhanced 3-D video coding includes coding of texture views and associated depth maps. It has been observed that coding of depth maps at reduced resolution provides better rate-distortion performance on synthesized views compared to utilization of full resolution (FR) depth maps in many coding scenarios based on the Advanced Video Coding (H.264/AVC) standard. Conventional techniques for down- and upsampling do not take typical characteristics of depth maps, such as distinct edges and smooth regions within depth objects, into account. Hence, more efficient down- and upsampling tools, capable of preserving edges better, are needed. In this letter, novel non-linear methods to down- and upsample depth maps are presented. A bitrate comparison of synthesized views, including texture and depth map bitstreams, is made against a conventional linear resampling algorithm. Objective results show an average bitrate reduction of 5.29% and 3.31% for the proposed down- and upsampling methods with ratio ½, respectively, compared to the anchor method. Moreover, a joint utilization of the proposed down- and upsampling brings up to 20% and on average 7.35% bitrate reduction.

Patent
05 Jun 2013
TL;DR: In this article, a video coder for coding video data includes a processor and a memory, and the processor selects a filter set from a multiple filter sets for upsampling reference layer video data.
Abstract: In one embodiment, a video coder for coding video data includes a processor and a memory. The processor selects a filter set from a multiple filter sets for upsampling reference layer video data based at least on a prediction operation mode for enhanced layer video data and upsamples the reference layer video data using the selected filter set. Some of the multiple filter sets have some different filter characteristics from one another, and the upsampled reference layer video data has the same spatial resolution as the enhanced layer video data. The processor further codes the enhanced layer video data based at least on the upsampled reference layer video data and the prediction operation mode. The memory stores the upsampled reference layer video data.

Journal ArticleDOI
TL;DR: In this article, a linear combination method was proposed to reduce the effective pixel size and maintain the detector field of view of shifted diffraction patterns, which can be applied to any diffraction imaging technique where the resolution is compromised by a large pixel size.
Abstract: Coherent X-ray diffraction imaging is a lensless imaging technique where an iterative phase-retrieval algorithm is applied to the speckle pattern, the far-field diffraction pattern produced by an isolated object. To ensure convergence to a unique solution, the diffraction pattern must be oversampled by a factor of two or more. Since the resolution in real space depends on the maximum wave vector where the intensity is detected, i.e. on the detector field of view, there is a practical limitation on oversampling in reciprocal space and resolution in real space that is ultimately determined by the number of pixels. This work shows that it is possible to reduce the effective pixel size and maintain the detector field of view by applying a linear combination method to shifted diffraction patterns. The feasibility of the method is demonstrated by reconstructing the images of test objects from diffraction patterns oversampled in each dimension by factors of 1.3 and 1.8 only. The described approach can be applied to any diffraction or imaging technique where the resolution is compromised by a large pixel size.

Journal ArticleDOI
TL;DR: A fine coregistration method for synthetic aperture radar (SAR) image processing, such as in InSAR interferogram generation, that displays superior accuracy for images with near homogeneous fractal behavior.
Abstract: This paper presents a fine coregistration method for synthetic aperture radar (SAR) image processing, such as in InSAR interferogram generation. Under the assumption that SAR images are properly modeled as fractional Brownian motion, relative subpixel offsets between two images can be derived from the statistics of their increments. The method does not require upsampling or cross-correlation, thus allowing for an accurate offset estimation with less computational load. Implemented as a local coregistration procedure, it also provides a nonrigid geometric alignment that nicely follows the topography of the area. Experimental results show that the method gives comparable results to the conventional method, in terms of the accuracy of the generated digital elevation models. In particular, it displays superior accuracy for images with near homogeneous fractal behavior.

Patent
11 Sep 2013
TL;DR: In this paper, the downsampled image is filtered in combination with the upsampling to form a predictor image, and the weights of a spatial weight matrix are based on a spatial scaling ratio.
Abstract: Apparatus and methods are provided to process a downsampled image. The downsampled image is encoded. The downsampled image is upsampled, and is filtered in combination with the upsampling to form a predictor image. Weights of a spatial weight matrix are based on a spatial scaling ratio.

Book ChapterDOI
01 Jan 2013
TL;DR: This work proposes a super-resolution framework using the graphics processing unit, which enables interactive frame rates and improves the root-mean-square error of the super-resolved surface with respect to ground truth data.
Abstract: In the field of image-guided surgery, Time-of-Flight (ToF) sensors are of interest due to their fast acquisition of 3-D surfaces. However, the poor signal-to-noise ratio and low spatial resolution of today's ToF sensors require preprocessing of the acquired range data. Super-resolution is a technique for image restoration and resolution enhancement by utilizing information from successive raw frames of an image sequence. We propose a super-resolution framework using the graphics processing unit. Our framework enables interactive frame rates, computing an upsampled image from 10 noisy frames of 200 × 200 px with an upsampling factor of 2 in 109 ms. The root-mean-square error of the super-resolved surface with respect to ground truth data is improved by more than 20 % relative to a single raw frame.

Proceedings ArticleDOI
TL;DR: A partial differential equation (PDE) based approach is proposed to perform the interpolation and to upsample the 3D point cloud onto a uniform grid.
Abstract: Airborne laser scanning light detection and ranging (LiDAR) systems are used for remote sensing of topology and bathymetry. The most common data collection technique used in LiDAR systems employs linear-mode scanning. The resulting scanning data form a non-uniformly sampled 3D point cloud. To interpret and further process the 3D point cloud data, these raw data are usually converted to digital elevation models (DEMs). In order to obtain DEMs in a uniform and upsampled raster format, the elevation information from the available non-uniform 3D point cloud data is mapped onto uniform grid points. After the mapping is done, grid points with missing elevation information are filled using interpolation techniques. In this paper, a partial differential equation (PDE) based approach is proposed to perform the interpolation and to upsample the 3D point cloud onto a uniform grid. Due to the desirable effects of using higher-order PDEs, smoothness is maintained over homogeneous regions, while sharp edge information in the scene is well preserved. The proposed algorithm reduces the draping effects near the edges of distinctive objects in the scene; such annoying draping effects are commonly associated with existing point cloud rendering algorithms. Simulation results are presented to illustrate the advantages of the proposed algorithm.
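PDE-based filling of the gridded DEM can be sketched with the simplest case, harmonic (second-order Laplace) interpolation via fixed-point Jacobi iterations; the paper advocates higher-order PDEs for better edge behavior, and all names here are illustrative:

```python
import numpy as np

def pde_fill(dem, known, iters=500):
    """Fill missing DEM grid cells by Laplace (harmonic) interpolation with
    Jacobi iterations; cells with known elevation stay fixed."""
    # Initialize unknown cells with the mean of the known elevations.
    d = np.where(known, dem, dem[known].mean())
    for _ in range(iters):
        # Jacobi update: each cell moves toward the average of its 4-neighbors.
        avg = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
               np.roll(d, 1, 1) + np.roll(d, -1, 1)) / 4.0
        d = np.where(known, dem, avg)
    return d
```

A higher-order PDE (e.g. biharmonic) would replace the 4-neighbor average with a wider stencil, trading some smoothing of sharp edges for smoother slopes across large gaps.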

Proceedings ArticleDOI
01 Sep 2013
TL;DR: A new dedicated dynamic super-resolution method is proposed that can accurately super-resolve a depth sequence containing one or multiple moving objects without strong constraints on their shape or motion, clearly outperforming existing super-resolution techniques, which perform poorly on depth data.
Abstract: We enhance the resolution of depth videos acquired with low-resolution time-of-flight cameras. To that end, we propose a new dedicated dynamic super-resolution method that can accurately super-resolve a depth sequence containing one or multiple moving objects without strong constraints on their shape or motion, thus clearly outperforming existing super-resolution techniques, which perform poorly on depth data and are either restricted to global motions or imprecise because of an implicit estimation of motion. The proposed approach is based on a new data model that leads to a robust registration of all depth frames after a dense upsampling. The textureless nature of depth images allows our method to robustly handle sequences with multiple moving objects, as confirmed by our experiments.

Proceedings ArticleDOI
22 Mar 2013
TL;DR: Optimized architecture and implementation aspects of a decimator and interpolator using the CIC filter are analyzed, and hardware results are compared with simulations.
Abstract: Cascaded Integrator-Comb (CIC) filters are extensively used in multirate signal processing as filters for both decimation and interpolation. This paper analyzes optimized architecture and implementation aspects of a decimator and interpolator using the CIC filter, comparing hardware results with simulations. The hardware is synthesized on an FPGA and verified against ModelSim and MATLAB simulation results. CIC filters function as efficient anti-aliasing filters before downsampling in decimation and as anti-imaging filters after upsampling in interpolation. This paper also discusses pipelining, throughput, and area reduction techniques, and presents a performance analysis with respect to the number of stages (N) and the rate change factor (R) of the filter.
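A CIC decimator is N integrator stages at the input rate, a downsampler by R, then N comb (differentiator) stages at the output rate; its DC gain is R^N. A minimal reference-model sketch (hypothetical function name, differential delay M = 1):

```python
import numpy as np

def cic_decimate(x, R, N):
    """Cascaded Integrator-Comb decimator: N integrators at the high rate,
    downsample by R, then N combs at the low rate."""
    y = np.asarray(x, dtype=np.int64)  # integer arithmetic, as in hardware
    for _ in range(N):                 # integrator cascade
        y = np.cumsum(y)
    y = y[::R]                         # rate reduction by R
    for _ in range(N):                 # comb cascade (first difference)
        y = np.diff(y, prepend=0)
    return y
```

Such a software model is handy for verifying an FPGA implementation: feed both the same stimulus and compare outputs, remembering that internal word length must accommodate the R^N gain.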

Proceedings ArticleDOI
15 Jul 2013
TL;DR: This work proposes a joint denoising and data fusion approach where the fused modalities come from a conventional high-resolution photo or video camera and a low-resolution Time-of-Flight (ToF) range sensor operating under restricted conditions of low emitted power and a low number of sensor elements.
Abstract: We propose a joint denoising and data fusion approach where the fused modalities come from a conventional high-resolution photo or video camera and a low-resolution Time-of-Flight (ToF) range sensor operating under restricted conditions of low emitted power and a low number of sensor elements. Our approach includes identifying the various noise sources and suggesting suitable remedies at particular stages of data sensing and fusion. More specifically, fixed-pattern noise and system noise are treated at a preliminary denoising stage working on range data only. In contrast to other 2D video/depth fusion approaches, which suggest working in planar coordinates, our approach includes an additional denoising refinement in the 3D world coordinate space (i.e. the point cloud). Furthermore, the high-resolution grid resampling is performed as an iterative non-uniform-to-uniform resampling based on the Richardson method. This improves performance compared to approaches based on low-to-high grid upsampling and subsequent refinement. We report experimental results where the achieved quality of the fused data is the same as if the ToF sensor were operating in its normal (low-noise) sensing mode.
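The non-uniform-to-uniform resampling step can be sketched in 1D as a Richardson-type fixed-point iteration: repeatedly sample the current uniform-grid estimate at the scattered positions and spread the residual back onto the grid. This is a toy sketch using linear interpolation as the sampling operator and a density normalization; the paper's actual operators, grids and normalization are not specified in the abstract:

```python
import numpy as np

def sample_at(grid, pos):
    """Forward operator A: linearly interpolate the uniform grid at scattered positions."""
    return np.interp(pos, np.arange(len(grid)), grid)

def spread_to(vals, pos, n):
    """Adjoint-like operator: distribute scattered values onto the uniform grid."""
    out = np.zeros(n)
    i0 = np.clip(np.floor(pos).astype(int), 0, n - 2)
    w = pos - i0
    np.add.at(out, i0, (1.0 - w) * vals)
    np.add.at(out, i0 + 1, w * vals)
    return out

def richardson_resample(pos, vals, n, iters=500):
    """Iteratively fit a uniform grid to non-uniformly sampled data."""
    density = spread_to(np.ones_like(vals), pos, n)  # samples influencing each node
    density[density == 0] = 1.0                      # avoid division by zero
    g = np.zeros(n)
    for _ in range(iters):
        r = vals - sample_at(g, pos)                 # residual at the sample positions
        g += spread_to(r, pos, n) / density          # normalized Richardson update
    return g
```

On data consistent with the interpolation model the iteration converges to the exact grid values wherever the samples cover every grid cell.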

Proceedings ArticleDOI
TL;DR: This paper benchmarks state-of-the-art filter-based depth upsampling methods on depth accuracy and interpolation quality by conducting a parameter space search to find the optimum set of parameters for various upscale factors and noise levels.
Abstract: High quality 3D content generation requires high quality depth maps. In practice, depth maps generated by stereo matching, depth sensing cameras, or decoders have a low resolution and suffer from unreliable estimates and noise; depth post-processing is therefore necessary. In this paper we benchmark state-of-the-art filter-based depth upsampling methods on depth accuracy and interpolation quality by conducting a parameter space search to find the optimum set of parameters for various upscale factors and noise levels. Additionally, we analyze each method's computational complexity in big-O notation and measure the runtime of the GPU implementation that we built for each method.
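Most of the benchmarked filters are variants of joint bilateral upsampling, where each high-resolution depth value is a weighted average of low-resolution depth samples, combining a spatial Gaussian in low-resolution coordinates with a range Gaussian on the high-resolution guide image. A minimal sketch (the parameter values are illustrative, not the benchmark's optimum):

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=1.0, sigma_r=10.0, radius=2):
    """Upsample depth_lr to the guide image's resolution.
    Spatial weights live in low-res coordinates; range weights compare
    high-res guide intensities, so depth edges snap to guide edges."""
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale  # position in low-res coordinates
            acc = norm = 0.0
            for qy in range(max(0, int(cy) - radius), min(h, int(cy) + radius + 1)):
                for qx in range(max(0, int(cx) - radius), min(w, int(cx) + radius + 1)):
                    ws = np.exp(-((cy - qy) ** 2 + (cx - qx) ** 2) / (2 * sigma_s ** 2))
                    dI = guide_hr[y, x] - guide_hr[qy * scale, qx * scale]
                    wr = np.exp(-dI ** 2 / (2 * sigma_r ** 2))
                    acc += ws * wr * depth_lr[qy, qx]
                    norm += ws * wr
            out[y, x] = acc / norm if norm > 0 else depth_lr[min(int(cy), h - 1), min(int(cx), w - 1)]
    return out
```

The parameter space search in the paper corresponds to sweeping sigma_s, sigma_r and the window radius per upscale factor and noise level; the nested loops here also make the per-pixel big-O cost of such filters explicit.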

29 Oct 2013
TL;DR: Guidelines are given to globally handle the estimation errors of the instantaneous angular speed of a rotating machine, and a calibration procedure to correct the geometrical error is proposed.
Abstract: The instantaneous angular speed (IAS) of a rotating machine is crucial information for understanding the machine's operation and diagnosing potential faults. There are several difficulties related to the assessment of the IAS, which is generally estimated using an angle encoder. First of all, the IAS is never truly instantaneous: it is always averaged between two increments of the encoder, leading to a spectral aliasing error. The second potential source of estimation error is linked to the manner in which the time delays between angle increments are measured. A first possibility is to use a high-frequency counting approach: a high-frequency pulse signal, typically several tens of MHz, is used as a reference, and an electronic device counts the number of pulses of this clock between two events of the angle encoder signal. A second possibility is to use a standard analog-to-digital converter with an anti-aliasing filter to acquire the angle encoder signal at a lower sampling rate (typically several tens of kHz), and to determine the event times of the encoder by numerical processing such as upsampling or interpolation. For both approaches, the counting error is related to the uncertainty of the estimated time between two pulses of the angle encoder. A third potential cause of IAS estimation error is related to the angle encoder itself. The encoder is made of a rotating device linked to the shaft, on which a given number of marks are distributed over the whole circumference (the angle between two marks being the resolution of the encoder). These marks can be teeth, holes, or alternating sectors, depending on the technology (magnetic, optical, etc.). In all cases, the angle between consecutive marks is never perfectly identical all around the encoder, and these imperfections cause geometrical errors in the estimated IAS. The aim of this paper is to give guidelines to handle these estimation errors globally, and to propose a calibration procedure to correct the geometrical error. Examples are given on both numerical and experimental illustrations.
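The high-frequency counting approach reduces to a simple relation: each encoder increment spans an angle of 2*pi/PPR radians, the elapsed time is the tick count divided by the reference clock frequency, and the ratio gives the speed averaged over that increment. A sketch with hypothetical clock and encoder parameters (80 MHz and 1024 pulses per revolution are illustrative values, not from the paper):

```python
import numpy as np

F_CLK = 80e6   # hypothetical reference clock, 80 MHz
PPR = 1024     # hypothetical encoder resolution, pulses per revolution

def ias_from_counts(counts):
    """Average angular speed over each encoder increment (rad/s), from
    the number of reference-clock ticks counted between increments."""
    dtheta = 2.0 * np.pi / PPR                     # angle per increment (rad)
    dt = np.asarray(counts, dtype=float) / F_CLK   # time per increment (s)
    return dtheta / dt
```

At 3000 rpm the true tick count per increment is 80e6/51200 = 1562.5, so the counter reads 1562 or 1563; this half-tick quantization gives a relative speed error of about 0.03%, which is the counting error discussed above and is separate from the encoder's geometrical error.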

Patent
Vadim Seregin1, Jianle Chen1, Li Xiang1, Krishnakanth Rapaka1, Marta Karczewicz1, Ying Chen1 
26 Sep 2013
TL;DR: A processor derives prediction mode data associated with one of the plurality of sub-blocks based at least on a selection rule, upsamples the derived prediction mode data and the first layer block, and associates the upsampled prediction mode data with each upsampled sub-block of the upsampled first layer block.
Abstract: In one embodiment, an apparatus configured to code video data includes a processor and a memory unit. The memory unit stores video data associated with a first layer having a first spatial resolution and a second layer having a second spatial resolution. The video data associated with the first layer includes at least a first layer block and first layer prediction mode information associated with the first layer block, and the first layer block includes a plurality of sub-blocks, where each sub-block is associated with respective prediction mode data of the first layer prediction mode information. The processor derives the prediction mode data associated with one of the plurality of sub-blocks based at least on a selection rule, upsamples the derived prediction mode data and the first layer block, and associates the upsampled prediction mode data with each upsampled sub-block of the upsampled first layer block.
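The claimed operation can be illustrated with a toy sketch: select one sub-block's mode under a selection rule, then associate it with every sub-block of the spatially upsampled block. The top-left rule below is purely illustrative (the patent leaves the selection rule open), and this is a loose sketch, not the claimed codec design:

```python
import numpy as np

def upsample_prediction_modes(sub_block_modes, ratio, select=lambda m: m[0, 0]):
    """Derive one mode from the sub-block mode grid via a selection rule,
    then associate it with every sub-block of the upsampled block."""
    modes = np.asarray(sub_block_modes)
    derived = select(modes)                           # selection rule (illustrative)
    up_shape = (modes.shape[0] * ratio, modes.shape[1] * ratio)
    return np.full(up_shape, derived)                 # one mode per upsampled sub-block
```

A 2x spatial ratio turns a 2x2 grid of per-sub-block modes into a 4x4 grid carrying the single derived mode, so the enhancement layer can inherit consistent prediction mode information from the base layer.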

Patent
17 Jul 2013
TL;DR: In this paper, a data-assisted satellite-borne AIS signal synchronization parameter estimation method is proposed, in which an antenna receives a radio frequency signal emitted by a ship-borne AIS system and sends it to a receiver; the receiver demodulates the received radio frequency signal into a baseband signal and inputs it into an FPGA (field programmable gate array) data acquisition module.
Abstract: A data-assisted satellite-borne AIS signal synchronization parameter estimation method comprises the following: an antenna receives a radio frequency signal emitted by a ship-borne AIS system and sends it to a receiver; the receiver demodulates the received radio frequency signal into a baseband signal and inputs it into an FPGA (field programmable gate array) data acquisition module; the FPGA data acquisition module subjects the received baseband signal to analog-to-digital conversion to obtain a digital baseband signal and inputs it into a signal processing module; the signal processing module processes the received digital baseband signal to obtain correct AIS ship information and sends it to a data storage module; and the data storage module stores the received AIS ship information. The method has the advantages that, on the downsampled signal, introducing an autocorrelation operation and a maximum-likelihood operation achieves a large estimation range and a small estimation error for the frequency offset of the satellite-borne AIS signal; and through accumulated-argument and accumulated-modulus operations, high-accuracy timing, phase-shift and amplitude estimation is achieved.
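The autocorrelation-based frequency-offset step can be sketched for the simplest case, a clean complex exponential: the phase of the lag-D autocorrelation equals 2*pi*f*D/fs, so the offset follows from its argument. This is a textbook sketch only; the patented method is data-aided and combines such estimates with maximum-likelihood refinement, and the sampling rate below is illustrative:

```python
import numpy as np

def freq_offset_autocorr(r, fs, D=4):
    """Estimate a carrier frequency offset from the lag-D autocorrelation.
    Unambiguous only for |f| < fs / (2 * D)."""
    acorr = np.sum(r[D:] * np.conj(r[:-D]))   # lag-D autocorrelation
    return np.angle(acorr) * fs / (2.0 * np.pi * D)

fs = 19200.0                                   # illustrative sampling rate (Hz)
n = np.arange(512)
r = np.exp(2j * np.pi * 1000.0 * n / fs)       # tone with a 1 kHz offset
```

Larger lags D sharpen the estimate but shrink the unambiguous range to fs/(2*D), which is the range/accuracy trade-off the abstract alludes to.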

Proceedings ArticleDOI
12 Sep 2013
TL;DR: A novel image upsampling algorithm is proposed that preserves image structure, particularly around edges, better than previous techniques; it computes a spline approximation to the edges in the image via bitmap tracing.
Abstract: Traditional upsampling methods apply spatially invariant filtering across the entire image for efficiency, but often produce unsatisfactory artifacts around the edges. We propose a novel image upsampling algorithm that preserves the image structure, particularly around the edges, better than previous techniques. Our method computes a spline approximation to the edges in the image via bitmap tracing. A scaled version of the traced image is then used as a guide for joint bilateral upsampling to produce the final result. Our method is easy to implement, and the visual fidelity and quality of our results compare favorably to state-of-the-art techniques.