
Showing papers on "Upsampling" published in 2009


Proceedings ArticleDOI
20 Jun 2009
TL;DR: This work presents an image deconvolution algorithm that deblurs and denoises an image given a known shift-invariant blur kernel, using local color statistics as a constraint in a unified framework that handles deblurring, denoising, and upsampling.
Abstract: Image blur and noise are difficult to avoid in many situations and can often ruin a photograph. We present a novel image deconvolution algorithm that deblurs and denoises an image given a known shift-invariant blur kernel. Our algorithm uses local color statistics derived from the image as a constraint in a unified framework that can be used for deblurring, denoising, and upsampling. A pixel's color is required to be a linear combination of the two most prevalent colors within a neighborhood of the pixel. This two-color prior has two major benefits: it is tuned to the content of the particular image and it serves to decouple edge sharpness from edge strength. Our unified algorithm for deblurring and denoising outperforms previous methods that are specialized for these individual applications. We demonstrate this with both qualitative results and extensive quantitative comparisons that show that we can outperform previous methods by approximately 1 to 3 dB.
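As a rough illustration of the two-color model (a minimal Python sketch, not the authors' estimator; the 2-means initialisation and the choice of neighbourhood are assumptions), the prior can be read as the distance of a pixel's color from the segment joining the two dominant colors of its neighbourhood:

import numpy as np

def two_color_prior_residual(patch, center):
    """Distance of the `center` color from the segment joining the two
    dominant colors of `patch` (rows x cols x 3). A crude 2-means sketch,
    not the paper's estimator."""
    pixels = patch.reshape(-1, 3).astype(float)
    # crude initialisation: darkest and brightest pixel of the neighbourhood
    c1, c2 = pixels[pixels.sum(1).argmin()], pixels[pixels.sum(1).argmax()]
    for _ in range(10):                      # a few 2-means iterations
        d1 = np.linalg.norm(pixels - c1, axis=1)
        d2 = np.linalg.norm(pixels - c2, axis=1)
        mask = d1 < d2
        if mask.any():
            c1 = pixels[mask].mean(0)
        if (~mask).any():
            c2 = pixels[~mask].mean(0)
    seg = c2 - c1                            # project center onto segment c1 -> c2
    alpha = np.clip((center - c1).dot(seg) / (seg.dot(seg) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(center - (c1 + alpha * seg))

A small residual means the pixel is well explained by a blend of the two local colors, so a solver can penalise large residuals without penalising sharp edges.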

195 citations


Proceedings ArticleDOI
01 Dec 2009
TL;DR: Compared with an existing SIFT FPGA implementation, which requires 33 milliseconds for a 320×240-pixel image, the proposed architecture achieves a significant improvement.
Abstract: This paper proposes an architecture for optimised SIFT (Scale Invariant Feature Transform) feature detection for an FPGA implementation of an image matcher. In order for a SIFT-based image matcher to be implemented efficiently on an FPGA, in terms of speed and hardware resource usage, the original SIFT algorithm has been significantly optimised in the following aspects: 1) Upsampling has been replaced with downsampling to save the interpolation operation. 2) Only four scales with two octaves are needed for our image matcher, with moderate degradation of matching performance. 3) The total dimension of the feature descriptor has been reduced from 128 in the original SIFT to 72, which significantly simplifies the image matching operation. With the optimisations above, the proposed FPGA implementation is able to detect the features of a typical image of 640×480 pixels within 31 milliseconds. Therefore, compared with an existing SIFT FPGA implementation that requires 33 milliseconds for an image of 320×240 pixels, a significant improvement has been achieved by the proposed architecture.
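To make optimisation 1) concrete, the sketch below builds a two-octave, four-scale difference-of-Gaussians pyramid that starts directly from the input image rather than from a 2x-upsampled copy; the floating-point scipy implementation and the sigma value are illustrative assumptions, not the paper's fixed-point FPGA design.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(img, octaves=2, scales=4, sigma0=1.6):
    """Minimal DoG pyramid without the initial 2x upsampling of original SIFT."""
    k = 2.0 ** (1.0 / (scales - 1))
    base = img.astype(float)
    dog = []
    for _ in range(octaves):
        gauss = [gaussian_filter(base, sigma0 * k ** s) for s in range(scales)]
        dog.append([g1 - g0 for g0, g1 in zip(gauss, gauss[1:])])
        base = gauss[-1][::2, ::2]   # decimate (downsample) for the next octave
    return dog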

119 citations


Patent
Lars Risbo
04 Mar 2009
TL;DR: In this patent, different algorithms are applied for the upsampling and downsampling cases, and the FIR coefficients of the fractional delay FIR filter are calculated by evaluating polynomial expressions over intervals of the filter impulse response, at times corresponding to the input sample points.
Abstract: Asynchronous sample rate conversion for use in a digital audio receiver is disclosed. Different algorithms are applied for the upsampling and downsampling cases. In the upsampling case, the input signal is upsampled and filtered, before the application of a finite impulse response (FIR) filter. In the downsampling case, the input signal is filtered by an FIR filter, and then filtered and downsampled. The FIR coefficients of the fractional delay FIR filter are calculated by evaluation of polynomial expressions over intervals of the filter impulse response, at times corresponding to the input sample points.
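The coefficient-evaluation step can be pictured as a piecewise-polynomial (Farrow-style) lookup: each tap of the fractional-delay FIR filter stores the polynomial describing its interval of the impulse response, and the polynomials are evaluated at the fractional position of the input sample. The sketch below is a generic illustration under that reading; the tap layout, coefficient ordering, and the linear-interpolation example are not taken from the patent.

import numpy as np

def fractional_delay_taps(poly, mu):
    """Evaluate the FIR taps of a fractional-delay filter stored as piecewise
    polynomials. poly[i] holds the coefficients (highest order first) for the
    i-th tap's interval; mu in [0, 1) is the fractional sample position."""
    return np.array([np.polyval(p, mu) for p in poly])

# Example: a linear-interpolation filter expressed as two first-order polynomials.
taps = fractional_delay_taps([[-1.0, 1.0], [1.0, 0.0]], mu=0.25)   # -> [0.75, 0.25]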

106 citations


Proceedings ArticleDOI
18 Jan 2009
TL;DR: These experiments show that the dual image resolution range function alleviates the aliasing artifacts and therefore improves the temporal stability of the output depth map.
Abstract: Depth maps are used in many applications, e.g. 3D television, stereo matching, segmentation, etc. Often, depth maps are available at a lower resolution compared to the corresponding image data. For these applications, depth maps must be upsampled to the image resolution. Recently, joint bilateral filters have been proposed to upsample depth maps in a single step. In this solution, a high-resolution output depth is computed as a weighted average of surrounding low-resolution depth values, where the weight calculation depends on a spatial distance function and an intensity range function on the related image data. Compared to that, we present two novel ideas. Firstly, we apply anti-alias prefiltering on the high-resolution image to derive an image at the same low resolution as the input depth map. The upsample filter uses samples from both the high-resolution and the low-resolution images in the range term of the bilateral filter. Secondly, we propose to perform the upsampling in multiple stages, refining the resolution by a factor of 2×2 at each stage. We show experimental results on the consequences of the aliasing issue, and we apply our method to two use cases: a high-quality ground-truth depth map and a real-time generated depth map of lower quality. For the first use case a relatively small filter footprint is applied; the second use case benefits from a substantially larger footprint. These experiments show that the dual image resolution range function alleviates the aliasing artifacts and therefore improves the temporal stability of the output depth map. On both use cases, we achieved comparable or better image quality with respect to upsampling with the joint bilateral filter in a single step. On the former use case, we achieve a reduction of a factor of 5 in computational cost, whereas on the latter use case, the cost saving is a factor of 50.
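For reference, the single-step joint bilateral upsampling that the paper improves upon can be sketched as below (a direct, unoptimised Python illustration; the paper's anti-alias prefiltering, dual-resolution range term, and multi-stage 2×2 refinement are not reproduced here).

import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, factor, radius=2,
                             sigma_s=1.0, sigma_r=10.0):
    """Each high-resolution depth value is a weighted average of nearby
    low-resolution depths; weights combine spatial distance on the low-res
    grid with intensity differences in the high-resolution guide image."""
    H, W = guide_hi.shape
    h_lo, w_lo = depth_lo.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / factor, x / factor          # position on the low-res grid
            num = den = 0.0
            for j in range(int(yl) - radius, int(yl) + radius + 1):
                for i in range(int(xl) - radius, int(xl) + radius + 1):
                    if not (0 <= j < h_lo and 0 <= i < w_lo):
                        continue
                    ws = np.exp(-((j - yl) ** 2 + (i - xl) ** 2) / (2 * sigma_s ** 2))
                    gy, gx = min(int(j * factor), H - 1), min(int(i * factor), W - 1)
                    dr = float(guide_hi[y, x]) - float(guide_hi[gy, gx])
                    wr = np.exp(-dr ** 2 / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lo[j, i]
                    den += ws * wr
            out[y, x] = num / max(den, 1e-12)
    return out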

78 citations


01 Jan 2009
TL;DR: This work presents an efficient and scalable method to compute global illumination solutions at interactive rates for complex and dynamic scenes, based on parallel final gathering running entirely on the GPU, and demonstrates the applicability of the method to interactive global illumination, the simulation of multiple indirect bounces, and final gathering from photon maps.
Abstract: Recent approaches to global illumination for dynamic scenes achieve interactive frame rates by using coarse approximations to geometry, lighting, or both, which limits scene complexity and rendering quality. High-quality global illumination renderings of complex scenes are still limited to methods based on ray tracing. While conceptually simple, these techniques are computationally expensive. We present an efficient and scalable method to compute global illumination solutions at interactive rates for complex and dynamic scenes. Our method is based on parallel final gathering running entirely on the GPU. At each final gathering location we perform micro-rendering: we traverse and rasterize a hierarchical point-based scene representation into an importance-warped micro-buffer, which allows for BRDF importance sampling. The final reflected radiance is computed at each gathering location using the micro-buffers and is then stored in image-space. We can trade quality for speed by reducing the sampling rate of the gathering locations in conjunction with bilateral upsampling. We demonstrate the applicability of our method to interactive global illumination, the simulation of multiple indirect bounces, and to final gathering from photon maps.

62 citations


Proceedings ArticleDOI
29 May 2009
TL;DR: This work post-processes automatically generated depth maps from stereo or monoscopic video with a novel method that first downsamples the input depth map and then filters and upsamples it using joint-bilateral filters, producing image-aligned depth maps at full resolution.
Abstract: Automatically generated depth maps from stereo or monoscopic video are usually not aligned with the objects in the original image and may suffer from large-area outliers. We post-process these depth maps using a novel method that first downsamples the input depth map and then filters and upsamples it with joint-bilateral filters to produce image-aligned depth maps at full resolution. Unlike a known method that applies joint-bilateral filters to full-resolution depth maps, our method performs better in suppressing object details in the filtered depth map while properly aligning the object boundaries. Moreover, our method has low computational cost, which enables usage in consumer products.

49 citations


Journal ArticleDOI
TL;DR: An oversampled subband equaliser for the chromatic dispersion of single-mode optical fibre is proposed; the signal of each subband is equalised individually and delayed to align with the other subbands.
Abstract: An oversampled subband equaliser is designed for the chromatic dispersion of single-mode optical fibre. The signal of each subband is equalised individually and delayed to align with the other subbands. From simulation, the overall number of equaliser taps is reduced approximately by the upsampling ratio.

43 citations


Proceedings ArticleDOI
07 Nov 2009
TL;DR: Test results show that as much as 1.2 dB gain in free-viewpoint video quality can be achieved with the utilization of the proposed method compared to the scheme that uses the linear MPEG re-sampling filter.
Abstract: In this paper we propose a novel video object edge adaptive upsampling scheme for application in video-plus-depth and Multi-View plus Depth (MVD) video coding chains with reduced resolution. The proposed scheme improves the rate-distortion performance of reduced-resolution depth map coders by taking into account the rendering distortion induced in free-viewpoint videos. The inherent loss of fine detail due to downsampling, particularly at video object boundaries, causes significant visual artefacts in rendered free-viewpoint images. The proposed edge adaptive upsampling filter allows the conservation and better reconstruction of such critical object boundaries. Furthermore, the proposed scheme does not require the edge information to be communicated to the decoder, as the edge information used in the adaptive upsampling is derived from the reconstructed colour video. Test results show that as much as 1.2 dB gain in free-viewpoint video quality can be achieved with the proposed method compared to a scheme that uses the linear MPEG re-sampling filter. The proposed approach is suitable for video-plus-depth as well as MVD applications, in which it is critical to satisfy bandwidth constraints while maintaining high free-viewpoint image quality.

38 citations


Patent
Scott Cohen
27 Feb 2009
TL;DR: An iteratively re-weighted least squares procedure is used to minimize the objective function and generate improved candidate solutions from an initial solution; the identified solution is stored in memory as a higher-resolution version of the input image and made available to subsequent operations in an image editing application or other graphics application.
Abstract: Systems and methods for upsampling input images may evaluate potential upsampling solutions with respect to an objective function that is dependent on a sparse derivative prior on second derivative(s) of the potential upsampling solutions to identify an acceptable higher-resolution output image. The objective function may also be dependent on fidelity term(s) and/or sparse derivative prior(s) on first derivative(s) of potential upsampling solutions. The methods may include applying an iteratively re-weighted least squares procedure to minimize the objective function and generate improved candidate solutions from an initial solution. The identified solution may be stored as a higher-resolution version of the input image in memory, and made available to subsequent operations in an image editing application or other graphics application. The methods may produce sharp results that are also smooth along edges. The methods may be implemented as program instructions stored on computer-readable storage media, executable by a CPU and/or GPU.
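A one-dimensional sketch of the IRLS idea described above, assuming a quadratic fidelity term plus a sparse (p < 2) prior on second differences; the patent's two-dimensional formulation, optional first-derivative priors, and exact weighting scheme are not reproduced.

import numpy as np

def irls_upsample_1d(y, factor, p=0.8, lam=0.1, iters=20):
    """Upsample a 1D signal by minimising ||Dx - y||^2 + lam * sum |d2 x|^p,
    where D decimates by `factor` and d2 is the second difference, using
    iteratively re-weighted least squares."""
    n = len(y) * factor
    D = np.zeros((len(y), n))
    D[np.arange(len(y)), np.arange(len(y)) * factor] = 1.0      # decimation operator
    S = np.zeros((n - 2, n))                                    # second-difference operator
    idx = np.arange(n - 2)
    S[idx, idx], S[idx, idx + 1], S[idx, idx + 2] = 1.0, -2.0, 1.0
    x = np.repeat(y, factor).astype(float)                      # nearest-neighbour initial solution
    for _ in range(iters):
        w = (np.abs(S @ x) + 1e-6) ** (p - 2)                   # IRLS weights for the sparse prior
        A = D.T @ D + lam * S.T @ (w[:, None] * S)
        x = np.linalg.solve(A, D.T @ y)
    return x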

30 citations


Proceedings ArticleDOI
19 Apr 2009
TL;DR: This paper proposes an efficient and adaptive image resizing algorithm that preserves the content and image structure as well as possible and provides resized images with higher image quality and faster speed than the method in [13].
Abstract: Several different methods have been proposed for image/video retargeting while retaining the content. However, they sometimes produce artifacts such as ridges or structure twists. In this paper, we present a structure-preserving image resizing technique for image retargeting applications. Based on the warping-based retargeting technique proposed by Wolf et al. [13], we propose an efficient and adaptive image resizing algorithm that preserves the content and image structure as well as possible. We first downsample the original image using bilinear interpolation. In order to preserve the content, we introduce structure constraints derived from line detection into the large linear system. Then, the mapping matrices are enlarged to the original size by joint-bilateral upsampling, and the resized image is produced so as to preserve the content and structure as well as possible. Most of the computation is performed on the low-resolution layer, so the method is very efficient. Our experiments show that the proposed method provides resized images with higher image quality and at faster speed than the method in [13].

29 citations


Patent
Lewis Johnson
02 Oct 2009
TL;DR: In this paper, the authors present a method to generate an image with an enhanced range of brightness levels by adjusting pixel data and/or using predicted values of luminance, for example, at different resolutions.
Abstract: Embodiments of the invention relate generally to generating images with an enhanced range of brightness levels, and more particularly, to facilitating high dynamic range imaging by adjusting pixel data and/or using predicted values of luminance, for example, at different resolutions. In at least one embodiment, a method generates an image with an enhanced range of brightness levels. The method can include accessing a model of backlight that includes data representing values of luminance for a number of first samples. The method also can include inverting the values of luminance, as well as upsampling inverted values of luminance to determine upsampled values of luminance. Further, the method can include scaling pixel data for a number of second samples by the upsampled values of luminance to control a modulator to generate an image.
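A minimal sketch of the described pipeline, assuming an integer upsampling factor and nearest-neighbour upsampling (both illustrative choices; the function name and clipping range are not taken from the patent).

import numpy as np

def modulator_pixels(pixel_data, backlight_luma, factor, eps=1e-4):
    """Invert the low-resolution backlight luminance model, upsample the
    inverted values to pixel resolution, and scale the pixel data so the
    modulator compensates for the backlight."""
    inv = 1.0 / np.maximum(backlight_luma, eps)                 # invert luminance
    inv_up = np.kron(inv, np.ones((factor, factor)))            # nearest-neighbour upsampling
    inv_up = inv_up[:pixel_data.shape[0], :pixel_data.shape[1]]
    return np.clip(pixel_data * inv_up, 0.0, 1.0)               # scaled pixel data in [0, 1]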

Journal ArticleDOI
TL;DR: A novel and efficient shape adaptive filter is presented for upsampling depth map videos that are of lower resolution than their colour texture counterparts, based on the observation that significant transitions in depth intensity across depth map frames influence the overall quality of generated free-viewpoint videos.
Abstract: Quality enhancement of free-viewpoint videos is addressed for 3D video systems that use the colour texture video plus depth map representation format. More specifically, a novel and efficient shape adaptive filter is presented for upsampling depth map videos that are of lower resolution than their colour texture counterparts. Either measurement or estimation of depth map videos can take place at lower resolution. At the same time, depth map reconstruction takes place at low resolution if reduced resolution compression techniques are utilised. The proposed design is based on the observation that significant transitions in depth intensity across depth map frames influence the overall quality of generated free-viewpoint videos. Hence, sharpness and accuracy in the free-viewpoint videos rendered using 3D geometry via depth maps, especially across object borders, are targeted. Accordingly, significant enhancement of rendered free-viewpoint video quality is obtained when the proposed method is applied on top of MPEG spatial scalability filters.

Patent
Radoslav Petrov Nickolov, Lutz Gerhard, Ming Liu, Raman Narayanan, Drew Steedly
06 Jan 2009
TL;DR: In this article, a plurality of image layers having different resolutions are arranged in order of increasing resolution, and the upsampling and blending continues for each of the image layers to produce a blended image.
Abstract: Providing high frame rate image rendering using multiple image layers per frame. A plurality of image layers having different resolutions are arranged in order of increasing resolution. Beginning with the image layer having the lowest resolution, the image layer is upsampled to a resolution of a next image layer having a higher resolution. The upsampled image layer is blended with the next image layer. The upsampling and blending continues for each of the image layers to produce a blended image. The blended image is provided for display as a frame of video. Aspects of the invention produce a high-resolution composite image during animated navigation across zoom and pan states.
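A minimal sketch of the upsample-and-blend loop, assuming each layer carries an alpha channel, resolutions grow by an integer factor between layers, and nearest-neighbour upsampling stands in for the unspecified filter.

import numpy as np

def blend_layers(layers):
    """Composite image layers ordered from lowest to highest resolution.
    Each layer is an (rgb, alpha) pair at its own resolution."""
    out, _ = layers[0]                                   # start with the coarsest layer
    out = out.astype(float)
    for rgb, alpha in layers[1:]:
        factor = rgb.shape[0] // out.shape[0]
        out = np.kron(out, np.ones((factor, factor, 1)))               # upsample to next resolution
        out = alpha[..., None] * rgb + (1.0 - alpha[..., None]) * out  # blend with next layer
    return out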

Proceedings ArticleDOI
07 Nov 2009
TL;DR: It is concluded that for 3D IPTV services, while receiving the full quality/resolution reference view, users should subscribe to differently scaled versions of the auxiliary view depending on their 3D display technology.
Abstract: It is well known that the human visual system can perceive high frequency content in 3D, even if that information is present in only one of the views. Then, the best 3D perception quality may be achieved by allocating the rates of the reference (right) and auxiliary (left) views asymmetrically. However, the question of whether the rate reduction for the auxiliary view should be achieved by spatial resolution reduction (coding a downsampled version of the video followed by upsampling after decoding) or quality (QP) reduction is an open issue. This paper shows that the preferred approach depends on the 3D display technology used at the receiver. Subjective tests indicate that users prefer lower quality (larger QP) coding of the auxiliary view over lower resolution coding if a “full spatial resolution” 3D display technology (such as polarized projection) is employed. On the other hand, users prefer lower resolution coding of the auxiliary view over lower quality coding if a “reduced spatial resolution” 3D display technology (such as parallax barrier - autostereoscopic) is used. Therefore, we conclude that for 3D IPTV services, while receiving the full quality/resolution reference view, users should subscribe to differently scaled versions of the auxiliary view depending on their 3D display technology. We also propose an objective 3D video quality measure that takes the 3D display technology into account.

Patent
24 Mar 2009
TL;DR: In this patent, a technique is presented for eliminating, or reducing the complexity of, the upsampler/interpolator of a transmit system by configuring an IFFT to perform both the frequency-to-time conversion and at least a portion of the upsampling from the first sampling rate towards the sampling rate of the DAC.
Abstract: A technique for eliminating, or reducing the complexity of, the upsampler/interpolator of a transmit system. In general, the technique involves configuring an IFFT to perform both the conversion of a modulated signal from the frequency to the time domain and at least a portion of the upsampling from the first sampling rate towards the sampling rate of a DAC. In one embodiment, the IFFT is configured to have a bandwidth substantially equal to the sampling rate of the DAC. In this embodiment, the upsampler/interpolator may be eliminated entirely. In another embodiment, the IFFT is configured to have a bandwidth that is greater than the first sampling rate of the modulated signal and lower than the sampling rate of the DAC. In this embodiment, a simpler upsampler/interpolator may be employed to perform the remaining upsampling from the IFFT bandwidth to the sampling rate of the DAC.
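The core idea can be sketched as placing the modulated bins into a larger IFFT so that the time-domain output already sits at (or closer to) the DAC rate; the bin placement and amplitude scaling below are generic zero-padding assumptions, not the patented arrangement.

import numpy as np

def upsampling_ifft(freq_symbols, n_small, n_large):
    """Place n_small modulated bins (standard FFT order assumed) into an
    n_large-point IFFT, yielding a time-domain signal at n_large/n_small
    times the original sampling rate."""
    assert len(freq_symbols) == n_small and n_large >= n_small
    padded = np.zeros(n_large, dtype=complex)
    half = n_small // 2
    padded[:half] = freq_symbols[:half]                 # positive frequencies
    padded[-(n_small - half):] = freq_symbols[half:]    # negative frequencies
    # scale so sample amplitudes match an n_small-point IFFT followed by interpolation
    return np.fft.ifft(padded) * (n_large / n_small)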

Patent
07 Jul 2009
TL;DR: An image coding method includes transforming the color space of a color image from a first color space to a second color space, removing part of the samples of the color-space-transformed image to generate a subsampled color image, coding the subsampled color image, and determining an upsampling coefficient used for upsampling.
Abstract: An image coding method includes transforming a color space of a color image from a first color space to a second color space to generate a color space transformed color image (S12), removing part of samples included in the color space transformed color image to generate a subsampled color image (S13), coding the subsampled color image to generate a coded color image (S14), determining an upsampling coefficient used for upsampling (S16), determining a color space inverse transform coefficient for inversely transforming the color space from the second color space to the first color space (S17), and outputting the coded color image, the upsampling coefficient, and the color space inverse transform coefficient (S19).

Patent
04 Jun 2009
TL;DR: In this patent, a multi-rate Digital Phase-Locked Loop (DPLL) is proposed in which quantization noise is reduced by clocking the TDC at a high rate; the downsampled stream is supplied to a phase detecting summer of the DPLL so that the control portion can switch at a lower rate to reduce power consumption.
Abstract: A Digital Phase-Locked Loop (DPLL) involves a Time-to-Digital Converter (TDC) that receives a DCO output signal and a reference clock and outputs a first stream of digital values. Quantization noise is reduced by clocking the TDC at a high rate. Downsampling circuitry converts the first stream into a second stream. The second stream is supplied to a phase detecting summer of the DPLL such that a control portion of the DPLL can switch at a lower rate to reduce power consumption. The DPLL is therefore referred to as a multi-rate DPLL. A third stream of digital tuning words output by the control portion is upsampled before being supplied to the DCO so that the DCO can be clocked at the higher rate, thereby reducing digital images. In a receiver application, no upsampling is performed and the DCO is clocked at the lower rate, thereby further reducing power consumption.

Patent
Nils Kokemohr
25 Sep 2009
TL;DR: In this patent, a method for filtering a digital image is presented, comprising segmenting the digital image into a plurality of tiles and computing tile histograms corresponding to each of the plurality of tiles.
Abstract: A method for filtering a digital image, comprising segmenting the digital image into a plurality of tiles; computing tile histograms corresponding to each of the plurality of tiles; deriving a plurality of tile transfer functions from the tile histograms preferably using 1D convolutions; interpolating a tile transfer function from the plurality of tile transfer functions; and filtering the digital image with the interpolated tile transfer function. Many filters otherwise difficult to conceive or to implement are possible with this method, including an edge-preserving smoothing filter, HDR tone mapping, edge invariant gradient or entropy detection, image upsampling, and mapping coarse data to fine data.
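A rough sketch of the tile-based structure, using plain histogram equalisation as a stand-in for the 1D-convolution-derived transfer functions and bilinear interpolation between the four surrounding tiles (both are assumptions for illustration; 8-bit greyscale input assumed).

import numpy as np

def tile_transfer_filter(img, tiles=8):
    """Per-tile transfer functions interpolated per pixel (CLAHE-like layout)."""
    H, W = img.shape
    th, tw = H // tiles, W // tiles
    tf = np.zeros((tiles, tiles, 256))                  # one transfer function per tile
    for ty in range(tiles):
        for tx in range(tiles):
            tile = img[ty * th:(ty + 1) * th, tx * tw:(tx + 1) * tw]
            hist = np.bincount(tile.ravel(), minlength=256)
            cdf = np.cumsum(hist).astype(float)
            tf[ty, tx] = 255.0 * cdf / max(cdf[-1], 1)  # stand-in transfer function
    out = np.zeros_like(img, dtype=float)
    ys = (np.arange(H) + 0.5) / th - 0.5                # pixel positions in tile coordinates
    xs = (np.arange(W) + 0.5) / tw - 0.5
    for y in range(H):
        y0 = int(np.clip(np.floor(ys[y]), 0, tiles - 2)); fy = np.clip(ys[y] - y0, 0, 1)
        for x in range(W):
            x0 = int(np.clip(np.floor(xs[x]), 0, tiles - 2)); fx = np.clip(xs[x] - x0, 0, 1)
            v = img[y, x]
            top = (1 - fx) * tf[y0, x0, v] + fx * tf[y0, x0 + 1, v]
            bot = (1 - fx) * tf[y0 + 1, x0, v] + fx * tf[y0 + 1, x0 + 1, v]
            out[y, x] = (1 - fy) * top + fy * bot       # bilinear mix of four tile functions
    return out.astype(np.uint8)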

Proceedings Article
01 Jan 2009
TL;DR: This paper proposes frequency-domain upsampling on an optimal Body-Centered Cubic (BCC) lattice, which provides similar quality to the most popular cubic filters at a significantly lower computational overhead.
Abstract: In volume-rendering applications, an appropriate resampling filter can be chosen by making a compromise between quality and efficiency. For real-time volume visualization, usually the trilinear filter is used, since its evaluation is directly supported by recent GPUs. Although higher-order filters (e.g. quadratic or cubic filters) ensure much higher image quality, due to their larger support they are significantly more expensive to evaluate even if GPU acceleration is applied. Instead of higher-order filtering, in this paper we propose a frequency-domain upsampling on an optimal Body-Centered Cubic (BCC) lattice. The obtained BCC-sampled representation is rendered using a simple GPU-accelerated trilinear B-spline reconstruction. Although this approach doubles the storage requirements, it provides similar quality to the most popular cubic filters at a significantly lower computational overhead.

Proceedings ArticleDOI
24 May 2009
TL;DR: A new ILIP method is proposed that introduces adaptive signal processing techniques to generate a prediction signal with minimum error energy in the scalable video coding standard.
Abstract: In the scalable video coding (SVC) standard, interlayer intra prediction (ILIP) is one of the most fundamental coding tools used to reduce bit rate. The prediction block for a macroblock (MB) in the enhancement layer (EL) is obtained by upsampling the co-located base layer (BL) block. In this paper, we propose a new ILIP method that introduces adaptive signal processing techniques. Adaptive Wiener filters, calculated independently for each frame, are used to generate a prediction signal with minimum error energy. On average, a coding gain of 3.6% bit rate reduction is obtained for the QCIF-to-CIF scenario. Up to 9% bit rate reduction is achieved for the CIF-to-4CIF scenario.
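The per-frame adaptive filter can be pictured as a least-squares (Wiener) fit of FIR taps that map the upsampled base layer to the enhancement-layer original; the tap layout and whole-frame estimation below are illustrative assumptions, not the exact SVC integration.

import numpy as np

def wiener_prediction_filter(upsampled_bl, original_el, radius=1):
    """Estimate (2*radius+1)^2 filter taps that minimise the prediction error
    energy between the filtered upsampled base layer and the enhancement layer."""
    k = 2 * radius + 1
    H, W = original_el.shape
    rows, targets = [], []
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            patch = upsampled_bl[y - radius:y + radius + 1, x - radius:x + radius + 1]
            rows.append(patch.ravel())
            targets.append(original_el[y, x])
    A = np.asarray(rows, dtype=float)
    b = np.asarray(targets, dtype=float)
    taps, *_ = np.linalg.lstsq(A, b, rcond=None)        # minimum-error-energy solution
    return taps.reshape(k, k)

Filtering the upsampled base-layer block with these taps yields the prediction signal; how the taps are conveyed to the decoder is not detailed in the abstract.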

Proceedings ArticleDOI
29 May 2009
TL;DR: This work performed algorithmic and DSP specific optimizations to achieve the real-time implementation on an embedded DSP processor, TM3270, while preserving high quality results.
Abstract: Automatically generated depth maps from video are usually not aligned with the objects in the original image and are produced at lower resolutions. We propose to apply a joint-bilateral filter to smooth the depth map within the objects and upsample it to the original image resolution while keeping object edges in the depth map aligned with the original image. We performed algorithmic and DSP-specific optimizations to achieve a real-time implementation on an embedded DSP processor, the TM3270, while preserving high quality results. Upsampling 90×72@50Hz depth maps to 720×576@50Hz requires 69% to 86% of the TM3270 cycle budget.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: An image coding method based on adaptive downsampling, which exploits not only pixel redundancy but also visual redundancy, is proposed; it outperforms JPEG2000, SPECK, SPIHT, and LT+SPECK at low bit rates.
Abstract: This paper proposes an image coding method based on adaptive downsampling, which exploits not only pixel redundancy but also visual redundancy. At the encoder side, the codec adaptively chooses smooth regions of the original image to downsample, and then a selective overlapped transform, block DCT, and shape-adaptive DCT (SA-DCT) are applied to the downsampled image. The incompletely transformed image is coded with OB-SPECK. At the decoder side, in order to reduce computational complexity, we use simple cubic interpolation, which is well suited to the downsampled regions and greatly improves the real-time performance of the coding system. Experimental results show that the proposed method outperforms JPEG2000, SPECK, SPIHT, and LT+SPECK at low bit rates.

Patent
05 Aug 2009
TL;DR: In this patent, a method for automatically upsampling a demosaicked full-color image using an edge-orientation map formed in the demosaicking process is presented.
Abstract: Automatically resizing demosaicked full-color images using edge-orientation maps formed in the demosaicking process. In a first example embodiment, a method for automatic upsampling of a demosaicked image includes several acts. First, a demosaicked image and an edge-orientation map that was created during the creation of the demosaicked image are received. Next, pixels of the demosaicked image are filled into an upsampled image. Then, edge-orientation values of pixels of the edge-orientation map are filled into an upsampled edge-orientation map. Next, an interpolation direction is determined for each pixel in which upsampling of the demosaicked image should be performed using the upsampled edge-orientation map. Finally, missing pixels in the upsampled image are estimated by performing interpolation along the interpolation direction using available pixels surrounding each missing pixel location.

Patent
08 Sep 2009
TL;DR: In this paper, a method and system for fractional rate conversion for a transmit path of a transceiver is presented. But the method is not suitable for the case of large numbers of transceivers.
Abstract: A method and system for fractionally converting sample rates. Fractional rate conversion for the transmit path of a transceiver is achieved by upsampling an input signal having a first sample rate by a first integer factor, removing the aliasing resulting from the upsampling process, and then downsampling the intermediate signal by a second integer factor to provide a final signal having a second sample rate. The first factor and the second factor are selected to obtain a desired output sample rate that is a fraction of the sample rate of the input signal.
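This upsample-filter-downsample chain is what a polyphase resampler such as scipy.signal.resample_poly implements; a minimal sketch, with the 48 kHz to 44.1 kHz factors chosen purely for illustration (they are not from the patent).

import numpy as np
from scipy.signal import resample_poly

x = np.random.randn(48000)          # one second of a placeholder signal at 48 kHz
L, M = 147, 160                     # 48000 * 147 / 160 = 44100
y = resample_poly(x, up=L, down=M)  # upsample by L, anti-alias filter, downsample by M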

Journal ArticleDOI
Chun-Su Park, Seung-Jin Baek, Seung-Won Jung, Hye-Soo Kim, Sung-Jea Ko
TL;DR: This work proposes an improved ILIP method by generalizing the original one adopted in the SVC standard, and shows that the proposed algorithm can reduce the bit rate by 1.91% to 6.44%, as compared with the conventional ILIP.
Abstract: The scalable video coding (SVC) standard adopts a simple interlayer intra prediction (ILIP) method for encoding scalable video sequences. In the conventional ILIP, a prediction signal for the macroblock (MB) of the enhancement layer (EL) is obtained by simply upsampling the co-located block of the base layer (BL). We propose an improved ILIP method by generalizing the original one adopted in the SVC standard. In the proposed ILIP method, the MB of the EL is predicted using all MBs of the BL. Experimental results show that the proposed algorithm can reduce the bit rate by 1.91% to 6.44% compared with the conventional ILIP, while the average PSNR is not decreased.

Patent
22 Jun 2009
TL;DR: In this patent, the authors propose a time-spatial downsampling processor that detects motion vectors between frame images and preferentially thins out motion vectors of smaller magnitude to perform downsampling in the temporal frequency space.
Abstract: PROBLEM TO BE SOLVED: To provide a time-spatial downsampling processor, a time-spatial upsampling processor, an encoding apparatus, a decoding apparatus, and a program with which the amount of image frame information to be transmitted can be reduced and compression encoding can be performed highly efficiently. SOLUTION: The encoding apparatus 1 includes: a time downsampling processor 2 that detects motion vectors between frame images and preferentially thins out motion vectors of smaller magnitude to perform downsampling in the temporal frequency space; and a spatial downsampling processor 3 that calculates the average value of all elements in the region of each frequency component produced by a one-stage discrete wavelet decomposition, applies a predefined band limitation to each of the high spatial frequency components, performs a one-stage discrete wavelet reconstruction using the low spatial frequency components and the band-limited high spatial frequency components, and thins out pixels in the horizontal and vertical directions at a ratio of 1:2 to generate a reduced image signal.

Patent
10 Dec 2009
TL;DR: In this patent, caching structures and apparatus for use in block-based video are described; the system comprises an upsampling circuit, a first circuit, and a second circuit.
Abstract: Presented herein are caching structures and apparatus for use in block-based video. In one embodiment, there is described a system receiving lower resolution frames and generating higher resolution frames. The system comprises an upsampling circuit, a first circuit, and a second circuit. The upsampling circuit upsamples a particular lower resolution frame, thereby resulting in an upsampled frame. The first circuit maps frames that are proximate to the particular frame to the particular frame. The second circuit simultaneously updates the upsampled frame with two or more blocks from at least one of the frames that are proximate to the particular frame.

Journal ArticleDOI
TL;DR: An image-space acceleration technique that allows real-time direct volume rendering of large unstructured volumes and can be used as a post-process for a wide range of volume rendering algorithms and volumetric datasets is presented.
Abstract: We present an image-space acceleration technique that allows real-time direct volume rendering of large unstructured volumes. Our algorithm operates as a simple post-process and can be used to improve the performance of any existing volume renderer that is sensitive to image size. Using a joint bilateral upsampling filter, images can be rendered efficiently at a fraction of their original size, then upsampled at a high quality using properties that can be quickly computed from the volume. We show how our acceleration technique can be efficiently implemented with current GPUs and used as a post-process for a wide range of volume rendering algorithms and volumetric datasets.

Patent
22 Jan 2009
TL;DR: In this patent, a low-pass filter based on a stop-band frequency is applied before reducing the number of samples of the image data to 1/M, resulting in high-quality compressed and reproduced images that do not greatly damage the harmonic components of the original signal.
Abstract: PROBLEM TO BE SOLVED: To provide a device and method for data conversion and reproduction that prevent the image quality of moving image data from deteriorating. SOLUTION: When downsampling is carried out to reduce the number of samples of the image data to 1/M, a band restriction is applied to the moving image data by a low-pass filter whose stop-band frequency fs is chosen so that, among the harmonic components generated by the downsampling, those of order K or higher are not superimposed on the original signal component. Compressed data are then generated by downsampling the band-limited data. This configuration not only prevents the aliasing distortion caused by high-order harmonics but also produces high-quality compressed and reproduced images that do not greatly damage the harmonic components of the original signal.

Book ChapterDOI
15 Dec 2009
TL;DR: Experimental results show that the proposed video coding scheme with block-adaptive downsampling and super-resolution-based upsampling is very promising for high-resolution coding.
Abstract: The super-resolution technique was first proposed for enhancing image resolution, and it was later extended to video sequences to obtain higher-resolution video from low-resolution input. Recently, super-resolution-based video coding has emerged as an important research topic, as image resolutions increase rapidly and downsampling-based coding is very efficient for bit rate reduction. With a super-resolution algorithm, we can encode the input video at low resolution and low bitrate and reconstruct a high-resolution video efficiently at the decoder side. In this paper, a block-adaptive super-resolution-based coding framework is proposed for video coding. In the proposed scheme, block-adaptive downsampling and upsampling with super-resolution are selected based on a rate-distortion cost decision, in which the distortion caused by the super-resolution algorithm in the reconstruction process is also included. Experimental results show that the proposed scheme is very promising for high-resolution coding.