
Showing papers on "Upsampling published in 2007"


Proceedings ArticleDOI
29 Jul 2007
TL;DR: It is demonstrated that in cases, such as those above, the available high resolution input image may be leveraged as a prior in the context of a joint bilateral upsampling procedure to produce a better high resolution solution.
Abstract: Image analysis and enhancement tasks such as tone mapping, colorization, stereo depth, and photomontage, often require computing a solution (e.g., for exposure, chromaticity, disparity, labels) over the pixel grid. Computational and memory costs often require that a smaller solution be run over a downsampled image. Although general purpose upsampling methods can be used to interpolate the low resolution solution to the full resolution, these methods generally assume a smoothness prior for the interpolation. We demonstrate that in cases, such as those above, the available high resolution input image may be leveraged as a prior in the context of a joint bilateral upsampling procedure to produce a better high resolution solution. We show results for each of the applications above and compare them to traditional upsampling methods.
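As a rough illustration of the joint bilateral upsampling idea described in this abstract, here is a minimal Python/NumPy sketch assuming a grayscale guidance image and a scalar low-resolution solution; the Gaussian weights, window radius, and function and parameter names are illustrative choices, not the authors' exact formulation.

import numpy as np

def joint_bilateral_upsample(solution_lr, guide_hr, s, sigma_s=1.0, sigma_r=0.1, radius=2):
    # solution_lr: (h, w) low-resolution solution (e.g., exposure, disparity, labels)
    # guide_hr:    (h*s, w*s) high-resolution guidance image with values in [0, 1]
    # s:           integer upsampling factor
    H, W = guide_hr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / s, x / s          # this pixel's position in low-res coordinates
            num, den = 0.0, 0.0
            for j in range(int(yl) - radius, int(yl) + radius + 1):
                for i in range(int(xl) - radius, int(xl) + radius + 1):
                    if 0 <= j < solution_lr.shape[0] and 0 <= i < solution_lr.shape[1]:
                        # Spatial weight, measured in low-res coordinates.
                        ws = np.exp(-((j - yl) ** 2 + (i - xl) ** 2) / (2 * sigma_s ** 2))
                        # Range weight taken from the high-resolution guidance image.
                        g = guide_hr[min(j * s, H - 1), min(i * s, W - 1)]
                        wr = np.exp(-((guide_hr[y, x] - g) ** 2) / (2 * sigma_r ** 2))
                        num += ws * wr * solution_lr[j, i]
                        den += ws * wr
            out[y, x] = num / den if den > 0 else solution_lr[int(yl), int(xl)]
    return out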

1,185 citations


Proceedings ArticleDOI
29 Jul 2007
TL;DR: A new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts, based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images.
Abstract: In this paper we propose a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts. The method is based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images. While other solutions assume some form of smoothness, we rely on this distinctive edge dependency as our prior knowledge in order to increase image resolution. In addition to this relation we require that intensities are conserved; the output image must be identical to the input image when downsampled to the original resolution. Altogether the method consists of solving a constrained optimization problem, attempting to impose the correct edge relation and conserve local intensities with respect to the low-resolution input image. Results demonstrate the visual importance of having such edge features properly matched, and the method's capability to produce images in which sharp edges are successfully reconstructed.
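The intensity-conservation requirement stated above (the output must reproduce the input when downsampled) can be enforced as a simple projection step. The Python/NumPy sketch below assumes an integer factor and plain box-average downsampling, which may differ from the paper's actual downsampling operator; the function names are illustrative.

import numpy as np

def box_downsample(img, s):
    # Average non-overlapping s-by-s blocks (a stand-in for the paper's downsampling operator).
    h, w = img.shape          # h and w are assumed divisible by s
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def enforce_intensity_conservation(hr_estimate, lr_input, s):
    # Project an HR estimate so that its box-downsampling matches the LR input exactly.
    residual = lr_input - box_downsample(hr_estimate, s)
    # Spread each low-res residual uniformly over its s-by-s block.
    return hr_estimate + np.kron(residual, np.ones((s, s)))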

480 citations


Journal ArticleDOI
TL;DR: In this article, a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts is proposed, which is based on a statistical edg...
Abstract: In this paper we propose a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts. The method is based on a statistical edg...

224 citations


Patent
Gary J. Sullivan
08 Jan 2007
TL;DR: In this article, the authors present techniques and tools for high accuracy position calculation for picture resizing in applications such as spatially-scalable video coding and decoding, where resampling of a video picture is performed according to a resampling scale factor.
Abstract: Techniques and tools for high accuracy position calculation for picture resizing in applications such as spatially-scalable video coding and decoding are described. In one aspect, resampling of a video picture is performed according to a resampling scale factor. The resampling comprises computation of a sample value at a position i, j in a resampled array. The computation includes computing a derived horizontal or vertical sub-sample position x or y in a manner that involves approximating a value in part by multiplying a 2^n value by an inverse (approximate or exact) of the upsampling scale factor. The approximating can be a rounding or some other kind of approximating, such as a ceiling or floor function that approximates to a nearby integer. The sample value is interpolated using a filter.
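As a rough illustration of the position computation described above, the following Python sketch pre-approximates the inverse of the upsampling scale factor as a rounded 2^n fixed-point value and derives sub-sample positions with a rounding shift; the precision n, the fractional resolution, and the function name are assumptions, not the patent's exact formula.

def derived_subsample_position(i, scale_num, scale_den, n=16, frac_bits=4):
    # Map output sample index i to a reference position in 1/2**frac_bits sub-sample units.
    # Fixed-point inverse of the upsampling scale factor: round(2**n / (scale_num/scale_den)).
    inv_scale_fp = (2**n * scale_den + scale_num // 2) // scale_num
    shift = n - frac_bits
    return (i * inv_scale_fp + (1 << (shift - 1))) >> shift

# Example: for upsampling by 3/2, output index 5 corresponds to input position 5*(2/3) = 3.33...;
# derived_subsample_position(5, 3, 2) returns 53, i.e. 53/16 = 3.3125 in sub-sample units.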

142 citations


Patent
27 Jun 2007
TL;DR: In this article, base layer (BL) information is upsampled in two logical steps, texture upsampling and bit depth upsampling, and the upsampled BL data are used to predict the collocated enhancement layer (EL) in the H.264/AVC scalability extension SVC.
Abstract: A scalable video bitstream may have an H.264/AVC compatible base layer and a scalable enhancement layer, where scalability refers to color bit depth. The H.264/AVC scalability extension SVC also provides other types of scalability, e.g. spatial scalability, where the number of pixels in BL and EL is different. According to the invention, BL information is upsampled in two logical steps, one being texture upsampling and the other being bit depth upsampling. Texture upsampling is a process that increases the number of pixels, and bit depth upsampling is a process that increases the number of values that each pixel can have, corresponding to the pixel's color intensity. The upsampled BL data are used to predict the collocated EL. The BL information is upsampled at the encoder side and in the same manner at the decoder side, wherein the upsampling refers to spatial and bit depth characteristics.
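A minimal Python/NumPy sketch of the two logical upsampling steps described above, assuming nearest-neighbour texture upsampling by a factor of 2 and a simple left-shift mapping from 8-bit to 10-bit samples; the patent's actual interpolation filters and bit-depth mapping are not specified here.

import numpy as np

def texture_upsample(bl, factor=2):
    # Increase the number of pixels (nearest-neighbour shown for brevity).
    return np.repeat(np.repeat(bl, factor, axis=0), factor, axis=1)

def bit_depth_upsample(pic, bl_bits=8, el_bits=10):
    # Increase the number of values each pixel can take (simple left-shift mapping).
    return pic.astype(np.uint16) << (el_bits - bl_bits)

def inter_layer_prediction(bl):
    # BL -> EL prediction: texture upsampling followed by bit depth upsampling.
    return bit_depth_upsample(texture_upsample(bl))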

82 citations


Patent
13 Jul 2007
TL;DR: In this paper, a multi-carrier receiver capable of receiving one or multiple frequency channels simultaneously is described, which includes a single radio frequency (RF) receive chain, an analog-to-digital converter (ADC), and at least one processor.
Abstract: A multi-carrier receiver capable of receiving one or multiple frequency channels simultaneously is described. In one design, the multi-carrier receiver includes a single radio frequency (RF) receive chain, an analog-to-digital converter (ADC), and at least one processor. The RF receive chain processes a received RF signal and provides an analog baseband signal comprising multiple signals on multiple frequency channels. The ADC digitizes the analog baseband signal. The processor(s) digitally processes the samples from the ADC to obtain an input sample stream. This digital processing may include digital filtering, DC offset cancellation, I/Q mismatch compensation, coarse scaling, etc. The processor(s) digitally downconverts the input sample stream for each frequency channel to obtain a downconverted sample stream for that frequency channel. The processor(s) then digitally processes each downconverted sample stream to obtain a corresponding output sample stream. This digital processing may include digital filtering, downsampling, equalization filtering, upsampling, sample rate conversion, fine scaling, etc.

75 citations


Journal ArticleDOI
TL;DR: A region-based framework for intentionally introducing downsampling of the high resolution (HR) image sequences before compression and then utilizing super resolution (SR) techniques for generating an HR video sequence at the decoder is proposed.
Abstract: Every user of multimedia technology expects good image and video visual quality independently of the particular characteristics of the receiver or the communication networks employed. Unfortunately, due to factors like processing power limitations and channel capabilities, images or video sequences are often downsampled and/or transmitted or stored at low bitrates, resulting in a degradation of their final visual quality. In this paper, we propose a region-based framework for intentionally introducing downsampling of the high resolution (HR) image sequences before compression and then utilizing super resolution (SR) techniques for generating an HR video sequence at the decoder. Segmentation is performed at the encoder on groups of images to classify their blocks into three different types according to their motion and texture. The obtained segmentation is used to define the downsampling process at the encoder and it is encoded and provided to the decoder as side information in order to guide the SR process. All the components of the proposed framework are analyzed in detail. A particular implementation of it is described and tested experimentally. The experimental results validate the usefulness of the proposed method.

65 citations


Journal ArticleDOI
TL;DR: An alias removal technique based on an alias-free upsampling scheme is proposed, in which the total variation of the interpolant is minimized subject to the constraint that the alias-free part of the spectral components in the low resolution observation is known precisely, under an assumption of sparsity in the data.
Abstract: In this paper we study the usefulness of different local and global, learning-based, single-frame image super-resolution reconstruction techniques in handling three specific tasks, namely, de-blurring, de-noising and alias removal. We start with the global, iterative Papoulis-Gerchberg method for super-resolving a scene. Next we describe a PCA-based global method which faithfully reproduces a super-resolved image from a blurred and noisy low resolution input. We also study several multi-resolution processing schemes for super-resolution where the best edges are learned locally from an image database. We show that the PCA-based global method is efficient in handling blur and noise in the data. The local methods are adept at capturing the edges properly. However, neither the local nor the global approaches can properly handle the aliasing present in the low resolution observation. Hence we propose an alias removal technique by designing an alias-free upsampling scheme. Here the unknown high frequency components of the given partially aliased (low resolution) image are generated by minimizing the total variation of the interpolant, subject to the constraint that the alias-free part of the spectral components in the low resolution observation is known precisely, and under the assumption of sparsity in the data.

30 citations


Journal ArticleDOI
TL;DR: Some applications involving a nonlinear filter, an upsampler, and/or a downsampler are discussed to demonstrate the utility of the new approach to multirate nonlinear signal processing.
Abstract: This paper proposes a polyphase representation for nonlinear filters, especially for Volterra filters. To derive the new realizations, the well-known linear polyphase theory is extended to the nonlinear case. Both the upsampling and downsampling cases are considered. As in the linear case (finite-impulse response filters), neither the input signal nor the Volterra kernels must fulfil constraints in order to be realized in polyphase form. The computational complexity can be reduced significantly for two reasons: on the one hand, all operations are performed at the low sampling rate and, on the other hand, a new null identity allows many coefficients in the polyphase representation to be removed. Furthermore, some applications involving a nonlinear filter, an upsampler, and/or a downsampler are discussed to demonstrate the utility of the new approach to multirate nonlinear signal processing.
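The paper extends the well-known linear polyphase theory to Volterra filters; the linear factor-L interpolation it builds on can be sketched in Python/NumPy as below, where every branch convolution runs at the low input rate (the trimming of branch lengths is a simplification for illustration).

import numpy as np

def polyphase_upsample(x, h, L):
    # Factor-L interpolation computed with L polyphase branches at the low rate.
    # Equivalent to inserting L-1 zeros between samples of x and filtering with h.
    branches = [np.convolve(x, h[k::L]) for k in range(L)]   # h_k[n] = h[n*L + k]
    n = min(len(b) for b in branches)
    y = np.empty(n * L)
    for k in range(L):
        y[k::L] = branches[k][:n]      # interleave branch outputs at the high rate
    return y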

27 citations


Patent
27 Nov 2007
TL;DR: In this article, a scalable video bitstream may have an H.264/AVC compatible base layer (BL) and a scalable enhancement layer (EL), where scalability refers to color bit depth.
Abstract: A scalable video bitstream may have an H.264/AVC compatible base layer (BL) and a scalable enhancement layer (EL), where scalability refers to color bit depth. The H.264/AVC scalability extension SVC also provides other types of scalability, e.g. spatial scalability, where the number of pixels in BL and EL is different. According to the invention, BL information is upsampled (TUp, BDUp) in two logical steps in adaptive order, one being texture upsampling and the other being bit depth upsampling. Texture upsampling is a process that increases the number of pixels, and bit depth upsampling is a process that increases the number of values that each pixel can have, corresponding to the pixel's color intensity. The upsampled BL data are used to predict the collocated EL. A prediction order indication is transferred so that the decoder can upsample BL information in the same manner as the encoder, wherein the upsampling refers to spatial and bit depth characteristics.

26 citations


Patent
Lowell L. Winger
24 Sep 2007
TL;DR: In this article, a method for reducing memory utilization in a digital video codec is proposed, which generally includes the steps of generating a second reference picture by downsampling a first reference picture using a pattern.
Abstract: A method for reducing memory utilization in a digital video codec. The method generally includes the steps of (A) generating a second reference picture by downsampling a first reference picture using a pattern, wherein the pattern (i) comprises a two-dimensional grid and (ii) is unachievable by performing a vertical downsampling and separately performing a horizontal downsampling, (B) generating a third reference picture by upsampling the second reference picture and (C) processing an image in a video signal using the third reference picture.
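A quincunx (checkerboard) grid is the classic example of a two-dimensional pattern that cannot be produced by a vertical downsampling followed by a separate horizontal downsampling; the Python/NumPy sketch below uses that pattern as a stand-in for the patent's (unspecified) pattern.

import numpy as np

def quincunx_downsample(img):
    # Keep pixels where (row + col) is even and pack them to half the width,
    # halving memory; the pattern is not separable into vertical and horizontal steps.
    h, w = img.shape                    # w is assumed even
    mask = (np.add.outer(np.arange(h), np.arange(w)) % 2) == 0
    return img[mask].reshape(h, w // 2)

def quincunx_upsample(packed):
    # Scatter packed samples back onto the checkerboard and fill the holes
    # with the average of horizontal neighbours (a crude reconstruction).
    h, w2 = packed.shape
    w = w2 * 2
    out = np.zeros((h, w))
    mask = (np.add.outer(np.arange(h), np.arange(w)) % 2) == 0
    out[mask] = packed.ravel()
    left, right = np.roll(out, 1, axis=1), np.roll(out, -1, axis=1)
    out[~mask] = ((left + right) / 2.0)[~mask]
    return out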

Patent
Edward Arthur Keehr
10 Jan 2007
TL;DR: In this article, the authors describe techniques for performing ∑Δ modulation with offset in an oversampling DAC in order to reduce out-of-band quantization noise.
Abstract: Techniques for performing ∑Δ modulation with offset in order to reduce out-of-band quantization noise are described. In an exemplary oversampling DAC that implements ∑Δ modulation with offset, an interpolation filter performs upsampling and interpolation filtering on data samples to generate input samples. A summer adds an offset to the input samples to generate intermediate samples. The offset alters the characteristics of the quantization noise from a ∑Δ modulator and may be selected to obtain the desired quantization noise characteristics, to retain as much dynamic range as possible, and to simplify the removal of the offset. The ∑Δ modulator performs upsampling and noise shaping on the intermediate samples and provides output samples. An offset removal unit removes at least a portion of the offset from the output samples in the digital or analog domain. A DAC converts the output samples to analog.
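To illustrate the offset idea, here is a first-order, 1-bit sigma-delta modulator in Python with a DC offset added at its input; the modulator order, quantizer, offset value, and the downstream offset removal are simplifications rather than the patent's design.

import numpy as np

def sigma_delta_with_offset(x, offset=0.25):
    # First-order 1-bit sigma-delta modulation of x (values in [-1, 1]) with a DC offset.
    # The offset shifts the modulator's operating point, altering the character of the
    # quantization noise; it must be removed later in the digital or analog domain.
    y = np.zeros(len(x))
    integ = 0.0
    prev = 0.0
    for n, v in enumerate(x):
        integ += (v + offset) - prev            # integrate input minus fed-back output
        y[n] = 1.0 if integ >= 0.0 else -1.0    # 1-bit quantizer
        prev = y[n]
    return y

# After reconstruction filtering, the known offset can simply be subtracted from the output.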

Patent
James D. Johnston
27 Nov 2007
TL;DR: In this article, a stereo image can also be widened by receiving a stereo signal, converting the stereo signal into a sum-difference signal, applying HRTF processing to only the difference channel, upsampling the sum-difference signal, applying distortion, downsampling, and converting the sum-difference signal into a stereo signal.
Abstract: A stereo image can be widened by converting a stereo audio signal into a sum-difference audio signal, applying HRTF processing to the difference channel, and producing an output stereo audio signal. A stereo image can also be widened by receiving a stereo signal, converting the stereo signal into a sum-difference signal, applying HRTF processing to only the difference channel, upsampling the sum-difference signal, applying distortion, downsampling the sum-difference signal, and converting the sum-difference signal into a stereo signal. A system for widening a stereo image can comprise an input module configured to convert a stereo audio signal into a sum-difference audio signal, an HRTF module configured to apply HRTF processing to only the difference channel, a distortion module configured to apply a first distortion to the sum channel and a second different distortion to the difference channel, and an output module configured to produce an output stereo audio signal.
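A small Python/NumPy sketch of the sum/difference (mid/side) conversion with processing applied only to the difference channel; a plain gain stands in for the HRTF and distortion processing described in the abstract, and the function name is illustrative.

import numpy as np

def widen_stereo(left, right, side_gain=1.5):
    # Convert L/R to sum/difference, process only the difference channel,
    # and convert back to L/R.
    mid = 0.5 * (left + right)       # sum channel
    side = 0.5 * (left - right)      # difference channel
    side = side_gain * side          # placeholder for HRTF / distortion processing
    return mid + side, mid - side    # back to left, right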

Patent
05 Sep 2007
TL;DR: In this article, channel length estimation in a pilot-aided OFDM system is performed by upsampling the estimated channel carrier function vectors at the scattered pilot positions (inserting zeros in between the estimated scattered pilot positions) and filtering the upsampled vectors using a finite impulse response filter.
Abstract: A receiver for use in a pilot-aided OFDM system and a method of performing channel length estimation of a channel in a wireless communication system include using transmitted and received wireless signals to estimate a channel carrier function vector at continuous and scattered pilot positions of consecutive OFDM symbols; performing time-domain interpolation by (i) upsampling the estimated channel carrier function vectors at the scattered pilot positions by inserting zeros in between the estimated scattered pilot positions, and (ii) filtering the upsampled vectors using a finite impulse response filter comprising a filter bank comprising a plurality of filters; and mapping the channel carrier function vector to only one of the filters in the filter bank located in the finite impulse response filter, wherein the mapping causes noise reduction and enhanced channel estimation, thereby increasing a maximum Doppler frequency in the channel.
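The time-domain interpolation step described above (zero insertion between scattered-pilot estimates followed by FIR filtering) can be sketched in Python/NumPy as follows; the Hamming-windowed sinc low-pass is an illustrative filter, not the patent's filter bank.

import numpy as np

def interpolate_pilot_estimates(h_pilot, factor, taps=33):
    # Upsample channel estimates at scattered pilots by zero insertion, then
    # low-pass filter to fill in the carriers between pilots.
    up = np.zeros(len(h_pilot) * factor, dtype=complex)
    up[::factor] = h_pilot                          # zero insertion between pilot estimates
    n = np.arange(taps) - (taps - 1) / 2
    lp = np.sinc(n / factor) * np.hamming(taps)     # windowed sinc, cutoff ~1/factor
    lp *= factor / lp.sum()                         # restore unity gain after zero insertion
    return np.convolve(up, lp, mode='same')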

Patent
16 Jan 2007
TL;DR: In this paper, a method for detecting the presence of a television signal embedded in a received signal including the television signal and noise is disclosed, where either first-order or second-order cyclostationary property of the signals may be used for their detection.
Abstract: A method for detecting the presence of a television signal embedded in a received signal including the television signal and noise is disclosed. Either the first-order or second-order cyclostationary property of the signal may be used for its detection. When the first-order cyclostationary property is used, the method comprises the steps of upsampling the received signal by a factor of N, performing a synchronous averaging of a set of M segments of the upsampled received signal, performing an autocorrelation of the signal, and detecting the presence of peaks in the output of the autocorrelation function. When the second-order cyclostationary property of the signal is used, the method comprises the steps of delaying the received signal by a fixed delay (symbol time), multiplying the received signal with the delayed version, and looking for a tone (single frequency) in the output.
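A rough Python/NumPy sketch of the first-order method: upsample by N, synchronously average M segments, autocorrelate, and look for peaks. The repetition-based upsampling and the peak-to-mean decision statistic are simplifications of details the abstract leaves unspecified.

import numpy as np

def detect_first_order_cyclostationarity(rx, N, M, seg_len):
    # Assumes len(rx) >= M * seg_len so that M segments are available after upsampling.
    up = np.repeat(rx, N)                        # upsample by a factor of N (sample repetition)
    L = seg_len * N
    avg = up[:M * L].reshape(M, L).mean(axis=0)  # synchronous averaging of M segments
    avg = avg - avg.mean()
    ac = np.correlate(avg, avg, mode='full')     # autocorrelation
    ac = ac[len(ac) // 2 + 1:]                   # positive lags only
    return ac.max() / (np.abs(ac).mean() + 1e-12)  # large ratio suggests a TV signal is present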

Patent
03 Jan 2007
TL;DR: In this paper, a fullband signal is first split, with downsampling, into wide frequency subband (WFS) signals; the processed narrow frequency subband (NFS) signals are then synthesized into processed WFS signals, which are recombined into an output signal.
Abstract: A method for multifunctional processing of signals in frequency subbands performs subband decomposition and signal processing in two stages. A fullband signal is first split, with downsampling, into wide frequency subband (WFS) signals. Processing algorithms not requiring a high frequency resolution but benefiting from downsampling (such as subband acoustic echo cancellation) are applied to the WFS signals by wide subband processing blocks. Processed WFS signals are split, preferably without downsampling, into groups of narrow frequency subband (NFS) signals. The NFS signals are processed using processing algorithms (noise suppression, etc.) requiring a higher resolution. Processed NFS signals are synthesized into processed WFS signals, which are recombined into an output signal. Two-stage processing makes it possible to optimize signal processing while keeping computational costs at a low level and avoiding undesirable time delays. Preferred embodiments of the invention are intended for use as an echo canceller/noise suppressor in voice communication terminals.

Patent
25 Jan 2007
TL;DR: In this paper, SVC introduces IntraBL mode to reduce the redundancy between the reconstructed low resolution frame and the original high resolution frame by employing adaptive 2D non-separable or 1D separable upsampling filters.
Abstract: SVC introduces IntraBL mode to reduce the redundancy between the reconstructed low resolution frame and the original high resolution frame. Currently the AVC 6-tap Wiener interpolation filter is used for the upsampling. The invention provides an improvement of the coding efficiency of the enhancement layer, especially the coding efficiency of the intra coded frames, by employing adaptive 2D non-separable or 1D separable upsampling filters. Optimization techniques such as least-square fitting give the optimal solution based on the SSD (Sum of Square Differences). The filters are recorded into the bit-stream and give better prediction of the high-resolution pictures, judged by distortion.
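Least-squares (SSD-minimizing) fitting of a 1D separable upsampling filter can be sketched in Python/NumPy as below, fitting one tap vector per output phase from training pairs of low- and high-resolution rows; the tap count, window alignment, and names are illustrative assumptions, not the proposal's exact design.

import numpy as np

def fit_upsampling_filter(lr_rows, hr_rows, factor=2, taps=4):
    # For each output phase, fit FIR taps that map a low-res neighbourhood to the
    # corresponding high-res sample by minimizing the sum of squared differences.
    filters = []
    half = taps // 2
    for phase in range(factor):
        A, b = [], []
        for lr, hr in zip(lr_rows, hr_rows):
            for i in range(half, len(lr) - half):
                j = i * factor + phase            # high-res sample predicted from the window at i
                if j < len(hr):
                    A.append(lr[i - half:i - half + taps])
                    b.append(hr[j])
        coeffs, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
        filters.append(coeffs)
    return filters                                # one tap vector per output phase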

Patent
05 Oct 2007
TL;DR: In this paper, a video encoding method, in which a video signal consisting of two or more signal elements is targeted to be encoded, includes a step of setting a downsampling ratio is set for a specific signal element in a frame, in accordance with the characteristics in the frame.
Abstract: A video encoding method, in which a video signal consisting of two or more signal elements is targeted to be encoded, includes a step of setting a downsampling ratio is set for a specific signal element in a frame, in accordance with the characteristics in the frame; and a step of generating a target video signal to be encoded, by subjecting the specific signal element in the frame to downsampling in accordance with the set downsampling ratio. The frame may be divided into partial areas in accordance with localized characteristics in the frame; and a downsampling ratio for a specific signal element in these partial areas may be set in accordance with the characteristics in each partial area.

Proceedings Article
01 Sep 2007
TL;DR: This work outlines three approaches to decimate non-uniformly sampled signals, which are all based on interpolation, and indicates that the second approach is particularly useful.
Abstract: Decimating a uniformly sampled signal by a factor D involves low-pass anti-alias filtering with normalized cut-off frequency 1/D followed by picking out every Dth sample. Alternatively, decimation can be done in the frequency domain using the fast Fourier transform (FFT) algorithm, after zero-padding the signal and truncating the FFT. We outline three approaches to decimate non-uniformly sampled signals, which are all based on interpolation. The interpolation is done in different domains, and the inter-sample behavior does not need to be known. The first one interpolates the signal to a uniform sampling, after which standard decimation can be applied. The second one interpolates a continuous-time convolution integral that implements the anti-alias filter, after which every Dth sample can be picked out. The third, frequency domain approach computes an approximate Fourier transform, after which truncation and IFFT give the desired result. Simulations indicate that the second approach is particularly useful. A thorough analysis is therefore performed for this case, using the assumption that the non-uniformly distributed sampling instants are generated by a stochastic process.
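The first approach outlined above (interpolate the non-uniform samples to a uniform grid, then apply standard decimation) can be sketched in Python/NumPy as follows; linear interpolation and a Hamming-windowed sinc anti-alias filter are simplifications of the paper's choices.

import numpy as np

def decimate_nonuniform(t, x, D, fs, taps=63):
    # t: increasing, non-uniform sample instants; x: sample values; fs: uniform resampling rate.
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    x_uniform = np.interp(t_uniform, t, x)        # interpolate to uniform sampling
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / D) * np.hamming(taps)         # anti-alias low-pass, normalized cutoff 1/D
    h /= h.sum()
    y = np.convolve(x_uniform, h, mode='same')
    return t_uniform[::D], y[::D]                 # pick out every Dth sample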

Patent
Yoo-sun Bang, Edward J. Delp, Fengqing Zhu, Ho-Young Lee, Heui-Keun Choh
18 Jul 2007
TL;DR: In this article, an apparatus to enhance the resolution of a video frame is presented, which includes a frame extraction unit which extracts a key frame and one or more neighboring frames of the key frame from a video sequence; an upsampling unit which upsamples the key frame and the neighboring frames; a motion-vector search unit which calculates a motion vector of the upsampled key frame using the upsampled neighboring frames as reference frames; and a key-frame estimation unit which enhances quality of the upsampled key frame using temporal information obtained from the motion vector and spatial information in the key frame.
Abstract: Provided is a technology which can prevent deterioration of image quality when enhancing resolution of a predetermined key frame in a video sequence. Specifically, an apparatus to enhance resolution of a video frame is provided. The apparatus includes a frame extraction unit which extracts a key frame and one or more neighboring frames of the key frame from a video sequence; an upsampling unit which upsamples the key frame and the neighboring frames; a motion-vector search unit which calculates a motion vector of the upsampled key frame using the upsampled neighboring frames as reference frames; and a key-frame estimation unit which enhances quality of the upsampled key frame using temporal information obtained from the motion vector and spatial information in the key frame.

Patent
24 Jan 2007
TL;DR: In this paper, SVC introduces IntraBL mode to reduce the redundancy between the reconstructed low resolution frame and the original high resolution frame by employing adaptive 2D non-separable or 1D separable upsampling filters.
Abstract: SVC introduces IntraBL mode to reduce the redundancy between the reconstructed low resolution frame and the original high resolution frame. Currently the AVC 6-tap Wiener interpolation filter is used for the upsampling. The invention provides an improvement of the coding efficiency of the enhancement layer, especially the coding efficiency of the intra coded frames, by employing adaptive 2D non-separable or 1D separable upsampling filters. Optimization techniques such as least-square fitting give the optimal solution based on the SSD (Sum of Square Differences). The filters are recorded into the bit-stream and give better prediction of the high-resolution pictures, judged by distortion.

Journal ArticleDOI
TL;DR: Conditions under which the sampling lattice for a filter bank can be replaced without loss of perfect reconstruction are presented, generalizing the common knowledge that removing up/downsampling does not destroy perfect reconstruction.
Abstract: This paper presents conditions under which the sampling lattice for a filter bank can be replaced without loss of perfect reconstruction. This is the generalization of common knowledge that removing up/downsampling will not lose perfect reconstruction. The results provide a simple way of building oversampled filter banks. If the original filter banks are orthogonal, these oversampled banks construct tight frames of l2(Z^n) when iterated. As an example, a quincunx lattice is used to replace the rectangular one of the standard wavelet transform. This replacement leads to a tight frame that has a higher sampling in both time and frequency. The frame transform is nearly shift invariant and has intermediate scales. An application of the transform to image fusion is also presented.

Patent
03 Oct 2007
TL;DR: In this paper, waveform descriptor information is used to request data to perform unary or filtering operations, or to perform one or more processes that involve at least one additional waveform.
Abstract: Methods for processing waveforms may include using temporal descriptor information about an input waveform to selectively request a segment of the input waveform that, when processed by a filter, produces a segment of an output waveform. In an illustrative example, waveform descriptor information may be used to request data to perform unary or filtering operations, or to perform one or more processes that involve at least one additional waveform. In combination with a filter descriptor that identifies, for example, upsampling factor, delay samples, and startup samples, complex waveform operations may be processed by selectively pulling input waveform segment data to generate a segment of the output waveform. In embodiments that process sequential waveform segments, filter tap states may be initialized using state information from processing of a previous waveform segment.

Proceedings ArticleDOI
12 Apr 2007
TL;DR: A practical and reliable method for correction of EPI distortions caused by susceptibility-induced field inhomogeneity is proposed, which integrates two existing algorithms, PLACE and SPHERE, and improves the correction by embedding an upsampling scheme into the algorithms.
Abstract: A practical and reliable method for correction of EPI distortions caused by susceptibility-induced field inhomogeneity is proposed. Our method integrates two existing algorithms, PLACE and SPHERE. We further improve the correction by embedding an upsampling scheme into the algorithms. The upsampling ratio is optimized using simulations. The number of field maps to be averaged in order to reduce noise is also investigated. The proposed method was applied to images from a phantom and from diffusion tensor imaging of the brain from four normal subjects. The normalized mutual information (NMI) between reference anatomical T1-weighted images and T2-weighted images before and after correction was calculated and compared. The improved NMI was averaged slicewise across the four subjects to demonstrate the algorithm robustness. Color-coded fractional anisotropy maps and white matter fiber tracking results were also compared visually.

Journal ArticleDOI
TL;DR: The proposed method enables tradeoffs between memory utilization and computational efficiency for the construction of translation-invariant representations and is useful in resource-constrained TIDWP-based applications of digital signal compression, image segmentation and detection of transients.
Abstract: This correspondence presents a novel method for the construction of translation-invariant discrete wavelet packet (TIDWP) transforms for any decomposition level k, starting from any phase of a critically sampled discrete wavelet-packet representation of level k. The process is performed by phase shifting, i.e., the direct recovering of the wavelet coefficients omitted by the downsampling operations of each decomposition level without reconstructing the input signal. The proposed method enables tradeoffs between memory utilization and computational efficiency for the construction of translation-invariant representations. Hence, it is useful in resource-constrained TIDWP-based applications of digital signal compression, image segmentation and detection of transients.

Journal ArticleDOI
TL;DR: This paper proposes to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques and integrates two structures into the JSVM 4.0 codec with suitable modifications in the prediction modes.
Abstract: Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC) standard is the well-known Laplacian pyramid (LP). An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
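A one-dimensional Python/NumPy sketch of the first interlayer-decorrelation structure described above, in which the base layer is updated by subtracting the low-frequency component of the enhancement-layer signal; the short binomial filter and the gain handling are illustrative simplifications, not the codec's actual filters.

import numpy as np

def lp_with_base_update(x, h=np.array([0.25, 0.5, 0.25])):
    # Laplacian-pyramid analysis with the base layer updated by the low-frequency
    # part of the enhancement signal (structure 1 in the abstract).
    x = np.asarray(x, dtype=float)
    low = np.convolve(x, h, mode='same')
    base = low[::2]                                  # downsampled base-layer signal
    up = np.zeros_like(x)
    up[::2] = base
    pred = 2.0 * np.convolve(up, h, mode='same')     # upsample + interpolate prediction
    enh = x - pred                                   # enhancement-layer signal
    enh_low = np.convolve(enh, h, mode='same')[::2]  # low-frequency component of enhancement
    base_updated = base - enh_low                    # update: subtract it from the base layer
    return base_updated, enh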

Patent
07 Jun 2007
TL;DR: In this paper, the vertical LPF-vertical upsampling circuit is used to extract the harmonic component in the same direction (vertical direction) as that of a vertical contour correction signal.
Abstract: PROBLEM TO BE SOLVED: To produce a high-quality image by eliminating the aliased harmonic-frequency components generated by nonlinear processing of a contour correction signal. SOLUTION: A harmonic component in the same direction (the vertical direction) as the vertical contour correction signal to be produced is extracted in a vertical upsampling circuit 102 from an image signal input at an image signal input terminal 41. The extracted harmonic frequency components are then nonlinearly processed in nonlinear processing circuits 105 and 106. In a vertical LPF-vertical upsampling circuit 107, the nonlinearly processed harmonic frequency components are band-limited in the vertical direction, and a vertical contour correction signal is produced from the band-limited components. Finally, in an adding circuit 109, the input image signal is corrected using the vertical contour correction signal produced by the vertical LPF-vertical upsampling circuit 107.

Patent
16 Jan 2007
TL;DR: A method for detecting the presence of a television signal embedded in a received signal including the television signal and noise is disclosed, where either the first-order or second-order cyclostationary property of the signal may be used for the detection.
Abstract: A method for detecting the presence of a television signal embedded in a received signal including the television signal and noise is disclosed. Either the first-order or second-order cyclostationary property of the signal may be used for the detection. If the first-order cyclostationary property is used, the method comprises the steps of upsampling the received signal by a factor of N, performing a synchronous averaging of a set of M segments of the upsampled received signal, performing an autocorrelation of the signal, and detecting the presence of a peak at the output of the autocorrelation function. If the second-order cyclostationary property of the signal is used, the method comprises the steps of delaying the received signal by a fixed delay (symbol time), multiplying the received signal with the delayed version, and searching for a tone (single frequency) at the output.

Proceedings ArticleDOI
Y. Abe, Youji Iiguni
15 Apr 2007
TL;DR: Computer simulations show that the proposed DCT-based method for restoring a high resolution image from a down-sampled low resolution image is superior to cubic spline interpolation in high resolution image restoration performance.
Abstract: A method for restoring a high resolution image from a down-sampled low resolution image using the discrete cosine transform (DCT) is proposed. The downsampling process is modeled in matrix form, and a similarity transformation by the DCT matrix turns the downsampling matrix into a sparse matrix. The restored high resolution image can then be expressed in scalar form using this similarity transformation and efficiently computed from the low resolution image. Computer simulations show that the proposed method is superior to cubic spline interpolation in high resolution image restoration performance.

Proceedings ArticleDOI
13 Dec 2007
TL;DR: The quantitative evaluation of 24 different interpolation functions to upsample the k-space data for Fourier EMR image reconstruction shows that, at the expense of a slight increase in computing time, the reconstructed images from upsampled data are closer to the reference image with less distortion.
Abstract: Electron magnetic resonance imaging (EMRI) is an emerging non-invasive imaging technology for mapping free radicals in biological systems. Unlike MRI, it is implemented as a pure phase-phase encoding technique. The fast bio-clearance of the imaging agent and the requirement to reduce radio frequency power deposition dictate collection of reduced k-space samples, compromising the quality and resolution of the EMR images. The present work evaluates various interpolation kernels to generate larger k-space samples for image reconstruction, from the acquired reduced k-space samples. Using k-space EMR data sets, acquired for phantom as well as live mice the proposed technique is critically evaluated by computing quality metrics viz. signal-to-noise ratio (SNR), standard deviation (STD), root mean square error (RMSE), peak signal to noise ratio (PSNR), contrast to noise ratio (CNR) and Lui 's error function (F(I)). The quantitative evaluation of 24 different interpolation functions (including piecewise polynomial functions and many windowed sine functions) to upsample the k-space data for Fourier EMR image reconstruction shows that at the expense of a slight increase in computing time, the reconstructed images from upsampled data, produced using spline-sine, Welch-sine and Gaussian-sine kernels are closer to reference image with lesser distortion.