
Showing papers on "Reconstruction filter" published in 2011


Journal ArticleDOI
TL;DR: The proposed depth boundary reconstruction filter is designed considering occurrence frequency, similarity, and closeness of pixels and is useful for efficient depth coding as well as high-quality 3-D rendering.
Abstract: A depth image is 3-D information used for virtual view synthesis in a 3-D video system. In depth coding, the object boundaries are hard to compress and severely affect the rendering quality since they are sensitive to coding errors. In this paper, we propose a depth boundary reconstruction filter and utilize it as an in-loop filter to code the depth video. The proposed depth boundary reconstruction filter is designed considering the occurrence frequency, similarity, and closeness of pixels. Experimental results demonstrate that the proposed depth boundary reconstruction filter is useful for efficient depth coding as well as high-quality 3-D rendering.

93 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a class of approximate reconstruction methods from non-uniform samples based on the use of time-invariant lowpass filtering, i.e., sinc interpolation.
Abstract: It is well known that a bandlimited signal can be uniquely recovered from nonuniformly spaced samples under certain conditions on the nonuniform grid and provided that the average sampling rate meets or exceeds the Nyquist rate. However, reconstruction of the continuous-time signal from nonuniform samples is typically more difficult to implement than from uniform samples. Motivated by the fact that sinc interpolation results in perfect reconstruction for uniform sampling, we develop a class of approximate reconstruction methods from nonuniform samples based on the use of time-invariant lowpass filtering, i.e., sinc interpolation. The methods discussed consist of four cases incorporated in a single framework. The case of sub-Nyquist sampling is also discussed and nonuniform sampling is shown as a possible approach to mitigating the impact of aliasing.
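A minimal sketch of the baseline idea is given below: each nonuniform sample contributes a fixed sinc kernel scaled by the nominal sampling period T, which is exact for a uniform grid and only approximate otherwise. The function name, the jittered test grid, and the choice of T are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sinc_reconstruct(t_samples, x_samples, t_eval, T):
    """Approximate reconstruction from nonuniform samples by time-invariant
    lowpass (sinc) interpolation. Exact when t_samples lie on a uniform grid
    of period T; otherwise an approximation whose error grows with the
    deviation of the grid from uniform."""
    kernels = np.sinc((t_eval[:, None] - t_samples[None, :]) / T)
    return kernels @ x_samples

# Toy example: a bandlimited signal sampled on a jittered grid.
rng = np.random.default_rng(0)
T = 0.05                                      # nominal (average) sampling period
t_n = np.arange(0.0, 1.0, T) + rng.uniform(-0.01, 0.01, size=20)
x_n = np.cos(2 * np.pi * 3 * t_n) + 0.5 * np.sin(2 * np.pi * 7 * t_n)
t = np.linspace(0.1, 0.9, 500)
x_hat = sinc_reconstruct(t_n, x_n, t, T)      # compare against the true signal on t
```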

65 citations


Journal ArticleDOI
TL;DR: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task, and provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.
Abstract: Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequency content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M ≈ 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of ≈65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end maximizing NEQ (90 kVp). The analysis quantified fairly intuitive results, e.g., ≈0.1-0.2 mm pixel size (and a sharp reconstruction filter) optimal for high-frequency tasks (bone detail) compared to ≈0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol for 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks along with 2 x 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.
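To make the figure of merit concrete, the sketch below evaluates a task-based detectability index from Fourier-domain quantities on a 1-D radial frequency axis. The Gaussian MTF, ramp-like NPS, and task functions are invented placeholders; the paper's actual analysis uses a full 3-D cascaded-systems model with geometry and anatomical background included.

```python
import numpy as np

def detectability_index(f, mtf, nps, w_task):
    """Schematic detectability index d' computed as the integral of
    |W_task(f)|^2 * MTF(f)^2 / NPS(f) over spatial frequency (the ratio
    MTF^2/NPS plays the role of the NEQ up to a scale factor)."""
    integrand = (np.abs(w_task) ** 2) * (mtf ** 2) / nps
    return np.sqrt(np.sum(integrand) * (f[1] - f[0]))   # simple Riemann sum

# Toy comparison of a low-frequency (soft-tissue) and a high-frequency (bone) task.
f = np.linspace(0.01, 2.0, 400)            # cycles/mm, hypothetical axis
mtf = np.exp(-(f / 1.0) ** 2)              # assumed Gaussian system MTF
nps = 1e-6 * (0.2 + f)                     # assumed ramp-like CT noise spectrum
soft_task = np.exp(-(f / 0.3) ** 2)        # low-frequency task
bone_task = f * np.exp(-(f / 1.2) ** 2)    # high-frequency (edge-like) task
print(detectability_index(f, mtf, nps, soft_task),
      detectability_index(f, mtf, nps, bone_task))
```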

50 citations


Journal ArticleDOI
TL;DR: The use of larger image matrices, thinner slices, and a wide receiver bandwidth are recommended parameter adjustments when imaging patients with hardware.
Abstract: Orthopedic hardware should not be considered a contraindication to computed tomography (CT) or magnetic resonance (MR) imaging. The hardware alloy, the geometry of the hardware, and the orientation of the hardware all affect the magnitude of image artifacts. For commonly encountered alloys, the severity of image artifacts is similar for CT and MR. Cobalt chrome or stainless steel hardware produces the most artifacts; titanium hardware produces the least. In general, image artifacts are most severe adjacent to the hardware. CT image artifacts are related to incomplete X-ray projection data resulting in streaks. These can be mitigated by increasing scan technique and using a smoother reconstruction filter. Hardware with a rectangular cross-sectional shape such as a fixation plate will cause more artifacts than a radially symmetrical device such as an intramedullary nail. Image artifacts at MR are caused by the hardware magnetic susceptibility and the induction of eddy currents within the metal. A turbo spin-echo sequence yields the best results. The use of larger image matrices, thinner slices, and a wide receiver bandwidth are recommended parameter adjustments when imaging patients with hardware. This article discusses how hardware-related artifacts can be minimized by altering scan technique and image reconstruction.

44 citations


Journal ArticleDOI
TL;DR: The proposed method, which achieves similar image quality using an image filtering technique in the image space instead of a reconstruction filter in the projection space for CT imaging, has good performance and is clinically feasible in lung cancer screening.
Abstract: Purpose: While the acquisition of projection data in a computed tomography (CT) scanner is generally carried out once, the projection data is often removed from the system, making further reconstruction with a different reconstruction filter impossible. The reconstruction kernel is one of the most important parameters. To have access to all the reconstructions, either prior reconstructions with multiple kernels must be performed or the projection data must be stored. Each of these requirements would increase the burden on data archiving. This study aimed to design an effective method to achieve similar image quality using an image filtering technique in the image space, instead of a reconstruction filter in the projection space for CT imaging. The authors evaluated the clinical feasibility of the proposed method in lung cancer screening. Methods: The proposed technique is essentially the same as common image filtering, which performs processing in the spatial-frequency domain with a filter function. However, the filter function was determined based on the quantitative analysis of the point spread functions (PSFs) measured in the system. The modulation transfer functions (MTFs) were derived from the PSFs, and the ratio of the MTFs was used as the filter function. Therefore, using an image reconstructed with a kernel, an image reconstructed with a different kernel was obtained by filtering, which used the ratio of the MTFs obtained for the two kernels. The performance of the method was evaluated by using routine clinical images obtained from CT screening for lung cancer in five subjects. Results: Filtered images for all combinations of three types of reconstruction kernels ("smooth," "standard," and "sharp" kernels) showed good agreement with original reconstructed images regarded as the gold standard. On the filtered images, abnormal shadows suspected as being lung cancers were identical to those on the reconstructed images. The standard deviations (SDs) for the difference between filtered images and reconstructed images ranged from 1.9 to 23.5 Hounsfield units for all kernel combinations; these SDs were much smaller than the noise SDs in the reconstructed images. Conclusions: The proposed method has good performance and is clinically feasible in lung cancer screening. This method can be applied to images reconstructed on any scanner by measuring the PSFs in each system.
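A minimal sketch of this kind of kernel conversion is shown below: the image is transformed to the 2-D spatial-frequency domain, multiplied by the ratio of the destination and source MTFs, and transformed back. The Gaussian MTFs and the regularization floor are illustrative assumptions; in the paper the MTFs are derived from PSFs measured on the scanner for each kernel.

```python
import numpy as np

def convert_kernel(image, mtf_src, mtf_dst, eps=1e-3):
    """Emulate reconstruction with a different kernel by filtering in the
    spatial-frequency domain with the ratio of the two MTFs.
    mtf_src and mtf_dst must be sampled on the same grid as fft2(image)."""
    ratio = mtf_dst / np.maximum(mtf_src, eps)     # avoid dividing by ~0
    return np.real(np.fft.ifft2(np.fft.fft2(image) * ratio))

# Toy usage with assumed Gaussian MTFs for a "smooth" and a "sharp" kernel.
n = 256
fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
fr = np.sqrt(fx ** 2 + fy ** 2)
mtf_smooth = np.exp(-(fr / 0.15) ** 2)
mtf_sharp = np.exp(-(fr / 0.35) ** 2)
img_smooth = np.random.default_rng(1).normal(size=(n, n))  # stand-in image
img_sharp_like = convert_kernel(img_smooth, mtf_smooth, mtf_sharp)
```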

38 citations


Journal ArticleDOI
TL;DR: The conspicuity of the detection of small calcifications may be improved, under certain imaging conditions, by delivering higher dose toward the central views of a tomosynthesis scan, while also reducing the dose at peripheral angles to keep total administered radiation dose equivalent.
Abstract: Purpose: Substantial effort has been devoted to the clinical development of digital breast tomosynthesis (DBT). DBT is a three-dimensional (3D) x-ray imaging modality that reconstructs a number of thin image slices parallel to a stationary detector plane. Preliminary clinical studies have shown that the removal of overlapping breast tissue reduces image clutter and increases detectability of large, low contrast lesions. However, some studies, as well as anecdotal evidence, suggested decreased conspicuity of small, high contrast objects such as microcalcifications. Several investigators have proposed alternative imaging methods for improving microcalcification detection by delivering half of the total dose to the central view in addition to a separate DBT scan. Preliminary observer studies found possible improvement by either viewing the central projection alone or combining all views with a reconstruction algorithm. Methods: In this paper, we developed a generalized imaging theory based on a cascaded linear-system model for DBT to calculate the effect of variable angular dose distribution on the 3D modulation transfer function (MTF) and noise power spectrum (NPS). Using the ideal observer signal-to-noise ratio (SNR), d′, as a figure-of-merit (FOM) for a signal embedded in a uniform background, we compared the detectability of objects with different sizes under different imaging conditions (e.g., angular dose distribution and reconstruction filters). Experimental investigation was conducted for three different angular dose schemes (ADS) using a Siemens NovationTOMO prototype unit. Results: Our results show excellent agreement between modeled and experimental measurements of 3D NPS with different angular dose distribution. The ideal observer detectability index for the detection of Gaussian objects with different angular dose distributions depends strongly on the applied reconstruction filter as well as the imaging task. For detection tasks of small calcifications with reconstruction filters used typically in a clinical setting, variable angular dose distribution with more dose delivered to the central views may lead to higher d′ than a uniform angular dose distribution. Conclusions: The conspicuity of the detection of small calcifications may be improved, under certain imaging conditions, by delivering higher dose toward the central views of a tomosynthesis scan, while also reducing the dose at peripheral angles to keep total administered radiation dose equivalent. The degree of improvement depends on the choice of reconstruction filters as well as the imaging task. The improvement is more substantial for high-frequency imaging tasks and when an aggressive slice-thickness (ST) filter is applied to reduce the high-frequency noise at peripheral angles.

34 citations


Journal ArticleDOI
TL;DR: The potential for aliasing is quantified by a worst-case analysis that assumes the scene contains all spatial frequencies with equal amplitudes (the spurious response), while the human visual system acts as an additional reconstruction filter alongside displays and printers.
Abstract: Point-and-shoot, TV studio broadcast, and thermal infrared imaging cameras have significantly different applications. A parameter that applies to all imaging systems is Fλ/d, where F is the focal ratio, λ is the wavelength, and d is the detector size. Fλ/d uniquely defines the shape of the camera modulation transfer function. When Fλ/d<2, aliased signal corrupts the imagery. Mathematically, the worst case analysis assumes that the scene contains all spatial frequencies with equal amplitudes. This quantifies the potential for aliasing and is called the spurious response. Digital data cannot be seen; it resides in a computer. Cathode ray tubes, flat panel displays, and printers convert the data into an analog format and are called reconstruction filters. The human visual system is an additional reconstruction filter. Different displays and variable viewing distance affect the perceived image quality. Simulated imagery illustrates different Fλ/d ratios, displays, and sampling artifacts. Since the human visual system is primarily sensitive to intensity variations, aliasing (a spatial frequency phenomenon) is not considered bothersome in most situations.
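As a quick illustration of the sampling criterion above, the snippet below computes Fλ/d for a hypothetical camera and flags whether aliased signal can corrupt the imagery; the example numbers are invented.

```python
def f_lambda_over_d(f_number, wavelength_um, pixel_pitch_um):
    """Fλ/d sampling parameter: F is the focal ratio, λ the wavelength, and d
    the detector pitch (λ and d in the same units). Fλ/d >= 2 means the optics
    band-limit the scene below the detector Nyquist frequency."""
    return f_number * wavelength_um / pixel_pitch_um

# Hypothetical long-wave infrared camera.
q = f_lambda_over_d(f_number=1.2, wavelength_um=10.0, pixel_pitch_um=17.0)
print(f"F*lambda/d = {q:.2f}",
      "-> aliasing possible" if q < 2 else "-> fully sampled")
```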

21 citations


Journal ArticleDOI
TL;DR: An efficient adaptive depth‐of‐field rendering algorithm that achieves noise‐free results using significantly fewer samples and uses a novel multiscale reconstruction filter to dramatically reduce the noise in the defocused areas where the sampled radiance has high variance.
Abstract: Depth-of-field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, this effect is also extremely costly. It can take hundreds to thousands of samples to achieve noise-free results using Monte Carlo integration. This paper introduces an efficient adaptive depth-of-field rendering algorithm that achieves noise-free results using significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the adaptive sampling phase, the adaptive sample density is determined by a ‘blur-size’ map and ‘pixel-variance’ map computed in the initialization. In the image reconstruction phase, based on the blur-size map, we use a novel multiscale reconstruction filter to dramatically reduce the noise in the defocused areas where the sampled radiance has high variance. Because of the efficiency of this new filter, only a few samples are required. With the combination of the adaptive sampler and the multiscale filter, our algorithm renders near-reference quality depth-of-field images with significantly fewer samples than previous techniques.

20 citations


Journal ArticleDOI
TL;DR: The approach is based on formal reconstruction of the continuous time signal using Shannon's interpolation theorem and numerical solving of the differential equation corresponding to the analog filter to achieve accurate discretization of linear analog filters.
Abstract: This paper is concerned with accurate discretization of linear analog filters such that the frequency response of the discrete-time filter accurately matches that of the continuous-time filter. The approach is based on formal reconstruction of the continuous-time signal using Shannon's interpolation theorem and numerical solving of the differential equation corresponding to the analog filter. When the formal continuous-time system is sampled, the resulting filter reduces to a discrete linear filter, which can be realized either as a state-space model or as an infinite impulse response (IIR) filter. The proposed methodology is applied to the design of filters for parametric equalizers.

17 citations


Patent
27 Jul 2011
TL;DR: A real-time frequency tracking and harmonic measuring method for the AC sampling of a power system: a fixed-rate analog-to-digital converter feeds successive N-point segments to a fast Fourier module and a phase discriminator, whose correction frequency drives a digital signal reconstruction filter that resamples the data; after the iterations converge, the FFT output gives the harmonic components and the phase discriminator output gives the signal frequency.
Abstract: The invention discloses a real-time frequency tracking and harmonic measuring method for the AC sampling of a power system, which is characterized by comprising the steps: an AC sampling input is made to enter an analog-to-digital converter at a fixed sampling rate; the analog-to-digital converter at the fixed sampling rate respectively outputs a first segment of digital signals with the length of N points and a second segment of digital signals with the length of N points to enter a first fast Fourier module; the first fast Fourier module outputs phase angles of the highest frequency components of the first segment of digital signals with the length of N points and the second segment of digital signals with the length of N points, and sends the phase angles into a phase discriminator one after another; the phase discriminator outputs a correction frequency, and sends the correction frequency into a digital signal reconstruction filter; the digital signal reconstruction filter performs digital reconstruction sampling on the first segment of digital output signals with the length of N points, of the analog-to-digital converter at the fixed sampling rate, according to the correction frequency, and sends an obtained N-point reconstruction sampling signal into a second fast Fourier module; the phase angle output of the highest frequency component of the second fast Fourier module is sent into the phase discriminator; the third segment of digital signals with the length of N points, of the analog-to-digital converter, enter the first fast Fourier module; and the phase discriminator judges the phase angle based on accuracy requirements, and outputs a correction frequency to a digital signal reconstruction filter. After the iterations of the process reach M, the output of the second fast Fourier module is the values of respective subharmonic components of the AC sampling signal, and the output frequency of the phase discriminator is the frequency of the AC sampling signal.

13 citations


Proceedings Article
07 Apr 2011
TL;DR: It is proposed that the Exponential window provides better side-lobe roll-off ratio than Kaiser window which is very useful for some applications such as beam forming, filter design, and speech processing, and the design of digital nonrecursive Finite Impulse Response (FIR) filter by using Exponential Window is proposed.
Abstract: It has been proposed that the Exponential window provides better side-lobe roll-off ratio than Kaiser window which is very useful for some applications such as beam forming, filter design, and speech processing. In this paper the second application i.e. design of digital nonrecursive Finite Impulse Response (FIR) filter by using Exponential window is proposed. The far-end stopband attenuation is most significant parameter when the signal to be filtered has great concentration of spectral energy. In a sub-band coding, the filter is intended to separate out various frequency bands for independent processing. In case of speech, e.g. the far-end rejection of the energy in the stopband should be more so that the energy leakage from one band to another is minimum. Therefore, the filter should be designed in such a way so that it can provide better far-end stopband attenuation (amplitude of last ripple in stopband). Digital FIR filter designed by Kaiser window has a better far-end stopband attenuation than filter designed by the other previously well known adjustable windows such as Dolph-Chebyshev and Saramaki, which are special cases of Ultraspherical windows, but obtaining a digital filter which performs higher far-end stopband attenuation than Kaiser window will be useful. In this paper, the design of nonrecursive digital FIR filter has been proposed by using Exponential window. It provides better far-end stopband attenuation than filter designed by well known Kaiser window, which is the advantage of filter designed by Exponential window over filter designed by Kaiser window. The proposed schemes were simulated on commercially available software and the results show the close agreement with proposed theory.
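A sketch of windowed-sinc FIR design with an adjustable window of this kind follows. It assumes the Exponential window has the Kaiser-like form w[n] = exp(α·sqrt(1 − (2n/(N−1) − 1)²)) / exp(α), with the exponential replacing the Kaiser window's Bessel function; the tap count, cutoff, and α below are illustrative, and the far-end stopband attenuation can be read from the computed response.

```python
import numpy as np
from scipy.signal import freqz

def exponential_window(n_taps, alpha):
    """Adjustable window, assumed here to be of the form
    w[n] = exp(alpha * sqrt(1 - (2n/(N-1) - 1)^2)) / exp(alpha)."""
    x = 2.0 * np.arange(n_taps) / (n_taps - 1) - 1.0
    return np.exp(alpha * np.sqrt(1.0 - x ** 2)) / np.exp(alpha)

def fir_lowpass(n_taps, cutoff, alpha):
    """Nonrecursive (FIR) lowpass by the window method; cutoff is normalized
    to the Nyquist frequency (0..1)."""
    n = np.arange(n_taps) - (n_taps - 1) / 2.0
    ideal = cutoff * np.sinc(cutoff * n)          # ideal lowpass impulse response
    return ideal * exponential_window(n_taps, alpha)

h = fir_lowpass(n_taps=51, cutoff=0.4, alpha=6.0)
w, H = freqz(h, worN=2048)
response_db = 20 * np.log10(np.abs(H) + 1e-12)    # inspect far-end stopband level
```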

Proceedings ArticleDOI
07 Apr 2011
TL;DR: A 2nd-order anti-aliasing Sinc filter (Sinc2 filter) is proposed that provides double the rejection ratio of a Sinc filter and can be embedded in a VCO, resulting in little overhead.
Abstract: One of the recent trends in multimode multiband (MMMB) receivers is to remove the analog filter or variable-gain amplifier (VGA) in the receiver chain and employ a wide-dynamic-range ADC directly after the mixer or include the mixer in the ADC [1,2]. While such an architecture provides ease of programmability once the signals are digitized, it puts a large burden on the ADC and anti-alias filter. Hence, ADCs typically use high-performance analog circuits for wide dynamic range, even though it is difficult to implement these circuits using low-voltage nanoscale CMOS processes. A promising ADC architecture for an MMMB receiver is the VCO-based ADC, since it offers 1st-order noise-shaping from its open-loop digital-intensive nature, thus allowing high sampling rate and high SNR [3]. Furthermore, the VCO-based ADC provides an inherent anti-aliasing 1st-order Sinc filter due to the innate integrating ability of the VCO [4]. Unfortunately, for multiband receivers that do not have an RF pre-filter, a Sinc filter alone does not provide enough out-of-band rejection, and hence higher-order anti-aliasing filters are required. In order to solve this problem, we propose a 2nd-order anti-aliasing Sinc filter (Sinc2 filter) that provides double the rejection ratio of a Sinc filter. Furthermore, the proposed technique is highly digital and can be embedded in a VCO, resulting in little overhead.

Journal ArticleDOI
TL;DR: An improved half-covered helical cone-beam CT reconstruction algorithm based on a localized reconstruction filter is developed; it removes the truncation error of the half-covered helical FDK algorithm, improves the quality of the reconstructed image, suppresses noise, and reduces reconstruction time.
Abstract: Traditional helical cone-beam Computed Tomography (CT) is based on the assumption that the entire cross-section of the scanned object is covered by x-rays at each view angle. Because of the size limitation of the planar detector, traditional helical cone-beam CT scanning is restricted when the cross-section of the object is larger than the field of view (FOV) of the CT system. Helical cone-beam CT scanning based on a half-covered FOV can almost double the FOV; its mechanism is simple and its scanning efficiency is the same as that of traditional helical cone-beam CT. For reconstruction, the extended helical cone-beam FDK algorithm (called half-covered helical FDK for short) was developed, and its computational efficiency is high, but the reconstructed image suffers from truncation error. To address this problem, this paper extends the idea of 2D local reconstruction to 3D half-covered helical cone-beam CT and develops an improved half-covered helical cone-beam CT reconstruction algorithm based on a localized reconstruction filter. Experimental results indicate that the presented algorithm resolves the truncation error of the half-covered helical FDK algorithm and improves the quality of the reconstructed image. For noisy projection data, the presented algorithm suppresses noise and obtains better results. Moreover, the reconstruction time is much shorter.

Journal ArticleDOI
TL;DR: The present work deals with a 12-bit Nyquist current-steering CMOS digital-to-analog converter (DAC), an essential part of the baseband section of wireless transmitter circuits; using an oversampling ratio (OSR) for the proposed DAC avoids the use of an active analog reconstruction filter.
Abstract: The present work deals with a 12-bit Nyquist current-steering CMOS digital-to-analog converter (DAC), which is an essential part of the baseband section of wireless transmitter circuits. Using an oversampling ratio (OSR) for the proposed DAC avoids the use of an active analog reconstruction filter. The optimum segmentation (75%) has been used to obtain the best DNL and reduce glitch energy. This segmentation ratio guarantees monotonicity. Higher performance is achieved using a new 3-D thermometer decoding method which reduces the area, power consumption, and the number of control signals of the digital section. Using two digital channels in parallel helps reach a 1-GSample/s sampling frequency. Simulation results show that the spurious-free dynamic range (SFDR) at the Nyquist rate is better than 64 dB for sampling frequencies up to 1 GSample/s. The analog voltage supply is 3.3 V while the digital part of the chip operates with only 2.4 V. Total power consumption in the Nyquist-rate measurement is 144.9 mW. The chip has been processed in a standard 0.35 µm CMOS technology. The active area of the chip is 1.37 mm2.

Journal ArticleDOI
TL;DR: An analog circuit for the weighted least-squares (WLS) design of FIR Nyquist filters using a Hopfield neural network (HNN), based on formulating the error function of the FIR Nyquist filter optimization as a Lyapunov energy function in order to find the Hopfield-related parameters.

Proceedings ArticleDOI
TL;DR: The noise properties of volumes measured on last-generation MDCT scanners were studied using a local 3D NPS metric; the impact of the noise non-stationarity effect may need further investigation.
Abstract: The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64- and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard, and bone), and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. Then 2D and 3D NPS methods were computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch, and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was impacted by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric; however, the impact of the noise non-stationarity effect may need further investigation.
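A minimal sketch of the local 2-D NPS estimate underlying such measurements is given below, assuming a stack of same-sized ROIs extracted at one location from repeated scans of a uniform water phantom; higher-order detrending, radial averaging, and the 3-D extension are omitted.

```python
import numpy as np

def local_nps_2d(rois, pixel_size_mm):
    """Ensemble-averaged local 2-D noise power spectrum.

    rois: array of shape (n_rois, ny, nx), all taken at the same location.
    Returns the NPS in HU^2 * mm^2 on the 2-D DFT frequency grid."""
    n_rois, ny, nx = rois.shape
    nps = np.zeros((ny, nx))
    for roi in rois:
        detrended = roi - roi.mean()              # zero-order detrending
        nps += np.abs(np.fft.fft2(detrended)) ** 2
    nps /= n_rois
    return nps * (pixel_size_mm ** 2) / (nx * ny)

# Example with synthetic white noise standing in for water-phantom ROIs.
rois = np.random.default_rng(2).normal(0.0, 10.0, size=(64, 128, 128))
nps = local_nps_2d(rois, pixel_size_mm=0.5)
```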

Proceedings ArticleDOI
TL;DR: This paper proposes a novel approach to filter evolution in which, instead of using a wavelet filter or evolving a second filter for reconstruction, the reconstruction filter is computed as the biorthogonal inverse of the evolved compression filter.
Abstract: Wavelets provide an attractive method for efficient image compression. For transmission across noisy or bandwidth limited channels, a signal may be subjected to quantization in which the signal is transcribed onto a reduced alphabet in order to save bandwidth. Unfortunately, the performance of the discrete wavelet transform (DWT) degrades at increasing levels of quantization. In recent years, evolutionary algorithms (EAs) have been employed to optimize wavelet-inspired transform filters to improve compression performance in the presence of quantization. Wavelet filters consist of a pair of real-valued coefficient sets; one set represents the compression filter while the other set defines the image reconstruction filter. The reconstruction filter is defined as the biorthogonal inverse of the compression filter. Previous research focused upon two approaches to filter optimization. In one approach, the original wavelet filter is used for image compression while the reconstruction filter is evolved by an EA. In the second approach, both the compression and reconstruction filters are evolved. In both cases, the filters are not biorthogonally related to one another. We propose a novel approach to filter evolution. The EA optimizes a compression filter. Rather than using a wavelet filter or evolving a second filter for reconstruction, the reconstruction filter is computed as the biorthogonal inverse of the evolved compression filter. The resulting filter pair retains some of the mathematical properties of wavelets. This paper compares this new approach to existing filter optimization approaches to determine its suitability for the optimization of image filters appropriate for defense applications of image processing.

Journal ArticleDOI
TL;DR: A 12-bit Nyquist current-steering digital-to-analog converter (DAC) is implemented using TSMC 0.35 μm standard CMOS process technology and the optimum segmentation has been used to get the best DNL and reduce glitch energy.
Abstract: In this paper a 12-bit Nyquist current-steering digital-to-analog converter (DAC) is implemented using TSMC 0.35 μm standard CMOS process technology. The proposed DAC is an essential part of the baseband section of wireless transmitter circuits. Using an oversampling ratio (OSR) for it avoids the use of an active analog reconstruction filter. The optimum segmentation (75%) has been used to obtain the best DNL and reduce glitch energy. This segmentation ratio guarantees monotonicity. Higher performance is achieved using a new 3D thermometer decoding method which reduces the area, power consumption, and the number of control signals of the digital section. Using two digital channels in parallel helps reach a 1 GHz sampling frequency. Simulations indicate that the DAC has an accuracy better than 10.7 bits for upcoming higher data rate standards (IEEE 802.16 and 802.11n), and a spurious-free dynamic range (SFDR) higher than 64 dB over the whole Nyquist frequency band. The post-layout four-corner Monte-Carlo simulated INL is better than 0.74 LSB while the simulated DNL is better than 0.49 LSB. The analog voltage supply is 3.3 V while the digital part of the chip operates with only 2.4 V. Total power consumption in the Nyquist-rate measurement is 144.9 mW. The active area of the chip is 1.37 mm2.

Patent
10 Jun 2011
TL;DR: A filter that may be used as a reconstruction filter with a built-in balun is proposed, in which balanced signals within the filter bandwidth are transmitted from the first and second input nodes to the output node and balanced signals outside the filter bandwidth are substantially shorted to ground.
Abstract: The present invention provides a filter that may be used as a reconstruction filter with a built-in balun. One embodiment of the filter includes first and second input nodes for receiving balanced radiofrequency signals and an inductive-capacitive (LC) circuit coupled between the first and second input nodes and first and second intermediate nodes. This embodiment of the filter also includes a coupling circuit that couples the first and second intermediate nodes to an output node. Balanced signals within a filter bandwidth are transmitted from the first and second input nodes to the output node and balanced signals outside the filter bandwidth are substantially shorted to ground.

Patent
11 May 2011
TL;DR: In this paper, a self-adaptive calibrating device for mismatch errors of a time-interleaved analog-to-digital converter, which comprises an M passage Time-Interleaved Analog-To-Digital converter (TIADC), a signal recombiner, a digital reference-signal memory, a simulated reference-Signal generator, a selfadaptive reconstruction filter group, a clock generation circuit and a subtractor, is presented.
Abstract: The utility model relates to a self-adaptive calibrating device for mismatch errors of a time-interleaved analog-to-digital converter, which comprises an M passage time-interleaved analog-to-digital converter (TIADC), a signal recombiner, a digital reference-signal memory, a simulated reference-signal generator, a self-adaptive reconstruction filter group, a clock generation circuit and a subtractor; each passage is calibrated through signals after passage recombination instead of being individually calibrated on each passage, and the problem that time errors can not be calibrated due to aliasing when the bandwidth of an input signal is larger than the nyquist frequency of each passage analog-to-digital converter (ADC) is solved. A self-adaptive reconstruction filter is split into a plurality of subfilters for parallel working, the requirements for the processing speed of a self-adaptive calibrating filter can not be improved while the effect of signal recombination is achieved, and the hardware realizability of the structure disclosed by the invention is ensured. A digital reference signal is built-in and is used as an optimization target for self-adaptive calibration without preliminarily measuring or computing the size of passage mismatch errors or distinguishing the sources of the errors, and various mismatch errors can be calibrated.

Proceedings ArticleDOI
05 Jun 2011
TL;DR: In this article, a reconstruction filter for a class-S power amplifier is presented, and a comparison of the doubly and singly terminated filters in terms of a symbolic algorithm is presented.
Abstract: This paper presents the design of a reconstruction filter for a class-S power amplifier. Previously, a few doubly terminated filters for a switched-mode amplifier system were presented, but measurement results showed significant ripples up to 3 dB in output power. This effect can be explained by the current mode final stage acting like a current source with infinite internal impedance, thus demanding a singly terminated filter. This paper presents a comprehensive comparison of the doubly and singly terminated filters in terms of a symbolic algorithm and, moreover, proves mathematically that only singly terminated filters owing to their constant input impedance can fulfil the requirements of current-mode switching amplifiers.

Patent
10 Feb 2011
TL;DR: In this paper, the amplitude of the received signal is measured in spatial coordinates on each position of the antenna and the discrete Fourier transformation is calculated for the set of amplitude values and then processed using a reconstruction filter.
Abstract: FIELD: physics. SUBSTANCE: when monitoring a surface or air environment with a scanning antenna, the antenna is successively moved in azimuth and elevation angle by the value of a discretisation element, line-by-line over the scanned area. The amplitude of the received signal is measured in spatial coordinates at each position of the antenna. The discrete Fourier transformation is calculated for the set of amplitude values and then processed using a reconstruction filter. One-dimensional discrete Fourier transformation is used in the form of a two-step procedure: first, the image is processed with a one-dimensional reconstruction filter along the rows of the radar image and then along the columns. EFFECT: fast reconstruction of radar images owing to performance of the reconstruction operations as a two-step procedure, taking into account the separability of the antenna directional pattern characteristics over the two variables.
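The two-step procedure described above amounts to running a 1-D reconstruction filter along every row of the radar image and then a second 1-D filter along every column, which is valid when the antenna pattern separates over the two angular variables. A generic sketch follows; the filter taps are placeholders, since the patent derives them from the antenna directional pattern.

```python
import numpy as np

def separable_reconstruction(image, h_row, h_col):
    """Separable 2-D reconstruction filtering: filter along rows (azimuth),
    then along columns (elevation)."""
    rows_done = np.apply_along_axis(np.convolve, 1, image, h_row, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows_done, h_col, mode="same")

# Placeholder taps: short smoothing kernels standing in for the true
# pattern-dependent reconstruction filters.
h_row = np.array([0.25, 0.5, 0.25])
h_col = np.array([0.25, 0.5, 0.25])
radar_image = np.random.default_rng(3).normal(size=(180, 360))  # elevation x azimuth
reconstructed = separable_reconstruction(radar_image, h_row, h_col)
```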

Proceedings ArticleDOI
01 Oct 2011
TL;DR: The proposed design is a memoryless system, is able to overcome the intersampling behavior when applied in the closed-loop system, and does not require the input and output of the reconstruction filter to be of different dimensions.
Abstract: A realizable reconstruction filter is presented. Since the standard reconstruction filters are not realizable, our aim is to produce a reconstruction filter as an impulsive system that contains the dynamics of the reference signal. The proposed design is a memoryless system and is able to overcome the intersampling behavior when applied in the closed-loop system. Compared to other techniques, such as the generalized sampled-data hold devices in use, this design does not require the input and output of the reconstruction filter to be of different dimensions.

Dissertation
01 Jan 2011
TL;DR: This thesis treats the use of single-bit quantization in conjunction with a method called Noise-Shaped Coding (NSC), as an enabler for these parameters, foremost in terms of energy efficiency.
Abstract: Three parameters that drive the research and development of future RF transmitter technologies for high speed wireless communication today are energy efficiency, flexibility and reduction of the physical footprint. This thesis treats the use of single-bit quantization in conjunction with a method called Noise-Shaped Coding (NSC), as an enabler for these parameters, foremost in terms of energy efficiency. The first part of the thesis provides a short introduction to the common Radio Frequency Power Amplifier (RFPA) power efficiency enhancement techniques. The pulsed RF transmitter is introduced in which the RFPA is used as a switch, modulated by a single-bit quantized signal which allows it to operate solely at its two most efficient states. The second part of the thesis provides an introduction to the concept of NSC and the underlying idea of how high signal quality can be achieved with one bit quantization of the signal amplitude. A particular method of implementing NSC, namely the ΣΔ-modulator, is introduced and some common methods for design and analysis are discussed. An optimization-based approach to ΣΔ-modulator design is proposed and benchmarked against conventional methods in terms of its ability to shape the power spectral density of the quantization noise according to a given reconstruction filter response, minimizing the reconstructed error metric. The third and final part of the thesis focuses specifically on the application of ΣΔ-modulation in a pulsed RF transmitter context. The concepts of band-pass and baseband ΣΔ-modulation are introduced. A few important challenges related to the use of ΣΔ-modulation in a pulsed RF-transmitter context are identified. A ΣΔ-modulator topology which handles a complex input signal is investigated in great detail and advantages compared to conventional methods for using ΣΔ-modulation are unveiled by means of theoretical analysis and simulations. A method for suppressing the quantization noise within a frequency band surrounding the modulated RF carrier, enabling the use of more wideband reconstruction filter and moderate pulse-rates, is also presented. A detailed theoretical analysis reveals how optimized Noise-Shaped Coding, as provided by the optimization method introduced in the second part, can be deployed in order to improve the system performance. Finally, the method is validated by experimental measurements on two different high efficiency RFPAs at 1 and 3.5 GHz respectively, showing promising results.
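For readers unfamiliar with noise-shaped coding, the sketch below shows the simplest building block the thesis starts from: a first-order, single-bit ΣΔ modulator that quantizes an oversampled signal to ±1 while pushing the quantization error toward high frequencies, where the reconstruction filter can remove it. It is a textbook illustration, not one of the optimized modulators developed in the thesis.

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order single-bit sigma-delta modulator: the difference between
    the input and the previous output bit is accumulated, and the sign of the
    accumulator is emitted, shaping the quantization noise to high frequencies."""
    y = np.empty_like(x)
    acc = 0.0
    for i, sample in enumerate(x):
        acc += sample - (y[i - 1] if i else 0.0)
        y[i] = 1.0 if acc >= 0.0 else -1.0
    return y

# Oversampled sine: the single-bit stream follows the input on average, and a
# lowpass reconstruction filter recovers it.
n, osr = 4096, 64
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * t / (8 * osr))
bits = sigma_delta_1bit(x)
```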

Proceedings ArticleDOI
13 Oct 2011
TL;DR: A dedicated low-pass/band-pass ΣΔ modulation scheme is proposed that limits the spreading of low-frequency quantization noise by the ADC under test, which tends to obstruct the test measurements at high frequencies.
Abstract: Application of the ΣΔ modulation technique to the on-chip spectral test of high-speed A/D converters is presented. The harmonic HD2/HD3 and intermodulation IM2/IM3 test is obtained with a one-bit ΣΔ sequence stored in a cyclic memory or generated on line, and applied to an ADC under test through a driving buffer and a simple reconstruction filter. To achieve a dynamic range (DR) suitable for high-performance spectral measurements, a frequency plan is used taking into account the type of ΣΔ modulation (low-pass and band-pass), including the FFT processing gain. Higher-order modulation schemes are avoided to manage the ΣΔ quantization noise without resorting to a more complicated filter. For spectral measurements up to the Nyquist frequency, we propose a dedicated low-pass/band-pass ΣΔ modulation scheme that limits the spreading of low-frequency quantization noise by the ADC under test, which tends to obstruct the test measurements at high frequencies. A correction technique for NRTZ encoding suitable for ADCs with very high clock frequencies is put in perspective. The presented technique is illustrated by simulation examples of a Nyquist-rate ADC under test.

Journal ArticleDOI
TL;DR: This work presents a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework, which incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks.
Abstract: We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of the mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, is an improvement on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared to that of most existing reconstruction algorithms, in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation image in the mechanical linear scanning framework is also reported.
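The classical ingredient of the CSG filter is the Savitzky-Golay least-squares smoother. The sketch below only applies that standard 1-D filter along the slice (elevational) direction of a parallel, evenly spaced slice stack using SciPy; the cyclic indicator function and the joint smoothing-and-interpolation of the CSG filter are not reproduced here.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_slice_stack(volume, window=7, polyorder=2):
    """Savitzky-Golay smoothing along the slice direction of a stack of
    parallel, evenly spaced B-scan slices.

    volume: array of shape (n_slices, ny, nx); window must not exceed n_slices."""
    return savgol_filter(volume, window_length=window, polyorder=polyorder, axis=0)

# Example with a synthetic noisy slice stack.
volume = np.random.default_rng(4).normal(size=(40, 128, 128))
smoothed = smooth_slice_stack(volume)
```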

Proceedings ArticleDOI
15 Apr 2011
TL;DR: This paper presents an efficient way to implement an interpolation filter in an 18-bit Σ-Δ DAC, based on the multiterm interpolation principle, to oversample an audio signal with a sample rate of 48 kHz and a resolution of 18 bits.
Abstract: This paper presents an efficient way to implement an interpolation filter in an 18-bit Σ-Δ DAC. The filter, based on the multiterm interpolation principle, oversamples (128×) an audio signal with a sample rate of 48 kHz and a resolution of 18 bits. After system-level simulation in MATLAB, the filter was implemented in Verilog HDL, and the circuit was designed and simulated with Quartus II. The simulation results show that it achieves satisfactory performance.

Yang, Chen, Xiangdong, Wang, Jin, Jeong, Jechang 
01 Nov 2011
TL;DR: In this paper, a high-performance de-interlacing algorithm based on region-adaptive interpolation filter design was proposed, which uses shorter, more correlated filters for the smooth and regular edge regions of interlaced video when converting to progressive scanning.
Abstract: In order to convert interlaced video into the progressive scanning format, this paper proposes a high-performance de-interlacing algorithm based on region-adaptive interpolation filter design. Specifically, the 6-tap filter is used only for the most complex regions, while for smooth and regular edge regions a more correlated filter, such as a 2-tap or 4-tap filter, is used instead. According to the experimental results, the proposed algorithm achieves noticeably good performance.

01 Dec 2011
TL;DR: The localized nature of wavelets is used to capture a priori knowledge of these local features; a wavelet-based representation of the time-varying TRC is proposed, and the usefulness of this approach is demonstrated through the reconstruction of a time-sequentially sampled TRC.
Abstract: The tone reproduction curve (TRC) is a representation of a printer's input-output mapping for each primary color. It is a two-dimensional signal with both temporal and tonal characteristics. With an appropriate signal model that represents the TRC, the entire time-varying TRC can be reconstructed from measurements of a few time-sequentially scheduled print patches. The reconstructed TRC can then be used as a feedback signal for control systems to compensate for any TRC variations. In the past, signal models based on the Fourier basis and principal components analysis have been proposed for the design and analysis of the sampling sequence and the reconstruction filter. However, the tone reproduction has localized features, in that its variation is smaller at some tones than at others. These features have not been exploited in previous signal models but can potentially improve the effectiveness of sampling and reconstruction algorithms. In this paper, the localized nature of wavelets is used to capture a priori knowledge of these local features, and a wavelet-based representation of the time-varying TRC is proposed. The wavelet-based model is obtained from a track of experimentally obtained time-varying TRC data. The usefulness of this approach is demonstrated through the reconstruction of a time-sequentially sampled TRC.
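The localization property can be illustrated with an off-the-shelf wavelet transform: decompose a TRC along the tone axis, keep only the largest coefficients, and reconstruct. The sketch below uses PyWavelets with an arbitrary db4 wavelet and a synthetic curve; it demonstrates the sparse, localized representation only, not the paper's signal model or time-sequential sampling schedule.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic TRC: a smooth input-output tone mapping with a localized bump,
# standing in for a measured curve over 256 input tone levels.
tones = np.linspace(0.0, 1.0, 256)
trc = tones ** 0.8 + 0.03 * np.exp(-((tones - 0.7) / 0.05) ** 2)

# Wavelet decomposition along the tone axis, then hard-threshold so that only
# the largest ~10% of coefficients survive.
coeffs = pywt.wavedec(trc, "db4", level=4)
threshold = np.quantile(np.abs(np.concatenate(coeffs)), 0.9)
kept = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
trc_sparse = pywt.waverec(kept, "db4")[: trc.size]

print("max approximation error:", np.max(np.abs(trc_sparse - trc)))
```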

Proceedings ArticleDOI
20 Jul 2011
TL;DR: A Digital Down Converter (DDC) is presented based on square wave local oscillators facilitating a multiplier-less implementation with no constraints on the sampling frequency, and a pseudo multi-rate SINC low pass filter which exhibits better performance compared to the standard multi-stage sinc filter.
Abstract: A Digital Down Converter (DDC) is presented based on square wave local oscillators facilitating a multiplier-less implementation with no constraints on the sampling frequency. The DDC includes a pseudo multi-rate SINC low pass filter which exhibits better performance compared to the standard multi-stage sinc filter. The pseudo multi-rate SINC filter can be implemented with a unique cascaded integrator comb (CIC) filter to obtain the same improved performance. A 90nm CMOS design with 8 bit inputs clocked at 400MHz demonstrates a flexible, very low power/size DDC architecture for single chip digital receiver applications.
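A standard cascaded integrator-comb decimator, which realizes the N-th-order sinc response mentioned above, can be sketched in a few lines; the order, decimation ratio, and integer-input assumption below are illustrative rather than taken from the paper's design.

```python
import numpy as np

def cic_decimate(x, decimation, order=2):
    """Order-N CIC decimator: N integrators at the input rate, decimation by
    R, then N comb (first-difference) stages at the output rate. Equivalent
    to an N-th-order sinc (moving-average) lowpass with DC gain R**N.
    Inputs are assumed to be integers (e.g. 8-bit ADC samples)."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(order):                  # integrator stages
        y = np.cumsum(y)
    y = y[::decimation]                     # rate reduction
    for _ in range(order):                  # comb stages
        y = np.diff(y, prepend=0)
    return y

# Example: decimate a quantized tone by 16 with a 2nd-order CIC.
n = np.arange(4096)
tone = np.round(127 * np.sin(2 * np.pi * n / 256)).astype(np.int64)
baseband = cic_decimate(tone, decimation=16, order=2)
```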