
Showing papers on "Sampling (signal processing) published in 2011"


Posted Content
TL;DR: In this paper, the authors consider the case of 1-bit CS measurements and provide a lower bound on the best achievable reconstruction error, and show that the same class of matrices that provide almost optimal noiseless performance also enable a robust mapping.
Abstract: The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that i.i.d. random Gaussian matrices describe measurement mappings achieving, with overwhelming probability, nearly optimal error decay. Next, we consider reconstruction robustness to measurement errors and noise and introduce the Binary $\epsilon$-Stable Embedding (B$\epsilon$SE) property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provide almost optimal noiseless performance also enable such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
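The 1-bit acquisition model and the BIHT recovery loop described above can be sketched numerically. This is a minimal illustration, not the authors' code: the step size, iteration count, and problem sizes are arbitrary assumptions, and `hard_threshold` is a hypothetical helper for the K-term projection.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 500, 5              # dimension, 1-bit measurements, sparsity

# k-sparse unit-norm signal (1-bit measurements destroy amplitude anyway)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)

Phi = rng.standard_normal((m, n))  # i.i.d. Gaussian measurement matrix
y = np.sign(Phi @ x)               # 1-bit measurements: only the sign survives

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

a = np.zeros(n)
tau = 1.0 / m                      # illustrative step size
for _ in range(100):               # BIHT: gradient step on sign consistency,
    grad = Phi.T @ (y - np.sign(Phi @ a))   # then project onto k-sparse set
    a = hard_threshold(a + tau * grad, k)
a /= np.linalg.norm(a)             # signs carry no amplitude: renormalize

err = np.linalg.norm(x - a)
```

With these (assumed) sizes the recovered support and direction typically match the true signal closely, consistent with the near-optimal error decay claimed for Gaussian maps.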

461 citations


Journal ArticleDOI
TL;DR: This is the first reported hardware that performs sub-Nyquist sampling and reconstruction of wideband signals, and the circuit realises the recently proposed modulated wideband converter, which is a flexible platform for sampling signals according to their actual bandwidth occupation.
Abstract: The authors present a sub-Nyquist analog-to-digital converter of wideband inputs. The circuit realises the recently proposed modulated wideband converter, which is a flexible platform for sampling signals according to their actual bandwidth occupation. The theoretical work enables, for example, a sub-Nyquist wideband communication receiver, which has no prior information on the transmitter carrier positions. The present design supports input signals with 2 GHz Nyquist rate and 120 MHz spectrum occupancy, with arbitrary transmission frequencies. The sampling rate is as low as 280 MHz. To the best of the authors' knowledge, this is the first reported hardware that performs sub-Nyquist sampling and reconstruction of wideband signals. The authors describe the various circuit design considerations, with an emphasis on the unconventional challenges the converter introduces: mixing a signal with a multiple set of sinusoids, rather than a single local oscillator, and generation of highly transient periodic waveforms, with transient intervals on the order of the Nyquist rate. Hardware experiments validate the design and demonstrate sub-Nyquist sampling and signal reconstruction.
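The mixing principle behind the modulated wideband converter can be illustrated with a discrete-time toy model: a tone far above the reduced sampling rate is multiplied by a periodic ±1 sequence, lowpass filtered, and decimated, after which it reappears at baseband. All parameters below (sequence period, filter length, frequencies) are illustrative assumptions, not the hardware's values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4096, 32                  # Nyquist-rate samples; mixing-sequence period
fp = 1.0 / M                     # harmonic spacing of the periodic sequence

# input: a tone near the 5th harmonic of the mixing sequence, i.e. far
# above the reduced rate used after the mixer
k0, delta = 5, 0.005             # harmonic index, offset from the harmonic
f0 = k0 * fp + delta
n = np.arange(N)
x = np.cos(2 * np.pi * f0 * n)

# periodic +/-1 mixing sequence (one branch of the mixer bank)
chips = rng.choice([-1.0, 1.0], M)
p = np.tile(chips, N // M)

# lowpass filter (windowed sinc, cutoff at half the reduced rate), decimate
taps = np.arange(257) - 128
h = fp * np.sinc(fp * taps) * np.hamming(257)
y = np.convolve(x * p, h, mode="same")[::M]    # sub-Nyquist output stream

# the tone reappears at baseband: the peak of |Y(f)| sits near delta*M
spec = np.abs(np.fft.rfft(y))
peak = np.fft.rfftfreq(len(y))[np.argmax(spec)]
```

The mixing sequence's harmonics alias each spectral slice down to baseband, which is why a receiver needs no prior knowledge of the carrier position.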

418 citations


Journal ArticleDOI
TL;DR: A new procedure is designed that reconstructs the signal exactly with a number of measurements approaching the theoretical limit for large systems.
Abstract: Compressed sensing is triggering a major evolution in signal acquisition. It consists of sampling a sparse signal at low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure that is able to reconstruct the signal exactly with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired by the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases.

243 citations


Journal ArticleDOI
TL;DR: The design and implementation of an analog signal processor (ASP) ASIC for portable ECG monitoring systems and the proposed continuous-time electrode-tissue impedance monitoring circuit enables the monitoring of the signal integrity.
Abstract: This paper presents the design and implementation of an analog signal processor (ASP) ASIC for portable ECG monitoring systems. The ASP ASIC performs four major functionalities: 1) ECG signal extraction with high resolution, 2) ECG signal feature extraction, 3) adaptive sampling ADC for the compression of ECG signals, and 4) continuous-time electrode-tissue impedance monitoring for signal integrity monitoring. These functionalities enable the development of wireless ECG monitoring systems that have significantly lower power consumption yet are more capable than their predecessors. The ASP has been implemented in a 0.5 μm CMOS process and consumes 30 μW from a 2 V supply. The noise density of the ECG readout channel is 85 nV/√Hz and the CMRR is better than 105 dB. The adaptive sampling ADC is capable of compressing the ECG data by a factor of 7, and the heterodyne chopper readout extracts the features of the ECG signals. The combination of these two features leads to a factor-of-4 reduction in the power consumption of a wireless ECG monitoring system. Furthermore, the proposed continuous-time impedance monitoring circuit enables the monitoring of the signal integrity.

227 citations


Journal ArticleDOI
TL;DR: This paper proposes a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation, and shows that the pulse stream can be recovered from the proposed minimal-rate samples using standard tools taken from spectral estimation in a stable way even at high rates of innovation.
Abstract: We consider minimal-rate sampling schemes for infinite streams of delayed and weighted versions of a known pulse shape. The minimal sampling rate for these parametric signals is referred to as the rate of innovation and is equal to the number of degrees of freedom per unit time. Although sampling of infinite pulse streams was treated in previous works, either the rate of innovation was not achieved, or the pulse shape was limited to Diracs. In this paper we propose a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation. Our approach is based on modulating the input signal with a set of properly chosen waveforms, followed by a bank of integrators. This architecture is motivated by recent work on sub-Nyquist sampling of multiband signals. We show that the pulse stream can be recovered from the proposed minimal-rate samples using standard tools taken from spectral estimation in a stable way even at high rates of innovation. In addition, we address practical implementation issues, such as reduction of hardware complexity and immunity to failure in the sampling channels. The resulting scheme is flexible and exhibits better noise robustness than previous approaches.
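One convenient choice of modulating waveforms is complex exponentials, in which case each integrator branch outputs a Fourier coefficient of the pulse stream; the closed form below follows from the shift property. This is a numerical sketch under that assumption (Gaussian pulses, dense grid emulating analog integration); recovering the delays from the coefficients would then use the spectral-estimation tools mentioned in the abstract (e.g., annihilating-filter methods), omitted here.

```python
import numpy as np

T = 1.0                      # observation period
sigma = 0.01                 # known pulse width (assumption)
fs = 20000                   # dense grid only to emulate the analog integrators
t = np.arange(0, T, 1 / fs)

delays = np.array([0.2, 0.55, 0.8])    # unknown in practice
amps = np.array([1.0, -0.7, 0.4])
x = sum(a * np.exp(-(t - d) ** 2 / (2 * sigma ** 2))
        for a, d in zip(amps, delays))

# modulate with e^{-j 2 pi k t / T} and integrate over [0, T)
K = 10
c = np.array([np.sum(x * np.exp(-2j * np.pi * k * t / T)) / fs
              for k in range(-K, K + 1)])

# closed form: c_k = G(k/T) * sum_l a_l exp(-2j pi k t_l / T),
# with G the Fourier transform of the Gaussian pulse
ks = np.arange(-K, K + 1)
G = sigma * np.sqrt(2 * np.pi) * np.exp(-2 * (np.pi * sigma * ks / T) ** 2)
c_ref = G * (amps * np.exp(-2j * np.pi * np.outer(ks, delays) / T)).sum(axis=1)
```

The 2L+1 = 7 lowest coefficients already determine the three delay/amplitude pairs, which is what "rate of innovation" sampling exploits.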

225 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose to estimate the blending noise and subtract it from the blended data via iterative least-squares deblending; the estimate does not need to be perfect because the procedure is iterative.
Abstract: Seismic acquisition is a trade-off between economy and quality. In conventional acquisition the time intervals between successive records are large enough to avoid interference in time. To obtain an efficient survey, the spatial source sampling is therefore often (too) large. However, in blending, or simultaneous acquisition, temporal overlap between shot records is allowed. This additional degree of freedom in survey design significantly improves the quality or the economics or both. Deblending is the procedure of recovering the data as if they were acquired in the conventional, unblended way. A simple least-squares procedure, however, does not remove the interference due to other sources, or blending noise. Fortunately, the character of this noise is different in different domains, e.g., it is coherent in the common source domain, but incoherent in the common receiver domain. This property is used to obtain a considerable improvement. We propose to estimate the blending noise and subtract it from the blended data. The estimate does not need to be perfect because our procedure is iterative. Starting with the least-squares deblended data, the estimate of the blending noise is obtained via the following steps: sort the data to a domain where the blending noise is incoherent; apply a noise suppression filter; apply a threshold to remove the remaining noise, ending up with (part of) the signal; compute an estimate of the blending noise from this signal. At each iteration, the threshold can be lowered and more of the signal is recovered. Promising results were obtained with a simple implementation of this method for both impulsive and vibratory sources. Undoubtedly, in the future algorithms will be developed for the direct processing of blended data. However, currently a high-quality deblending procedure is an important step allowing the application of contemporary processing flows.
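The iterative estimate-and-subtract loop spelled out above can be mimicked in a small 1-D analogue, where the "signal" is coherent (sparse in the Fourier domain) and the "blending noise" is incoherent (spiky in time). This is a toy sketch of the algorithmic structure only; the thresholds, schedule, and domains are illustrative assumptions, not the authors' seismic implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512
t = np.arange(N)

# "shot record": coherent events (sparse in the Fourier domain here)
s = np.cos(2 * np.pi * 5 * t / N) + 0.6 * np.sin(2 * np.pi * 12 * t / N)

# blending noise from interfering sources: incoherent (spiky) in this domain
b = np.zeros(N)
spikes = rng.choice(N, 20, replace=False)
b[spikes] = rng.uniform(4.0, 8.0, 20) * rng.choice([-1.0, 1.0], 20)
d = s + b                                   # blended data

s_est = np.zeros(N)
for lam in np.linspace(2.2, 0.2, 15):       # threshold lowered each iteration
    # estimate the blending noise in the domain where it is incoherent
    r = d - s_est
    b_est = np.where(np.abs(r) > lam, r, 0.0)
    # subtract it, then keep only the coherent part of what remains
    S = np.fft.rfft(d - b_est)
    S = np.where(np.abs(S) > lam * np.sqrt(N), S, 0.0)
    s_est = np.fft.irfft(S, N)

err = np.linalg.norm(s - s_est) / np.linalg.norm(s)
```

Lowering the threshold each pass recovers progressively weaker signal components, exactly the behaviour the abstract describes.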

206 citations


Journal ArticleDOI
TL;DR: This paper introduces a new algorithm - restricted-step shrinkage (RSS) - to recover sparse signals from 1-bit CS measurements, which has provable convergence guarantees, is about an order of magnitude faster, and achieves higher average recovery signal-to-noise ratio.
Abstract: The recently emerged compressive sensing (CS) framework aims to acquire signals at reduced sample rates compared to the classical Shannon-Nyquist rate. To date, the CS theory has assumed primarily real-valued measurements; it has recently been demonstrated that accurate and stable signal acquisition is still possible even when each measurement is quantized to just a single bit. This property enables the design of simplified CS acquisition hardware based around a simple sign comparator rather than a more complex analog-to-digital converter; moreover, it ensures robustness to gross nonlinearities applied to the measurements. In this paper we introduce a new algorithm - restricted-step shrinkage (RSS) - to recover sparse signals from 1-bit CS measurements. In contrast to previous algorithms for 1-bit CS, RSS has provable convergence guarantees, is about an order of magnitude faster, and achieves higher average recovery signal-to-noise ratio. RSS is similar in spirit to trust-region methods for nonconvex optimization on the unit sphere, which are relatively unexplored in signal processing and hence of independent interest.

191 citations


Proceedings ArticleDOI
22 May 2011
TL;DR: Simulation results suggest that compressed sensing should be considered a plausible methodology for ECG compression, since the quasi-periodic structure of ECG signals implies a high fraction of common support between consecutive heartbeats.
Abstract: Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals that enables sampling rates significantly below the classical Nyquist rate. Based on the fact that electrocardiogram (ECG) signals can be approximated by a linear combination of a few coefficients taken from a Wavelet basis, we propose a compressed sensing-based approach for ECG signal compression. ECG signals generally show redundancy between adjacent heartbeats due to their quasi-periodic structure. We show that this redundancy implies a high fraction of common support between consecutive heartbeats. The contribution of this paper lies in the use of distributed compressed sensing to exploit the common support between samples of jointly sparse adjacent beats. Simulation results suggest that compressed sensing should be considered as a plausible methodology for ECG compression.

154 citations


Patent
05 Jan 2011
TL;DR: In this article, a photo-sensitive detection region is proposed for converting an electromagnetic wave field into an electric signal of flowing charges, and a separated demodulation region with at least two output nodes (D10, D20) and means (IG10, DG10, IG20, DG20) for sampling the charge-current signal at at least 2 different time intervals within a modulation period.
Abstract: A new pixel in semiconductor technology comprises a photo-sensitive detection region (1) for converting an electromagnetic wave field into an electric signal of flowing charges, a separated demodulation region (2) with at least two output nodes (D10, D20) and means (IG10, DG10, IG20, DG20) for sampling the charge-current signal at at least two different time intervals within a modulation period. A contact node (K2) links the detection region (1) to the demodulation region (2). A drift field accomplishes the transfer of the electric signal of flowing charges from the detection region to the contact node. The electric signal of flowing charges is then transferred from the contact node (K2) during each of the two time intervals to the two output nodes allocated to the respective time interval. The separation of the demodulation and the detection regions provides a pixel capable of demodulating an electromagnetic wave field at high speed and with high sensitivity.

150 citations


Journal ArticleDOI
TL;DR: A joint TOA/AOA estimator is proposed for UWB indoor ranging under LOS operating conditions; as expected, the estimation accuracy degrades as the pulse bandwidth is reduced.
Abstract: A joint TOA/AOA estimator is proposed for UWB indoor ranging under LOS operating conditions. The estimator employs an array of antennas, each feeding a demodulator consisting of a squarer and a low-pass filter. Signal samples taken at Nyquist rate at the filter outputs are processed to produce TOA and AOA estimates. Performance is assessed with transmitted pulses with a bandwidth of either 1.5 GHz (type-1 pulses) or 0.5 GHz (type-2 pulses), which correspond to sampling rates of 3 GHz and 1 GHz, respectively. As expected, the estimation accuracy degrades as the pulse bandwidth is reduced. Ranging errors of about 10 cm and angular errors of about 1° are achieved at SNRs of practical interest with type-1 pulses and two antennas at a distance of 50 cm. With type-2 pulses the errors increase to 35 cm and 3°. Comparisons are made with other schemes discussed in the literature.

149 citations


Journal ArticleDOI
TL;DR: A novel frequency-domain adaptive equalizer in digital coherent optical receivers, which can reduce computational complexity of the conventional time-domain Adaptive Equalizer based on finite-impulse-response (FIR) filters.
Abstract: We propose a novel frequency-domain adaptive equalizer in digital coherent optical receivers, which can reduce computational complexity of the conventional time-domain adaptive equalizer based on finite-impulse-response (FIR) filters. The proposed equalizer can operate on the input sequence sampled by free-running analog-to-digital converters (ADCs) at the rate of two samples per symbol; therefore, the arbitrary initial sampling phase of ADCs can be adjusted so that the best symbol-spaced sequence is produced. The equalizer can also be configured in the butterfly structure, which enables demultiplexing of polarization tributaries apart from equalization of linear transmission impairments. The performance of the proposed equalization scheme is verified by 40-Gbits/s dual-polarization quadrature phase-shift keying (QPSK) transmission experiments.

Journal ArticleDOI
TL;DR: An 8-channel 6-bit 16-GS/s time-interleaved analog-to-digital converter (TI ADC) was fabricated using a 65 nm CMOS technology.
Abstract: An 8-channel 6-bit 16-GS/s time-interleaved analog-to-digital converter (TI ADC) was fabricated using a 65 nm CMOS technology. Each analog-to-digital channel is a 6-bit flash ADC. Its comparators are latches without preamplifiers. The input-referred offsets of the latches are reduced by digital offset calibration. The TI ADC includes a multi-phase clock generator that uses a delay-locked loop to generate 8 sampling clocks from a reference clock of the same frequency. The uniformity of the sampling intervals is ensured by digital timing-skew calibration. Both the offset calibration and the timing-skew calibration run continuously in the background. At 16 GS/s sampling rate, this ADC chip achieves a signal-to-distortion-plus-noise ratio (SNDR) of 30.8 dB. The chip consumes 435 mW from a 1.5 V supply. The ADC active area is 0.93 × 1.58 mm².

Journal ArticleDOI
TL;DR: Experimental demonstrations for end-to-end real-time optical orthogonal frequency division multiplexing (OOFDM) transceivers incorporating three widely adopted adaptive loading techniques, namely, power loading (PL), bit loading (BL), and bit-and-power loading (BPL) indicate that PL is a preferred choice for cost-effective OOFDM transceiver design.
Abstract: Experimental demonstrations are reported for end-to-end real-time optical orthogonal frequency division multiplexing (OOFDM) transceivers incorporating three widely adopted adaptive loading techniques, namely, power loading (PL), bit loading (BL), and bit-and-power loading (BPL). In directly modulated distributed-feedback (DFB) laser-based, intensity-modulation, and direct-detection (IMDD) transmission systems consisting of up to 35-km single-mode fibers (SMFs), extensive experimental comparisons between these adaptive loading techniques are made in terms of maximum achievable signal bit rate, optical power budget, and digital signal processing (DSP) resource usage. It is shown that BPL is capable of supporting end-to-end real-time OOFDM transmission of 11.75 Gb/s over 25-km SMFs in the aforementioned systems at sampling speeds as low as 4 GS/s. In addition, experimental measurements also show that BPL (PL) offers the highest (lowest) signal bit rate, and their optical power budgets are similar. The observed signal bit rate difference between BPL and PL is almost independent of sampling speed and transmission distance. All the aforementioned key features agree very well with numerical simulations. On the other hand, BPL-consumed DSP resources are approximately three times higher than those required by PL. The results indicate that PL is a preferred choice for cost-effective OOFDM transceiver design.
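Adaptive bit loading of the kind compared above can be sketched with the standard SNR-gap rule, b_k = ⌊log₂(1 + SNR_k/Γ)⌋: each subcarrier carries as many bits as its measured SNR supports. The SNR profile and gap value below are invented for illustration; they are not measurements from the reported transceiver.

```python
import numpy as np

# hypothetical per-subcarrier SNRs (dB) rolling off across an IMDD channel
snr_db = np.linspace(28, 6, 32)          # 32 data-carrying subcarriers
snr = 10 ** (snr_db / 10)

gamma_db = 9.8                           # SNR gap for the target BER (assumption)
gamma = 10 ** (gamma_db / 10)

# bit loading: highest constellation each subcarrier's SNR supports
bits = np.floor(np.log2(1 + snr / gamma)).astype(int)
total_bits_per_symbol = int(bits.sum())
```

Power loading would instead keep the constellation fixed and redistribute transmit power across subcarriers; the abstract's point is that the simpler PL achieves nearly the same bit rate at roughly a third of the DSP cost of BPL.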

Journal ArticleDOI
TL;DR: In this paper, the authors developed a class of approximate reconstruction methods from non-uniform samples based on the use of time-invariant lowpass filtering, i.e., sinc interpolation.
Abstract: It is well known that a bandlimited signal can be uniquely recovered from nonuniformly spaced samples under certain conditions on the nonuniform grid and provided that the average sampling rate meets or exceeds the Nyquist rate. However, reconstruction of the continuous-time signal from nonuniform samples is typically more difficult to implement than from uniform samples. Motivated by the fact that sinc interpolation results in perfect reconstruction for uniform sampling, we develop a class of approximate reconstruction methods from nonuniform samples based on the use of time-invariant lowpass filtering, i.e., sinc interpolation. The methods discussed consist of four cases incorporated in a single framework. The case of sub-Nyquist sampling is also discussed and nonuniform sampling is shown as a possible approach to mitigating the impact of aliasing.
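One of the approximate reconstructions discussed (sinc kernels anchored at the actual, nonuniform sample instants) is easy to sketch. The jitter level and tone frequency below are arbitrary assumptions; the point is that a time-invariant lowpass kernel already gives a good approximation on a mildly nonuniform grid.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1.0                     # average sampling interval (Nyquist for B = 1/2T)
f0 = 0.2                    # test tone well inside the band
N = 200
t_n = np.arange(N) * T + rng.uniform(-0.05, 0.05, N)   # mildly jittered grid
x_n = np.cos(2 * np.pi * f0 * t_n)

# approximate reconstruction: feed the nonuniform samples to an ordinary
# time-invariant sinc interpolator anchored at the actual sample instants
t = np.linspace(20.0, N * T - 20.0, 500)   # evaluate away from the edges
x_hat = np.array([np.sum(x_n * np.sinc((ti - t_n) / T)) for ti in t])

x_true = np.cos(2 * np.pi * f0 * t)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Exact recovery from nonuniform samples would require a time-varying (grid-dependent) filter; the residual error here is the price of using the simple time-invariant one.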

Proceedings ArticleDOI
01 Dec 2011
TL;DR: This overview presents recent new methods in wideband signal processing using optical delay lines, including state-of-the-art results, and capabilities for high-resolution processing.
Abstract: Photonic signal processing offers the prospect of realising extremely high multi-GHz sampling frequencies, overcoming inherent electronic limitations. These processors provide new capabilities for realising high time-bandwidth operation and high-resolution performance. In-fibre signal processors are inherently compatible with fibre optic microwave systems, and can provide connectivity with in-built signal conditioning. Recent new methods in wideband signal processors including high-resolution, arbitrary response, low noise, programmable processing, beamforming, and ultra-wide continuous filter tunability, are presented.

Proceedings ArticleDOI
05 Jun 2011
TL;DR: Simulation shows that the proposed spectrum sensing algorithms can substantially reduce the sampling rate with little performance loss, and are robust to the unpredictable noise uncertainty in wireless networks.
Abstract: Dynamic spectrum access has emerged as a promising paradigm for improving the spectrum utilization efficiency of wireless networks. To enable this new paradigm, fast and accurate spectrum sensing has to be performed over very wide bandwidth in noisy channel environments under energy constraints. Cyclic feature based sensing approach works well under noise uncertainty, but requires very high sampling rates in the wideband regime, and hence incurs high energy consumption and hardware costs. This paper aims to alleviate the sampling requirements of cyclic detectors by utilizing the compressive sampling principle and exploiting the sparsity structure in the two-dimensional cyclic spectrum domain. A technical challenge lies in the fact that the compressive samples collected in the time domain do not have a direct linear relationship with the two-dimensional sparse cyclic spectrum of interest, which is a major departure from existing sparse signal recovery techniques for linear sampling systems. This paper solves this challenge by reformulating the vectorized cyclic spectrum into a linear form of the autocorrelation of the compressed samples. Further, based on the recovered cyclic spectrum, new cyclic-based detectors are developed to estimate the spectrum occupancy when multiple sources are present. Simulation shows that the proposed spectrum sensing algorithms can substantially reduce the sampling rate with little performance loss, and are robust to the unpredictable noise uncertainty in wireless networks.

Journal ArticleDOI
TL;DR: The temporal redundancy in videos is explored, and a block-based adaptive framework for compressed video sampling is proposed that can increase the frame rate by up to six times depending on the scene complexity and the video quality constraint.
Abstract: Compressed sensing is a novel technology to acquire and reconstruct sparse signals below the Nyquist rate. It has great potential in image and video acquisition to explore data redundancy and to significantly reduce the number of collected data. In this paper, we explore the temporal redundancy in videos, and propose a block-based adaptive framework for compressed video sampling. To address independent movement of different regions in a video, the proposed framework classifies blocks into different types depending on their inter-frame correlation, and adjusts the sampling and reconstruction strategy accordingly. Our framework also considers the diverse texture complexity of different regions, and adaptively adjusts the number of measurements collected for each region. The proposed framework also includes a frame rate selection module that selects the maximum achievable frame rate from a list of candidate frame rates under the hardware sampling rate and the perceptual quality constraints. Our simulation results show that compared to traditional raster scan, the proposed framework can increase the frame rate by up to six times depending on the scene complexity and the video quality constraint. We also observe a 1.5-7.8 dB gain in the average peak signal-to-noise ratio of the reconstructed frames when compared with prior works on compressed video sensing.
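The block classification step can be sketched as a correlation test against the co-located block of the previous frame, with the measurement budget adjusted by class. The threshold, block size, and budgets below are invented stand-ins for the paper's adaptive rules.

```python
import numpy as np

rng = np.random.default_rng(3)
B = 16                                    # block size (assumption)

def classify_block(prev_blk, cur_blk, thresh=0.95):
    """Label a block by its correlation with the co-located block in the
    previous frame (a stand-in for the paper's classifier)."""
    a = prev_blk.ravel() - prev_blk.mean()
    b = cur_blk.ravel() - cur_blk.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    rho = (a @ b) / denom if denom > 0 else 1.0
    return "static" if rho > thresh else "dynamic"

static_blk = rng.random((B, B))
moving_prev, moving_cur = rng.random((B, B)), rng.random((B, B))

label1 = classify_block(static_blk, static_blk + 0.001 * rng.random((B, B)))
label2 = classify_block(moving_prev, moving_cur)

# static blocks get few measurements (the previous reconstruction is reused);
# dynamic blocks get a full budget (illustrative numbers)
budget = {"static": 32, "dynamic": 192}
phi = rng.standard_normal((B * B, B * B))       # random CS measurement matrix
y = phi[:budget[label2]] @ moving_cur.ravel()   # dynamic block: full budget
```

Spending measurements only where blocks change is what buys the frame-rate increase reported above.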

Journal ArticleDOI
TL;DR: This letter proposes a solution to the PSBS problem based on a periodic sampling procedure and a simple least squares (LS) reconstruction method, and derives the lowest possible average sampling rate, which is much lower than the Nyquist rate of the signal.
Abstract: Power spectrum blind sampling (PSBS) consists of a sampling procedure and a reconstruction method that is capable of perfectly reconstructing the unknown power spectrum of a signal from the obtained samples. In this letter, we propose a solution to the PSBS problem based on a periodic sampling procedure and a simple least squares (LS) reconstruction method. For this PSBS technique, we derive the lowest possible average sampling rate, which is much lower than the Nyquist rate of the signal. Note the difference with spectrum blind sampling (SBS) where the goal is to perfectly reconstruct the spectrum and not the power spectrum of the signal, in which case sub-Nyquist rate sampling is only possible if the spectrum is sparse. In the current work, we can perform sub-Nyquist rate sampling without making any constraints on the power spectrum, because we try to reconstruct the power spectrum and not the spectrum. In many applications, such as spectrum sensing for cognitive radio, the power spectrum is of interest and estimating the spectrum is basically overkill.
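For a wide-sense-stationary input, the idea can be sketched with a periodic (multicoset) pattern whose offset differences cover every lag: the LS estimate of each autocorrelation lag then reduces to averaging over the retained sample pairs, and the power spectrum follows by a Fourier transform. The period, offsets, and test process below are illustrative assumptions, not the letter's specific design.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 21000
# WSS input with known statistics: white noise through a short FIR filter
h = np.array([1.0, 0.8, 0.4])
x = np.convolve(rng.standard_normal(N + 2), h, mode="valid")

# periodic sampling: keep offsets {0, 1, 3} of every period of 7 samples
# (their pairwise differences cover all lags mod 7, so no sparsity is needed)
keep = np.zeros(N, dtype=bool)
for off in (0, 1, 3):
    keep[off::7] = True
idx = np.flatnonzero(keep)

# per lag, average over all retained sample pairs with that separation
L = 6
r = np.zeros(L)
for k in range(L):
    i = idx[np.isin(idx + k, idx)]      # both ends of the pair retained
    r[k] = np.mean(x[i] * x[i + k])

r_true = np.correlate(h, h, mode="full")[len(h) - 1:]   # [1.8, 1.12, 0.4]

# power spectrum as the DTFT of the estimated autocorrelation
f = np.linspace(0, 0.5, 64)
psd = r[0] + 2 * np.sum(r[1:, None] * np.cos(2 * np.pi * np.outer(np.arange(1, L), f)),
                        axis=0)
```

Here only 3 of every 7 Nyquist-rate samples are kept, yet every autocorrelation lag (and hence the full power spectrum) is recoverable, with no sparsity assumption.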

Journal ArticleDOI
TL;DR: This work analyzes the clock-recovery process based on adaptive finite-impulse-response (FIR) filtering in digital coherent optical receivers to achieve an asynchronous clock mode of operation of digital coherent receivers with block processing of the symbol sequence.
Abstract: We analyze the clock-recovery process based on adaptive finite-impulse-response (FIR) filtering in digital coherent optical receivers. When the clock frequency is synchronized between the transmitter and the receiver, only five taps in half-symbol-spaced FIR filters can adjust the sampling phase of analog-to-digital conversion optimally, enabling bit-error rate performance independent of the initial sampling phase. Even if the clock frequency is not synchronized between them, the clock-frequency misalignment can be adjusted within an appropriate block interval; thus, we can achieve an asynchronous clock mode of operation of digital coherent receivers with block processing of the symbol sequence.
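The sampling-phase adjustment can be sketched with a 5-tap fractional-delay FIR filter. Here the taps come from a fixed windowed-sinc design rather than the adaptive update used in the receiver, so this is only a stand-in showing what such a short filter can do.

```python
import numpy as np

def frac_delay_fir(mu, ntaps=5):
    """5-tap windowed-sinc filter shifting the sampling phase by mu samples
    (|mu| < 0.5); a fixed stand-in for the receiver's adaptive taps."""
    n = np.arange(ntaps) - (ntaps - 1) // 2
    h = np.sinc(n - mu) * np.hamming(ntaps)
    return h / np.sum(h)               # unit gain at DC

f0 = 0.05                              # tone, slow relative to the sampling rate
n = np.arange(400)
x = np.cos(2 * np.pi * f0 * n)

mu = 0.3                               # desired sampling-phase correction (samples)
y = np.convolve(x, frac_delay_fir(mu), mode="same")

# away from the edges, y[n] closely tracks x(n - mu)
err = np.max(np.abs(y[20:-20] - np.cos(2 * np.pi * f0 * (n[20:-20] - mu))))
```

Because any phase offset in the ADC clock maps to such a fractional delay, a short adaptive FIR can absorb it, which is the mechanism behind the initial-phase-independent BER reported above.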

Journal ArticleDOI
TL;DR: The experimental results show that the presented NLMIC-POCS algorithm can significantly improve the image quality of the sparse angular CT reconstruction in suppressing streak artifacts and preserving the edges of the image.

Patent
11 Feb 2011
TL;DR: In this paper, an electrically passive device and method for in-situ acoustic emission, and/or releasing, sampling, and measuring of a fluid or various material(s) is provided.
Abstract: An electrically passive device and method for in-situ acoustic emission, and/or releasing, sampling, and/or measuring of a fluid or various material(s) is provided. The device may provide a robust timing mechanism to release, sample, and/or perform measurements on a predefined schedule, and, in various embodiments, emits an acoustic signal sequence(s) that may be used for triangulation of the device position within, for example, a hydrocarbon reservoir or a living body.

Journal ArticleDOI
TL;DR: A digital sensorless adaptive voltage positioning (SLAVP) control method is presented in this paper in order to realize AVP control without the need for load or inductor current sensing and high-resolution high-speed analog-to-digital converter sampling.
Abstract: A digital sensorless adaptive voltage positioning (SLAVP) control method is presented in this paper in order to realize AVP control without the need for load or inductor current sensing and high-resolution high-speed analog-to-digital converter sampling. The SLAVP control law utilizes the readily available error signal of the conventional voltage-mode closed-loop compensated controller or, in other words, the duty cycle of a dc-dc buck converter in order to realize AVP control. The elimination of the need for high-speed and accurate sensing and sampling of currents using the proposed SLAVP control reduces the size and cost of the digital controller, reduces the power losses associated with current sensing and sampling, and simplifies hardware design. Moreover, the SLAVP control can easily be added to controllers with conventional voltage-mode closed-loop control with minimum size and cost increase, and therefore, SLAVP control can be used for a wide range of applications and not only for high-end applications like powering microprocessors. The theoretical SLAVP control law derivation and analysis and SLAVP controller architecture are presented in this paper and verified by experimental results obtained from a proof-of-concept experimental prototype.

Proceedings ArticleDOI
01 Oct 2011
TL;DR: This paper investigates power efficiency aspects of a recently proposed adaptive nonuniform sampling scheme, Time-Stampless Adaptive Nonuniform Sampling (TANS), which can potentially enable the development of new applications that require continuous signals sensing.
Abstract: Nowadays, since more and more battery-operated devices are involved in applications with continuous sensing, the development of efficient sampling mechanisms is an important issue for these applications. In this paper, we investigate power-efficiency aspects of a recently proposed adaptive nonuniform sampling scheme. This sampling scheme minimizes the energy consumption of the sampling process, which is approximately proportional to the sampling rate. The main characteristics of our method are, first, that sampling times do not need to be transmitted, since the receiver can compute them by using a function of previously taken samples, and second, that only innovative samples are taken from the signal of interest, reducing the sampling rate and therefore the energy consumption. We call this scheme Time-Stampless Adaptive Nonuniform Sampling (TANS). TANS can be used in several scenarios, showing promising results in terms of energy savings, and can potentially enable the development of new applications that require continuous signal sensing, such as applications related to health monitoring, location tracking, and entertainment.
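A TANS-style loop can be sketched as follows: the next sampling instant is a deterministic function of the last few samples, so the receiver can recompute the schedule without any timestamps being sent. The curvature heuristic, constants, and test signal below are invented for illustration; the actual TANS sampling functions differ.

```python
import numpy as np

def next_interval(times, vals, t_min=0.01, t_max=0.5, c=0.5):
    """Next sampling step from the last three samples only, so the receiver
    can recompute it. Heuristic: step shrinks where local curvature is large
    (c, t_min, t_max are illustrative constants)."""
    (t1, t2, t3), (y1, y2, y3) = times[-3:], vals[-3:]
    d1 = (y2 - y1) / (t2 - t1)
    d2 = (y3 - y2) / (t3 - t2)
    curv = abs(d2 - d1) / (t3 - t1)          # divided-difference curvature
    return float(np.clip(c / np.sqrt(curv + 1e-9), t_min, t_max))

f = lambda t: np.sin(2 * np.pi * t) + 0.3 * np.sin(2 * np.pi * 5 * t)

times = [0.0, 0.01, 0.02]                    # small bootstrap at the start
vals = [f(t) for t in times]
while times[-1] < 4.0:
    dt = next_interval(times, vals)          # both ends can compute this
    times.append(times[-1] + dt)
    vals.append(f(times[-1]))

n_uniform = int(4.0 / 0.01)                  # samples a uniform t_min grid takes
```

Since sampling energy is roughly proportional to the number of samples, the gap between `len(times)` and `n_uniform` is the energy saving.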

Proceedings ArticleDOI
07 Apr 2011
TL;DR: This work focuses on terahertz- and mm-wave-based imagers, which have recently gained interest for security screening and bio-imaging applications, and on the challenge of routing large numbers of analog signals between each pixel in the array and the sampling ADC.
Abstract: Terahertz- and mm-Wave-based imagers have recently gained interest for imaging in security screening and bio-imaging applications [1,2]. For these applications to become practical, the core pixel circuits employed in an imaging array must meet challenging constraints that originate from the system-level design and the need to construct large array structures on-chip. The most critical of these constraints is that the pixel must consume very low power, as an array inflates the total power by n², where n² is the total number of pixels in a square (n × n) array. Pixel circuit area is the second major constraint, as the single-pixel area will also be inflated by n². This area constraint is critical because a cost-effective pixel array should ideally fit on a wafer to facilitate monolithic fabrication and avoid the need for complicated mechanical assembly of multiple array sections. A third system-level constraint, similar to that experienced in CMOS image sensor arrays, is the challenge of routing large numbers of analog signals between each pixel in the array and the sampling ADC.

Journal ArticleDOI
TL;DR: A new fast terahertz reflection tomography is proposed using block-based compressed sensing that directly reduces the number of sampling points in the spatial domain without modulation or transformation of the signal.
Abstract: In this paper, a new fast terahertz reflection tomography is proposed using block-based compressed sensing. Since measuring the time-domain signal on a two-dimensional grid requires excessive time, reducing measurement time is highly desirable in terahertz tomography. The proposed technique directly reduces the number of sampling points in the spatial domain without modulation or transformation of the signal. Compressed sensing in the spatial domain suggests a block-based reconstruction, which substantially reduces computational time without degrading the image quality. An overlap-average method is proposed to remove the block artifacts in block-based compressed sensing. Fast terahertz reflection tomography using block-based compressed sensing is demonstrated with an integrated circuit and a parched anchovy as examples.
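The overlap-average step can be illustrated independently of the terahertz reconstruction itself: blocks are recovered with overlap and averaged where they meet, which suppresses block-boundary artifacts. The per-block reconstructor below is a stand-in callback; in the paper it would be the block-based compressed-sensing solver, and the block/step sizes are illustrative assumptions.

```python
import numpy as np

def overlap_average(recon_block, image_shape, block=8, step=4):
    """Overlap-average sketch: reconstruct overlapping blocks independently,
    accumulate them, and divide by the per-pixel overlap count, so block
    boundaries are smoothed by averaging rather than left as hard seams."""
    h, w = image_shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for i in range(0, h - block + 1, step):
        for j in range(0, w - block + 1, step):
            acc[i:i + block, j:j + block] += recon_block(i, j)
            cnt[i:i + block, j:j + block] += 1
    return acc / cnt   # every pixel is covered when step divides the extent

# stand-in reconstructor that returns the true block exactly
rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
out = overlap_average(lambda i, j: img[i:i + 8, j:j + 8], (16, 16))
```

With a perfect per-block reconstructor the average reproduces the image; with a noisy one, averaging the overlapped estimates attenuates the block artifact.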

Journal ArticleDOI
TL;DR: In this article, a wavelet-modulation technique for single-phase voltage-source (VS) inverters is proposed, which is realized through constructing a nondyadic-type multiresolution analysis, which supports sampling of a sinusoidal reference-modulating signal in a non-uniform recurrent manner, then reconstructing it using the inverter-switching actions.
Abstract: This paper presents the real-time implementation and experimental performance of the wavelet-modulation technique for single-phase voltage-source (VS) inverters. The wavelet-modulation technique is realized by constructing a nondyadic-type multiresolution analysis, which supports sampling a sinusoidal reference-modulating signal in a nonuniform recurrent manner and then reconstructing it using the inverter-switching actions. The required nonuniform recurrent sampling is carried out using dilated and translated sets of wavelet basis functions generated by the scale-based linearly combined scaling function. The reconstruction of the sampled signal is accomplished using dilated and translated sets of wavelet basis functions generated by the scale-based linearly combined synthesis scaling function. The dilated and translated sets of wavelet basis functions used in the reconstruction are employed as switching signals to activate the inverter-switching elements. The wavelet-modulation technique is implemented in real time using a digital signal processing board to generate switching pulses for a single-phase VS H-bridge (four-pulse) inverter. The experimental performance of the single-phase inverter operated by the wavelet-modulation technique is investigated while supplying linear, dynamic, and nonlinear loads at different frequencies. Experimental test results show that high magnitudes of the fundamental components and significantly reduced harmonic content of the inverter outputs can be achieved using the wavelet-modulation technique. The efficacy of the developed modulation technique is further demonstrated through performance comparisons with the pulsewidth- and random-pulsewidth-modulation techniques under similar loading conditions.

Proceedings ArticleDOI
03 May 2011
TL;DR: A wideband spectrum sensing method is presented that utilizes a sub-Nyquist sampling scheme to bring substantial savings in terms of the sampling rate and an expression for the detection threshold as a function of sampling parameters and noise power is provided.
Abstract: Spectrum sensing plays an important role for systems and devices, such as cognitive radios and networks, that need to be aware of available frequency bands. A major challenge in this area is the high sampling rate required to sense a wideband signal. In this paper, a wideband spectrum sensing method is presented that utilizes a sub-Nyquist sampling scheme to bring substantial savings in terms of the sampling rate. The correlation matrix of a finite number of noisy samples is computed and used by a nonlinear least-squares (NLLS) estimator to detect the occupied and vacant channels of the spectrum. We provide an expression for the detection threshold as a function of the sampling parameters and the noise power. Also, a sequential forward selection algorithm is presented to find the occupied channels with low complexity. The method can be applied to both correlated and uncorrelated wideband multichannel signals. A comparison with conventional energy detection using Nyquist-rate sampling shows that the proposed scheme can yield similar performance for SNR above 4 dB with a sampling rate smaller by a factor of 3.
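The Nyquist-rate energy-detection baseline the authors compare against can be sketched as follows. The per-channel threshold here uses a standard Gaussian approximation to the noise-only band energy; the band plan, false-alarm rate, and threshold form are illustrative assumptions, not the detection threshold derived in the paper.

```python
import numpy as np
from statistics import NormalDist

def energy_detect(x, fs, band_edges, noise_power, p_fa=0.05):
    """Per-channel energy detector (Nyquist-rate baseline).

    A channel is declared occupied when its in-band energy exceeds a
    threshold set from the known noise power: mean noise energy plus a
    Gaussian-approximation margin chosen for false-alarm rate p_fa."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(spec) ** 2 / len(x)          # per-bin energy, mean ~ noise_power
    z = NormalDist().inv_cdf(1.0 - p_fa)      # one-sided false-alarm quantile
    occupied = []
    for lo, hi in band_edges:
        idx = (freqs >= lo) & (freqs < hi)
        n = int(idx.sum())
        energy = float(psd[idx].sum())
        thresh = noise_power * n + z * noise_power * np.sqrt(n)
        occupied.append(bool(energy > thresh))
    return occupied

# deterministic check: a strong 150 Hz tone occupies only the 100-200 Hz band
fs, n = 1000.0, 4096
tone = np.sin(2 * np.pi * 150.0 * np.arange(n) / fs)
occ = energy_detect(tone, fs, [(0, 100), (100, 200), (200, 300)], noise_power=1.0)
```

The sub-Nyquist scheme in the paper replaces this full-rate front end with a correlation-matrix NLLS estimator, trading a factor-of-3 lower sampling rate for a few dB of SNR.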

Patent
27 May 2011
TL;DR: In this article, the authors propose a method of attenuating a composite audio signal consisting of sampling the audio signal, transforming it using a fast Fourier transform algorithm, and comparing the transformed signal with a predefined impulse control threshold profile representing the target maximum amplitude for each frequency component, to produce a configuring signal representative of the difference between the broadcast signal and the profile.
Abstract: FIELD: physics. SUBSTANCE: the method of attenuating a composite audio signal comprises the steps of: sampling the composite audio signal; transforming the sampled audio signal, using a fast Fourier transform algorithm, to produce a signal representative of the amplitude of the component frequencies of the audio signal; comparing the transformed audio signal with a predefined impulse control threshold profile, representing the target maximum amplitude for each frequency component of the audio signal, to produce a configuring signal representative of the difference between the broadcast signal and the profile; using the configuring signal to automatically configure, in real time, a Finite Impulse Response (FIR) filter so that it attenuates the amplitude of the transformed audio signal in frequency bands centred on the frequencies at which the target threshold is exceeded; and outputting the attenuated audio signal for consumer exposure. EFFECT: increased audio quality by compensating for the level of background noise. 26 cl, 9 dwg
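The measure-compare-attenuate loop of the claim can be sketched in simplified form. A real implementation would translate the per-band excess into FIR filter coefficients; here, purely for illustration, the attenuation is applied directly to the FFT bins, and the constant threshold profile is an assumption.

```python
import numpy as np

def attenuate_to_profile(x, fs, profile):
    """Simplified sketch of the patented loop: FFT the sampled audio,
    compare each bin's magnitude against a target threshold profile,
    and scale down only the bins that exceed their target amplitude."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    limit = profile(freqs)                      # target max amplitude per bin
    mag = np.abs(spec)
    gain = np.minimum(1.0, limit / np.maximum(mag, 1e-12))  # attenuate overshoot only
    return np.fft.irfft(spec * gain, n=len(x))

# a 1 kHz tone whose FFT-bin magnitude (1024) exceeds the 512 profile limit
fs, n = 8000.0, 1024
x = 2.0 * np.sin(2 * np.pi * 1000.0 * np.arange(n) / fs)
y = attenuate_to_profile(x, fs, lambda f: np.full_like(f, 512.0))
```

Bins already below the profile pass with unit gain, so quiet spectral regions are untouched while offending bands are pulled down to the target.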

Journal ArticleDOI
TL;DR: The results of simulation and hardware experiments indicate that the proposed signal reconstruction algorithms are able to reconstruct multi-tone high-speed periodic signals in the discrete time domain.
Abstract: This paper presents a high-speed periodic signal acquisition technique using incoherent sub-sampling and back-end signal reconstruction algorithms. The signal reconstruction algorithms employ a frequency domain analysis for frequency estimation, and suppression of jitter-induced sampling noise. By switching the sampling rate of a digitizer, the analog frequency value of the sampled signal can be recovered. The proposed signal reconstruction uses incoherent sub-sampling to reduce hardware complexity. The results of simulation and hardware experiments indicate that the proposed signal reconstruction algorithms are able to reconstruct multi-tone high-speed periodic signals in the discrete time domain. The new signal acquisition technique simplifies signal acquisition hardware for testing and characterization of high-speed analog and digital signals.
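The rate-switching idea, resolving an aliased tone by sampling the same periodic signal at two different sub-Nyquist rates and searching for the analog frequency consistent with both observations, can be sketched as below. The FFT peak picking, the search grid, and the single-tone assumption are illustrative simplifications of the paper's frequency-domain reconstruction.

```python
import numpy as np

def aliased_freq(f, fs):
    """Apparent frequency of a tone at f when sampled at rate fs."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

def recover_frequency(x1, fs1, x2, fs2, f_max):
    """Two-rate sub-sampling sketch: estimate the aliased tone frequency
    at each rate, then grid-search candidates up to f_max for the analog
    frequency whose aliases match both measurements."""
    def peak(x, fs):
        mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        mag[0] = 0.0                             # ignore DC
        return np.fft.rfftfreq(len(x), 1.0 / fs)[np.argmax(mag)]
    fa1, fa2 = peak(x1, fs1), peak(x2, fs2)
    best, best_err = 0.0, np.inf
    for f in np.arange(0.0, f_max, 0.25 * min(fs1, fs2) / len(x1)):
        err = abs(aliased_freq(f, fs1) - fa1) + abs(aliased_freq(f, fs2) - fa2)
        if err < best_err:
            best, best_err = f, err
    return best

# a 453 Hz tone sub-sampled at 100 and 130 Hz; neither rate is near Nyquist
fs1, fs2, n = 100.0, 130.0, 1024
f_true = 453.0
x1 = np.sin(2 * np.pi * f_true * np.arange(n) / fs1)
x2 = np.sin(2 * np.pi * f_true * np.arange(n) / fs2)
f_hat = recover_frequency(x1, fs1, x2, fs2, f_max=600.0)
```

The pair of rates must be chosen so no other frequency below f_max produces the same two aliases, which is the role of the sampling-rate switch in the acquisition hardware.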

Patent
10 Aug 2011
TL;DR: In this article, a fully automatic chemiluminescent immunoassay analyzer is presented, which consists of a working main body (1) and a control computer (20) connected with the main body, wherein a control box (40) and a storage bin (30) are arranged below the working main body, and the working main body comprises a sample bin (11), a reagent bin (3), an automatic sampling/reacting-tube loading device (2), and a sample feeding head unloading device (223).
Abstract: The invention discloses a fully automatic chemiluminescent immunoassay analyzer which comprises a working main body (1) and a control computer (20) connected with the working main body (1). A control box (40) and a storage bin (30) are arranged below the working main body, and the working main body (1) comprises a sample bin (11), a reagent bin (3), an automatic sampling/reacting-tube loading device (2), and a sample feeding head unloading device (223), the latter two arranged between the sample bin and the reaction bin. The analyzer realizes fully automatic operation of the immune-reaction process, from reacting-tube loading, automatic sample feeding, automatic reagent filling, and incubation of the reaction solution, through automatic cleaning of the reaction solution, to detection and analysis of the reaction results. The automatic operation reduces the influence of human factors on the experiment and improves sensitivity.