
Showing papers on "Digital signal processing published in 2010"


Journal ArticleDOI
TL;DR: In this article, a theoretical analysis of the dual-polarization constant modulus algorithm is presented, in which the control surfaces of several different equalizer algorithms are derived, including the constant modulus, decision-directed, trained, and radially directed equalizers for both polarization division multiplexed quadriphase shift keyed (PDM-QPSK) and 16-level quadrature amplitude modulation (PDM-16-QAM) signals.
Abstract: Digital coherent receivers have caused a revolution in the design of optical transmission systems, due to the subsystems and algorithms embedded within such a receiver. After a high-level overview of the subsystems, the optical front end, the analog-to-digital converter (ADC), and the digital signal processing (DSP) algorithms that relax the tolerances on these subsystems are discussed. Attention is then turned to the compensation of transmission impairments, both static and dynamic. The discussion of dynamic-channel equalization, which forms a significant part of the paper, includes a theoretical analysis of the dual-polarization constant modulus algorithm, in which the control surfaces of several different equalizer algorithms are derived, including the constant modulus, decision-directed, trained, and radially directed equalizers for both polarization division multiplexed quadriphase shift keyed (PDM-QPSK) and 16-level quadrature amplitude modulation (PDM-16-QAM) signals. Synchronization algorithms employed to recover the timing and carrier phase information are then examined, after which the data may be recovered. The paper concludes with a discussion of the challenges for future coherent optical transmission systems.

772 citations
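For a concrete picture of the constant modulus algorithm analyzed in this paper, the sketch below implements a single-polarization CMA tap update in Python with NumPy. This is a minimal illustration, not the paper's dual-polarization (butterfly) equalizer, and the step size `mu` and modulus target `r2` are assumed values.

```python
import numpy as np

def cma_equalize(x, num_taps=11, mu=1e-3, r2=1.0):
    """Blind constant-modulus equalization of a complex baseband signal.

    Single-polarization sketch; the dual-polarization equalizer in the
    paper uses four such FIR filters in a butterfly structure.
    """
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                     # centre-spike initialization
    y = np.zeros(len(x) - num_taps, dtype=complex)
    for n in range(len(y)):
        xn = x[n:n + num_taps][::-1]           # most recent sample first
        y[n] = np.dot(w, xn)
        err = r2 - np.abs(y[n]) ** 2           # constant-modulus error
        w = w + mu * err * y[n] * np.conj(xn)  # stochastic-gradient update
    return y, w
```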


Journal ArticleDOI
TL;DR: This paper takes some first steps in the direction of solving inference problems (such as detection, classification, or estimation) and filtering problems using only compressive measurements and without ever reconstructing the signals involved.
Abstract: The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been shown that random projections are a near-optimal measurement scheme. This has inspired the design of hardware systems that directly implement random measurement protocols. However, despite the intense focus of the community on signal recovery, many (if not most) signal processing problems do not require full signal recovery. In this paper, we take some first steps in the direction of solving inference problems (such as detection, classification, or estimation) and filtering problems using only compressive measurements and without ever reconstructing the signals involved. We provide theoretical bounds along with experimental results.

661 citations
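As a toy illustration of inference without reconstruction, the sketch below performs matched-filter detection directly on random compressive measurements. The dimensions, the Gaussian measurement matrix, and the sparse template are all assumed for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 128                                 # ambient dim, measurements
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection matrix

# Hypothetical sparse template and a noisy observation that contains it.
s = np.zeros(n); s[rng.choice(n, 10, replace=False)] = 1.0
x = s + 0.1 * rng.standard_normal(n)

y = phi @ x          # compressive measurements; the signal is never recovered

# Matched-filter detection directly in the compressed domain.
s_alt = np.zeros(n); s_alt[rng.choice(n, 10, replace=False)] = 1.0
for name, tmpl in (("matched", s), ("mismatched", s_alt)):
    proj = phi @ tmpl
    print(name, "statistic:", y @ proj / np.linalg.norm(proj))
```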


Journal ArticleDOI
TL;DR: A novel error-tolerant adder (ETA) is proposed that is able to ease the strict restriction on accuracy, and at the same time achieve tremendous improvements in both the power consumption and speed performance.
Abstract: In modern VLSI technology, the occurrence of all kinds of errors has become inevitable. By adopting error tolerance (ET), an emerging concept in VLSI design and test, a novel error-tolerant adder (ETA) is proposed. The ETA is able to ease the strict restriction on accuracy and, at the same time, achieve tremendous improvements in both power consumption and speed performance. When compared to its conventional counterparts, the proposed ETA attains more than 65% improvement in the power-delay product (PDP). One important potential application of the proposed ETA is in digital signal processing systems that can tolerate a certain amount of error.

286 citations
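The following toy model illustrates the accurate/inaccurate split on which error-tolerant addition relies: the high bits are added exactly, while the low bits are added carry-free. The lower-part rule used here (once both operand bits are 1, that bit and all lower result bits are set to 1) follows common descriptions of the ETA; the exact circuit-level behavior in the paper may differ.

```python
def eta_add(a, b, width=16, split=8):
    """Toy behavioural model of an error-tolerant adder.

    High (width - split) bits: normal, accurate addition.
    Low `split` bits: carry-free addition scanned from MSB to LSB.
    """
    mask = (1 << split) - 1
    hi = ((a >> split) + (b >> split)) << split   # accurate upper part
    lo, a_lo, b_lo = 0, a & mask, b & mask
    for i in range(split - 1, -1, -1):
        ai, bi = (a_lo >> i) & 1, (b_lo >> i) & 1
        if ai & bi:                               # both 1: saturate the rest
            lo |= (1 << (i + 1)) - 1
            break
        lo |= (ai | bi) << i                      # otherwise, carry-free OR
    return hi + lo

print(eta_add(0x5A3C, 0x12F7), "vs exact", 0x5A3C + 0x12F7)
```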


Book
13 Mar 2010
TL;DR: This new text presents the basic concepts and theories of speech processing with clarity and currency, while providing hands-on computer-based laboratory experiences for students.
Abstract: Theory and Applications of Digital Speech Processing is ideal for graduate students in digital signal processing, and undergraduate students in Electrical and Computer Engineering. With its clear, up-to-date, hands-on coverage of digital speech processing, this text is also suitable for practicing engineers in speech processing. This new text presents the basic concepts and theories of speech processing with clarity and currency, while providing hands-on computer-based laboratory experiences for students. The material is organized in a manner that builds a strong foundation of basics first, and then concentrates on a range of signal processing methods for representing and processing the speech signal.

270 citations


Proceedings ArticleDOI
03 Aug 2010
TL;DR: This paper reviews the rationale and history of this event-based approach, introduces sensor functionalities, and gives an overview of the papers in this session.
Abstract: The four chips [1–4] presented in the special session on "Activity-driven, event-based vision sensors" quickly output compressed digital data in the form of events. These sensors reduce redundancy and latency and increase dynamic range compared with conventional imagers. The digital sensor output is easily interfaced to conventional digital post-processing, where it reduces the latency and cost of post-processing compared with conventional imagers. The asynchronous data could spawn a new area of DSP that breaks from conventional Nyquist-rate signal processing. This paper reviews the rationale and history of this event-based approach, introduces sensor functionalities, and gives an overview of the papers in this session. The paper concludes with a brief discussion of open questions.

237 citations


Journal ArticleDOI
TL;DR: It is shown that the analog wavelet transform has been successfully implemented in biomedical signal processing for the design of low-power pacemakers and also in ultra-wideband (UWB) wireless communications.

214 citations


Journal ArticleDOI
TL;DR: In this paper, a unified multiblock nonlinear model for the joint compensation of the impairments in fiber transmission is presented, and it is shown that commonly used techniques for overcoming different impairments are often based on the same principles such as feedback and feedforward control, and time-versus-frequency-domain representations.
Abstract: Next-generation optical fiber systems will employ coherent detection to improve power and spectral efficiency, and to facilitate flexible impairment compensation using digital signal processors (DSPs). In a fully digital coherent system, the electric fields at the input and the output of the channel are available to DSPs at the transmitter and the receiver, enabling the use of arbitrary impairment precompensation and postcompensation algorithms. Linear time-invariant (LTI) impairments such as chromatic dispersion and polarization-mode dispersion can be compensated by adaptive linear equalizers. Non-LTI impairments, such as laser phase noise and Kerr nonlinearity, can be compensated by channel inversion. All existing impairment compensation techniques ultimately approximate channel inversion for a subset of the channel effects. We provide a unified multiblock nonlinear model for the joint compensation of the impairments in fiber transmission. We show that commonly used techniques for overcoming different impairments, despite their different appearance, are often based on the same principles such as feedback and feedforward control, and time-versus-frequency-domain representations. We highlight equivalences between techniques, and show that the choice of algorithm depends on making tradeoffs.

207 citations
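As one concrete instance of the channel-inversion view taken in this paper, the sketch below inverts chromatic dispersion with a frequency-domain all-pass filter, the standard LTI compensator mentioned in the abstract. Parameter names and the sign convention for beta2 are assumptions and may need flipping for a given convention.

```python
import numpy as np

def cd_compensate(x, fs, beta2, length):
    """Invert chromatic dispersion with a frequency-domain all-pass filter.

    x      : complex baseband samples from the coherent receiver
    fs     : sampling rate [Hz]
    beta2  : group-velocity dispersion parameter [s^2/m]
    length : fibre length [m]
    Models the fibre as exp(-1j*beta2/2 * w**2 * length) (one common
    convention) and applies the conjugate phase to undo it.
    """
    w = 2 * np.pi * np.fft.fftfreq(len(x), d=1 / fs)
    h_inv = np.exp(1j * (beta2 / 2) * w ** 2 * length)
    return np.fft.ifft(np.fft.fft(x) * h_inv)
```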


Journal ArticleDOI
TL;DR: Four types of noise (Gaussian, salt & pepper, speckle, and Poisson) are used, and image de-noising is performed for each noise type with the mean filter, the median filter, and the Wiener filter.

Abstract: Image processing is basically the use of computer algorithms to process digital images. Digital image processing is a part of digital signal processing, and it has many significant advantages over analog image processing. Digital processing allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Wavelet transforms have become a very powerful tool for de-noising an image. One of the most popular methods is the Wiener filter. In this work, four types of noise (Gaussian, salt & pepper, speckle, and Poisson) are used, and image de-noising is performed for each noise type with the mean filter, the median filter, and the Wiener filter. Finally, the results are compared across all noise types.

203 citations
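A minimal reproduction of the experiment's structure (not its data) using SciPy: synthesize the four noise types on a dummy image and compare the three filters by mean squared error. The image, noise levels, and window sizes are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter
from scipy.signal import wiener

rng = np.random.default_rng(1)
img = np.full((128, 128), 0.5)               # hypothetical flat test image

noisy = {
    "gaussian": img + rng.normal(0, 0.1, img.shape),
    "salt_pepper": np.where(rng.random(img.shape) < 0.05,
                            rng.integers(0, 2, img.shape).astype(float), img),
    "speckle": img * (1 + 0.2 * rng.standard_normal(img.shape)),
    "poisson": rng.poisson(img * 255) / 255.0,
}

for name, x in noisy.items():
    for label, y in (("mean", uniform_filter(x, 3)),     # mean filter
                     ("median", median_filter(x, 3)),    # median filter
                     ("wiener", wiener(x, 3))):          # Wiener filter
        print(f"{name:12s} {label:7s} MSE = {np.mean((y - img) ** 2):.5f}")
```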


Journal ArticleDOI
TL;DR: Most biomedical signals are nonlinear, nonstationary, and non-Gaussian in nature, and it can therefore be more advantageous to analyze them with higher-order statistics (HOS) than with second-order correlations and power spectra.

201 citations
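To make the higher-order-statistics claim concrete: second-order measures cannot distinguish a Gaussian process from a nonlinearly distorted one, but third- and fourth-order measures can. The sketch below uses illustrative synthetic signals; the cumulant slice definition is the standard one, not taken from this paper.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def c3_slice(x, lag):
    """Diagonal slice C3(lag, lag) of the third-order cumulant of a
    zero-mean signal; it vanishes for Gaussian processes."""
    x = x - x.mean()
    return np.mean(x[:len(x) - lag] * x[lag:] ** 2)

rng = np.random.default_rng(2)
g = rng.standard_normal(100_000)
s = np.convolve(g, [1.0, 0.8, 0.4], mode="same")   # correlated Gaussian
x = s + 0.3 * s ** 2                               # quadratic distortion

for name, sig in (("gaussian", s), ("distorted", x)):
    print(f"{name:10s} skew={skew(sig):+.3f} kurt={kurtosis(sig):+.3f} "
          f"C3(1,1)={c3_slice(sig, 1):+.4f}")
```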


Journal ArticleDOI
TL;DR: A practical scheme to perform the fast Fourier transform in the optical domain is introduced, which performs an optical real-time FFT on the consolidated OFDM data stream, thereby demultiplexing the signal into lower bit rate subcarrier tributaries, which can then be processed electronically.
Abstract: A practical scheme to perform the fast Fourier transform in the optical domain is introduced. Optical real-time FFT signal processing is performed at speeds far beyond the limits of electronic digital processing, and with negligible energy consumption. To illustrate the power of the method we demonstrate an optical 400 Gbit/s OFDM receiver. It performs an optical real-time FFT on the consolidated OFDM data stream, thereby demultiplexing the signal into lower bit rate subcarrier tributaries, which can then be processed electronically.

186 citations
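The optical circuit itself cannot be reproduced in software, but the operation it performs, a blockwise FFT that demultiplexes OFDM subcarriers into lower-rate tributaries, can be sketched numerically. The subcarrier count and QPSK payload below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sc, n_sym = 8, 1000                      # subcarriers, OFDM symbols

# Transmitter: QPSK payload per subcarrier, assembled with an inverse FFT.
tx = (rng.choice([-1, 1], (n_sc, n_sym))
      + 1j * rng.choice([-1, 1], (n_sc, n_sym)))
signal = np.fft.ifft(tx, axis=0).T.ravel()  # serial OFDM sample stream

# Receiver: the optical circuit realizes exactly this blockwise FFT,
# splitting the stream into low-rate subcarrier tributaries.
blocks = signal.reshape(n_sym, n_sc)
rx = np.fft.fft(blocks, axis=1).T
print(np.allclose(rx, tx))                  # True: subcarriers recovered
```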


Proceedings ArticleDOI
11 Sep 2010
TL;DR: This work characterizes a large set of stream programs that were implemented directly in a stream programming language, allowing new insights into the high-level structure and behavior of the applications.
Abstract: Stream programs represent an important class of high-performance computations. Defined by their regular processing of sequences of data, stream programs appear most commonly in the context of audio, video, and digital signal processing, though also in networking, encryption, and other areas. In order to develop effective compilation techniques for the streaming domain, it is important to understand the common characteristics of these programs. Prior characterizations of stream programs have examined legacy implementations in C, C++, or FORTRAN, making it difficult to extract the high-level properties of the algorithms. In this work, we characterize a large set of stream programs that was implemented directly in a stream programming language, allowing new insights into the high-level structure and behavior of the applications. We utilize the StreamIt benchmark suite, consisting of 65 programs and 33,600 lines of code. We characterize the bottlenecks to parallelism, the data reference patterns, the input/output rates, and other properties. The lessons learned have implications for the design of future architectures, languages and compilers for the streaming domain.

Journal ArticleDOI
TL;DR: A measurement scheme capable of recording the amplitude and phase of arbitrarily shaped optical waveforms with a bandwidth of up to 160 GHz is presented; it is compatible with integration on a silicon photonic chip and could aid the study of transient ultrafast phenomena.
Abstract: The development of a real-time optical waveform measurement technique with quantum-limited sensitivity, unlimited record lengths and an instantaneous bandwidth scalable to terahertz frequencies would be beneficial in the investigation of many ultrafast optical phenomena. Currently, full-field (amplitude and phase) optical measurements with a bandwidth greater than 100 GHz require repetitive signals to facilitate equivalent-time sampling methods or are single-shot in nature with limited time records. Here, we demonstrate a bandwidth- and time-record scalable measurement that performs parallel coherent detection on spectral slices of arbitrary optical waveforms in the 1.55 µm telecommunications band. External balanced photodetection and high-speed digitizers record the in-phase and quadrature-phase components of each demodulated spectral slice, and digital signal processing reconstructs the signal waveform. The approach is passive, extendable to other regions of the optical spectrum, and can be implemented as a single silicon photonic integrated circuit. A measurement scheme capable of recording the amplitude and phase of arbitrarily shaped optical waveforms with a bandwidth of up to 160 GHz is presented. The approach is compatible with integration on a silicon photonic chip and could aid the study of transient ultrafast phenomena.

Book
31 Jan 2010
TL;DR: This book builds on the student's background from a first course in logic design and focuses on developing, verifying, and synthesizing designs of digital circuits.
Abstract: Advanced Digital Design with the Verilog HDL, 2e, is ideal for an advanced course in digital design for seniors and first-year graduate students in electrical engineering, computer engineering, and computer science. This book builds on the student's background from a first course in logic design and focuses on developing, verifying, and synthesizing designs of digital circuits. The Verilog language is introduced in an integrated, but selective manner, only as needed to support design examples (includes appendices for additional language details). It addresses the design of several important circuits used in computer systems, digital signal processing, image processing, and other applications.

Journal ArticleDOI
Bernhard Spinnler1
TL;DR: An overview of digital equalization algorithms for coherent receivers is given and expressions for their complexity are derived; single-carrier and multicarrier approaches are compared, and both blind equalizer adaptation and training-symbol-based algorithms are investigated.
Abstract: Digital signal processing has completely changed the way optical communication systems work during recent years. In combination with coherent demodulation, it enables compensation of optical distortions that seemed impossible only a few years ago. However, at high bit rates, this comes at the price of complex processing circuits and high power consumption. In order to translate theoretic concepts into economically viable products, careful design of the digital signal processing algorithms is needed. In this paper, we give an overview of digital equalization algorithms for coherent receivers and derive expressions for their complexity. We compare single-carrier and multicarrier approaches, and investigate blind equalizer adaptation as well as training-symbol-based algorithms. We examine tradeoffs between parameters like sampling rate and tracking speed that are important for algorithm design and practical implementation.


Journal ArticleDOI
TL;DR: The performance of the receiver using a digital backpropagation algorithm with varying nonlinear step size is characterized to determine an upper bound on the suppression of intrachannel nonlinearities in a single-channel system.
Abstract: Coherent detection with receiver-based DSP has recently enabled the mitigation of fiber nonlinear effects. We investigate the performance benefits available from the backpropagation algorithm for polarization division multiplexed quadrature phase-shift keying (PDM-QPSK) and 16-state quadrature amplitude modulation (PDM-QAM16). The performance of the receiver using a digital backpropagation algorithm with varying nonlinear step size is characterized to determine an upper bound on the suppression of intrachannel nonlinearities in a single-channel system. The results show that, for the system under investigation, PDM-QPSK and PDM-QAM16 have maximum step sizes for optimal performance of 160 and 80 km, respectively. While the optimal launch power is increased by 2 and 2.5 dB for PDM-QPSK and PDM-QAM16, respectively, the Q-factor is correspondingly increased by 1.6 and 1 dB, highlighting the importance of studying nonlinear compensation for higher-level modulation formats.
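A minimal split-step sketch of single-channel digital backpropagation, with `step_km` playing the role of the nonlinear step size studied in the paper. Fibre loss and amplifier gain along the link are ignored, and the sign convention of the propagation operator is an assumption that may need flipping for a given system model.

```python
import numpy as np

def backpropagate(x, fs, length_km, step_km, beta2, gamma):
    """Single-channel digital backpropagation (split-step Fourier sketch).

    x: received complex field [sqrt(W)]; fs: sampling rate [Hz];
    beta2: fibre GVD [s^2/km]; gamma: Kerr coefficient [1/(W km)].
    Each step applies the inverse of the fibre's linear and nonlinear
    operators; loss/amplification profiles are omitted for brevity.
    """
    w = 2 * np.pi * np.fft.fftfreq(len(x), d=1 / fs)
    half_step = np.exp(-1j * (beta2 / 2) * w ** 2 * (step_km / 2))
    for _ in range(int(round(length_km / step_km))):
        x = np.fft.ifft(np.fft.fft(x) * half_step)               # half linear
        x = x * np.exp(-1j * gamma * step_km * np.abs(x) ** 2)   # nonlinear
        x = np.fft.ifft(np.fft.fft(x) * half_step)               # half linear
    return x
```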

Journal ArticleDOI
TL;DR: Event-driven analog-to-digital conversion and associated digital signal processing techniques are reviewed and have the potential to significantly reduce the consumption of energy and bandwidth resources in several important applications.
Abstract: Event-driven analog-to-digital conversion and associated digital signal processing techniques are reviewed. Such techniques, still in the research stage, have the potential to significantly reduce the consumption of energy and bandwidth resources in several important applications.
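The essence of event-driven conversion can be shown with a level-crossing sampler: samples are produced only when the input moves by a fixed step, so quiet signals generate few events. A toy sketch; the step size and test signal are arbitrary.

```python
import numpy as np

def level_crossing_sample(x, t, delta=0.1):
    """Event-driven (level-crossing) sampling sketch.

    Emits an event (time, level) only when the signal moves by one
    quantization step `delta`, instead of sampling at a fixed rate.
    """
    events, ref = [], x[0]
    for ti, xi in zip(t, x):
        while xi - ref >= delta:            # upward crossings
            ref += delta
            events.append((ti, ref))
        while ref - xi >= delta:            # downward crossings
            ref -= delta
            events.append((ti, ref))
    return events

t = np.linspace(0, 1, 10000)
x = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)   # decaying (mostly quiet) tone
print(len(level_crossing_sample(x, t)), "events vs", len(t), "uniform samples")
```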

Journal ArticleDOI
TL;DR: The proposed method is based on detecting phase discontinuity of the power grid signal, referred to as electric network frequency (ENF), which is sometimes embedded in audio signals when the recording is carried out with the equipment connected to an electrical outlet or when certain microphones are in an ENF magnetic field.
Abstract: This paper addresses a forensic tool used to assess audio authenticity. The proposed method is based on detecting phase discontinuity of the power grid signal; this signal, referred to as electric network frequency (ENF), is sometimes embedded in audio signals when the recording is carried out with the equipment connected to an electrical outlet or when certain microphones are in an ENF magnetic field. After down-sampling and band-filtering the audio around the nominal value of the ENF, the result can be considered a single tone such that a high-precision Fourier analysis can be used to estimate its phase. The estimated phase provides a visual aid to locating editing points (signalled by abrupt phase changes) and inferring the type of audio editing (insertion or removal of audio segments). From the estimated values, a feature is used to quantify the discontinuity of the ENF phase, allowing an automatic decision concerning the authenticity of the audio evidence. The theoretical background is presented along with practical implementation issues related to the proposed technique, whose performance is evaluated on digitally edited audio signals.
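A rough sketch of the ENF phase-tracking pipeline outlined above: down-sample, band-pass narrowly around the nominal grid frequency, and track the instantaneous phase of the resulting tone; abrupt jumps in the de-trended phase flag candidate edit points. The filter order, bandwidth, and 60 Hz nominal are assumptions, and the Hilbert-transform phase estimator stands in for the paper's high-precision Fourier analysis.

```python
import numpy as np
from scipy.signal import butter, decimate, filtfilt, hilbert

def enf_phase_residual(audio, fs, f_nom=60.0):
    """Instantaneous ENF phase after removing the nominal linear trend;
    discontinuities suggest inserted or removed audio segments."""
    q = int(fs // 1000)                      # decimate to roughly 1 kHz
    x = decimate(audio, q)                   # (for large q, stage-wise is better)
    fs_lo = fs / q
    b, a = butter(4, [(f_nom - 1) / (fs_lo / 2), (f_nom + 1) / (fs_lo / 2)],
                  btype="band")
    tone = filtfilt(b, a, x)                 # isolate the embedded ENF tone
    phase = np.unwrap(np.angle(hilbert(tone)))
    trend = 2 * np.pi * f_nom * np.arange(len(tone)) / fs_lo
    return phase - trend                     # jumps mark candidate edit points
```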

Journal ArticleDOI
TL;DR: The attributes of coherent systems are reviewed in light of the challenges faced by system designers to realize increased bit rates for next-generation optical systems.
Abstract: The demand for increased bandwidth is ever present. Coherent technology coupled with advanced modulation formats and digital signal processing is a key enabler for optical communication systems at 100 Gb/s and beyond. This article reviews the attributes of coherent systems in light of the challenges faced by system designers to realize increased bit rates for next-generation optical systems.

Journal ArticleDOI
TL;DR: Analyses of sensitivity and link budget are presented to guide the design of a high-sensitivity noncontact vital sign detector; measurement results show that the fabricated chip has a sensitivity of better than -101 dBm for ideal detection in the absence of random body movement.
Abstract: In this paper, analyses of sensitivity and link budget are presented to guide the design of a high-sensitivity noncontact vital sign detector. Important design issues such as flicker noise, baseband bandwidth, and gain budget are discussed with practical considerations of the analog-to-digital interface and signal processing methods in noncontact vital sign detection. Based on the analyses, a direct-conversion 5.8-GHz radar sensor chip with 1-GHz bandwidth was designed and fabricated. This radar sensor chip is software configurable to set the operating point and detection range for optimal performance. It integrates all the analog functions on-chip so that the output can be directly sampled for digital signal processing. Measurement results show that the fabricated chip has a sensitivity of better than -101 dBm for ideal detection in the absence of random body movement. Experiments have been performed successfully in a laboratory environment to detect the vital signs of human subjects.
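Once the chip's output is digitized, the baseband processing can be as simple as the sketch below: arctangent demodulation of the I/Q pair followed by spectral peak picking in the respiration and heartbeat bands. This is a generic sketch, not the paper's algorithm; the band edges, windowing, and omission of DC-offset calibration are assumptions.

```python
import numpy as np

def vital_rates(i_data, q_data, fs):
    """Estimate respiration and heartbeat rates from radar I/Q baseband.

    Arctangent demodulation recovers chest displacement as a phase, and
    the spectral peaks in the expected bands give the two rates.
    """
    phase = np.unwrap(np.arctan2(q_data, i_data))     # displacement ~ phase
    phase = phase - phase.mean()
    spec = np.abs(np.fft.rfft(phase * np.hanning(len(phase))))
    freqs = np.fft.rfftfreq(len(phase), d=1 / fs)

    def peak(lo, hi):                                 # strongest tone in band
        band = (freqs >= lo) & (freqs <= hi)
        return freqs[band][np.argmax(spec[band])]

    return peak(0.1, 0.6) * 60, peak(0.8, 2.5) * 60   # breaths/min, beats/min
```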

Journal ArticleDOI
TL;DR: Digital signal transmission at 300 GHz using a versatile Schottky mixer based measurement system designed for terahertz communication channel modelling and propagation studies is demonstrated and analysed.
Abstract: Recently, analogue video signal transmission at 300 GHz has been demonstrated using a versatile Schottky mixer based measurement system designed for terahertz communication channel modelling and propagation studies. In this reported work, digital signal transmission at 300 GHz using this system is demonstrated and analysed. The performance of the digital transmission setup is characterised with respect to phase noise and modulation errors. For demonstration, high data rate digital video signals have been transmitted over a distance of up to 52 m.

Journal ArticleDOI
TL;DR: Investigations reveal that multichannel ScanSAR and Terrain Observation by Progressive Scans system concepts enable the imaging of ultrawide swaths with high azimuth resolution and compact antenna lengths.
Abstract: Due to a system-inherent limitation, conventional synthetic aperture radar (SAR) is incapable of imaging a wide swath with high geometric resolution. This restriction can be overcome by systems with multiple receive channels in combination with an additional digital signal processing network. So far, the application of such digital beamforming algorithms for high-resolution wide-swath SAR imaging has been restricted to multichannel systems in stripmap operation. However, in stripmap mode, the overall azimuth antenna length restricts the achievable swath width, thus preventing very wide swaths as requested by future SAR missions. Consequently, new concepts for ultrawide-swath imaging are needed. A promising candidate is a SAR system with multiple azimuth channels being operated in burst mode. This paper analyzes innovative ScanSAR and Terrain Observation by Progressive Scans (TOPS) system concepts with regard to multichannel azimuth processing. For this, the theoretical analyses, performance figures, and SAR signal processing, which had previously been derived for multichannel stripmap mode, are extended to systems operating in burst modes. The investigations reveal that multichannel ScanSAR systems enable the imaging of ultrawide swaths with high azimuth resolution and compact antenna lengths. These considerations are embedded in a multichannel ScanSAR system design example to demonstrate its capability to image an ultrawide swath of 400 km with a high geometric resolution of 5 m. In a next step, this system is adapted to TOPS mode operation, including an innovative “staircase” multichannel processing approach optimized for TOPS.

Journal ArticleDOI
TL;DR: A TF-based audio coding scheme with novel psychoacoustics model, music classification, audio classification of environmental sounds, audio fingerprinting, and audio watermarking will be presented to demonstrate the advantages of using time-frequency approaches in analyzing and extracting information from audio signals.
Abstract: Audio signals are information-rich nonstationary signals that play an important role in our day-to-day communication, perception of the environment, and entertainment. Due to their nonstationary nature, time-only or frequency-only approaches are inadequate for analyzing these signals; a joint time-frequency (TF) approach is a better choice for processing them efficiently. In this digital era, compression, intelligent indexing for content-based retrieval, classification, and protection of digital audio content are a few of the areas that encapsulate a majority of audio signal processing applications. In this paper, we present a comprehensive array of TF methodologies that successfully address applications in all of the above-mentioned areas. A TF-based audio coding scheme with a novel psychoacoustics model, music classification, audio classification of environmental sounds, audio fingerprinting, and audio watermarking are presented to demonstrate the advantages of using time-frequency approaches in analyzing and extracting information from audio signals.
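All of the listed applications start from some joint TF representation; the STFT-based spectrogram below (SciPy, with assumed window parameters) is the simplest such front end and is the kind of representation on which fingerprinting and classification features are typically built.

```python
import numpy as np
from scipy.signal import stft

def tf_representation(audio, fs):
    """Joint time-frequency representation of an audio signal.

    Returns frequency bins, frame times, and the log-magnitude
    spectrogram; window length and overlap are illustrative choices.
    """
    f, t, z = stft(audio, fs=fs, nperseg=1024, noverlap=768)
    return f, t, 20 * np.log10(np.abs(z) + 1e-12)   # dB magnitude
```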

Journal ArticleDOI
TL;DR: In this article, a step-by-step approach is presented to optimise the signal processing for both offline and online in-cylinder pressure analysis, based on the characteristics of the signal.

Proceedings ArticleDOI
23 May 2010
TL;DR: This work considers the problem of estimating the impulse response of a dispersive channel when the channel output is sampled using a low-precision analog-to-digital converter (ADC), and shows that, even with such low ADC precision, it is possible to attain near full-precision performance using closed-loop estimation, where the ADC input is dithered and scaled.
Abstract: We consider the problem of estimating the impulse response of a dispersive channel when the channel output is sampled using a low-precision analog-to-digital converter (ADC). While traditional channel estimation techniques require about 6 bits of ADC precision to approach full-precision performance, we are motivated by applications to multigigabit communication, where we may be forced to use much lower precision (e.g., 1-3 bits) due to considerations of cost, power, and technological feasibility. We show that, even with such low ADC precision, it is possible to attain near full-precision performance using closed-loop estimation, where the ADC input is dithered and scaled. The dither signal is obtained using linear feedback based on the minimum mean squared error (MMSE) criterion. The dither feedback coefficients and the scaling gains are computed offline using Monte Carlo simulations based on a statistical model for the channel taps, and are found to work well over a wide range of channel variations.
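The paper's closed-loop MMSE dithering scheme is beyond a short sketch, but the underlying effect, namely that a known dither lets a coarse quantizer support useful least-squares channel estimates, can be shown in a few lines. Everything here (channel taps, 2-bit quantizer, open-loop uniform dither) is an illustrative assumption, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(4)
h = np.array([0.8, 0.5, -0.3])                   # hypothetical channel taps
x = rng.choice([-1.0, 1.0], 2000)                # known training sequence
levels = np.array([-1.5, -0.5, 0.5, 1.5])        # 2-bit quantizer outputs

def adc(v):                                      # 2-bit uniform quantizer
    return levels[np.digitize(v, [-1.0, 0.0, 1.0])]

y_clean = np.convolve(x, h)[:len(x)]
dither = rng.uniform(-0.5, 0.5, len(x))          # known dither, added pre-ADC
r = adc(y_clean + dither) - dither               # subtract the known dither

# Least-squares channel estimate from the quantized, dithered observations.
X = np.column_stack([np.roll(x, k) for k in range(len(h))])
h_hat = np.linalg.lstsq(X, r, rcond=None)[0]
print(np.round(h_hat, 3), "vs true", h)
```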

Proceedings ArticleDOI
26 Jul 2010
TL;DR: A new language, Feldspar, is presented, enabling high-level and platform-independent description of digital signal processing (DSP) algorithms, based on a low-level, functional core language which has a relatively small semantic gap to machine-oriented languages like C.
Abstract: A new language, Feldspar, is presented, enabling high-level and platform-independent description of digital signal processing (DSP) algorithms. Feldspar is a pure functional language embedded in Haskell. It offers a high-level dataflow style of programming, as well as a more mathematical style based on vector indices. The key to generating efficient code from such descriptions is a high-level optimization technique called vector fusion. Feldspar is based on a low-level, functional core language which has a relatively small semantic gap to machine-oriented languages like C. The core language serves as the interface to the back-end code generator, which produces C. For very small examples, the generated code performs comparably to hand-written C code when run on a DSP target. While initial results are promising, to achieve good performance on larger examples, issues related to memory access patterns and array copying will have to be addressed.

Proceedings ArticleDOI
23 Aug 2010
TL;DR: The proposed VAD algorithm demonstrates the simplicity and low computational complexity of 1-D LBP processing, and it is shown that distinct LBP features are obtained that identify the voiced and unvoiced components of speech signals.
Abstract: Local Binary Patterns (LBP) have been used in 2-D image processing for applications such as texture segmentation and feature detection. In this paper, a new 1-dimensional local binary pattern (LBP) signal processing method is presented. Speech systems such as hearing aids require fast and computationally inexpensive signal processing. The practical use of LBP-based speech processing is demonstrated on two signal processing problems: (i) signal segmentation and (ii) voice activity detection (VAD). Both applications use the underlying features extracted from the 1-D LBP. The proposed VAD algorithm demonstrates the simplicity and low computational complexity of 1-D LBP processing. It is also shown that distinct LBP features are obtained to identify the voiced and the unvoiced components of speech signals.
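A sketch of the 1-D LBP computation: each sample is compared with its neighbours and the sign bits are packed into an integer code, whose histogram then serves as a segment feature. The neighbourhood radius and bit ordering are assumptions; the paper's exact definition may differ.

```python
import numpy as np

def lbp_1d(x, radius=4):
    """Compute 1-D local binary pattern codes for a signal.

    Each sample is compared with its 2*radius neighbours; the comparison
    bits are packed into an integer code per sample position.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    codes = np.zeros(n - 2 * radius, dtype=int)
    centre = x[radius:n - radius]
    for b, off in enumerate(range(-radius, radius + 1)):
        if off == 0:
            continue                               # skip the centre sample
        bit = b if off < 0 else b - 1
        neighbour = x[radius + off:n - radius + off]
        codes |= (neighbour >= centre).astype(int) << bit
    return codes
```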

Journal ArticleDOI
TL;DR: AnySP, a fully programmable architecture that targets multiple application domains, addresses the challenges for next-generation mobile signal processing.
Abstract: Looking forward, the computation requirements of mobile devices will increase by one to two orders of magnitude, but their power requirements will remain stringent to ensure reasonable battery lifetimes. Scaling existing approaches won't suffice; instead, the hardware's inherent computational efficiency, programmability, and adaptability must change. AnySP, a fully programmable architecture that targets multiple application domains, addresses these challenges for next-generation mobile signal processing.

Journal ArticleDOI
TL;DR: DSPSR as discussed by the authors is a high-performance, open-source, object-oriented, digital signal processing software library and application suite for use in radio pulsar astronomy, written primarily in C++, implements an extensive range of modular algorithms that can optionally exploit both multiple-core processors and general-purpose graphics processing units.
Abstract: DSPSR is a high-performance, open-source, object-oriented, digital signal processing software library and application suite for use in radio pulsar astronomy. Written primarily in C++, the library implements an extensive range of modular algorithms that can optionally exploit both multiple-core processors and general-purpose graphics processing units. After over a decade of research and development, DSPSR is now stable and in widespread use in the community. This paper presents a detailed description of its functionality, justification of major design decisions, analysis of phase-coherent dispersion removal algorithms, and demonstration of performance on some contemporary microprocessor architectures.
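The phase-coherent dispersion removal that DSPSR implements multiplies the spectrum of the baseband data by the inverse of the interstellar medium's transfer function. The sketch below uses the standard cold-plasma chirp expression from the pulsar literature; the dispersion constant and sign convention are assumptions that depend on the receiver's sideband sense.

```python
import numpy as np

KDM = 4.148808e3   # dispersion constant [s MHz^2 per (pc cm^-3)]

def dedisperse(x, fs_mhz, f0_mhz, dm):
    """Phase-coherent dedispersion of complex baseband data (sketch).

    Multiplies the spectrum by the inverse of the interstellar-medium
    chirp; flip the phase sign if the band is spectrally inverted.
    """
    df = np.fft.fftfreq(len(x), d=1 / fs_mhz)          # offset from f0 [MHz]
    phase = (2 * np.pi * KDM * 1e6 * dm * df ** 2
             / (f0_mhz ** 2 * (f0_mhz + df)))          # chirp phase [rad]
    return np.fft.ifft(np.fft.fft(x) * np.exp(1j * phase))
```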

Posted Content
TL;DR: A mixed analog-digital spectrum sensing method that is especially suited to the typical wideband setting of cognitive radio (CR), based on the modulated wideband converter (MWC) system, which samples sparse wideband inputs at sub-Nyquist rates.
Abstract: We present a mixed analog-digital spectrum sensing method that is especially suited to the typical wideband setting of cognitive radio (CR). The advantages of our system with respect to current architectures are threefold. First, our analog front end is fixed and does not involve scanning hardware. Second, both the analog-to-digital conversion (ADC) and the digital signal processing (DSP) rates are substantially below Nyquist. Finally, the sensing resources are shared with the reception path of the CR, so that the low-rate streaming samples can be used for the communication purposes of the device, besides the sensing functionality they provide. Combining these advantages leads to a real-time map of the spectrum with minimal use of mobile resources. Our approach is based on the modulated wideband converter (MWC) system, which samples sparse wideband inputs at sub-Nyquist rates. We report on results of hardware experiments, conducted on an MWC prototype circuit, which affirm fast and accurate spectrum sensing in parallel to CR communication.
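The MWC's analog front end cannot be run in software, but its signal flow (mix each channel with a periodic +/-1 sequence, low-pass filter, and sample far below the Nyquist rate) can be emulated numerically. All dimensions and rates below are illustrative, not those of the prototype circuit.

```python
import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(5)
fnyq = 10e9                      # Nyquist rate of the monitored band [Hz]
m, period = 4, 96                # channels, chips per mixing period
n = 96 * 200                     # simulation length (Nyquist-rate samples)

# Sparse wideband input: a single tone somewhere in the band.
x = np.cos(2 * np.pi * 3.1e9 * np.arange(n) / fnyq)

# Each channel mixes x with a periodic +/-1 pseudorandom sequence,
# low-pass filters, and decimates far below the Nyquist rate.
pn = rng.choice([-1.0, 1.0], (m, period))
lp = firwin(129, 1.0 / period)                   # cutoff ~ fnyq / (2*period)
samples = []
for i in range(m):
    mixed = x * np.tile(pn[i], n // period)      # spectrum-spreading mixer
    samples.append(lfilter(lp, 1.0, mixed)[::period])   # sub-Nyquist stream
print(np.shape(samples))   # m low-rate channels shared by sensing and reception
```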