
Showing papers on "Digital signal processing" published in 2006


Patent
15 Nov 2006
TL;DR: In this article, the authors present methods and systems for encoding digital watermarks into content signals, including a window identifier for identifying a sample window in the signal, an interval calculator for determining a quantization interval of the sample window, and a sampler for normalizing the sample window to provide normalized samples.
Abstract: Disclosed herein are methods and systems for encoding digital watermarks into content signals. Also disclosed are systems and methods for detecting and/or verifying digital watermarks in content signals. According to one embodiment, a system for encoding of digital watermark information includes: a window identifier for identifying a sample window in the signal; an interval calculator for determining a quantization interval of the sample window; and a sampler for normalizing the sample window to provide normalized samples. According to another embodiment, a system for pre-analyzing a digital signal for encoding at least one digital watermark using a digital filter is disclosed. According to another embodiment, a method for pre-analyzing a digital signal for encoding digital watermarks comprises: (1) providing a digital signal; (2) providing a digital filter to be applied to the digital signal; and (3) identifying an area of the digital signal that will be affected by the digital filter based on at least one measurable difference between the digital signal and a counterpart of the digital signal selected from the group consisting of the digital signal as transmitted, the digital signal as stored in a medium, and the digital signal as played back. According to another embodiment, a method for encoding a watermark in a content signal includes the steps of (1) splitting a watermark bit stream; and (2) encoding at least half of the watermark bit stream in the content signal using inverted instances of the watermark bit stream. Other methods and systems for encoding/decoding digital watermarks are also disclosed.

603 citations
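The quantization-interval style of embedding described above can be illustrated with a generic quantization-index-modulation sketch in Python; the window length, interval size delta, and the even/odd lattice mapping below are illustrative assumptions, not the patented method.

import numpy as np

def embed_bit(window, bit, delta=0.05):
    """Embed one watermark bit by moving the window mean onto an
    even (bit 0) or odd (bit 1) multiple of delta/2."""
    m = window.mean()
    q = np.round(m / delta) * delta              # nearest even-indexed level
    if bit:
        q += delta / 2.0                         # shift to an odd-indexed level
    return window + (q - m)                      # shift the window so its mean hits q

def extract_bit(window, delta=0.05):
    """Decide the bit from whichever quantizer lattice the window mean is closer to."""
    m = window.mean()
    d_even = np.abs(m - np.round(m / delta) * delta)
    d_odd = np.abs(m - (np.round((m - delta / 2) / delta) * delta + delta / 2))
    return int(d_odd < d_even)

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = signal.copy()
for i, b in enumerate(bits):                     # one bit per 128-sample window
    sl = slice(i * 128, (i + 1) * 128)
    marked[sl] = embed_bit(marked[sl], b)

recovered = [extract_bit(marked[i * 128:(i + 1) * 128]) for i in range(len(bits))]
print(bits, recovered)                           # recovered bits match the embedded ones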


Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper proposes a system that uses modulation, filtering, and sampling to produce a low-rate set of digital measurements, inspired by the theory of compressive sensing (CS), which states that a discrete signal having a sparse representation in some dictionary can be recovered from a small number of linear projections of that signal.
Abstract: Many problems in radar and communication signal processing involve radio frequency (RF) signals of very high bandwidth. This presents a serious challenge to systems that might attempt to use a high-rate analog-to-digital converter (ADC) to sample these signals, as prescribed by the Shannon/Nyquist sampling theorem. In these situations, however, the information level of the signal is often far lower than the actual bandwidth, which prompts the question of whether more efficient schemes can be developed for measuring such signals. In this paper we propose a system that uses modulation, filtering, and sampling to produce a low-rate set of digital measurements. Our "analog-to-information converter" (AIC) is inspired by the recent theory of Compressive Sensing (CS), which states that a discrete signal having a sparse representation in some dictionary can be recovered from a small number of linear projections of that signal. We generalize the CS theory to continuous-time sparse signals, explain our proposed AIC system in the CS context, and discuss practical issues regarding implementation.

408 citations
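The recovery principle the AIC relies on can be sketched in a few lines of Python: a K-sparse signal is reconstructed from far fewer random linear projections than Nyquist samples. Orthogonal matching pursuit is used here as a generic recovery algorithm; the measurement matrix, sparsity level, and dictionary (the identity) are assumptions for illustration, not the authors' hardware chain of modulation, filtering, and sampling.

import numpy as np

rng = np.random.default_rng(1)
N, M, K = 256, 64, 5                             # signal length, measurements, sparsity

x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)              # K-sparse signal in the identity dictionary

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
y = Phi @ x                                      # M linear projections of the signal

def omp(Phi, y, K):
    """Greedy OMP: pick the column most correlated with the residual, K times."""
    residual, idx = y.copy(), []
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print("max reconstruction error:", np.max(np.abs(x - x_hat)))   # near machine precision here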


Patent
03 Mar 2006
TL;DR: In this article, a wearable physiologic monitor comprises a mixed analog and digital application-specific integrated circuit (ASIC) including signal conditioning circuitry, an A/D converter, a real-time clock, and digital control logic.
Abstract: In some embodiments, a wearable physiologic monitor comprises a mixed analog and digital application-specific integrated circuit (ASIC) including signal conditioning circuitry, an A/D converter, a real-time clock, and digital control logic. The signal conditioning circuitry includes analog amplification circuitry, analog (continuous-time or switched capacitor) filtering circuitry before the A/D converter, and in some embodiments digital (DSP) filtering circuitry after the A/D converter. The monitor includes sensors such as electrocardiogram (ECG) electrodes, accelerometers, and a temperature sensor, some of which may be integrated on the ASIC. The digital control logic receives digital physiologic data sampled at different rates, assembles the data into physiologic data packets, time-stamps at least some of the packets, and periodically stores the packets in a digital memory. The monitor may include a disposable patch including the ASIC, and a reusable, removable digital memory such as flash memory card. Applications include ambulatory monitoring and quantitative titration of care.

266 citations


Proceedings ArticleDOI
01 Dec 2006
TL;DR: A framework for analog-to-information conversion is developed that enables sub-Nyquist acquisition and processing of wideband signals that are sparse in a local Fourier representation, together with an efficient information recovery algorithm that computes the spectrogram of the signal, dubbed the sparsogram.
Abstract: We develop a framework for analog-to-information conversion that enables sub-Nyquist acquisition and processing of wideband signals that are sparse in a local Fourier representation. The first component of the framework is a random sampling system that can be implemented in practical hardware. The second is an efficient information recovery algorithm to compute the spectrogram of the signal, which we dub the sparsogram. A simulated acquisition of a frequency hopping signal operates at a 33x sub-Nyquist average sampling rate with little degradation in signal quality.

264 citations
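As a rough stand-in for the sparsogram idea (the authors' compressive-sensing recovery algorithm is not reproduced here), the sketch below randomly samples a frequency-hopping tone at about one tenth of the Nyquist rate on average and estimates a block-wise spectrum with a Lomb-Scargle periodogram; all signal parameters are assumptions for illustration.

import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
T, fs_nyq = 1.0, 2000.0                          # 1 s record, nominal Nyquist rate 2 kHz
hop_freqs = [150.0, 620.0, 340.0, 880.0]         # frequency-hopping tone, one hop per 0.25 s

# Random sampling times at ~1/10 of the Nyquist rate on average
n_samples = int(T * fs_nyq / 10)
t = np.sort(rng.uniform(0.0, T, n_samples))
f_of_t = np.array([hop_freqs[int(ti // 0.25)] for ti in t])
x = np.cos(2 * np.pi * f_of_t * t)

# Block-wise Lomb-Scargle periodogram: a crude time-frequency picture from random samples
freqs = np.linspace(10.0, 1000.0, 200)
blocks = [(t >= b0) & (t < b0 + 0.25) for b0 in (0.0, 0.25, 0.5, 0.75)]
for k, mask in enumerate(blocks):
    pgram = lombscargle(t[mask], x[mask], 2 * np.pi * freqs)
    print(f"block {k}: dominant frequency ~ {freqs[np.argmax(pgram)]:.0f} Hz "
          f"(true hop {hop_freqs[k]:.0f} Hz)")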


Journal ArticleDOI
TL;DR: Physical modelling techniques that can be used for simulating musical instruments are described, including some nonlinear and time-varying models and new results on the digital waveguide modelling of a nonlinear string.
Abstract: This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass–spring, modal, wave digital, finite difference, digital waveguide and source–filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.

173 citations
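Of the model families listed, the digital waveguide is the easiest to sketch in code. The Karplus-Strong plucked string below (a delay line whose length sets the pitch, fed back through an averaging loss filter) is a simplified relative of the waveguide string models discussed in the article; the sample rate, pitch, and loss factor are assumed values.

import numpy as np

def pluck(f0=220.0, fs=44100, dur=0.5, seed=3):
    """Karplus-Strong plucked string: a delay line of length ~fs/f0 fed back
    through a two-point averaging (lowpass loss) filter."""
    rng = np.random.default_rng(seed)
    N = int(round(fs / f0))                      # delay-line length sets the pitch
    line = rng.uniform(-1.0, 1.0, N)             # noise burst = the "pluck"
    out = np.empty(int(fs * dur))
    for n in range(out.size):
        out[n] = line[0]
        new = 0.996 * 0.5 * (line[0] + line[1])  # loss + averaging => decaying harmonics
        line = np.roll(line, -1)
        line[-1] = new
    return out

y = pluck()
spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
freqs = np.fft.rfftfreq(y.size, 1 / 44100)
band = (freqs > 150) & (freqs < 300)             # only the fundamental resonates in this band
print("fundamental ~", round(float(freqs[band][np.argmax(spec[band])]), 1), "Hz")  # ~220 Hz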


Journal ArticleDOI
TL;DR: In this paper, a procedure for filtering electromyographic (EMG) signals is introduced; it can decompose any time series into a set of functions designated as intrinsic mode functions.

171 citations
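The intrinsic mode functions mentioned in the TL;DR are produced by empirical mode decomposition. A single sifting iteration, the core of that procedure, is sketched below on a synthetic two-component signal; this is a generic illustration of how an IMF candidate is obtained, not the authors' exact EMG filtering scheme.

import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 2000)
x = (np.sin(2 * np.pi * 90 * t) + 0.6 * np.sin(2 * np.pi * 7 * t)
     + 0.1 * rng.standard_normal(t.size))       # fast + slow components plus noise (assumed)

imax = argrelextrema(x, np.greater)[0]          # indices of local maxima
imin = argrelextrema(x, np.less)[0]             # indices of local minima
upper = CubicSpline(t[imax], x[imax])(t)        # upper envelope through the maxima
lower = CubicSpline(t[imin], x[imin])(t)        # lower envelope through the minima

h1 = x - 0.5 * (upper + lower)                  # first IMF candidate (fast oscillation)
residue = x - h1                                # slow trend + remaining modes
print("IMF candidate rms:", round(float(h1.std()), 3),
      " residue rms:", round(float(residue.std()), 3))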


Book
15 Oct 2006
TL;DR: Digital Signal Processing Using MATLAB & Wavelets, Second Edition, focuses on the practical applications of signal processing; over 100 MATLAB examples and wavelet techniques provide the latest applications of DSP, including image processing, games, filters, transforms, networking, parallel processing, and sound.
Abstract: Although Digital Signal Processing (DSP) has long been considered an electrical engineering topic, recent developments have also generated significant interest from the computer science community. DSP applications in the consumer market, such as bioinformatics, the MP3 audio format, and MPEG-based cable/satellite television, have fueled a desire to understand this technology outside of hardware circles. Designed for upper-division engineering and computer science students as well as practicing engineers and scientists, Digital Signal Processing Using MATLAB & Wavelets, Second Edition emphasizes the practical applications of signal processing. Over 100 MATLAB examples and wavelet techniques provide the latest applications of DSP, including image processing, games, filters, transforms, networking, parallel processing, and sound. This Second Edition also provides the mathematical processes and techniques needed to ensure an understanding of DSP theory. Designed to be incremental in difficulty, the book will benefit readers who are unfamiliar with complex mathematical topics or those limited in programming experience. Beginning with an introduction to MATLAB programming, it moves through filters, sinusoids, sampling, the Fourier transform, the z-transform and other key topics. Two chapters are dedicated to the discussion of wavelets and their applications. A CD-ROM (platform independent) accompanies every new printed copy of the book and contains source code, projects for each chapter, and the figures from the book (the eBook version does not include the CD-ROM).

165 citations


Journal ArticleDOI
TL;DR: In this article, a fault signal diagnosis technique for internal combustion engines that uses a continuous wavelet transform algorithm is presented and applied to both acoustic and vibration signals for the diagnosis of an internal combustion engine and its cooling system.
Abstract: A fault signal diagnosis technique for internal combustion engines that uses a continuous wavelet transform algorithm is presented in this paper. The use of mechanical vibration and acoustic emission signals for fault diagnosis in rotating machinery has grown significantly due to advances in the progress of digital signal processing algorithms and implementation techniques. The conventional diagnosis technology using acoustic and vibration signals already exists in the form of techniques applying the time and frequency domain of signals, and analyzing the difference of signals in the spectrum. Unfortunately, in some applications the performance is limited, such as when a smearing problem arises at various rates of engine revolution, or when the signals caused by a damaged element are buried in broadband background noise. In the present study, a continuous wavelet transform technique for the fault signal diagnosis is proposed. In the experimental work, the proposed continuous wavelet algorithm was used for fault signal diagnosis in an internal combustion engine and its cooling system. The experimental results indicated that the proposed continuous wavelet transform technique is effective in fault signal diagnosis for both experimental cases. Furthermore, a characteristic analysis and experimental comparison of the vibration signal and acoustic emission signal analysis with the proposed algorithm are also presented in this report.

157 citations
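A minimal continuous wavelet transform, implemented directly with a complex Morlet wavelet, shows how a short transient (standing in for a fault impact) is localized in time and frequency; the sampling rate, analysis frequencies, and signal model are assumptions, and this is not the authors' specific algorithm or data.

import numpy as np

def cwt_morlet(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet,
    evaluated at the analysis frequencies `freqs` (Hz)."""
    out = np.empty((len(freqs), x.size), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 * fs / (2 * np.pi * f)            # scale giving centre frequency f
        n = int(8 * s) | 1                       # odd-length support, about +-4 scales
        tt = (np.arange(n) - n // 2) / s
        psi = np.exp(1j * w0 * tt) * np.exp(-tt**2 / 2) / (np.pi**0.25 * np.sqrt(s))
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same")
    return out

fs = 5000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)                   # steady "engine order" component
x[2500:2520] += 3.0 * np.hanning(20)             # short transient standing in for a fault impact

freqs = np.linspace(20, 1000, 60)
scalogram = np.abs(cwt_morlet(x, fs, freqs))
sub = scalogram[freqs > 200]                     # above 200 Hz only the transient has energy
_, peak_t = np.unravel_index(np.argmax(sub), sub.shape)
print("transient localized near t =", round(float(t[peak_t]), 3), "s")  # injected at ~0.5 s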


Journal ArticleDOI
11 Dec 2006
TL;DR: In this article, the Zhao-Atlas-Marks distribution is used to enhance nonstationary fault diagnostics in electric motors, and the methods are shown to be implementable on a digital signal processing platform.
Abstract: As the use of electric motors increases in the aerospace and transportation industries where operating conditions continuously change with time, fault detection in electric motors has been gaining importance. Motor diagnostics in a nonstationary environment is difficult and often needs sophisticated signal processing techniques. In recent times, a plethora of new time-frequency distributions has appeared, which are inherently suited to the analysis of nonstationary signals while offering superior frequency resolution characteristics. The Zhao-Atlas-Marks distribution is one such distribution. This paper proposes the use of these new time-frequency distributions to enhance nonstationary fault diagnostics in electric motors. One common myth has been that the quadratic time-frequency distributions are not suitable for commercial implementation. This paper also addresses this issue in detail. Optimal discrete-time implementations of some of these quadratic time-frequency distributions are explained. These time-frequency representations have been implemented on a digital signal processing platform to demonstrate that the proposed methods can be implemented commercially.

146 citations


Journal Article
TL;DR: The DSP-based phase-estimation scheme consists of a simple and demultiplexable architecture that allows the system to reach significantly higher performance than conventional optical delay detection, and various kinds of postprocessing of the received signal become possible.
Abstract: This paper describes a phase-diversity homodyne receiver that can cope with multilevel modulation formats. The carrier phase drift is estimated with digital signal processing (DSP) on the homodyne-detected signal, entirely restoring the complex amplitude of the incoming signal. Our DSP-based phase-estimation scheme consists of a simple and demultiplexable architecture that allows the system to reach significantly higher performance than conventional optical delay detection. Since the whole optical signal information is preserved with our receiver, various kinds of postprocessing of the received signal become possible. For example, we can demultiplex wavelength-division/optical time-division multiplexed channels and compensate for group velocity dispersion of fibers as well as the nonlinear phase noise in the electrical domain. We also experimentally evaluate the performance of our receiver. Our offline bit-error rate experiments show the feasibility of transmitting polarization-multiplexed 40-Gb/s quadrature phase-shift keying signals over 200 km with channel spacing of 16 GHz, leading to spectral efficiency of 2.5 b/s/Hz.

145 citations
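The DSP carrier recovery described can be illustrated with a feedforward fourth-power (Viterbi-Viterbi style) phase estimator applied to simulated QPSK with laser phase noise; differential encoding resolves the pi/2 ambiguity of the estimator. The block length, phase-noise level, and additive-noise level are assumptions, and this is a generic sketch rather than the authors' receiver architecture.

import numpy as np

rng = np.random.default_rng(5)
n_sym, block = 20000, 64                         # symbol count and averaging block length (assumed)

data = rng.integers(0, 4, n_sym)
state = np.cumsum(data) % 4                      # differential encoding absorbs the pi/2 ambiguity
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * state))

carrier = np.cumsum(rng.normal(0.0, 0.01, n_sym))        # laser phase noise as a random walk
noise = 0.05 * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
rx = tx * np.exp(1j * carrier) + noise                   # stand-in for the homodyne-detected field

# Feedforward 4th-power phase estimation, block by block
raw = np.zeros(n_sym)
for k in range(0, n_sym, block):
    raw[k:k + block] = np.angle(np.sum(rx[k:k + block] ** 4))
phase_est = (np.unwrap(raw) - np.pi) / 4                 # unwrap stitches pi/2 slips between blocks

demod = rx * np.exp(-1j * phase_est)
idx = np.round((np.angle(demod) - np.pi / 4) / (np.pi / 2)).astype(int) % 4
ser = np.mean((np.diff(idx) % 4) != data[1:])            # differential decoding
print("symbol error rate:", ser)                         # ~0 at these noise settings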


Proceedings ArticleDOI
21 May 2006
TL;DR: The principal possibilities of calibrating TI-ADCs are reviewed, where the necessities and advantages of digital enhancement are pointed out and open issues of channel mismatch identification as well as channel mismatch correction are discussed.
Abstract: We discuss time-interleaved analog-to-digital converters (ADCs) as a prime example of merging analog and digital signal processing. A time-interleaved ADC (TI-ADC) consists of M parallel channel ADCs that alternately take samples from the input signal, where the sampling rate can be increased by the number of channels compared to a single channel. We recall the advantages of time interleaving and investigate the problems involved. In particular, we explain the error behavior of mismatches among the channels, which distort the output signal and reduce the system performance significantly, and provide a concise framework for dealing with them. Based on this analysis, we review the principal possibilities of calibrating TI-ADCs, where we point out the necessities and advantages of digital enhancement. To this end, we discuss open issues of channel mismatch identification as well as channel mismatch correction.
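A toy four-channel TI-ADC shows the mismatch mechanism and the simplest form of digital correction: per-channel offsets and gains are estimated from each channel's own sample statistics and normalized to a reference channel. The mismatch values and the correction rule are illustrative assumptions, far simpler than the identification and correction methods surveyed in the paper.

import numpy as np

rng = np.random.default_rng(6)
M, n = 4, 4096
t = np.arange(n)
x = np.sin(2 * np.pi * 501 / n * t)              # exactly 501 cycles: leakage-free FFT bins

gains = np.array([1.000, 1.020, 0.985, 1.010])   # per-channel gain mismatch (assumed)
offsets = np.array([0.000, 0.010, -0.020, 0.005])  # per-channel offset mismatch (assumed)
y = np.empty(n)
for m in range(M):
    y[m::M] = gains[m] * x[m::M] + offsets[m]    # channel m digitizes every M-th sample
y += 1e-4 * rng.standard_normal(n)               # small ADC noise floor

def worst_spur(sig):
    """Largest non-signal spectral line relative to the signal line, in dB."""
    s = np.abs(np.fft.rfft(sig))
    k0 = np.argmax(s[1:]) + 1
    peak = s[k0]
    s = s.copy(); s[k0] = 0.0; s[0] = 0.0
    return 20 * np.log10(s.max() / peak)

# Simplest digital correction: offset = channel mean, gain normalized so every
# channel's RMS matches channel 0 (a crude background-calibration stand-in).
y_cal = np.empty(n)
ref_rms = np.std(y[0::M] - np.mean(y[0::M]))
for m in range(M):
    ch = y[m::M] - np.mean(y[m::M])
    y_cal[m::M] = ch * ref_rms / np.std(ch)

print("worst interleaving spur before: %.1f dB, after: %.1f dB"
      % (worst_spur(y), worst_spur(y_cal)))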

Journal ArticleDOI
Wenbin Luo1
TL;DR: A new impulse noise removal technique is presented to restore digital images corrupted by impulse noise, based on a fuzzy impulse detection technique, which can remove impulse noise efficiently from highly corrupted images while preserving image details.
Abstract: A new impulse noise removal technique is presented to restore digital images corrupted by impulse noise. The algorithm is based on a fuzzy impulse detection technique, which can remove impulse noise efficiently from highly corrupted images while preserving image details. Extensive experimental results show that the proposed technique performs significantly better than many existing state-of-the-art algorithms. Due to its low complexity, the proposed algorithm is very suitable for hardware implementation. Therefore, it can be used to remove impulse noise in many consumer electronics products such as digital cameras and digital television (DTV) for its performance and simplicity.
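The detect-then-filter idea (leave uncorrupted pixels untouched, replace only detected impulses) can be sketched as follows; the detector here is a plain threshold against the local median rather than the author's fuzzy detector, and the noise density, threshold, and test image are assumed.

import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(7)
img = np.clip(rng.normal(0.5, 0.15, (128, 128)), 0, 1)   # stand-in textured "clean" image

noisy = img.copy()                                        # salt-and-pepper corruption (20%)
mask = rng.random(img.shape) < 0.2
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())

med = median_filter(noisy, size=3)
detected = np.abs(noisy - med) > 0.3                      # crude impulse detector (threshold assumed)
restored = np.where(detected, med, noisy)                 # replace only the detected pixels

mse = lambda a, b: float(np.mean((a - b) ** 2))
print("MSE noisy: %.4f  plain median: %.4f  detect-then-filter: %.4f"
      % (mse(noisy, img), mse(med, img), mse(restored, img)))

By construction, the selective filter keeps uncorrupted pixels exactly, which is the detail-preservation property the abstract emphasizes.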

Proceedings ArticleDOI
22 Oct 2006
TL;DR: It is shown that probabilistic arithmetic can be used to compute the FFT in an extremely energy-efficient manner, yielding energy savings of over 5.6X in the context of the widely used synthetic aperture radar (SAR) application.
Abstract: Probabilistic arithmetic, where the i-th output bit of addition and multiplication is correct with a probability p_i, is shown to be a vehicle for realizing extremely energy-efficient embedded computing. Specifically, probabilistic adders and multipliers, realized using elements such as gates that are in turn probabilistic, are shown to form a natural basis for primitives in the digital signal processing (DSP) domain. In this paper, we show that probabilistic arithmetic can be used to compute the FFT in an extremely energy-efficient manner, yielding energy savings of over 5.6X in the context of the widely used synthetic aperture radar (SAR) application [1]. Our results are derived using novel probabilistic CMOS (PCMOS) technology, characterized and applied in the past to realize ultra-efficient architectures for probabilistic applications [2, 3, 4]. When applied to the DSP domain, the resulting error in the output of a probabilistic arithmetic primitive, such as an adder, manifests as degradation in the signal-to-noise ratio (SNR) of the SAR image that is reconstructed through the FFT algorithm. In return for this degradation enabled by our probabilistic arithmetic primitives (degradation visually indistinguishable from an image reconstructed using conventional deterministic approaches), significant energy savings and performance gains are shown to be possible per unit of SNR degradation. These savings stem from a novel method of voltage scaling, which we refer to as biased voltage scaling (or BIVOS), that is the major technical innovation on which our probabilistic designs are based.

Journal ArticleDOI
01 Mar 2006
TL;DR: A systematic derivation of VLSI architectures and algorithms for efficient implementation of the lifting-based Discrete Wavelet Transform (DWT) is provided, covering both 1-dimensional and 2-dimensional DWT.
Abstract: In this paper, we review recent developments in VLSI architectures and algorithms for efficient implementation of lifting based Discrete Wavelet Transform (DWT). The basic principle behind the lifting based scheme is to decompose the finite impulse response (FIR) filters in wavelet transform into a finite sequence of simple filtering steps. Lifting based DWT implementations have many advantages, and have recently been proposed for the JPEG2000 standard for image compression. Consequently, this has become an area of active research and several architectures have been proposed in recent years. In this paper, we provide a survey of these architectures for both 1-dimensional and 2-dimensional DWT. The architectures are representative of many design styles and range from highly parallel architectures to DSP-based architectures to folded architectures. We provide a systematic derivation of these architectures along with an analysis of their hardware and timing complexities.
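The decomposition into simple lifting steps that the survey refers to looks as follows for the LeGall 5/3 wavelet used in JPEG2000: a split into even and odd samples, a predict step producing the detail (highpass) band, and an update step producing the approximation (lowpass) band. The boundary handling below (edge repetition) is a simplifying assumption.

import numpy as np

def lifting_53_forward(x):
    """One level of the LeGall 5/3 wavelet via lifting: split, predict, update.
    x must have even length; boundaries use simple sample repetition."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    even_next = np.append(even[1:], even[-1])    # even[i+1] with repeated edge
    d = odd - 0.5 * (even + even_next)           # predict step -> detail (highpass)
    d_prev = np.insert(d[:-1], 0, d[0])          # d[i-1] with repeated edge
    s = even + 0.25 * (d_prev + d)               # update step -> approximation (lowpass)
    return s, d

def lifting_53_inverse(s, d):
    d_prev = np.insert(d[:-1], 0, d[0])
    even = s - 0.25 * (d_prev + d)               # undo the update step
    even_next = np.append(even[1:], even[-1])
    odd = d + 0.5 * (even + even_next)           # undo the predict step
    x = np.empty(2 * s.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * np.random.default_rng(8).standard_normal(64)
s, d = lifting_53_forward(x)
x_rec = lifting_53_inverse(s, d)
print("perfect reconstruction error:", np.max(np.abs(x - x_rec)))   # ~1e-16

Because each lifting step is inverted by simply reversing its sign, perfect reconstruction holds regardless of the boundary rule, which is one reason the lifting form suits hardware so well.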

Journal ArticleDOI
TL;DR: In this paper, a digital control algorithm capable of separately specifying the desired output voltage and transient response for a synchronous buck converter operating in voltage mode was developed, based on superimposing a small control signal onto a voltage reference at each switching cycle to cancel out the perturbations.
Abstract: A digital control algorithm capable of separately specifying the desired output voltage and transient response for a synchronous buck converter operating in voltage mode was developed. This algorithm is based on superimposing a small control signal onto a voltage reference at each switching cycle to cancel out the perturbations. A zero steady-state error in the output voltage can be obtained with the aid of additional dynamics to allow the controller to track a load change and update the reference to a new load state. The specifications of the control algorithm are achieved by pole placement using complete state feedback. The control algorithm was implemented on a digital signal processor (DSP)-controlled synchronous buck converter.
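The pole-placement step of such a controller can be sketched as follows: an averaged LC model of the buck converter is discretized at the switching rate and a complete state-feedback gain is computed for chosen closed-loop poles. The component values, input voltage, sampling rate, and pole locations are assumptions, and the sketch omits the paper's reference-superposition and steady-state-error mechanisms.

import numpy as np
from scipy.signal import cont2discrete, place_poles

# Averaged synchronous-buck model, states x = [inductor current, output voltage],
# input = duty cycle. All numeric values are illustrative assumptions.
L, C, R, Vin, fs = 4.7e-6, 100e-6, 1.0, 12.0, 200e3
A = np.array([[0.0, -1.0 / L],
              [1.0 / C, -1.0 / (R * C)]])
B = np.array([[Vin / L],
              [0.0]])

Ad, Bd, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), dt=1.0 / fs)

# Complete state feedback u = -K x with the closed-loop poles placed well inside
# the unit circle (chosen here for a fast, damped transient).
desired = np.array([0.5 + 0.2j, 0.5 - 0.2j])
K = place_poles(Ad, Bd, desired).gain_matrix
print("state-feedback gain K =", K)
print("closed-loop poles:", np.linalg.eigvals(Ad - Bd @ K))   # matches `desired`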

Journal ArticleDOI
TL;DR: Novel digital techniques are developed and demonstrated to mitigate the effects of harmonic and intermodulation distortion in wideband multicarrier or multichannel receivers using adaptive interference cancellation, and the results indicate that the proposed compensation technique can be used to suppress nonlinear distortion due to receiver front-end sections under realistic signaling assumptions.
Abstract: One of the main trends in the evolution of radio receivers and other wireless devices is to implement more and more of the receiver functionalities using digital signal processing (DSP). However, due to practical limitations in the analog-to-digital conversion process, some analog signal processing stages are likely to remain in the future as well. With the ever-increasing demands for system performance and supported data rates on one side, and terminal flexibility and implementation costs on the other, the requirements for these remaining analog front-end stages become extremely challenging to meet. One interesting idea in this context is to apply sophisticated DSP-based techniques to compensate for some of the most fundamental nonidealities of the receiver analog front-ends. In this paper, we focus on developing and demonstrating novel digital techniques to mitigate the effects of harmonic and intermodulation distortion in wideband multicarrier or multichannel receivers using adaptive interference cancellation. The approach in general is practically oriented and largely based on analyzing and processing measured real-world receiver front-end signals. The obtained results indicate that the proposed compensation technique can be used to suppress nonlinear distortion due to receiver front-end sections under realistic signaling assumptions.
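The adaptive interference cancellation idea can be illustrated at baseband: a digitally generated nonlinear reference (here the cube of a known strong blocker) is scaled by an LMS-adapted weight and subtracted from the received samples, removing the correlated third-order distortion product. The signal model, distortion coefficient, and step size are assumptions, not the measured front-end signals used in the paper.

import numpy as np

rng = np.random.default_rng(9)
n = 20000
t = np.arange(n)

desired = 0.01 * np.cos(2 * np.pi * 0.017 * t + 1.0)     # weak wanted channel
blocker = np.cos(2 * np.pi * 0.004 * t)                  # strong known signal elsewhere in band
distortion = 0.05 * blocker ** 3                         # 3rd-order front-end distortion product
received = desired + distortion + 0.002 * rng.standard_normal(n)

ref = blocker ** 3                                       # digitally regenerated nonlinear reference
w, mu = 0.0, 0.01                                        # LMS weight and step size (assumed)
cleaned = np.empty(n)
for k in range(n):
    e = received[k] - w * ref[k]                         # subtract the scaled reference
    cleaned[k] = e
    w += mu * e * ref[k]                                 # LMS update

tail = slice(n // 2, None)
supp = 20 * np.log10(np.std(received[tail] - desired[tail])
                     / np.std(cleaned[tail] - desired[tail]))
print("adapted weight: %.4f (true coefficient 0.05), distortion suppression: %.1f dB"
      % (w, supp))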

Journal ArticleDOI
TL;DR: It is shown that the proposed technique, referred to as algorithmic soft error-tolerance (ASET), employs low-complexity estimators of a main DSP block to achieve reliable operation in the presence of soft errors.
Abstract: In this paper, we present energy-efficient soft error-tolerant techniques for digital signal processing (DSP) systems. The proposed technique, referred to as algorithmic soft error-tolerance (ASET), employs low-complexity estimators of a main DSP block to achieve reliable operation in the presence of soft errors. Three distinct ASET techniques, spatial, temporal and spatio-temporal, are presented. For frequency-selective finite-impulse response (FIR) filtering, it is shown that the proposed techniques provide robustness in the presence of soft error rates of up to P_er = 10^-2 and P_er = 10^-3 in a single-event upset scenario. The power dissipation of the proposed techniques ranges from 1.1x to 1.7x (spatial ASET) and 1.05x to 1.17x (spatio-temporal and temporal ASET) when the desired signal-to-noise ratio is SNR_des = 25 dB. In comparison, the power dissipation of the commonly employed triple modular redundancy technique is 2.9x.
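The idea of checking a main DSP block against a low-complexity estimator can be sketched as follows: a short FIR filter approximates a long one, and a main output sample is replaced by the estimate whenever the two disagree by more than a threshold. The filter lengths, soft-error model, and threshold are assumptions, not the paper's specific ASET configurations.

import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(10)
n = 50000
x = rng.standard_normal(n)

h_main = firwin(63, 0.2)                                 # "main" lowpass FIR block
h_est = firwin(15, 0.2)                                  # low-complexity estimator of the same response
y_clean = lfilter(h_main, 1.0, x)

d = (len(h_main) - len(h_est)) // 2                      # align the two linear-phase group delays
y_est = np.concatenate([np.zeros(d), lfilter(h_est, 1.0, x)[:-d]])

# Soft errors: with probability 1e-3 a large value corrupts a main output sample
y_err = y_clean.copy()
hits = rng.random(n) < 1e-3
y_err[hits] += rng.choice([-1.0, 1.0], hits.sum()) * 10.0

# Keep the main output unless it disagrees strongly with the estimator
corrected = np.where(np.abs(y_err - y_est) > 1.5, y_est, y_err)

snr = lambda ref, sig: 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - sig) ** 2))
print("SNR with soft errors: %.1f dB, after estimator-based correction: %.1f dB"
      % (snr(y_clean, y_err), snr(y_clean, corrected)))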

Journal ArticleDOI
TL;DR: This paper recollects the events that led to proposing the linear prediction coding (LPC) method, then the multipulse LPC and the code-excited LPC.
Abstract: This paper recollects the events that led to proposing the linear prediction coding (LPC) method, then the multipulse LPC and the code-excited LPC

Patent
01 May 2006
TL;DR: In this paper, analog-to-digital converters are formed into a two-dimensional array that may incorporate digital signal processing functionality; such an array is particularly well-suited for operation as a readout integrated circuit and, in combination with a sensor array, forms a digital focal plane array.
Abstract: Autonomously operating analog-to-digital converters are formed into a two-dimensional array. The array may incorporate digital signal processing functionality. Such an array is particularly well-suited for operation as a readout integrated circuit and, in combination with a sensor array, forms a digital focal plane array.

Patent
21 Dec 2006
TL;DR: In this paper, a system and method for performing monitoring of anesthesia and sedation in a patient includes a patient sensor integrating EEG, pulse oximetry, ECG, and AEP signal inputs, integrated analog hardware, digital hardware, and a digital signal processing system that executes a selected algorithm to process received signals representative of a patient's condition, and which generates an index value associated with said patient condition.
Abstract: A system and method for performing monitoring of anesthesia and sedation in a patient includes a patient sensor integrating EEG, pulse oximetry, ECG, and AEP signal inputs, integrated analog hardware, digital hardware, and a digital signal processing system that executes a selected algorithm to process received signals representative of a patient's condition, and which generates an index value associated with said patient condition


Journal Article
TL;DR: Performance tests show that real-time processing on modern PC-based hardware is possible even with algorithms written in the Matlab script language, although in this case the processing delay is larger and the floating-point performance is lower than for algorithms programmed in the C language.
Abstract: Development and evaluation of algorithms for digital signal processing in hearing aids includes many stages, from the first implementation of the algorithmic idea, through technical evaluations and subjective evaluations with patients, up to field tests, and involves several expert groups, usually physicists, engineers, audiologists and hearing aid acousticians. In order to facilitate this complex process, a common platform for development and evaluation is desirable that covers the whole development process and integrates seamlessly the work of the different expert groups. This paper discusses the possibility of using PC-based hardware for this task. Considering the Master-Hearing-Aid (MHA) developed within the HörTech center of competence on hearing technology as an example, it is shown that the approach of using standardized hardware and software is most promising. The MHA allows for the integration of algorithm development using standard software like Matlab and algorithm evaluation using low-delay real-time processing in combination with user-friendly graphical control interfaces. Performance tests show that real-time processing on modern PC-based hardware is possible even with algorithms written in the Matlab script language, although in this case the processing delay is larger and the floating-point performance is lower than for algorithms programmed in the C language. For waveform processing, a total delay of 4.35 ms can be reached at less than 5% CPU load when implemented in C, and a delay of about 94 ms at about 20% CPU load when implemented as a Matlab script. For spectral FFT-based processing, a total delay of 7.25 ms can be reached at less than 10% CPU load when implemented in C, and a delay of about 187 ms at about 30% CPU load when implemented as a Matlab script.

Proceedings ArticleDOI
18 Sep 2006
TL;DR: An array of simple programmable processors designed for DSP applications is implemented in 0.18 µm CMOS and contains 36 asynchronously clocked independent processors.
Abstract: An array of simple programmable processors designed for DSP applications is implemented in 0.18 µm CMOS and contains 36 asynchronously clocked independent processors. The processors operate at 475 MHz, and each processor has a maximum power of 144 mW at 1.8 V and occupies 0.66 mm².

Proceedings ArticleDOI
01 Dec 2006
TL;DR: A new framework for wideband signal acquisition, purpose-built for compressible signals, enables sub-Nyquist data acquisition via an analog-to-information converter (AIC) based on the recently developed theory of compressive sensing.
Abstract: The stability and programmability of digital signal processing systems have motivated engineers to move the analog-to-digital conversion (ADC) process closer and closer to the front end of many signal processing systems in order to perform as much processing as possible in the digital domain. Unfortunately, many important applications, including radar and communication systems, involve wideband signals that seriously stress modern ADCs; sampling these signals above the Nyquist rate is in some cases challenging and in others impossible. While wideband signals by definition have a large bandwidth, often the amount of information they carry per second is much lower; that is, they are compressible in some sense. The first contribution of this paper is a new framework for wideband signal acquisition purpose-built for compressible signals that enables sub-Nyquist data acquisition via an analog-to-information converter (AIC). The framework is based on the recently developed theory of compressive sensing in which a small number of non-adaptive, randomized measurements are sufficient to reconstruct compressible signals. The second contribution of this paper is an AIC implementation design and study of the tradeoffs and nonidealities introduced by real hardware. The goal is to identify and optimize the parameters that dominate the overall system performance.

Journal ArticleDOI
TL;DR: A new methodology for the floating-to-fixed point conversion is proposed for software implementations to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint.
Abstract: Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
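The elementary operation such methodologies automate is the choice of a fixed-point format for each variable. The sketch below quantizes a signal to a few candidate Q1.f formats and measures the resulting quantization SNR empirically; the format search, scaling-operation placement, and execution-time model that constitute the paper's contribution are not reproduced, and the test signal is an assumption.

import numpy as np

def to_fixed(x, int_bits, frac_bits):
    """Quantize to a signed fixed-point format with int_bits.frac_bits
    (two's complement, rounding, saturation)."""
    scale = 2.0 ** frac_bits
    lo, hi = -2 ** (int_bits + frac_bits - 1), 2 ** (int_bits + frac_bits - 1) - 1
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale

rng = np.random.default_rng(11)
x = 0.7 * np.sin(2 * np.pi * 0.01 * np.arange(4096)) + 0.02 * rng.standard_normal(4096)

for frac_bits in (7, 11, 15):                    # Q1.7, Q1.11, Q1.15 candidate formats
    xq = to_fixed(x, 1, frac_bits)
    sqnr = 10 * np.log10(np.mean(x ** 2) / np.mean((x - xq) ** 2))
    # Rule of thumb: roughly 6 dB of accuracy per additional fractional bit
    print(f"Q1.{frac_bits}: measured SQNR = {sqnr:.1f} dB")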

Journal ArticleDOI
TL;DR: In this paper, coherent demodulation of optical multilevel (M-ary) phase-shift-keying (PSK) signals was demonstrated using distributed feedback semiconductor lasers with linewidths of 150 kHz as a transmitter and a local oscillator.
Abstract: We demonstrate coherent demodulation of optical multilevel (M-ary) phase-shift-keying (PSK) signals. Since the carrier phase is estimated accurately through digital signal processing after phase-diversity homodyne detection, the system performance is highly tolerant to the carrier phase noise. By off-line bit-error-rate measurements using distributed feedback semiconductor lasers with linewidths of 150 kHz as a transmitter and a local oscillator, it is shown that binary PSK (M=2), quadrature PSK (M=4), and eight-PSK (M=8) signals are successfully demodulated at the symbol rate of 10 Gsymbol/s

Journal ArticleDOI
TL;DR: An approximate analytical expression for the bit error rate of a QPSK homodyne receiver employing digital signal processing for carrier recovery is derived, and it is found that the BER estimated using the analytical expression is in excellent agreement with Monte-Carlo simulations.
Abstract: An approximate analytical expression for the bit error rate of a QPSK homodyne receiver employing digital signal processing for carrier recovery is derived. The BER estimated using the analytical expression is in excellent agreement with Monte-Carlo simulations. The analytical approximation leads to an intuitive understanding of the trade-offs in such systems and allows optimization of system parameters without resorting to Monte-Carlo simulations.

Proceedings ArticleDOI
01 Aug 2006
TL;DR: In this article, a speed sensorless control scheme for permanent magnet synchronous motor (PMSM) drives using an improved sliding mode observer (SMO) is proposed, in which the estimated rotor position and speed are obtained directly from the equivalent back EMFs.
Abstract: This paper proposes a speed sensorless control scheme for a permanent magnet synchronous motor (PMSM) drive using an improved sliding mode observer (SMO). A low-pass filter with a variable cutoff frequency is not essential in the improved SMO, since the product of the improved SMO gain and the control action of a sigmoid function, which replaces the bang-bang or discontinuous control commonly found in the conventional SMO, can determine the equivalent back EMFs. The estimated rotor position and speed are obtained directly from these back EMFs. Therefore, the low-pass filter can be eliminated, the improved SMO simplifies the conventional SMO, and cutoff frequency tuning is not necessary. A DSP-based digital controller using the TMS320F2812 from Texas Instruments has been employed to realize the proposed sensorless control scheme. Experimental results show that the proposed control scheme achieves robust sensorless operation.
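The role of the sigmoid switching function can be seen on a scalar toy problem rather than a full PMSM drive: an unknown back-EMF-like disturbance in a first-order R-L current equation is reconstructed by a sliding mode observer whose switching term uses a sigmoid instead of a sign function, so the disturbance estimate is read directly from the switching term without a low-pass filter, mirroring the idea described. All plant and observer parameters below are assumed.

import numpy as np

# Toy plant: L di/dt = -R i + u - e(t), with e(t) an unknown back-EMF-like term.
R, L_ind, dt, T = 1.0, 1e-3, 2e-7, 0.02
t = np.arange(0.0, T, dt)
u = 5.0 * np.ones_like(t)                        # constant applied voltage
e_true = 2.0 * np.sin(2 * np.pi * 100 * t)       # unknown sinusoidal "back-EMF"

# Smooth sigmoid (equivalent to 2/(1+exp(-a*s)) - 1) replacing the sign function
switching = lambda s, a=1000.0: np.tanh(a * s / 2.0)

k = 4.0                                          # observer gain, must exceed max|e_true|
i = i_hat = 0.0
e_est = np.zeros_like(t)
for n in range(t.size):
    z = k * switching(i_hat - i)                 # switching term; in sliding mode z tracks e(t)
    e_est[n] = z                                 # read the estimate directly (no low-pass filter)
    di = (-R * i + u[n] - e_true[n]) / L_ind     # true plant
    di_hat = (-R * i_hat + u[n] - z) / L_ind     # sliding mode observer
    i += dt * di
    i_hat += dt * di_hat

err = np.sqrt(np.mean((e_est[t > 0.005] - e_true[t > 0.005]) ** 2))
print("steady-state RMS error of back-EMF estimate:", round(float(err), 4))
# small compared with the 2.0-amplitude disturbance being reconstructed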

Journal ArticleDOI
TL;DR: An algorithm is described that is able to fit a multiharmonic acquired signal using least-squares sine-fitting, determining the amplitude and phase of all harmonics.
Abstract: A new generation of multipurpose measurement equipment is transforming the role of computers in instrumentation. The new features involve mixed devices, such as analog-to-digital and digital-to-analog converters and digital signal processing techniques, that are able to substitute typical discrete instruments like multimeters and analyzers. Signal-processing applications frequently use least-squares (LS) sine-fitting algorithms. Periodic signals may be interpreted as a sum of sine waves with multiple frequencies: the Fourier series. This paper describes an algorithm that is able to fit a multiharmonic acquired signal, determining the amplitude and phase of all harmonics. Simulation and experimental results are presented.
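A minimal multiharmonic least-squares fit, assuming the fundamental frequency is known exactly (the frequency-search aspect of sine fitting is omitted): build a design matrix of cosine and sine columns for each harmonic plus a DC term, solve in the least-squares sense, and convert the coefficients to amplitude and phase. The signal parameters are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(12)
fs, f0, n_harm = 10000.0, 50.0, 5
t = np.arange(2048) / fs

true_amp = np.array([1.0, 0.30, 0.10, 0.05, 0.02])
true_phs = np.array([0.3, -1.2, 2.0, 0.7, -0.4])
x = sum(a * np.cos(2 * np.pi * f0 * (k + 1) * t + p)
        for k, (a, p) in enumerate(zip(true_amp, true_phs)))
x += 0.01 * rng.standard_normal(t.size)          # measurement noise

# Design matrix: [cos(k w t), sin(k w t)] for each harmonic, plus a DC column.
cols = []
for k in range(1, n_harm + 1):
    cols += [np.cos(2 * np.pi * f0 * k * t), np.sin(2 * np.pi * f0 * k * t)]
cols.append(np.ones_like(t))
D = np.column_stack(cols)

theta, *_ = np.linalg.lstsq(D, x, rcond=None)
amp = np.hypot(theta[0:2 * n_harm:2], theta[1:2 * n_harm:2])
phs = np.arctan2(-theta[1:2 * n_harm:2], theta[0:2 * n_harm:2])
print("amplitudes:", np.round(amp, 3))           # close to true_amp
print("phases:    ", np.round(phs, 3))           # close to true_phs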