
Showing papers on "Noise (signal processing) published in 2008"


Journal ArticleDOI
TL;DR: This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery, which is eigen decomposition based, unsupervised, and fully automatic.
Abstract: Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery. The method, which is termed hyperspectral signal identification by minimum error, is eigen decomposition based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated by using simulated and real hyperspectral images.
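
The selection step is compact enough to sketch. Below is a minimal, simplified reading in NumPy, assuming the noise correlation matrix Rn has already been estimated (the paper estimates it from the data itself); the inclusion rule shown, keeping a direction when the signal power it captures exceeds twice the noise power it admits, is one way to express the least-squares criterion, not the authors' exact formulation.

```python
import numpy as np

def hysime_like(Y, Rn):
    """Sketch of a HySime-style signal-subspace estimate.

    Y  : (bands, pixels) hyperspectral data matrix
    Rn : (bands, bands) estimated noise correlation matrix (assumed given)
    Returns the estimated subspace dimension and an orthonormal basis.
    """
    N = Y.shape[1]
    Ry = (Y @ Y.T) / N              # observed correlation matrix
    Rs = Ry - Rn                    # crude signal-correlation estimate
    w, E = np.linalg.eigh(Rs)       # eigendecomposition
    E = E[:, np.argsort(w)[::-1]]   # sort eigenvectors by decreasing power
    # Keep a direction when its signal power exceeds twice its noise power
    # (a simplified reading of the minimum mean-square-error rule).
    keep = [i for i in range(E.shape[1])
            if E[:, i] @ Rs @ E[:, i] > 2 * (E[:, i] @ Rn @ E[:, i])]
    return len(keep), E[:, keep]
```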

1,154 citations


Posted Content
TL;DR: In this article, the eigenvalues of the covariance matrix of signals received at the secondary users are used for signal detection in cognitive radio systems, and the proposed methods overcome the noise uncertainty problem, and can even perform better than the ideal energy detection when the signals to be detected are highly correlated.
Abstract: Spectrum sensing is a fundamental component of a cognitive radio. In this paper, we propose new sensing methods based on the eigenvalues of the covariance matrix of signals received at the secondary users. In particular, two sensing algorithms are suggested: one is based on the ratio of the maximum eigenvalue to the minimum eigenvalue; the other is based on the ratio of the average eigenvalue to the minimum eigenvalue. Using recent results from random matrix theory (RMT), we quantify the distributions of these ratios and derive the probabilities of false alarm and detection for the proposed algorithms. We also find the thresholds of the methods for a given probability of false alarm. The proposed methods overcome the noise uncertainty problem and can even perform better than ideal energy detection when the signals to be detected are highly correlated. The methods can be used for various signal detection applications without requiring knowledge of the signal, channel, or noise power. Simulations based on randomly generated signals, wireless microphone signals, and captured ATSC DTV signals are presented to verify the effectiveness of the proposed methods.
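
The first of the two statistics is easy to compute once received samples are stacked into vectors. A minimal sketch for a single receiver, with an illustrative smoothing factor L; the decision threshold itself comes from the paper's random-matrix analysis and is not reproduced here.

```python
import numpy as np

def max_min_eigenvalue_ratio(x, L=8):
    """Ratio of the largest to smallest eigenvalue of the sample
    covariance matrix built from L consecutive received samples."""
    N = len(x) - L + 1
    X = np.stack([x[i:i + N] for i in range(L)])   # (L, N) sample vectors
    R = (X @ X.conj().T) / N                       # sample covariance
    w = np.linalg.eigvalsh(R)                      # ascending eigenvalues
    return w[-1] / w[0]

# Declare "signal present" when the ratio exceeds a threshold chosen
# for the target false-alarm probability (derived via RMT in the paper).
```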

1,022 citations


Journal ArticleDOI
TL;DR: A new ECG enhancement method based on the recently developed empirical mode decomposition (EMD) that is able to remove both high-frequency noise and baseline wander (BW) with minimum signal distortion; the method is validated through experiments on the MIT-BIH databases.

604 citations


Posted Content
TL;DR: In this paper, spectrum-sensing algorithms are proposed based on the sample covariance matrix calculated from a limited number of received signal samples, which do not need any information about the signal, channel, and noise power a priori.
Abstract: Spectrum sensing, i.e., detecting the presence of primary users in a licensed spectrum, is a fundamental problem in cognitive radio. Since the statistical covariances of the received signal and noise are usually different, they can be used to differentiate the case where the primary user's signal is present from the case where there is only noise. In this paper, spectrum sensing algorithms are proposed based on the sample covariance matrix calculated from a limited number of received signal samples. Two test statistics are then extracted from the sample covariance matrix. A decision on the signal presence is made by comparing the two test statistics. Theoretical analysis for the proposed algorithms is given. The detection probability and the associated threshold are found based on statistical theory. The methods do not require any a priori information about the signal, the channel, or the noise power. Also, no synchronization is needed. Simulations based on narrowband signals, captured digital television (DTV) signals, and multiple antenna signals are presented to verify the methods.
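
The intuition admits a short sketch: with white noise only, the sample covariance is nearly diagonal, so a statistic built from all entries can be compared against one built from the diagonal alone. The form below follows the covariance absolute value (CAV) style of detector; the smoothing factor is illustrative and the threshold would come from the false-alarm analysis.

```python
import numpy as np

def covariance_test_statistics(x, L=8):
    """Two test statistics from the sample covariance matrix (sketch).

    Correlation in the received signal puts energy into the
    off-diagonal entries; white noise does not.
    """
    N = len(x) - L + 1
    X = np.stack([x[i:i + N] for i in range(L)])
    C = np.abs((X @ X.conj().T) / N)
    T1 = C.sum() / L                 # average row mass, all entries
    T2 = np.trace(C) / L             # diagonal (energy) term only
    return T1, T2                    # decide "present" when T1 / T2 > gamma
```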

494 citations


Proceedings ArticleDOI
25 Jun 2008
TL;DR: The experiment described in this paper indicates that RSSI is, in fact, a poor distance estimator when using wireless sensor networks in buildings.
Abstract: In today's wireless ZigBee-based modules, there are two well-known values for link quality estimation: RSSI (received signal strength indicator) and LQI (link quality indicator). According to wireless channel models, received power should be a function of distance, which suggests that RSSI could be used to estimate distances between nodes. The experiment described in this paper indicates that RSSI is, in fact, a poor distance estimator when using wireless sensor networks in buildings: reflection, scattering, and other physical effects have an extreme impact on the RSSI measurement, so we must conclude that RSSI is a bad distance estimator.
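
For context, the distance estimate such experiments test is the inversion of the log-distance path-loss model; a small sketch with assumed calibration values (the reference RSSI, reference distance, and path-loss exponent below are illustrative, and indoors the exponent varies widely, which is precisely the paper's point).

```python
def rssi_to_distance(rssi_dbm, rssi_d0=-40.0, d0=1.0, n=2.7):
    """Invert the log-distance path-loss model (illustrative values).

    rssi_d0 : RSSI measured at the reference distance d0
    n       : path-loss exponent; highly environment-dependent indoors
    """
    return d0 * 10 ** ((rssi_d0 - rssi_dbm) / (10.0 * n))

# e.g. rssi_to_distance(-67.0) -> 10 m under these assumptions
```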

293 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used recent data on noise levels in gene expression to show that it should be possible to transmit much more than just one regulatory bit, which would require that the dynamic range of TF concentrations used by the cell, the input/output relation of the regulatory module, and the noise in gene expressions satisfy certain matching relations.
Abstract: In the simplest view of transcriptional regulation, the expression of a gene is turned on or off by changes in the concentration of a transcription factor (TF). We use recent data on noise levels in gene expression to show that it should be possible to transmit much more than just one regulatory bit. Realizing this optimal information capacity would require that the dynamic range of TF concentrations used by the cell, the input/output relation of the regulatory module, and the noise in gene expression satisfy certain matching relations, which we derive. These results provide parameter-free, quantitative predictions connecting independently measurable quantities. Although we have considered only the simplified problem of a single gene responding to a single TF, we find that these predictions are in surprisingly good agreement with recent experiments on the Bicoid/Hunchback system in the early Drosophila embryo and that this system achieves ∼90% of its theoretical maximum information transmission.
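
The capacity in question is the maximum, over input distributions, of the mutual information between TF concentration and expression level. A numerical sketch of that computation using the standard Blahut-Arimoto iteration; the Hill-type input/output relation and constant Gaussian noise below are hypothetical stand-ins for the measured quantities the paper works with.

```python
import numpy as np

# Discretize TF concentration c and expression level g; the Hill curve
# and noise level are illustrative, not the paper's measured relations.
c = np.linspace(0.01, 2.0, 200)
g = np.linspace(0.0, 1.2, 240)
g_mean = c**2 / (c**2 + 0.5**2)                  # hypothetical Hill function
sigma = 0.05                                     # assumed output noise
p_g_c = np.exp(-(g[None, :] - g_mean[:, None])**2 / (2 * sigma**2))
p_g_c /= p_g_c.sum(axis=1, keepdims=True)        # rows are p(g | c)

# Blahut-Arimoto iteration toward the capacity-achieving input p(c).
p_c = np.full(len(c), 1.0 / len(c))
for _ in range(300):
    p_g = p_c @ p_g_c                            # output marginal p(g)
    D = (p_g_c * np.log(p_g_c / np.maximum(p_g, 1e-300))).sum(axis=1)
    p_c *= np.exp(D)
    p_c /= p_c.sum()

# At convergence, sum_c p(c) D(c) equals the channel capacity.
print(f"capacity ~ {(p_c @ D) / np.log(2):.2f} bits")
```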

282 citations


Journal ArticleDOI
TL;DR: A wavelet-based denoising technique for the recovery of a signal contaminated by additive white Gaussian noise, together with a new thresholding procedure, called subband adaptive, which outperforms existing thresholding techniques.
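
A minimal sketch of subband-wise wavelet thresholding with PyWavelets; the per-subband universal threshold used here is a common placeholder, not the paper's specific subband-adaptive rule.

```python
import numpy as np
import pywt

def subband_denoise(x, wavelet="db4", level=5):
    """Wavelet denoising with a separate threshold per detail subband."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Robust noise estimate from the finest detail coefficients (MAD).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    out = [coeffs[0]]                              # keep approximation
    for d in coeffs[1:]:
        t = sigma * np.sqrt(2 * np.log(len(d)))    # per-subband threshold
        out.append(pywt.threshold(d, t, mode="soft"))
    return pywt.waverec(out, wavelet)
```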

224 citations


Journal ArticleDOI
TL;DR: A new method to automatically determine the number of components from a limited number of (possibly) high dimensional noisy samples, based on the eigenvalues of the sample covariance matrix, which compares favorably with other common algorithms.

222 citations


Journal ArticleDOI
TL;DR: The main challenges associated with noninvasive, continuous, wearable, and long-term breathing monitoring are analyzed and an algorithm has been devised to detect breathing, suitable for a miniature sensor device.
Abstract: This paper analyzes the main challenges associated with noninvasive, continuous, wearable, and long-term breathing monitoring. The characteristics of an acoustic breathing signal from a miniature sensor are studied in the presence of sources of noise and interference artifacts that affect the signal. Based on these results, an algorithm has been devised to detect breathing. It is possible to implement the algorithm on a single integrated circuit, making it suitable for a miniature sensor device. The algorithm is tested in the presence of noise sources on five subjects and shows an average success rate of 91.3% (combined true positives and true negatives).

213 citations


Journal ArticleDOI
TL;DR: BCED can be much better than ED for highly correlated signals, and most importantly, it does not need noise power estimation and overcomes ED's susceptibility to noise uncertainty.
Abstract: In this letter, a method is proposed to optimally combine the received signal samples in space and time based on the principle of maximizing the signal-to-noise ratio (SNR). After the combining, energy detection (ED) is used. However, optimal combining needs information of the source signal and channel, which is usually unknown. To overcome this difficulty, a method is proposed to blindly combine the signal samples. Similar to energy detection, blindly combined energy detection (BCED) does not need any information of the source signal and the channel a priori. BCED can be much better than ED for highly correlated signals, and most importantly, it does not need noise power estimation and overcomes ED's susceptibility to noise uncertainty. Also, perfect synchronization is not required. Simulations based on wireless microphone signals and randomly generated signals are presented to verify the methods.
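
One simplified reading of the test statistic admits a short sketch: combine along the leading eigenvector of the sample covariance and normalize by the average per-dimension energy, so that the unknown noise power cancels. This is illustrative, not the paper's exact formulation.

```python
import numpy as np

def bced_like_statistic(x, L=8):
    """Blindly-combined energy detection statistic (simplified).

    The leading eigenvalue is the energy captured after combining
    along the dominant eigenvector; dividing by the average
    eigenvalue removes any dependence on the absolute noise power.
    """
    N = len(x) - L + 1
    X = np.stack([x[i:i + N] for i in range(L)])
    R = (X @ X.conj().T) / N
    w = np.linalg.eigvalsh(R)
    return w[-1] / w.mean()        # compare to a threshold slightly above 1
```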

210 citations


Journal ArticleDOI
TL;DR: A method for removing unwanted components of biological origin from neurophysiological recordings such as magnetoencephalography, electroencephalography, or multichannel electrophysiological or optical recordings, using spatial filters synthesized with a blind source separation method known as denoising source separation (DSS).

Posted Content
TL;DR: In this paper, the authors present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem, and show that the algorithm has the following properties (made more precise in the main text of the paper)
Abstract: Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper):
- It gives near-optimal error guarantees.
- It is robust to observation noise.
- It succeeds with a minimum number of observations.
- It can be used with any sampling operator for which the operator and its adjoint can be computed.
- The memory requirement is linear in the problem size.
- Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint.
- It requires a fixed number of iterations depending only on the logarithm of a form of signal-to-noise ratio of the signal.
- Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
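
The iteration itself is only a few lines. A minimal sketch, assuming the sampling operator is scaled so its spectral norm is at most one (as the analysis requires); the dense matrix Phi and the sizes below are purely illustrative.

```python
import numpy as np

def iht(y, Phi, K, iters=500):
    """Iterative hard thresholding for y ~ Phi @ x with ||x||_0 <= K.
    Assumes Phi has been scaled so that its spectral norm is <= 1."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x + Phi.T @ (y - Phi @ x)          # gradient (Landweber) step
        small = np.argsort(np.abs(x))[:-K]     # indices of all but K largest
        x[small] = 0.0                         # hard threshold
    return x

# Example: recover a 5-sparse vector from 60 random measurements.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((60, 256))
Phi /= np.linalg.norm(Phi, 2)                  # enforce ||Phi||_2 <= 1
x0 = np.zeros(256)
x0[rng.choice(256, 5, replace=False)] = 1.0
x_hat = iht(Phi @ x0, Phi, K=5)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```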

Journal ArticleDOI
TL;DR: A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented; it accounts for most of the key properties of the data and is more powerful than the original model.
Abstract: A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications.

Journal ArticleDOI
TL;DR: Equalization-enhanced phase noise (EEPN) imposes a tighter constraint on the receive laser phase noise for transmission systems with high symbol rate and large electronically-compensated chromatic dispersion.
Abstract: In coherent optical systems employing electronic digital signal processing, the fiber chromatic dispersion can be gracefully compensated in electronic domain without resorting to optical techniques. Unlike optical dispersion compensator, the electronic equalizer enhances the impairments from the laser phase noise. This equalization-enhanced phase noise (EEPN) imposes a tighter constraint on the receive laser phase noise for transmission systems with high symbol rate and large electronically-compensated chromatic dispersion.

Journal ArticleDOI
David O. Walsh1
TL;DR: In this article, reference-coil-based noise cancellation and integrated FID imaging are used to increase the effective signal-to-noise ratios by an order of magnitude or more, which enables multi-coil surface NMR to produce useful and reliable images when the post-averaged SNR is less than 1.

Patent
Chu Hee Lee1, Jonathan Lee1, Daniel Rosario1, Edward Kim1, Thomas Chan1 
03 Oct 2008
TL;DR: In this article, the system bus is queried for one or more possible sources of a noise component in the input signal, such as window status, fan blower speed, vehicle speed, etc.
Abstract: A voice command acquisition method and system for motor vehicles is improved in that noise source information is obtained directly from the vehicle system bus. Upon receiving an input signal with a voice command, the system bus is queried for one or more possible sources of a noise component in the input signal. In addition to vehicle-internal information (e.g., window status, fan blower speed, vehicle speed), the system may acquire external information (e.g., weather status) in order to better classify the noise component in the input signal. If the noise source is found to be a window, for example, the driver may be prompted to close the window. In addition, if the fan blower is at a high speed level, it may be slowed down automatically.

Journal ArticleDOI
TL;DR: In this article, a spatial filter was developed that incorporates the noise and full signal variance-covariance matrices to tailor the filter to the error characteristics of a particular monthly solution; it can accommodate noise of an arbitrary shape, such as the characteristic stripes.
Abstract: Most applications of the publicly released Gravity Recovery and Climate Experiment monthly gravity field models require the application of a spatial filter to help suppress noise and other systematic errors present in the data. The most common approach makes use of a simple Gaussian averaging process, which is often combined with a 'destriping' technique in which coefficient correlations within a given degree are removed. As brute force methods, neither of these techniques takes into consideration the statistical information from the gravity solution itself and, while they perform well overall, they can often end up removing more signal than necessary. Other optimal filters have been proposed in the literature; however, none have attempted to make full use of all information available from the monthly solutions. By examining the underlying principles of filter design, a filter has been developed that incorporates the noise and full signal variance-covariance matrix to tailor the filter to the error characteristics of a particular monthly solution. The filter is both anisotropic and nonsymmetric, meaning it can accommodate noise of an arbitrary shape, such as the characteristic stripes. The filter minimizes the mean-square error and, in this sense, can be considered the most optimal filter possible. Through both simulated and real data scenarios, this improved filter is shown to preserve the highest amount of gravity signal when compared to other standard techniques, while simultaneously minimizing leakage effects and producing smooth solutions in areas of low signal.
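
Stripped of the geodesy specifics, the underlying construction is the classic minimum mean-square error (Wiener) filter applied to the vector of spherical-harmonic coefficients. A generic sketch of that form, with the signal and noise covariance matrices assumed available:

```python
import numpy as np

def mmse_filter(y, S, N):
    """Wiener-type filtering of a coefficient vector y (generic form).

    S : full signal variance-covariance matrix (assumed given)
    N : noise variance-covariance matrix of the monthly solution
    W = S (S + N)^{-1} is anisotropic and nonsymmetric whenever S and
    N are, which is what lets such a filter target correlated errors
    like the characteristic stripes.
    """
    W = S @ np.linalg.inv(S + N)
    return W @ y
```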

Proceedings ArticleDOI
08 Dec 2008
TL;DR: Several spectrum sensing methods designed using the generalized likelihood ratio test (GLRT) paradigm, for application in a cognitive radio network are proposed, showing that by making various assumptions on the availability of side information such as noise variance and signal space dimension, several feasible algorithms result which all outperform the standard energy detector.
Abstract: In this paper, we propose several spectrum sensing methods designed using the generalized likelihood ratio test (GLRT) paradigm, for application in a cognitive radio network. The proposed techniques utilize the eigenvalues of the sample covariance matrix of the received signal vector, taking advantage of the fact that in practice, the primary signal in a cognitive radio environment will either occupy a subspace of dimension strictly smaller than the dimension of the observation space, or have a spectrum that is non-white. We show that by making various assumptions on the availability of side information such as noise variance and signal space dimension, several feasible algorithms result which all outperform the standard energy detector.

Journal ArticleDOI
TL;DR: This paper introduces a 2D strain imaging technique based on minimizing a cost function using dynamic programming (DP) that incorporates similarity of echo amplitudes and displacement continuity and generates high-quality strain images of freehand palpation elastography with up to 10% compression.
Abstract: This paper introduces a 2D strain imaging technique based on minimizing a cost function using dynamic programming (DP). The cost function incorporates similarity of echo amplitudes and displacement continuity. Since tissue deformations are smooth, the incorporation of the smoothness into the cost function results in reduced decorrelation noise. As a result, the method generates high-quality strain images of freehand palpation elastography with up to 10% compression, showing that the method is more robust to signal decorrelation (caused by scatterer motion in high axial compression and nonaxial motions of the probe) in comparison to the standard correlation techniques. The method operates in less than 1 s and is thus also potentially suitable for real time elastography.
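
A 1-D sketch of that cost structure, hypothetical in its details: the data term rewards amplitude similarity between the pre- and post-compression signals, the regularizer penalizes displacement jumps between neighboring samples, and dynamic programming finds the globally minimal path.

```python
import numpy as np

def dp_displacement(pre, post, max_d=10, alpha=0.2):
    """Dynamic-programming displacement estimate along one line.

    cost[i, d] = |pre[i] - post[i + d]|                         (similarity)
               + min_d' ( cost[i-1, d'] + alpha * (d - d')^2 )  (continuity)
    """
    n, D = len(pre), np.arange(-max_d, max_d + 1)
    cost = np.full((n, len(D)), 1e18)
    back = np.zeros((n, len(D)), dtype=int)
    for j, d in enumerate(D):
        if 0 <= d < len(post):
            cost[0, j] = abs(pre[0] - post[d])
    for i in range(1, n):
        for j, d in enumerate(D):
            if 0 <= i + d < len(post):
                prev = cost[i - 1] + alpha * (D[j] - D) ** 2
                back[i, j] = int(np.argmin(prev))
                cost[i, j] = abs(pre[i] - post[i + d]) + prev[back[i, j]]
    # Backtrack the minimal-cost path; strain is the gradient of the path.
    path, j = np.zeros(n, dtype=int), int(np.argmin(cost[-1]))
    for i in range(n - 1, -1, -1):
        path[i], j = D[j], back[i, j]
    return path
```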

Journal ArticleDOI
TL;DR: A block thresholding estimation procedure is introduced, which adjusts all parameters adaptively to the signal's properties by minimizing a Stein estimate of the risk.
Abstract: Removing noise from audio signals requires nondiagonal processing of time-frequency coefficients to avoid producing “musical noise.” State-of-the-art algorithms perform a parameterized filtering of spectrogram coefficients with empirically fixed parameters. A block thresholding estimation procedure is introduced, which adjusts all parameters adaptively to the signal's properties by minimizing a Stein estimate of the risk. Numerical experiments demonstrate the performance and robustness of this procedure through objective and subjective evaluations.
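
A sketch of the block attenuation step, with a fixed block size and threshold multiplier where the paper chooses them adaptively via the Stein risk estimate; the STFT noise level sigma is assumed known or separately estimated, and trailing edge bins are left untouched for brevity.

```python
import numpy as np
from scipy.signal import stft, istft

def block_threshold(x, fs, sigma, block=(2, 4), lam=1.5):
    """Attenuate whole time-frequency blocks instead of single
    coefficients, suppressing the isolated residual peaks that are
    heard as musical noise. sigma is the noise level per STFT
    coefficient (assumed known here)."""
    f, t, Z = stft(x, fs=fs, nperseg=256)
    Bf, Bt = block
    for i in range(0, Z.shape[0] - Bf + 1, Bf):
        for j in range(0, Z.shape[1] - Bt + 1, Bt):
            blk = Z[i:i + Bf, j:j + Bt]
            energy = np.mean(np.abs(blk) ** 2)
            # Empirical-Wiener-style block gain, clipped at zero.
            gain = max(0.0, 1.0 - lam * sigma**2 / max(energy, 1e-20))
            Z[i:i + Bf, j:j + Bt] = gain * blk
    _, x_hat = istft(Z, fs=fs, nperseg=256)
    return x_hat
```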

Journal ArticleDOI
TL;DR: The error in estimating the derivative(s) of a noisy signal with a high-gain observer is studied and quantified in terms of the infinity norms of the noise and a derivative of the signal.
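
A standard two-state high-gain observer makes the trade-off concrete: shrinking the gain parameter speeds up tracking but amplifies measurement noise roughly in proportion to its inverse. The gains and epsilon below are illustrative, and the Euler step requires the sample period to be well below epsilon.

```python
import numpy as np

def high_gain_derivative(y, dt, eps=0.02):
    """Estimate dy/dt from samples y with a two-state high-gain
    observer (standard form; gains h1=2, h2=1 are assumed)."""
    x1, x2 = float(y[0]), 0.0
    dy = np.empty(len(y))
    for k, yk in enumerate(y):
        e = yk - x1                          # output estimation error
        x1 += dt * (x2 + (2.0 / eps) * e)    # state estimate
        x2 += dt * (1.0 / eps**2) * e        # derivative estimate
        dy[k] = x2
    return dy
```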

Patent
16 Jan 2008
TL;DR: In this paper, an active noise control (ANC) system with a plurality of microphones and a multiplicity of loudspeakers is described. And the adaptive filter bank is configured to filter the reference signal to provide the loudspeaker signals as filtered signals.
Abstract: The present disclosure relates to an active noise control (ANC) system. In accordance with one aspect of the invention, the ANC system includes a plurality of microphones and a plurality of loudspeakers. Each microphone is configured to provide an error signal that represents a residual noise signal. Each loudspeaker is configured to receive a loudspeaker signal and to radiate a respective acoustic signal. The ANC system further includes an adaptive filter bank, which is supplied with a reference signal and configured to filter the reference signal to provide the loudspeaker signals as filtered signals. The filter characteristics of the adaptive filter bank are adapted such that a cost function is minimized. The cost function thereby represents the weighted sum of the squared error signals.
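
Adaptation of this kind is typically of the filtered-x LMS family. A single-channel sketch under that assumption, with made-up primary and secondary paths; the claimed system generalizes this to banks of filters, several microphones, and a weighted sum of squared error signals.

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 20000, 32, 1e-3
x = rng.standard_normal(n)                 # reference (noise) signal
p = np.array([0.0, 0.0, 0.8, 0.4])         # primary path (assumed)
s = np.array([0.0, 0.6, 0.3])              # secondary path (assumed known)
d = np.convolve(x, p)[:n]                  # noise arriving at the microphone

w = np.zeros(taps)                         # adaptive filter coefficients
xh, fxh = np.zeros(taps), np.zeros(taps)   # reference / filtered-x history
yh, fh = np.zeros(len(s)), np.zeros(len(s))
err = np.zeros(n)

for k in range(n):
    xh = np.roll(xh, 1); xh[0] = x[k]
    yh = np.roll(yh, 1); yh[0] = w @ xh            # loudspeaker signal
    err[k] = d[k] + s @ yh                         # residual at the mic
    fh = np.roll(fh, 1); fh[0] = x[k]
    fxh = np.roll(fxh, 1); fxh[0] = s @ fh         # x filtered through s
    w -= mu * err[k] * fxh                         # filtered-x LMS update

print(f"noise power {np.mean(d[-2000:]**2):.3f} -> "
      f"residual {np.mean(err[-2000:]**2):.4f}")
```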

Proceedings ArticleDOI
22 Sep 2008
TL;DR: A new algorithm for estimating the signal-to-noise ratio (SNR) of speech signals, called WADA-SNR (Waveform Amplitude Distribution Analysis) is introduced, which shows significantly less bias and less variability with respect to the type of noise compared to the standard NIST STNR algorithm.
Abstract: In this paper, we introduce a new algorithm for estimating the signal-to-noise ratio (SNR) of speech signals, called WADA-SNR (Waveform Amplitude Distribution Analysis). In this algorithm we assume that the amplitude distribution of clean speech can be approximated by the Gamma distribution with a shaping parameter of 0.4, and that an additive noise signal is Gaussian. Based on this assumption, we can estimate the SNR by examining the amplitude distribution of the noise-corrupted speech. We evaluate the performance of the WADA-SNR algorithm on databases corrupted by white noise, background music, and interfering speech. The WADA-SNR algorithm shows significantly less bias and less variability with respect to the type of noise compared to the standard NIST STNR algorithm. In addition, the algorithm is quite computationally efficient.
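
The core of the method reduces to a single scalar statistic: the log of the mean amplitude minus the mean of the log amplitudes, which under the assumed Gamma-plus-Gaussian model is monotone in SNR. A sketch that builds the statistic-to-SNR mapping by simulating the assumed distributions rather than copying the paper's table:

```python
import numpy as np

def wada_statistic(z):
    """log(mean |z|) - mean(log |z|); monotone in SNR under the
    assumed two-sided Gamma (shape 0.4) speech plus Gaussian noise."""
    a = np.abs(z) + 1e-12
    return np.log(a.mean()) - np.log(a).mean()

def build_table(snrs_db, n=1_000_000, seed=0):
    """Statistic values at each candidate SNR, by simulation."""
    rng = np.random.default_rng(seed)
    speech = rng.gamma(0.4, 1.0, n) * rng.choice([-1.0, 1.0], n)
    speech /= speech.std()
    vals = []
    for snr in snrs_db:
        noise = rng.standard_normal(n) * 10 ** (-snr / 20)
        vals.append(wada_statistic(speech + noise))
    return np.array(vals)

snrs = np.arange(-10, 31, 2)
table = build_table(snrs)                  # increasing with SNR
# For an observed signal z:  np.interp(wada_statistic(z), table, snrs)
```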

05 Nov 2008
TL;DR: This is a selective review article that attempts to synthesize some recent work on “nonlinear” wavelet methods in nonparametric curve estimation and their role in a variety of applications.
Abstract: The development of wavelet theory has in recent years spawned applications in signal processing, in fast algorithms for integral transforms, and in image and function representation methods. This last application has stimulated interest in wavelet applications to statistics and to the analysis of experimental data, with many successes in the efficient analysis, processing, and compression of noisy signals and images. This is a selective review article that attempts to synthesize some recent work on “nonlinear” wavelet methods in nonparametric curve estimation and their role in a variety of applications. After a short introduction to wavelet theory, we discuss in detail several wavelet shrinkage and wavelet thresholding estimators, scattered in the literature and developed, under more or less standard settings, for density estimation from i.i.d. observations or to denoise data modeled as observations of a signal with additive noise. Most of these methods are fitted into the general concept of regularization with appropriately chosen penalty functions. A narrow range of applications in major areas of statistics is also discussed, such as partial linear regression models and functional index models. The usefulness of all these methods is illustrated by means of simulations and practical examples.

Proceedings ArticleDOI
TL;DR: In this paper, the authors extend the camera identification technology based on sensor noise to a more general setting when the image under investigation has been simultaneously cropped and scaled. And they demonstrate that sensor noise can be used as a template to reverse-engineer in-camera geometrical processing as well as recover from later geometric transformations, thus offering a possible application for resynchronizing in digital watermark detection.
Abstract: In this paper, we extend our camera identification technology based on sensor noise to a more general setting when the image under investigation has been simultaneously cropped and scaled. The sensor fingerprint detection is formulated using hypothesis testing as a two-channel problem and a detector is derived using the generalized likelihood ratio test. A brute force search is proposed to find the scaling factor which is then refined in a detailed search. The cropping parameters are determined from the maximum of the normalized cross-correlation between two signals. The accuracy and limitations of the proposed technique are tested on images that underwent a wide range of cropping and scaling, including images that were acquired by digital zoom. Additionally, we demonstrate that sensor noise can be used as a template to reverse-engineer in-camera geometrical processing as well as recover from later geometrical transformations, thus offering a possible application for re-synchronizing in digital watermark detection.
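
The search itself is simple to outline. A heavily simplified sketch: rescale the fingerprint over a grid of candidate factors, score each by its cross-correlation peak with the image's noise residual, and read the cropping offset off the peak location; the normalization below is a crude stand-in for the paper's normalized cross-correlation.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import fftconvolve

def corr_peak(residual, template):
    """Peak cross-correlation between noise residual and template,
    normalized by overall energies (crude, for illustration)."""
    a = residual - residual.mean()
    b = template - template.mean()
    xc = fftconvolve(a, b[::-1, ::-1], mode="full")
    return xc.max() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-20)

def search_scale(residual, fingerprint, scales):
    """Brute-force scale search; refine around the winner afterwards,
    as the paper does with a detailed search."""
    return max(scales, key=lambda s: corr_peak(residual, zoom(fingerprint, s)))

# e.g. search_scale(res, fp, np.linspace(0.5, 1.0, 26))
```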

Proceedings ArticleDOI
18 May 2008
TL;DR: This research aims to improve the convenience and mobility of EEG recording by eliminating the need for conductive gel and creating sensors that fit into a scalable array architecture.
Abstract: Electroencephalograph (EEG) recording systems offer a versatile, non-invasive window on the brain's spatiotemporal activity for many neuroscience and clinical applications. Our research aims to improve the convenience and mobility of EEG recording by eliminating the need for conductive gel and creating sensors that fit into a scalable array architecture. The EEG dry-contact electrodes are created with micro-electro-mechanical systems (MEMS) technology. Each channel of our analog signal-processing front-end comes on a custom-built, dime-sized circuit board which contains an amplifier, filters, and analog-to-digital conversion. A daisy-chain configuration between boards with bit-serial output reduces the wiring needed. A system consisting of seven sensors is demonstrated in a real-world setting. Consuming just 3 mW, it is suitable for mobile applications. The system achieves an input-referred noise of 0.28 μVrms in the signal band of 1 to 100 Hz, comparable to the best medical-grade systems in use. Noise behavior across the daisy-chain is characterized, alpha-band rhythms are detected, and an eye-blink study is demonstrated.

Proceedings ArticleDOI
19 Mar 2008
TL;DR: Simulation results show that a coordinated beamforming system can significantly outperform a conventional system with per-cell signal processing and also naturally leads to a distributed implementation.
Abstract: In a conventional wireless cellular system, signal processing is performed on a per-cell basis; out-of-cell interference is treated as background noise. This paper considers the benefit of coordinating base-stations across multiple cells in a multi-antenna beamforming system, where multiple base-stations may jointly optimize their respective beamformers to improve the overall system performance. This paper focuses on a downlink scenario where each remote user is equipped with a single antenna, but where multiple remote users may be active simultaneously in each cell. The design criterion is the minimization of the total weighted transmitted power across the base-stations subject to signal-to-interference-and-noise-ratio (SINR) constraints at the remote users. The main contribution is a practical algorithm that is capable of finding the joint optimal beamformers for all base-stations globally and efficiently. The proposed algorithm is based on a generalization of uplink-downlink duality to the multi-cell setting using the Lagrangian duality theory. The algorithm also naturally leads to a distributed implementation. Simulation results show that a coordinated beamforming system can significantly outperform a conventional system with per-cell signal processing.

Journal ArticleDOI
TL;DR: A photonic subsampling ADC is demonstrated that downconverts and digitizes a narrowband microwave signal at 40 GHz carrier frequency with higher than 7 effective-number-of-bit (ENOB) resolution.
Abstract: Conversion of analog signals into digital signals is one of the most important functionalities in modern signal processing systems. As the signal frequency increases beyond 10 GHz, the timing jitter from electronic clocks, currently limited at ~100 fs, compromises the achievable resolution of analog-to-digital converters (ADCs). Owing to their ultralow timing jitter, the use of optical pulse trains from passively mode-locked lasers has been considered to be a promising way for sampling electronic signals. In this paper, based on sub-10 fs jitter optical sampling pulse trains, we demonstrate a photonic subsampling ADC that downconverts and digitizes a narrowband microwave signal at 40 GHz carrier frequency with higher than 7 effective-number-of-bit (ENOB) resolution.

Journal ArticleDOI
TL;DR: A model for the 3D NPS, DQE, and NEQ of CBCT is presented that reduces to conventional descriptions of axial CT as a special case and provides a fairly general framework that can be applied to the design and optimization of CBCT systems for various applications.
Abstract: The physical factors that govern 2D and 3D imaging performance may be understood from quantitative analysis of the spatial-frequency-dependent signal and noise transfer characteristics [e.g., modulation transfer function (MTF), noise-power spectrum (NPS), detective quantum efficiency (DQE), and noise-equivalent quanta (NEQ)] along with a task-based assessment of performance (e.g., detectability index). This paper advances a theoretical framework based on cascaded systems analysis for calculation of such metrics in cone-beam CT (CBCT). The model considers the 2D projection NPS propagated through a series of reconstruction stages to yield the 3D NPS and allows quantitative investigation of tradeoffs in image quality associated with acquisition and reconstruction techniques. While the mathematical process of 3D image reconstruction is deterministic, it is shown that the process is irreversible, the associated reconstruction parameters significantly affect the 3D DQE and NEQ, and system optimization should consider the full 3D imaging chain. Factors considered in the cascade include: system geometry; number of projection views; logarithmic scaling; ramp, apodization, and interpolation filters; 3D back-projection; and 3D sampling (noise aliasing). The model is validated in comparison to experiment across a broad range of dose, reconstruction filters, and voxel sizes, and the effects of 3D noise correlation on detectability are explored. The work presents a model for the 3D NPS, DQE, and NEQ of CBCT that reduces to conventional descriptions of axial CT as a special case and provides a fairly general framework that can be applied to the design and optimization of CBCT systems for various applications.

Journal ArticleDOI
TL;DR: A reconstruction filter that limits the frequency components beyond the Nyquist frequency in the z direction, referred to as the slice thickness filter, eliminates noise aliasing and improves 3D DQE.
Abstract: The optimization of digital breast tomosynthesis (DBT) geometry and reconstruction is crucial for the clinical translation of this exciting new imaging technique. In the present work, the authors developed a three-dimensional (3D) cascaded linear system model for DBT to investigate the effects of detector performance, imaging geometry, and image reconstruction algorithm on the reconstructed image quality. The characteristics of a prototype DBT system equipped with an amorphous selenium flat-panel detector and filtered backprojection reconstruction were used as an example in the implementation of the linear system model. The propagation of signal and noise in the frequency domain was divided into six cascaded stages incorporating the detector performance, imaging geometry, and reconstruction filters. The reconstructed tomosynthesis imaging quality was characterized by spatial frequency dependent presampling modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) in 3D. The results showed that both MTF and NPS were affected by the angular range of the tomosynthesis scan and the reconstruction filters. For image planes parallel to the detector (in-plane), MTF at low frequencies was improved with increase in angular range. The shape of the NPS was affected by the reconstruction filters. Noise aliasing in 3D could be introduced by insufficient voxel sampling, especially in the z (slice-thickness) direction where the sampling distance (slice thickness) could be more than ten times that for in-plane images. Aliasing increases the noise at high frequencies, which causes degradation in DQE. Application of a reconstruction filter that limits the frequency components beyond the Nyquist frequency in the z direction, referred to as the slice thickness filter, eliminates noise aliasing and improves 3D DQE. The focal spot blur, which arises from continuous tube travel during tomosynthesis acquisition, could degrade DQE significantly because it introduces correlation in signal only, not NPS.