Journal ArticleDOI

Fundamentals of statistical signal processing: Estimation theory: by Steven M. KAY; Prentice Hall signal processing series; Prentice Hall; Englewood Cliffs, NJ, USA; 1993; xii + 595 pp.; $65; ISBN: 0-13-345711-7

01 Aug 1994 - Control Engineering Practice - Vol. 2, Iss. 4, p. 728
About: This article was published in Control Engineering Practice on 1994-08-01 and has received 337 citations to date. It focuses on the topic of statistical signal processing.
Citations
Journal ArticleDOI
TL;DR: It is shown that when decoherence is taken into account, the maximal possible quantum enhancement in the asymptotic limit of infinite N amounts generically to a constant factor rather than quadratic improvement.
Abstract: Quantum metrology employs the properties of quantum states to further enhance the accuracy of some of the most precise measurement schemes to date. Here, a method for estimating the upper bounds to achievable precision in quantum-enhanced metrology protocols in the presence of decoherence is presented.
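The abstract contrasts the ideal quadratic quantum enhancement with the constant-factor gain that survives decoherence. A minimal numeric sketch of the three precision scalings (my own illustration, not the paper's method; the constant `c` is an arbitrary illustrative value):

```python
import math

def shot_noise_limit(n):
    # Standard quantum limit: precision ~ 1/sqrt(N) for N independent probes
    return 1.0 / math.sqrt(n)

def heisenberg_limit(n):
    # Ideal entanglement-enhanced precision ~ 1/N (quadratic improvement)
    return 1.0 / n

def decoherence_limited(n, c=0.5):
    # With decoherence, the asymptotic gain over the shot-noise limit is
    # generically only a constant factor c < 1 (c = 0.5 is illustrative)
    return c / math.sqrt(n)

for n in (10, 100, 1000):
    print(n, shot_noise_limit(n), heisenberg_limit(n), decoherence_limited(n))
```

As N grows, the decoherence-limited bound keeps the 1/sqrt(N) shape of the shot-noise limit rather than the 1/N Heisenberg shape, which is the paper's headline point.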

608 citations

01 Jan 2003
TL;DR: This article distills spectral unmixing algorithms into a unique set and surveys their characteristics through hierarchical taxonomies that reveal the commonalities and differences between algorithms.
Abstract: ■ Spatial pixel sizes for multispectral and hyperspectral sensors are often large enough that numerous disparate substances can contribute to the spectrum measured from a single pixel. Consequently, the desire to extract from a spectrum the constituent materials in the mixture, as well as the proportions in which they appear, is important to numerous tactical scenarios in which subpixel detail is valuable. With this goal in mind, spectral unmixing algorithms have proliferated in a variety of disciplines that exploit hyperspectral data, often duplicating and renaming previous techniques. This article distills these approaches into a unique set and surveys their characteristics through hierarchical taxonomies that reveal the commonalities and differences between algorithms. A set of criteria organizes algorithms according to the philosophical assumptions they impose on the unmixing problem. Examples demonstrate the performance of key techniques.

469 citations


Cites background from "Fundamentals of statistical signal ..."

  • ...Full additivity requires the abundances in a to sum to one [14], and this requirement restricts the solution to lie on the hyperplane given by...

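The full-additivity constraint quoted above (abundances summing to one) can be illustrated in the simplest two-endmember case, where the sum-to-one constraint reduces the least-squares unmixing problem to one dimension. The spectra below are hypothetical:

```python
def unmix_two_endmembers(x, e1, e2):
    """Sum-to-one constrained unmixing for two endmembers.

    With abundances a and 1 - a (full additivity), the mixed spectrum is
    x ~ a*e1 + (1 - a)*e2, so the constrained least-squares solution
    lives on a line and has a closed form.
    """
    d = [u - v for u, v in zip(e1, e2)]   # direction e1 - e2
    r = [u - v for u, v in zip(x, e2)]    # residual x - e2
    a = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)
    return a, 1.0 - a

# Hypothetical endmember spectra and a noise-free 30/70 mixture
e1 = [0.9, 0.8, 0.1, 0.1]
e2 = [0.1, 0.2, 0.7, 0.9]
x = [0.3 * u + 0.7 * v for u, v in zip(e1, e2)]
print(unmix_two_endmembers(x, e1, e2))  # abundances close to (0.3, 0.7)
```

With more endmembers, the same constraint restricts the abundance vector to a hyperplane, as the excerpt describes, and the problem is solved with constrained least squares rather than this scalar shortcut.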

Journal ArticleDOI
TL;DR: In this article, the authors derived closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with $N$ while maintaining high rates.
Abstract: Massive multiple-input multiple-output (MIMO) systems are cellular networks where the base stations (BSs) are equipped with unconventionally many antennas, deployed on co-located or distributed arrays. Huge spatial degrees-of-freedom are achieved by coherent processing over these massive arrays, which provide strong signal gains, resilience to imperfect channel knowledge, and low interference. This comes at the price of more infrastructure; the hardware cost and circuit power consumption scale linearly/affinely with the number of BS antennas $N$ . Hence, the key to cost-efficient deployment of large arrays is low-cost antenna branches with low circuit power, in contrast to today's conventional expensive and power-hungry BS antenna branches. Such low-cost transceivers are prone to hardware imperfections, but it has been conjectured that the huge degrees-of-freedom would bring robustness to such imperfections. We prove this claim for a generalized uplink system with multiplicative phase-drifts, additive distortion noise, and noise amplification. Specifically, we derive closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with $N$ while maintaining high rates. The connection between this scaling law and the power consumption of different transceiver circuits is rigorously exemplified. This reveals that one can make the circuit power increase as $\sqrt{N} $ , instead of linearly, by careful circuit-aware system design.
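The abstract's cost argument is that conventional deployments pay circuit power linearly in the number of antennas N, while circuit-aware design lets it grow only as sqrt(N). A toy comparison under that stated scaling (the per-branch power value is a hypothetical parameter, not from the paper):

```python
import math

def total_circuit_power(n_antennas, p_per_branch, scaling="linear"):
    """Total BS circuit power for N antenna branches (illustrative model).

    'linear' models the conventional per-branch cost p*N; 'sqrt' models the
    circuit-aware design described in the cited paper, where total circuit
    power is allowed to grow only as p*sqrt(N) while rates stay high.
    """
    if scaling == "linear":
        return p_per_branch * n_antennas
    return p_per_branch * math.sqrt(n_antennas)

for n in (100, 400, 1600):
    lin = total_circuit_power(n, 0.1, "linear")
    srt = total_circuit_power(n, 0.1, "sqrt")
    print(f"N={n}: linear={lin:.1f} W, sqrt={srt:.1f} W")
```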

399 citations

Journal ArticleDOI
TL;DR: The proposed power scheduling scheme suggests that the sensors with bad channels or poor observation qualities should decrease their quantization resolutions or simply become inactive in order to save power.
Abstract: We consider the optimal power scheduling problem for the decentralized estimation of a noise-corrupted deterministic signal in an inhomogeneous sensor network. Sensor observations are first quantized into discrete messages, then transmitted to a fusion center where a final estimate is generated. Supposing that the sensors use a universal decentralized quantization/estimation scheme and an uncoded quadrature amplitude modulated (QAM) transmission strategy, we determine the optimal quantization and transmit power levels at local sensors so as to minimize the total transmit power, while ensuring a given mean squared error (mse) performance. The proposed power scheduling scheme suggests that the sensors with bad channels or poor observation qualities should decrease their quantization resolutions or simply become inactive in order to save power. For the remaining active sensors, their optimal quantization and transmit power levels are determined jointly by individual channel path losses, local observation noise variance, and the targeted mse performance. Numerical examples show that in inhomogeneous sensing environment, significant energy savings is possible when compared to the uniform quantization strategy.

390 citations


Cites background from "Fundamentals of statistical signal ..."

  • ...Thus, we can obtain from (21) and (22) that Similarly, we have Since the above bounds are independent of , we obtain the following estimates for the unconditioned means (23) Let be the mse of the centralized BLUE [defined in (7)] (24) Now we calculate the part of mse due to the channel distortion....


  • ...Recall from the property of BLUE (2) that the optimal weight of is proportional to ....


  • ...The estimator at the fusion center is a generalized version of the BLUE estimator (2) which weighs the message functions linearly with weights decided by both the observation noise and the quantization noise....


  • ...,K} to the fusion center, the fusion center can simply perform the linear minimum MSE estimation to recover θ which leads to the following Best Linear Unbiased Estimator (BLUE) [13] θK = Γ(x1, x2, ....


  • ...4, the percentage of active sensors versus the normalized deviation of channel path losses is plotted by keeping distribution of local sensor noise variances fixed choosing , and the target mse where is the mse of the centralized BLUE defined in (3)....

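The excerpts above lean repeatedly on the Best Linear Unbiased Estimator from Kay's text. For a deterministic scalar observed by K sensors in zero-mean uncorrelated noise, the BLUE reduces to inverse-variance weighting, which is why sensors with poor observation quality get small weight (or are switched off) in the power-scheduling scheme. A minimal sketch with made-up sensor values:

```python
def blue_estimate(observations, noise_vars):
    """BLUE of a scalar theta from measurements x_k = theta + n_k.

    For zero-mean uncorrelated noise with variances sigma_k^2, the BLUE is
        theta_hat = (sum_k x_k / sigma_k^2) / (sum_k 1 / sigma_k^2),
    with estimator variance 1 / (sum_k 1 / sigma_k^2).
    """
    weights = [1.0 / v for v in noise_vars]
    theta_hat = sum(w * x for w, x in zip(weights, observations)) / sum(weights)
    variance = 1.0 / sum(weights)
    return theta_hat, variance

# The noisy sensor (variance 4.0) contributes far less than the
# accurate one (variance 0.25); values here are purely illustrative
theta_hat, var = blue_estimate([1.2, 0.9, 1.05], [4.0, 0.25, 1.0])
print(theta_hat, var)
```

The fusion-center estimator in the cited paper generalizes these weights to account for quantization noise as well as observation noise, as the third excerpt notes.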

Journal ArticleDOI
TL;DR: This paper presents a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases and shows that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small.
Abstract: The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all of the above measurement cases. The advantages of CWLS include performance optimality and the capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and approximately attain the Cramér-Rao lower bound when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.

276 citations


Cites methods from "Fundamentals of statistical signal ..."

  • ...Figure 1 shows the mean square range errors (MSREs) of the TOA-based CWLS and NLS estimators as well as CRLB versus power of distance error based on the TOA measurements....


  • ...From the figures, we observe that the performance of all the proposed methods approached the corresponding CRLBs for sufficiently small measurement errors, which verified their optimality at sufficiently high SNRs....


  • ...The optimum value ofΨ is also determined based on the BLUE as follows....


  • ...PERFORMANCE ANALYSIS As briefly mentioned in Section 1, the CWLS and WLS estimators in Section 3 can achieve zero bias and the CRLB approximately when the noise is uncorrelated and small in power....


  • ...We have proved that for small uncorrelated noise disturbances, the performance of all the proposed CWLS and WLS algorithms attains zero bias and the Cramér-Rao lower bound (CRLB) approximately....

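For context on the TOA case discussed in these excerpts, the standard baseline is the linearized least-squares position solution: subtracting one range equation from the others cancels the quadratic terms in the unknown position. This sketch is that plain LS baseline, not the paper's CWLS estimator (which additionally weights the equations and enforces the quadratic constraint); the anchor and source coordinates are hypothetical:

```python
import math

def toa_least_squares(anchors, ranges):
    """Linearized least-squares TOA positioning in 2-D.

    For ranges r_i to anchors (x_i, y_i), subtracting the first range
    equation from the others gives the linear system
        2(x_i - x_1) x + 2(y_i - y_1) y
            = r_1^2 - r_i^2 + x_i^2 - x_1^2 + y_i^2 - y_1^2.
    """
    (x1, y1), r1 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2.0 * (xi - x1), 2.0 * (yi - y1)])
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # Solve the 2x2 normal equations (A^T A) p = A^T b directly
    ata = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]
    atb = [sum(row[i] * bi for row, bi in zip(A, b)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return x, y

# Noise-free check: source at (2, 3) with four anchors at the corners
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
src = (2.0, 3.0)
ranges = [math.hypot(src[0] - ax, src[1] - ay) for ax, ay in anchors]
print(toa_least_squares(anchors, ranges))  # close to (2.0, 3.0)
```

With noise-free ranges this recovers the source exactly; with noisy ranges its error grows with the measurement error variance, which is the regime where the paper's weighting and constraint handling pay off.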

References

Journal ArticleDOI
TL;DR: A general rule is derived for how the Fisher information scales with the tuning width, regardless of the exact shape of the tuning function, the probability distribution of spikes, and allowing some correlated noise between neurons.
Abstract: Sensory and motor variables are typically represented by a population of broadly tuned neurons. A coarser representation with broader tuning can often improve coding accuracy, but sometimes the accuracy may also improve with sharper tuning. The theoretical analysis here shows that the relationship between tuning width and accuracy depends crucially on the dimension of the encoded variable. A general rule is derived for how the Fisher information scales with the tuning width, regardless of the exact shape of the tuning function, the probability distribution of spikes, and allowing some correlated noise between neurons. These results demonstrate a universal dimensionality effect in neural population coding.

272 citations
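The scaling rule described in this abstract can be checked numerically in the one-dimensional case with independent Poisson neurons and Gaussian tuning curves, where each neuron contributes f'(s)^2 / f(s) to the Fisher information. The peak rate, tuning widths, and center grid below are illustrative choices, not values from the paper:

```python
import math

def population_fisher_info(s, centers, width, r_max=10.0):
    """Fisher information about stimulus s for independent Poisson neurons
    with Gaussian tuning f_i(s) = r_max * exp(-(s - c_i)^2 / (2 w^2)).

    For Poisson spiking, neuron i contributes f_i'(s)^2 / f_i(s). The cited
    result is that the population total scales as w^(D-2) in stimulus
    dimension D, so in this 1-D sketch sharper tuning carries more information.
    """
    total = 0.0
    for c in centers:
        f = r_max * math.exp(-(s - c) ** 2 / (2.0 * width ** 2))
        fprime = f * (c - s) / width ** 2   # derivative of the Gaussian tuning curve
        if f > 0.0:
            total += fprime ** 2 / f
    return total

centers = [i * 0.1 for i in range(-100, 101)]  # dense 1-D population of 201 neurons
narrow = population_fisher_info(0.0, centers, width=0.5)
broad = population_fisher_info(0.0, centers, width=2.0)
print(narrow, broad)  # narrow tuning wins in 1-D, per the w^(D-2) rule
```

Repeating the sum over a grid of centers in D dimensions reverses the comparison for D >= 3, which is the dimensionality effect the abstract highlights.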