TL;DR: A constrained epsilon-minimax test is proposed to detect and classify nonorthogonal vectors in Gaussian noise, with a general covariance matrix and in the presence of linear interferences, by minimizing the maximum classification error probability subject to a constraint on the false alarm probability.
Abstract: A constrained epsilon-minimax test is proposed to detect and classify nonorthogonal vectors in Gaussian noise, with a general covariance matrix, and in the presence of linear interferences. This test is epsilon-minimax in the sense that it has a small loss of optimality with respect to the purely theoretical and incalculable constrained minimax test, which minimizes the maximum classification error probability subject to a constraint on the false alarm probability. This loss becomes all the more negligible as the signal-to-noise ratio grows. Furthermore, it is also an epsilon-equalizer test since its classification error probabilities are equalized up to a negligible difference. When the signal-to-noise ratio is sufficiently large, an asymptotically equivalent test with a very simple form is proposed. This equivalent test coincides with the generalized likelihood ratio test when the vectors to classify are strongly separated in terms of Euclidean distance. Numerical experiments on active user identification in a multiuser system confirm the theoretical findings.
This problem has many applications including radar and sonar signal processing [1], image processing [2], speech segmentation [3], [4], integrity monitoring of navigation systems [5], quantitative nondestructive testing [6], network monitoring [7] and digital communication [8] among others.
The unknown nuisance vector belongs to the nuisance parameter subspace spanned by the columns of a known matrix .
A. Relation to Previous Work
From the statistical point of view, this problem of simultaneous detection/classification can be viewed as a hypothesis testing problem between several composite hypotheses [9], [10].
The first approach to the design of statistical detection and classification tests is the uncoupled design strategy, where detection performance is first optimized under the false alarm constraint and classification is then performed only when this optimal detector raises an alarm.
All the above mentioned probabilities generally vary as a function of both the vector and the nuisance parameter .
This strategy is optimal only in some cases.
These results have not yet been extended to the case of multiple hypotheses testing with a constraint on the false alarm probability.
B. Motivation of the Study
The design of the optimal constrained minimax test mainly depends on three major points: 1) the geometric complexity of the vector constellation, 2) the covariance matrix of the Gaussian noise and 3) the presence of nuisance parameters.
In the simplest case, the vectors are orthogonal with equal norms (the least complex vector constellation), the covariance matrix is the identity matrix (possibly multiplied by a known scalar), and there is no nuisance parameter.
This solution depends on some unknown coefficients, namely the optimal weights and the threshold.
The common solution, namely the parity space approach, involves two steps.
When the optimal statistical test is unknown or intractable, it is often assumed that the optimal weights are equal (since it is the least informative a priori choice) and the threshold is tuned to satisfy the false alarm constraint.
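As a rough illustration of this threshold-tuning step (not the paper's procedure), the sketch below calibrates a detection threshold by Monte Carlo so that the empirical false alarm probability stays at a prescribed level; the statistic and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_statistic(y):
    # Hypothetical statistic: the squared norm of the observation
    # (chi-square distributed under a zero-mean Gaussian null hypothesis).
    return float(y @ y)

def calibrate_threshold(alpha, dim, n_mc=100_000):
    """Choose the threshold as the empirical (1 - alpha) quantile of the
    statistic simulated under the null hypothesis (pure noise)."""
    stats = np.array([detection_statistic(rng.standard_normal(dim))
                      for _ in range(n_mc)])
    return np.quantile(stats, 1.0 - alpha)

threshold = calibrate_threshold(alpha=1e-2, dim=8)
print(f"calibrated threshold: {threshold:.2f}")
```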
C. Contribution and Organization of the Paper
The first contribution is the design of a constrained ε-minimax detection/classification test that solves the detection/classification problem in the case of nonorthogonal vectors with linear nuisance parameters and additive Gaussian noise with a known general covariance matrix.
It is also shown that this test coincides with a constrained ε-equalizer Bayesian test which equalizes the classification error probabilities over the alternative hypotheses up to a small constant.
This map is also used to calculate in advance the asymptotic maximum classification error probability of the constrained ε-minimax test as the Signal-to-Noise Ratio (SNR) tends to infinity.
Finally, it is shown that the MGLRT is ε-optimal when the mutual geometry between the hypotheses is very simple, i.e., when each vector has at most one other vector nearest to it in terms of Euclidean distance.
In general, the MGLRT is suboptimal and the loss of optimality may be significant.
II. PROBLEM STATEMENT
This section presents the multiple hypotheses testing problem which consists in detecting and classifying a vector in the presence of linear nuisance parameters.
A new optimality criterion, namely the constrained ε-minimax criterion, is introduced and motivated.
A. Multiple Hypotheses Testing
Without any loss of generality, it is assumed that the noise vector follows a zero-mean Gaussian distribution .
In fact, it is always possible to multiply (1) on the left by the inverse square-root matrix of to obtain the linear Gaussian model with the vector , the nuisance matrix and a Gaussian noise having the covariance matrix .
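The whitening argument can be illustrated with a short sketch; the covariance matrix below is a made-up example, and the inverse square root is computed via an eigendecomposition.

```python
import numpy as np

# Hypothetical covariance matrix (symmetric positive definite).
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Inverse square root of Sigma via its eigendecomposition.
eigval, eigvec = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T

rng = np.random.default_rng(1)
noise = rng.multivariate_normal(mean=np.zeros(2), cov=Sigma, size=50_000)
whitened = noise @ Sigma_inv_sqrt.T   # left-multiplication, applied row-wise

# The empirical covariance of the whitened noise is close to the identity.
print(np.round(np.cov(whitened, rowvar=False), 2))
```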
The following condition of separability, given in (3), is assumed to be satisfied. In other words, it is assumed that the intersection of the two linear manifolds (which are parallel to each other) is empty for all admissible parameter values (two parallel linear manifolds with a nonempty intersection are equal).
Definition 1: A test function for the multiple hypotheses is a vector-valued function defined on the observation space and satisfying the stated conditions; given an observation, the test decides a given hypothesis if and only if the corresponding component of the test function selects it.
The maximum classification error probability for the test function is denoted.
B. Constrained Epsilon-Minimax Test
As mentioned in the introduction, the constrained minimax criterion given in [18] is a very natural criterion for problem (2).
Hence, to overcome this difficulty, it makes sense to consider constrained ε-minimax tests, i.e., tests that approximate the optimal constrained minimax test with a small loss, say ε, of optimality.
Definition 2 states that there exists a positive function ε(·), vanishing as the SNR tends to infinity, such that the maximum classification error probability of the proposed test exceeds that of any other test function in the class by at most ε.
Obviously, Definition 2 assumes that the positive constant ε is (very) small.
In some cases this constant can be made negligible (see Section VI) but, in general, it cannot, because of the vector constellation complexity.
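As a reading aid, one plausible way to formalize Definition 2, reconstructed from the abstract rather than quoted from the paper (the symbols M, alpha_i and K_alpha are notational assumptions), is:

```latex
\max_{1 \le i \le M} \alpha_i(\delta^{\ast})
  \;\le\;
  \inf_{\delta \in \mathcal{K}_{\alpha}} \; \max_{1 \le i \le M} \alpha_i(\delta)
  \;+\; \epsilon(\mathrm{SNR}),
\qquad
\epsilon(\mathrm{SNR}) \to 0 \quad \text{as } \mathrm{SNR} \to \infty .
```

Here alpha_i(delta) stands for the classification error probability of the test delta under the i-th alternative and K_alpha for the class of tests whose false alarm probability does not exceed alpha.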
III. EPSILON-MINIMAX TEST FOR COMPOSITE HYPOTHESES
This section introduces the constrained ε-equalizer Bayesian test at the prescribed false alarm level.
The first step in designing the constrained ε-equalizer Bayesian test between composite hypotheses in the presence of nuisance parameters consists of eliminating these unknown parameters.
Proposition 2 shows that this elimination, based on rejecting the nuisance parameters, leads to a reduced decision problem between simple statistical hypotheses.
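A common way to realize such a rejection, which may differ in detail from the paper's operator, is to project the observation onto the orthogonal complement of the nuisance subspace; the sketch below uses a hypothetical matrix H whose columns span that subspace.

```python
import numpy as np

def rejection_matrix(H):
    """Orthogonal projector onto the complement of span(H):
    W = I - H (H^T H)^{-1} H^T.  Applying W removes any component of the
    form H x, whatever the (unknown) nuisance vector x."""
    n = H.shape[0]
    return np.eye(n) - H @ np.linalg.solve(H.T @ H, H.T)

# Hypothetical nuisance subspace spanned by two known columns.
rng = np.random.default_rng(2)
H = rng.standard_normal((6, 2))
W = rejection_matrix(H)

x_nuisance = rng.standard_normal(2)
print(np.allclose(W @ (H @ x_nuisance), 0.0))   # True: the interference is rejected
```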
A. Constrained Epsilon-Equalizer Test
Let us recall the definition of the constrained Bayesian test before introducing the definition of the constrained ε-equalizer test.
Let be a probability distribution over called the a priori distribution.
The constrained Bayesian test function, at the prescribed level and associated with the a priori distribution, is given by (6) and (7), where the threshold is selected to satisfy the constraint (8). The following ε-equalization criterion serves to design a constrained ε-minimax test.
Definition 3: A test function is a constrained ε-equalizer test between the hypotheses in the class if the two conditions i) and ii) are fulfilled.
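The sketch below illustrates, on made-up data, one plausible reading of such a weighted test: decide the null hypothesis when every weighted likelihood stays below a threshold, otherwise pick the hypothesis with the largest weighted likelihood, and then check empirically how far the classification error probabilities are from being equalized. The constellation, weights and threshold are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

# Hypothetical constellation of whitened, nuisance-free vectors.
thetas = np.array([[3.0, 0.0], [0.0, 3.0], [-3.0, -3.0]])
weights = np.array([0.4, 0.4, 0.2])   # a priori weights (sum to one)
threshold = 1e-2                      # would be tuned to the false alarm level

def decide(y):
    """Return 0 for the null hypothesis, or i in {1..M} for hypothesis H_i."""
    lik = np.array([w * multivariate_normal.pdf(y, mean=t)
                    for w, t in zip(weights, thetas)])
    null_lik = multivariate_normal.pdf(y, mean=np.zeros(2))
    if lik.max() < threshold * null_lik:
        return 0
    return int(np.argmax(lik)) + 1

# Empirical classification error probability under each alternative; an
# epsilon-equalizing choice of weights would make these values nearly equal.
for i, t in enumerate(thetas, start=1):
    samples = t + rng.standard_normal((5_000, 2))
    err = np.mean([decide(y) != i for y in samples])
    print(f"H{i}: empirical classification error = {err:.3f}")
```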
B. Reduction to Epsilon-Equalizer Test for Simple Hypotheses
The presence of linear nuisance parameters complicates the statistical decision problem.
The family of distributions for and remains invariant (see [10] for details and definitions) under the group of translations which induces in the parameter space the group that preserves all the sets , i.e., for all and .
The following proposition shows that a constrained ε-equalizer Bayesian test for the reduced problem (13) is a constrained ε-minimax test for the initial problem (2).
Define the unit simplex of the appropriate dimension. Proof: Let the stated quantities be fixed for all indices.
This comes from the fact that the rejection principle is equivalent to assuming that the parameter follows a degenerate a priori distribution whose support consists of only one value for each hypothesis.
A. Principle of Epsilon-Equalization
Designing a constrained ε-equalizer Bayesian test for the problem (13) is difficult for mainly three reasons: i) the common value of the classification error probabilities is unknown, ii) the weight vector ensuring the ε-equalization of the classification error probabilities is not easily calculable, and iii) the threshold must be chosen according to the prescribed false alarm probability.
Loosely speaking, the separability map is a graph [34], [35] whose topology is deduced from the Euclidean distances between the vectors to be classified.
The neighborhood of characterizes the maximum classification error probability of .
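The construction below is only a plausible illustration of such a graph: each vector is linked to its nearest neighbour(s) in Euclidean distance and the connected components are then extracted; the constellation is hypothetical and the paper's exact construction may differ.

```python
import numpy as np
import networkx as nx

# Hypothetical constellation of vectors to classify.
thetas = np.array([[3.0, 0.0], [0.0, 3.0], [0.5, 3.5], [-4.0, -4.0]])
n = len(thetas)

# Pairwise Euclidean distances.
dist = np.linalg.norm(thetas[:, None, :] - thetas[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)

# Link every vector to its nearest neighbour(s), so the graph topology is
# deduced from the Euclidean distances between the vectors.
G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
    d_min = dist[i].min()
    for j in np.flatnonzero(np.isclose(dist[i], d_min)):
        G.add_edge(i, int(j))

print(list(nx.connected_components(G)))   # components of the toy separability map
```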
Thus, it is proposed to solve an original linear programming problem whose solution gives the remaining weights.
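The paper's linear program is not reproduced here; the toy sketch below merely shows how such a weight-selection problem can be posed and solved with a generic LP solver, using entirely hypothetical constraints (a sum-to-one budget and a per-weight cap).

```python
import numpy as np
from scipy.optimize import linprog

# Toy stand-in for a weight-selection LP (the paper's actual constraints,
# deduced from the separability map, are not reproduced here).
# Variables: weights w_1..w_3 and an auxiliary variable t.
# Objective: maximize the smallest weight (minimize -t with w_i >= t),
# subject to a sum-to-one budget and a per-weight cap.
n = 3
budget, cap = 1.0, 0.5

c = np.zeros(n + 1)
c[-1] = -1.0                                     # maximize t

A_ub = np.hstack([-np.eye(n), np.ones((n, 1))])  # t - w_i <= 0
b_ub = np.zeros(n)
A_eq = np.append(np.ones(n), 0.0).reshape(1, -1) # sum of the weights = budget
b_eq = [budget]
bounds = [(0.0, cap)] * n + [(None, None)]       # 0 <= w_i <= cap, t free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n])   # the computed weights
```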
To the authors' knowledge, such an approach has never been addressed before in the literature.
B. Mutual Geometry for Simple Gaussian Hypotheses
The distance between the null hypothesis and the alternative one is defined in (15), where the involved constant is a real number depending on the prescribed false alarm probability.
Lemma 1: Suppose that the component contains elements satisfying the stated condition.
Then, according to the definition of the critical value, one obtains the stated result.
When the component is not a star graph, the least separable vector is not easily identifiable and it becomes necessary to study the internal geometry of the component in detail in order to carry out the calculation.
The separability map and its two components and are represented in Fig.
C. Constrained Epsilon-Equalizer Test for Simple Hypotheses
To derive the constrained ε-equalizer test, some bounds on the classification error probabilities and also on the false alarm probability are used.
Finally, constraint (28) means that the weight of the noncritical vector must not exceed a certain level imposed by the prescribed false alarm probability.
Theorem 1: Under the assumptions A1) and A2), there exist a weight vector, given by (23) and (29), and a threshold, given by (30), for which the test is a constrained ε-equalizer test between the hypotheses.
The optimal weight vector is computed by using an iterative algorithm.
V. ASYMPTOTICALLY EQUIVALENT TEST
This section proposes some tests asymptotically equivalent to the constrained ε-minimax test given in Theorem 1.
These tests have the same asymptotic maximum classification error but their form is simpler.
A. Simplified Asymptotic Form of the Epsilon-Minimax Test
Let the positive real and the associated quantities be defined as indicated.
Let the index set be cut into three disjoint subsets as specified, where the last notation denotes the component containing the corresponding vector.
Let the test be defined for all observations as indicated, with the threshold given in (34).
Proposition 4: Under the assumptions A1) and A2), the exact comparison between the two tests is not relevant.
It must be noted that this test is not necessarily a constrained ε-equalizer test.
B. Optimality of the MGLRT
Let be the uniform weight vector such that for all .
Define, for each vector, the number of vectors located at the considered distance from it.
Proposition 5: The test with the uniform weight vector and the indicated threshold satisfies the bounds given in (35).
Proof: These bounds follow from (55) applied to the uniform weight vector with this threshold.
Generally, however, this test is not a constrained ε-minimax test.
Corollary 2: Under the assumptions A1), A2) and A3), the test is asymptotically equivalent to the constrained ε-minimax test, as stated in (36). Proof: The result follows from the assumptions A1) and A3).
It is important to note that the assumption A3) is very severe in practice since it imposes very strict requirements on the mutual geometry between the hypotheses.
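For whitened, nuisance-free Gaussian data, a GLRT-style rule with uniform weights reduces to comparing the largest log-likelihood ratio with a threshold and classifying to the maximizing hypothesis; the sketch below is a minimal implementation under these assumptions and does not reproduce the paper's exact MGLRT statistic or threshold.

```python
import numpy as np

# Hypothetical whitened, nuisance-free constellation.
thetas = np.array([[3.0, 0.0], [0.0, 3.0], [-3.0, -3.0]])

def mglrt(y, threshold):
    """Uniform-weight GLRT-style rule: compare the largest log-likelihood
    ratio  y.theta_i - ||theta_i||^2 / 2  (identity noise covariance) with a
    threshold, then classify to the maximizing hypothesis."""
    llr = thetas @ y - 0.5 * np.sum(thetas ** 2, axis=1)
    if llr.max() < threshold:
        return 0                       # decide the null hypothesis
    return int(np.argmax(llr)) + 1     # decide the closest vector

rng = np.random.default_rng(4)
y = thetas[0] + rng.standard_normal(2)
print(mglrt(y, threshold=2.0))
```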
A. Detection of a New User Entrance
In order to make the simulation free of secondary details, let us consider the plainest, yet general enough, model of Direct-Sequence/Code-Division Multiple-Access (DS/CDMA) involving real signatures and Binary Phase Shift Keying (BPSK) data transmission [29], [30].
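A minimal synthetic snapshot of such a model, with hypothetical dimensions and amplitudes, can be generated as follows (y = S A b + noise, with real signatures S, a diagonal amplitude matrix A and antipodal BPSK symbols b).

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative synchronous DS/CDMA snapshot with real signatures and BPSK
# symbols; the dimensions, amplitudes and noise level are hypothetical.
L, K = 16, 4                                              # spreading length, users
S = rng.choice([-1.0, 1.0], size=(L, K)) / np.sqrt(L)     # unit-norm real signatures
A = np.diag([1.0, 0.8, 1.2, 0.9])                         # diagonal amplitude matrix
b = rng.choice([-1.0, 1.0], size=K)                       # antipodal (BPSK) symbols
sigma = 0.3

y = S @ A @ b + sigma * rng.standard_normal(L)            # received chip-rate vector
print(y.shape)
```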
Following [31], the fact that is not explicitly taken into account.
Strictly speaking, the goal is to estimate the entry time, the signature waveform vector, and the transmitted bit of the new user.
This is a change detection/classification problem between hypotheses [31].
In contrast to the sequential strategy, the repeated Fixed Size Sample (FSS) strategy is easily applicable to systems with a variable structure for the quickest detection and classification of changes.
B. Simulation Results
For simplicity, it is assumed that the signatures are chosen such that the rejection mechanism (see Subsection III.B) leads to the reduced model in which the signal is either zero or one of the listed candidate vectors.
It has the same separability values and critical values as the original one.
This zone contains the maximum classification error probability .
The false alarm zone of the constrained ε-minimax test is the region delimited by the following two curves.
The second one, at the bottom of the region, is the lower bound given in (58).
VII. CONCLUSION
This paper has proposed a constrained ε-minimax test to detect and classify nonorthogonal vectors in the presence of linear nuisance parameters.
Proposition 2 shows that these nuisance parameters can be rejected without any significant loss of optimality.
Theorem 1 proposes a test which classifies the nonorthogonal vectors obtained after this rejection by equalizing the classification error probabilities up to a small constant under a constraint on the false alarm probability.
The test design is based on some weighting coefficients which are computed by solving a linear programming problem deduced from the separability map.
Finally, Proposition 5 proves that the proposed test clearly outperforms the well-known MGLRT when the vector constellation is sufficiently complex.
TL;DR: A suboptimal CUSUM-type transient change detection algorithm, based on a subclass of truncated Sequential Probability Ratio Tests, is proposed, and the optimization of the proposed algorithm in this subclass leads to a specially designed Finite Moving Average Test.
Abstract: This paper addresses the detection of a suddenly arriving dynamic profile of a finite duration often called a transient change. In contrast to the traditional abrupt change detection, where the post-change period is assumed to be infinitely long, the detection of a suddenly arriving transient change should be done before it disappears. The detection of transient changes after their disappearance is considered as missed. Hence, the traditional quickest change detection criterion, minimizing the average detection delays provided a prescribed false alarm rate, is compromised. The proposed optimality criterion minimizes the worst case probability of missed detection provided that the worst case probability of false alarm during a certain period is upper bounded. A suboptimal CUSUM-type transient change detection algorithm, based on a subclass of truncated Sequential Probability Ratio Tests, is proposed. The optimization of the proposed algorithm in this subclass leads to a specially designed Finite Moving Average Test. The proposed method is analyzed theoretically and by simulation. A special attention is paid to the case of Gaussian observations with a dynamic profile.
TL;DR: This paper addresses the problem of distinguishing between two vector lines observed through noisy measurements and proposes a suboptimal test, called the epsilon most stringent test, which has a very simple form and its statistical properties are expressed in closed-form.
Abstract: This paper addresses the problem of distinguishing between two vector lines observed through noisy measurements. This is a hypothesis testing problem where the two hypotheses are composite since the signal amplitudes are deterministic and not known. An ideal criterion of optimality, namely the most stringent test, consists in minimizing the maximum shortcoming of the test subject to a constrained false alarm probability. The maximum shortcoming corresponds to the maximum gap between the power function of the test and the envelope power function which is defined as the supremum of the power over all tests satisfying the prescribed false alarm probability. The most stringent test is unfortunately intractable. Hence, a suboptimal test, called the epsilon most stringent test, is proposed. This test has a very simple form and its statistical properties are expressed in closed-form. It is numerically shown that the proposed test has a small loss of optimality and that it outperforms the generalized likelihood ratio test.
5 citations
Cites background or methods from "Constrained Epsilon-Minimax Test fo..."
...The notations used throughout the paper come from [25], [26]....
[...]
...It can be concluded that (15). It must be noted that this is a symmetric function, i.e., (16)...
TL;DR: The main contribution of the paper is the design of the Bayesian test with a quadratic loss function and its asymptotic study; the numerical experiments show that the proposed test outperforms the Bayesian test associated with the 0-1 loss function when compared using the quadratic loss function.
Abstract: The Bayesian test with the 0-1 loss function is a standard solution to a multiple hypothesis testing problem in the Bayesian framework. For a large number of applications (like intrusion detection, anomaly detection, etc.), the alternative hypotheses have quite different importance and the 0-1 loss function does not reflect the reality. The quadratic loss function can be more appropriate to distinguish the concurrent hypotheses. The main contribution of the paper is the design of the Bayesian test with a quadratic loss function and its asymptotic study. When the signal-to-noise ratio tends to infinity, it is theoretically established that the error probabilities of the proposed test coincide with the error probabilities of the standard one associated with the 0-1 loss function. In the non-asymptotic case, the numerical experiments show that the proposed test outperforms the Bayesian test associated with the 0-1 loss function when compared using the quadratic loss function.
TL;DR: The optimization procedure for computing the discrete box-constrained minimax classifier is presented, and a projected subgradient algorithm which computes the prior maximizing this concave multivariate piecewise affine function over a polyhedral domain is considered.
Abstract: In this paper, we present the optimization procedure for computing the discrete box-constrained minimax classifier introduced in [1, 2]. Our approach processes discrete or beforehand discretized features. A box-constrained region defines some bounds for each class proportion independently. The box-constrained minimax classifier is obtained from the computation of the least favorable prior which maximizes the minimum empirical risk of error over the box-constrained region. After studying the discrete empirical Bayes risk over the probabilistic simplex, we consider a projected subgradient algorithm which computes the prior maximizing this concave multivariate piecewise affine function over a polyhedral domain. The convergence of our algorithm is established.
TL;DR: A Bayesian test with a modified quadratic loss function is proposed to solve a multiple hypothesis testing (MHT) problem and the conditional asymptotic equivalence between these two tests is theoretically established.
Abstract: A Bayesian test has been previously proposed for a multiple hypothesis testing (MHT) problem with a quadratic loss function such that this problem can fit some real applications where the concurrent hypotheses should be distinguished. However, this MHT problem as well as this quadratic loss function are insufficient for some other applications such as simultaneous intrusion detection and localization in a wireless sensor network (WSN). This kind of application can be considered as an MHT problem with a null hypothesis. Therefore, a Bayesian test with a modified quadratic loss function is proposed to solve this MHT problem. The non-asymptotic bounds for analyzing the performance of the proposed test and of the Bayesian test with the 0-1 loss function are obtained, from which the conditional asymptotic equivalence between these two tests is then theoretically established. The effectiveness of these bounds and the analysis of the conditional asymptotic equivalence are verified by the simulation results.
1 citation
Cites background from "Constrained Epsilon-Minimax Test fo..."
...MOTIVATION AND CONTRIBUTION This paper deals with a multiple hypothesis testing (MHT) problem [1][2][3] in the Bayesian framework....
[...]
...Remark 1: The MHT problem with an unknown prior distribution has been tackled in a minimax framework [2][5][6]....
TL;DR: In this paper, the authors present Graph Theory with Applications, a collection of applications of graph theory in the field of Operational Research and Management. Journal of the Operational Research Society, Vol. 28, Issue 1, pp. 237-238.
Abstract: (1977). Graph Theory with Applications. Journal of the Operational Research Society, Vol. 28, Issue 1, pp. 237-238.
TL;DR: Graph Theory, Fourth Edition, is a standard textbook of modern graph theory which covers the core material of the subject with concise yet reliably complete proofs, while offering glimpses of more advanced methods in each field by one or two deeper results.
Abstract: Graph Theory, Fourth Edition. This standard textbook of modern graph theory, now in its fourth edition, combines the authority of a classic with the engaging freshness of style that is the hallmark of active mathematics. It covers the core material of the subject with concise yet reliably complete proofs, while offering glimpses of more advanced methods in each field by one or two deeper results, again with proofs given in full detail.
TL;DR: This self-contained and comprehensive book sets out the basic details of multiuser detection, starting with simple examples and progressing to state-of-the-art applications.
Abstract: From the Publisher:
The development of multiuser detection techniques is one of the most important recent advances in communications technology. This self-contained and comprehensive book sets out the basic details of multiuser detection, starting with simple examples and progressing to state-of-the-art applications. The only prerequisites assumed are undergraduate-level probability, linear algebra, and digital communications. The book contains over 240 exercises and will be a suitable textbook for electrical engineering students. It will also be an ideal self-study guide for practicing engineers, as well as a valuable reference volume for researchers in communications, information theory, and signal processing.
5,048 citations
"Constrained Epsilon-Minimax Test fo..." refers background or methods in this paper
...of user is the diagonal matrix of user amplitudes and is the vector whose th component is the antipodal symbol, -1 or 1, transmitted by user [29], [30]....
[...]
...In order to make the simulation free of secondary details, let us consider the plainest, yet general enough, model of Direct-Sequence/Code-Division Multiple-Access (DS/CDMA) involving real signatures and Binary Phase Shift Keying (BPSK) data transmission [29], [30]....
[...]
...Proof: For a large value of , it is well-known that [29]...
TL;DR: In this article, the authors review the state of the art of fault detection and isolation in automatic processes using analytical redundancy, and present some new results with emphasis on the latest attempts to achieve robustness with respect to modelling errors.
Abstract: The paper reviews the state of the art of fault detection and isolation in automatic processes using analytical redundancy, and presents some new results. It outlines the principles and most important techniques of model-based residual generation using parameter identification and state estimation methods with emphasis upon the latest attempts to achieve robustness with respect to modelling errors. A solution to the fundamental problem of robust fault detection, providing the maximum achievable robustness by decoupling the effects of faults from each other and from the effects of modelling errors, is given. This approach not only completes the theory but is also of great importance for practical applications. For the case where the prerequisites for complete decoupling are not given, two approximate solutions—one in the time domain and one in the frequency domain—are presented, and the crossconnections to earlier approaches are evidenced. The resulting observer schemes for robust instrument fault detection, component fault detection, and actuator fault detection are briefly discussed. Finally, the basic scheme of fault diagnosis using a combination of analytical and knowledge-based redundancy is outlined.
Q1. What are the contributions mentioned in the paper "Constrained epsilon-minimax test for simultaneous detection and classification" ?
In this paper, a constrained epsilon-minimax test is proposed to detect and classify nonorthogonal vectors in Gaussian noise, with a general covariance matrix, and in the presence of linear interferences.