
Showing papers on "Sequential probability ratio test published in 2009"


Journal ArticleDOI
TL;DR: It is shown that there is a range of network radii in which PUEA are most successful, and that for the same desired threshold on the probability of missing the primary, WSPRT can achieve a probability of successful PUEA 50% less than that obtained by NPCHT.
Abstract: We present a Neyman-Pearson composite hypothesis test (NPCHT) and a Wald's sequential probability ratio test (WSPRT) to detect primary user emulation attacks (PUEA) in cognitive radio networks. Most approaches in the literature on PUEA assume the presence of underlying sensor networks for localization of the malicious nodes. There are no analytical studies available in the literature to study PUEA in the presence of multiple malicious users in fading wireless environments. We present an NPCHT and WSPRT based analysis to detect PUEA in fading wireless channels in the presence of multiple randomly located malicious users. We show that there is a range of network radii in which PUEA are most successful. Results also show that for the same desired threshold on the probability of missing the primary, WSPRT can achieve a probability of successful PUEA 50% less than that obtained by NPCHT.
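Both tests here build on Wald's classical SPRT machinery, even though the paper's fading-channel likelihood models are not reproduced in the abstract. Below is a minimal sketch in Python of the generic Wald test, assuming two fully specified hypotheses; the Gaussian densities in the usage example are illustrative stand-ins, not the paper's received-power model.

```python
import math
import random

def wald_sprt(samples, logpdf0, logpdf1, alpha=0.05, beta=0.05):
    """Wald's SPRT: accumulate the log-likelihood ratio (LLR) sample by
    sample until it crosses a boundary derived from the target error rates."""
    upper = math.log((1 - beta) / alpha)   # crossing upward -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing downward -> accept H0
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += logpdf1(x) - logpdf0(x)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

# Illustrative use: H0: N(0, 1) vs. H1: N(1, 1). Normalizing constants
# cancel in the ratio, so unnormalized log-densities suffice.
logpdf = lambda mu: (lambda x: -0.5 * (x - mu) ** 2)
data = (random.gauss(1.0, 1.0) for _ in range(10_000))
print(wald_sprt(data, logpdf(0.0), logpdf(1.0)))
```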

115 citations


Journal ArticleDOI
TL;DR: The problem of detecting greedy behavior in the IEEE 802.11 MAC protocol is revisited by evaluating the performance of two previously proposed schemes: DOMINO and the sequential probability ratio test (SPRT), and a new analytical formulation of the SPRT is derived that considers access to the wireless medium in discrete time slots.
Abstract: We revisit the problem of detecting greedy behavior in the IEEE 802.11 MAC protocol by evaluating the performance of two previously proposed schemes: DOMINO and the sequential probability ratio test (SPRT). Our evaluation is carried out in four steps. We first derive a new analytical formulation of the SPRT that considers access to the wireless medium in discrete time slots. Then, we introduce an analytical model for DOMINO. As a third step, we evaluate the theoretical performance of the SPRT and DOMINO with newly introduced metrics that take into account the repeated nature of the tests. This theoretical comparison provides two major insights into the problem: it confirms the optimality of the SPRT, and it motivates us to define yet another test, a nonparametric CUSUM statistic that shares the same intuition as DOMINO but gives better performance. We conclude the paper with experimental results, confirming the correctness of our theoretical analysis and validating the introduction of the new nonparametric CUSUM statistic.
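The paper's exact nonparametric CUSUM statistic is not given in the abstract, but the recursion such detectors share is standard. A minimal sketch, with the drift correction and alarm threshold as illustrative tuning parameters rather than the authors' values:

```python
def np_cusum(observations, drift, threshold):
    """One-sided nonparametric CUSUM: accumulate drift-corrected deviations,
    clamped at zero; alarm once the running statistic exceeds the threshold."""
    s = 0.0
    for n, x in enumerate(observations, start=1):
        s = max(0.0, s + x - drift)  # drift keeps s near 0 under normal behavior
        if s > threshold:
            return n                 # alarm raised at observation n
    return None                      # no alarm over the observed window
```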

57 citations


01 Jan 2009
TL;DR: Rather than make a classification decision for an individual after administering a fixed number of items, it is possible to sequentially select items to maximize information, update the estimated classification probabilities and then evaluate whether there is enough information to terminate testing.
Abstract: Rather than make a classification decision (pass/fail, below basic/basic/proficient/advanced) for an individual after administering a fixed number of items, it is possible to sequentially select items to maximize information, update the estimated classification probabilities and then evaluate whether there is enough information to terminate testing. In measurement this is frequently called adaptive or tailored testing. In statistics, this is called sequential testing.

57 citations


Journal ArticleDOI
TL;DR: In this article, a method for computing the probability of a fault given multiple different types of residual processors is presented. The method uses the Shiryayev sequential probability ratio test to estimate the probability of the presence of a fault signal given the residuals generated from either parity relationships or fault detection filters, a fault map of the impact of each fault signal on the residuals, and an adaptive fault estimation scheme that enables processing with fewer residuals.
Abstract: A method for detecting faults in the navigation and control system of deep space satellites is presented. A new method for computing the probability of a fault given multiple different types of residual processors is presented. The method uses the Shiryayev sequential probability ratio test to estimate the probability of the presence of a fault signal given the residuals generated from either parity relationships or fault detection filters, a fault map of the impact of each fault signal on the residuals, and an adaptive fault estimation scheme that enables processing with fewer residuals. This new methodology is applied to the detection of fault signals in the attitude control system and navigation system of deep space satellites. First, a sensor fusion process is presented for blending star tracker data, gyro data, accelerometer data, and information from the vehicle control system to form the best estimate of the navigation state. Then a set of fault detection filters is developed that detect and uniquely identify faults in each of the sensors or actuators. Decision-making is handled through sequential processing. Simulation results for a single-satellite system are presented.

36 citations


Journal ArticleDOI
TL;DR: In this paper, a general problem of testing two simple hypotheses about the distribution of a discrete-time stochastic process is considered, and the main goal is to minimize an average sample number over all sequential tests whose error probabilities do not exceed some prescribed levels.
Abstract: A general problem of testing two simple hypotheses about the distribution of a discrete-time stochastic process is considered. The main goal is to minimize an average sample number over all sequential tests whose error probabilities do not exceed some prescribed levels. As a criterion of minimization, the average sample number under a third hypothesis is used (modified Kiefer–Weiss problem). For a class of sequential testing problems, the structure of optimal sequential tests is characterized. An application to the Kiefer–Weiss problem for discrete-time stochastic processes is proposed. As another application, the structure of Bayes sequential tests for two composite hypotheses, with a fixed cost per observation, is given. The results are also applied for finding optimal sequential tests for discrete-time Markov processes. In a particular case of testing two simple hypotheses about a location parameter of an autoregressive process of order 1, it is shown that the sequential probability ratio test...

35 citations


Journal ArticleDOI
TL;DR: Two new schemes are proposed, named enhanced weighted sequential probability ratio test (EWSPRT) and enhanced weighted sequential zero/one test (EWSZOT), which are robust against SSDF attack and require far fewer samples than WSPRT.
Abstract: As wireless spectrum resources become more scarce while some portions of frequency bands suffer from low utilization, the design of cognitive radio (CR) has recently been urged, which allows opportunistic usage of licensed bands for secondary users without interference with primary users. Spectrum sensing is fundamental for a secondary user to find a specific available spectrum hole. Cooperative spectrum sensing is more accurate and more widely used since it obtains helpful reports from nodes in different locations. However, if some nodes are compromised and report false sensing data to the fusion center on purpose, the accuracy of decisions made by the fusion center can be heavily impaired. Weighted sequential probability ratio test (WSPRT), based on a credit evaluation system to restrict damage caused by malicious nodes, was proposed to address such a spectrum sensing data falsification (SSDF) attack, at the price of requiring four times as many samples. In this paper, we propose two new schemes, named enhanced weighted sequential probability ratio test (EWSPRT) and enhanced weighted sequential zero/one test (EWSZOT), which are robust against SSDF attack. By incorporating a new weight module and a new test module, both schemes require far fewer samples than WSPRT. Simulation results show that when holding comparable error rates, the sampling numbers of EWSPRT and EWSZOT are 40% and 75% lower than those of WSPRT, respectively. We also provide theoretical analysis models to support the performance improvement estimates of the new schemes.
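The abstract describes the weighting idea only at a high level. The defining step of a reputation-weighted sequential test is to scale each node's log-likelihood-ratio contribution by its credit weight; the sketch below shows that step under assumed per-node detection and false-alarm probabilities, with hypothetical thresholds. It is not the EWSPRT weight or test module itself.

```python
import math

def weighted_sprt(reports, weights, pd, pf, eta_lo, eta_hi):
    """Reputation-weighted SPRT (sketch): each binary sensing report u
    (1 = primary detected) contributes its LLR scaled by the reporting
    node's credit weight w. pd/pf are assumed per-node detection and
    false-alarm probabilities under H1/H0."""
    llr, n = 0.0, 0
    for u, w in zip(reports, weights):
        n += 1
        inc = math.log(pd / pf) if u else math.log((1 - pd) / (1 - pf))
        llr += w * inc
        if llr >= eta_hi:
            return "primary present", n
        if llr <= eta_lo:
            return "primary absent", n
    return "undecided", n
```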

34 citations


Journal ArticleDOI
TL;DR: In this paper, two soft-sensing models based on a fuzzy inference system and support vector regression for online prediction of the feedwater flow rate were developed using a training data set and a verification data set, and validated using an independent test data set.
Abstract: The Venturi flow meters that are used to measure the feedwater flow rate in most pressurized water reactors are confronted with fouling phenomena, resulting in an overestimation of the flow rate. In this paper, we therefore develop two soft-sensing models, based on a fuzzy inference system and support vector regression, for online prediction of the feedwater flow rate. The data-based models are developed using a training data set and a verification data set, and validated using an independent test data set. These data sets are partitioned from the startup data of Yonggwang Nuclear Power Plant Unit 3. The data for training the data-based models are selected with the aid of a subtractive clustering scheme because informative data increase the learning effect. The uncertainty of the data-based models is analyzed using 100 sampled training and verification data sets and a fixed test data set. The prediction intervals are very small, which means that the predicted values are very accurate. The root mean square error and relative maximum error of the models were quite small. Also, the residual signal between the measured value and the estimated value is used to determine the overmeasurement due to fouling by a sequential probability ratio test, which consequently monitors the existing feedwater flow meters.

23 citations


Journal ArticleDOI
TL;DR: A search algorithm is presented for the truncation apex (TA), with dependences for the search domain and for the position of the oblique test boundaries, serving jointly as the basis for the development of the test planning algorithm.
Abstract: A sequential probability ratio test (SPRT) is discussed for comparison of two systems, one "basic" (b) and the other "new" (n), with exponentially distributed times between failures (TBF). The hypothesis that MTBF_n/MTBF_b ≥ 1 is checked, versus the alternative that it is < 1. The paper deals with tests with a low Average Sample Number (ASN), which have the advantage of economy in time and cost, and it is shown that the points of possible solutions in them are sparse. Criteria are proposed for assessment of the test quality, with a view to optimizing its parameters. We present a search algorithm for the truncation apex (TA), with dependences for the search domain and for the position of the oblique test boundaries, serving jointly as the basis for our development of the test planning algorithm.
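For exponentially distributed TBF, the SPRT's per-failure log-likelihood-ratio increment has a closed form in the two hypothesized MTBF values; the paper's truncation-apex construction then bounds how these increments are accumulated. A sketch of the increment alone, under the exponential model stated in the abstract:

```python
import math

def exp_llr_increment(t, mtbf0, mtbf1):
    """LLR increment for one observed time-between-failures t, comparing
    exponential densities with mean mtbf1 (H1) against mean mtbf0 (H0):
    log[(1/mtbf1) e^(-t/mtbf1)] - log[(1/mtbf0) e^(-t/mtbf0)]."""
    return math.log(mtbf0 / mtbf1) + t * (1.0 / mtbf0 - 1.0 / mtbf1)
```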

22 citations


Proceedings ArticleDOI
Takao Murakami, Kenta Takahashi
01 Dec 2009
TL;DR: A new multimodal biometric technique is proposed that significantly reduces the number of inputs by adopting a multihypothesis sequential test that minimizes the average number of observations.
Abstract: Biometric identification has lately attracted attention because of its high convenience; it does not require a user to enter a user ID. The identification accuracy, however, degrades as the number of enrollees increases. Although many multimodal biometric techniques have been proposed to improve identification accuracy, they require the user to input multiple biometric samples, making the application less convenient. In this paper, we propose a new multimodal biometric technique that significantly reduces the number of inputs by adopting a multihypothesis sequential test that minimizes the average number of observations. The results of an experimental evaluation using the NIST BSSR1 (Biometric Score Set - Release 1) database showed its effectiveness.
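The abstract does not spell out the multihypothesis sequential test; a common form stops as soon as the posterior probability of one enrollee hypothesis exceeds a threshold. A sketch under that assumption, with uniform priors; `loglik(h, x)` is a hypothetical callable returning the log-likelihood of a match score x under enrollee hypothesis h, which in practice would be estimated from genuine/impostor score distributions.

```python
import math

def multihypothesis_sequential_test(scores, loglik, n_hyp, threshold=0.99):
    """Sequentially update log-posteriors over n_hyp enrollee hypotheses;
    stop once one posterior exceeds the threshold (uniform prior assumed)."""
    logpost = [0.0] * n_hyp              # uniform prior, up to a constant
    n = 0
    for x in scores:
        n += 1
        logpost = [lp + loglik(h, x) for h, lp in enumerate(logpost)]
        m = max(logpost)
        w = [math.exp(lp - m) for lp in logpost]   # stable normalization
        total = sum(w)
        best = max(range(n_hyp), key=w.__getitem__)
        if w[best] / total >= threshold:
            return best, n               # identified enrollee, inputs used
    return None, n                       # undecided with available inputs
```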

20 citations


Journal ArticleDOI
TL;DR: It is concluded that the 2-SPRT chart is competitive in that it is more sensitive and economical for small shifts and has advantages in administration because of fixed sampling points and a proper upper bound on the sample size.
Abstract: Sequential probability ratio test (SPRT) control charts are shown to be able to detect most shifts in the mean or proportion substantially faster than conventional charts such as CUSUM charts. However, they are limited in applications because of the absence of an upper bound on the sample size and the possibly large sample numbers during implementation. The double SPRT (2-SPRT) control chart, which applies a 2-SPRT at each sampling point, is proposed in this paper to address some of the limitations of SPRT charts. Approximate performance measures of the 2-SPRT control chart are obtained by the backward method with Gaussian quadrature in a computer program. On the basis of two industrial examples and simulation comparisons, we conclude that the 2-SPRT chart is competitive in that it is more sensitive and economical for small shifts and has advantages in administration because of fixed sampling points and a proper upper bound on the sample size.

20 citations


Patent
14 May 2009
TL;DR: In this article, the authors present a system that analyzes telemetry data from a monitored system and assesses the integrity of the monitored system based on the statistical deviation of the multidimensional real-time distribution.
Abstract: One embodiment provides a system that analyzes telemetry data from a monitored system. During operation, the system periodically obtains the telemetry data as a set of telemetry variables from the monitored system and updates a multidimensional real-time distribution of the telemetry data using the obtained telemetry variables. Next, the system analyzes a statistical deviation of the multidimensional real-time distribution from a multidimensional reference distribution for the monitored system using a multivariate sequential probability ratio test (SPRT) and assesses the integrity of the monitored system based on the statistical deviation of the multidimensional real-time distribution. If the assessed integrity falls below a threshold, the system determines a fault in the monitored system corresponding to a source of the statistical deviation.
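The patent abstract does not disclose the multivariate SPRT's internals; one standard instantiation tests for a mean shift between the reference and real-time multivariate Gaussian distributions. A sketch under that assumption; the means, shared covariance, and error rates are illustrative, not the patented method:

```python
import math
import numpy as np

def mv_mean_shift_sprt(samples, mu0, mu1, cov, alpha=0.01, beta=0.01):
    """SPRT for a mean shift mu0 -> mu1 in a multivariate Gaussian with a
    shared, known covariance; the per-sample LLR increment is linear in x."""
    prec = np.linalg.inv(cov)
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += float((mu1 - mu0) @ prec @ (x - 0.5 * (mu0 + mu1)))
        if llr >= upper:
            return "deviation: fault suspected", n
        if llr <= lower:
            return "nominal", n
    return "undecided", n
```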

Journal ArticleDOI
TL;DR: Procedures are reviewed and recommendations made for the choice of the size of a sample to estimate the characteristics of a population consisting of discrete items which may belong to one and only one of a number of categories with examples drawn from forensic science.
Abstract: Procedures are reviewed and recommendations made for the choice of the size of a sample to estimate the characteristics (sometimes known as parameters) of a population consisting of discrete items which may belong to one and only one of a number of categories with examples drawn from forensic science. Four sampling procedures are described for binary responses, where the number of possible categories is only two, e.g., licit or illicit pills. One is based on priors informed from historical data. The other three are sequential. The first of these is a sequential probability ratio test with a stopping rule derived by controlling the probabilities of type 1 and type 2 errors. The second is a sequential variation of a procedure based on the predictive distribution of the data yet to be inspected and the distribution of the data that have been inspected, with a stopping rule determined by a prespecified threshold on the probability of a wrong decision. The third is a two-sided sequential criterion which stops sampling when one of two competitive hypotheses has a probability of being accepted which is larger than another prespecified threshold. The fifth procedure extends the ideas developed for binary responses to multinomial responses where the number of possible categories (e.g., types of drug or types of glass) may be more than two. The procedure is sequential and recommends stopping when the joint probability interval or ellipsoid for the estimates of the proportions is less than a given threshold in size. For trinomial data this last procedure is illustrated with a ternary diagram with an ellipse formed around the sample proportions. There is a straightforward generalization of this approach to multinomial populations with more than three categories. A conclusion provides recommendations for sampling procedures in various contexts.

Proceedings Article
06 Jul 2009
TL;DR: This work proposes a detection method that first utilizes a robust localization method to estimate the source parameters and then employs an adaptive SPRT based on those estimates to infer detection, and shows that this method provides better performance than any SPRT-based single-sensor detection with a fixed threshold.
Abstract: We consider the problem of detecting a source with a scalar intensity inside a two-dimensional monitoring area using intensity sensor measurements in the presence of a background process. The sensor measurements may be random due to the underlying nature of the source and background as well as due to sensor errors. The Sequential Probability Ratio Test (SPRT) can be used to infer detections from measurements at the individual sensors. When a network of sensors is available, these detection results may be combined using a fusion rule such as the majority rule. We propose a detection method that first utilizes a robust localization method to estimate the source parameters and then employs an adaptive SPRT based on these estimates to infer detection. Under Lipschitz conditions on the source and background parameters and a minimum size of the packing number of the state space, we show that this method provides better performance compared to: (a) any SPRT-based single-sensor detection with a fixed threshold, and (b) majority and certain general fusers of SPRT-based single-sensor detectors. We analyze the performance of this method for the case of detecting point radiation sources, and present simulation and testbed results.

01 Jan 2009
TL;DR: This paper redefines some concepts of fuzzy hypothesis testing and then introduces the sequential probability ratio test for fuzzy hypotheses.
Abstract: In hypothesis testing, as in other statistical problems, we may confront imprecise concepts. One such case is a situation in which the hypotheses themselves are imprecise. In this paper, we redefine some concepts of fuzzy hypothesis testing, and then we introduce the sequential probability ratio test for fuzzy hypotheses. Finally, we give some examples. Mathematics Subject Classification: 03E72, 62F03, 62G10

Proceedings ArticleDOI
31 Dec 2009
TL;DR: A novel spectrum sensing technique, called multi-slot spectrum sensing, is proposed to detect spectral holes and to opportunistically use under-utilized frequency bands without causing harmful interference to legacy (primary) networks.
Abstract: In this paper, we propose a novel spectrum sensing technique, called multi-slot spectrum sensing, to detect spectral holes and to opportunistically use under-utilized frequency bands without causing harmful interference to legacy (primary) networks. The key idea of the proposed sensing scheme is to combine the observations from the past N (N ≥ 2) sensing blocks, including the latest one. Specifically, we first establish the detection model with the proposed multi-slot spectrum sensing technique. Then, we deploy the backward sequential probability ratio test (BSPRT) for the established model to detect spectral holes. Moreover, we evaluate the performance of the proposed scheme in terms of the mean delay to detection and the mean time to false alarm. Compared with the equal combining strategy, which equally combines the statistics of the past multiple sensing blocks, the proposed sensing strategy using BSPRT always performs better, as verified via the conducted simulations.

01 Jan 2009
TL;DR: This study utilized a Monte Carlo approach, with 10,000 examinees simulated under each condition, to evaluate differences in efficiency and accuracy due to hypothesis structure, nominal error rate, and indifference region size.
Abstract: Computer-based testing can be used to classify examinees into mutually exclusive groups. Currently, the predominant psychometric algorithm for designing computerized classification tests (CCTs) is the sequential probability ratio test (SPRT; Reckase, 1983) based on item response theory (IRT). The SPRT has been shown to be more efficient than confidence intervals around θ estimates as a method for CCT delivery (Spray & Reckase, 1996; Rudner, 2002). More recently, it was demonstrated that the SPRT, which only uses fixed values, is less efficient than a generalized form which tests whether a given examinee's θ is below θ1 or above θ2 (Thompson, 2007). This formulation allows the indifference region to vary based on observed data. Moreover, this composite hypothesis formulation better represents the conceptual purpose of the test, which is to test whether θ is above or below the cutscore. The purpose of this study was to explore the specifications of the new generalized likelihood ratio (GLR; Huang, 2004). As with the SPRT, the efficiency of the procedure depends on the nominal error rates and the distance between θ1 and θ2 (Eggen, 1999). This study utilized a Monte Carlo approach, with 10,000 examinees simulated under each condition, to evaluate differences in efficiency and accuracy due to hypothesis structure, nominal error rate, and indifference region size. The GLR was always at least as efficient as the fixed-point SPRT while maintaining equivalent levels of accuracy.
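The fixed-point SPRT that the study uses as a baseline computes the likelihood ratio of the observed response string at θ2 versus θ1 under an IRT model. A sketch using the Rasch (1PL) model; the difficulties, θ points, and error rates are illustrative, and the GLR generalization studied in the paper is not shown:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def classification_sprt(responses, difficulties, theta1, theta2,
                        alpha=0.05, beta=0.05):
    """Fixed-point SPRT for classification: H1: theta = theta2 (above the
    cutscore) vs. H0: theta = theta1 (below), with theta1 < theta2."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    for x, b in zip(responses, difficulties):   # x is 1 (correct) or 0
        n += 1
        p1, p0 = rasch_p(theta2, b), rasch_p(theta1, b)
        llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass", n
        if llr <= lower:
            return "fail", n
    return "undecided", n
```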

01 Jan 2009
TL;DR: The current study replicates Finkelman's results, extends them to realistic settings, and subsequently generalizes the SCSPRT to three categories while using adaptive item selection, showing increased efficiency when using both one and two cut points.
Abstract: Computerized classification testing (CCT) can be used to increase efficiency in educational measurement. The truncated sequential probability ratio test (TSPRT) has been widely studied as a decision algorithm in CCT for two or more categories. Finkelman (2003) added an algorithm to the TSPRT in the form of stochastic curtailment, to classify an examinee at an even earlier stage of testing. This stochastically curtailed SPRT (SCSPRT) halts testing when a change of classification is possible but unlikely. Finkelman (2003) formulated the algorithm for two categories with fixed item ordering. The current study replicates his results, extends them to realistic settings, and subsequently generalizes the SCSPRT to three categories while using adaptive item selection. The results show increased efficiency when using both one and two cut points. Different item selection methods are discussed.

Proceedings ArticleDOI
01 Oct 2009
TL;DR: In this article, a sequential probability ratio test (SPRT) of scaled time-interval data (time to record N radiation pulses), SPRT-scaled, was evaluated against SIT and SPRT with a fixed counting interval, SPRT_fixed, on experimental and simulated data.
Abstract: A sequential probability ratio test (SPRT) of scaled time-interval data (the time to record N radiation pulses), SPRT_scaled, was evaluated against the commonly used single-interval test (SIT) and the SPRT with a fixed counting interval, SPRT_fixed, on experimental and simulated data. Experimental data were acquired with a DGF-4C (XIA, Inc.) system in list mode. Simulated time-interval data were obtained using Monte Carlo techniques to perform a random radiation sampling of the Poisson distribution. The three methods (SIT, SPRT_fixed, and SPRT_scaled) were compared in terms of detection probability and the average time to make a decision about the source of radiation. For both experimental and simulated data, SPRT_scaled provided detection probabilities similar to those of the other tests but was able to make a quicker decision with fewer pulses at relatively higher radiation levels. SPRT_scaled has a provision for varying the sampling time depending on the radiation level, which may further shorten the time needed for radiation monitoring. Parameter adjustments to the SPRT_scaled method for increased detection probability are discussed.
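For a Poisson pulse stream, the time T needed to record N pulses is Erlang (gamma) distributed, so the SPRT increment between two hypothesized count rates has a simple closed form; this is the quantity a scaled time-interval test accumulates. A sketch under those assumptions (the rates and N are illustrative, not the paper's calibration):

```python
import math

def scaled_interval_llr(t, n_pulses, rate_bg, rate_src):
    """LLR increment for observing elapsed time t while recording n_pulses,
    testing source-plus-background rate (H1) against background alone (H0);
    under a Poisson stream T ~ Erlang(n_pulses, rate), and the gamma
    normalizing terms cancel in the ratio."""
    return n_pulses * math.log(rate_src / rate_bg) - (rate_src - rate_bg) * t
```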

Proceedings ArticleDOI
14 Jun 2009
TL;DR: It is shown that for the received signal to noise ratios (SNR) typically encountered in GPS, the SPRT can outperform the single dwell detector strategy in terms of mean acquisition time, and that thesingle dwell (SD) detector's fixed dwell time approaches the worst case dwell time for theSPRT.
Abstract: Acquisition of signals in noise, in particular CDMA signals like the Global Positioning System (GPS) L1 C/A signal, can be carried out using fixed or variable dwell times. In this work, a sequential multiple dwell procedure for verifying acquisition is examined and compared to a fixed time single dwell strategy. The procedure under analysis is the sequential probability ratio test (SPRT) which has not been widely used for GPS applications, possibly due to its sensitivity to attenuation of the received signal relative to the design point. In this paper, it is shown that for the received signal to noise ratios (SNR) typically encountered in GPS, the SPRT can outperform the single dwell detector strategy in terms of mean acquisition time. In addition, it is shown that the single dwell (SD) detector's fixed dwell time approaches the worst case dwell time for the SPRT, as design point carrier to noise ratio decreases. Thus, for very weak signals, the SPRT can be a better choice of verification algorithm than the SD strategy, under certain constraints.

Journal ArticleDOI
TL;DR: It is found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions.
Abstract: Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998–1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (α, β), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions...

Proceedings ArticleDOI
20 Apr 2009
TL;DR: An efficient particle filter based distributed track-before-detect (PF-DTBD) algorithm is presented, which reduces detection delay and improves the precision of state estimation simultaneously, and it is proved that the unnormalized fused particle weight is composed of the sensors' local measurement likelihoods, which makes the likelihood ratio test feasible at the fusion node.
Abstract: An efficient particle filter based distributed track-before-detect (PF-DTBD) algorithm is presented in this paper. Its key idea is the fusion of the local conditional probability density functions (PDFs) estimated by multiple sensors. First, the PDFs at the sensor nodes are estimated by a multivariate kernel density estimation (MKDE) technique based on a finite particle set and fused to calculate the fused particle weights at the fusion node. Next, according to Bayes' rule, we prove that the unnormalized fused particle weight is composed of the sensors' local measurement likelihoods, which makes the likelihood ratio test feasible at the fusion node. Finally, we introduce a detection scheme combining the sequential probability ratio test (SPRT) and a fixed sample size (FSS) likelihood ratio test to realize the TBD process for weak targets. Simulation results show that our algorithm is efficient, reducing detection delay and improving the precision of state estimation simultaneously.

Journal ArticleDOI
TL;DR: This paper presents variable acceptance sampling plans based on the assumption that consecutive observations on a quality characteristic (X) are autocorrelated and are governed by a stationary autoregressive moving average (ARMA) process.
Abstract: This paper presents variable acceptance sampling plans based on the assumption that consecutive observations on a quality characteristic (X) are autocorrelated and are governed by a stationary autoregressive moving average (ARMA) process. The sampling plans are obtained under the assumption that an adequate ARMA model can be identified based on historical data from the process. Two types of acceptance sampling plans are presented. (1) Non-sequential acceptance sampling: historical data are available based on which an ARMA model is identified. Parameter estimates are used to determine the action limit (k) and the sample size (n). A decision regarding acceptance of a process is made after a complete sample of size n is selected. (2) Sequential acceptance sampling: here too, historical data are available based on which an ARMA model is identified. A decision regarding whether or not to accept a process is made after each individual sample observation becomes available. The concept of the Sequential Probability Ratio Test (SPRT) is used to derive the sampling plans. Simulation studies are used to assess the effect of uncertainties in parameter estimates and the effect of model misidentification (based on historical data) on sample size for the sampling plans. Macros for computing the required sample size using both methods based on several ARMA models can be found on the author's web page http://pages.towson.edu/aminza/papers.html .

01 Oct 2009
TL;DR: In this paper, a classification method based on Wald's Sequential Probability Ratio Test was developed for application to CAT with a multidimensional item response theory model in which each item measures multiple abilities.
Abstract: Computerized adaptive tests (CATs) were originally developed to obtain an efficient estimate of the examinee's ability, but they can also be used to classify the examinee into one of two or more levels (e.g., master/non-master). These computerized classification tests have the advantage that they can also be tailored to the individual student's ability. Computerized classification tests require a method that decides whether testing can stop and which decision can be made with the desired confidence. Furthermore, a method to select the items is required. In classification testing for unidimensional constructs, items are often selected that attempt to measure optimally at either the cutoff point(s) or the student's current ability estimate. Four methods were developed that combined the efficiency of the first approach with the adaptive item selection of the second approach. Their efficiency and accuracy were investigated using simulations. Several methods are available to make the classification decisions for constructs modeled with a unidimensional item response theory model. But if the construct is multidimensional, few classification methods are available. A classification method based on Wald's Sequential Probability Ratio Test was developed for application to CAT with a multidimensional item response theory model in which each item measures multiple abilities. Seitz and Frey's (2013) method to make classifications per dimension, when each item measures one dimension, was adapted to make classifications on the entire test and on parts of the test. Kingsbury and Weiss's (1979) popular unidimensional classification method, which uses the confidence interval surrounding the ability estimate, was also adapted for multidimensional decisions. Simulation studies were used to investigate the efficiency and accuracy of the classification methods. Comparisons were made between different item selection methods, between different classification methods, and between different settings for the classification methods. Tests can be used for formative assessment, formative evaluation, summative assessment, and summative evaluation. For seven types of tests, including computerized classification tests and educational games, the design, the possibility to adapt the test, and the possible use for each of the test goals were explored.

Proceedings ArticleDOI
28 Jun 2009
TL;DR: In this framework, two coupled detection and estimation procedures are introduced for the cases of discrete and continuous state space and it is shown that, under a set of rather mild conditions, the procedures end with probability one and the stopping time is almost surely minimized in the class of tests with the same or smaller error probabilities.
Abstract: The problem of joint detection and state estimation of a Markov signal when a variable number of noisy measurements can be taken is considered here. In particular, the signal-observation sequence {X_i, Z_i}_{i∈N} is a hidden Markov process (HMP), while, if the signal is absent, the measurement sequence {Z_i}_{i∈N} is an i.i.d. process. In this framework, two coupled detection and estimation procedures are introduced for the cases of discrete and continuous state space. Bounds on the performance of the proposed procedures in terms of the thresholds are derived, similar to the classical bounds for the sequential probability ratio test (SPRT). Moreover, it is shown that, under a set of rather mild conditions, the procedures end with probability one and the stopping time is almost surely minimized in the class of tests with the same or smaller error probabilities.

Journal Article
TL;DR: The potential limitation of the Wald Sequential Probability Ratio Test (SPRT), due to the uniqueness of its alternative hypothesis, is analyzed for fault diagnosis and residual testing.
Abstract: The potential limitation of the Wald Sequential Probability Ratio Test (SPRT) method, due to the uniqueness of the alternative hypothesis, is analyzed for fault diagnosis and residual testing. Focusing on this limitation, an improved method is put forward specifically for normally distributed residual testing, in which the alternative hypothesis is variable during the detection process and the testing time delay is avoided. Mathematical simulation results indicate that the improved method, which guarantees a higher level of real-time performance than the SPRT method, is more suitable for soft fault detection.

Proceedings ArticleDOI
18 Mar 2009
TL;DR: The theoretical performance of parametric and non-parametric sequential change detection algorithms for detecting in-band wormholes in wireless ad hoc networks is compared, and the advantage of the parametric method is illustrated.
Abstract: This paper compares the performance of parametric and non-parametric sequential change detection algorithms for detecting in-band wormholes in wireless ad hoc networks. The algorithms considered are the non-parametric cumulative sum (NP-CUSUM) and the repeated sequential probability ratio test (R-SPRT). Theoretical performance of the two is compared using metrics that take into account the algorithms' repeated nature, and the advantage of the parametric method is illustrated. On the other hand, connections between the parametric and non-parametric methods are made in the proposed worst case adversary model, where the non-parametric method is shown to be more robust to attack strategy changes. Experimental evaluation of wormhole detection schemes based on the two algorithms is presented. This work has implications for both the theoretical understanding and practical design of wormhole detection schemes based on parametric and nonparametric change detection algorithms.

Book ChapterDOI
01 Jan 2009
TL;DR: This chapter develops a multilook fusion approach for improving the performance of a single-look vehicle classification system for infrared video using the multinomial pattern-matching algorithm to match the signature to a database of learned signatures.
Abstract: This chapter develops a multilook fusion approach for improving the performance of a single-look vehicle classification system for infrared video. Vehicle classification is a challenging problem since vehicles can take on many different appearances and sizes due to their form and function and the viewing conditions. The low resolution of uncooled infrared video and the large variability of naturally occurring environmental conditions can make this an even more difficult problem. Our single-look approach is based on extracting a signature consisting of a histogram of gradient orientations from a set of regions covering the moving object. We use the multinomial pattern-matching algorithm to match the signature to a database of learned signatures. To combine the match scores of multiple signatures from a single tracked object, we use the sequential probability ratio test. Using infrared data, we show excellent classification performance, with low expected error rates, when using at least 25 looks.

Proceedings ArticleDOI
07 Mar 2009
TL;DR: The two methods are fused by means of the fuzzy variable weight method to arrive at a decision about the extent and location of leakage, and the results show that the fusion method performs better than either single method.
Abstract: In pipeline leakage detection and location, what we need to know is whether the pipe is leaking and where the leakage point is. In addition, we are very concerned about the extent of leakage, the false alarm rate, and the missed alarm rate. Both the correlation analysis method and the sequential probability ratio test method can be used for pipeline leakage detection and location. Correlation analysis can estimate the extent of leakage but cannot bound the false alarm and missed alarm rates; the sequential probability ratio test controls these two rates but cannot estimate the extent of leakage. In this paper, the two methods are fused by means of the fuzzy variable weight method, so that we can arrive at a decision about the extent and location of leakage. We verify this method on a simulated pipeline in the lab, and the results show that the fusion method performs better than either single method.

Proceedings ArticleDOI
18 Oct 2009
TL;DR: This paper presents a sequential misbehavior detection technique based on the Sequential Probability Ratio Test (SPRT) for cooperative networks using automatic repeat request (ARQ) and evaluates the performance of the detection technique both analytically and using numerical methods.
Abstract: Existing cooperative communications protocols are designed with the assumption that users always behave in a socially efficient manner. This assumption may be valid in networks under the control of a single authority, where nodes cooperate efficiently to achieve a common goal. On the other hand, in commercial wireless networks where nodes are individually motivated to cooperate, the assumption that nodes will always obey the rules of cooperation may not hold without a mechanism to detect and mitigate misbehavior. In this paper, we present a sequential misbehavior detection technique based on the Sequential Probability Ratio Test (SPRT) for cooperative networks using automatic repeat request (ARQ). We evaluate the performance of the detection technique both analytically and using numerical methods.
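The abstract does not give the detection statistic; a natural reading models each relaying opportunity as a Bernoulli trial and runs an SPRT between a compliant drop probability p0 and a misbehaving one p1 > p0. A sketch under that assumption, with illustrative error rates:

```python
import math

def misbehavior_sprt(drop_flags, p0, p1, alpha=0.01, beta=0.01):
    """Bernoulli SPRT: drop_flags yields 1 when a node fails to relay a
    packet it should have forwarded, 0 otherwise. H0: drop rate p0
    (compliant); H1: drop rate p1 (misbehaving), with 0 < p0 < p1 < 1."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    for d in drop_flags:
        n += 1
        llr += d * math.log(p1 / p0) + (1 - d) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "misbehaving", n
        if llr <= lower:
            return "compliant", n
    return "undecided", n
```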

Journal Article
TL;DR: The stability of the classical optimal sequential probability ratio test is studied when testing two simple hypotheses about the common density f of the observations: f = f0 versus f = f1.
Abstract: We study the stability of the classical optimal sequential probability ratio test based on independent identically distributed observations X1, X2, ... when testing two simple hypotheses about their common density f: f = f0 versus f = f1. As the functional to be minimized, a weighted sum of the average (under f0) sample number and the error probabilities of the two types is used. We prove that the problem reduces to stopping-time optimization for a ratio process generated by X1, X2, ... with the density f0. For τ* being the corresponding optimal stopping time, we consider a situation when this rule is applied for testing between f0 and an alternative f̃1, where f̃1 is some approximation to f1. An inequality is obtained which gives an upper bound for the expected cost excess when τ* is used instead of the rule τ̃* optimal for the pair (f0, f̃1). The inequality found also estimates the difference between the minimal expected costs for optimal tests corresponding to the pairs (f0, f1) and (f0, f̃1).