
Showing papers on "Bernoulli sampling published in 2010"


Proceedings Article
14 Mar 2010
TL;DR: An approximate aggregation algorithm based on Bernoulli sampling is proposed to satisfy arbitrary precision requirements in WSNs, together with two adaptive algorithms: one adapts the sample to varying precision requirements, the other to varying sensed data.
Abstract: Aggregations of sensed data are very important for users to get summary information about the monitored area in applications of wireless sensor networks (WSNs). As approximate aggregation results are enough for users to perform analysis and make decisions, many approximate aggregation algorithms have been proposed for WSNs. However, most of these algorithms have fixed error bounds and cannot meet arbitrary precision requirements, and the uniform-sampling-based algorithm that can reach arbitrary precision is suitable only for static networks. Considering the dynamic nature of WSNs, in this paper we propose an approximate aggregation algorithm based on Bernoulli sampling to satisfy arbitrary precision requirements. Besides, two adaptive algorithms are also proposed: one adapts the sample to varying precision requirements, the other adapts the sample to varying sensed data. Theoretical analysis and experimental results show that the proposed algorithms have high performance in terms of accuracy and energy consumption.

46 citations
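The estimator behind this kind of Bernoulli-sampling aggregation can be sketched as follows. This is an illustrative simulation, not the paper's in-network protocol; the inclusion probability `p` and the simulated readings are assumptions made for the example:

```python
import random

def bernoulli_sample_sum(readings, p, rng):
    """Estimate the population sum from a Bernoulli sample.

    Each reading is included independently with probability p;
    dividing the sample sum by p gives an unbiased estimator
    (a Horvitz-Thompson estimator with equal inclusion probabilities).
    """
    sampled = [x for x in readings if rng.random() < p]
    return sum(sampled) / p

rng = random.Random(42)
# Simulated sensor readings (values and distribution are assumptions).
readings = [20.0 + rng.gauss(0, 2) for _ in range(10_000)]
true_sum = sum(readings)

# Average many independent estimates to check approximate unbiasedness.
estimates = [bernoulli_sample_sum(readings, 0.1, rng) for _ in range(500)]
mean_est = sum(estimates) / len(estimates)
print(abs(mean_est - true_sum) / true_sum)  # small relative deviation
```

Raising `p` trades energy (more transmitted readings) for a tighter error bound, which is the knob the adaptive algorithms in the abstract would adjust.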


Journal ArticleDOI
TL;DR: In this paper, a new estimator for the population mean is proposed in two-phase sampling using information from multiple auxiliary variables. The minimum variance of the proposed estimator is obtained, and a comparison is made with some available estimators of two-phase sampling.
Abstract: A new estimator for the population mean has been proposed in two-phase sampling by using information from multiple auxiliary variables. The minimum variance of the proposed estimator has been obtained. A comparison has also been made with some available estimators of two-phase sampling.

15 citations
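For context on double sampling with auxiliary information, the classic two-phase ratio estimator with a single auxiliary variable can be sketched as below. This is not the paper's proposed multi-auxiliary estimator, and the population model and sample sizes are illustrative assumptions:

```python
import random

rng = random.Random(7)

# Assumed population where y is roughly proportional to an auxiliary x.
N = 50_000
x_pop = [rng.uniform(10, 50) for _ in range(N)]
y_pop = [2.0 * x + rng.gauss(0, 3) for x in x_pop]

# Phase 1: a large, cheap sample observing only the auxiliary x.
phase1 = rng.sample(range(N), 4_000)
xbar1 = sum(x_pop[i] for i in phase1) / len(phase1)

# Phase 2: a small subsample of phase 1 observing y as well.
phase2 = rng.sample(phase1, 400)
xbar2 = sum(x_pop[i] for i in phase2) / len(phase2)
ybar2 = sum(y_pop[i] for i in phase2) / len(phase2)

# Classic two-phase (double-sampling) ratio estimator of the mean of y:
# the cheap phase-1 information on x calibrates the small phase-2 sample.
ybar_ratio = ybar2 * (xbar1 / xbar2)
ybar_true = sum(y_pop) / N
print(ybar_ratio, ybar_true)
```

Estimators like the paper's extend this idea to several auxiliary variables at once, choosing weights to minimize the variance.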


Journal ArticleDOI
TL;DR: Correlated Poisson sampling, as discussed by the authors, uses weights to create correlations between the inclusion indicators, which reduces the variation of the sample size and spreads the samples more evenly over the population, improving efficiency in many cases.

12 citations
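For context, plain (uncorrelated) Poisson sampling, which correlated Poisson sampling modifies, can be sketched as below. The inclusion probabilities and trial counts are illustrative assumptions; the random realized sample size shown here is exactly the variation the correlated scheme is designed to reduce:

```python
import random

def poisson_sample(pi, rng):
    """Plain Poisson sampling: unit i is included independently
    with its own inclusion probability pi[i]."""
    return [i for i, p in enumerate(pi) if rng.random() < p]

rng = random.Random(1)
pi = [0.1, 0.3, 0.5, 0.2, 0.4] * 200  # assumed unequal inclusion probabilities
expected_n = sum(pi)                   # expected sample size

sizes = [len(poisson_sample(pi, rng)) for _ in range(2_000)]
mean_n = sum(sizes) / len(sizes)
print(expected_n, mean_n)  # realized size varies around its expectation
```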



Proceedings ArticleDOI
14 Mar 2010
TL;DR: An approximate aggregation algorithm based on Bernoulli sampling is proposed to satisfy arbitrary precision requirements in WSNs, together with two adaptive algorithms: one adapts the sample to varying precision requirements, the other to varying sensed data.
Abstract: Aggregations of sensed data are very important for users to get summary information about the monitored area in applications of wireless sensor networks (WSNs). As approximate aggregation results are enough for users to perform analysis and make decisions, many approximate aggregation algorithms have been proposed for WSNs. However, most of these algorithms have fixed error bounds and cannot meet arbitrary precision requirements, and the uniform-sampling-based algorithm that can reach arbitrary precision is suitable only for static networks. Considering the dynamic nature of WSNs, in this paper we propose an approximate aggregation algorithm based on Bernoulli sampling to satisfy arbitrary precision requirements. Besides, two adaptive algorithms are also proposed: one adapts the sample to varying precision requirements, the other adapts the sample to varying sensed data. Theoretical analysis and experimental results show that the proposed algorithms have high performance in terms of accuracy and energy consumption.

4 citations


Journal Article
TL;DR: This paper proposes an approximate aggregation algorithm based on Bernoulli sampling to satisfy the requirement of arbitrary precision in wireless sensor networks (WSNs).
Abstract: This paper proposes an approximate aggregation algorithm based on Bernoulli sampling to satisfy the requirement of arbitrary precision in wireless sensor networks (WSNs). Besides, two sample-data adaptive algorithms are also provided. One adapts the sample to the varying precision requirement. The other adapts the sample to the varying sensed data in the network. Theoretical analysis and experimental results show that the proposed algorithms have good performance in terms of accuracy and energy cost.

2 citations


Proceedings ArticleDOI
01 Sep 2010
TL;DR: It is shown that it is possible to accurately reconstruct the signal through Bernoulli-sampled measurements, with no assumption on the local sparsity, if the success probability of the Bernoulli sampling exceeds a lower bound.
Abstract: Motivated by sensor network applications, in this paper we study the problem of a decentralized network of J sensors, in which each sensor observes either all or some components of an underlying sparse signal ensemble. Sensors operate with no collaboration with each other or with the fusion center. Each sensor transmits a subset of its linear measurements to the fusion center. The fusion center gathers the data sent by all sensors and reconstructs the signal. The goal is to compress data at each node efficiently for accurate reconstruction at the fusion center. Accurate reconstruction is possible only if sufficient, well-chosen measurements are provided to the fusion center. In a decentralized network, each sensor measures part of a sparse signal. We refer to the sparsity of the observed signal at each node as local sparsity. Although the original signal is sparse, there is no guarantee of local sparsity at each node. To manage decentralized reconstruction, we propose a new Bernoulli sampling scheme. This scheme associates an independent Bernoulli trial, with parameter p, with each measurement that a sensor makes. The sensor makes a measurement if the outcome of the associated Bernoulli trial is 1; the measurement is ignored otherwise. We apply this sampling scheme to different sparsity models, including a common signal model, a common signal with innovation model, and a partitioned signal model for the observed signal. We show that it is possible to accurately reconstruct the signal through Bernoulli-sampled measurements, with no assumption on the local sparsity, if the success probability of the Bernoulli sampling exceeds a lower bound. We also show that recovery through Bernoulli sampling is robust to noisy measurements and packet loss. If the signal is of length N with sparsity k, the lower bound we derived for the parameter p of the Bernoulli sampling, for robust and accurate reconstruction, is O((k/N) log(N/k)). This result implies that the expected number of measurements needed for stable and accurate reconstruction is O(k log(N/k)). This is the same as the result obtained for a collaborating sensor network or a distributed network with a local sparsity assumption.

1 citation
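The Bernoulli measurement-selection step described in the abstract can be sketched as follows. This shows only the sampling mechanism, not the reconstruction; the signal dimensions, sensor counts, and the constant placed in front of the (k/N) log(N/k) bound are illustrative assumptions:

```python
import math
import random

rng = random.Random(3)

N, k = 1_024, 16   # signal length and sparsity (assumed values)
J, m = 10, 200     # number of sensors and measurements per sensor (assumed)

# Success probability on the O((k/N) log(N/k)) scale from the abstract;
# the leading constant 4 is an arbitrary assumption for illustration.
p = min(1.0, 4 * (k / N) * math.log(N / k))

def bernoulli_select(num_measurements, p, rng):
    """Each measurement is kept independently with probability p
    (outcome 1 of the associated Bernoulli trial); otherwise ignored."""
    return [j for j in range(num_measurements) if rng.random() < p]

kept = [len(bernoulli_select(m, p, rng)) for _ in range(J)]
total_kept = sum(kept)
expected = J * m * p  # expected number of transmitted measurements
print(p, total_kept, expected)
```

Each sensor decides locally which measurements to forward, so no coordination with the other sensors or the fusion center is needed, matching the decentralized setting of the paper.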



Journal ArticleDOI
TL;DR: In this paper, a nonparametric regression approach is proposed for the estimation of a finite population total in model-based frameworks under stratified sampling, where the individual strata are treated as compact Abelian groups and the resulting estimator is easier to compute.
Abstract: We propose a nonparametric regression approach to the estimation of a finite population total in model-based frameworks in the case of stratified sampling. Similar work has been done by Nadaraya and Watson (1964), Hansen et al. (1983), and Breidt and Opsomer (2000). Our point of departure from these works is the selection of the sampling weights within every stratum, where we treat the individual strata as compact Abelian groups and demonstrate that the resulting proposed estimator is easier to compute. We also make use of mixed ratios, but this time not in the contexts of simple random sampling or two-stage cluster sampling, but in stratified sampling schemes, where a void still exists.

1 citation
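For context, the standard stratified expansion estimator of a population total, which approaches like the paper's refine with regression-based weights, looks like this. The strata sizes and distributions are illustrative assumptions, not the paper's setup:

```python
import random

rng = random.Random(11)

# Three assumed strata with different sizes and means.
strata = {
    "A": [rng.gauss(10, 1) for _ in range(1_000)],
    "B": [rng.gauss(20, 2) for _ in range(3_000)],
    "C": [rng.gauss(30, 3) for _ in range(6_000)],
}
true_total = sum(sum(values) for values in strata.values())

def stratified_total(strata, n_per_stratum, rng):
    """Standard stratified estimator of the population total:
    T_hat = sum over strata of N_h * (sample mean within stratum h)."""
    t = 0.0
    for values in strata.values():
        sample = rng.sample(values, n_per_stratum)
        t += len(values) * (sum(sample) / n_per_stratum)
    return t

est = stratified_total(strata, 100, rng)
print(est, true_total)
```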


Journal ArticleDOI
TL;DR: In this paper, the inclusion probabilities of unequal probability sampling plans without replacement are compared, with results also given in terms of the Kullback-Leibler divergence showing that the inclusion probabilities for successive sampling are more proportional to the drawing probabilities than those for rejective sampling.
Abstract: Comparison results are obtained for the inclusion probabilities in some unequal probability sampling plans without replacement. For either successive sampling or Hajek's rejective sampling, the larger the sample size, the more uniform the inclusion probabilities in the sense of majorization. In particular, the inclusion probabilities are more uniform than the drawing probabilities. For the same sample size, and given the same set of drawing probabilities, the inclusion probabilities are more uniform for rejective sampling than for successive sampling. This last result confirms a conjecture of Hajek (Sampling from a Finite Population (1981), Dekker). Results are also presented in terms of the Kullback-Leibler divergence, showing that the inclusion probabilities for successive sampling are more proportional to the drawing probabilities.

1 citation
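The claim that inclusion probabilities are more uniform than drawing probabilities can be checked by simulation. The sketch below estimates the inclusion probabilities of successive sampling by Monte Carlo; the population, drawing probabilities, and sample size are illustrative assumptions:

```python
import random

def successive_sample(draw_probs, n, rng):
    """Successive sampling: draw n distinct units one at a time, each
    draw made proportional to the drawing probabilities of the units
    not yet sampled."""
    remaining = list(range(len(draw_probs)))
    chosen = []
    for _ in range(n):
        weights = [draw_probs[i] for i in remaining]
        pick = rng.choices(remaining, weights=weights, k=1)[0]
        remaining.remove(pick)
        chosen.append(pick)
    return chosen

rng = random.Random(5)
draw_probs = [0.05, 0.10, 0.15, 0.20, 0.50]  # assumed drawing probabilities
n, trials = 3, 20_000

counts = [0] * len(draw_probs)
for _ in range(trials):
    for i in successive_sample(draw_probs, n, rng):
        counts[i] += 1
incl = [c / trials for c in counts]  # Monte Carlo inclusion probabilities

# The max/min ratio of the inclusion probabilities is far smaller than
# that of the drawing probabilities (0.50 / 0.05 = 10), illustrating
# the "more uniform" comparison in the abstract.
print(incl, max(incl) / min(incl))
```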


07 Sep 2010
TL;DR: In this paper, a binomial approach is used for underreporting in register systems, where both the size and the probability parameter have to be estimated, and the method is applied to data from the Austrian crime register.
Abstract: Underreporting in register systems can be analyzed using a binomial approach, where both the size and the probability parameter have to be estimated. Parameter estimation fails when overdispersion is present. Extensions of the binomial model are derived by randomizing the parameters, i.e. considering mixed models. Among these models are the beta-binomial, which results from allowing for a random reporting probability; the negative binomial, which is the marginal when the size parameter is randomized; and the beta-Poisson model, where both binomial parameters are considered random. Likelihood-based estimation is developed and inference issues are discussed. Finally, the method is applied to data from the Austrian crime register.
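The beta-binomial extension mentioned in the abstract, a random reporting probability integrated out of the binomial, can be illustrated as follows. The parameter values are arbitrary assumptions; the point is the overdispersion relative to a plain binomial with the same mean:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """log of the Beta function via log-gamma, for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(y, n, a, b):
    """Beta-binomial: Binomial(n, q) with q ~ Beta(a, b) integrated out."""
    return comb(n, y) * exp(log_beta(y + a, n - y + b) - log_beta(a, b))

n, a, b = 50, 2.0, 3.0   # assumed size and Beta parameters
p = a / (a + b)          # mean reporting probability (0.4 here)

mean = sum(y * beta_binomial_pmf(y, n, a, b) for y in range(n + 1))
var = sum((y - mean) ** 2 * beta_binomial_pmf(y, n, a, b) for y in range(n + 1))

binom_var = n * p * (1 - p)
print(mean, var, binom_var)  # same mean, but variance exceeds the binomial's
```

The extra variance is what lets this mixed model fit overdispersed register counts where the plain binomial fails.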

DOI
01 Dec 2010
TL;DR: In this article, the authors develop a two-phase sampling procedure to determine the sample size necessary to estimate the population mean of a normally distributed random variable and show that the resulting estimator has pre-assigned variance and is unbiased under a regularity condition.
Abstract: We develop a two-phase sampling procedure to determine the sample size necessary to estimate the population mean of a normally distributed random variable and show that the resulting estimator has pre-assigned variance and is unbiased under a regularity condition. We present a necessary and sufficient condition under which the final sample mean is an unbiased estimator of the population mean.
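A minimal sketch of a Stein-type two-phase procedure of this kind, assuming a pilot phase that estimates the unknown variance and a second phase sized to hit a pre-assigned variance; this is illustrative, not the paper's exact procedure, and all numeric values are assumptions:

```python
import math
import random

rng = random.Random(9)

def two_phase_sample_size(pilot, target_var):
    """Phase 1: estimate the variance from a pilot sample.
    Phase 2: choose the total sample size n so that the variance of the
    sample mean, s^2 / n, is at most the pre-assigned target."""
    n1 = len(pilot)
    mean1 = sum(pilot) / n1
    s2 = sum((x - mean1) ** 2 for x in pilot) / (n1 - 1)  # sample variance
    n_total = max(n1, math.ceil(s2 / target_var))
    return n_total, s2

population_sd = 4.0  # unknown to the procedure; used only to simulate data
pilot = [rng.gauss(100, population_sd) for _ in range(30)]
n_total, s2 = two_phase_sample_size(pilot, target_var=0.25)
print(n_total, s2)
```

The subtlety the abstract addresses is that, because the final size depends on the data, the final sample mean is not automatically unbiased; the paper gives a necessary and sufficient condition for unbiasedness.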