On the histogram as a density estimator: L2 theory
Frequently Asked Questions (7)
Q2. What is the mean square error of kernel estimates?
The results show that the mean square error of kernel estimates tends to zero like a constant times k^(-4/5), while (1.6) implies that the mean square error of histograms tends to zero like a constant times k^(-2/3).
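As an illustrative check (my own simulation, not from the paper), the rate gap can be seen empirically: with bin width shrinking like k^(-1/3), the histogram's mean square error at a point decays markedly more slowly than a Gaussian kernel estimate's. The target density N(0,1), the evaluation point, and the sample sizes below are arbitrary choices for the sketch.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)
X0 = 1.0  # evaluation point (arbitrary)

def hist_mse(k, reps=200):
    """Empirical MSE at X0 of a histogram built from k N(0,1) samples,
    with mesh anchored at 0 and bin width h = k**(-1/3)."""
    h = k ** (-1 / 3)
    errs = []
    for _ in range(reps):
        x = rng.standard_normal(k)
        lo = np.floor(X0 / h) * h          # left edge of the bin containing X0
        fhat = np.mean((x >= lo) & (x < lo + h)) / h
        errs.append((fhat - norm.pdf(X0)) ** 2)
    return float(np.mean(errs))

def kde_mse(k, reps=200):
    """Empirical MSE at X0 of a Gaussian kernel estimate
    (SciPy's default bandwidth)."""
    errs = []
    for _ in range(reps):
        x = rng.standard_normal(k)
        fhat = gaussian_kde(x)(np.array([X0]))[0]
        errs.append((fhat - norm.pdf(X0)) ** 2)
    return float(np.mean(errs))

mse_h_small = hist_mse(200)
mse_h_large = hist_mse(5000)
mse_k_large = kde_mse(5000)
```

In runs like this, the histogram's MSE shrinks as k grows, and at the same sample size the kernel estimate's MSE is smaller still, consistent with the k^(-2/3) versus k^(-4/5) rates.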
Q3. What is the simplest way to state the conditions on h and f?
For instance, if I = [0, 1] and x0 = 0, condition (1.5) requires that h = 1/N for some positive integer N. Under the present conditions, if I = [0, 1], then f and f' are continuous on I, even at 0 and 1.
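A trivial way to see the mesh constraint in code (the helper name and tolerance are my own, not the paper's): bins of width h anchored at 0 tile [0, 1] exactly only when h = 1/N for a positive integer N.

```python
def mesh_fits_unit_interval(h, tol=1e-12):
    """True iff bins of width h, anchored at 0, tile [0, 1] exactly,
    i.e. h = 1/N for some positive integer N (condition (1.5) with
    I = [0, 1] and x0 = 0)."""
    n = round(1.0 / h)
    return n > 0 and abs(n * h - 1.0) < tol

print(mesh_fits_unit_interval(0.25))  # h = 1/4: admissible -> True
print(mesh_fits_unit_interval(0.3))   # not of the form 1/N  -> False
```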
Q4. What is the simplest way to determine the minimizing h's?
Choose a positive constant smaller than min{δ0, b/3c}, where c and δ0 are as in (4.9). Then φ_k(h) − ch^3 is a monotone increasing function of h in the interval (4.14).
Q5. What is the proof for the interval?
These bounds force the following conclusions: for large k, the h's minimizing φ_k(·) are to be found in the interval h_k ± A/k^(1/2); on that whole interval, φ_k(h) = φ_k(h_k) + O(1/k).
Q6. How can g be approximated in L^2?
But g may be approximated closely in L^2 by a function g_0 which is constant on each class interval: for instance, apply (2.5) to g.
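A sketch of that approximation (my own construction, standing in for the paper's (2.5)): the closest L^2 approximation to g that is constant on each class interval takes, on each interval, the average of g there, and the squared L^2 error shrinks as the mesh is refined.

```python
import numpy as np

def project_piecewise_constant(g, a, b, n_bins, n_quad=400):
    """Best L2 approximation of g on [a, b] by a function constant on each
    of n_bins equal class intervals: the constant is the bin average of g."""
    edges = np.linspace(a, b, n_bins + 1)
    consts = np.empty(n_bins)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        t = lo + (np.arange(n_quad) + 0.5) * (hi - lo) / n_quad  # midpoints
        consts[i] = g(t).mean()
    return edges, consts

def l2_sq_error(g, edges, consts, n_quad=400):
    """Squared L2 distance between g and its piecewise-constant projection,
    by the midpoint rule on each class interval."""
    err = 0.0
    for lo, hi, c in zip(edges[:-1], edges[1:], consts):
        t = lo + (np.arange(n_quad) + 0.5) * (hi - lo) / n_quad
        err += ((g(t) - c) ** 2).mean() * (hi - lo)
    return err

e_coarse = l2_sq_error(np.sin, *project_piecewise_constant(np.sin, 0.0, 1.0, 10))
e_fine = l2_sq_error(np.sin, *project_piecewise_constant(np.sin, 0.0, 1.0, 40))
```

For a smooth g such as sin on [0, 1], refining the mesh from 10 to 40 bins visibly reduces the squared L^2 error, as the approximation argument requires.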
Q7. What is the infimum of φ_k(h)?
Because ε was arbitrary, the infimum of φ_k(h) over h with 0 < h ≤ b is (4.8): 3 · 2^(-2/3) · b^(1/3) · k^(-2/3) + o(k^(-2/3)). Now (4.4-6) show that φ_k(·) has a global minimum, say at h_k*; any such h_k* tends to 0 as k → ∞, and φ_k(h_k*) = φ_k(h_k) + O(1/k).