Topic
Parametric statistics
About: Parametric statistics is a research topic. Over its lifetime, 39,200 publications have been published within this topic, receiving 765,761 citations.
Papers published on a yearly basis
Papers
TL;DR: The results indicate that non-normality in the error terms can be an issue in VBM; however, in balanced designs, provided the data are smoothed with a 4-mm FWHM kernel, non-normality is sufficiently attenuated to render the tests valid.
248 citations
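For intuition only, here is a minimal Python sketch (not the paper's VBM pipeline) of why smoothing with a Gaussian kernel attenuates non-normality: each smoothed voxel becomes a weighted average of its neighbours, so its distribution moves toward Gaussian. The voxel size, grid, and skewed noise model are illustrative assumptions, not values from the study; only the 4-mm FWHM figure comes from the TL;DR above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import skew

rng = np.random.default_rng(0)
voxel_mm = 2.0                             # assumed isotropic voxel size (illustrative)
fwhm_mm = 4.0                              # 4-mm FWHM kernel, as in the TL;DR
sigma_vox = fwhm_mm / (2.3548 * voxel_mm)  # FWHM = 2*sqrt(2*ln 2)*sigma ~= 2.3548*sigma

# Strongly skewed (chi-square) "error" field on a small 3-D grid
errors = rng.chisquare(df=3, size=(40, 40, 40))
smoothed = gaussian_filter(errors, sigma=sigma_vox)

print("skewness before smoothing:", skew(errors.ravel()))
print("skewness after smoothing: ", skew(smoothed.ravel()))
```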
TL;DR: This work provides provable characterizations of the computational, approximation, and sample size properties of policy gradient methods in the context of discounted Markov Decision Processes (MDPs), and shows an important interplay between estimation error, approximation error, and exploration.
Abstract: Policy gradient methods are among the most effective methods in challenging reinforcement learning problems with large state and/or action spaces. However, little is known about even their most basic theoretical convergence properties, including: if and how fast they converge to a globally optimal solution or how they cope with approximation error due to using a restricted class of parametric policies. This work provides provable characterizations of the computational, approximation, and sample size properties of policy gradient methods in the context of discounted Markov Decision Processes (MDPs). We focus on both: "tabular" policy parameterizations, where the optimal policy is contained in the class and where we show global convergence to the optimal policy; and parametric policy classes (considering both log-linear and neural policy classes), which may not contain the optimal policy and where we provide agnostic learning results. One central contribution of this work is in providing approximation guarantees that are average case -- which avoid explicit worst-case dependencies on the size of state space -- by making a formal connection to supervised learning under distribution shift. This characterization shows an important interplay between estimation error, approximation error, and exploration (as characterized through a precisely defined condition number).
248 citations
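As a concrete illustration of the "tabular" softmax parameterization the abstract refers to, here is a minimal Monte Carlo (REINFORCE-style) policy gradient sketch on a toy two-state MDP. The MDP, step size, and episode length are illustrative assumptions; this is not the analysis or the algorithmic setup studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, gamma = 2, 0.9
P = np.array([[0, 1], [1, 0]])          # P[s, a] -> next state (toy dynamics)
R = np.array([[0.0, 1.0], [1.0, 0.0]])  # R[s, a] -> reward
theta = np.zeros((2, n_actions))        # tabular softmax parameters, one row per state

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def run_episode(T=30):
    s, traj = 0, []
    for _ in range(T):
        a = rng.choice(n_actions, p=policy(s))
        traj.append((s, a, R[s, a]))
        s = P[s, a]
    return traj

for _ in range(2000):
    traj = run_episode()
    # discounted returns-to-go for each step of the episode
    returns, G = [], 0.0
    for (_, _, r) in reversed(traj):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    grad = np.zeros_like(theta)
    for t, (s, a, _) in enumerate(traj):
        g_logpi = -policy(s)
        g_logpi[a] += 1.0                       # grad of log softmax w.r.t. theta[s]
        grad[s] += (gamma ** t) * returns[t] * g_logpi
    theta += 0.01 * grad                        # ascend the estimated policy gradient

print("policy at state 0:", policy(0))          # should come to favour the rewarding action
print("policy at state 1:", policy(1))
```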
TL;DR: In this article, the authors present a new paradigm that uses experimental mathematics (often referred to in the methodological literature as Monte Carlo simulation) to examine the claims made in the levels of measurement controversy, and demonstrate that the advocated approach is linked closely to representational theory.
Abstract: The notion that nonparametric methods are required as a replacement for parametric statistical methods when the scale of measurement in a research study does not achieve a certain level was discussed in light of recent developments in representational measurement theory. A new approach to examining the problem via computer simulation was introduced. Some of the beliefs that have been widely held by psychologists for several decades were examined by means of a computer simulation study that mimicked measurement of an underlying empirical structure and performed two-sample Student t-tests on the resulting sample data. It was concluded that there is no need to replace parametric statistical tests by nonparametric methods when the scale of measurement is ordinal and not interval.

Stevens' (1946) classic paper on the theory of scales of measurement triggered one of the longest-standing debates in behavioural science methodology. The debate -- referred to as the levels of measurement controversy, or measurement-statistics debate -- is over the use of parametric and nonparametric statistics and its relation to levels of measurement. Stevens (1946; 1951; 1959; 1968), Siegel (1956), and most recently Siegel and Castellan (1988) and Conover (1980) argue that parametric statistics should be restricted to data of interval scale or higher, and that nonparametric statistics should be used on data of ordinal scale. Of course, since each scale of measurement has all of the properties of the weaker measurement, statistical methods requiring only a weaker scale may be used with the stronger scales. A detailed historical review linking Stevens' work on scales of measurement to the acceptance of psychology as a science, and a pedagogical presentation of fundamental axiomatic (i.e., representational) measurement, can be found in Zumbo and Zimmerman (1991).

Many modes of argumentation can be seen in the debate about levels of measurement and statistics. This paper focusses almost exclusively on an empirical form of rhetoric using experimental mathematics (Ripley, 1987). The term experimental mathematics comes from mathematical physics. It is loosely defined as the mimicking of the rules of a model of some kind via random processes. In the methodological literature this is often referred to as monte carlo simulation. However, for the purpose of this paper, the terms experimental mathematics or computer simulation are preferred to monte carlo because the latter is typically invoked when examining the robustness of a test in relation to particular statistical assumptions. Measurement level is not an assumption of the parametric statistical model (see Zumbo & Zimmerman, 1991, for a discussion of this issue), and to call the method used herein "monte carlo" would imply otherwise. The term experimental mathematics emphasizes the modelling aspect of the present approach to the debate.

The purpose of this paper is to present a new paradigm using experimental mathematics to examine the claims made in the levels of measurement controversy. As Michell (1986) demonstrated, the concern over levels of measurement is inextricably tied to the differing notions of measurement and scaling. Michell further argued that fundamental axiomatic measurement, or representational theory (see, for example, Narens & Luce, 1986), is the only measurement theory which implies a relation between measurement scales and statistics. Therefore, the approach advocated in this paper is linked closely to representational theory. The novelty of this approach, to the authors' knowledge, is in the use of experimental mathematics to mimic representational measurement. Before describing the methodology used in this paper, we will briefly review its motivation.

Admissible Transformations

Representational theory began in the late 1950s with Scott and Suppes (1958) and later with Suppes and Zinnes (1963), Pfanzagl (1968), and Krantz, Luce, Suppes & Tversky (1971). …
247 citations
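To make the kind of simulation argument described above concrete, here is a small Python sketch (the sample size, monotone transformation, and number of replications are illustrative assumptions, not the paper's design): two samples are drawn from the same population, passed through an order-preserving transformation so that only ordinal information is kept, and a two-sample Student t-test is applied; the empirical Type I error rate stays close to the nominal level.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, alpha, reps = 30, 0.05, 5000

def ordinalize(x):
    # A monotone "squashing" transform: preserves order, distorts intervals.
    return np.digitize(x, bins=np.percentile(x, [20, 40, 60, 80]))

rejections = 0
for _ in range(reps):
    a = rng.normal(size=n)
    b = rng.normal(size=n)    # same population, so any rejection is a Type I error
    scores = ordinalize(np.concatenate([a, b]))
    p = ttest_ind(scores[:n], scores[n:]).pvalue
    rejections += (p < alpha)

print("empirical Type I error rate:", rejections / reps)  # should be close to 0.05
```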
TL;DR: The case studies demonstrate the importance of embedding physical constraints within learned models and highlight that the amount of model training data available in an engineering setting is often far smaller than in other machine learning applications, making it essential to incorporate knowledge from physical models.
247 citations
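One common way to embed a physical constraint in a learned model when training data are scarce is to add a soft penalty to the training loss. The sketch below is illustrative only: the constraints (f(0) = 0 and a non-negative slope), the polynomial surrogate, and the optimizer are hypothetical choices, not the method used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x_data = np.array([0.5, 1.5, 2.5])                 # only three training points (scarce data)
y_data = np.sqrt(x_data) + 0.05 * rng.normal(size=3)
x_phys = np.linspace(0.0, 3.0, 50)                 # points where the constraints are enforced

def model(w, x):
    # Small polynomial surrogate model.
    return w[0] + w[1] * x + w[2] * x**2

def loss(w, lam=1.0):
    data_term = np.mean((model(w, x_data) - y_data) ** 2)
    slope = w[1] + 2 * w[2] * x_phys               # analytic derivative of the polynomial
    # Physics penalty: f(0) should be 0 and the slope should be non-negative.
    physics_term = model(w, 0.0) ** 2 + np.mean(np.minimum(slope, 0.0) ** 2)
    return data_term + lam * physics_term

res = minimize(loss, x0=np.zeros(3), method="Nelder-Mead")
print("fitted coefficients:", res.x, "loss:", res.fun)
```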
TL;DR: In this paper, the authors present the results of an experimental effort to generate squeezed microwave radiation using the phase-sensitive gain of a Josephson parametric amplifier, operated in both the doubly degenerate (four-photon) mode and the degenerate (three-photon) mode.
Abstract: We present the results of an experimental effort to generate squeezed microwave radiation using the phase-sensitive gain of a Josephson parametric amplifier. To facilitate the interpretation of the experimental results, we first present a discussion of the theory of microwave squeezing via Josephson parametric amplifiers. This is followed by a detailed description of the device fabricated for our experiment. Experimental results are then presented for the device used in both the doubly degenerate or four-photon mode and the degenerate or three-photon mode. We have observed parametric deamplification of signals by more than 8 dB. We have demonstrated squeezing of 4.2-K thermal noise. When operated at 0.1 K, the amplifier exhibits an excess noise of 0.28 K when referred to the input. This is smaller than the vacuum fluctuation noise level ħω/2k = 0.47 K. The amplifier is thus quieter than a linear phase-insensitive amplifier can in principle be.
247 citations
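As a quick consistency check (not a result from the paper), the quoted vacuum-fluctuation level ħω/2k = 0.47 K can be inverted to estimate the implied signal frequency, which comes out at roughly 19-20 GHz, i.e. squarely in the microwave range. The frequency is inferred here for illustration; it is not stated in the abstract.

```python
from scipy.constants import hbar, k, pi

T_vac = 0.47                   # K, the quoted value of hbar*omega / (2*k_B)
omega = 2 * k * T_vac / hbar   # solve for the angular frequency, rad/s
f_GHz = omega / (2 * pi) / 1e9
print(f"implied signal frequency: {f_GHz:.1f} GHz")  # roughly 19-20 GHz
```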