
Showing papers on "Randomness published in 2003"


Book
01 Jan 2003
TL;DR: In this book, the authors present a comprehensive treatment of uncertainty, reliability, and risk in geotechnical engineering, covering probability and statistical inference, site characterization and spatial variability, and reliability methods ranging from FOSM and FORM to point estimates and Monte Carlo simulation.
Abstract: Preface. Part I. 1 Introduction - uncertainty and risk in geotechnical engineering. 1.1 Offshore platforms. 1.2 Pit mine slopes. 1.3 Balancing risk and reliability in a geotechnical design. 1.4 Historical development of reliability methods in civil engineering. 1.5 Some terminological and philosophical issues. 1.6 The organization of this book. 1.7 A comment on notation and nomenclature. 2 Uncertainty. 2.1 Randomness, uncertainty, and the world. 2.2 Modeling uncertainties in risk and reliability analysis. 2.3 Probability. 3 Probability. 3.1 Histograms and frequency diagrams. 3.2 Summary statistics. 3.3 Probability theory. 3.4 Random variables. 3.5 Random process models. 3.6 Fitting mathematical pdf models to data. 3.7 Covariance among variables. 4 Inference. 4.1 Frequentist theory. 4.2 Bayesian theory. 4.3 Prior probabilities. 4.4 Inferences from sampling. 4.5 Regression analysis. 4.6 Hypothesis tests. 4.7 Choice among models. 5 Risk, decisions and judgment. 5.1 Risk. 5.2 Optimizing decisions. 5.3 Non-optimizing decisions. 5.4 Engineering judgment. Part II. 6 Site characterization. 6.1 Developments in site characterization. 6.2 Analytical approaches to site characterization. 6.3 Modeling site characterization activities. 6.4 Some pitfalls of intuitive data evaluation. 6.5 Organization of Part II. 7 Classification and mapping. 7.1 Mapping discrete variables. 7.2 Classification. 7.3 Discriminant analysis. 7.4 Mapping. 7.5 Carrying out a discriminant or logistic analysis. 8 Soil variability. 8.1 Soil properties. 8.2 Index tests and classification of soils. 8.3 Consolidation properties. 8.4 Permeability. 8.5 Strength properties. 8.6 Distributional properties. 8.7 Measurement error. 9 Spatial variability within homogeneous deposits. 9.1 Trends and variations about trends. 9.2 Residual variations. 9.3 Estimating autocorrelation and autocovariance. 9.4 Variograms and geostatistics. Appendix: algorithm for maximizing log-likelihood of autocovariance. 10 Random field theory. 10.1 Stationary processes. 10.2 Mathematical properties of autocovariance functions. 10.3 Multivariate (vector) random fields. 10.4 Gaussian random fields. 10.5 Functions of random fields. 11 Spatial sampling. 11.1 Concepts of sampling. 11.2 Common spatial sampling plans. 11.3 Interpolating random fields. 11.4 Sampling for autocorrelation. 12 Search theory. 12.1 Brief history of search theory. 12.2 Logic of a search process. 12.3 Single stage search. 12.4 Grid search. 12.5 Inferring target characteristics. 12.6 Optimal search. 12.7 Sequential search. Part III. 13 Reliability analysis and error propagation. 13.1 Loads, resistances and reliability. 13.2 Results for different distributions of the performance function. 13.3 Steps and approximations in reliability analysis. 13.4 Error propagation - statistical moments of the performance function. 13.5 Solution techniques for practical cases. 13.6 A simple conceptual model of practical significance. 14 First order second moment (FOSM) methods. 14.1 The James Bay dikes. 14.2 Uncertainty in geotechnical parameters. 14.3 FOSM calculations. 14.4 Extrapolations and consequences. 14.5 Conclusions from the James Bay study. 14.6 Final comments. 15 Point estimate methods. 15.1 Mathematical background. 15.2 Rosenblueth's cases and notation. 15.3 Numerical results for simple cases. 15.4 Relation to orthogonal polynomial quadrature. 15.5 Relation with 'Gauss points' in the finite element method. 15.6 Limitations of orthogonal polynomial quadrature. 
15.7 Accuracy, or when to use the point-estimate method. 15.8 The problem of the number of computation points. 15.9 Final comments and conclusions. 16 The Hasofer-Lind approach (FORM). 16.1 Justification for improvement - vertical cut in cohesive soil. 16.2 The Hasofer-Lind formulation. 16.3 Linear or non-linear failure criteria and uncorrelated variables. 16.4 Higher order reliability. 16.5 Correlated variables. 16.6 Non-normal variables. 17 Monte Carlo simulation methods. 17.1 Basic considerations. 17.2 Computer programming considerations. 17.3 Simulation of random processes. 17.4 Variance reduction methods. 17.5 Summary. 18 Load and resistance factor design. 18.1 Limit state design and code development. 18.2 Load and resistance factor design. 18.3 Foundation design based on LRFD. 18.4 Concluding remarks. 19 Stochastic finite elements. 19.1 Elementary finite element issues. 19.2 Correlated properties. 19.3 Explicit formulation. 19.4 Monte Carlo study of differential settlement. 19.5 Summary and conclusions. Part IV. 20 Event tree analysis. 20.1 Systems failure. 20.2 Influence diagrams. 20.3 Constructing event trees. 20.4 Branch probabilities. 20.5 Levee example revisited. 21 Expert opinion. 21.1 Expert opinion in geotechnical practice. 21.2 How do people estimate subjective probabilities? 21.3 How well do people estimate subjective probabilities? 21.4 Can people learn to be well-calibrated? 21.5 Protocol for assessing subjective probabilities. 21.6 Conducting a process to elicit quantified judgment. 21.7 Practical suggestions and techniques. 21.8 Summary. 22 System reliability assessment. 22.1 Concepts of system reliability. 22.2 Dependencies among component failures. 22.3 Event tree representations. 22.4 Fault tree representations. 22.5 Simulation approach to system reliability. 22.6 Combined approaches. 22.7 Summary. Appendix A: A primer on probability theory. A.1 Notation and axioms. A.2 Elementary results. A.3 Total probability and Bayes' theorem. A.4 Discrete distributions. A.5 Continuous distributions. A.6 Multiple variables. A.7 Functions of random variables. References. Index.

1,110 citations


Book ChapterDOI
01 Jan 2003
TL;DR: This chapter gives a broad introduction to probability and statistics and defines the important terms, such as probability, statistics, chance and randomness.
Abstract: This chapter gives a broad introduction to probability and statistics and defines the important terms, such as probability, statistics, chance and randomness. It also provides an overview of the information provided in the chapters of the book. This book starts with the basics of probability and then covers descriptive statistics. Then various probability distributions are investigated. The second half of the book is mostly concerned with statistical inference, including relations between two or more variables, and there are introductory chapters on the design and analysis of experiments. The book also includes a number of computer examples and computer exercises, which can be done using Microsoft Excel. Solved problem examples and problems for the reader to solve are included throughout the book. A great majority of the problems are directly applied to engineering, involving many different branches of engineering. They show how statistics and probability can be applied by professional engineers.

893 citations


Journal ArticleDOI
10 Jan 2003-Chaos
TL;DR: Several phenomenological approaches to applying information theoretic measures of randomness and memory to stochastic and deterministic processes are synthesized by using successive derivatives of the Shannon entropy growth curve to look at the relationships between a process's entropy convergence behavior and its underlying computational structure.
Abstract: We study how the Shannon entropy of sequences produced by an information source converges to the source’s entropy rate. We synthesize several phenomenological approaches to applying information theoretic measures of randomness and memory to stochastic and deterministic processes by using successive derivatives of the Shannon entropy growth curve. This leads, in turn, to natural measures of apparent memory stored in a source and the amounts of information that must be extracted from observations of a source in order for it to be optimally predicted and for an observer to synchronize to it. To measure the difficulty of synchronization, we define the transient information and prove that, for Markov processes, it is related to the total uncertainty experienced while synchronizing to a process. One consequence of ignoring a process’s structural properties is that the missed regularities are converted to apparent randomness. We demonstrate that this problem arises particularly for settings where one has access only to short measurement sequences. Numerically and analytically, we determine the Shannon entropy growth curve, and related quantities, for a range of stochastic and deterministic processes. We conclude by looking at the relationships between a process’s entropy convergence behavior and its underlying computational structure.
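The entropy growth curve the authors analyze can be estimated directly from data. The sketch below is only an illustration under the assumption of a stationary symbol sequence (not the authors' code): it computes block entropies H(L) and their discrete derivative h(L) = H(L) - H(L-1), whose convergence with L reflects the apparent memory of the source.

```python
from collections import Counter
from math import log2

def block_entropy(seq, L):
    """Shannon entropy H(L), in bits, of length-L blocks of a symbol sequence."""
    if L == 0:
        return 0.0
    blocks = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    n = len(blocks)
    return -sum((c / n) * log2(c / n) for c in Counter(blocks).values())

# The discrete derivative h(L) = H(L) - H(L-1) estimates the entropy rate;
# how quickly it settles as L grows reflects the source's apparent memory.
seq = [0, 1, 1, 0, 1, 0, 0, 1] * 250        # toy period-8 source: h(L) -> 0
for L in range(1, 9):
    print(L, round(block_entropy(seq, L) - block_entropy(seq, L - 1), 3))
```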

407 citations


Posted Content
TL;DR: A new method based on the ``go with the winners'' Monte Carlo method is presented, which can be used to evaluate the reliability of the other two methods and demonstrate that the deviations of the switching and matching algorithms under realistic conditions are small.
Abstract: Random graphs with prescribed degree sequences have been widely used as a model of complex networks. Comparing an observed network to an ensemble of such graphs allows one to detect deviations from randomness in network properties. Here we briefly review two existing methods for the generation of random graphs with arbitrary degree sequences, which we call the ``switching'' and ``matching'' methods, and present a new method based on the ``go with the winners'' Monte Carlo method. The matching method may suffer from nonuniform sampling, while the switching method has no general theoretical bound on its mixing time. The ``go with the winners'' method has neither of these drawbacks, but is slow. It can however be used to evaluate the reliability of the other two methods and, by doing this, we demonstrate that the deviations of the switching and matching algorithms under realistic conditions are small compared to the ``go with the winners'' algorithm. Because of its combination of speed and accuracy we recommend the use of the switching method for most calculations.
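For concreteness, here is a minimal sketch of the switching method described above: repeated degree-preserving edge swaps on a simple undirected graph. The function and parameter names are illustrative, and how many swaps suffice for mixing is exactly the open question the abstract mentions.

```python
import random

def switching_randomize(edges, n_swaps, max_attempts_factor=20):
    """Degree-preserving randomization of a simple undirected graph by the
    'switching' method: repeatedly pick two edges (a,b),(c,d) and rewire them
    to (a,d),(c,b), rejecting swaps that would create self-loops or multi-edges."""
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    done = 0
    for _ in range(max_attempts_factor * n_swaps):
        if done >= n_swaps:
            break
        i, j = random.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:
            continue                       # shared endpoint: would create a self-loop
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if new1 in edge_set or new2 in edge_set:
            continue                       # would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {new1, new2}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

# Example: shuffle a 6-cycle while keeping every vertex at degree 2
print(switching_randomize([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)], 50))
```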

365 citations


Book ChapterDOI
Ran Canetti1, Tal Rabin1
17 Aug 2003
TL;DR: In this paper, the authors propose a new composition operation, universal composition with joint state, which builds on the recently proposed universal composition operation and handles the case where different protocol instances share some amount of state and randomness.
Abstract: Cryptographic systems often involve running multiple concurrent instances of some protocol, where the instances have some amount of joint state and randomness. (Examples include systems where multiple protocol instances use the same public-key infrastructure, or the same common reference string.) Rather than attempting to analyze the entire system as a single unit, we would like to be able to analyze each such protocol instance as stand-alone, and then use a general composition theorem to deduce the security of the entire system. However, no known composition theorem applies in this setting, since they all assume that the composed protocol instances have disjoint internal states, and that the internal random choices in the various executions are independent. We propose a new composition operation that can handle the case where different components have some amount of joint state and randomness, and demonstrate sufficient conditions for when the new operation preserves security. The new operation, which is called universal composition with joint state (and is based on the recently proposed universal composition operation), turns out to be very useful in a number of quite different scenarios such as those mentioned above.

304 citations


Journal ArticleDOI
15 Sep 2003
TL;DR: This work proposes a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence, yet is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary.
Abstract: A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary. Moreover, the algorithm is universal also in a semi-stochastic setting, in which the input is an individual sequence, and the randomness is due solely to the channel noise. The proposed denoising algorithm is practical, requiring a linear number of register-level operations and sublinear working storage size relative to the input data length.
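The two-pass structure described above can be sketched as follows. This follows the generic context-counting rule in the spirit of the paper (channel matrix Pi with Pi[x, z] = P(z|x), loss matrix Lambda[x, xhat]), but the names and the brute-force dictionary of contexts are illustrative choices, not the authors' implementation.

```python
import numpy as np
from collections import defaultdict

def discrete_denoise(z, Pi, Lambda, k):
    """Two-pass sliding-window discrete denoiser (illustrative sketch).
    z: observed symbols in {0,...,M-1}; Pi[x, zz] = P(zz | x) channel matrix;
    Lambda[x, xhat] = loss; k: one-sided context length."""
    M = Pi.shape[0]
    n = len(z)
    Pi_inv = np.linalg.inv(Pi)
    # Pass 1: for every two-sided context, count the symbols seen at its centre.
    counts = defaultdict(lambda: np.zeros(M))
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        counts[ctx][z[i]] += 1
    # Pass 2: replace each symbol by the reconstruction minimizing the
    # estimated expected loss  m^T Pi^{-1} (Lambda[:, xhat] * Pi[:, z_i]).
    xhat = list(z)
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        m = counts[ctx]
        scores = [m @ Pi_inv @ (Lambda[:, a] * Pi[:, z[i]]) for a in range(M)]
        xhat[i] = int(np.argmin(scores))
    return xhat

# Example: binary symmetric channel with crossover 0.1 and Hamming loss
Pi = np.array([[0.9, 0.1], [0.1, 0.9]])
Lam = np.array([[0.0, 1.0], [1.0, 0.0]])
noisy = [0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1] * 50
print(discrete_denoise(noisy, Pi, Lam, k=1)[:12])
```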

259 citations


Journal ArticleDOI
01 Jan 2003
TL;DR: In this paper, a method of RBD with the mixture of random variables with distributions and uncertain variables with intervals is proposed, where the reliability is considered under the condition of the worst combination of interval variables.
Abstract: In Reliability-Based Design (RBD), uncertainty is usually equated with randomness: nondeterministic variables are assumed to follow known probability distributions. However, in real engineering applications, some distributions may not be precisely known, or the uncertainty associated with some variables does not stem from randomness at all; such variables are only known within intervals. In this paper, a method of RBD with a mixture of random variables with distributions and uncertain variables with intervals is proposed. The reliability is considered under the condition of the worst combination of interval variables. In comparison with traditional RBD, the computational demand of RBD with the mixture of random and interval variables increases dramatically. To alleviate the computational burden, a sequential single-loop procedure is developed to replace the computationally expensive double-loop procedure that results when the worst-case scenario is applied directly. With the proposed method, the RBD is conducted within a series of cycles of deterministic optimization and reliability analysis. The optimization model in each cycle is built from the Most Probable Point (MPP) and the worst-case combination obtained in the reliability analysis of the previous cycle. Since the optimization is decoupled from the reliability analysis, the computational effort for the MPP search is reduced to the minimum extent. The proposed method is demonstrated with a structural design example.
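As a point of reference for the computational burden the authors set out to reduce, the brute-force worst-case evaluation they avoid looks roughly like the sketch below. It uses a hypothetical limit-state function and Monte Carlo reliability estimation purely for illustration; it is not the paper's sequential single-loop procedure.

```python
import numpy as np

def worst_case_reliability(limit_state, x_sampler, y_bounds, n_grid=21, n_mc=20000):
    """Reliability under the worst combination of an interval variable y:
    sweep y over its range and return the minimum probability that the
    limit state g(x, y) >= 0. This is the expensive double-loop style of
    evaluation that the paper's sequential procedure is designed to avoid."""
    ys = np.linspace(y_bounds[0], y_bounds[1], n_grid)
    x = x_sampler(n_mc)                                   # samples of the random variables
    rels = [(limit_state(x, y) >= 0).mean() for y in ys]
    return min(rels), ys[int(np.argmin(rels))]

# Hypothetical limit state g = R - S*y, with R ~ N(5, 0.5), S ~ N(3, 0.3), y in [1.0, 1.4]
rel, y_worst = worst_case_reliability(
    lambda x, y: x[:, 0] - x[:, 1] * y,
    lambda n: np.column_stack([np.random.normal(5, 0.5, n), np.random.normal(3, 0.3, n)]),
    (1.0, 1.4))
print(rel, y_worst)
```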

245 citations


Book Chapter
01 Jul 2003
TL;DR: This chapter develops ideas about how such phenomena can be modelled, showing first how randomness and geometry are both essential to local movement and how ordered spatial structures emerge from such actions.
Abstract: When the focus of interest in geographical systems is at the very fine scale, at the level of streets and buildings for example, movement becomes central to simulations of spatial activities. Recent advances in computing power and the acquisition of fine-scale digital data now mean that we can attempt to understand and predict such phenomena, with the focus in spatial modelling changing to dynamic simulations of the individual and collective behaviour of individual decision making at these scales. In this chapter, we develop ideas about how such phenomena can be modelled showing first how randomness and geometry are all important to local movement and how ordered spatial structures emerge from such actions. We focus on these ideas with pedestrians, showing how random walks constrained by geometry but aided by what agents can see, determine how individuals respond to locational patterns. We illustrate these ideas with three examples: first for local scale street scenes where congestion and flocking is all important, second for coarser scale shopping centres such as malls where economic preference interferes much more with local geometry, and finally for semi-organised street festivals where management and control by police and related authorities is integral to the way crowds move.
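A toy version of the "randomness constrained by geometry" idea is sketched below: a walker on a grid takes mostly random steps but, with some probability, moves toward a goal, while walls (the geometry) restrict the available moves. All names and the bias parameter are illustrative assumptions; the chapter's agent models are far richer.

```python
import random

def constrained_walk(start, is_wall, steps, bias=0.7, goal=None):
    """Random walk on a grid where geometry (walls) constrains movement and a
    tunable bias nudges the walker toward a goal; a toy illustration of
    randomness plus geometry driving local movement."""
    x, y = start
    path = [(x, y)]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        options = [(x + dx, y + dy) for dx, dy in moves if not is_wall(x + dx, y + dy)]
        if not options:
            break
        if goal is not None and random.random() < bias:
            # biased step: take the admissible move closest to the goal
            nxt = min(options, key=lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
        else:
            nxt = random.choice(options)   # purely random step
        x, y = nxt
        path.append(nxt)
    return path

# Example: walk in a 20x20 room with solid boundary walls, heading for (18, 18)
wall = lambda x, y: not (0 < x < 19 and 0 < y < 19)
print(constrained_walk((1, 1), wall, steps=200, goal=(18, 18))[-1])
```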

215 citations


Book ChapterDOI
06 Jan 2003
TL;DR: A test shows that randomness re-use is secure in the strong sense for asymmetric encryption schemes such as El Gamal, Cramer-Shoup, DHIES, and Boneh and Franklin's escrow ElGamal.
Abstract: Kurosawa showed how one could design multi-receiver encryption schemes achieving savings in bandwidth and computation relative to the naive methods. We broaden the investigation. We identify new types of attacks possible in multi-recipient settings, which were overlooked by the previously suggested models, and specify an appropriate model to incorporate these types of attacks. We then identify a general paradigm that underlies his schemes and also others, namely the re-use of randomness: ciphertexts sent to different receivers by a single sender are computed using the same underlying coins. In order to avoid case-by-case analysis of encryption schemes to see whether they permit secure randomness re-use, we provide a condition, or test, that when applied to an encryption scheme shows whether or not the associated randomness re-using version of the scheme is secure. As a consequence, our test shows that randomness re-use is secure in the strong sense for asymmetric encryption schemes such as El Gamal, Cramer-Shoup, DHIES, and Boneh and Franklin's escrow El Gamal.
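To make the "single sender, same coins" paradigm concrete, here is a toy multi-recipient ElGamal sketch in which one ephemeral exponent r is re-used for all recipients. The tiny parameters and absence of encoding or padding are illustrative assumptions only; the paper's contribution is the security model and the test that justifies this kind of re-use.

```python
import random

# Toy group: a Mersenne prime modulus and a small generator (illustration only,
# not a cryptographically sound group setup).
p = 2**61 - 1
g = 3

def keygen(rng=random.SystemRandom()):
    x = rng.randrange(2, p - 1)            # secret key
    return x, pow(g, x, p)                 # (sk, pk)

def encrypt_to_many(pks, messages, rng=random.SystemRandom()):
    """ElGamal to several recipients re-using a single ephemeral coin r:
    the first component g^r is shared, saving bandwidth and exponentiations."""
    r = rng.randrange(2, p - 1)
    c1 = pow(g, r, p)
    return c1, [(m * pow(pk, r, p)) % p for pk, m in zip(pks, messages)]

def decrypt(sk, c1, c2):
    return (c2 * pow(c1, p - 1 - sk, p)) % p

keys = [keygen() for _ in range(3)]
c1, c2s = encrypt_to_many([pk for _, pk in keys], [11, 22, 33])
print([decrypt(sk, c1, c2) for (sk, _), c2 in zip(keys, c2s)])   # [11, 22, 33]
```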

143 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the quantum random walk on the line and found exact analytical expressions for the time dependence of the first two moments of position, and showed that in the long-time limit the variance grows linearly with time.
Abstract: The quantum random walk has been much studied recently, largely due to its highly nonclassical behavior. In this paper, we study one possible route to classical behavior for the discrete quantum walk on the line: the presence of decoherence in the quantum ``coin'' which drives the walk. We find exact analytical expressions for the time dependence of the first two moments of position, and show that in the long-time limit the variance grows linearly with time, unlike the unitary walk. We compare this to the results of direct numerical simulation, and see how the form of the position distribution changes from the unitary to the usual classical result as we increase the strength of the decoherence.
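The crossover from quantum to classical spreading can be seen in a small simulation. The sketch below is a quantum-trajectories approximation with illustrative parameters, not the authors' analytical treatment: the coin is measured with probability p_dec per step, so p_dec = 0 gives the ballistic unitary walk and p_dec = 1 the diffusive classical walk.

```python
import numpy as np

def decoherent_walk_variance(t_steps, p_dec, n_traj=200, seed=0):
    """Hadamard walk on the line with coin decoherence, averaged over
    stochastic trajectories; returns the mean position variance at t_steps."""
    rng = np.random.default_rng(seed)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    size = 2 * t_steps + 1
    x = np.arange(size) - t_steps
    variances = []
    for _ in range(n_traj):
        psi = np.zeros((size, 2), dtype=complex)
        psi[t_steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]    # symmetric initial coin
        for _ in range(t_steps):
            psi = psi @ H.T                                  # coin toss
            shifted = np.zeros_like(psi)
            shifted[1:, 0] = psi[:-1, 0]                     # coin 0 steps right
            shifted[:-1, 1] = psi[1:, 1]                     # coin 1 steps left
            psi = shifted
            if rng.random() < p_dec:                         # decoherence: measure the coin
                p1 = np.sum(np.abs(psi[:, 1]) ** 2)
                if rng.random() < p1:
                    psi[:, 0] = 0
                else:
                    psi[:, 1] = 0
                psi /= np.linalg.norm(psi)
        prob = np.sum(np.abs(psi) ** 2, axis=1)
        variances.append(prob @ x**2 - (prob @ x) ** 2)
    return float(np.mean(variances))

print(decoherent_walk_variance(50, 0.0), decoherent_walk_variance(50, 1.0))
```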

143 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the effect of the randomness of the sampling intervals on the performance of a continuous-time model for high-frequency financial data and found that in many situations, the sampling randomness has a larger impact than the discreteness of the data.
Abstract: High-frequency financial data are not only discretely sampled in time but the time separating successive observations is often random. We analyze the consequences of this dual feature of the data when estimating a continuous-time model. In particular, we measure the additional effects of the randomness of the sampling intervals over and beyond those due to the discreteness of the data. We also examine the effect of simply ignoring the sampling randomness. We find that in many situations the randomness of the sampling has a larger impact than the discreteness of the data.

Book ChapterDOI
24 Aug 2003
TL;DR: This paper investigates the notion of computational min-entropy which is the computational analog of statistical min-ENTropy, and considers three possible definitions for this notion, and shows equivalence and separation results in various computational models.
Abstract: Min-entropy is a statistical measure of the amount of randomness that a particular distribution contains. In this paper we investigate the notion of computational min-entropy which is the computational analog of statistical min-entropy. We consider three possible definitions for this notion, and show equivalence and separation results for these definitions in various computational models.
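For reference, the statistical quantity being "computationalized" is recorded below in LaTeX, together with one natural way to phrase its computational analog. The indistinguishability-based (HILL-style) phrasing is only one flavor, given here as an illustration of the kind of definition the paper compares.

```latex
% Statistical min-entropy of a distribution X over a finite set:
H_\infty(X) \;=\; -\log_2 \max_x \Pr[X = x].

% One computational analog (HILL-style, for illustration): X has computational
% min-entropy at least k if there exists a distribution Y with H_\infty(Y) \ge k
% that is computationally indistinguishable from X.
```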

Proceedings ArticleDOI
08 Jun 2003
TL;DR: A biased randomized insertion order is defined which removes enough randomness to significantly improve performance, but leaves enough randomness that the algorithms remain theoretically optimal.
Abstract: Randomized incremental constructions are widely used in computational geometry, but they perform very badly on large data because of their inherently random memory access patterns. We define a biased randomized insertion order which removes enough randomness to significantly improve performance, but leaves enough randomness so that the algorithms remain theoretically optimal.

Journal ArticleDOI
TL;DR: The author discusses some promising new random number generators, formulates the mathematical basis that makes them random variables in the same sense as more familiar ones in probability and statistics, and emphasizes his view that randomness exists only in the sense of mathematics.
Abstract: The author discusses some promising new random number generators, as well as formulates the mathematical basis that makes them random variables in the same sense as more familiar ones in probability and statistics, emphasizing his view that randomness exists only in the sense of mathematics. He discusses the need for adequate seeds that provide the axioms for that mathematical basis, and gives examples from Law and Gaming, where inadequacies have led to difficulties. He also describes new versions of the widely used Diehard Battery of Tests of Randomness.
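As a concrete illustration of the kind of generator under discussion, here is a xorshift generator of the sort Marsaglia introduced around the same time. The shift triple (13, 17, 5) and example seed are taken from his published xorshift examples, but the specific generators treated in this paper may differ, so treat the sketch as illustrative only.

```python
def xorshift32(seed):
    """Marsaglia-style 32-bit xorshift generator; yields integers in [1, 2^32 - 1].
    Illustrative sketch; the seed must itself be chosen adequately."""
    x = seed & 0xFFFFFFFF
    assert x != 0, "xorshift requires a nonzero seed"
    while True:
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        yield x

gen = xorshift32(2463534242)        # example seed from Marsaglia's xorshift paper
print([next(gen) for _ in range(3)])
```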

Journal ArticleDOI
15 Sep 2003
TL;DR: A single-letter formula for the optimal tradeoff between the extracted common randomness and classical communication rate is obtained for the special case of classical-quantum correlations.
Abstract: The problem of converting noisy quantum correlations between two parties into noiseless classical ones using a limited amount of one-way classical communication is addressed. A single-letter formula for the optimal tradeoff between the extracted common randomness and classical communication rate is obtained for the special case of classical-quantum correlations. The resulting curve is intimately related to the quantum compression with classical side information tradeoff curve Q*(R) of Hayden, Jozsa, and Winter. For a general initial state, we obtain a similar result, with a single-letter formula, when we impose a tensor product restriction on the measurements performed by the sender; without this restriction, the tradeoff is given by the regularization of this function. Of particular interest is a quantity we call "distillable common randomness" of a state: the maximum overhead of the common randomness over the one-way classical communication if the latter is unbounded. It is an operational measure of (total) correlation in a quantum state. For classical-quantum correlations it is given by the Holevo mutual information of its associated ensemble; for pure states it is the entropy of entanglement. In general, it is given by an optimization problem over measurements and regularization; for the case of separable states we show that this can be single-letterized.
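The classical-quantum special case mentioned above has a closed form worth recording: for an ensemble {p_i, rho_i} the distillable common randomness equals the Holevo mutual information, written below in LaTeX (S denotes the von Neumann entropy).

```latex
\chi\bigl(\{p_i,\rho_i\}\bigr)
  \;=\; S\!\Bigl(\sum_i p_i\,\rho_i\Bigr) \;-\; \sum_i p_i\, S(\rho_i),
\qquad
S(\rho) \;=\; -\operatorname{Tr}\bigl(\rho \log \rho\bigr).
```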

Posted Content
Peter Gacs1
TL;DR: In this article, a general framework for the algorithmic theory of randomness is introduced that moves beyond the classical setting, in which the underlying space is the set of finite or infinite sequences and the underlying probability distribution is the uniform distribution or a computable distribution, and extends the theory to non-compact spaces and arbitrary distributions.
Abstract: The algorithmic theory of randomness is well developed when the underlying space is the set of finite or infinite sequences and the underlying probability distribution is the uniform distribution or a computable distribution. These restrictions seem artificial. Some progress has been made to extend the theory to arbitrary Bernoulli distributions (by Martin-Loef), and to arbitrary distributions (by Levin). We recall the main ideas and problems of Levin's theory, and report further progress in the same framework. We allow non-compact spaces (like the space of continuous functions, underlying the Brownian motion). The uniform test (deficiency of randomness) d_P(x) (depending both on the outcome x and the measure P) should be defined in a general and natural way. We see which of the old results survive: existence of universal tests, conservation of randomness, expression of tests in terms of description complexity, existence of a universal measure, expression of mutual information as "deficiency of independence." The negative of the new randomness test is shown to be a generalization of complexity in continuous spaces; we show that the addition theorem survives. The paper's main contribution is introducing an appropriate framework for studying these questions and related ones (like statistics for a general family of distributions).

Journal ArticleDOI
Tad Hogg1
TL;DR: The discrete formulation of adiabatic quantum computing is compared with other search methods, classical and quantum, for random satisfiability (SAT) problems, and variants of the quantum algorithm that do not match the adiabatic limit are found to give lower costs, on average, and slower growth than the conventional GSAT heuristic method.
Abstract: The discrete formulation of adiabatic quantum computing is compared with other search methods, classical and quantum, for random satisfiability (SAT) problems. With the number of steps growing only as the cube of the number of variables, the adiabatic method gives solution probabilities close to 1 for problem sizes feasible to evaluate via simulation on current computers. However, for these sizes the minimum energy gaps of most instances are fairly large, so the good performance scaling seen for small problems may not reflect asymptotic behavior where costs are dominated by tiny gaps. Moreover, the resulting search costs are much higher than for other methods. Variants of the quantum algorithm that do not match the adiabatic limit give lower costs, on average, and slower growth than the conventional GSAT heuristic method.
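For readers unfamiliar with the classical baseline, a minimal GSAT sketch is given below. The parameters and the brute-force clause scoring are illustrative; the paper's comparison uses its own problem ensembles and cost measures.

```python
import random

def gsat(clauses, n_vars, max_tries=50, max_flips=1000):
    """GSAT heuristic: greedy local search over truth assignments, restarting
    from a random assignment after max_flips flips."""
    def n_sat(assign):
        return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)
    for _ in range(max_tries):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if n_sat(assign) == len(clauses):
                return assign
            # flip the variable whose flip satisfies the most clauses
            # (recomputed from scratch here for clarity, not speed)
            best_v, best_s = None, -1
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]
                s = n_sat(assign)
                assign[v] = not assign[v]
                if s > best_s:
                    best_v, best_s = v, s
            assign[best_v] = not assign[best_v]
    return None

# Clauses as lists of signed literals: (x1 or not x2), (x2 or x3), (not x1 or x3)
print(gsat([[1, -2], [2, 3], [-1, 3]], 3))
```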

Journal ArticleDOI
TL;DR: In this article, the authors consider Langevin systems driven by general Levy noises rather than Wiener noise (white noise), studying the evolution of the probability density function, the steady-state behavior, and the attainability of Boltzmann-type equilibria.
Abstract: Langevin dynamics driven by random Wiener noise (“white noise”), and the resulting Fokker–Planck equation and Boltzmann equilibria are fundamental to the understanding of transport and relaxation. However, there is experimental and theoretical evidence that the use of the Gaussian Wiener noise as an underlying source of randomness in continuous time systems may not always be appropriate or justified. Rather, models incorporating general Levy noises, should be adopted. In this work we study Langevin systems driven by general Levy, rather than Wiener, noises. Various issues are addressed, including: (i) the evolution of the probability density function of the system's state; (ii) the system's steady state behavior; and, (iii) the attainability of equilibria of the Boltzmann type. Moreover, the issue of reverse engineering is introduced and investigated. Namely: how to design a Langevin system, subject to a given Levy noise, that would yield a pre-specified “target” steady state behavior. Results are complemented with a multitude of examples of Levy driven Langevin systems.
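A minimal numerical sketch of such a Lévy-driven Langevin system is given below, using an Euler scheme with SciPy's alpha-stable sampler. The force term, parameters, and scaling convention are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.stats import levy_stable

def levy_langevin(force, x0, alpha, dt, n_steps, noise_scale=1.0):
    """Euler scheme for dx = force(x) dt + dL_alpha, where dL_alpha are
    increments of a symmetric alpha-stable Levy process (alpha = 2 recovers
    the usual Wiener/Gaussian case up to a scale factor)."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    # stable increments over a step of length dt scale like dt**(1/alpha)
    dL = noise_scale * dt**(1.0 / alpha) * levy_stable.rvs(alpha, 0.0, size=n_steps)
    for k in range(n_steps):
        x[k + 1] = x[k] + force(x[k]) * dt + dL[k]
    return x

# Example: harmonic force F(x) = -x with Cauchy (alpha = 1) noise
path = levy_langevin(lambda x: -x, x0=0.0, alpha=1.0, dt=0.01, n_steps=5000)
print(path[-5:])
```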

Journal ArticleDOI
TL;DR: The quantum formalism is a "measurement" formalism describing certain macroscopic regularities, which, as discussed by the authors, can be regarded, and best be understood, as arising from Bohmian mechanics, which is what emerges from Schrödinger's equation for a system of particles when we merely insist that "particles" means particles.
Abstract: The quantum formalism is a "measurement" formalism, a phenomenological formalism describing certain macroscopic regularities. We argue that it can be regarded, and best be understood, as arising from Bohmian mechanics, which is what emerges from Schrödinger's equation for a system of particles when we merely insist that "particles" means particles. While distinctly non-Newtonian, Bohmian mechanics is a fully deterministic theory of particles in motion, a motion choreographed by the wave function. We find that a Bohmian universe, though deterministic, evolves in such a manner that an appearance of randomness emerges, precisely as described by the quantum formalism and given, for example, by $\rho = |\psi|^2$. A crucial ingredient in our analysis of the origin of this randomness is the notion of the effective wave function of a subsystem, a notion of interest in its own right and of relevance to any discussion of quantum theory. When the quantum formalism is regarded as arising in this way, the paradoxes and perplexities so often associated with (nonrelativistic) quantum theory simply evaporate.

Proceedings ArticleDOI
11 Oct 2003
TL;DR: An efficient deterministic algorithm is given which extracts almost-random bits from sources where n^{1/2 + γ} of the n bits are uniformly random and the rest are fixed in advance.
Abstract: We give an efficient deterministic algorithm which extracts Ω(n^{2γ}) almost-random bits from sources where n^{1/2 + γ} of the n bits are uniformly random and the rest are fixed in advance. This improves on previous constructions which required that at least n/2 of the bits be random. Our construction also gives explicit adaptive exposure-resilient functions and in turn adaptive all-or-nothing transforms. For sources where instead of bits the values are chosen from [d], for d > 2, we give an algorithm which extracts a constant fraction of the randomness. We also give bounds on extracting randomness for sources where the fixed bits can depend on the random bits.

Journal ArticleDOI
TL;DR: The benefit of the joint action of two methods is proven by the analysis of artificial sequences with the same main properties as the real time series to which the joint use of these two methods will be applied in future research work.
Abstract: The adoption of the Kolmogorov–Sinai (KS) entropy is becoming a popular research tool among physicists, especially when applied to a dynamical system fitting the conditions of validity of the Pesin theorem. The study of time series that are a manifestation of system dynamics whose rules are either unknown or too complex for a mathematical treatment is still a challenge, since the KS entropy is not computable, in general, in that case. Here we present a plan of action based on the joint use of two procedures, both related to the KS entropy, but compatible with computer implementation through fast and efficient programs. The former procedure, called compression algorithm sensitive to regularity (CASToRE), establishes the amount of order by the numerical evaluation of algorithmic compressibility. The latter, called complex analysis of sequences via scaling and randomness assessment (CASSANDRA), establishes the complexity degree through the numerical evaluation of the strength of an anomalous effect: the departure of the diffusion process generated by the observed fluctuations from ordinary Brownian motion. The CASSANDRA algorithm shares with CASToRE a connection with the Kolmogorov complexity. This makes both algorithms especially suitable to study the transition from dynamics to thermodynamics, and the case of non-stationary time series as well. The benefit of the joint action of these two methods is proven by the analysis of artificial sequences with the same main properties as the real time series to which the joint use of these two methods will be applied in future research work.
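The compressibility idea can be illustrated with off-the-shelf tools. The crude estimator below uses zlib as a stand-in and shares only the spirit of measuring order through algorithmic compressibility; it is not the authors' CASToRE algorithm.

```python
import random
import zlib

def compressibility(seq):
    """Ratio of compressed to raw length for a byte sequence: close to 1 for
    incompressible (random-looking) data, well below 1 for regular data."""
    raw = bytes(seq)
    return len(zlib.compress(raw, 9)) / len(raw)

print(compressibility([random.randrange(256) for _ in range(10000)]))   # near 1
print(compressibility([i % 7 for i in range(10000)]))                    # much less than 1
```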

Journal ArticleDOI
TL;DR: It is shown that random shortcuts can induce periodic synchronized spatiotemporal motions, even though all oscillators are chaotic when uncoupled, implying that topological randomness can tame spatiotemporal chaos.
Abstract: In this Letter, the effects of random shortcuts in an array of coupled nonlinear chaotic pendulums and their ability to control the dynamical behavior of the system are investigated. We show that random shortcuts can induce periodic synchronized spatiotemporal motions, even though all oscillators are chaotic when uncoupled. This process exhibits a nonmonotonic dependence on the density of shortcuts. Specifically, there is an optimal amount of random shortcuts, which can induce the most ordered motion characterized by the largest order parameter that is introduced to measure the spatiotemporal order. Our results imply that topological randomness can tame spatiotemporal chaos.

Journal ArticleDOI
TL;DR: Applying a set of randomness tests on the evolved CCA PRNGs, it is demonstrated that their randomness is better than that of 1-D CA PRNGs and can be comparable to that of 2-D CA PRNGs.
Abstract: Cellular automata (CA) has been used in pseudorandom number generation for over a decade. Recent studies show that two-dimensional (2-D) CA pseudorandom number generators (PRNGs) may generate better random sequences than conventional one-dimensional (1-D) CA PRNGs, but they are more complex to implement in hardware than 1-D CA PRNGs. In this paper, we propose a new class of 1-D CA - controllable cellular automata (CCA)-without much deviation from the structural simplicity of conventional 1-D CA. We first give a general definition of CCA and then introduce two types of CCA: CCA0 and CCA2. Our initial study shows that these two CCA PRNGs have better randomness quality than conventional 1-D CA PRNGs, but that their randomness is affected by their structures. To find good CCA0/CCA2 structures for pseudorandom number generation, we evolve them using evolutionary multiobjective optimization techniques. Three different algorithms are presented. One makes use of an aggregation function; the other two are based on the vector-evaluated genetic algorithm. Evolution results show that these three algorithms all perform well. Applying a set of randomness tests on the evolved CCA PRNGs, we demonstrate that their randomness is better than that of 1-D CA PRNGs and can be comparable to that of 2-D CA PRNGs.
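For contrast with the controllable CA proposed in the paper, the classic 1-D CA PRNG it improves upon can be sketched in a few lines: a rule-30 automaton whose centre cell is read off as the output bit stream. The width, seeding, and output rate here are illustrative choices, not the paper's construction.

```python
def rule30_bits(width=64, n_bits=128):
    """Pseudorandom bits from a one-dimensional rule-30 cellular automaton
    (a classic 1-D CA PRNG; not the controllable CA proposed in the paper):
    start from a single live cell and read the centre cell at each step."""
    cells = [0] * width
    cells[width // 2] = 1
    out = []
    for _ in range(n_bits):
        out.append(cells[width // 2])
        # rule 30: new cell = left XOR (centre OR right), with wrap-around
        cells = [cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return out

print(rule30_bits(n_bits=32))
```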

Book ChapterDOI
15 Dec 2003
TL;DR: It is shown that sharing randomness or entanglement is necessary for non-trivial protocols of non-interactive quantum perfect and statistical zero-knowledge proof systems, and the Graph Non-Automorphism problem is shown to have a non-interactive quantum perfect zero-knowledge proof system.
Abstract: This paper introduces quantum analogues of non-interactive perfect and statistical zero-knowledge proof systems. Similar to the classical cases, it is shown that sharing randomness or entanglement is necessary for non-trivial protocols of non-interactive quantum perfect and statistical zero-knowledge. It is also shown that, with sharing EPR pairs a priori, the complexity class resulting from non-interactive quantum perfect zero-knowledge proof systems of perfect completeness has a natural complete promise problem. Using our complete promise problem, the Graph Non-Automorphism problem is shown to have a non-interactive quantum perfect zero-knowledge proof system.

01 Jan 2003
TL;DR: A reduction of randomness required for Owen's random scrambling by using the notion of i-binomial property is considered, and it is concluded that all the results on the expected errors of the integration problem so far obtained with Owen's scrambling also hold with the left i- binomial scrambling.
Abstract: The computational complexity of the integration problem in terms of the expected error has recently been an important topic in Information-Based Complexity. In this setting, we assume some sample space of integration rules from which we randomly choose one. The most popular sample space is based on Owen's random scrambling scheme, whose theoretical advantage is the fast convergence rate for certain smooth functions. This paper considers a reduction of the randomness required for Owen's random scrambling by using the notion of the i-binomial property. We first establish a set of necessary and sufficient conditions for digital (0, s)-sequences to have the i-binomial property. Then, based on these conditions, the left and right i-binomial scramblings are defined. We show that Owen's key lemma (Lemma 4, SIAM J. Numer. Anal. 34 (1997) 1884) remains valid with the left i-binomial scrambling, and thereby conclude that all the results on the expected errors of the integration problem so far obtained with Owen's scrambling also hold with the left i-binomial scrambling.
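To fix ideas, base-2 Owen scrambling of a single coordinate can be written directly: each digit is flipped by a fresh random bit indexed by the preceding digits, and a whole digital net is scrambled by re-using the same dictionary of flips across its points. The sketch below only illustrates the scheme whose randomness the paper reduces; it is not the i-binomial scrambling itself, and the point set used is a toy choice.

```python
import random

def owen_scramble_base2(u, flips, n_digits=32, rng=random.Random(1)):
    """Base-2 Owen (nested uniform) scrambling of one coordinate u in [0, 1).
    `flips` maps a prefix of original digits to a random bit; sharing it across
    all points scrambles a digital net consistently."""
    orig, out = [], []
    x = u
    for _ in range(n_digits):
        x *= 2
        d = int(x)
        x -= d
        prefix = tuple(orig)
        if prefix not in flips:
            flips[prefix] = rng.randrange(2)   # one fresh bit per node of the binary tree
        out.append(d ^ flips[prefix])
        orig.append(d)
    return sum(b * 2.0 ** -(k + 1) for k, b in enumerate(out))

flips = {}
points = [i / 16 for i in range(16)]           # toy 1-D point set
print([round(owen_scramble_base2(p, flips), 4) for p in points])
```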

Book ChapterDOI
Markus Dichtl1
08 Sep 2003
TL;DR: This paper analyzes the generator's method of producing randomness and, as a consequence of the analysis, describes how, in principle, an attack on the generator can be executed.
Abstract: A hardware random number generator was described at CHES 2002 in [Tka03]. In this paper, we analyze its method of generating randomness and, as a consequence of the analysis, we describe how, in principle, an attack on the generator can be executed.


Journal ArticleDOI
TL;DR: In this article, the authors studied the asymptotic macroscopic properties of the mixed majority-minority game, where two types of heterogeneous adaptive agents, namely fundamentalists driven by differentiation and trend-followers driven by imitation, interact.
Abstract: We study the asymptotic macroscopic properties of the mixed majority–minority game, modelling a population in which two types of heterogeneous adaptive agents, namely 'fundamentalists' driven by differentiation and 'trend-followers' driven by imitation, interact. The presence of a fraction f of trend-followers is shown to induce a significant loss of informational efficiency with respect to a pure minority game; in particular, an efficient, unpredictable phase exists only for f < 1/2. We solve the model by means of an approximate static (replica) theory and by a direct dynamical (generating functional) technique. The two approaches coincide and match numerical results convincingly.

Book ChapterDOI
20 Jul 2003
TL;DR: In this article, the authors present the performance of the ideal observer under various signal-uncertainty paradigms with different parameters of simulated parallel-hole collimator imaging systems.
Abstract: We use the performance of the Bayesian ideal observer as a figure of merit for hardware optimization because this observer makes optimal use of signal-detection information. Due to the high dimensionality of certain integrals that need to be evaluated, it is difficult to compute the ideal observer test statistic, the likelihood ratio, when background variability is taken into account. Methods have been developed in our laboratory for performing this computation for fixed signals in random backgrounds. In this work, we extend these computational methods to compute the likelihood ratio in the case where both the backgrounds and the signals are random with known statistical properties. We are able to write the likelihood ratio as an integral over possible backgrounds and signals, and we have developed Markov-chain Monte Carlo (MCMC) techniques to estimate these high-dimensional integrals. We can use these results to quantify the degradation of the ideal-observer performance when signal uncertainties are present in addition to the randomness of the backgrounds. For background uncertainty, we use lumpy backgrounds. We present the performance of the ideal observer under various signal-uncertainty paradigms with different parameters of simulated parallel-hole collimator imaging systems. We are interested in any change in the rankings between different imaging systems under signal and background uncertainty compared to the background-uncertainty case. We also compare psychophysical studies to the performance of the ideal observer.

Journal ArticleDOI
TL;DR: In this paper, the authors reviewed the localization properties of electron states in the quantum Hall regime and provided a short review of the supersymmetric critical field theory, and the interplay between edge states and bulk localization properties was investigated.
Abstract: The localization properties of electron states in the quantum Hall regime are reviewed. The random Landau model, the random matrix model, the tight-binding Peierls model, and the network model of Chalker and Coddington are introduced. Descriptions in terms of equivalent tight-binding Hamiltonians, and the 2D Dirac model, are outlined. Evidence for the universal critical behavior of the localization length is summarized. A short review of the supersymmetric critical field theory is provided. The interplay between edge states and bulk localization properties is investigated. For a system with finite width and with short-range randomness, a sudden breakdown of the two-point conductance from ne^2/h to 0 (n an integer) is predicted if the localization length exceeds the distance between the edges.