
Showing papers on "Randomness published in 2009"


Posted Content
TL;DR: In this article, a modular framework for constructing randomized algorithms that compute partial matrix decompositions is presented: random sampling identifies a subspace that captures most of the action of a matrix, the input matrix is compressed to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization.
Abstract: Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis.

2,356 citations
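
Not from the paper itself, but a minimal NumPy sketch of the two-stage scheme the abstract above describes (random sampling to capture the range of the matrix, then a deterministic factorization of the compressed matrix). The function name, the oversampling amount, and the choice of a Gaussian test matrix are illustrative assumptions.

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, rng=None):
    """Rough sketch of a randomized partial SVD: sample, compress,
    then factor the small matrix deterministically."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Random sampling: probe the action of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    # Orthonormal basis for the sampled subspace.
    Q, _ = np.linalg.qr(Y)
    # Compress A to the subspace and factor the small matrix deterministically.
    B = Q.T @ A                                  # (k + oversample) x n
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k]

# Usage: approximate an exactly rank-80 matrix and check the relative error.
A = np.random.default_rng(0).standard_normal((500, 80)) @ \
    np.random.default_rng(1).standard_normal((80, 400))
U, s, Vt = randomized_low_rank(A, k=80)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```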


Journal ArticleDOI
TL;DR: It is demonstrated that a 512-byte SRAM fingerprint contains sufficient entropy to generate 128-bit true random numbers and that the generated numbers pass the NIST tests for runs, approximate entropy, and block frequency.
Abstract: Intermittently powered applications create a need for low-cost security and privacy in potentially hostile environments, supported by primitives including identification and random number generation. Our measurements show that power-up of SRAM produces a physical fingerprint. We propose a system of fingerprint extraction and random numbers in SRAM (FERNS) that harvests static identity and randomness from existing volatile CMOS memory without requiring any dedicated circuitry. The identity results from manufacture-time physically random device threshold voltage mismatch, and the random numbers result from runtime physically random noise. We use experimental data from high-performance SRAM chips and the embedded SRAM of the WISP UHF RFID tag to validate the principles behind FERNS. For the SRAM chip, we demonstrate that 8-byte fingerprints can uniquely identify circuits among a population of 5,120 instances and extrapolate that 24-byte fingerprints would uniquely identify all instances ever produced. Using a smaller population, we demonstrate similar identifying ability from the embedded SRAM. In addition to identification, we show that SRAM fingerprints capture noise, enabling true random number generation. We demonstrate that a 512-byte SRAM fingerprint contains sufficient entropy to generate 128-bit true random numbers and that the generated numbers pass the NIST tests for runs, approximate entropy, and block frequency.

846 citations
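
FERNS itself is a hardware technique; the toy simulation below only illustrates the two roles the abstract assigns to SRAM power-up state: stable cells give an identifying fingerprint, noisy cells give entropy. The bias model, flip probability, number of reads, and the use of SHA-256 as an entropy condenser are all invented for illustration and are not the paper's extraction procedure.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(7)

def powerup_read(preferred, flip_prob=0.05):
    """Simulate one SRAM power-up: each cell has a manufacture-time preferred
    value, but noisy cells occasionally flip at runtime."""
    flips = rng.random(preferred.size) < flip_prob
    return preferred ^ flips

# Hypothetical 512-byte region (4096 cells) with fixed preferred values.
preferred = rng.integers(0, 2, size=512 * 8).astype(bool)

reads = np.stack([powerup_read(preferred) for _ in range(9)])

# Identification: cells that agree across all reads form a stable fingerprint.
stable = (reads == reads[0]).all(axis=0)
fingerprint = np.packbits(reads[0][stable])

# Randomness: condense one noisy power-up read with a hash to get 128 bits.
raw = np.packbits(reads[-1]).tobytes()
random_128 = hashlib.sha256(raw).digest()[:16]

print(f"{stable.sum()} stable cells, "
      f"fingerprint[:8] = {fingerprint[:8].tobytes().hex()}, "
      f"random key = {random_128.hex()}")
```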


Book
01 Jan 2009
TL;DR: This book provides a very readable introduction to the exciting interface of computability and randomness for graduates and researchers in computability theory, theoretical computer science, and measure theory.
Abstract: The interplay between computability and randomness has been an active area of research in recent years, reflected by ample funding in the USA, numerous workshops, and publications on the subject. The complexity and the randomness aspect of a set of natural numbers are closely related. Traditionally, computability theory is concerned with the complexity aspect. However, computability theoretic tools can also be used to introduce mathematical counterparts for the intuitive notion of randomness of a set. Recent research shows that, conversely, concepts and methods originating from randomness enrich computability theory. Covering the basics as well as recent research results, this book provides a very readable introduction to the exciting interface of computability and randomness for graduates and researchers in computability theory, theoretical computer science, and measure theory.

638 citations


Journal ArticleDOI
TL;DR: The horizontal visibility algorithm, as presented in this paper, is a geometrically simpler and analytically solvable version of the authors' former visibility algorithm, focusing on the mapping of random series (series of independent and identically distributed random variables).

Abstract: The visibility algorithm was recently introduced as a mapping between time series and complex networks. This procedure allows us to apply methods of complex network theory for characterizing time series. In this work we present the horizontal visibility algorithm, a geometrically simpler and analytically solvable version of our former algorithm, focusing on the mapping of random series (series of independent and identically distributed random variables). After presenting some properties of the algorithm, we present exact results on the topological properties of graphs associated with random series, namely, the degree distribution, the clustering coefficient, and the mean path length. We show that the horizontal visibility algorithm stands as a simple method to discriminate randomness in time series, since any random series maps to a graph with an exponential degree distribution of the form $P(k)=\frac{1}{3}\left(\frac{2}{3}\right)^{k-2}$, independent of the probability distribution from which the series was generated. Accordingly, visibility graphs with other $P(k)$ are related to nonrandom series. Numerical simulations confirm the accuracy of the theorems for finite series. In a second part, we show that the method is able to distinguish chaotic series from independent and identically distributed (i.i.d.) series, studying the following situations: (i) noise-free low-dimensional chaotic series, (ii) low-dimensional noisy chaotic series, even in the presence of large amounts of noise, and (iii) high-dimensional chaotic series (coupled map lattices), without the need for additional techniques such as surrogate data or noise reduction methods. Finally, heuristic arguments are given to explain the topological properties of chaotic series, and several sequences that are conjectured to be random are analyzed.

547 citations
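
A short sketch (generic Python, not the authors' code) of the horizontal visibility mapping and the exponential-degree check: two points i < j are linked when every value strictly between them lies below both, and for an i.i.d. series the empirical degree distribution should approach $P(k)=\frac{1}{3}\left(\frac{2}{3}\right)^{k-2}$.

```python
import numpy as np
from collections import Counter

def horizontal_visibility_degrees(x):
    """Node degrees of the horizontal visibility graph of series x:
    i and j (i < j) are linked iff x[k] < min(x[i], x[j]) for all i < k < j."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        top = -np.inf                      # running max of values between i and j
        for j in range(i + 1, n):
            if top < min(x[i], x[j]):      # nothing in between blocks the view
                deg[i] += 1
                deg[j] += 1
            top = max(top, x[j])
            if x[j] >= x[i]:               # view to the right of i is now blocked
                break
    return deg

rng = np.random.default_rng(0)
x = rng.random(5000)                       # i.i.d. uniform series
counts = Counter(horizontal_visibility_degrees(x))
for k in range(2, 9):
    theory = (1 / 3) * (2 / 3) ** (k - 2)
    print(f"k={k}  empirical={counts[k] / len(x):.4f}  theory={theory:.4f}")
```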


Journal IssueDOI
TL;DR: This paper proposes a new cognitive model, the cloud model, which can synthetically describe the randomness and fuzziness of concepts, implement the uncertain transformation between a qualitative concept and its quantitative instantiations, and may therefore be better adapted to describing the uncertainty of linguistic concepts.

Abstract: Randomness and fuzziness are the two most important uncertainties inherent in human cognition, and they have attracted great attention in artificial intelligence research. In this paper, regarding linguistic terms or concepts as the basic units of human cognition, we propose a new cognitive model, the cloud model, which can synthetically describe the randomness and fuzziness of concepts and implement the uncertain transformation between a qualitative concept and its quantitative instantiations. Furthermore, by analyzing in detail the statistical properties of the normal cloud model, an important kind of cloud model based on the normal distribution and the Gaussian membership function, we show that the normal cloud model can not only be viewed as a generalized normal distribution with weak constraints but also avoid the flaw of fuzzy sets, namely quantifying the membership degree of an element as a precise value between 0 and 1, and therefore may be better adapted to describing the uncertainty of linguistic concepts. Finally, two demonstration examples, on the fractal evolution of plants and on network topologies based on cloud models, are given to illustrate the promising applications of cloud models in more complex knowledge representation tasks. © 2009 Wiley Periodicals, Inc.

410 citations
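
The qualitative-to-quantitative transformation described above is usually implemented with a "forward normal cloud generator"; the sketch below follows that commonly cited recipe (with Ex = expectation, En = entropy, He = hyper-entropy) and is an illustration under those naming assumptions, not the authors' exact procedure.

```python
import numpy as np

def forward_normal_cloud(Ex, En, He, n_drops=1000, rng=None):
    """Generate cloud drops (x, membership) for a qualitative concept
    described by expectation Ex, entropy En and hyper-entropy He."""
    rng = np.random.default_rng(rng)
    # Randomness of the concept's granularity: the entropy itself is randomized.
    En_prime = rng.normal(En, He, size=n_drops)
    # Quantitative instantiations of the concept.
    x = rng.normal(Ex, np.abs(En_prime))
    # Fuzziness: Gaussian membership of each instantiation in the concept.
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))
    return x, mu

# Example: five drops for the linguistic concept "about 25 degrees".
drops, membership = forward_normal_cloud(Ex=25.0, En=2.0, He=0.3, n_drops=5)
print(list(zip(drops.round(2), membership.round(3))))
```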


Journal ArticleDOI
TL;DR: In this paper, the authors focus on four approaches that give rise to nonlocal representations of advective and dispersive transport of nonreactive tracers in randomly heterogeneous porous or fractured continua.

348 citations


Journal ArticleDOI
TL;DR: An alternative method that can identify random similarity within multiple sequence alignments (MSAs) based on Monte Carlo resampling within a sliding window is proposed and appears to be a powerful tool to identify possible biases of tree reconstructions or gene identification.
Abstract: Random similarity of sequences or sequence sections can impede phylogenetic analyses or the identification of gene homologies. Additionally, randomly similar sequences or ambiguously aligned sequence sections can negatively interfere with the estimation of substitution model parameters. Phylogenomic studies have shown that biases in model estimation and tree reconstructions do not disappear even with large data sets. In fact, these biases can become pronounced with more data. It is therefore important to identify possible random similarity within sequence alignments in advance of model estimation and tree reconstructions. Different approaches have already been suggested to identify and treat problematic alignment sections. We propose an alternative method that can identify random similarity within multiple sequence alignments (MSAs) based on Monte Carlo resampling within a sliding window. The method infers similarity profiles from pairwise sequence comparisons and subsequently calculates a consensus profile. This consensus profile represents a summary of all calculated single similarity profiles. In consequence, consensus profiles identify dominating patterns of nonrandom similarity or randomness within sections of MSAs. We show that the approach clearly identifies randomness in simulated and real data. After the exclusion of putative random sections, node support drastically improves in tree reconstructions of both data sets. The method thus appears to be a powerful tool for identifying possible biases in tree reconstruction or gene identification. The method is currently restricted to nucleotide data but will be extended to protein data in the near future.

332 citations
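
The core idea above, scoring a sliding window against a Monte Carlo null obtained by resampling, can be sketched generically; the scoring rule, window size, and p-value threshold below are illustrative stand-ins, not the consensus-profile computation the paper actually uses.

```python
import random

def window_score(a, b):
    """Simple identity score between two aligned sequence windows."""
    return sum(x == y for x, y in zip(a, b) if x != '-' and y != '-')

def random_similarity_profile(seq1, seq2, win=20, step=5, n_resample=200,
                              alpha=0.05, rng=random.Random(1)):
    """For each window, test whether the observed similarity exceeds what
    Monte Carlo resampling (shuffling the window residues) would give."""
    profile = []
    for start in range(0, len(seq1) - win + 1, step):
        a, b = seq1[start:start + win], seq2[start:start + win]
        observed = window_score(a, b)
        null = []
        for _ in range(n_resample):
            shuffled = list(b)
            rng.shuffle(shuffled)
            null.append(window_score(a, shuffled))
        # p-value: fraction of resampled scores at least as high as observed.
        p = sum(s >= observed for s in null) / n_resample
        profile.append((start, observed, p, p < alpha))   # False => random-like
    return profile

# Toy example: a conserved block followed by unrelated random sequence.
s1 = "ACGTACGTACGTACGTACGT" + "".join(random.choice("ACGT") for _ in range(40))
s2 = "ACGTACGTACGTACGTACGT" + "".join(random.choice("ACGT") for _ in range(40))
for start, obs, p, nonrandom in random_similarity_profile(s1, s2):
    print(f"window {start:3d}: score={obs:2d}  p={p:.3f}  nonrandom={nonrandom}")
```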


Journal ArticleDOI
TL;DR: A new, self-contained construction of randomness extractors is given that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al.
Abstract: We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma et al. [2007] required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy [2005].Our expanders can be interpreted as near-optimal “randomness condensers,” that reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new, self-contained construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. [2003] and improving upon it when the error parameter is small (e.g., 1/poly(n)).

304 citations


Journal ArticleDOI
TL;DR: In this article, the applicability of random-matrix theory to nuclear spectra is reviewed, and it is shown that quantum chaos is a generic property of nuclear spectra, except for the ground-state regions of strongly deformed nuclei.
Abstract: Evidence for the applicability of random-matrix theory to nuclear spectra is reviewed. In analogy to systems with few degrees of freedom, one speaks of chaos (more accurately, quantum chaos) in nuclei whenever random-matrix predictions are fulfilled. An introduction into the basic concepts of random-matrix theory is followed by a survey over the extant experimental information on spectral fluctuations, including a discussion of the violation of a symmetry or invariance property. Chaos in nuclear models is discussed for the spherical shell model, for the deformed shell model, and for the interacting boson model. Evidence for chaos also comes from random-matrix ensembles patterned after the shell model such as the embedded two-body ensemble, the two-body random ensemble, and the constrained ensembles. All this evidence points to the fact that chaos is a generic property of nuclear spectra, except for the ground-state regions of strongly deformed nuclei.

231 citations
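
The random-matrix prediction most often compared with nuclear level schemes is the nearest-neighbor spacing distribution; the sketch below samples generic GOE matrices (not nuclear data) and compares the spacing histogram with the Wigner surmise $p(s)=\frac{\pi}{2}\,s\,e^{-\pi s^2/4}$.

```python
import numpy as np

def goe_spacings(n=200, n_matrices=200, rng=None):
    """Nearest-neighbor level spacings (unit mean) from GOE random matrices."""
    rng = np.random.default_rng(rng)
    spacings = []
    for _ in range(n_matrices):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2                          # real symmetric (GOE) matrix
        ev = np.linalg.eigvalsh(h)
        s = np.diff(ev[n // 2 - 20: n // 2 + 20])  # band center: near-constant density
        spacings.extend(s / s.mean())              # crude local unfolding
    return np.asarray(spacings)

s = goe_spacings()
hist, edges = np.histogram(s, bins=15, range=(0.0, 3.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
wigner = (np.pi / 2) * centers * np.exp(-np.pi * centers ** 2 / 4)  # Wigner surmise
for c, emp, w in zip(centers, hist, wigner):
    print(f"s = {c:4.2f}   empirical = {emp:5.3f}   Wigner surmise = {w:5.3f}")
```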


Journal Article
TL;DR: A novel pseudo random bit generator (PRBG) based on two chaotic logistic maps running side-by-side and starting from random independent initial conditions is proposed, and the suitability of the logistic map is discussed by highlighting some of its interesting statistical properties, which make it a perfect choice for such random bit generation.
Abstract: During the last decade and a half, an interesting relationship between chaos and cryptography has been developed, according to which many properties of chaotic systems, such as ergodicity, sensitivity to initial conditions/system parameters, mixing, deterministic dynamics, and structural complexity, can be considered analogous to the confusion, diffusion with a small change in the plaintext/secret key, diffusion with a small change within one block of the plaintext, deterministic pseudo-randomness, and algorithmic complexity properties of traditional cryptosystems. As a result of this close relationship, several chaos-based cryptosystems have been put forward since 1990. As one stage in the development of chaotic stream ciphers, the application of discrete chaotic dynamical systems to pseudo-random bit generation has recently been widely studied. In this communication, we propose a novel pseudo random bit generator (PRBG) based on two chaotic logistic maps running side by side and starting from random independent initial conditions. The pseudo random bit sequence is generated by comparing the outputs of the two chaotic logistic maps. We discuss the suitability of the logistic map by highlighting some of its interesting statistical properties, which make it a perfect choice for such random bit generation. Finally, we present the detailed results of statistical testing of the generated bit sequences with the most stringent tests of randomness, the NIST suite, to detect the specific characteristics expected of truly random sequences. Summary (translated from Slovenian): A pseudo-random bit generator based on a chaotic approach is presented.

203 citations
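
A compact sketch of the kind of generator described above: two logistic maps iterated side by side from independent seeds, with a bit emitted by comparing the two states at each step. The comparison rule, the parameter r, and the burn-in length here are illustrative choices, not necessarily those of the paper.

```python
def logistic_prbg(x1, x2, r=3.9999, n_bits=128, burn_in=1000):
    """Pseudo-random bits from two logistic maps run side by side.
    A bit is emitted by comparing the two trajectories at each step."""
    for _ in range(burn_in):                 # discard transients
        x1, x2 = r * x1 * (1 - x1), r * x2 * (1 - x2)
    bits = []
    for _ in range(n_bits):
        x1, x2 = r * x1 * (1 - x1), r * x2 * (1 - x2)
        bits.append(1 if x1 > x2 else 0)
    return bits

bits = logistic_prbg(x1=0.2718281828, x2=0.3141592653)
print("".join(map(str, bits)))
print("fraction of ones:", sum(bits) / len(bits))
```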


Journal ArticleDOI
TL;DR: A Coulomb gas method is presented to calculate analytically the probability of rare events in which the maximum eigenvalue of a random matrix is much larger than its typical value, with explicit results for the Wishart and Gaussian ensembles.
Abstract: We present a Coulomb gas method to calculate analytically the probability of rare events where the maximum eigenvalue of a random matrix is much larger than its typical value. The large deviation function that characterizes this probability is computed explicitly for Wishart and Gaussian ensembles. The method is general and applies to other related problems, e.g., the joint large deviation function for large fluctuations of top eigenvalues. Our results are relevant to widely employed data compression techniques, namely, the principal components analysis. Analytical predictions are verified by extensive numerical simulations.
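
The paper's large-deviation results are analytic; as a purely numerical companion (not the Coulomb gas calculation itself), the sketch below samples GOE matrices and shows how quickly the empirical probability of an atypically large top eigenvalue collapses once the threshold moves past the spectral edge. The matrix size and thresholds are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples = 100, 2000
edge = np.sqrt(2 * n)                 # semicircle edge for this normalization

lam_max = np.empty(n_samples)
for i in range(n_samples):
    a = rng.standard_normal((n, n))
    h = (a + a.T) / 2                 # GOE matrix
    lam_max[i] = np.linalg.eigvalsh(h)[-1]

# The typical value sits near the edge; excursions far beyond it become
# exponentially rare in n, which is what the large-deviation function quantifies.
for factor in (1.00, 1.02, 1.05, 1.10):
    frac = (lam_max > factor * edge).mean()
    print(f"P(lambda_max > {factor:.2f} * edge) ~ {frac:.4f}")
```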

Journal ArticleDOI
Robert R. Inman1
TL;DR: The nature of randomness actually found in two industrial manufacturing systems is exhibited and the validity of two common assumptions regarding this randomness in automotive manufacturing is assessed.
Abstract: This paper presents actual data (processing times, interarrival times, cycles-between-failures, and time-to-repair) from two automotive body welding lines. The purpose is twofold. First, to help researchers focus their work on realistic problems, we exhibit the nature of randomness actually found in two industrial manufacturing systems and provide a data source for realistic probability distributions. Second, we assess the validity of two common assumptions regarding this randomness in automotive manufacturing. Many queueing network models assume that certain random variables are independent and exponentially distributed. Though often reasonable, the primary motivation for the independence and exponentiality assumptions is mathematical tractability.
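
The exponentiality assumption the authors assess can be probed with a very small amount of code: an exponential distribution has a coefficient of variation (std/mean) equal to 1, so comparing the sample CV with 1 is a quick first check. The data below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

def coefficient_of_variation(samples):
    """CV = std / mean; an exponential distribution has CV = 1."""
    samples = np.asarray(samples, dtype=float)
    return samples.std(ddof=1) / samples.mean()

rng = np.random.default_rng(42)
exponential_like = rng.exponential(scale=60.0, size=500)       # e.g. cycles between failures
tightly_clustered = rng.normal(loc=60.0, scale=5.0, size=500)  # e.g. automated processing times

print("CV (exponential-like): ", round(coefficient_of_variation(exponential_like), 2))
print("CV (tightly clustered):", round(coefficient_of_variation(tightly_clustered), 2))
```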

Journal ArticleDOI
TL;DR: An efficient random number generator based on the randomness present in photon emission and detection is presented, using a single-photon counter and FPGA-based data processing for a cost-efficient and convenient implementation.
Abstract: We present an efficient random number generator based on the randomness present in photon emission and detection. The interval between successive photons from a light source with Poissonian statistics is separated into individual time bins, which are then used to create several random bits per detection event. Using a single-photon counter and FPGA-based data processing allows for a cost-efficient and convenient implementation that outputs data at rates of roughly 40 Mbit/s.
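
A software-only illustration of the time-binning idea (the actual device uses a single-photon counter and an FPGA): photon detections are simulated as a Poisson process, each inter-detection interval is digitized into fine time bins, and the low-order bits of the bin count are kept. The rate, resolution, and bits-per-event values are made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

mean_interval_ns = 1000.0      # hypothetical mean time between photon detections
clock_ns = 1.0                 # hypothetical timing resolution
bits_per_event = 4             # keep the 4 least-significant bits of each interval

# Simulate Poissonian photon detection: exponential inter-arrival times.
intervals_ns = rng.exponential(mean_interval_ns, size=100_000)

# Digitize each interval into clock bins and keep its low-order bits; when the
# mean interval spans many bins, these bits are close to uniform.
counts = np.floor(intervals_ns / clock_ns).astype(np.int64)
raw_bits = ((counts[:, None] >> np.arange(bits_per_event)) & 1).ravel()

print("bits generated: ", raw_bits.size)
print("fraction of ones:", raw_bits.mean())   # should be close to 0.5
```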

Journal ArticleDOI
TL;DR: In this article, the authors review studies of entanglement entropy in systems with quenched randomness, concentrating on universal behavior at strongly random quantum critical points, and provide insight into the quantum criticality of these systems and an understanding of their relationship to non-random ('pure') quantum criticalness.
Abstract: We review studies of entanglement entropy in systems with quenched randomness, concentrating on universal behavior at strongly random quantum critical points. The disorder-averaged entanglement entropy provides insight into the quantum criticality of these systems and an understanding of their relationship to non-random ('pure') quantum criticality. The entanglement near many such critical points in one dimension shows a logarithmic divergence in subsystem size, similar to that in the pure case but with a different universal coefficient. Such universal coefficients are examples of universal critical amplitudes in a random system. Possible measurements are reviewed along with the one-particle entanglement scaling at certain Anderson localization transitions. We also comment briefly on higher dimensions and challenges for the future.

Book ChapterDOI
02 Dec 2009
TL;DR: In this paper, the authors propose hedged public-key encryption schemes, which achieve IND-CPA security when the randomness used is of high quality but, when it is not, rather than breaking completely, achieve a weaker yet still useful notion of security called IND-CDA.
Abstract: Public-key encryption schemes rely for their IND-CPA security on per-message fresh randomness. In practice, randomness may be of poor quality for a variety of reasons, leading to failure of the schemes. Expecting the systems to improve is unrealistic. What we show in this paper is that we can, instead, improve the cryptography to offset the lack of possible randomness. We provide public-key encryption schemes that achieve IND-CPA security when the randomness they use is of high quality, but, when the latter is not the case, rather than breaking completely, they achieve a weaker but still useful notion of security that we call IND-CDA. This hedged public-key encryption provides the best possible security guarantees in the face of bad randomness. We provide simple RO-based ways to make in-practice IND-CPA schemes hedge secure with minimal software changes. We also provide non-RO model schemes relying on lossy trapdoor functions (LTDFs) and techniques from deterministic encryption. They achieve adaptive security by establishing and exploiting the anonymity of LTDFs which we believe is of independent interest.
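
The RO-based hedging idea can be conveyed in a few lines: rather than handing possibly weak coins directly to a randomized encryption routine, hash the coins together with the public key and the message and use the digest as the coins. Everything below (the toy `encrypt` stand-in, the domain-separation string, the key and message bytes) is a hypothetical sketch of that single step, not the paper's schemes.

```python
import hashlib
import os

def hedged_coins(randomness: bytes, public_key: bytes, message: bytes) -> bytes:
    """Derive encryption coins from (possibly weak) randomness, the public key
    and the message, modeling the hash as a random oracle."""
    return hashlib.sha256(b"hedge|" + randomness + b"|" + public_key + b"|" + message).digest()

def hedged_encrypt(encrypt, public_key, message, randomness=None):
    """Wrap any randomized encryption routine encrypt(pk, m, coins) so that
    bad coins degrade security gracefully instead of breaking it."""
    randomness = os.urandom(32) if randomness is None else randomness
    return encrypt(public_key, message, hedged_coins(randomness, public_key, message))

# Toy stand-in for a real randomized public-key scheme (illustration only).
toy_encrypt = lambda pk, m, coins: hashlib.sha256(pk + m + coins).hexdigest()
print(hedged_encrypt(toy_encrypt, b"pk-bytes", b"attack at dawn", randomness=b"\x00" * 32))
```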

Journal ArticleDOI
TL;DR: This work revisits Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism allowing easy derivation of all the quantities of interest in an elegant, unified way and shows that the unique optimal policy can be obtained by solving a simple linear system of equations.
Abstract: This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost---nothing particular up to now---you could use a standard shortest-path algorithm. Suppose, however, that you want to avoid pure deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability in the routing policy (i.e., the routing policy is randomized). This problem, which will be called the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Then, necessary conditions to compute the optimal randomized policy---minimizing the expected routing cost---are derived. Iterating these necessary conditions, reminiscent of Bellman's value iteration equations, allows computing an optimal policy, that is, a set of transition probabilities in each node. Interestingly and surprisingly enough, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model (1996), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism allowing easy derivation of all the quantities of interest in an elegant, unified way. For instance, it is shown that the unique optimal policy can be obtained by solving a simple linear system of equations. This second model is therefore more convincing because of its computational efficiency and soundness. Finally, simulation results obtained on simple, illustrative examples show that the models behave as expected.
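
The cost/entropy tradeoff at the heart of the randomized shortest-path problem can be conveyed with a generic entropy-regularized ("soft") Bellman recursion; this sketch is in the spirit of the model rather than the paper's exact necessary conditions or its linear-system solution, and the graph, costs, and temperature values are invented.

```python
import numpy as np

def randomized_policy(costs, dest, theta=1.0, n_sweeps=100):
    """Soft value iteration on a graph given as {node: {successor: edge_cost}}.
    Returns soft values and a randomized routing policy: large theta gives
    near-deterministic shortest-path routing, small theta near-uniform exploration."""
    nodes = list(costs)
    V = {i: 0.0 for i in nodes}                      # zero init; a few sweeps converge here
    for _ in range(n_sweeps):
        for i in nodes:
            if i == dest or not costs[i]:
                continue
            z = sum(np.exp(-theta * (c + V[j])) for j, c in costs[i].items())
            V[i] = -np.log(z) / theta                # soft minimum over successors
    policy = {}
    for i in nodes:
        if i == dest or not costs[i]:
            continue
        w = {j: np.exp(-theta * (c + V[j])) for j, c in costs[i].items()}
        total = sum(w.values())
        policy[i] = {j: weight / total for j, weight in w.items()}
    return V, policy

# Two routes from 'a' to 'd': a->b->d costs 3.0, a->c->d costs 2.5.
graph = {'a': {'b': 1.0, 'c': 2.0}, 'b': {'d': 2.0}, 'c': {'d': 0.5}, 'd': {}}
for theta in (0.2, 5.0):
    _, policy = randomized_policy(graph, dest='d', theta=theta)
    print(f"theta={theta}:  P(a->b)={policy['a']['b']:.2f}  P(a->c)={policy['a']['c']:.2f}")
```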

01 Jan 2009
TL;DR: In this article, the authors argue that uncertainty is an intrinsic property of nature, that causality implies dependence of natural processes in time, thus suggesting predictability, but even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon.
Abstract: According to the traditional notion of randomness and uncertainty, natural phenomena are separated into two mutually exclusive components, random (or stochastic) and deterministic. Within this dichotomous logic, the deterministic part supposedly represents cause-effect relationships and, thus, is physics and science (the “good”), whereas randomness has little relationship with science and no relationship with understanding (the “evil”). We argue that such views should be reconsidered by admitting that uncertainty is an intrinsic property of nature, that causality implies dependence of natural processes in time, thus suggesting predictability, but even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon. On these premises it is possible to shape a consistent stochastic representation of natural processes, in which predictability (suggested by deterministic laws) and unpredictability (randomness) coexist and are not separable or additive components. Deciding which of the two dominates is simply a matter of specifying the time horizon of the prediction. Long horizons of prediction are inevitably associated with high uncertainty, whose quantification relies on understanding the long-term stochastic properties of the processes.

Journal ArticleDOI
TL;DR: The proposed adaptive approach is employed to quantify the effect of uncertainty in input parameters on the performance of micro-electromechanical systems (MEMS) and resolves the pull-in instability in MEMS switches.

Proceedings ArticleDOI
25 Oct 2009
TL;DR: In this article, the authors extend the method of multiplicities to obtain bounds on the size of Kakeya sets that are tight to within a 2 + o(1) factor.
Abstract: We extend the "method of multiplicities" to get the following results, of interest in combinatorics and randomness extraction. (A) We show that every Kakeya set (a set of points that contains a line in every direction) in $\mathbb{F}_q^n$ must be of size at least $q^n/2^n$. This bound is tight to within a $2 + o(1)$ factor for every $n$ as $q \to \infty$, compared to previous bounds that were off by exponential factors in $n$. (B) We give improved randomness extractors and "randomness mergers". Mergers are seeded functions that take as input $\Lambda$ (possibly correlated) random variables in $\{0,1\}^N$ and a short random seed and output a single random variable in $\{0,1\}^N$ that is statistically close to having entropy $(1-\delta) \cdot N$ when one of the $\Lambda$ input variables is distributed uniformly. The seed we require is only $(1/\delta)\cdot \log \Lambda$ bits long, which significantly improves upon previous constructions of mergers. (C) Using our new mergers, we show how to construct randomness extractors that use logarithmic length seeds while extracting a $1 - o(1)$ fraction of the min-entropy of the source. The "method of multiplicities", as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat low degree interpolating polynomials that vanish on every point in the subset with high multiplicity. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then used simple bounds on the number of zeroes to complete the analysis. Our augmentation to this technique is that we prove, under appropriate conditions, that the interpolating polynomial vanishes with high multiplicity outside the set. This novelty leads to significantly tighter analyses.

Journal ArticleDOI
TL;DR: This article focuses on one important dimension of this issue, fuzzy random variables (FRVs), to introduce IME readers to FRVs and to illustrate how naturally compatible and complementary randomness and fuzziness are.
Abstract: There are two important sources of uncertainty: randomness and fuzziness. Randomness models the stochastic variability of all possible outcomes of a situation, and fuzziness relates to the unsharp boundaries of the parameters of the model. In this sense, randomness is largely an instrument of a normative analysis that focuses on the future, while fuzziness is more an instrument of a descriptive analysis reflecting the past and its implications. Clearly, randomness and fuzziness are complementary, and so a natural question is how fuzzy variables could interact with the type of random variables found in actuarial science. This article focuses on one important dimension of this issue, fuzzy random variables (FRVs). The goal is to introduce IME readers to FRVs and to illustrate how naturally compatible and complementary randomness and fuzziness are.

Proceedings Article
23 Mar 2009
TL;DR: Theoretical performance bounds associated with using multi-antenna communications are provided and two practical methods for generating secret keys exploiting the increased randomness are proposed.
Abstract: An emerging area of research in wireless communications is the generation of secret encryption keys based on the shared (or common) randomness of the wireless channel between two legitimate nodes in a communication network. However, to date, little work has appeared on methods to use the increased randomness available when the network nodes have multiple antennas. This paper provides theoretical performance bounds associated with using multi-antenna communications and proposes two practical methods for generating secret keys exploiting the increased randomness. Performance simulations reveal the efficiency of the methods.
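
A toy numerical picture of the shared-channel-randomness idea (neither of the paper's two proposed methods): both nodes observe noisy but highly correlated estimates of the same reciprocal channel gains across many antenna pairs, quantize the sign, and keep only coefficients far from the threshold. The coefficient count, noise level, and guard band are invented, and index reconciliation over a public channel is glossed over.

```python
import numpy as np

rng = np.random.default_rng(3)

n_coeffs = 256        # channel coefficients probed across antenna pairs / time
noise_std = 0.1       # channel-estimation noise at each node
guard = 0.3           # discard coefficients too close to the decision threshold

# Reciprocal channel: both nodes see the same underlying gains plus local noise.
h = rng.standard_normal(n_coeffs)
obs_alice = h + noise_std * rng.standard_normal(n_coeffs)
obs_bob = h + noise_std * rng.standard_normal(n_coeffs)

# Sign quantization with a guard band; in practice the kept indices would be
# agreed upon over a public channel, which is simplified away here.
keep = (np.abs(obs_alice) > guard) & (np.abs(obs_bob) > guard)
key_alice = (obs_alice[keep] > 0).astype(int)
key_bob = (obs_bob[keep] > 0).astype(int)

print("bits kept:", keep.sum(), "  disagreement rate:", (key_alice != key_bob).mean())
```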

Journal ArticleDOI
Wei Wei1, Hong Guo1
TL;DR: This method is proved to be bias free in randomness generation, provided that the single-photon detections are mutually independent, and it has the advantage of fast random bit generation, since no postprocessing is needed.

Abstract: We propose what we believe to be a new approach to nondeterministic random-number generation. The randomness originating from the uncorrelated nature of consecutive laser pulses with Poissonian photon statistics, and from that of photon-number detections, is used to generate random bits, and the von Neumann correction method is used to extract the final random bits. This method is proved to be bias free in randomness generation, provided that the single-photon detections are mutually independent. Further, it has the advantage of fast random bit generation, since no postprocessing is needed. A true random-number generator based on this method is realized, and its randomness is tested and guaranteed using three statistical test batteries.
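
The von Neumann correction mentioned above is simple to state in code: raw bits are consumed in pairs, unequal pairs yield one output bit, equal pairs are discarded, and the output is unbiased as long as the raw bits are independent. The 60%-ones raw source below is a deliberate stand-in to make the correction visible, not a model of the optical setup.

```python
import numpy as np

def von_neumann_extract(raw_bits):
    """Unbias independent but biased bits: pair them up, output the first bit
    of each 01/10 pair, and discard 00/11 pairs."""
    out = []
    for b0, b1 in zip(raw_bits[0::2], raw_bits[1::2]):
        if b0 != b1:
            out.append(int(b0))
    return out

# Deliberately biased (but independent) raw bits, standing in for imperfect
# physical bits before correction.
rng = np.random.default_rng(0)
raw = (rng.random(200_000) < 0.6).astype(int)

corrected = von_neumann_extract(raw)
print("raw bias:", round(raw.mean(), 4),
      "  corrected bias:", round(float(np.mean(corrected)), 4),
      "  bits kept:", len(corrected))
```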

Journal ArticleDOI
TL;DR: A closed-form first-order perturbative solution to the problem of electromagnetic scattering from a layered structure with an arbitrary number of rough interfaces is presented, and a systematic approach that involves the use of matrix formalism and generalized reflection/transmission coefficients is employed to avoid the necessity of the cumbersome Green function formalism.
Abstract: A closed-form first-order perturbative solution to the problem of electromagnetic scattering from a layered structure with an arbitrary number of rough interfaces is presented in this paper. Following the classical scheme employed to deal with a rough surface, a perturbative expansion of the fields in the rough-interface layered structure is performed, assuming that roughness heights and slopes are small enough. In this manner, in the first-order approximation, the geometric randomness of the corrugated interfaces is translated into random current sheets imposed on the unperturbed (flat) interfaces and radiating in the unperturbed (flat boundaries) layered media. The scattered field is then represented as the sum of up- and down-going waves, and a systematic approach that involves the use of matrix formalism and generalized reflection/transmission coefficients is employed. This approach permits us to avoid the necessity of the cumbersome Green function formalism. The demonstration of the consistency of the presented solution is analytically provided, showing that the proposed solution reduces to the corresponding existing ones when the stratification geometry reduces to the simplified ones considered by the other authors.

Journal ArticleDOI
TL;DR: In this article, the authors explore the constraints on parameters required to produce light curves comparable to the observations and find that a tight relation between the size of the emitters, and the bulk and random Lorentz factors is needed and that the random LFR determines the variability.
Abstract: Randomly oriented relativistic emitters in a relativistically expanding shell provide an alternative to internal shocks as a mechanism for producing gamma-ray bursts' variable light curves with efficient conversion of energy to radiation. In this model, the relativistic outflow is broken into small emitters moving relativistically in the outflow's rest frame. Variability arises because an observer sees an emitter only when its velocity points toward him so that only a small fraction of the emitters is seen by a given observer. Significant relativistic random motion requires that a large fraction of the overall energy is converted to random kinetic energy and is maintained in this form. While it is not clear how this is achieved, we explore here, using two toy models, the constraints on parameters required to produce light curves comparable to the observations. We find that a tight relation between the size of the emitters, and the bulk and random Lorentz factors is needed and that the random Lorentz factor determines the variability. While both models successfully produce the observed variability there are several inconsistencies with other properties of the light curves. Most of which, but not all, might be resolved if the central engine is active for a long time, producing a number of shells, resembling to some extent the internal shocks model.

Journal ArticleDOI
TL;DR: It is shown that Information Theory quantifiers are suitable tools for detecting and quantifying noise-induced temporal correlations in stochastic resonance phenomena, and that both H and C display resonant features as a function of the noise intensity: for an optimal level of noise the entropy displays a minimum and the complexity a maximum.

Abstract: We show that Information Theory quantifiers are suitable tools for detecting and quantifying noise-induced temporal correlations in stochastic resonance phenomena. We use the Bandt & Pompe (BP) method [Phys. Rev. Lett. 88, 174102 (2002)] to define a probability distribution, P, that fully characterizes temporal correlations. The BP method is based on a comparison of neighboring values, and here it is applied to the temporal sequence of residence-time intervals generated by the paradigmatic model of a Brownian particle in a sinusoidally modulated bistable potential. The probability distribution P generated via the BP method has an associated normalized Shannon entropy, H[P], and a statistical complexity measure, C[P], defined as proposed by Rosso et al. [Phys. Rev. Lett. 99, 154102 (2007)]. The statistical complexity quantifies not only randomness but also the presence of correlational structures, the two extreme circumstances of maximum knowledge (“perfect order”) and maximum ignorance (“complete randomness”) being regarded as “trivial” and, in consequence, having complexity C = 0. We show that both H and C display resonant features as a function of the noise intensity, i.e., for an optimal level of noise the entropy displays a minimum and the complexity a maximum. This resonant behavior indicates noise-enhanced temporal correlations in the sequence of residence-time intervals. The methodology proposed here has great potential for the precise detection of subtle signatures of noise-induced temporal correlations in real-world complex signals.
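
The Bandt & Pompe entropy and the Rosso-style statistical complexity used above can be reproduced generically in a few lines; the sketch below applies them to ordinary test series (white noise and a fully chaotic logistic map) rather than to the residence-time intervals of the bistable model, and the embedding dimension is an arbitrary choice.

```python
import math
from collections import Counter
from itertools import permutations

import numpy as np

def bandt_pompe_probs(x, d=4):
    """Probabilities of ordinal patterns of length d (Bandt & Pompe)."""
    patterns = Counter(tuple(int(v) for v in np.argsort(x[i:i + d]))
                       for i in range(len(x) - d + 1))
    total = sum(patterns.values())
    return np.array([patterns.get(p, 0) / total for p in permutations(range(d))])

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_and_complexity(p):
    """Normalized Shannon entropy H and statistical complexity C = Q_J * H."""
    n = len(p)
    uniform = np.full(n, 1.0 / n)
    h = shannon(p) / np.log(n)
    jsd = shannon((p + uniform) / 2) - shannon(p) / 2 - shannon(uniform) / 2
    # Normalize by the maximum Jensen-Shannon divergence, attained by a delta distribution.
    delta = np.zeros(n); delta[0] = 1.0
    jsd_max = shannon((delta + uniform) / 2) - shannon(uniform) / 2
    return h, (jsd / jsd_max) * h

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(20_000)
x = np.empty(20_000); x[0] = 0.1
for i in range(1, len(x)):                 # fully chaotic logistic map
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

for name, series in [("white noise", white_noise), ("logistic map", x)]:
    h, c = entropy_and_complexity(bandt_pompe_probs(series, d=4))
    print(f"{name:12s}  H = {h:.3f}  C = {c:.3f}")
```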

Posted Content
TL;DR: In this paper, the authors propose a methodology to calibrate decisions to the degree (and computability) of forecast error by classifying decision payoffs into two types: simple payoffs (true/false or binary) and complex payoffs (higher moments).
Abstract: The paper presents evidence that econometric techniques based on the variance/L2 norm are flawed and do not replicate. The result is the uncomputability of the role of tail events. The paper proposes a methodology to calibrate decisions to the degree (and computability) of forecast error. It classifies decision payoffs into two types, simple payoffs (true/false or binary) and complex payoffs (higher moments), and randomness into type-1 (thin tails) and type-2 (true fat tails), and shows the errors in the estimation of small-probability payoffs under type-2 randomness. The Fourth Quadrant is where payoffs are complex and randomness is of type 2. We propose solutions to mitigate the effect of the Fourth Quadrant, based on the nature of complex systems.


Journal ArticleDOI
TL;DR: In this article, the localization-disorder paradigm is analyzed for a specific system of weakly repulsive Bose gas at zero temperature placed into a quenched random potential.
Abstract: The localization-disorder paradigm is analyzed for a specific system, a weakly repulsive Bose gas at zero temperature placed in a quenched random potential. We show that at low average density or weak enough interaction the particles fill deep potential wells of the random potential, whose radius and depth depend on the characteristics of the random potential and the interacting gas. The localized state is a random singlet with no long-range phase correlation. At a critical density the quantum phase transition to the coherent superfluid state occurs. We calculate the critical density in terms of the geometrical characteristics of the noise and the gas. In a finite system the ground state becomes nonergodic at very low density. For atoms in traps four different regimes are found; only one of them is superfluid. The theory is extended to lower (one and two) dimensions. Its quantitative predictions can be checked in experiments with ultracold atomic gases and other Bose systems.

Journal ArticleDOI
TL;DR: This paper proposes a methodology to calibrate decisions to the degree (and computability) of forecast error by classifying decision payoffs into two types, simple (true/false or binary) and complex (higher moments), and randomness into type-1 (thin tails) and type-2 (true fat tails).