
Showing papers by "Santa Fe Institute" published in 1993


Journal Article
TL;DR: Results are presented from an experiment similar to one performed by Packard (1988), in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task.
Abstract (Mitchell, Melanie; Hraber, Peter; Crutchfield, James P): We present results from an experiment similar to one performed by Packard (1988), in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton's lambda parameter (Langton, 1990), and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near "critical" lambda values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with lambda values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to lambda, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability.

474 citations
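For reference, Langton's λ for a binary-state CA is simply the fraction of rule-table entries that map to the non-quiescent state. A minimal Python sketch (ours, not the paper's code; the rule-110 example is illustrative):

```python
# Minimal sketch (not the paper's code): for a binary-state CA, Langton's
# lambda is the fraction of rule-table entries mapping to the non-quiescent
# state (taking state 0 as quiescent).
from itertools import product

def langton_lambda(rule_table):
    """rule_table maps each neighborhood tuple to an output state."""
    return sum(1 for s in rule_table.values() if s != 0) / len(rule_table)

# Example: elementary CA rule 110 over radius-1 binary neighborhoods.
rule_number = 110
table = {
    nbhd: (rule_number >> (4 * nbhd[0] + 2 * nbhd[1] + nbhd[2])) & 1
    for nbhd in product((0, 1), repeat=3)
}
print(langton_lambda(table))  # 5 of 8 entries map to 1, so lambda = 0.625
```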


Journal ArticleDOI
TL;DR: A statistical reference for RNA secondary structures with minimum free energies is computed by folding large ensembles of random RNA sequences over four alphabets: the two binary alphabets AU and GC, the biophysical AUGC alphabet, and the synthetic GCXK alphabet.
Abstract: A statistical reference for RNA secondary structures with minimum free energies is computed by folding large ensembles of random RNA sequences. Four nucleotide alphabets are used: two binary alphabets, AU and GC, the biophysical AUGC and the synthetic GCXK alphabet. RNA secondary structures are made of structural elements, such as stacks, loops, joints, and free ends. Statistical properties of these elements are computed for small RNA molecules of chain lengths up to 100. The results of RNA structure statistics depend strongly on the particular alphabet chosen. The statistical reference is compared with the data derived from natural RNA molecules with similar base frequencies. Secondary structures are represented as trees. Tree editing provides a quantitative measure for the distance, d_t, between two structures. We compute a structure density surface as the conditional probability of two structures having distance t, given that their sequences have distance h. This surface indicates that the vast majority of possible minimum free energy secondary structures occur within a fairly small neighborhood of any typical (random) sequence. Correlation lengths for secondary structures in their tree representations are computed from probability densities. They are appropriate measures for the complexity of the sequence-structure relation. The correlation length also provides a quantitative estimate for the mean sensitivity of structures to point mutations.

298 citations


Posted Content
TL;DR: A spatially extended model of the collective behavior of a large number of locally acting organisms is proposed in which organisms move probabilistically between local cells in space, but with weights dependent on local morphogenetic substances, or morphogens.
Abstract: A spatially extended model of the collective behavior of a large number of locally acting organisms is proposed, in which organisms move probabilistically between local cells in space, but with weights dependent on local morphogenetic substances, or morphogens. The morphogens are in turn affected by the passage of an organism. The evolution of the morphogens and the corresponding flow of the organisms constitute the collective behavior of the group. Such models have various types of phase transitions and self-organizing properties, controlled both by the level of the noise and by other parameters. The model is then applied to the specific case of ants moving on a lattice. The local behavior of the ants is inspired by the actual behavior observed in the laboratory, and analytic results for the collective behavior are compared to the corresponding laboratory results. It is hoped that the present model might serve as a paradigmatic example of a complex cooperative system in nature. In particular, swarm models can be used to explore the relation of nonequilibrium phase transitions to at least three important issues encountered in artificial life: first, emergence as complex adaptive behavior; second, continuous phase transitions in biological systems; and last, behavioral criteria for the evolution of collective behavior in social organisms.

296 citations
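A rough Python illustration of this model class, not the paper's model: ants hop between lattice cells with probabilities weighted by a local morphogen field, their passage reinforces the field, and the field decays. SIZE, BETA, DEPOSIT, and DECAY are invented placeholders, not calibrated values.

```python
# Hedged sketch of the swarm-model class described above. Move rule and
# parameters are illustrative assumptions, not the paper's values.
import random

SIZE, BETA, DEPOSIT, DECAY = 20, 1.0, 1.0, 0.95
morphogen = [[0.0] * SIZE for _ in range(SIZE)]
ants = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(50)]

def step():
    global ants
    new_ants = []
    for x, y in ants:
        nbrs = [((x + dx) % SIZE, (y + dy) % SIZE)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        # Probabilistic move, weighted by local morphogen concentration.
        weights = [1.0 + BETA * morphogen[i][j] for i, j in nbrs]
        nx, ny = random.choices(nbrs, weights=weights)[0]
        morphogen[nx][ny] += DEPOSIT          # passage reinforces the field
        new_ants.append((nx, ny))
    for row in morphogen:                     # evaporation / decay
        row[:] = [DECAY * c for c in row]
    ants = new_ants

for _ in range(100):
    step()
```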


Journal ArticleDOI
TL;DR: This work proposes this kind of topological classification as a tool for extending the "symbolic dynamics" approach to many-body dynamics by exploring the braid types of potentials of the form V ∝ r^α, from α ≤ −2, where all braid types occur, to α = 2, where the system is integrable.
Abstract: Point masses moving in 2+1 dimensions draw out braids in space-time. If they move under the influence of some pairwise potential, what braid types are possible? By starting with fictional paths of the desired topology and "relaxing" them by minimizing the action, we explore the braid types of potentials of the form V ∝ r^α, from α ≤ −2, where all braid types occur, to α = 2, where the system is integrable. We also discuss issues of symmetry and stability. We propose this kind of topological classification as a tool for extending the "symbolic dynamics" approach to many-body dynamics.

293 citations
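For concreteness, the quantity minimized in the "relaxation" step is the classical action; the sketch below states the standard form under the abstract's pairwise power-law potential (notation ours, not the authors' exact formulation):

```latex
% Sketch of the setup (standard classical action; notation ours).
\[
  S[\{x_i\}] \;=\; \int_0^T dt \left[ \sum_i \tfrac{1}{2} m_i \,\dot{x}_i^2
  \;-\; \sum_{i<j} V\bigl(|x_i(t)-x_j(t)|\bigr) \right],
  \qquad V(r) \propto r^{\alpha}.
\]
% Trial loops with the desired space-time braid topology are deformed to
% minimize S within that topological class; how the minimizer behaves as
% \alpha varies determines which braid types survive.
```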


Proceedings Article
29 Nov 1993
TL;DR: In this article, a simple hill-climbing algorithm (RMHC) was shown to outperform a GA on a simple "Royal Road" function, and an "idealized" genetic algorithm (IGA) was proposed.
Abstract: We analyze a simple hill-climbing algorithm (RMHC) that was previously shown to outperform a genetic algorithm (GA) on a simple "Royal Road" function. We then analyze an "idealized" genetic algorithm (IGA) that is significantly faster than RMHC and that gives a lower bound for GA speed. We identify the features of the IGA that give rise to this speedup, and discuss how these features can be incorporated into a real GA.

284 citations
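A short Python sketch of RMHC on the Royal Road function R1 as usually defined (64 bits in eight blocks of eight, each completed block contributing 8 to fitness); the loop is generic hill climbing with neutral moves accepted, and the parameters are illustrative:

```python
# Minimal sketch of random-mutation hill climbing (RMHC) on Royal Road R1.
import random

N_BLOCKS, BLOCK = 8, 8

def royal_road(bits):
    # Fitness: 8 points for each fully-set contiguous block of 8 bits.
    return sum(BLOCK for b in range(N_BLOCKS)
               if all(bits[b * BLOCK:(b + 1) * BLOCK]))

def rmhc(max_evals=300_000):
    current = [random.randint(0, 1) for _ in range(N_BLOCKS * BLOCK)]
    best = royal_road(current)
    for evals in range(max_evals):
        i = random.randrange(len(current))
        current[i] ^= 1                      # flip one random bit
        f = royal_road(current)
        if f >= best:                        # accept if no worse
            best = f
        else:
            current[i] ^= 1                  # otherwise undo the flip
        if best == N_BLOCKS * BLOCK:
            return evals + 1                 # evaluations to reach optimum
    return max_evals

print(rmhc())
```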


Journal ArticleDOI
TL;DR: The V3 loop of the human immunodeficiency virus type 1 (HIV-1) envelope protein is a highly variable region that is both functionally and immunologically important and an information theoretic quantity called mutual information, a measure of covariation, is used to quantify dependence between mutations in the loop.
Abstract: The V3 loop of the human immunodeficiency virus type 1 (HIV-1) envelope protein is a highly variable region that is both functionally and immunologically important. Using available amino acid sequences from the V3 region, we have used an information theoretic quantity called mutual information, a measure of covariation, to quantify dependence between mutations in the loop. Certain pairs of sites, including non-contiguous sites along the sequence, do not have independent mutations but display considerable, statistically significant, covarying mutations as measured by mutual information. For the pairs of sites with the highest mutual information, specific amino acids were identified that were highly predictive of amino acids in the linked site. The observed interdependence between variable sites may have implications for structural or functional relationships; separate experimental evidence indicates functional linkage between some of the pairs of sites with high mutual information. Further specific mutational studies of the V3 loop's role in determining viral phenotype are suggested by our analyses. Also, the implications of our results may be important to consider for V3 peptide vaccine design. The methods used here are generally applicable to the study of variable proteins.

264 citations
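The covariation measure is standard and easy to state in code: the mutual information M(i, j) = H(i) + H(j) − H(i, j) between two alignment columns, estimated from observed frequencies. A self-contained Python sketch with toy columns (the data below are invented, not V3 sequences):

```python
# Sketch of the covariation measure described above, estimated from
# plug-in amino acid frequencies.
from collections import Counter
from math import log2

def entropy(column):
    n = len(column)
    return -sum((c / n) * log2(c / n) for c in Counter(column).values())

def mutual_information(col_i, col_j):
    joint = list(zip(col_i, col_j))
    return entropy(col_i) + entropy(col_j) - entropy(joint)

col_i = list("GGGRRRGGRR")   # toy column standing in for a V3 alignment site
col_j = list("PPPQQQPPQQ")   # covaries perfectly with col_i
print(mutual_information(col_i, col_j))  # 1.0 bit: knowing one site fixes the other
```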


Journal ArticleDOI
TL;DR: In this article, the authors explore the idea of constructing theoretical economic agents that behave like actual human agents and using them in neoclassical economic models, by postulating "artificial agents" who use a learning algorithm calibrated against human learning data from psychological experiments.
Abstract: This paper explores the idea of constructing theoretical economic agents that behave like actual human agents and using them in neoclassical economic models. It does this in a repeated-choice setting by postulating “artificial agents” who use a learning algorithm calibrated against human learning data from psychological experiments. The resulting calibrated algorithm appears to replicate human learning behavior to a high degree and reproduces several “stylized facts” of learning. It can therefore be used to replace the idealized, perfectly rational agents in appropriate neoclassical models with “calibrated agents” that represent actual human behavior. The paper discusses the possibilities of using the algorithm to represent human learning in normal-form stage games and in more general neoclassical models in economics. It explores the likelihood of convergence to long-run optimality and to Nash behavior, and the “characteristic learning time” implicit in human adaptation in the economy.

237 citations
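As a heavily hedged illustration of the class of algorithm involved, strength-based reinforcement with choice probabilities proportional to accumulated strengths can be sketched in a few lines of Python; the action set, payoffs, and parameters below are invented placeholders, not the paper's calibrated values:

```python
# Hedged sketch of a strength-based reinforcement learner (invented
# parameters; not the paper's calibrated algorithm).
import random

actions = [0, 1, 2]
strengths = [1.0, 1.0, 1.0]
mean_payoff = {0: 0.5, 1: 1.0, 2: 0.2}      # hypothetical expected rewards

rng = random.Random(0)
for t in range(2000):
    # Choose an action with probability proportional to its strength.
    choice = rng.choices(actions, weights=strengths)[0]
    reward = max(mean_payoff[choice] + rng.uniform(-0.1, 0.1), 0.0)
    strengths[choice] += reward             # reinforce the chosen action

# Choice probabilities concentrate gradually (not instantly) on action 1,
# the kind of slow convergence toward optimality such models examine.
print([s / sum(strengths) for s in strengths])
```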


Journal ArticleDOI
TL;DR: The results show that the optimum mutation schedule is one with brief bursts of high mutation rates interspersed between periods of mutation-free growth, and this model provides a framework within which the anatomy and kinetics of the germinal center reaction can be understood.

167 citations


Journal ArticleDOI
TL;DR: In this article, the authors review results on the evolution of cooperation based on the iterated Prisoner's Dilemma; coevolution of strategies is discussed both in situations where everyone plays against everyone and for spatial games.
Abstract: We review results on the evolution of cooperation based on the iterated Prisoner's Dilemma. Coevolution of strategies is discussed both in situations where everyone plays against everyone, and for spatial games. Simple artificial ecologies are constructed by incorporating an explicit resource flow and predatory interactions into models of coevolving strategies. Properties of food webs are reviewed, and we discuss what artificial ecologies can teach us about community structure.

86 citations
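For readers new to the setting, a minimal Python round-robin of the iterated Prisoner's Dilemma, using the standard payoff values T=5, R=3, P=1, S=0 (an illustration of the game itself, not the paper's ecology model):

```python
# Minimal iterated Prisoner's Dilemma sketch with the standard payoffs.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):      # cooperate first, then mirror the opponent
    return history[-1] if history else 'C'

def always_defect(history):
    return 'D'

def play(s1, s2, rounds=100):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)   # each strategy sees the opponent's history
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play(tit_for_tat, always_defect))  # (99, 104): TFT loses round 1, then mutual defection
```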


Journal ArticleDOI
TL;DR: In this paper, the authors present evidence that the quasi-periodic oscillations (QPO) and very low frequency noise (VLFN) characteristic of many accretion sources are different aspects of the same physical process.
Abstract: We present evidence that the quasi-periodic oscillations (QPO) and very low frequency noise (VLFN) characteristic of many accretion sources are different aspects of the same physical process. We analyzed a long, high time resolution EXOSAT observation of the low-mass X-ray binary (LMXB) Sco X-1. The X-ray luminosity varies stochastically on time scales from milliseconds to hours. The nature of this variability - as quantified with both power spectrum analysis and a new wavelet technique, the scalegram - agrees well with the dripping handrail accretion model, a simple dynamical system which exhibits transient chaos. In this model both the QPO and VLFN are produced by radiation from blobs with a wide size distribution, resulting from accretion and subsequent diffusion of hot gas, the density of which is limited by an unspecified instability to lie below a threshold.

82 citations


Journal ArticleDOI
TL;DR: A numerical scan of random systems highlights the special properties of elementary replicators: they reduce the effective interconnectedness of the system, resulting in enhanced competition, and strong correlations between the concentrations.

Journal ArticleDOI
TL;DR: Just as with landscapes based on most stable secondary structure prediction, the landscapes defined on the full biophysical GCAU alphabet are much smoother than the landscapes restricted to pure GC sequences and the correlation lengths are almost constant fractions of the chain lengths.
Abstract: Statistical properties of RNA folding landscapes obtained by the partition function algorithm (McCaskill 1990) are investigated in detail. The pair correlation of free energies as a function of the Hamming distance is used as a measure for the ruggedness of the landscape. The calculation of the partition function contains information about the entire ensemble of secondary structures as a function of temperature and opens the door to all quantities of thermodynamic interest, in contrast with the conventional minimal free energy approach. A metric distance of structure ensembles is introduced and pair correlations at the level of the structures themselves are computed. Just as with landscapes based on most stable secondary structure prediction, the landscapes defined on the full biophysical GCAU alphabet are much smoother than the landscapes restricted to pure GC sequences and the correlation lengths are almost constant fractions of the chain lengths. Correlation functions for multi-structure landscapes exhibit an increased correlation length, especially near the melting temperature. However, the main effect on evolution is rather an effective increase in sampling for finite populations where each sequence explores multiple structures.
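The ruggedness measure named here has a standard form worth writing out (notation ours, not copied from the paper):

```latex
% Pair correlation of free energies f at Hamming distance h, with
% d(\sigma,\sigma') the Hamming distance between sequences:
\[
  \rho(h) \;=\;
  \frac{\langle f(\sigma)\, f(\sigma') \rangle_{d(\sigma,\sigma')=h}
        \;-\; \langle f \rangle^2}{\operatorname{Var}(f)} ,
\]
% The correlation length \ell is the decay scale of \rho, e.g.
% \rho(h) \approx e^{-h/\ell}; a larger \ell relative to chain length
% means a smoother landscape.
```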

Journal ArticleDOI
TL;DR: The Landauer cost for erasing information demands that information about a physical system be included in the total entropy, as proposed by Zurek, and this must imply that work can be extracted in going from equilibrium to a typical system state.
Abstract: The Landauer cost for erasing information demands that information about a physical system be included in the total entropy, as proposed by Zurek [Nature 341, 119 (1989); Phys. Rev. A 40, 4731 (1989)]. A consequence is that most system states, either classical phase-space distributions or quantum pure states, have total entropy much larger than thermal equilibrium. If total entropy is to be a useful concept, this must imply that work can be extracted in going from equilibrium to a typical system state. The work comes from randomization of a "memory" that holds a record of the system state.
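The quantitative relations behind this argument, in hedged outline (notation ours):

```latex
% Total entropy (in bits) = statistical entropy H of the observer's ensemble
% plus the algorithmic information content K of the recorded state,
% following Zurek's "physical entropy":
\[
  S_{\mathrm{total}} \;=\; H \;+\; K .
\]
% Landauer's bound prices each erased (or randomized) bit of memory at
\[
  W \;\ge\; k_B T \ln 2 \quad \text{per bit},
\]
% so consistency of S_total requires that moving from equilibrium to a
% typical low-H state lets one extract work, paid for by the bits gained
% when the memory recording the state is randomized.
```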

Journal ArticleDOI
TL;DR: Recent developments in the study of the prehistory of the northern Mogollon and Anasazi areas of the North American Southwest are reviewed, with emphasis on the pre-A.D. 1150 period.
Abstract: Recent developments in the study of the prehistory of the northern Mogollon and Anasazi areas of the North American Southwest are reviewed, with emphasis on the pre-A.D. 1150 period, in an attempt to identify key empirical results and incipient interpretive directions.

Journal ArticleDOI
TL;DR: The success of Artificial Life depends on whether it will help solve the conceptual problems of biology; biology's lack of a theory of organization is used as an example of the challenge that Artificial Life must meet.
Abstract: The success of Artificial Life (ALife) depends on whether it will help solve the conceptual problems of biology. Biology may be viewed as the science of the transformation of organizations. Yet biology lacks a theory of organization. We use this as an example of the challenge that ALife must meet.

Journal ArticleDOI
TL;DR: Semirandom DNA, synthesized with a designed, three‐residue repeat pattern, can encode libraries of very high diversity and represents an important tool for the construction of random polypeptide libraries.
Abstract: Libraries of random sequence polypeptides are useful as sources of unevolved proteins, novel ligands, and potential lead compounds for the development of vaccines and therapeutics. The expression of small random peptides has been achieved previously using DNA synthesized with equimolar mixtures of nucleotides. For many potential uses of random polypeptide libraries, concerns such as avoiding termination codons and matching target amino acid compositions make more complex designs necessary. In this study, three mixtures of nucleotides, corresponding to the three positions in the codon, were designed such that semirandom DNA synthesized by repeated cycles of the three mixtures created an open reading frame encoding random sequence polypeptides with desired ensemble characteristics. Two methods were used to design the nucleotide mixtures: the manual use of a spreadsheet and a refining grid search algorithm. Using design targets of less than or equal to 1% stop codons and an amino acid composition based on the average ratios observed in natural, globular proteins, the search methods yielded similar nucleotide ratios. Semirandom DNA, synthesized with a designed, three-residue repeat pattern, can encode libraries of very high diversity and represents an important tool for the construction of random polypeptide libraries.
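The core design computation is straightforward: given one nucleotide mixture per codon position, the expected stop-codon frequency is the summed probability of TAA, TAG, and TGA, and a grid search tunes the mixtures against this and a target amino acid composition. A Python sketch with invented mixtures (not the ratios reported in the paper):

```python
# Sketch of the design computation: expected stop-codon frequency given
# one nucleotide mixture per codon position. Mixtures are placeholders.
STOPS = ("TAA", "TAG", "TGA")

# Hypothetical per-position nucleotide fractions (each sums to 1).
pos1 = {"A": 0.30, "C": 0.20, "G": 0.30, "T": 0.20}
pos2 = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
pos3 = {"A": 0.10, "C": 0.40, "G": 0.40, "T": 0.10}

def stop_probability(p1, p2, p3):
    return sum(p1[s[0]] * p2[s[1]] * p3[s[2]] for s in STOPS)

print(f"expected stop-codon frequency: {stop_probability(pos1, pos2, pos3):.3%}")
# A grid search over candidate mixtures would minimize this quantity
# subject to matching a target amino acid composition.
```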

Journal ArticleDOI
TL;DR: The quantum baker’s map displays a hypersensitivity to perturbations that is analogous to behavior found earlier in the classical case, and characterizes “quantum chaos” in a way that is directly relevant to statistical physics.
Abstract: We analyze a randomly perturbed quantum version of the baker’s transformation, a prototype of an area-conserving chaotic map. By numerically simulating the perturbed evolution, we estimate the information needed to follow a perturbed Hilbert-space vector in time. We find that the Landauer erasure cost associated with this information grows very rapidly and becomes much larger than the maximum statistical entropy given by the logarithm of the dimension of Hilbert space. The quantum baker’s map thus displays a hypersensitivity to perturbations that is analogous to behavior found earlier in the classical case. This hypersensitivity characterizes “quantum chaos” in a way that is directly relevant to statistical physics.

Journal ArticleDOI
TL;DR: This paper shows how methods from nonlinear time series analysis can be used to gain knowledge about correlations between the spiking events recorded from periodically driven sensory neurons; results on nonlinear forecastability of these spike trains are compared to those for data sets derived from the original data and satisfying an appropriately chosen null hypothesis.
Abstract: The sequence of firing times of a neuron can be viewed as a point process. The problem of spike train analysis is to infer the underlying neural dynamics from this point process when, for example, one does not have access to a state variable such as intracellular voltage. Traditional analyses of spike trains have focussed to a large extent on fitting the parameters of a model stochastic point process to the data, such as the intensity of a homogeneous Poisson point process. This paper shows how methods from nonlinear time series analysis can be used to gain knowledge about correlations between the spiking events recorded from periodically driven sensory neurons. Results on nonlinear forecastability of these spike trains are compared to those on data sets derived from the original data set and satisfying an appropriately chosen null hypothesis. While no predictability, linear or nonlinear, is revealed by our analysis of the raw data using local linear predictors, it appears that there is some predictability in the successive phases (rather than intervals) at which the neurons fire.

Journal ArticleDOI
TL;DR: It is shown that a separate activation step from virgin to active B cells renders the virgin state stable for any choice of biologically reasonable parameters, and a combination of linear response functions for both proliferation and differentiation does not give rise to stable fixed points.

Journal ArticleDOI
TL;DR: In this paper, a Cayley tree model of B cell and antibody dynamics is formulated and analyzed, and the existence and stability of these localized network states are explored as a function of model parameters.

Journal ArticleDOI
01 Sep 1993-Fractals
TL;DR: In this article, the authors demonstrate the self-affine properties of the human heart rate using a spectral analysis based on counting statistics, where each QRS complex is considered to be a point event and from the number of events N(Δt) in consecutive time windows Δt the variance is calculated.
Abstract: Spectral analysis of heart rate variability is usually performed by Fast Fourier Transform. Here we demonstrate the self-affine properties of the human heart rate using a spectral analysis based on counting statistics. Each QRS complex is considered to be a point event, and from the number of events N(Δt) in consecutive time windows Δt the variance is calculated. From the finding that the variance of N(Δt) follows a power law proportional to (Δt)^(1+b) in the case of 1/f^b noise, it is shown that the variance of the heart rate as determined for windows of length Δt, i.e., N(Δt)/Δt, is proportional to (Δt)^(b−1). From a 12-day Holter recording, the scaling region could be determined to cover 0.16 to 0.000136 Hz. A function X(t) is self-affine if X(t) and X(rt)/r^H have the same distribution functions. From the variance-time curve, it can be shown that the exponent H depends on b, with b = 2H + 1. In young healthy men, the parameter b fluctuates between 0.2 and 1.0 during 24 h and thus determines the self-affine scaling factor H = (b−1)/2 for the amplitude of heart rate, if the time axis is scaled by r. Thus, during periods of 1/f noise, the heart rate scales with H = 0, and for periods of almost white noise, with H close to −1/2.
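The scaling argument compressed into the abstract can be written out in two lines (notation as above):

```latex
% For 1/f^b noise, the count variance over windows of length \Delta t obeys
\[
  \operatorname{Var}\!\bigl[N(\Delta t)\bigr] \;\propto\; (\Delta t)^{\,1+b},
\]
% so the window-averaged rate N(\Delta t)/\Delta t has
\[
  \operatorname{Var}\!\left[\frac{N(\Delta t)}{\Delta t}\right]
  \;=\; \frac{\operatorname{Var}[N(\Delta t)]}{(\Delta t)^2}
  \;\propto\; (\Delta t)^{\,b-1},
\]
% and matching this to the self-affine form X(rt) \sim r^{H} X(t) gives
\[
  H \;=\; \frac{b-1}{2}, \qquad b \;=\; 2H + 1 .
\]
% Thus b = 1 (1/f noise) gives H = 0, and b close to 0 (near-white noise)
% gives H close to -1/2.
```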

Posted Content
TL;DR: In this article, a model of learning and adaptation is used to analyze the coevolution of strategies in the repeated Prisoner's Dilemma game under both perfect and imperfect reporting.
Abstract: A model of learning and adaptation is used to analyze the coevolution of strategies in the repeated Prisoner's Dilemma game under both perfect and imperfect reporting. Meta-players submit finite automata strategies and update their choices through an explicit evolutionary process modeled by a genetic algorithm. Using this framework, adaptive strategic choice and the emergence of cooperation are studied through "computational experiments." The results of the analyses indicate that information conditions lead to significant differences among the evolving strategies. Furthermore, they suggest that the general methodology may have much wider applicability to the analysis of adaptation in economic and social systems.

Proceedings Article
29 Nov 1993
TL;DR: The conventional Bayesian justification of backprop is that it finds the MAP weight vector, but to find the MAP i-o function instead one must add a correction term to backprop.
Abstract: The conventional Bayesian justification of backprop is that it finds the MAP weight vector. As this paper shows, to find the MAP i-o function instead one must add a correction term to backprop. That term biases one towards i-o functions with small description lengths, and in particular favors (some kinds of) feature-selection, pruning, and weight-sharing.
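A hedged sketch of why the two MAP quantities differ (notation ours; schematic, not the paper's derivation):

```latex
% The posterior over i-o functions marginalizes over all weight vectors
% realizing the same function f:
\[
  p(f \mid D) \;=\; \int \! dw \;\, p(w \mid D)\, \delta\!\bigl(f - f_w\bigr),
\]
% so maximizing p(f|D) adds to the usual backprop objective a log-volume
% term, schematically
\[
  \log p(f \mid D) \;\approx\; \log p(w_f \mid D)
  \;+\; \log \mathrm{Vol}\{w : f_w = f\},
\]
% which favors functions realized by many (or broad) weight regions, i.e.
% short-description-length solutions such as pruned or weight-shared nets.
```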

Posted Content
TL;DR: An experiment similar to one performed by Packard (1988), in which a genetic algorithm is used to evolve cellular automata to perform a particular computational task, demonstrates how symmetry breaking can impede the evolution toward higher computational capability.
Abstract: We present results from an experiment similar to one performed by Packard (1988), in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton's lambda parameter (Langton, 1990), and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near "critical" lambda values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with lambda values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to lambda, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability.

Journal Article
TL;DR: Building on Zurek's proposal that information about a physical system be included in the total entropy, the authors show that most system states (either classical phase-space distributions or quantum pure states) have total entropy much larger than thermal equilibrium, and that work can therefore be extracted in going from equilibrium to a typical system state.
Abstract: The Landauer cost for erasing information demands that information about a physical system be included in the total entropy, as proposed by Zurek [Nature 341, 119 (1989); Phys. Rev. A 40, 4731 (1989)]. A consequence is that most system states, either classical phase-space distributions or quantum pure states, have total entropy much larger than thermal equilibrium. If total entropy is to be a useful concept, this must imply that work can be extracted in going from equilibrium to a typical system state. The work comes from randomization of a "memory" that holds a record of the system state.

Posted Content
TL;DR: Genetic algorithms are computational models of evolution that play a central role in many artificial life models; they are used to study how learning and evolution interact, and to model ecosystems, immune systems, cognitive systems, and social systems.
Abstract: Genetic algorithms are computational models of evolution that play a central role in many artificial life models. We review the history and current scope of research on genetic algorithms in artificial life, using illustrative examples in which the genetic algorithm is used to study how learning and evolution interact, and to model ecosystems, immune systems, cognitive systems, and social systems. We also outline a number of open questions and future directions for genetic algorithms in artificial life research.

Proceedings Article
29 Nov 1993
TL;DR: Noise sensitivity signature methods use all of the training data and are manifestly data-adaptive and non-parametric, well suited for situations with limited training data.
Abstract: We show how randomly scrambling the output classes of various fractions of the training data may be used to improve predictive accuracy of a classification algorithm. We present a method for calculating the "noise sensitivity signature" of a learning algorithm which is based on scrambling the output classes. This signature can be used to indicate a good match between the complexity of the classifier and the complexity of the data. Use of noise sensitivity signatures is distinctly different from other schemes to avoid overtraining, such as cross-validation, which uses only part of the training data, or various penalty functions, which are not data-adaptive. Noise sensitivity signature methods use all of the training data and are manifestly data-adaptive and non-parametric. They are well suited for situations with limited training data.
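A minimal Python sketch of the scrambling procedure (the plug-in classifier and the summary statistic here are illustrative stand-ins; the paper's actual signature is richer):

```python
# Sketch: scramble a fraction of training labels, retrain, and record
# accuracy on the clean labels. An overly complex model keeps fitting the
# scrambled labels; a well-matched one degrades gracefully.
import random

def scramble_labels(y, fraction, classes, rng):
    y = list(y)
    for i in rng.sample(range(len(y)), int(fraction * len(y))):
        y[i] = rng.choice(classes)
    return y

def noise_sensitivity_signature(train, fit, score,
                                fractions=(0.0, 0.1, 0.2, 0.4)):
    X, y = train
    classes = sorted(set(y))
    rng = random.Random(0)
    signature = []
    for f in fractions:
        model = fit(X, scramble_labels(y, f, classes, rng))
        signature.append(score(model, X, y))   # accuracy vs. clean labels
    return list(zip(fractions, signature))

# Toy demo with a trivial stand-in "classifier": predict the majority label.
if __name__ == "__main__":
    X = list(range(100))
    y = [0] * 50 + [1] * 50
    def fit(X, y):
        return max(set(y), key=y.count)              # majority label
    def score(model, X, y):
        return sum(model == label for label in y) / len(y)
    print(noise_sensitivity_signature((X, y), fit, score))
```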

Proceedings ArticleDOI
05 Nov 1993
TL;DR: In this article, the authors used the Ott, Grebogi, and Yorke (OGY) method to control complex in vitro cardiac rhythms in periodically perturbed isolated cells.
Abstract: Relevant beat-to-beat measures of local electrical responses during complex cardiac rhythms are interpreted as successive iterates of a low-dimensional mapping. That simplified view is supported by previously reported experimental and numerical work. In that approximate theory, low-dimensional dynamics (not restricted to chaos) can also be perturbed and controlled, much in the same way as in the Ott et al. method for controlling chaos in nonlinear dynamical systems. In the problem at hand, which involves nonlinear waves and spatial degrees of freedom, the task is much more complicated and the phenomena less well understood. Recordings from an in vitro model of ventricular fibrillation are analyzed, searching for deterministic recurrences in the local period of activation. 1. INTRODUCTION: Controlling complicated patterns of excitation in cardiac tissue during or preceding lethal arrhythmias is a major challenge which can have important clinical implications even if it is only partially successful. The application of strategies derived from the Ott, Grebogi & Yorke (OGY) method to complex in vitro cardiac rhythms was first reported by Garfinkel et al. The extension of these initial results to the control of clinically relevant cardiac arrhythmias is hindered by the fact that (i) cardiac dynamics evolve in time and space rather than on an isolated point, as in the OGY scenario; and (ii) current knowledge of dynamical and electrophysiological aspects of ventricular fibrillation (VF) is incomplete at best. This paper describes in section 2 what is already known of a relatively simpler situation involving complex rhythms in periodically perturbed isolated cells.

Book ChapterDOI
01 Jan 1993
TL;DR: An evolutionary algorithm which allows entities to increase and decrease in complexity during the evolutionary process is applied to recurrent neural networks, which provides a suitable set of test problems.
Abstract: An evolutionary algorithm which allows entities to increase and decrease in complexity during the evolutionary process is applied to recurrent neural networks. Recognition of various regular languages provides a suitable set of test problems.

Proceedings Article
29 Nov 1993
TL;DR: This work uses neural networks to simultaneously examine both sequence and structure data, and to evolve new classes of secondary structure that can be predicted from sequence with significantly higher accuracy than the conventional classes.
Abstract: We use two co-evolving neural networks to determine new classes of protein secondary structure which are significantly more predictable from local amino acid sequence than the conventional secondary structure classification. Accurate prediction of the conventional secondary structure classes (alpha helix, beta strand, and coil) from primary sequence has long been an important problem in computational molecular biology. Neural networks have been a popular method for attempting to predict these conventional secondary structure classes. Accuracy has been disappointingly low. The algorithm presented here uses neural networks to simultaneously examine both sequence and structure data, and to evolve new classes of secondary structure that can be predicted from sequence with significantly higher accuracy than the conventional classes. These new classes have both similarities to, and differences from, the conventional alpha helix, beta strand, and coil.