Other affiliations: University of Camerino, International Military Sports Council, Kavli Institute for Theoretical Physics
Bio: Angelo Vulpiani is an academic researcher from Sapienza University of Rome. The author has contributed to research in topics including the Lyapunov exponent and dynamical systems theory. The author has an h-index of 54, has co-authored 384 publications, and has received 16,220 citations. Previous affiliations of Angelo Vulpiani include the University of Camerino and the International Military Sports Council.
Papers published on a yearly basis
TL;DR: In this paper, it was shown that a dynamical system subject to both periodic forcing and random perturbation may show a resonance (a peak in the power spectrum) which is absent when either the forcing or the perturbation is absent.
Abstract: It is shown that a dynamical system subject to both periodic forcing and random perturbation may show a resonance (peak in the power spectrum) which is absent when either the forcing or the perturbation is absent.
TL;DR: In this article, an amplification of random perturbations by the interaction of nonlinearities internal to the climatic system with external, orbital forcing is found, and this stochastic resonance is investigated in a highly simplified, zero-dimensional climate model.
Abstract: An amplification of random perturbations by the interaction of non-linearities internal to the climatic system with external, orbital forcing is found. This stochastic resonance is investigated in a highly simplified, zero-dimensional climate model. It is conceivable that this new type of resonance might play a role in explaining the 10^5-year peak in the power spectra of paleoclimatic records. DOI: 10.1111/j.2153-3490.1982.tb01787.x
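The mechanism described above can be sketched numerically. The toy model below is an illustrative sketch, not the authors' code: all parameter values (forcing amplitude and period, noise level) are assumptions chosen for demonstration. It integrates an overdamped particle in a double well, dx/dt = x - x^3 + A cos(ωt) + noise, by the Euler-Maruyama method; the forcing alone is sub-threshold and never carries the system across the barrier, while forcing plus noise produces repeated inter-well jumps.

```python
import numpy as np

def simulate_double_well(amp=0.2, omega=2 * np.pi / 100.0, sigma=0.45,
                         dt=0.01, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of dx/dt = x - x**3 + amp*cos(omega*t) + noise.

    Two stable states at x = +1 and x = -1; the forcing amplitude is
    sub-threshold, so without noise the system never changes well.
    (Illustrative parameters, not taken from the paper.)
    """
    rng = np.random.default_rng(seed)
    kicks = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    force = amp * np.cos(omega * dt * np.arange(n_steps))
    x = np.empty(n_steps)
    x[0] = 1.0
    for i in range(1, n_steps):
        drift = x[i - 1] - x[i - 1] ** 3 + force[i]
        x[i] = x[i - 1] + drift * dt + kicks[i]
    return x

def well_crossings(x):
    """Count jumps between wells: sign changes outside the barrier region."""
    s = np.sign(x[np.abs(x) > 0.5])
    return int(np.sum(s[1:] != s[:-1]))

print(well_crossings(simulate_double_well()))            # noise + forcing: jumps occur
print(well_crossings(simulate_double_well(sigma=0.0)))   # no noise: stuck in one well
```

Tuning the noise intensity so that the mean escape (Kramers) time is comparable to half the forcing period is what produces the resonant peak in the power spectrum.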
TL;DR: In this paper, the authors describe anomalous scaling in terms of multifractal objects, defined by a measure whose scaling properties are characterized by a family of singularities, which are identified by a scaling exponent.
Abstract: Anomalous scaling laws appear in a wide class of phenomena where global dilation invariance fails. In this case, the description of scaling properties requires the introduction of an infinite set of exponents. Numerical and experimental evidence indicates that this description is relevant in the theory of dynamical systems and of fully developed turbulence, in the statistical mechanics of disordered systems, and in some condensed matter problems. We describe anomalous scaling in terms of multifractal objects. They are defined by a measure whose scaling properties are characterized by a family of singularities, which are identified by a scaling exponent. Singularities corresponding to the same exponent are distributed on a fractal set. The multifractal object arises as the superposition of these sets, whose fractal dimensions are related to the anomalous scaling exponents via a Legendre transformation. It is thus possible to reconstruct the probability distribution of the singularity exponents. We review the application of this formalism to the description of chaotic attractors in dissipative systems, of the energy-dissipating set in fully developed turbulence, and of some probability distributions in condensed matter problems. Moreover, a simple extension of the method allows us to treat from the same point of view temporal intermittency in chaotic systems and sample-to-sample fluctuations in disordered systems. We stress the phenomenological nature of the approach and discuss the few cases in which it was possible to reach a more fundamental understanding of anomalous scaling. We point out the need for a theory which should explain its origin and pave the way to a microscopic calculation of the probability distribution of the singularities.
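The Legendre-transform construction is easiest to see on the standard textbook example of a multifractal, the binomial multiplicative measure, for which the mass exponent τ(q) has a closed form. The code below is an illustrative sketch of the formalism, not material from the paper, and the weight p = 0.3 is an arbitrary choice.

```python
import numpy as np

def tau(q, p=0.3):
    """Mass exponent tau(q) of the binomial multiplicative measure, which
    puts weights p and 1-p on the two halves of the unit interval at each
    generation.  Standard closed form: tau(q) = -log2(p**q + (1-p)**q)."""
    return -np.log2(p ** q + (1 - p) ** q)

def legendre(q, p=0.3, dq=1e-6):
    """Legendre transform: alpha(q) = d tau/dq, f(alpha) = q*alpha - tau(q).
    f(alpha) is the fractal dimension of the set of singularities with
    exponent alpha."""
    alpha = (tau(q + dq, p) - tau(q - dq, p)) / (2 * dq)
    return alpha, q * alpha - tau(q, p)

# q = 0: f equals the dimension of the support (the whole interval, so 1).
print(legendre(0.0))
# q = 1: f(alpha) = alpha, the information dimension (the Shannon entropy here).
print(legendre(1.0))
```

Sweeping q from negative to positive values traces out the full f(α) curve, whose maximum is the support dimension.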
TL;DR: In this paper, the relation between the relaxation of spontaneous fluctuations and the response to an external perturbation is discussed, and the connection of these works with large deviation theory is analyzed.
Abstract: General aspects of the Fluctuation–Dissipation Relation (FDR) and Response Theory are considered. After analyzing the conceptual and historical relevance of fluctuations in statistical mechanics, we illustrate the relation between the relaxation of spontaneous fluctuations and the response to an external perturbation. These studies date back to Einstein’s work on Brownian Motion, were continued by Nyquist and Onsager, and culminated in Kubo’s linear response theory. The FDR was originally developed in the framework of the statistical mechanics of Hamiltonian systems; nevertheless, a generalized FDR holds under rather general hypotheses, regardless of the Hamiltonian or equilibrium nature of the system. In the last decade, this subject was revived by the works on Fluctuation Relations (FR) concerning far-from-equilibrium systems. The connection of these works with large deviation theory is analyzed. Some examples, beyond the standard applications of statistical mechanics, where fluctuations play a major role are discussed: fluids, granular media, nanosystems and biological systems.
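The relation between the relaxation of spontaneous fluctuations and the response to a perturbation can be illustrated on the simplest possible example, a discrete linear Langevin (AR(1)) process; this toy model is an assumption for demonstration, not a system treated in the paper. The decay of a small kick applied at one instant matches the normalized autocorrelation of the unperturbed fluctuations.

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.9                                   # memory of the model x[n+1] = a*x[n] + noise
noise = rng.standard_normal(500_000)

x = np.zeros_like(noise)                  # unperturbed trajectory
xp = np.zeros_like(noise)                 # twin trajectory, kicked at step 0
delta = 1e-3
xp[0] = delta
for i in range(1, len(noise)):
    x[i] = a * x[i - 1] + noise[i]
    xp[i] = a * xp[i - 1] + noise[i]      # same noise realization

lags = np.arange(25)
response = (xp[lags] - x[lags]) / delta   # R(t): relaxation of the kick

xs = x[1000:]                             # discard the initial transient
m = len(xs) - len(lags)
corr = np.array([np.dot(xs[:m], xs[k:k + m]) / m for k in lags])
corr /= corr[0]                           # normalized autocorrelation C(t)/C(0)

print(np.max(np.abs(response - corr)))    # small: response tracks correlation
```

For this linear system the response is exactly a^t, so the agreement with the measured autocorrelation is limited only by sampling noise; the generalized FDR extends this correspondence well beyond the linear, equilibrium case.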
TL;DR: In this paper, the authors review the concept of multifractal sets in both turbulent flows and dynamical systems using a generalisation of the beta-model and propose that the energy dissipation of three-dimensional turbulent flow is concentrated on a set with non-integer Hausdorff dimension.
Abstract: It is generally argued that the energy dissipation of three-dimensional turbulent flow is concentrated on a set with non-integer Hausdorff dimension. Recently, in order to explain experimental data, it has been proposed that this set does not possess a global dilatation invariance: it can be considered to be a multifractal set. The authors review the concept of multifractal sets in both turbulent flows and dynamical systems using a generalisation of the beta-model.
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …
28 Jul 2005
TL;DR: In this paper, a simple model based on growth and preferential attachment was proposed, which reproduces the power-law degree distribution of real networks and aims to capture the evolution of networks, not just their static topology.
Abstract: The emergence of order in natural systems is a constant source of inspiration for both the physical and biological sciences. While the spatial order characterizing, for example, crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such a high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and whose edges represent the interactions between them. Traditionally, complex networks have been described by random graph theory, founded in 1959 by Paul Erdős and Alfréd Rényi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences for their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with the highest degree induces a rapid breakdown of the network into isolated subparts that cannot communicate with each other. The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks.
Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles which was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology.
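The growth-plus-preferential-attachment rule sketched above can be written in a few lines. The code below is a toy implementation, not the authors' code, and the network size and edge count are illustrative; it uses the standard trick that keeping one list entry per edge endpoint makes a uniform draw from that list equivalent to a degree-proportional draw.

```python
import numpy as np

def preferential_attachment(n=2000, m=2, seed=0):
    """Grow a network node by node; each new node brings m edges whose
    targets are chosen with probability proportional to current degree.
    The 'stubs' list holds one entry per edge endpoint, so a uniform
    draw from it is exactly a degree-proportional draw."""
    rng = np.random.default_rng(seed)
    deg = np.zeros(n, dtype=int)
    stubs = []
    for i in range(m + 1):                 # small complete seed graph
        for j in range(i):
            stubs += [i, j]
            deg[i] += 1
            deg[j] += 1
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:             # m distinct, degree-biased targets
            chosen.add(stubs[rng.integers(len(stubs))])
        for t in chosen:
            stubs += [new, t]
            deg[new] += 1
            deg[t] += 1
    return deg

deg = preferential_attachment()
print(deg.mean(), deg.max())               # hubs sit far above the mean degree
```

The mean degree stays close to 2m, while the largest hubs grow much faster, the fingerprint of the power-law tail.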
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
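The mail-filtering scenario above can be made concrete with a toy word-count learner in the naive-Bayes style; this is an illustrative sketch, not a production filter, and the tiny training set and add-one smoothing are assumptions. It learns per-word log-odds from messages the user has already labeled and applies them to new mail.

```python
import math
from collections import Counter

def train_naive_bayes(examples):
    """examples: list of (text, label) pairs with label 'spam' or 'ham'.
    Returns per-word log-odds estimated with add-one smoothing."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in examples:
        for w in text.lower().split():
            counts[label][w] += 1
            totals[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    logodds = {}
    for w in vocab:
        p_spam = (counts["spam"][w] + 1) / (totals["spam"] + len(vocab))
        p_ham = (counts["ham"][w] + 1) / (totals["ham"] + len(vocab))
        logodds[w] = math.log(p_spam / p_ham)
    return logodds

def classify(text, logodds):
    """Sum the log-odds of the words; unknown words are neutral."""
    score = sum(logodds.get(w, 0.0) for w in text.lower().split())
    return "spam" if score > 0 else "ham"

train = [("win money now", "spam"), ("cheap money offer", "spam"),
         ("meeting agenda attached", "ham"), ("lunch tomorrow", "ham")]
model = train_naive_bayes(train)
print(classify("free money", model))
```

Retraining on each new user decision keeps the rules up to date automatically, which is exactly the burden the text argues should not fall on the user or a software engineer.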
TL;DR: A comprehensive review of spatiotemporal pattern formation in systems driven away from equilibrium is presented in this article, with emphasis on comparisons between theory and quantitative experiments, and a classification of patterns in terms of the characteristic wave vector q0 and frequency ω0 of the instability.
Abstract: A comprehensive review of spatiotemporal pattern formation in systems driven away from equilibrium is presented, with emphasis on comparisons between theory and quantitative experiments. Examples include patterns in hydrodynamic systems such as thermal convection in pure fluids and binary mixtures, Taylor-Couette flow, parametric-wave instabilities, as well as patterns in solidification fronts, nonlinear optics, oscillatory chemical reactions and excitable biological media. The theoretical starting point is usually a set of deterministic equations of motion, typically in the form of nonlinear partial differential equations. These are sometimes supplemented by stochastic terms representing thermal or instrumental noise, but for macroscopic systems and carefully designed experiments the stochastic forces are often negligible. An aim of theory is to describe solutions of the deterministic equations that are likely to be reached starting from typical initial conditions and to persist at long times. A unified description is developed, based on the linear instabilities of a homogeneous state, which leads naturally to a classification of patterns in terms of the characteristic wave vector q0 and frequency ω0 of the instability. Type I_s systems (ω0=0, q0≠0) are stationary in time and periodic in space; type III_o systems (ω0≠0, q0=0) are periodic in time and uniform in space; and type I_o systems (ω0≠0, q0≠0) are periodic in both space and time. Near a continuous (or supercritical) instability, the dynamics may be accurately described via "amplitude equations," whose form is universal for each type of instability. The specifics of each system enter only through the nonuniversal coefficients. Far from the instability threshold a different universal description known as the "phase equation" may be derived, but it is restricted to slow distortions of an ideal pattern. 
For many systems appropriate starting equations are either not known or too complicated to analyze conveniently. It is thus useful to introduce phenomenological order-parameter models, which lead to the correct amplitude equations near threshold, and which may be solved analytically or numerically in the nonlinear regime away from the instability. The above theoretical methods are useful in analyzing "real pattern effects" such as the influence of external boundaries, or the formation and dynamics of defects in ideal structures. An important element in nonequilibrium systems is the appearance of deterministic chaos. A great deal is known about systems with a small number of degrees of freedom displaying "temporal chaos," where the structure of the phase space can be analyzed in detail. For spatially extended systems with many degrees of freedom, on the other hand, one is dealing with spatiotemporal chaos and appropriate methods of analysis need to be developed. In addition to the general features of nonequilibrium pattern formation discussed above, detailed reviews of theoretical and experimental work on many specific systems are presented. These include Rayleigh-Bénard convection in a pure fluid, convection in binary-fluid mixtures, electrohydrodynamic convection in nematic liquid crystals, Taylor-Couette flow between rotating cylinders, parametric surface waves, patterns in certain open flow systems, oscillatory chemical reactions, static and dynamic patterns in biological media, crystallization fronts, and patterns in nonlinear optics. A concluding section summarizes what has and has not been accomplished, and attempts to assess the prospects for the future.
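The amplitude-equation idea near a supercritical instability can be illustrated with its simplest, space-independent real form, dA/dt = εA - A³; setting all coefficients to 1 is an assumption for this sketch, not the review's notation. A tiny disturbance grows exponentially at the linear rate ε and is saturated by the cubic nonlinearity at |A| = √ε.

```python
def saturated_amplitude(eps=0.1, a0=1e-3, dt=0.01, n_steps=20_000):
    """Forward-Euler integration of dA/dt = eps*A - A**3 for real A:
    exponential linear growth, then saturation by the cubic term."""
    a = a0
    for _ in range(n_steps):
        a += dt * (eps * a - a ** 3)
    return a

print(saturated_amplitude(), 0.1 ** 0.5)   # converges to sqrt(eps)
```

The saturated value depends only on the distance from threshold ε, which is the sense in which the amplitude description is universal: system-specific physics enters only through the nonuniversal coefficients that have been scaled out here.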