
Showing papers by "William G. Macready published in 1997"


Journal ArticleDOI
TL;DR: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving and a number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
Abstract: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori "head-to-head" minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.

10,771 citations
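The core NFL claim can be checked directly on a toy problem. The sketch below (not from the paper; a minimal brute-force illustration under the assumption of non-revisiting deterministic search) enumerates every objective function f: {0,1,2,3} → {0,1} and shows that two different search orders achieve identical performance, averaged over all functions, for every evaluation budget:

```python
from itertools import product

X = range(4)                                   # small search space
functions = list(product([0, 1], repeat=4))    # all 16 objectives f: X -> {0, 1}

def run(choose, f, m):
    """Evaluate m distinct points picked by `choose`; return best value seen."""
    visited, best = [], 0
    for _ in range(m):
        x = choose(visited)
        visited.append(x)
        best = max(best, f[x])
    return best

# Two non-revisiting algorithms with opposite search orders.
alg_a = lambda visited: min(set(X) - set(visited))   # left-to-right scan
alg_b = lambda visited: max(set(X) - set(visited))   # right-to-left scan

for m in range(1, 5):
    avg_a = sum(run(alg_a, f, m) for f in functions) / len(functions)
    avg_b = sum(run(alg_b, f, m) for f in functions) / len(functions)
    # Averaged uniformly over all objective functions, performance is equal.
    assert avg_a == avg_b
```

Any other non-revisiting algorithm, however sophisticated its point-selection rule, yields the same averages here; an algorithm can only outperform another on some subset of functions by underperforming on the complement.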


Posted Content
TL;DR: Taking a system's self-dissimilarity over various scales as a complexity "signature" of the system, this work can compare the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. systems involving species densities in a rainforest, vs. capital density in an economy etc.).
Abstract: For systems usually characterized as complex/living/intelligent, the spatio-temporal patterns exhibited on different scales differ markedly from one another. (E.g., the biomass distribution of a human body looks very different depending on the spatial scale at which one examines that biomass.) Conversely, the density patterns at different scales in non-living/simple systems (e.g., gases, mountains, crystals) do not vary significantly from one another. Such self-dissimilarity can be empirically measured on almost any real-world data set involving spatio-temporal densities, be they mass densities, species densities, or symbol densities. Accordingly, taking a system's (empirically measurable) self-dissimilarity over various scales as a complexity "signature" of the system, we can compare the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. systems involving species densities in a rainforest, vs. capital density in an economy, etc.). Signatures can also be clustered, to provide an empirically determined taxonomy of kinds of systems that share organizational traits. Many of our candidate self-dissimilarity measures can also be calculated (or at least approximated) for physical models. The measure of dissimilarity between two scales that we finally choose is the amount of extra information on one of the scales beyond that which exists on the other scale. It is natural to determine this "added information" using a maximum entropy inference of the pattern at the second scale, based on the provided pattern at the first scale. We briefly discuss using our measure with other inference mechanisms (e.g., Kolmogorov complexity-based inference, fractal-dimension preserving inference, etc.).

9 citations
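One concrete instance of the "added information" idea can be sketched for symbol densities. The code below is an illustration of the general recipe, not the paper's actual measure: it takes length-1 symbol frequencies as the first scale, uses the maximum-entropy inference from them (independent symbols) to predict length-2 block frequencies, and reports the extra information in the observed blocks as a KL divergence (equivalently, the mutual information between adjacent symbols):

```python
import math
from collections import Counter

def self_dissimilarity(s):
    """Extra information in the length-2 block statistics of s beyond the
    maximum-entropy (independent-symbols) inference from length-1 statistics.
    Equals KL(p2 || p1 x p1), in bits. Assumes len(s) >= 2."""
    p1 = {c: n / len(s) for c, n in Counter(s).items()}
    pairs = [s[i:i + 2] for i in range(len(s) - 1)]
    p2 = {pr: n / len(pairs) for pr, n in Counter(pairs).items()}
    return sum(q * math.log2(q / (p1[pr[0]] * p1[pr[1]]))
               for pr, q in p2.items())

# A constant string has no extra structure at the pair scale,
# while a periodic string "0101..." carries ~1 bit per pair that the
# max-entropy inference from single-symbol frequencies cannot predict.
print(self_dissimilarity("0" * 50))
print(self_dissimilarity("01" * 100))
```

The paper's measures are more general (arbitrary spatial scales and inference mechanisms), but this captures the shape of the comparison: high values mark patterns at one scale that are not recoverable from the coarser scale.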