scispace - formally typeset
Author

C. D. Gelatt

Other affiliations: IBM
Bio: C. D. Gelatt is an academic researcher from Harvard University. The author has contributed to research in topics: Simulated annealing & Optimization problem. The author has an h-index of 20, co-authored 25 publications receiving 41,523 citations. Previous affiliations of C. D. Gelatt include IBM.

Papers
Journal ArticleDOI
13 May 1983-Science
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
Abstract: There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.
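The optimization-by-annealing procedure the abstract describes can be sketched as a Metropolis-style loop: accept any move that lowers the cost, accept uphill moves with probability exp(-Δ/T), and lower the temperature gradually. The cost function, neighbor move, and geometric cooling schedule below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, t_min=1e-3, alpha=0.95, steps=100):
    """Minimize cost(x) by annealing: accept worse states with probability
    exp(-delta/T) and cool T geometrically, tracking the best state seen."""
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    while t > t_min:
        for _ in range(steps):
            y = neighbor(x)
            delta = cost(y) - cost(x)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = y
                if cost(x) < best_cost:
                    best, best_cost = x, cost(x)
        t *= alpha  # geometric cooling schedule
    return best, best_cost

# Toy example: a 1-D cost surface with many local minima.
random.seed(0)
f = lambda x: x * x + 10 * math.sin(3 * x)
best, val = simulated_annealing(f, lambda x: x + random.gauss(0, 0.5), x0=5.0)
```

At high temperature the walk crosses barriers freely; as T falls it settles into a low-cost basin, which is the annealing analogy made computational.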

41,772 citations

Journal ArticleDOI
TL;DR: In this article, a conceptual model and a calculational procedure for the study of the electronic structure of metallic compounds are presented; the model consists of spherical atoms compressed into finite volumes appropriate to the solid.
Abstract: We present a conceptual model and calculational procedure for the study of the electronic structure of metallic compounds. The model consists of spherical atoms compressed into finite volumes appropriate to the solid. The model involves no adjustable or experimentally derived parameters. All contributions to the total energy (other than the Madelung energy) are obtained from independent compressed-atom calculations. Interatomic interactions enter the calculations through the electronic configuration (the distribution of the valence charge among $s$, $p$, $d$, etc., states) and boundary conditions which give the atomic valence levels a finite width. These environmental constraints, which specify the state of the compressed atoms, are obtained from energy-band calculations. For the latter we introduce a new method, which we call the augmented-spherical-wave (ASW) method to stress its conceptual similarity to Slater's augmented-plane-wave (APW) method. The ASW method is a direct descendant of the linear-muffin-tin-orbitals technique introduced by Andersen; when applied to pure metals, it yields results which closely approximate those of the much more elaborate Korringa-Kohn-Rostoker calculations of Moruzzi, Williams, and Janak. The combined ASW compressed-atom procedure is tested on (i) the empty lattice, (ii) the pure metals Na, Al, Cu, and Mo, and (iii) the ordered stoichiometric compounds NaCl, NiAl, and CuZn. Finally, we demonstrate the utility of the procedure by using it to study the anomalous tendency of Ni and Pd (as compared to their Periodic Table neighbors Co, Cu, Rh, and Ag) to form hydride phases. We have calculated the total energies of the six pure metals and their monohydrides. The total energy differences exhibit the anomaly and an analysis of quantities internal to the calculation reveals its origin.

881 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a theory of the chemical bond in compounds consisting of both transition metals and nontransition metals, and establish chemical trends in the bonding properties by directly comparing the total energy of a large number of such compounds with the total energies of their constituents.
Abstract: We present a theory of the chemical bond in compounds consisting of both transition metals and nontransition metals. Chemical trends in the bonding properties are established by directly comparing the total energies of a large number of such compounds with the total energies of their constituents. These chemical trends are analyzed in terms of the $s$-, $p$-, and $d$-like state densities of the compounds and the constituents. Rather different types of bonding are shown to result when the atomic $s$ and $p$ levels of the nontransition metal lie above, below, and near the energy of the transition-metal $d$ level. The heat of compound formation is shown to result from a competition between two simple physical effects: (1) the weakening of the transition-metal bonds by the lattice dilatation required for the accommodation of the nontransition metal, and (2) the increased bonding which results from the occupation of the bonding members of the hybrid states formed from the interaction between the transition-metal $d$ states and the $s$-$p$ states on the nontransition metal. Our theoretical values for the heats of formation of these compounds are generally similar to those given by Miedema's empirical formula. Distinctive aspects of the variation of the heat of formation with the number of valence electrons reveal, however, that the microscopic picture on which the empirical formula is based is quite different from that given by our self-consistent energy-band theory.

316 citations

Journal ArticleDOI
TL;DR: In this paper, calculations of the electronic structure of transition-metal hydrides were applied to the cohesive energy of monohydrides and to the single-particle lifetime of states in nonstoichiometric Cu and Pd hydrides.
Abstract: Calculations of the electronic structure of transition-metal hydrides are applied to the cohesive energy of $3d$ and $4d$ monohydrides, and the single-particle lifetime of states in nonstoichiometric Cu and Pd hydrides. A simple formula is presented which delineates the principal contributions to the cohesive energy of the hydrides: (i) the formation of a metal-hydrogen bonding level derived of states of the pure metal band structure which have $s$ symmetry about the site of the added proton, (ii) a slight increase in binding of the metal $d$ bands due to the added attractive potential, and (iii) the addition of an extra electron to the metal electron sea. The calculations, corrected for Coulomb repulsion at the hydrogen sites, qualitatively reproduce the experimental trends of the heats of formation of the transition-metal hydrides. The single-particle lifetime calculations are in quantitative agreement with Dingle-temperature measurements and they correctly predict the existence of essentially undamped states on the hole sheets of the $\alpha$-phase PdH Fermi surface.

175 citations

Patent
16 Sep 1982
TL;DR: In this article, the overall arrangement of a large number of discrete objects is optimized with respect to its function or the space it occupies by establishing a suitability measure, or score, for each configuration, generating random local changes in the arrangement, and scoring the effect of each change.
Abstract: The overall arrangement of a large number of discrete objects may be optimized with relation to the function of or the space occupied by the arrangement by establishing a suitability measure, or score, for each configuration of the arrangement, in relation to the function of or volume occupied, generating random local changes in the arrangement, scoring the effect of the individual changes and subjecting all objects in the arrangement to a random series of incremental changes whose outcome is on average predictable. The procedure lends itself to computer simulation. It may be applied to sequencing and scheduling problems, bin packing types of problems and in complex design problems such as semiconductor chip placement, wiring network routing and logic partitioning.
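A minimal sketch of the incremental-change procedure the patent outlines, applied to a hypothetical sequencing problem: ordering jobs to minimize total completion time. The score function, move set, and cooling constants are invented for illustration.

```python
import math
import random

def total_completion_time(order, durations):
    """Suitability measure (score): the sum of job completion times when the
    jobs run in the given order; lower is better."""
    elapsed = total = 0
    for job in order:
        elapsed += durations[job]
        total += elapsed
    return total

def optimize_arrangement(durations, iters=5000, t0=20.0, cooling=0.999, seed=1):
    """Generate random local changes (pair swaps), score each change, and keep
    it if it improves the score or, with decreasing probability, even if not."""
    rng = random.Random(seed)
    order = list(range(len(durations)))
    cost = total_completion_time(order, durations)
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # random local change
        new_cost = total_completion_time(order, durations)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                           # keep the change
        else:
            order[i], order[j] = order[j], order[i]   # revert the change
        t *= cooling
    return order, cost

durations = [7, 2, 9, 4, 1, 6, 3]
order, cost = optimize_arrangement(durations)
```

For this particular score the best arrangement is shortest-job-first, so the procedure should settle on an order close to the jobs sorted by duration.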

162 citations


Cited by
Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: AutoDock Vina achieves an approximately two orders of magnitude speed-up compared with the molecular docking software previously developed in the lab, while also significantly improving the accuracy of the binding mode predictions, judging by tests on the training set used in AutoDock 4 development.
Abstract: AutoDock Vina, a new program for molecular docking and virtual screening, is presented. AutoDock Vina achieves an approximately two orders of magnitude speed-up compared with the molecular docking software previously developed in our lab (AutoDock 4), while also significantly improving the accuracy of the binding mode predictions, judging by our tests on the training set used in AutoDock 4 development. Further speed-up is achieved from parallelism, by using multithreading on multicore machines. AutoDock Vina automatically calculates the grid maps and clusters the results in a way transparent to the user.

20,059 citations

Journal ArticleDOI
TL;DR: An analogy is drawn between images and statistical mechanics systems; the annealing operation, applied under the posterior distribution, yields the maximum a posteriori (MAP) estimate of the image given the degraded observations, resulting in a highly parallel "relaxation" algorithm for MAP estimation.
Abstract: We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution, Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel "relaxation" algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.
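The restoration scheme described above can be sketched, in a much reduced form, as Metropolis annealing on a tiny binary image with an Ising-style smoothness term plus a data-fidelity term. The grid size, energy weights, and cooling schedule below are invented for illustration.

```python
import math
import random

def gibbs_energy(x, y, beta=1.0, lam=2.0):
    """Energy of labeling x given observation y: beta penalizes disagreeing
    4-neighbor pairs (smoothness), lam penalizes disagreement with the data."""
    n = len(x)
    e = 0.0
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                e += beta * (x[i][j] != x[i + 1][j])
            if j + 1 < n:
                e += beta * (x[i][j] != x[i][j + 1])
            e += lam * (x[i][j] != y[i][j])
    return e

def anneal_map(y, sweeps=30, t0=2.0, cooling=0.85, seed=0):
    """Metropolis sweeps with gradually lowered temperature, approaching the
    MAP labeling (the minimum-energy state) as T goes to zero."""
    rng = random.Random(seed)
    n = len(y)
    x = [row[:] for row in y]          # start from the degraded observation
    t = t0
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                before = gibbs_energy(x, y)
                x[i][j] ^= 1            # propose flipping one binary label
                delta = gibbs_energy(x, y) - before
                if delta > 0 and rng.random() >= math.exp(-delta / t):
                    x[i][j] ^= 1        # reject the flip
        t *= cooling
    return x

# A 4x4 all-ones image observed with a single flipped pixel.
y = [[1] * 4 for _ in range(4)]
y[1][2] = 0
x_map = anneal_map(y)
```

With these weights the smoothness saving for the isolated noisy pixel (4·beta) outweighs the data penalty for disagreeing with it (lam), so the low-temperature states restore the pixel to 1.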

18,761 citations

Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is described as a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty, and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, and speech recognition; in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
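The belief-network idea the blurb describes can be illustrated on a toy two-cause network of the burglary/earthquake/alarm kind often used to teach it. The structure and the probability numbers below are textbook-style inventions for illustration, not taken from the book.

```python
from itertools import product

# Toy belief network (numbers invented for illustration):
#   Burglary -> Alarm <- Earthquake
p_b = {True: 0.01, False: 0.99}                      # P(Burglary)
p_e = {True: 0.02, False: 0.98}                      # P(Earthquake)
p_a = {(True, True): 0.95, (True, False): 0.94,      # P(Alarm=true | B, E)
       (False, True): 0.29, (False, False): 0.001}

def p_burglary_given_alarm():
    """Exact inference by enumeration: P(Burglary=true | Alarm=true)."""
    num = den = 0.0
    for b, e in product([True, False], repeat=2):
        joint = p_b[b] * p_e[e] * p_a[(b, e)]        # P(b, e, Alarm=true)
        den += joint
        if b:
            num += joint
    return num / den

posterior = p_burglary_given_alarm()
```

Network-propagation algorithms of the kind the book develops compute the same posterior without enumerating every joint assignment, which is what makes inference tractable on larger networks.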

15,671 citations

Journal ArticleDOI
TL;DR: An overview of pattern clustering methods from a statistical pattern recognition perspective is presented, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners.
Abstract: Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities have made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overview of pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval.
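As a concrete instance of one family in such a taxonomy, the sketch below is a minimal k-means (a squared-error partitional method) with a simple farthest-point initialization; the data and parameters are invented for illustration.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its cluster; repeat."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    # Farthest-point initialization (deterministic).
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(d2(p, c) for c in centroids)))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: d2(p, centroids[i]))].append(p)
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]                 # keep a centroid whose cluster empties
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated blobs; k-means should place one centroid in each.
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (5.2, 4.9)]
centroids, clusters = kmeans(pts, k=2)
```

The same assign-then-update loop underlies the squared-error methods that surveys of this kind classify as partitional clustering.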

14,054 citations