
Showing papers by "IBM" published in 1983


Journal ArticleDOI
13 May 1983-Science
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
Abstract: There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.
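The annealing analogy summarized above reduces to a simple loop: propose a random move, always accept improvements, and accept worsenings with Boltzmann probability exp(-Δ/T) while slowly lowering T. The sketch below is a generic minimizer in that spirit; the cooling schedule, step size, and test function are illustrative choices, not taken from the paper.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.95,
                        steps_per_t=100, t_min=1e-3):
    """Minimize `cost` via the annealing analogy: accept uphill moves
    with probability exp(-delta/T), lowering T geometrically."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            y = neighbor(x)
            fy = cost(y)
            # Metropolis criterion: improvements always accepted,
            # worsenings accepted with Boltzmann probability.
            if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy usage: minimize a one-dimensional multimodal function.
random.seed(0)
f = lambda x: (x - 2) ** 2 + 3 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
x_best, f_best = simulated_annealing(f, step, x0=10.0)
```

The geometric schedule stands in for the "annealing schedule" of the paper; in practice its rate controls the trade-off between solution quality and run time.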

41,772 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the bond-orientational order in molecular-dynamics simulations of supercooled liquids and in models of metallic glasses and found that the order is predominantly icosahedral, although there is also a cubic component which they attribute to the periodic boundary conditions.
Abstract: Bond-orientational order in molecular-dynamics simulations of supercooled liquids and in models of metallic glasses is studied. Quadratic and third-order invariants formed from bond spherical harmonics allow quantitative measures of cluster symmetries in these systems. A state with short-range translational order, but extended correlations in the orientations of particle clusters, starts to develop about 10% below the equilibrium melting temperature in a supercooled Lennard-Jones liquid. The order is predominantly icosahedral, although there is also a cubic component which we attribute to the periodic boundary conditions. Results are obtained for liquids cooled in an icosahedral pair potential as well. Only a modest amount of orientational order appears in a relaxed Finney dense-random-packing model. In contrast, we find essentially perfect icosahedral bond correlations in alternative "amorphon" cluster models of glass structure.

2,832 citations


Journal ArticleDOI
Dexter Kozen1
TL;DR: A propositional μ-calculus Lμ is defined and studied, consisting essentially of propositional modal logic with a least-fixpoint operator; it is syntactically simpler yet strictly more expressive than Propositional Dynamic Logic (PDL).

1,946 citations


Journal ArticleDOI
TL;DR: This paper describes a number of statistical models for use in speech recognition, with special attention to determining the parameters for such models from sparse data, and describes two decoding methods, one appropriate for constrained artificial languages and one appropriate for more realistic decoding tasks.
Abstract: Speech recognition is formulated as a problem of maximum likelihood decoding. This formulation requires statistical models of the speech production process. In this paper, we describe a number of statistical models for use in speech recognition. We give special attention to determining the parameters for such models from sparse data. We also describe two decoding methods, one appropriate for constrained artificial languages and one appropriate for more realistic decoding tasks. To illustrate the usefulness of the methods described, we review a number of decoding results that have been obtained with them.
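The maximum-likelihood decoding formulation is typically realized with hidden Markov models. As an illustration only (the states, probabilities, and symbols below are invented, not the paper's models), a minimal Viterbi decoder finds the most likely hidden-state sequence for an observation string:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence under an HMM, in log domain."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor state for s at time t.
            prev, lp = max(
                ((r, V[t - 1][r] + math.log(trans_p[r][s])) for r in states),
                key=lambda pair: pair[1],
            )
            V[t][s] = lp + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    # Trace back from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Hypothetical two-state toy model over two symbols.
states = ("vowel", "cons")
start = {"vowel": 0.5, "cons": 0.5}
trans = {"vowel": {"vowel": 0.3, "cons": 0.7},
         "cons": {"vowel": 0.6, "cons": 0.4}}
emit = {"vowel": {"a": 0.8, "t": 0.2},
        "cons": {"a": 0.1, "t": 0.9}}
path = viterbi(("t", "a", "t"), states, start, trans, emit)
```

Real recognizers of this era layered such decoders over acoustic and language models far larger than this toy.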

1,637 citations


Journal ArticleDOI
A.J. Albrecht1, J.E. Gaffney
TL;DR: In this paper, the equivalence is demonstrated between Albrecht's external input/output data-flow representation of a program (the "function points" metric) and Halstead's [2] "software science" or "software linguistics" model of a program, as well as the "soft content" variation of Halstead's model suggested by Gaffney [7].
Abstract: One of the most important problems faced by software developers and users is the prediction of the size of a programming system and its development effort. As an alternative to "size," one might deal with a measure of the "function" that the software is to perform. Albrecht [1] has developed a methodology to estimate the amount of the "function" the software is to perform, in terms of the data it is to use (absorb) and to generate (produce). The "function" is quantified as "function points," essentially, a weighted sum of the numbers of "inputs," "outputs," "master files," and "inquiries" provided to, or generated by, the software. This paper demonstrates the equivalence between Albrecht's external input/output data-flow representation of a program (the "function points" metric) and Halstead's [2] "software science" or "software linguistics" model of a program as well as the "soft content" variation of Halstead's model suggested by Gaffney [7].
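The weighted sum described above is directly computable. The sketch below uses the average weights commonly cited for Albrecht's method (4, 5, 10, 4); treat the exact values as illustrative rather than as the paper's definitive calibration:

```python
def function_points(inputs, outputs, master_files, inquiries):
    """Unadjusted function points as a weighted sum of external data flows.
    Weights are the commonly cited average values (illustrative)."""
    weights = {"inputs": 4, "outputs": 5, "master_files": 10, "inquiries": 4}
    return (weights["inputs"] * inputs
            + weights["outputs"] * outputs
            + weights["master_files"] * master_files
            + weights["inquiries"] * inquiries)

# Hypothetical small system: 12 inputs, 8 outputs, 3 master files, 5 inquiries.
fp = function_points(inputs=12, outputs=8, master_files=3, inquiries=5)
```

In the full method, each count is further graded by complexity and the total is scaled by an adjustment factor; the flat weights here show only the core idea.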

1,560 citations


Journal ArticleDOI
TL;DR: In this paper, scanning tunneling microscopy observations of the Si(111) 7×7 reconstruction are reported; they strongly favor a modified adatom model with 12 adatoms per unit cell and an inhomogeneously relaxed underlying top layer.
Abstract: The 7×7 reconstruction on Si(111) was observed in real space by scanning tunneling microscopy. The experiment strongly favors a modified adatom model with 12 adatoms per unit cell and an inhomogeneously relaxed underlying top layer.

1,550 citations


Journal ArticleDOI
Philip Heidelberger1, Peter D. Welch1
TL;DR: A procedure based on Schruben's Brownian bridge model for detecting nonstationarity and a spectral method for estimating the variance of the sample mean are explored for estimating the steady-state mean of an output sequence from a discrete-event simulation.
Abstract: This paper studies the estimation of the steady state mean of an output sequence from a discrete event simulation. It considers the problem of the automatic generation of a confidence interval of prespecified width when there is an initial transient present. It explores a procedure based on Schruben's Brownian bridge model for the detection of nonstationarity and a spectral method for estimating the variance of the sample mean. The procedure is evaluated empirically for a variety of output sequences. The performance measures considered are bias, confidence interval coverage, mean confidence interval width, mean run length, and mean amount of deleted data. If the output sequence contains a strong transient, then inclusion of a test for stationarity in the run length control procedure results in point estimates with lower bias, narrower confidence intervals, and shorter run lengths than when no check for stationarity is performed. If the output sequence contains no initial transient, then the performance measures of the procedure with a stationarity test are only slightly degraded from those of the procedure without such a test. If the run length is short relative to the extent of the initial transient, the stationarity tests may not be powerful enough to detect the transient, resulting in a procedure with unreliable point and interval estimates.

1,237 citations


Journal ArticleDOI
TL;DR: A model of communications protocols based on finite-state machines is investigated; it is determined to what extent the problem is solvable, and one approach to solving it is described.
Abstract: A model of communications protocols based on finite-state machines is investigated. The problem addressed is how to ensure certain generally desirable properties, which make protocols "well-formed," that is, specify a response to those and only those events that can actually occur. It is determined to what extent the problem is solvable, and one approach to solving it is described.

1,153 citations


Journal ArticleDOI
TL;DR: A terminological framework is provided for describing different transaction-oriented recovery schemes for database systems in a conceptual rather than an implementation-dependent way by introducing the terms materialized database, propagation strategy, and checkpoint, and a means for classifying arbitrary implementations from a unified viewpoint.
Abstract: In this paper, a terminological framework is provided for describing different transaction-oriented recovery schemes for database systems in a conceptual rather than an implementation-dependent way. By introducing the terms materialized database, propagation strategy, and checkpoint, we obtain a means for classifying arbitrary implementations from a unified viewpoint. This is complemented by a classification scheme for logging techniques, which are precisely defined by using the other terms. It is shown that these criteria are related to all relevant questions such as speed and scope of recovery and amount of redundant information required. The primary purpose of this paper, however, is to establish an adequate and precise terminology for a topic in which the confusion of concepts and implementational aspects still imposes a lot of problems.

1,117 citations


Journal ArticleDOI
TL;DR: Applying far-ultraviolet light in short, intense pulses permitted control of the depth of the incision with great precision; 1 joule/cm² was found to ablate corneal tissue to a depth of 1 micron.

990 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that a ring of normal metal driven by an external magnetic flux acts like a Josephson junction, except that 2e is replaced by e.

Journal ArticleDOI
TL;DR: It is shown that this class of database schemes, called acyclic, has a number of desirable properties that have been studied by other researchers and are shown to be equivalent to acyclicity.
Abstract: A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations of a conflict-free (in the sense of Lien) set of multivalued dependencies are provided.

Journal ArticleDOI
TL;DR: This paper investigates a stochastic model for a software error detection process in which the growth curve of the number of detected software errors for the observed data is S-shaped.
Abstract: This paper investigates a stochastic model for a software error detection process in which the growth curve of the number of detected software errors for the observed data is S-shaped. The software error detection model is a nonhomogeneous Poisson process where the mean-value function has an S-shaped growth curve. The model is applied to actual software error data. Statistical inference on the unknown parameters is discussed. The model fits the observed data better than other models.
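A widely used S-shaped mean-value function for such NHPP error-detection models is the delayed S-shaped form m(t) = a(1 - (1 + bt)e^(-bt)); the sketch below assumes that form for illustration, and the paper's exact parametrization may differ:

```python
import math

def mean_detected_errors(t, a, b):
    """S-shaped NHPP mean-value function m(t) = a * (1 - (1 + b*t) * e^(-b*t)).
    a: expected total number of errors; b: error-detection rate.
    The form is an assumption for illustration."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

# The curve starts slowly, accelerates, then saturates toward a.
values = [mean_detected_errors(t, a=100, b=0.5) for t in (0, 2, 5, 20)]
```

The slow start reflects a learning phase early in testing, which is what distinguishes S-shaped models from the exponential growth curves of earlier reliability models.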

Journal ArticleDOI
TL;DR: In this article, the microscopic mechanisms responsible for both the formation and coupling of magnetic moments in Heusler alloys were identified, and it was shown that the $X$ atoms (e.g., Cu, Pd) serve primarily to determine the lattice constant, while the $Y$ atoms mediate the interaction between the $\mathrm{Mn}d$ states.
Abstract: The microscopic mechanisms responsible for both the formation and coupling of magnetic moments in Heusler alloys (${X}_{2}\mathrm{Mn}Y$) are identified. We find that the $X$ atoms (e.g., Cu, Pd) serve primarily to determine the lattice constant, while the $Y$ atoms (e.g., Al, In, Sb) mediate the interaction between the $\mathrm{Mn}d$ states. There is no significant direct interaction between the Mn atoms, but the occupied $d$ states of Mn are delocalized by their strong interaction with the $X$-atom $d$ states. The localized character of the magnetization results from the exclusion of minority-spin (defined locally) electrons from the $\mathrm{Mn}3d$ shell. The coupling between the localized magnetic Mn moments can be described with the Heisenberg Hamiltonian and the sign of the exchange constants results from a competition between the intra-atomic magnetic energy and interatomic $Y$-atom mediated covalent interactions between the $\mathrm{Mn}d$ states. These effects compete because the covalent mechanism is possible only for antiferromagnetic alignments, but necessarily reduces the magnitude of the local moments. The sensitive dependence of magnetic order on the occupation of the mediating $p$-$d$ hybrid states accounts well for experiments by Webster in which this occupation is varied by alloying. Our analysis is based on self-consistent, spin-polarized energy-band calculations for ${\mathrm{Co}}_{2}$MnAl, ${\mathrm{Co}}_{2}$MnSn, ${\mathrm{Ni}}_{2}$MnSn, ${\mathrm{Cu}}_{2}$MnAl, ${\mathrm{Cu}}_{2}$MnSn, ${\mathrm{Pd}}_{2}$MnIn, ${\mathrm{Pd}}_{2}$MnSn, and ${\mathrm{Pd}}_{2}$MnSb, for both ferromagnetic and antiferromagnetic spin alignments.

Journal ArticleDOI
TL;DR: In this article, a new theory of cluster expansions has been derived, which allows one, for the first time, to estimate the energy of a disordered system from first principles, and the cluster variables are derived from a series of density-functional calculations on ordered compounds.
Abstract: A new theory of cluster expansions has been derived, which allows one, for the first time, to estimate the energy of a disordered system from first principles. The cluster variables are derived from a series of density-functional calculations on ordered compounds. The disordering temperatures calculated with this theory show the correct trends for binary alloys of $4d$ transition metals, and are in excellent agreement with the experimental phase diagrams in most cases.

Journal ArticleDOI
Jorma Rissanen1
TL;DR: A universal data compression algorithm is described which is capable of compressing long strings generated by a "finitely generated" source, with a near optimum per symbol length without prior knowledge of the source.
Abstract: A universal data compression algorithm is described which is capable of compressing long strings generated by a "finitely generated" source, with a near optimum per symbol length without prior knowledge of the source. This class of sources may be viewed as a generalization of Markov sources to random fields. Moreover, the algorithm does not require a working storage much larger than that needed to describe the source generating parameters.

Journal ArticleDOI
TL;DR: The results indicate that cutting-planes related to the facets of the underlying polytope are an indispensable tool for the exact solution of this class of problem.
Abstract: In this paper we report on the solution to optimality of 10 large-scale zero-one linear programming problems. All problem data come from real-world industrial applications and are characterized by sparse constraint matrices with rational data. About half of the sample problems have no apparent special structure; the remainder show structural characteristics that our computational procedures do not exploit directly. By today's standards, our methodology produced impressive computational results, particularly on sparse problems having no apparent special structure. The computational results on problems with up to 2,750 variables strongly confirm our hypothesis that a combination of problem preprocessing, cutting planes, and clever branch-and-bound techniques permit the optimization of sparse large-scale zero-one linear programming problems, even those with no apparent special structure, in reasonable computation times. Our results indicate that cutting-planes related to the facets of the underlying polytope are an indispensable tool for the exact solution of this class of problem. To arrive at these conclusions, we designed an experimental computer system PIPX that uses the IBM linear programming system MPSX/370 and the IBM integer programming system MIP/370 as building blocks. The entire system is automatic and requires no manual intervention.

Journal ArticleDOI
A. X. Widmer1, P. A. Franaszek1
TL;DR: The proposed transmission code translates each source byte into a constrained 10-bit binary sequence which has excellent performance parameters near the theoretical limits for 8B/10B codes.
Abstract: This paper describes a byte-oriented binary transmission code and its implementation. This code is particularly well suited for high-speed local area networks and similar data links, where the information format consists of packets, variable in length, from about a dozen up to several hundred 8-bit bytes. The proposed transmission code translates each source byte into a constrained 10-bit binary sequence which has excellent performance parameters near the theoretical limits for 8B/10B codes. The maximum run length is 5 and the maximum digital sum variation is 6. A single error in the encoded bits can, at most, generate an error burst of length 5 in the decoded domain. A very simple implementation of the code has been accomplished by partitioning the coder into 5B/6B and 3B/4B subordinate coders.
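The two constraints quoted above (maximum run length 5, maximum digital sum variation 6) can be checked mechanically for any bit stream. The sketch below is a verifier for those properties, not the 8B/10B coder itself, and the example sequence is hypothetical rather than an actual code word:

```python
def run_length_and_dsv(bits):
    """Measure the two line-code constraints: the longest run of identical
    bits, and the digital sum variation (range of the running +1/-1
    disparity). Valid 8B/10B streams bound these at 5 and 6 respectively."""
    max_run, run = 1, 1
    running, lo, hi = 0, 0, 0
    for i, b in enumerate(bits):
        if i > 0:
            run = run + 1 if b == bits[i - 1] else 1
            max_run = max(max_run, run)
        running += 1 if b else -1      # 1 -> +1, 0 -> -1 disparity
        lo, hi = min(lo, running), max(hi, running)
    return max_run, hi - lo

# Example: a balanced 10-bit sequence (hypothetical, not a real code word).
mr, dsv = run_length_and_dsv([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
```

Bounding the run length guarantees frequent transitions for clock recovery, and bounding the disparity keeps the stream DC-balanced, which is why these are the two figures of merit the abstract reports.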

Proceedings ArticleDOI
John D. Gould1, Clayton Lewis1
12 Dec 1983
TL;DR: Gould and Lewis present theoretical considerations and empirical data relevant to attaining these goals, along with survey results demonstrating that their principles are not really all that obvious, but just seem obvious once presented.
Abstract: Any system designed for people to use should be (a) easy to learn; (b) useful, i.e., contain functions people really need in their work; (c) easy to use; and (d) pleasant to use. In this note we present theoretical considerations and empirical data relevant to attaining these goals. First, we mention four principles for system design which we believe are necessary to attain these goals; Then we present survey results that demonstrate that our principles are not really all that obvious, but just seem obvious once presented. The responses of designers suggest they may sometimes think they are doing what we recommend when in fact they are not. This is consistent with the experience that systems designers do not often recommend or use them themselves. We contrast some of these responses with what we have in mind in order to provide a more useful description of our principles. Lastly, we consider why this might be so. These sections are summaries of those in a longer paper to appear elsewhere (Gould & Lewis, 1983). In that paper we elaborate on our four principles, showing how they form the basis for a general methodology of design, and we describe a successful example of using them in actual system design (IBM's Audio Distribution System).

Journal ArticleDOI
Gerd Binnig1, Heinrich Rohrer1
TL;DR: In this article, surface topographies in real space and work-function profiles on an atomic scale were obtained using scanning tunneling microscopy, a novel technique based on vacuum tunneling.

Journal ArticleDOI
TL;DR: In this paper, the eigenstates of an isolated quantum well subject to an external electric field are analyzed, and a quadratic Stark shift is found whose magnitude depends strongly on the finite well depth.
Abstract: We present variational calculations of the eigenstates in an isolated-quantum-well structure subjected to an external electric field. At weak fields a quadratic Stark shift is found whose magnitude depends strongly on the finite well depth. In addition, the electric field induces a spatial shift of the particle wave function along or opposite to the field direction, depending on the sign of the particle mass. This field-induced spatial separation of conduction and valence electrons in GaAs quantum wells decreases the overlap between their associated wave functions, leading to a reduction of interband recombination.

Journal ArticleDOI
Gerald Burns1, F. H. Dacol1
TL;DR: In this paper, the optic index of refraction as a function of temperature was measured in the crystalline ferroelectrics having the simple perovskite structure and it was shown that these crystals possess a local, randomly oriented, nonreversible polarization below a temperature several hundred degrees above the ferroelectric phase-transition temperature.
Abstract: We report measurements of the optic index of refraction as a function of temperature, $n (T)$, in the crystalline ferroelectrics having the simple perovskite ($\mathrm{AB}{\mathrm{O}}_{3}$) structure. We show that these crystals possess a local, randomly oriented, nonreversible polarization below a temperature ${T}_{d}$ several hundred degrees above the ferroelectric phase-transition temperature ${T}_{c}$. Using a simple model, we account for this behavior and understand quantitatively the values of ${T}_{d}$. This model, we believe, contains the basic physical understanding of ferroelectrics with a diffuse phase transition.

Journal ArticleDOI
Gerald Burns1, F. H. Dacol1
TL;DR: In this article, the temperature dependence of the optic index of refraction, n(T), at several wavelengths was investigated in two ferroelectric compounds that have the simple perovskite ABO3 structure.

Journal ArticleDOI
E.M. Genies, G. Bidan, A.F. Diaz1
TL;DR: The chronoabsorptometric results show that polypyrrole film grows linearly with time t, and not as t^(1/2), which is consistent with the slow step in the growth process being a radical-coupling step and not the diffusion of pyrrole to the electrode surface.

Journal ArticleDOI
TL;DR: In this article, a short historical perspective and survey of the frequency modulation spectroscopy work performed to date is presented, and theoretical lineshapes for a variety of experimental conditions are given.
Abstract: Frequency modulation (FM) spectroscopy is a new method of optical heterodyne spectroscopy capable of sensitive and rapid measurement of the absorption or dispersion associated with narrow spectral features. The absorption or dispersion is measured by detecting the heterodyne beat signal that occurs when the FM optical spectrum of the probe wave is distorted by the spectral feature of interest. A short historical perspective and survey of the FM spectroscopy work performed to date is presented. Expressions describing the nature of the beat signal are derived. Theoretical lineshapes for a variety of experimental conditions are given. A signal-to-noise analysis is carried out to determine the ultimate sensitivity limits.

Journal ArticleDOI
Markus Büttiker1
TL;DR: In this article, the Larmor precession is used as a clock to measure the time it takes a particle to traverse a barrier, leading to three characteristic times describing the interaction of particles with a barrier.
Abstract: Baz' and Rybachenko have proposed the use of the Larmor precession as a clock to measure the time it takes a particle to traverse a barrier. An applied magnetic field is confined to the barrier. The spin of the incident particles is polarized perpendicular to this field. The extent of the Larmor precession occurring during transmission is used as a measurement of the time spent traversing the barrier. However, the particles tunneling through an opaque barrier also acquire a spin component parallel to the field since particles with spin parallel to the field have a higher transmission probability than particles with spin antiparallel to the field. Similar effects are actually used to polarize electrons and neutrons. An interpretation of this experiment compares the results with an approach which determines the traversal time by studying transmission of particles through a time-modulated barrier. This leads to three characteristic times describing the interaction of particles with a barrier. A dwell time measures the average time interval during which a particle interacts with the barrier whether it is reflected or transmitted at the end of its stay, a traversal time measures the time interval during which a particle interacts with the barrier if it is finally transmitted, and a reflection time measures the interaction time of a reflected particle.

Journal ArticleDOI
Ronald Fagin1
TL;DR: Various desirable properties of database schemes are considered, and it is shown that they fall into several equivalence classes, each completely characterized by the degree of acyclicity of the scheme.
Abstract: Database schemes (which, intuitively, are collections of table skeletons) can be viewed as hypergraphs. (A hypergraph is a generalization of an ordinary undirected graph, such that an edge need not contain exactly two nodes, but can instead contain an arbitrary nonzero number of nodes.) A class of "acyclic" database schemes was recently introduced. A number of basic desirable properties of database schemes have been shown to be equivalent to acyclicity. This shows the naturalness of the concept. However, unlike the situation for ordinary, undirected graphs, there are several natural, nonequivalent notions of acyclicity for hypergraphs (and hence for database schemes). Various desirable properties of database schemes are considered, and it is shown that they fall into several equivalence classes, each completely characterized by the degree of acyclicity of the scheme. The results are also of interest from a purely graph-theoretic viewpoint. The original notion of acyclicity has the counterintuitive property that a subhypergraph of an acyclic hypergraph can be cyclic. This strange behavior does not occur for the new degrees of acyclicity that are considered.

Journal ArticleDOI
TL;DR: It is shown that all true deadlocks are detected and that no false deadlocks are reported; the algorithms can be applied in distributed database and other message communication systems.
Abstract: Distributed deadlock models are presented for resource and communication deadlocks. Simple distributed algorithms for detection of these deadlocks are given. We show that all true deadlocks are detected and that no false deadlocks are reported. In our algorithms, no process maintains global information; all messages have an identical short length. The algorithms can be applied in distributed database and other message communication systems.
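The paper's algorithms are distributed and probe-based, with no process holding global state. As a centralized illustration of the condition they detect, a resource deadlock corresponds to a cycle in the wait-for graph:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph (process -> set of processes it
    waits on). A cycle indicates a resource deadlock. This is a centralized
    illustration only; the paper's algorithms avoid any global view."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY                     # on the current DFS path
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:
                return True                 # back edge: cycle found
            if color.get(q, WHITE) == WHITE and q in wait_for and dfs(q):
                return True
        color[p] = BLACK                    # fully explored, no cycle via p
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# Hypothetical processes: P1 waits on P2, P2 on P3, P3 on P1 -> deadlock.
deadlocked = has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}})
```

The distributed versions in the paper establish the same property (every true cycle is found, no false cycle is reported) using only short messages between neighbors.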

Journal ArticleDOI
C.H. Stapper1, F.M. Armstrong1, K. Saji1
01 Apr 1983
TL;DR: In this paper, the random failure statistics for the yield of mass-produced semiconductor integrated circuits are derived by considering defect and fault formation during the manufacturing process, which allows the development of a yield theory that includes many models that have been used previously and also results in a practical control model for integrated circuit manufacturing.
Abstract: The random failure statistics for the yield of mass-produced semiconductor integrated circuits are derived by considering defect and fault formation during the manufacturing process. This approach allows the development of a yield theory that includes many models that have been used previously and also results in a practical control model for integrated circuit manufacturing. Some simpler formulations of yield theory that have been described in the literature are compared to the model. Applications of the model to yield management are discussed and examples are given.
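A yield formula commonly associated with this line of work is the negative-binomial (clustered-defect) model, Y = (1 + A·D0/α)^(-α); the sketch below assumes that form for illustration, not as the paper's exact control model:

```python
import math

def yield_negative_binomial(area, defect_density, alpha):
    """Random-defect yield with defect clustering:
        Y = (1 + A * D0 / alpha) ** (-alpha)
    area: chip area A; defect_density: D0 (defects per unit area);
    alpha: clustering parameter. As alpha -> infinity this approaches
    the Poisson yield exp(-A * D0). Form assumed for illustration."""
    return (1.0 + area * defect_density / alpha) ** (-alpha)

# Hypothetical chip: A = 0.5 cm^2, D0 = 1 defect/cm^2, alpha = 2.
y_clustered = yield_negative_binomial(area=0.5, defect_density=1.0, alpha=2.0)
y_poisson = math.exp(-0.5 * 1.0)
```

Clustered defects raise the predicted yield relative to the Poisson model for the same average defect density, because defects landing on the same chip waste fewer chips overall.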

Journal ArticleDOI
TL;DR: A study of short-term turnaround attempts by mature industrial-product business units found that efficiency-oriented moves, but not entrepreneurial initiatives, were associated with successful turn-around.
Abstract: A study of short-term turnaround attempts by mature industrial-product business units found that efficiency-oriented moves, but not entrepreneurial initiatives, were associated with successful turnaround.