
Showing papers by "IBM" published in 1984


Journal ArticleDOI
Scott Kirkpatrick1
TL;DR: Experimental studies of the simulated annealing method and of its computational efficiency when applied to graph partitioning and traveling salesman problems are presented.
Abstract: Simulated annealing is a stochastic optimization procedure which is widely applicable and has been found effective in several problems arising in computer-aided circuit design. This paper derives the method in the context of traditional optimization heuristics and presents experimental studies of its computational efficiency when applied to graph partitioning and traveling salesman problems.
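
The core loop the abstract describes (randomized moves, with uphill moves occasionally accepted under a falling temperature) is easy to sketch. A minimal illustration in Python, assuming a geometric cooling schedule and a 2-opt style move on a toy traveling-salesman instance; none of these choices are taken from the paper:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=20000):
    """Generic simulated annealing: accept uphill moves with probability
    exp(-delta/T), lowering T geometrically (one common cooling schedule)."""
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy traveling-salesman instance: tour cost over random cities,
# neighbor = reverse a random segment (2-opt style move).
cities = [(random.random(), random.random()) for _ in range(30)]

def tour_cost(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def reverse_segment(tour):
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

tour, cost_val = simulated_annealing(tour_cost, reverse_segment, list(range(30)))
```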

1,808 citations


Journal ArticleDOI
01 Apr 1984-Nature
TL;DR: In this article, a new method, slit island analysis, is introduced to estimate the fractal dimension, D. The estimate is shown to agree with the value obtained by fracture profile analysis, a spectral method.
Abstract: When a piece of metal is fractured either by tensile or impact loading (pulling or hitting), the fracture surface that is formed is rough and irregular. Its shape is affected by the metal's microstructure (such as grains, inclusions and precipitates, whose characteristic length is large relative to the atomic scale), as well as by ‘macrostructural’ influences (such as the size, the shape of the specimen, and the notch from which the fracture begins). However, repeated observation at various magnifications also reveals a variety of additional structures that fall between the ‘micro’ and the ‘macro’ and have not yet been described satisfactorily in a systematic manner. The experiments reported here reveal the existence of a broad and clearly distinct zone of intermediate scales in which the structure is modelled very well by a fractal surface. A new method, slit island analysis, is introduced to estimate the basic quantity called the fractal dimension, D. The estimate is shown to agree with the value obtained by fracture profile analysis, a spectral method. Finally, D is shown to be a measure of toughness in metals.
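
As a rough illustration of the slit-island idea, the fractal dimension of an island boundary can be estimated from the perimeter-area relation P proportional to A^(D/2); the sketch below assumes that relation and regresses on synthetic island measurements, not on data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
D_true = 1.28                                  # assumed boundary dimension, demo only
areas = rng.uniform(1.0, 1e4, 200)             # stand-in island areas
perims = areas ** (D_true / 2) * rng.lognormal(0, 0.05, 200)

# For islands bounded by a fractal coastline of dimension D, perimeter scales
# as area**(D/2); regressing log P on log A recovers D as twice the slope.
slope, _ = np.polyfit(np.log(areas), np.log(perims), 1)
print("estimated D:", 2 * slope)
```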

1,651 citations


Journal ArticleDOI
TL;DR: It is shown that the FM capture phenomenon with slotted ALOHA greatly improves the expected progress over the system without capture due to the more limited area of possibly interfering terminals around the receiver.
Abstract: In multihop packet radio networks with randomly distributed terminals, the optimal transmission radii to maximize the expected progress of packets in desired directions are determined with a variety of transmission protocols and network configurations. It is shown that the FM capture phenomenon with slotted ALOHA greatly improves the expected progress over the system without capture due to the more limited area of possibly interfering terminals around the receiver. The (mini)slotted nonpersistent carrier-sense-multiple-access (CSMA) only slightly outperforms ALOHA, unlike the single-hop case (where a large improvement is available), because of a large area of "hidden" terminals and the long vulnerable period generated by them. As an example of an inhomogeneous terminal distribution, the effect of a gap in an otherwise randomly distributed terminal population on the expected progress of packets crossing the gap is considered. In this case, the disadvantage of using a large transmission radius is demonstrated.

1,367 citations


Journal ArticleDOI
TL;DR: A formal approach to the synthesis of compliant-motion strategies from geometric descriptions of assembly operations and explicit estimates of errors in sensing and control is described.
Abstract: Active compliance enables robots to carry out tasks in the presence of significant sensing and control errors. Compliant motions are quite difficult for humans to specify, however. Furthermore, robot programs are quite sensitive to details of geometry and to error characteristics and must, therefore, be constructed anew for each task. These factors motivate the search for automatic synthesis tools for robot programming, especially for compliant motion. This paper describes a formal approach to the synthesis of compliant-motion strategies from geometric descriptions of assembly operations and explicit estimates of errors in sensing and control. A key aspect of the approach is that it provides criteria for correctness of compliant-motion strategies.

825 citations


Journal ArticleDOI
J. F. Kelley1
TL;DR: A six-step, iterative, empirical human factors design methodology was used to develop CAL, a natural language computer application to help computer-naive business professionals manage their personal calendars.
Abstract: A six-step, iterative, empirical human factors design methodology was used to develop CAL, a natural language computer application to help computer-naive business professionals manage their personal calendars. Input language is processed by a simple, nonparsing algorithm with limited storage requirements and a quick response time. CAL allows unconstrained English inputs from users with no training (except for a five-minute introduction to the keyboard and display) and no manual (except for a two-page overview of the system). In a controlled test of performance, CAL correctly responded to between 86 percent and 97 percent of the storage and retrieval requests it received, according to various criteria. This level of performance could never have been achieved with such a simple processing model were it not for the empirical approach used in the development of the program and its dictionaries. The tools of the engineering psychologist are clearly invaluable in the development of user-friendly software, if that software is to accommodate the unruly language of computer-naive, first-time users. The key is to elicit the cooperation of such users as partners in an iterative, empirical development process.

684 citations


Journal ArticleDOI
TL;DR: A series of twenty homogeneous $\mathrm{Sr}_{1-x}\mathrm{Ca}_x\mathrm{TiO}_3$ mixed crystals has been measured dielectrically between 4.2 and 300 K. In the tetragonal phase, the dielectric constant perpendicular to the $c$ axis becomes peaked above $x_c = 0.0018$, the quantum mechanical onset for displacive ferroelectricity, as mentioned in this paper.
Abstract: A series of twenty homogeneous $\mathrm{Sr}_{1-x}\mathrm{Ca}_x\mathrm{TiO}_3$ mixed crystals has been measured dielectrically between 4.2 and 300 K. In the tetragonal phase, the dielectric constant perpendicular to the $c$ axis becomes peaked above $x_c = 0.0018$, the quantum mechanical onset for displacive ferroelectricity. The polarization $\perp c$ can be switched between the two equivalent $a$ axes, i.e., the system is an $XY$, $n = 2$, quantum ferroelectric. Above $x_r = 0.016 \pm 0.002$, the $\epsilon(T)$ peaks round in a distinct manner which we attribute to the onset of a random-field-induced domain state.

652 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review the concept of multifractal sets in both turbulent flows and dynamical systems using a generalisation of the beta-model and propose that the energy dissipation of three-dimensional turbulent flow is concentrated on a set with non-integer Hausdorff dimension.
Abstract: It is generally argued that the energy dissipation of three-dimensional turbulent flow is concentrated on a set with non-integer Hausdorff dimension. Recently, in order to explain experimental data, it has been proposed that this set does not possess a global dilatation invariance: it can be considered to be a multifractal set. The authors review the concept of multifractal sets in both turbulent flows and dynamical systems using a generalisation of the beta-model.

650 citations


Journal ArticleDOI
Glen G. Langdon1
TL;DR: This paper presents the key notions of arithmetic compression coding by means of simple examples and describes how the encoder must have successively partitioned and retained each nested subinterval.
Abstract: Arithmetic coding is a data compression technique that encodes data (the data string) by creating a code string which represents a fractional value on the number line between 0 and 1. The coding algorithm is symbolwise recursive; i.e., it operates upon and encodes (decodes) one data symbol per iteration or recursion. On each recursion, the algorithm successively partitions an interval of the number line between 0 and 1, and retains one of the partitions as the new interval. Thus, the algorithm successively deals with smaller intervals, and the code string, viewed as a magnitude, lies in each of the nested intervals. The data string is recovered by using magnitude comparisons on the code string to recreate how the encoder must have successively partitioned and retained each nested subinterval. Arithmetic coding differs considerably from the more familiar compression coding techniques, such as prefix (Huffman) codes. Also, it should not be confused with error control coding, whose object is to detect and correct errors in computer operations. This paper presents the key notions of arithmetic compression coding by means of simple examples.
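
A toy encoder/decoder makes the nested-interval idea concrete. This sketch uses exact rational arithmetic and a fixed three-symbol model for clarity; practical arithmetic coders, including those the paper's examples describe, work incrementally with finite-precision integer arithmetic:

```python
# Toy arithmetic coder: exact fractions sidestep floating-point rounding so
# the sketch can focus on the nested-interval idea alone.
from fractions import Fraction

probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

def cumulative(model):
    lo, table = Fraction(0), {}
    for sym, p in model.items():
        table[sym] = (lo, lo + p)
        lo += p
    return table

def encode(data, model):
    table = cumulative(model)
    low, width = Fraction(0), Fraction(1)
    for sym in data:                      # shrink [low, low+width) per symbol
        s_lo, s_hi = table[sym]
        low, width = low + width * s_lo, width * (s_hi - s_lo)
    return low, low + width              # any value in this interval codes the string

def decode(value, n, model):
    table = cumulative(model)
    out = []
    for _ in range(n):                    # magnitude comparisons recover symbols
        for sym, (s_lo, s_hi) in table.items():
            if s_lo <= value < s_hi:
                out.append(sym)
                value = (value - s_lo) / (s_hi - s_lo)   # rescale to [0, 1)
                break
    return "".join(out)

lo, hi = encode("abca", probs)
assert decode(lo, 4, probs) == "abca"
```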

602 citations


Journal ArticleDOI
Mitsuru Ohba1
TL;DR: Improvements to conventional software reliability analysis models by making the assumptions on which they are based more realistic are discussed, including the delayed S-shaped growth model, the inflection S- shaped model, and the hyperexponential model.
Abstract: This paper discusses improvements to conventional software reliability analysis models by making the assumptions on which they are based more realistic. In an actual project environment, sometimes no more information is available than reliability data obtained from a test report. The models described here are designed to resolve the problems caused by this constraint on the availability of reliability data. By utilizing the technical knowledge about a program, a test, and test data, we can select an appropriate software reliability analysis model for accurate quality assessment. The delayed S-shaped growth model, the inflection S-shaped model, and the hyperexponential model are proposed.
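
For orientation, the commonly cited mean-value functions associated with these three model families are sketched below; the parameter names and values are illustrative, not taken from the paper:

```python
import math

# Commonly cited mean-value functions (expected faults detected by time t):
# a = expected total faults, b = fault-detection rate, psi = inflection factor.

def delayed_s_shaped(t, a, b):
    return a * (1 - (1 + b * t) * math.exp(-b * t))

def inflection_s_shaped(t, a, b, psi):
    return a * (1 - math.exp(-b * t)) / (1 + psi * math.exp(-b * t))

def hyperexponential(t, a, weights, rates):
    return a * sum(p * (1 - math.exp(-b * t)) for p, b in zip(weights, rates))

# Expected faults found after 10 time units under each model:
print(delayed_s_shaped(10, a=100, b=0.3))
print(inflection_s_shaped(10, a=100, b=0.3, psi=2.0))
print(hyperexponential(10, a=100, weights=[0.6, 0.4], rates=[0.5, 0.05]))
```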

596 citations


Journal ArticleDOI
C. L. Chen1, M. Y. Hsiao1
TL;DR: The construction of four classes of error-correcting codes appropriate for semiconductor memory designs is described, and for each class of codes the number of check bits required for commonly used data lengths is provided.
Abstract: This paper presents a state-of-the-art review of error-correcting codes for computer semiconductor memory applications. The construction of four classes of error-correcting codes appropriate for semiconductor memory designs is described, and for each class of codes the number of check bits required for commonly used data lengths is provided. The implementation aspects of error correction and error detection are also discussed, and certain algorithms useful in extending the error-correcting capability for the correction of soft errors such as α-particle-induced errors are examined in some detail.
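
As a hedged illustration of the check-bit accounting for the simplest such class, the sketch below applies the standard Hamming bound 2^r >= k + r + 1 for single-error correction (SEC), plus one extra bit for double-error detection (SEC-DED); the paper's four code classes go beyond this:

```python
# Check bits needed for a Hamming-style SEC code over k data bits, from the
# bound 2**r >= k + r + 1; SEC-DED adds one overall parity bit.

def sec_check_bits(k):
    r = 1
    while 2 ** r < k + r + 1:
        r += 1
    return r

for k in (8, 16, 32, 64):
    r = sec_check_bits(k)
    print(f"{k} data bits: SEC needs {r} check bits, SEC-DED needs {r + 1}")
```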

589 citations


Journal ArticleDOI
TL;DR: This article presented des resultats de calculs de structure de bandes en liaison forte pour une chaine polypyrrole deformable, en fonction du dopage.
Abstract: Presentation des resultats de calculs de structure de bandes en liaison forte pour une chaine polypyrrole deformable, en fonction du dopage. Les etats qui sont produits sont des polarons et des bipolarons dont le spectre d'excitation explique les donnees d'absorption optique

Journal ArticleDOI
Bruno Ricco1, M. Ya. Azbel1
TL;DR: In this article, the effect of an externally applied electric field is considered, and it is shown that with fully symmetrical barriers it leads to weaker resonances than otherwise possible, thus contributing to a reduction of resonance effects on the usual experimental time scale.
Abstract: In this work we discuss how the occurrence of resonant tunneling through a one-dimensional (1D) double barrier involves some interesting phenomena which have so far been overlooked. The effect of an externally applied electric field is considered, and it is shown that with fully symmetrical barriers it leads to weaker resonances than otherwise possible. Furthermore, the time required for resonance to be fully established is discussed, and it is shown that, depending on the barrier transmission coefficients and experimental conditions, it can be exceedingly long, thus contributing to a reduction of resonance effects on the usual experimental time scale. We also show that resonant tunneling under the usual experimental conditions implies carrier trapping, hence a buildup of space charge available for modifying the potential-energy barrier. Different current behaviors then result from the inherent feedback mechanism. The effects of temperature on the measured current are finally discussed.

Proceedings ArticleDOI
L. Heller1, W. Griffin, J. Davis, N. Thoma
01 Jan 1984
TL;DR: A differential CMOS Logic family that is well suited to automated logic minimization and placement and routing techniques, yet has comparable performance to conventional CMOS, will be described.
Abstract: A differential CMOS Logic family that is well suited to automated logic minimization and placement and routing techniques, yet has comparable performance to conventional CMOS, will be described. A CMOS circuit using 10,880 NMOS differential pairs has been developed using this approach.

Journal ArticleDOI
TL;DR: It is possible, therefore, that a minimum mean fourth error algorithm can do a better job of least squares estimation than a mean square error algorithm.
Abstract: New steepest descent algorithms for adaptive filtering have been devised which allow error minimization in the mean fourth and mean sixth, etc., sense. During adaptation, the weights undergo exponential relaxation toward their optimal solutions. Time constants have been derived, and surprisingly they turn out to be proportional to the time constants that would have been obtained if the steepest descent least mean square (LMS) algorithm of Widrow and Hoff had been used. The new gradient algorithms are insignificantly more complicated to program and to compute than the LMS algorithm. Their general form is $W_{j+1} = W_j + 2\mu K \epsilon_j^{2K-1} X_j$, where $W_j$ is the present weight vector, $W_{j+1}$ is the next weight vector, $\epsilon_j$ is the present error, $X_j$ is the present input vector, $\mu$ is a constant controlling stability and rate of convergence, and $2K$ is the exponent of the error being minimized. Conditions have been derived for weight-vector convergence of the mean and of the variance for the new gradient algorithms. The behavior of the least mean fourth (LMF) algorithm is of special interest. In comparing this algorithm to the LMS algorithm, when both are set to have exactly the same time constants for the weight relaxation process, the LMF algorithm, under some circumstances, will have a substantially lower weight noise than the LMS algorithm. It is possible, therefore, that a minimum mean fourth error algorithm can do a better job of least squares estimation than a mean square error algorithm. This intriguing concept has implications for all forms of adaptive algorithms, whether they are based on steepest descent or otherwise.
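
A minimal sketch of the general update rule quoted above with $K = 2$ (the LMF case), applied to a toy system-identification problem; the plant, noise level, and step size are illustrative choices, not the paper's:

```python
import numpy as np

# LMF update: W_{j+1} = W_j + 2*mu*K*eps_j**(2K-1) * X_j with K = 2.
# Setting K = 1 recovers the familiar LMS algorithm.
rng = np.random.default_rng(0)
n_taps, mu, K = 4, 1e-3, 2
w_true = np.array([1.0, -0.5, 0.25, 0.1])        # unknown system to identify
w = np.zeros(n_taps)

for _ in range(50_000):
    x = rng.standard_normal(n_taps)               # input vector X_j
    d = w_true @ x + 0.1 * rng.standard_normal()  # desired response plus noise
    eps = d - w @ x                               # present error eps_j
    w += 2 * mu * K * eps ** (2 * K - 1) * x      # gradient step on E[eps^(2K)]

print(w)   # should approach w_true
```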

Journal ArticleDOI
01 Jul 1984-Chest
TL;DR: The Memorial Sloan-Kettering lung cancer screening program was begun in 1974 to evaluate sputum cytology as a supplement to the annual chest x-ray examination for early detection and diagnosis, and it is suggested that the squamous carcinomas detected by cytologic examination alone are very slow growing and tend to remain localized until detectable by x-rays.

Journal ArticleDOI
J. E. Fromm1
TL;DR: In this article, a numerical method that makes use of the complete incompressible flow equations with a free surface is discussed and used to study an impulsively driven laminar jet.
Abstract: A numerical method that makes use of the complete incompressible flow equations with a free surface is discussed and used to study an impulsively driven laminar jet. Flow behavior dependence upon fluid properties (characterized by a Reynolds number over Weber number nondimensionalization) is compared for drop integrity purposes. Several variations of square wave pressure history applied at a nozzle inlet are discussed in relation to drop velocities produced and structure of ejected drops. Timewise development of flow both interior and exterior to the nozzle is illustrated through computed contour sequences.

Journal ArticleDOI
Jenö Gazdag1, P. Sguazzero1
TL;DR: In this paper, the authors generalize the concepts of the phase shift method to media having lateral velocity variations, and show that the PSPI method is unconditionally stable and lends itself conveniently to migration of three-dimensional data.
Abstract: Under the horizontally layered velocity assumption, migration is defined by a set of independent ordinary differential equations in the wavenumber‐frequency domain. The wave components are extrapolated downward by rotating their phases. This paper shows that one can generalize the concepts of the phase‐shift method to media having lateral velocity variations. The wave extrapolation procedure consists of two steps. In the first step, the wave field is extrapolated by the phase‐shift method using l laterally uniform velocity fields. The intermediate result is l reference wave fields. In the second step, the actual wave field is computed by interpolation from the reference wave fields. The phase shift plus interpolation (PSPI) method is unconditionally stable and lends itself conveniently to migration of three‐dimensional data. The performance of the methods is demonstrated on synthetic examples. The PSPI migration results are then compared with those obtained from a finite‐difference method.
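
A sketch of the basic phase-shift extrapolation step for a single laterally uniform velocity appears below; PSPI would repeat it for l reference velocities and interpolate, which is omitted here. The sign convention and the zeroing of evanescent energy are assumptions of this illustration:

```python
import numpy as np

def phase_shift_step(P, dx, dt, v, dz):
    """Extrapolate wavefield panel P(t, x) from depth z to z + dz by rotating
    phases: each (w, kx) component is multiplied by exp(-i*kz*dz), with kz
    from the dispersion relation kz**2 = (w/v)**2 - kx**2."""
    nt, nx = P.shape
    Pf = np.fft.fft2(P)                              # to (w, kx) domain
    w = 2 * np.pi * np.fft.fftfreq(nt, dt)
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    W, KX = np.meshgrid(w, kx, indexing="ij")
    kz2 = (W / v) ** 2 - KX ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))               # clamp evanescent region
    Pf *= np.where(kz2 > 0, np.exp(-1j * kz * dz), 0.0)
    return np.real(np.fft.ifft2(Pf))
```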

Journal ArticleDOI
TL;DR: In this paper, a constrained space orbital variation (CSOV) with the electrons of the metal member of the complex in the field of frozen ligand is performed, and the electrons are then frozen in the relaxed distribution given by the CSOV SCF wave function and the ligand electrons are allowed to relax.
Abstract: The nature of the bonding of CO and NH3 ligands to Al is analyzed, and the intra-unit charge polarization and inter-unit donation for the interaction of ligands with metals are studied. The consequences of metal-to-ligand and ligand-to-metal charge transfer are separately considered by performing a constrained space orbital variation (CSOV) with the electrons of the metal member of the complex in the field of frozen ligand. The electrons of the metal atoms are then frozen in the relaxed distribution given by the CSOV SCF wave function and the ligand electrons are allowed to relax. Quantitative measures of the importance of inter-unit charge transfers and intra-unit polarization are obtained using results of SCF studies of Al4CO and Al4NH3 clusters chosen to simulate the adsorption of the ligands at an on-top site of the Al(111) surface. The electrostatic attraction of the effective dipole moments of the metal and ligand units makes an important contribution to the bond.

Journal ArticleDOI
TL;DR: In this article, the energy levels of electrons in GaAs and their sensitivity to various parameters, including acceptor doping level in the GaAs, heterojunction barrier height, effective mass and dielectric-constant discontinuities, interface grading, and ambient temperature are examined.
Abstract: Calculated results for energy levels of electrons in GaAs-$\mathrm{Ga}_{1-x}\mathrm{Al}_x\mathrm{As}$ heterojunctions are presented and their sensitivity to various parameters, including acceptor doping level in the GaAs, heterojunction barrier height, effective-mass and dielectric-constant discontinuities, interface grading, and ambient temperature, is examined.

Journal ArticleDOI
Robert Earl Morrison1
TL;DR: A model with direct control and parallel programming of saccades is proposed to explain the data and eye movements in reading in general, and the results indicate that fixation duration is under direct control.
Abstract: On-line eye movement recording of 12 subjects who read short stories on a cathode ray tube enabled a test of direct control and preprogramming models of eye movements in reading. Contingent upon eye position, a mask was displayed in place of the letters in central vision after each saccade, delaying the onset of the stimulus in each eye fixation. The duration of the delay was manipulated in fixed or randomized blocks. Although the length of the delay strongly affected the duration of the fixations, there was no difference due to the conditions of delay manipulation, indicating that fixation duration is under direct control. However, not all fixations were lengthened by the period of the delay. Some ended while the mask was still present, suggesting they had been preprogrammed. But these "anticipation" eye movements could not have been completely determined before the fixation was processed because their fixation durations and saccade lengths were affected by the spatial extent of the mask, which varied randomly. Neither preprogramming nor existing serial direct control models of eye guidance can adequately account for these data. Instead, a model with direct control and parallel programming of saccades is proposed to explain the data and eye movements in reading in general.

Journal ArticleDOI
TL;DR: In this article, the electronic structure of the insulating antiferromagnetic transition-metal compounds MnO, FeO, CoO, and NiO is discussed with use of energy-band theory based on the local-spin-density treatment of exchange and correlation.
Abstract: The electronic structure of the insulating antiferromagnetic transition-metal compounds MnO, FeO, CoO, and NiO, which have been regarded as the prototypes of the concept of a Mott insulator, is discussed with use of energy-band theory based on the local-spin-density treatment of exchange and correlation. It is shown that the band structure is very sensitive to the magnetic ordering and that the ground-state magnetic ordering is special in the sense that it makes the $e_g$ ($x^2-y^2$, $3z^2-r^2$) band particularly narrow, which is crucial to the insulating nature of NiO. A detailed analysis is made of this particular aspect of the ground-state magnetic ordering. As for FeO and CoO, it is suggested that the population imbalance among the $t_{2g}$ ($xy$, $yz$, $zx$) orbitals induced by the intra-atomic exchange interaction may cause a gap to open at the Fermi level.

Journal ArticleDOI
Jeffrey M. Jaffe1
01 Mar 1984-Networks
TL;DR: Two approaches to meeting or approximating the length and weight constraints are given and an objective function is defined which evaluates a path's distance from meeting the constraints.
Abstract: Let $G = (V, E)$ be a graph with weight function $w: E \rightarrow Z^+$ and length function $l: E \rightarrow Z^+$. The problem of determining for $v_1, v_2 \in V$ whether there is a path from $v_1$ to $v_2$ with weight at most W and length at most L is NP-complete. This paper gives two approaches to meeting or approximating the length and weight constraints. The first approach is to use a pseudopolynomial-time algorithm which determines whether a path meets the constraints. Its running time is $O(n^5 b \log nb)$ where $n = |V|$ and b is the largest length or weight. If tables with $O(n^3 b)$ entries are kept, then all instances of multiple constraints may be decided. Table size may be substantially decreased if one is willing to tolerate incorrect answers to rare instances. The algorithm is suitable for distributed execution. In the second approach, an objective function is defined which evaluates a path's distance from meeting the constraints. Polynomial-time algorithms attempt to find good paths in terms of the objective function. One algorithm is at most 1.62 times worse than optimal. A notion of "average worst-case behavior" is defined. The algorithm's "average" behavior is 1.51 times worse than optimal.
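
A small illustration of the pseudopolynomial idea: search over (vertex, accumulated-weight) states, keeping the minimum length per state, which decides the constrained-path question in time polynomial in n and b. The graph and bounds are made up, and this is not necessarily the paper's exact algorithm:

```python
from collections import defaultdict
import heapq

def constrained_path(adj, s, t, W, L):
    """Is there an s-t path with total weight <= W and total length <= L?
    Dijkstra-style search over (vertex, weight-used) states."""
    best = defaultdict(lambda: float("inf"))   # (v, weight) -> min length
    heap = [(0, 0, s)]                          # (length, weight, vertex)
    while heap:
        length, weight, v = heapq.heappop(heap)
        if v == t and length <= L:
            return True
        if length > best[(v, weight)]:
            continue
        for u, w_e, l_e in adj[v]:              # edges as (neighbor, weight, length)
            nw, nl = weight + w_e, length + l_e
            if nw <= W and nl < best[(u, nw)]:
                best[(u, nw)] = nl
                heapq.heappush(heap, (nl, nw, u))
    return False

adj = {0: [(1, 2, 5), (2, 4, 1)], 1: [(3, 1, 1)], 2: [(3, 1, 10)], 3: []}
print(constrained_path(adj, 0, 3, W=4, L=7))   # True via 0->1->3 (w=3, l=6)
```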

Journal ArticleDOI
G. Baccarani1, M.R. Wordeman1, R.H. Dennard1
TL;DR: In this paper, a generalized scaling theory was proposed to allow for independent scaling of the FET physical dimensions and applied voltages, while still maintaining constant the shape of the electric field pattern.
Abstract: In this paper we present a generalized scaling theory which allows for an independent scaling of the FET physical dimensions and applied voltages, while still keeping the shape of the electric-field pattern constant. Thus two-dimensional effects are kept under control even though the intensity of the field is allowed to increase. The resulting design flexibility allows FET's with quarter-micrometer channel length to be designed for either room-temperature or liquid-nitrogen-temperature operation. The physical limitations of the scaling theory are then investigated in detail, leading to the conclusion that the limiting FET performances are not reached at the 0.25-µm channel length. Further improvements are possible in the future, provided certain technology breakthroughs are achieved.
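
For context, the classical constant-field (Dennard) scaling rules that this theory generalizes can be tabulated as below; the generalized theory of the paper scales voltages by a separate factor so the field intensity may grow, which this baseline does not capture:

```python
# Constant-field scaling by a factor k: dimensions and voltages shrink by 1/k
# and doping rises by k, so the electric-field pattern is unchanged.

def constant_field_scaling(k):
    return {
        "channel length": 1 / k,
        "oxide thickness": 1 / k,
        "supply voltage": 1 / k,
        "doping": k,
        "electric field": 1.0,
        "gate delay": 1 / k,
        "power density": 1.0,
    }

for quantity, factor in constant_field_scaling(5).items():   # e.g. 1.25 um -> 0.25 um
    print(f"{quantity}: x{factor:g}")
```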

Journal ArticleDOI
J.A. Barker1, Daniel J. Auerbach1
TL;DR: In this article, a review of recent advances in the use of thermal energy atomic and molecular beam methods to explore the dynamics of physical and chemical processes occurring at surfaces is presented.

Journal ArticleDOI
Edward N. Adams1
TL;DR: It is found that most of the benefit to be realized by preventive service comes from removing a relatively small number of high-rate defects that are found early in the service life of the code.
Abstract: The implementer of a large, complex software system cannot make it completely defect free, so he must normally provide fixes for defects found after the code is put into service. A system user may do preventive service by installing these fixes before the defects cause him problems. Preventive service can benefit both the software developer and the software user to the extent that it reduces the number of operational problems caused by software errors, but it requires the expenditure of the resources required to prepare, disseminate, and install fixes; and it can be the cause of additional software problems caused by design errors introduced into the code by fixes. The benefit from removing a given defect depends on how many problems it would otherwise cause. Benefits may be estimated by modeling problem occurrence as a random process in execution time governed by a distribution of characteristic rates. It is found that most of the benefit to be realized by preventive service comes from removing a relatively small number of high-rate defects that are found early in the service life of the code. For the typical user corrective service would seem preferable to preventive service as a way of dealing with most defects found after code has had some hundreds of months of usage.
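
A rough numeric illustration of the rate-mixture argument, with an assumed log-uniform spread of defect occurrence rates (the distribution is a placeholder, not the paper's fitted one):

```python
import numpy as np

# Defects have widely differing occurrence rates; problems arrive as Poisson
# processes in execution time, so removing a defect with rate r avoids about
# r * horizon expected problems. The few high-rate defects dominate.
rng = np.random.default_rng(1)
rates = 10.0 ** rng.uniform(-5, -1, 1000)   # problems per month, per defect
horizon = 100.0                             # months of execution

avoided = np.sort(rates)[::-1] * horizon    # expected problems avoided per fix
print(f"top 10% of defects give {avoided[:100].sum() / avoided.sum():.0%} of benefit")
```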

Journal ArticleDOI
TL;DR: A Delphi study, using leading information systems professionals, was used to identify and rank ten key IS management issues for the 1980's, and the number one issue identified was improved IS planning, followed by facilitation and management of end user computing.
Abstract: Knowledge of the most important issues in the information systems field would help focus research and educational efforts. A Delphi study, using leading information systems (IS) professionals, was used to identify and rank ten key IS management issues for the 1980's. Measures were also taken of the amount of agreement achieved on these issues and rankings. The number one issue identified was improved IS planning, followed by facilitation and management of end user computing. This article describes the research approach involved and discusses the results.

Journal ArticleDOI
C.H. Stapper1
TL;DR: This paper shows how to calculate the critical areas and probability of failure for dense arrays of wiring and the results are used to determine the nature of the defect size distribution with electronic defect monitors.
Abstract: In a previous paper by the same author the foundation was laid for the theory of photolithographic defects in integrated circuits. This paper expands on the earlier one and shows how to calculate the critical areas and probability of failure for dense arrays of wiring. The results are used to determine the nature of the defect size distribution with electronic defect monitors. Several statistical techniques for doing this are described and examples are given.

Journal ArticleDOI
TL;DR: In this article, a new phenomenological description of grain growth based on the concept of the variation in mobility of individual grain boundaries allows three characteristic regimes of growth to be distinguished, and a new model for grain growth allied with surface diffusion mechanisms can explain in a consistent manner the experimentally observed grain structures.

Journal ArticleDOI
Don Coppersmith1
TL;DR: The ideas give a dramatic improvement even for moderate-sized fields such as $GF(2^{127})$, and make (barely) possible computations in fields of size around $2^{400}$.
Abstract: A method for determining logarithms in $GF(2^n)$ is presented. Its asymptotic running time is $O(\exp(c n^{1/3} \log^{2/3} n))$ for a small constant c, while, by comparison, Adleman's scheme runs in time $O(\exp(c' n^{1/2} \log^{1/2} n))$. The ideas give a dramatic improvement even for moderate-sized fields such as $GF(2^{127})$, and make (barely) possible computations in fields of size around $2^{400}$. The method is not applicable to $GF(q)$ for a large prime q.
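
To see the scale of the improvement, one can compare the two asymptotic expressions directly; the constants c and c' are unknown, so setting both to 1 below only illustrates the growth-rate gap:

```python
import math

def coppersmith(n):  # exp(c * n^(1/3) * (log n)^(2/3)), with c = 1 assumed
    return math.exp(n ** (1 / 3) * math.log(n) ** (2 / 3))

def adleman(n):      # exp(c' * n^(1/2) * (log n)^(1/2)), with c' = 1 assumed
    return math.exp(n ** (1 / 2) * math.log(n) ** (1 / 2))

for n in (127, 400, 1000):
    print(n, f"{adleman(n) / coppersmith(n):.3g}x")   # ratio grows rapidly with n
```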

Journal ArticleDOI
TL;DR: This paper discusses the implementation of a database buffer manager as a component of a DBMS, the interface between calling components of higher system layers and the buffer manager is described, and the principal differences between virtual memory paging and database buffer management are outlined.
Abstract: This paper discusses the implementation of a database buffer manager as a component of a DBMS. The interface between calling components of higher system layers and the buffer manager is described; the principal differences between virtual memory paging and database buffer management are outlined; the notion of referencing versus addressing of database pages is introduced; and the concept of fixing pages in the buffer to prevent uncontrolled replacement is explained.Three basic tasks have to be performed by the buffer manager: buffer search, allocation of frames to concurrent transactions, and page replacement. For each of these tasks, implementation alternatives are discussed and illustrated by examples from a performance evaluation project of a CODASYL DBMS.
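
A minimal sketch of the fix/unfix interface and its interplay with replacement described above, with LRU replacement restricted to unfixed pages; the names and the stubbed disk layer are illustrative, and a real buffer manager adds transactions, dirty-page handling, and alternative replacement policies:

```python
from collections import OrderedDict

class BufferManager:
    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.frames = OrderedDict()        # page_id -> [fix_count, data]

    def fix(self, page_id):
        """Locate the page (buffer search), reading it in if needed, and pin
        it so the replacement policy cannot evict it."""
        if page_id not in self.frames:
            if len(self.frames) >= self.n_frames:
                self._evict()
            self.frames[page_id] = [0, self._read_from_disk(page_id)]
        self.frames.move_to_end(page_id)   # mark as most recently used
        self.frames[page_id][0] += 1
        return self.frames[page_id][1]

    def unfix(self, page_id):
        self.frames[page_id][0] -= 1       # page becomes evictable at count 0

    def _evict(self):
        for page_id, (fix_count, _) in self.frames.items():  # LRU order
            if fix_count == 0:
                del self.frames[page_id]
                return
        raise RuntimeError("all frames fixed")

    def _read_from_disk(self, page_id):
        return f"<contents of page {page_id}>"   # stub for the I/O layer

bm = BufferManager(n_frames=2)
page = bm.fix(1)
bm.unfix(1)
bm.fix(2)
bm.fix(3)   # pool full: page 1 (unfixed, least recently used) is evicted
```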