
Showing papers by "IBM published in 1987"


Journal ArticleDOI
Roger Y. Tsai1
01 Aug 1987
TL;DR: In this paper, a two-stage technique for 3D camera calibration using TV cameras and lenses is described, aimed at efficient computation of camera external position and orientation relative to object reference coordinate system as well as the effective focal length, radial lens distortion, and image scanning parameters.
Abstract: A new technique for three-dimensional (3D) camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses is described. The two-stage technique is aimed at efficient computation of camera external position and orientation relative to object reference coordinate system as well as the effective focal length, radial lens distortion, and image scanning parameters. The two-stage technique has advantage in terms of accuracy, speed, and versatility over existing state of the art. A critical review of the state of the art is given in the beginning. A theoretical framework is established, supported by comprehensive proof in five appendixes, and may pave the way for future research on 3D robotics vision. Test results using real data are described. Both accuracy and speed are reported. The experimental results are analyzed and compared with theoretical prediction. Recent effort indicates that with slight modification, the two-stage calibration can be done in real time.
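As a rough illustration of the camera model underlying such a calibration (a generic pinhole projection with one radial distortion term, not the paper's two-stage algorithm itself), the following sketch shows how a world point maps to pixels; the names f, k1, R, T, sx, cx, cy are illustrative assumptions.

```python
import numpy as np

def project_point(Pw, R, T, f, k1, sx, cx, cy):
    """Project a 3D world point into pixel coordinates with a pinhole model
    plus a single radial distortion coefficient (illustrative sketch only,
    not Tsai's full two-stage calibration procedure)."""
    Pc = R @ Pw + T                                  # world -> camera coordinates
    x, y = f * Pc[0] / Pc[2], f * Pc[1] / Pc[2]      # ideal image-plane coordinates
    r2 = x * x + y * y
    xd, yd = x * (1 + k1 * r2), y * (1 + k1 * r2)    # radial lens distortion
    u = sx * xd + cx                                 # horizontal scale factor + image center
    v = yd + cy
    return np.array([u, v])
```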

5,940 citations


Journal ArticleDOI
John A. Zachman1
TL;DR: Information systems architecture is defined by creating a descriptive framework from disciplines quite independent of information systems, then, by analogy, specifying information systems architecture based upon that neutral, objective framework.
Abstract: With increasing size and complexity of the implementations of information systems, it is necessary to use some logical construct (or architecture) for defining and controlling the interfaces and the integration of all of the components of the system. This paper defines information systems architecture by creating a descriptive framework from disciplines quite independent of information systems, then by analogy specifies information systems architecture based upon the neutral, objective framework. Also, some preliminary conclusions about the implications of the resultant descriptive framework are drawn. The discussion is limited to architecture and does not include a strategic planning methodology.

3,219 citations


Journal ArticleDOI
TL;DR: An intermediate program representation, called the program dependence graph (PDG), that makes explicit both the data and control dependences for each operation in a program, allowing transformations to be triggered by one another and applied only to affected dependences.
Abstract: In this paper we present an intermediate program representation, called the program dependence graph (PDG), that makes explicit both the data and control dependences for each operation in a program. Data dependences have been used to represent only the relevant data flow relationships of a program. Control dependences are introduced to analogously represent only the essential control flow relationships of a program. Control dependences are derived from the usual control flow graph. Many traditional optimizations operate more efficiently on the PDG. Since dependences in the PDG connect computationally related parts of the program, a single walk of these dependences is sufficient to perform many optimizations. The PDG allows transformations such as vectorization, that previously required special treatment of control dependence, to be performed in a manner that is uniform for both control and data dependences. Program transformations that require interaction of the two dependence types can also be easily handled with our representation. As an example, an incremental approach to modifying data dependences resulting from branch deletion or loop unrolling is introduced. The PDG supports incremental optimization, permitting transformations to be triggered by one another and applied only to affected dependences.
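As a minimal sketch of the structure the abstract describes (class and method names are illustrative, not from the paper), a PDG can be represented as operation nodes with two separate edge sets, one for data dependences and one for control dependences, so that a single backward walk visits everything that can affect a given operation:

```python
from collections import defaultdict

class ProgramDependenceGraph:
    """Toy program dependence graph: operations as nodes, with separate
    data-dependence and control-dependence edge sets (illustrative only)."""
    def __init__(self):
        self.data_deps = defaultdict(set)     # node -> nodes whose values it uses
        self.control_deps = defaultdict(set)  # node -> predicates it is control dependent on

    def add_data_dep(self, use, definition):
        self.data_deps[use].add(definition)

    def add_control_dep(self, node, predicate):
        self.control_deps[node].add(predicate)

    def relevant_slice(self, node):
        """Walk both dependence kinds backward from `node`; the reachable
        set is the part of the program that can affect it."""
        seen, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            stack.extend(self.data_deps[n] | self.control_deps[n])
        return seen
```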

2,631 citations


Journal ArticleDOI
S. Katz1
TL;DR: The model offers, via a nonlinear recursive procedure, a computation and space efficient solution to the problem of estimating probabilities from sparse data, and compares favorably to other proposed methods.
Abstract: The description of a novel type of m-gram language model is given. The model offers, via a nonlinear recursive procedure, a computation and space efficient solution to the problem of estimating probabilities from sparse data. This solution compares favorably to other proposed methods. While the method has been developed for and successfully implemented in the IBM Real Time Speech Recognizers, its generality makes it applicable in other areas where the problem of estimating probabilities from sparse data arises.
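A heavily simplified sketch of the back-off idea for bigrams may help; the paper's recursive m-gram procedure with Good-Turing-style discounting is more involved, and the fixed discount, function name, and normalization below are assumptions for illustration only.

```python
from collections import Counter

def backoff_bigram_prob(w2, w1, unigrams: Counter, bigrams: Counter,
                        total_words: int, discount: float = 0.5):
    """Discounted bigram probability that backs off to the unigram estimate
    when the bigram was never observed (illustrative sketch of the back-off
    idea, not Katz's exact recursion)."""
    c12, c1 = bigrams[(w1, w2)], unigrams[w1]
    if c12 > 0 and c1 > 0:
        return (c12 - discount) / c1          # discounted maximum-likelihood estimate
    # probability mass freed by discounting, redistributed via the unigram model
    seen = [w for (a, w) in bigrams if a == w1 and bigrams[(a, w)] > 0]
    alpha = (discount * len(seen) / c1) if c1 > 0 else 1.0
    return alpha * unigrams[w2] / total_words
```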

2,038 citations


Journal ArticleDOI
TL;DR: Using an atomic force microscope, atomic-scale features on the frictional force acting on a tungsten wire tip sliding on the basal plane of a graphite surface at low loads are observed.
Abstract: Using an atomic force microscope, we have observed atomic-scale features on the frictional force acting on a tungsten wire tip sliding on the basal plane of a graphite surface at low loads, < 10-4 N. The atomic features have the periodicity of the graphite surface and are discussed in terms of a phenomenological model for the tip motion described by the sum of a periodic tip-surface force and the spring force exerted by the wire.
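The phenomenological tip model mentioned in the abstract (a tip dragged by a spring across a periodic surface force) can be illustrated with a small quasi-static energy-minimization sketch; the parameter values and variable names below are arbitrary assumptions, not those of the experiment.

```python
import numpy as np

# Illustrative sketch: the tip position x minimizes the sum of a periodic
# tip-surface potential and the elastic energy of the wire (spring) whose
# support is dragged to position x_s.
a = 2.46e-10        # graphite lattice period (m)
k = 10.0            # effective spring constant of the wire (N/m), arbitrary
U0 = 1.0e-18        # corrugation amplitude of the surface potential (J), arbitrary

def tip_position(x_support, x_grid):
    energy = (-U0 * np.cos(2 * np.pi * x_grid / a)
              + 0.5 * k * (x_grid - x_support) ** 2)
    return x_grid[np.argmin(energy)]

x_grid = np.linspace(-2 * a, 6 * a, 20000)
for xs in np.linspace(0, 4 * a, 9):
    xt = tip_position(xs, x_grid)
    lateral_force = k * (xs - xt)        # force sensed through the wire spring
    print(f"support {xs:.2e} m -> tip {xt:.2e} m, lateral force {lateral_force:.2e} N")
```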

1,541 citations


Proceedings ArticleDOI
Don Coppersmith1, Shmuel Winograd1
01 Jan 1987
TL;DR: A new method for accelerating matrix multiplication asymptotically is presented, by using a basic trilinear form which is not a matrix product, and making novel use of the Salem-Spencer Theorem.
Abstract: We present a new method for accelerating matrix multiplication asymptotically. This work builds on recent ideas of Volker Strassen, by using a basic trilinear form which is not a matrix product. We make novel use of the Salem-Spencer Theorem, which gives a fairly dense set of integers with no three-term arithmetic progression. Our resulting matrix exponent is 2.376.

1,413 citations


Journal ArticleDOI
TL;DR: In this paper, a modified version of the atomic force microscope is introduced that enables a precise measurement of the force between a tip and a sample over a tip-sample distance range of 30–150 Å.
Abstract: A modified version of the atomic force microscope is introduced that enables a precise measurement of the force between a tip and a sample over a tip-sample distance range of 30–150 Å. As an application, the force signal is used to maintain the tip-sample spacing constant, so that profiling can be achieved with a spatial resolution of 50 Å. A second scheme allows the simultaneous measurement of force and surface profile; this scheme has been used to obtain material-dependent information from surfaces of electronic materials.
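The constant-force profiling scheme described here amounts to a feedback loop that servos the tip height so the measured force stays at a setpoint; a generic proportional-integral sketch (the gains, loop structure, and the read_force/move_z callbacks are assumptions, not the instrument's actual control scheme) could look like this:

```python
def constant_force_scan(read_force, move_z, setpoint, kp=0.01, ki=0.001, steps=1000):
    """Generic P-I feedback sketch: adjust tip height z so that the measured
    force tracks `setpoint`; the recorded z trace is the surface profile.
    read_force() and move_z(z) are hypothetical instrument callbacks."""
    z, integral, profile = 0.0, 0.0, []
    for _ in range(steps):
        error = read_force() - setpoint
        integral += error
        z -= kp * error + ki * integral   # retract when the force is too high
        move_z(z)
        profile.append(z)
    return profile
```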

1,405 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that, unless the sample size is 500 or more, estimators derived by the method of moments or the method of probability-weighted moments are more reliable than maximum likelihood estimators.
Abstract: The generalized Pareto distribution is a two-parameter distribution that contains uniform, exponential, and Pareto distributions as special cases. It has applications in a number of fields, including reliability studies and the analysis of environmental extreme events. Maximum likelihood estimation of the generalized Pareto distribution has previously been considered in the literature, but we show, using computer simulation, that, unless the sample size is 500 or more, estimators derived by the method of moments or the method of probability-weighted moments are more reliable. We also use computer simulation to assess the accuracy of confidence intervals for the parameters and quantiles of the generalized Pareto distribution.
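As a small illustration of the moment-based alternative the authors recommend for modest sample sizes, the method-of-moments estimates of the generalized Pareto shape and scale follow directly from the sample mean and variance. The parameterization and symbol names below are one common convention, an assumption rather than the paper's notation.

```python
import numpy as np

def gpd_method_of_moments(x):
    """Method-of-moments estimates for the generalized Pareto distribution
    with CDF F(x) = 1 - (1 + xi*x/sigma)**(-1/xi), assuming xi < 1/2 so the
    variance exists. Illustrative sketch; the paper also studies
    probability-weighted-moment estimators."""
    m, v = np.mean(x), np.var(x, ddof=1)
    xi = 0.5 * (1.0 - m * m / v)          # shape from the ratio mean^2 / variance
    sigma = 0.5 * m * (1.0 + m * m / v)   # scale from the mean and that ratio
    return sigma, xi
```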

1,233 citations


Book
Richard E. Blahut1
01 Jan 1987

1,048 citations


Book
01 Jan 1987
TL;DR: This book covers digital testing and the need for testable design, principles of testable design, pseudorandom sequence generators, test response compression techniques, random pattern built-in test, limitations and other concerns of random pattern testing, and test system requirements for built-in test.
Abstract: Contents: Digital Testing and the Need for Testable Design; Principles of Testable Design; Pseudorandom Sequence Generators; Test Response Compression Techniques; Shift-Register Polynomial Division; Special-Purpose Shift-Register Circuits; Random Pattern Built-In Test; Built-In Test Structures; Limitations and Other Concerns of Random Pattern Testing; Test System Requirements for Built-In Test; Appendix; References; Index.

1,004 citations


Book
Gregory J. Chaitin1
30 Oct 1987
TL;DR: This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory.
Abstract: This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.

Journal ArticleDOI
G. Ungerboeck1
TL;DR: An introduction into TCM is given, reasons for the development of TCM are reviewed, and examples of simple TCM schemes are discussed.
Abstract: Trellis-Coded Modulation (TCM) has evolved over the past decade as a combined coding and modulation technique for digital transmission over band-limited channels. Its main attraction comes from the fact that it allows the achievement of significant coding gains over conventional uncoded multilevel modulation without compromising bandwidth efficiency. The first TCM schemes were proposed in 1976 [1]. Following a more detailed publication [2] in 1982, an explosion of research and actual implementations of TCM took place, to the point where today there is a good understanding of the theory and capabilities of TCM methods. In Part I of this two-part article, an introduction into TCM is given. The reasons for the development of TCM are reviewed, and examples of simple TCM schemes are discussed. Part II [15] provides further insight into code design and performance, and addresses recent advances in TCM. TCM schemes employ redundant nonbinary modulation in combination with a finite-state encoder which governs the selection of modulation signals to generate coded signal sequences. In the receiver, the noisy signals are decoded by a soft-decision maximum-likelihood sequence decoder. Simple four-state TCM schemes can improve the robustness of digital transmission against additive noise by 3 dB, compared to conventional uncoded modulation. With more complex TCM schemes, the coding gain can reach 6 dB or more. These gains are obtained without bandwidth expansion or reduction of the effective information rate, as required by traditional error-correction schemes. Shannon's information theory predicted the existence of coded modulation schemes with these characteristics more than three decades ago. The development of effective TCM techniques and today's signal-processing technology now allow these gains to be obtained in practice. Signal waveforms representing information sequences are most impervious to noise-induced detection errors if they are very different from each other. Mathematically, this translates into the requirement that signal sequences should have large distance in Euclidean signal space. The essential new concept of TCM that led to the aforementioned gains was to use signal-set expansion to provide redundancy for coding, and to design coding and signal-mapping functions jointly so as to maximize directly the "free distance" (minimum Euclidean distance) between coded signal sequences. This allowed the construction of modulation codes whose free distance significantly exceeded the minimum distance between uncoded modulation signals, at the same information rate, bandwidth, and signal power. The term "…
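To make the encoder structure concrete, here is a toy four-state scheme in the spirit described above: one input bit drives a rate-1/2 convolutional encoder whose two output bits select one of four 8-PSK subsets, and a second, uncoded bit picks the point within the subset. The generator taps and subset labeling are illustrative choices, not Ungerboeck's published code.

```python
import numpy as np

# 8-PSK constellation; the subsets {0,4}, {1,5}, {2,6}, {3,7} are antipodal
# pairs with maximum intra-subset distance, in the spirit of set partitioning.
PSK8 = np.exp(2j * np.pi * np.arange(8) / 8)

def tcm_encode(bit_pairs):
    """Toy 4-state TCM encoder: per symbol, (uncoded_bit, coded_bit) -> 8-PSK point.
    The rate-1/2 encoder uses generators (7, 5) in octal; illustrative only."""
    s1 = s2 = 0                       # two memory elements -> 4 states
    symbols = []
    for uncoded, b in bit_pairs:
        c1 = b ^ s1 ^ s2              # generator 111 (octal 7)
        c2 = b ^ s2                   # generator 101 (octal 5)
        subset = 2 * c1 + c2          # one of 4 subsets
        point = subset + 4 * uncoded  # uncoded bit chooses the antipodal partner
        symbols.append(PSK8[point])
        s1, s2 = b, s1                # shift-register update
    return symbols

print(tcm_encode([(0, 1), (1, 0), (1, 1)]))
```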

Journal ArticleDOI
TL;DR: In this article, the diamagnetism observed in the zero-field--cooled state is considerably larger than under field cooling, indicating the existence of a superconductive glass state.
Abstract: Susceptibility and magnetic-moment measurements from 1.9 to 35 K in magnetic fields up to 1.5 T in powder samples of La/sub 2/CuO/sub 4-//sub y/:Ba are reported. The diamagnetism observed in the zero-field--cooled state is considerably larger than under field cooling. The former is metastable like the magnetic moment induced after switching the field off. These observations indicate the existence of a superconductive glass state.

Journal ArticleDOI
A. P. Malozemoff1
TL;DR: In this article, a field-asymmetric offset of the hysteresis loop in ferromagnetic-antiferromagnetic sandwiches, one of the manifestations of exchange anisotropy, can be predicted from the presence of random interface roughness giving rise to a random field acting on the interface spins.
Abstract: A field-asymmetric offset of the hysteresis loop in ferromagnetic-antiferromagnetic sandwiches, one of the manifestations of so-called exchange anisotropy, can be predicted from the presence of random interface roughness giving rise to a random field acting on the interface spins. The antiferromagnet breaks up into domains of size determined by the competition of exchange and an additional uniaxial in-plane anisotropy, and this size sets the scale for averaging of the random field.

Journal ArticleDOI
Gerd Binnig1, Heinrich Rohrer1
TL;DR: The historic development of Scanning Tunneling Microscopy (STM) is presented; the physical and technical aspects have already been covered in a few recent reviews and two conference proceedings, and many others are expected to follow in the near future.
Abstract: We present here the historic development of Scanning Tunneling Microscopy (STM); the physical and technical aspects have already been covered in a few recent reviews and two conference proceedings, and many others are expected to follow in the near future. A technical summary is given by the sequence of figures, which stands alone. Our narrative is by no means a recommendation of how research should be done; it simply reflects what we thought, how we acted, and what we felt. However, it would certainly be gratifying if it encouraged a more relaxed attitude towards doing science. Perhaps we were fortunate in having common training in superconductivity, a field which radiates beauty and elegance. For scanning tunneling microscopy, we brought along some experience in tunneling (Binnig and Hoenig, 1978) and angstroms (Rohrer, 1960), but none in microscopy or surface science. This probably gave us the courage and lightheartedness to start something which should "not have worked in principle," as we were so often told. On another occasion, I had been involved for a short time with tunneling between very small metallic grains in bistable resistors, and later I watched my colleagues' struggle with tolerance problems in the fabrication of Josephson junctions. So the local study of growth and electrical properties of thin insulating layers appeared to me an interesting problem, and I was given the opportunity to hire a new research staff member, Gerd Binnig, who found it interesting, too, and accepted the offer. Incidentally, Gerd and I would have missed each other, had it not been for K. Alex Muller,

Journal ArticleDOI
R.B. King1
TL;DR: In this article, the problem of flat-ended cylindrical, quadrilateral, and triangular punches indenting a layered isotropic elastic half-space is considered, and solutions are obtained numerically.

Journal ArticleDOI
G. Ungerboeck1
TL;DR: The effects of carrier-phase offset in carrier-modulated TCM systems are discussed, and recent advances in TCM schemes that use signal sets defined in more than two dimensions are described, and other work related to trellis-coded modulation is mentioned.
Abstract: Further insight into the state of the art in trellis-coded modulation (TCM) is given for the more interested reader. First, the general structure of TCM schemes and the principles of code construction are reviewed. Next, the effects of carrier-phase offset in carrier-modulated TCM systems are discussed. The topic is important, since TCM schemes turn out to be more sensitive to phase offset than uncoded modulation systems. Also, TCM schemes are generally not phase invariant to the same extent as their signal sets. Finally, recent advances in TCM schemes that use signal sets defined in more than two dimensions are described, and other work related to trellis-coded modulation is mentioned. The best codes currently known for one-, two-, four-, and eight-dimensional signal sets are given in an Appendix. The trellis structure of the early hand-designed TCM schemes and the heuristic rules used to assign signals to trellis transitions suggested that TCM schemes should have an interpretation in terms of convolutional codes with a special signal mapping. This mapping should be based on grouping signals into subsets with large distance between the subset signals. Attempts to explain TCM schemes in this manner led to the general structure of TCM encoders/modulators depicted in Fig. 1. According to this figure, TCM signals are generated as follows: When m bits are to be transmitted per encoder/modulator operation, m̃ ≤ m bits are expanded by a rate-m̃/(m̃+1) binary convolutional encoder into m̃+1 coded bits. These bits are used to select one of 2^(m̃+1) subsets of a redundant 2^(m+1)-ary signal set. The remaining m−m̃ uncoded bits determine which of the 2^(m−m̃) signals in this subset is to be transmitted. The concept of set partitioning is of central significance for TCM schemes. Figure 2 shows this concept for a 32-CROSS signal set [1], a signal set of lattice type "Z2". Generally, the notation "Zk" is used to denote an infinite "lattice" of points in k-dimensional space with integer coordinates. Lattice-type signal sets are finite subsets of lattice points, which are centered around the origin and have a minimum spacing of Δ0. Set partitioning divides a signal set successively into smaller subsets with maximally increasing smallest intra-subset distances. The partitioning is repeated m̃+1 times until Δ(m̃+1) is equal to or greater than the desired free distance of the TCM scheme to be designed. The finally …
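For lattice-type signal sets of type Z2, each level of the binary set partitioning described above increases the smallest intra-subset distance by a factor of √2. A compact way to state this (a standard property of the Z2 partition chain, not a formula quoted from the article) is:

```latex
% Distance growth under binary set partitioning of a Z^2-type signal set:
% after i partitioning steps the smallest intra-subset distance is
\Delta_i = \left(\sqrt{2}\right)^{i}\,\Delta_0 ,
\qquad i = 0, 1, \dots, \tilde{m}+1,
% and partitioning stops once it reaches the desired free distance:
\Delta_{\tilde{m}+1} \;\ge\; d_{\mathrm{free}} .
```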


Journal ArticleDOI
TL;DR: The proofs expose general heuristic principles that explain why consensus is possible in certain models but not possible in others; several critical system parameters, including various synchrony conditions, are identified.
Abstract: Reaching agreement is a primitive of distributed computing. Whereas this poses no problem in an ideal, failure-free environment, it imposes certain constraints on the capabilities of an actual system: A system is viable only if it permits the existence of consensus protocols tolerant to some number of failures. Fischer et al. have shown that in a completely asynchronous model, even one failure cannot be tolerated. In this paper their work is extended: Several critical system parameters, including various synchrony conditions, are identified and how varying these affects the number of faults that can be tolerated is examined. The proofs expose general heuristic principles that explain why consensus is possible in certain models but not possible in others.

Journal ArticleDOI
TL;DR: This work models the speckle according to the exact physical process of coherent image formation, and accurately represents the higher order statistical properties of speckle that are important to the restoration procedure.
Abstract: Speckle is a granular noise that inherently exists in all types of coherent imaging systems. The presence of speckle in an image reduces the resolution of the image and the detectability of the target. Many speckle reduction algorithms assume speckle noise is multiplicative. We instead model the speckle according to the exact physical process of coherent image formation. Thus, the model includes signal-dependent effects and accurately represents the higher order statistical properties of speckle that are important to the restoration procedure. Various adaptive restoration filters for intensity speckle images are derived based on different model assumptions and a nonstationary image model. These filters respond adaptively to the signal-dependent speckle noise and the nonstationary statistics of the original image.
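For contrast with the signal-dependent model developed in the paper, the widely used Lee-type adaptive filter under the simpler multiplicative-noise assumption can be sketched as follows; this is the baseline style of filter the abstract argues against, included only to illustrate what adapting to local statistics means, and the window size, noise variance, and function name are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7, noise_var=0.05):
    """Lee-type adaptive speckle filter under a multiplicative-noise model:
    each pixel is pulled toward the local mean by a gain derived from local
    statistics. Illustrative baseline, not the paper's signal-dependent
    restoration filters."""
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = mean_sq - mean * mean
    # gain -> 1 in high-variance (edge/texture) regions, -> 0 in flat regions
    gain = np.clip((var - noise_var * mean * mean) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)
```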

Journal ArticleDOI
TL;DR: The superconducting critical current in these films is in excess of ${10}^{5}$ A/${\mathrm{cm}}^{2}$ at 77 K and in excess of ${10}^{6}$ A/${\mathrm{cm}}^{2}$ at 4.2 K.
Abstract: We have grown epitaxial films of the ${\mathrm{YBa}}_{2}{\mathrm{Cu}}_{3}{\mathrm{O}}_{7-x}$ compound on ${\mathrm{SrTiO}}_{3}$ substrates. The superconducting critical current in these films at 77 K is in excess of ${10}^{5}$ A/${\mathrm{cm}}^{2}$ and at 4.2 K in excess of ${10}^{6}$ A/${\mathrm{cm}}^{2}$.

Journal ArticleDOI
Ronald Fagin1, Joseph Y. Halpern1
TL;DR: In these logics, the set of beliefs of an agent does not necessarily contain all valid formulas, which makes them more suitable than traditional logics for modelling beliefs of humans (or machines) with limited reasoning capabilities.

Journal ArticleDOI
David R. Clarke1
TL;DR: In this paper, it was shown that there will exist a stable thickness for the intergranular film and that it will be of the order of 1 nm, a value commensurate with that observed experimentally in a wide range of materials.
Abstract: The fundamental question as to whether thin intergranular films can adopt an equilibrium thickness in polycrystalline ceramics is addressed. Two continuum approaches are presented, one based on interfacial energies and the other on the force balance normal to the boundary. These indicate that there will exist a stable thickness for the intergranular film and that it will be of the order of 1 nm. The origin of an equilibrium thickness is shown to be the result of two competing interactions, an attractive van der Waals dispersion interaction between the grains on either side of the boundary acting to thin the film and a repulsive term, due to the structure of the intergranular liquid, opposing this attraction. As both of these interactions are of short range (
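The force balance described above can be written as a worked equation: the attractive van der Waals (dispersion) pressure across a film of thickness h is balanced by the structural repulsion of the ordered intergranular liquid. The Hamaker form of the attraction is standard; the exponential form of the repulsion is a common assumption introduced here for illustration, not a result quoted from the paper.

```latex
% Equilibrium film thickness h_eq from a balance of disjoining pressures:
% attractive van der Waals term (Hamaker constant A) vs. a short-range
% structural repulsion of amplitude P_0 and decay length \lambda (assumed form).
\Pi_{\mathrm{vdW}}(h) = -\frac{A}{6\pi h^{3}},
\qquad
\Pi_{\mathrm{struct}}(h) = P_{0}\, e^{-h/\lambda},
\qquad
\Pi_{\mathrm{vdW}}(h_{\mathrm{eq}}) + \Pi_{\mathrm{struct}}(h_{\mathrm{eq}}) = 0 .
```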

Journal ArticleDOI
TL;DR: Critical-field and critical-current measurements performed on single crystals show anisotropic electronic behavior of the high-temperature superconductor with anisotropies of 10 and greater.
Abstract: We report direct observation of the anisotropic electronic behavior of the high-temperature superconductor ${\mathrm{Y}}_{1}{\mathrm{Ba}}_{2}{\mathrm{Cu}}_{3}{\mathrm{O}}_{7-x}$. Critical-field and critical-current measurements performed on single crystals show anisotropies of 10 and greater. Critical supercurrent densities in favorable directions in single crystals are $3\times{10}^{6}$ A/${\mathrm{cm}}^{2}$ in low fields at 4.5 K and remain above ${10}^{6}$ A/${\mathrm{cm}}^{2}$ to beyond 40 kG.

Book
John M. Carroll1, Mary Beth Rosson1
01 Jan 1987
TL;DR: This chapter discusses two empirical phenomena of computer use: (1) people have considerable trouble learning to use computers, and (2) their skill tends to asymptote at relative mediocrity.
Abstract: One of the most sweeping changes ever in the ecology of human cognition may be taking place today. People are beginning to learn and use very powerful and sophisticated information processing technology as a matter of daily life. From the perspective of human history, this could be a transitional point dividing a period when machines merely helped us do things from a period when machines will seriously help us think about things. But if this is so, we are indeed still very much within the transition. For most people, computers have more possibility than they have real practical utility. In this chapter we discuss two empirical phenomena of computer use: (1) people have considerable trouble learning to use computers (e.g., Mack, Lewis and Carroll, 1983; Mantei and Haskell, 1983), and (2) their skill tends to asymptote at relative mediocrity (Nielsen, Mack, Bergendorff, and Grischkowsky, 1986; Pope, 1985; Rosson, 1983). These phenomena could be viewed as being due merely to “bad” design in current systems. We argue that they are in part more fundamental than this, deriving from conflicting motivational and cognitive strategies. Accordingly, (1) and (2) are best viewed not as design problems to be solved, but as true paradoxes that necessitate programmatic tradeoff solutions. A motivational paradox arises in the “production bias” people bring to the task of learning and using computing equipment. Their paramount goal is throughput. This is a desirable state of affairs in that it gives users a focus for their activity with a system, and it increases their likelihood of receiving concrete reinforcement from their work. But on the other hand, it reduces their motivation to spend any time just learning about the system, so that when situations appear that could be more effectively handled by new procedures, they are likely to stick with the procedures they already know, regardless of their efficacy. A second, cognitive paradox devolves from the “assimilation bias”: people apply what they already know to interpret new situations. This bias can be helpful, when there are useful similarities between the new and old information (as when a person learns to use a word processor taking it to be a super typewriter or an electronic desktop). But irrelevant and misleading similarities between new and old information can also blind learners to what they are actually seeing and doing, leading them to draw erroneous comparisons and conclusions, or preventing them from recognizing possibilities for new function. It is our view that these cognitive and motivational conflicts are mutually reinforcing, thus exaggerating the effect either problem might separately have on early and longterm learning. These paradoxes are not defects in human learning to be remediated. They are fundamental properties of learning. If learning were not at least this complex, then

Journal ArticleDOI
TL;DR: The short coherence length of high-${\mathrm{T}}_{\mathrm{c}}$ oxides is shown to induce considerable weakening of the pair potential at surfaces and interfaces, and it is argued that this effect is responsible for the existence of internal Josephson junctions at twin boundaries.
Abstract: The short coherence length of high-${\mathrm{T}}_{\mathrm{c}}$ oxides is shown to induce considerable weakening of the pair potential at surfaces and interfaces. It is argued that this effect is responsible for the existence of internal Josephson junctions at twin boundaries, which are at the origin of the superconductive glassy state, as well as for gapless tunneling characteristics.

Journal ArticleDOI
Gerd Binnig1, Ch. Gerber1, E. Stoll1, T. R. Albrecht2, Calvin F. Quate2 
15 Jun 1987-EPL
TL;DR: The atomic force microscope (AFM) is a promising new method for studying the surface structure of both conductors and insulators as discussed by the authors, achieving a resolution better than 2.5 Å.
Abstract: The atomic force microscope (AFM) is a promising new method for studying the surface structure of both conductors and insulators. In mapping a graphite surface with an insulating stylus, we have achieved a resolution better than 2.5 Å.

Journal ArticleDOI
TL;DR: The Θ(m) bound on finding the maxima of wide totally monotone matrices is used to speed up several geometric algorithms by a factor of log n.
Abstract: Let A be a matrix with real entries and let j(i) be the index of the leftmost column containing the maximum value in row i of A. A is said to be monotone if i_1 > i_2 implies that j(i_1) ≥ j(i_2). A is totally monotone if all of its submatrices are monotone. We show that finding the maximum entry in each row of an arbitrary n × m monotone matrix requires Θ(m log n) time, whereas if the matrix is totally monotone the time is Θ(m) when m ≥ n and is Θ(m(1 + log(n/m))) when m
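A short sketch of the divide-and-conquer idea behind the bound for (not necessarily totally) monotone matrices: compute the maximum of the middle row, then recurse on the two submatrices that monotonicity allows. The Θ(m) algorithm for totally monotone matrices in the paper is more intricate and is not reproduced here; the code below is an illustrative sketch.

```python
def monotone_row_maxima(matrix):
    """Leftmost column index of each row maximum for a monotone matrix
    (j(i) nondecreasing in i), via divide and conquer on rows.
    Illustrative sketch of the simpler monotone case only."""
    n, m = len(matrix), len(matrix[0])
    result = [0] * n

    def solve(top, bottom, left, right):
        if top > bottom:
            return
        mid = (top + bottom) // 2
        # leftmost column achieving the row maximum (ties broken toward smaller column)
        best = max(range(left, right + 1), key=lambda c: (matrix[mid][c], -c))
        result[mid] = best
        solve(top, mid - 1, left, best)       # rows above: maxima at or left of best
        solve(mid + 1, bottom, best, right)   # rows below: maxima at or right of best

    solve(0, n - 1, 0, m - 1)
    return result

print(monotone_row_maxima([[3, 1, 0], [2, 4, 1], [0, 2, 5]]))  # -> [0, 1, 2]
```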

Journal ArticleDOI
TL;DR: Using a scanning tunneling microscope, the tunneling current versus voltage is measured at fixed values of separation between a tungsten probe-tip and a Si(111)2 × 1 surface as mentioned in this paper.

Journal ArticleDOI
TL;DR: The goals of Emerald are outlined, Emerald is related to previous work, and its type system and distribution support are described; a prototype implementation is being constructed.
Abstract: Emerald is an object-based language for programming distributed subsystems and applications. Its novel features include 1) a single object model that is used both for programming in the small and in the large, 2) support for abstract types, and 3) an explicit notion of object location and mobility. This paper outlines the goals of Emerald, relates Emerald to previous work, and describes its type system and distribution support. We are currently constructing a prototype implementation of Emerald.