
Showing papers by "IBM" published in 1982


Journal ArticleDOI
TL;DR: In this paper, the electronic properties of inversion and accumulation layers at semiconductor-insulator interfaces and of other systems that exhibit two-dimensional or quasi-two-dimensional behavior, such as electrons in semiconductor heterojunctions and superlattices and on liquid helium, are reviewed.
Abstract: The electronic properties of inversion and accumulation layers at semiconductor-insulator interfaces and of other systems that exhibit two-dimensional or quasi-two-dimensional behavior, such as electrons in semiconductor heterojunctions and superlattices and on liquid helium, are reviewed. Energy levels, transport properties, and optical properties are considered in some detail, especially for electrons at the (100) silicon-silicon dioxide interface. Other systems are discussed more briefly.

5,638 citations


Journal ArticleDOI
TL;DR: In this paper, surface microscopy using vacuum tunneling has been demonstrated for the first time, and topographic pictures of surfaces on an atomic scale have been obtained for CaIrSn4 and Au.
Abstract: Surface microscopy using vacuum tunneling is demonstrated for the first time. Topographic pictures of surfaces on an atomic scale have been obtained. Examples of resolved monoatomic steps and surface reconstructions are shown for (110) surfaces of CaIrSn4 and Au.

4,290 citations


Journal ArticleDOI
G. Ungerboeck1
TL;DR: A coding technique is described which improves error performance of synchronous data links without sacrificing data rate or requiring more bandwidth by channel coding with expanded sets of multilevel/phase signals in a manner which increases free Euclidean distance.
Abstract: A coding technique is described which improves error performance of synchronous data links without sacrificing data rate or requiring more bandwidth. This is achieved by channel coding with expanded sets of multilevel/phase signals in a manner which increases free Euclidean distance. Soft maximum-likelihood (ML) decoding using the Viterbi algorithm is assumed. Following a discussion of channel capacity, simple hand-designed trellis codes are presented for 8 phase-shift keying (PSK) and 16 quadrature amplitude-shift keying (QASK) modulation. These simple codes achieve coding gains on the order of 3-4 dB. It is then shown that the codes can be interpreted as binary convolutional codes with a mapping of coded bits into channel signals, which we call "mapping by set partitioning." Based on a new distance measure between binary code sequences which efficiently lower-bounds the Euclidean distance between the corresponding channel signal sequences, a search procedure for more powerful codes is developed. Codes with coding gains up to 6 dB are obtained for a variety of multilevel/phase modulation schemes. Simulation results are presented and an example of carrier-phase tracking is discussed.
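The set-partitioning idea is easy to check numerically: each split of the 8-PSK constellation increases the minimum intra-subset Euclidean distance, and the trellis code trades that growing distance for coding gain. A minimal sketch (my own illustration, not code from the paper):

```python
# Illustrative sketch (not from the paper): Ungerboeck-style set
# partitioning of a unit-energy 8-PSK constellation. Each split keeps
# every other point of the parent subset and increases the minimum
# intra-subset Euclidean distance.
import cmath

points = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]

def min_distance(subset):
    """Minimum pairwise Euclidean distance within a subset."""
    return min(abs(a - b) for i, a in enumerate(subset)
               for b in subset[i + 1:])

level0 = points                 # full 8-PSK
level1 = points[0::2]           # one of two QPSK subsets
level2 = points[0::4]           # one of four antipodal (BPSK) subsets

for name, sub in [("8-PSK", level0), ("QPSK", level1), ("BPSK", level2)]:
    print(f"{name}: min distance = {min_distance(sub):.3f}")
# 8-PSK: 0.765, QPSK: 1.414, BPSK: 2.000 -- the growing intra-subset
# distance is what the coded system exploits for its 3-6 dB gains.
```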

4,091 citations


Journal ArticleDOI
Charles H. Bennett1
TL;DR: In this paper, the author considers the problem of rendering a computation logically reversible (e.g., creation and annihilation of a history file) in a Brownian computer, and shows that it is not the making of a measurement that prevents the demon from breaking the second law but rather the logically irreversible act of erasing the record of one measurement to make room for the next.
Abstract: Computers may be thought of as engines for transforming free energy into waste heat and mathematical work. Existing electronic computers dissipate energy vastly in excess of the mean thermal energy kT, for purposes such as maintaining volatile storage devices in a bistable condition, synchronizing and standardizing signals, and maximizing switching speed. On the other hand, recent models due to Fredkin and Toffoli show that in principle a computer could compute at finite speed with zero energy dissipation and zero error. In these models, a simple assemblage of simple but idealized mechanical parts (e.g., hard spheres and flat plates) determines a ballistic trajectory isomorphic with the desired computation, a trajectory therefore not foreseen in detail by the builder of the computer. In a classical or semiclassical setting, ballistic models are unrealistic because they require the parts to be assembled with perfect precision and isolated from thermal noise, which would eventually randomize the trajectory and lead to errors. Possibly quantum effects could be exploited to prevent this undesired equipartition of the kinetic energy. Another family of models may be called Brownian computers, because they allow thermal noise to influence the trajectory so strongly that it becomes a random walk through the entire accessible (low-potential-energy) portion of the computer's configuration space. In these computers, a simple assemblage of simple parts determines a low-energy labyrinth isomorphic to the desired computation, through which the system executes its random walk, with a slight drift velocity due to a weak driving force in the direction of forward computation. In return for their greater realism, Brownian models are more dissipative than ballistic ones: the drift velocity is proportional to the driving force, and hence the energy dissipated approaches zero only in the limit of zero speed. In this regard Brownian models resemble the traditional apparatus of thermodynamic thought experiments, where reversibility is also typically only attainable in the limit of zero speed. The enzymatic apparatus of DNA replication, transcription, and translation appears to be nature's closest approach to a Brownian computer, dissipating 20–100 kT per step. Both the ballistic and Brownian computers require a change in programming style: computations must be rendered logically reversible, so that no machine state has more than one logical predecessor. In a ballistic computer, the merging of two trajectories clearly cannot be brought about by purely conservative forces; in a Brownian computer, any extensive amount of merging of computation paths would cause the Brownian computer to spend most of its time bogged down in extraneous predecessors of states on the intended path, unless an extra driving force of kT ln 2 were applied (and dissipated) at each merge point. The mathematical means of rendering a computation logically reversible (e.g., creation and annihilation of a history file) will be discussed. The old Maxwell's demon problem is discussed in the light of the relation between logical and thermodynamic reversibility: the essential irreversible step, which prevents the demon from breaking the second law, is not the making of a measurement (which in principle can be done reversibly) but rather the logically irreversible act of erasing the record of one measurement to make room for the next.
Converse to the rule that logically irreversible operations on data require an entropy increase elsewhere in the computer is the fact that a tape full of zeros, or one containing some computable pseudorandom sequence such as pi, has fuel value and can be made to do useful thermodynamic work as it randomizes itself. A tape containing an algorithmically random sequence lacks this ability.
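The kT ln 2 per merge point is Landauer's bound on the heat that must accompany erasing one bit; at room temperature it is a minuscule amount of energy, as a one-line check shows (standard constants, my own arithmetic):

```python
# Quick arithmetic for Landauer's bound kT*ln(2), the minimum heat
# dissipated per erased bit, at room temperature (T = 300 K).
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # kelvin
print(k_B * T * math.log(2))  # ~2.87e-21 J per bit
```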

1,637 citations


Book
01 Dec 1982
TL;DR: Despite its title, this technical writer's handbook is not a handbook: the book's design, page format, and writing style do not permit easy access to the wealth of information it contains.
Abstract: A handbook "Technical Writer's Handbook" is not. The book's design and page format and the writer's style do not permit easy access to the wealth of information available in this book. However, for the selective and patient reader, useful and thought-provoking material can be found.

966 citations


Proceedings ArticleDOI
Gregory J. Chaitin1
01 Jun 1982
TL;DR: This work has discovered how to extend the graph coloring approach so that it naturally solves the spilling problem, and produces better object code and takes much less compile time.
Abstract: In a previous paper we reported the successful use of graph coloring techniques for doing global register allocation in an experimental PL/I optimizing compiler. When the compiler cannot color the register conflict graph with a number of colors equal to the number of available machine registers, it must add code to spill and reload registers to and from storage. Previously the compiler produced spill code whose quality sometimes left much to be desired, and the ad hoc techniques used took considerable amounts of compile time. We have now discovered how to extend the graph coloring approach so that it naturally solves the spilling problem. Spill decisions are now made on the basis of the register conflict graph and cost estimates of the value of keeping the result of a computation in a register rather than in storage. This new approach produces better object code and takes much less compile time.
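For readers unfamiliar with the approach, here is a minimal sketch of coloring-with-spilling in the spirit described, not Chaitin's actual implementation: simplify nodes of degree < k, and when none exists, pick a spill candidate by a cost/degree heuristic.

```python
# Illustrative sketch (not Chaitin's actual algorithm): greedy graph
# coloring with cost-based spilling. Nodes are live ranges, edges are
# conflicts, `cost` estimates the value of keeping a node in a register.
def allocate(conflicts, cost, k):
    """conflicts: dict node -> set of conflicting nodes; k: registers."""
    nodes, stack, spilled = set(conflicts), [], []
    degree = {n: len(conflicts[n]) for n in nodes}
    while nodes:
        # Kempe-style simplification: remove any node with degree < k.
        trivial = [n for n in nodes if degree[n] < k]
        if trivial:
            n = trivial[0]
            stack.append(n)
        else:
            # No trivially colorable node: spill the cheapest-per-conflict.
            n = min(nodes, key=lambda v: cost[v] / max(degree[v], 1))
            spilled.append(n)
        nodes.remove(n)
        for m in conflicts[n]:
            if m in nodes:
                degree[m] -= 1
    coloring = {}
    for n in reversed(stack):      # rebuild graph, assigning colors
        used = {coloring[m] for m in conflicts[n] if m in coloring}
        coloring[n] = min(c for c in range(k) if c not in used)
    return coloring, spilled
```

Every node pushed on the stack had fewer than k still-live neighbors at push time, so a free color is guaranteed when it is popped; spilled nodes simply never enter the coloring.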

895 citations


Journal ArticleDOI
TL;DR: Sufficient conditions for convergence of the WR method are proposed and examples in MOS digital integrated circuits are given to show that these conditions are very mild in practice.
Abstract: The Waveform Relaxation (WR) method is an iterative method for analyzing nonlinear dynamical systems in the time domain. The method, at each iteration, decomposes the system into several dynamical subsystems each of which is analyzed for the entire given time interval. Sufficient conditions for convergence of the WR method are proposed and examples in MOS digital integrated circuits are given to show that these conditions are very mild in practice. Theoretical and computational studies show the method to be efficient and reliable.
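To make the iteration concrete, here is a minimal sketch (toy system, coupling, and step size of my own choosing, not the paper's circuit examples) of Gauss-Jacobi-style WR sweeps, each solving one subsystem over the whole interval with the other's previous waveform as input:

```python
# Minimal sketch of waveform relaxation on a toy 2-state linear system
#   x1' = -x1 + 0.1*x2,   x2' = -x2 + 0.1*x1,
# integrated with forward Euler. Each sweep solves one subsystem over
# the WHOLE time interval while treating the other subsystem's previous
# iterate as a known input waveform.
N, h = 200, 0.01
x1 = [1.0] * (N + 1)   # initial guess: constant waveforms
x2 = [0.5] * (N + 1)

for sweep in range(10):
    new1, new2 = [1.0], [0.5]
    for n in range(N):
        new1.append(new1[n] + h * (-new1[n] + 0.1 * x2[n]))
        new2.append(new2[n] + h * (-new2[n] + 0.1 * x1[n]))
    x1, x2 = new1, new2

print(x1[-1], x2[-1])  # waveform endpoints; a few sweeps suffice here
```

The weak coupling (0.1) makes the iteration converge in a handful of sweeps, which mirrors the paper's point that the sufficient conditions are mild for MOS circuits, where gate-to-gate coupling is similarly weak.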

834 citations


Journal ArticleDOI
TL;DR: The requirements and components for a proposed Document Analysis System, which assists a user in encoding printed documents for computer processing, are outlined and several critical functions have been investigated and the technical approaches are discussed.
Abstract: This paper outlines the requirements and components for a proposed Document Analysis System, which assists a user in encoding printed documents for computer processing. Several critical functions have been investigated and the technical approaches are discussed. The first is the segmentation and classification of digitized printed documents into regions of text and images. A nonlinear, run-length smoothing algorithm has been used for this purpose. By using the regular features of text lines, a linear adaptive classification scheme discriminates text regions from others. The second technique studied is an adaptive approach to the recognition of the hundreds of font styles and sizes that can occur on printed documents. A preclassifier is constructed during the input process and used to speed up a well-known pattern-matching method for clustering characters from an arbitrary print source into a small sample of prototypes. Experimental results are included.
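The smoothing step is simple to sketch (my own threshold and data, not the paper's parameters): short white runs between black pixels are filled, merging characters into solid blocks that can then be classified by their shape statistics.

```python
# Illustrative sketch of one row of run-length smoothing: in a binary
# row (1 = black), interior runs of white shorter than a threshold C
# are filled in, merging nearby characters into solid text blocks.
# Applying this horizontally and vertically and combining the results
# yields candidate text/image regions for classification.
def smooth_row(row, C):
    out, i = row[:], 0
    while i < len(row):
        if row[i] == 0:
            j = i
            while j < len(row) and row[j] == 0:
                j += 1
            interior = 0 < i and j < len(row)   # white run between blacks
            if interior and (j - i) <= C:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out

print(smooth_row([1, 0, 0, 1, 0, 0, 0, 0, 0, 1], C=3))
# -> [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]: short gap filled, long gap kept
```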

718 citations


Journal ArticleDOI
Markus Büttiker1, Rolf Landauer1
TL;DR: In this article, it was shown that at low modulation frequencies the traversing particle sees a static barrier and at high frequencies the particle tunnels through the time-averaged potential, but can do it inelastically, losing or gaining modulation quanta.
Abstract: One of several contradictory existing results for the time a tunneling particle interacts with its barrier is confirmed, by considering tunneling through a time-modulated barrier. At low modulation frequencies the traversing particle sees a static barrier. At high frequencies the particle tunnels through the time-averaged potential, but can do it inelastically, losing or gaining modulation quanta. The transition between the two regimes yields $\int dx\,[m/(2(V-E))]^{1/2}$ for the traversal time.
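For the common special case of a rectangular barrier of height $V$ and width $d$, the integral evaluates in closed form (a standard evaluation added here for concreteness, not quoted from the abstract):

$$
\tau = \int_0^d \left[\frac{m}{2(V-E)}\right]^{1/2} dx
     = d\,\sqrt{\frac{m}{2(V-E)}}
     = \frac{m d}{\hbar \kappa},
\qquad \kappa = \frac{\sqrt{2m(V-E)}}{\hbar},
$$

i.e., the traversal time is the barrier width divided by a velocity $\hbar\kappa/m$ set by the decay constant $\kappa$ of the wave function under the barrier.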

699 citations


Journal ArticleDOI
TL;DR: The phase-shifting mask as mentioned in this paper consists of a normal transmission mask that has been coated with a transparent layer patterned to ensure that the optical phases of nearest apertures are opposite.
Abstract: The phase-shifting mask consists of a normal transmission mask that has been coated with a transparent layer patterned to ensure that the optical phases of nearest apertures are opposite. Destructive interference between waves from adjacent apertures cancels some diffraction effects and increases the spatial resolution with which such patterns can be projected. A simple theory predicts a near doubling of resolution for illumination with partial coherence σ < 0.3, and substantial improvements in resolution for σ < 0.7. Initial results obtained with a phase-shifting mask patterned with typical device structures by electron-beam lithography and exposed using a Mann 4800 10X tool reveal a 40-percent increase in usable resolution, with some structures printed at a resolution of 1000 lines/mm. Phase-shifting mask structures can be used to facilitate proximity printing with larger gaps between mask and wafer. Theory indicates that the increase in resolution is accompanied by a minimal decrease in depth of focus. Thus the phase-shifting mask may be the most desirable device for enhancing optical lithography resolution in the VLSI/VHSIC era.
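The mechanism is easy to reproduce in a toy coherent-imaging model (my own illustration, with Gaussian aperture images and arbitrary units): with equal phases the two aperture images add and blur together, while a π phase shift on one aperture forces an amplitude zero between them.

```python
# Toy 1-D coherent-imaging sketch (my illustration, not the paper's
# model): two nearby apertures imaged as Gaussian amplitude spots.
# With equal phase the spots merge; with a pi phase shift on one
# aperture the amplitudes cancel midway, restoring a dark gap.
import math

def intensity(x, sep=1.0, width=0.8, phase=0.0):
    a = math.exp(-((x + sep / 2) / width) ** 2)   # aperture 1 amplitude
    b = math.exp(-((x - sep / 2) / width) ** 2)   # aperture 2 amplitude
    re = a + b * math.cos(phase)
    im = b * math.sin(phase)
    return re * re + im * im                      # |a + b*e^{i*phase}|^2

print(intensity(0.0, phase=0.0))      # bright midpoint: features blur together
print(intensity(0.0, phase=math.pi))  # ~0: phase shifter restores contrast
```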

667 citations


Journal ArticleDOI
TL;DR: It is conjectured that the barrier of O(log n) cannot be surpassed by any polynomial number of processors and that this performance cannot be achieved in the weaker model.

Journal ArticleDOI
I. Ingemarsson, D. Tang1, C. Wong1
TL;DR: This work shows how to use CKDS in connection with public key ciphers and an authorization scheme, and reveals two important aspects of any conference key distribution system: the multitap resistance and the choice of a suitable symmetric function of the private keys.
Abstract: Encryption is used in a communication system to safeguard information in the transmitted messages from anyone other than the intended receiver(s). To perform the encryption and decryption the transmitter and receiver(s) ought to have matching encryption and decryption keys. A clever way to generate these keys is to use the public key distribution system invented by Diffie and Hellman. That system, however, admits only one pair of communication stations to share a particular pair of encryption and decryption keys. The public key distribution system is generalized to a conference key distribution system (CKDS) which admits any group of stations to share the same encryption and decryption keys. The analysis reveals two important aspects of any conference key distribution system. One is the multitap resistance, which is a measure of the information security in the communication system. The other is the separation of the problem into two parts: the choice of a suitable symmetric function of the private keys and the choice of a suitable one-way mapping thereof. We have also shown how to use CKDS in connection with public key ciphers and an authorization scheme.
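A minimal sketch of a ring-style conference key exchange in this spirit, with toy, insecure parameters of my own choosing; the symmetric function of the private keys here is their product in the exponent:

```python
# Toy sketch of a ring-based conference key exchange in the spirit of
# CKDS (insecure toy parameters; the symmetric function of the private
# keys used here is their product in the exponent). Each round, every
# station raises the value received from its left neighbor to its own
# private key and passes the result on; after N-1 such rounds all
# stations hold the common key g^(x1*x2*...*xN) mod p.
p, g = 2**61 - 1, 5                       # toy public prime and base
keys = [123457, 654323, 999331, 777777]   # private keys x_i (toy values)
N = len(keys)

msgs = [pow(g, x, p) for x in keys]       # round 0: each sends g^{x_i}
for _ in range(N - 1):
    # pass to the right neighbor, then exponentiate with own key
    msgs = [pow(msgs[(i - 1) % N], keys[i], p) for i in range(N)]

assert len(set(msgs)) == 1                # every station derives the same key
print(msgs[0] == pow(g, keys[0] * keys[1] * keys[2] * keys[3], p))  # True
```

For N = 2 this collapses to ordinary Diffie-Hellman, which is the sense in which the conference scheme generalizes it.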

Journal ArticleDOI
TL;DR: In this paper, the ground exciton state in quantum wells has been investigated and the results obtained from a trial wave function not separable in spatial coordinates are shown to be valid throughout the entire well-thickness range, corresponding in the thin and thick limits to two and three-dimensional situations, respectively.
Abstract: Variational calculations are presented of the ground exciton state in quantum wells. For the GaAs-GaAlAs system, the results obtained from a trial wave function not separable in spatial coordinates are shown to be valid throughout the entire well-thickness range, corresponding in the thin and thick limits to two- and three-dimensional situations, respectively. For the InAs-GaSb system, in which electrons and holes are present in spatially separated regions, the exciton binding is substantially reduced. In the limit of thin wells, the binding energy is only about one-fourth of the two-dimensional value.

Journal ArticleDOI
Ashok K. Chandra1, David Harel1
TL;DR: In this article, a fixpoint query hierarchy is proposed to classify queries on relational databases according to their structure and their computational complexity, using the operations of composition and fixpoints; a Σ-Π hierarchy of height ω² is defined and its properties investigated.
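The canonical query that needs a fixpoint on top of the first-order operations is transitive closure; a minimal sketch of the least-fixpoint iteration (my own illustration):

```python
# The canonical fixpoint query: transitive closure of a binary relation,
# not expressible in first-order relational algebra alone but computed
# by iterating a join-and-union step to its least fixpoint.
def transitive_closure(edges):
    closure = set(edges)
    while True:                       # iterate to the least fixpoint
        step = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if step <= closure:
            return closure            # fixpoint reached: nothing new
        closure |= step

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```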

Patent
23 Aug 1982
TL;DR: In this paper, a photoinitiator is used to generate acid upon radiolysis in a polymer having recurrent pendant groups such as tert-butyl esters or tert-butyl carbonates.
Abstract: Resists sensitive to UV, electron beam and X-ray radiation, with positive or negative tone upon proper choice of a developer, are formulated from a polymer having recurrent pendant groups such as tert-butyl esters or tert-butyl carbonates that undergo efficient acidolysis with concomitant changes in polarity (solubility), together with a photoinitiator which generates acid upon radiolysis. A sensitizer component that alters wavelength sensitivity may also be added.

Journal ArticleDOI
TL;DR: In this paper, it is proposed that interfacial reaction barriers in binary A/B diffusion couples lead to the absence of phases predicted by the equilibrium phase diagram, provided that the diffusion zones are sufficiently thin.
Abstract: It is proposed that interfacial reaction barriers in binary A/B diffusion couples lead to the absence of phases predicted by the equilibrium phase diagram, provided that the diffusion zones are sufficiently thin (thin‐film case). With increasing thickness of the diffusion zones the influence of interfacial reaction barriers decreases and the simultaneous existence of diffusion‐controlled growth of all equilibrium phases is expected (bulk case). Selective growth of the first and second phases and the effect of impurities are discussed with the influence of interfacial reaction barriers and with references to the known cases of silicide formation.

Journal ArticleDOI
Won Kim1
TL;DR: An SQL-like query nested to an arbitrary depth is shown to be composed of five basic types of nesting, four of which have not been well understood and more work needs to be done to improve their execution efficiency.
Abstract: SQL is a high-level nonprocedural data language which has received wide recognition in relational databases. One of the most interesting features of SQL is the nesting of query blocks to an arbitrary depth. An SQL-like query nested to an arbitrary depth is shown to be composed of five basic types of nesting. Four of them have not been well understood and more work needs to be done to improve their execution efficiency. Algorithms are developed that transform queries involving these basic types of nesting into semantically equivalent queries that are amenable to efficient processing by existing query-processing subsystems. These algorithms are then combined into a coherent strategy for processing a general nested query of arbitrary complexity.
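The flavor of the transformation shows up already in the simplest case (my own toy schema and data, with Python standing in for the SQL engine): an uncorrelated IN subquery is semantically a join, and the join form is what an existing query processor can execute efficiently.

```python
# Toy illustration of unnesting: the nested block
#   SELECT e.name FROM emp e
#   WHERE e.dept IN (SELECT d.id FROM dept d WHERE d.city = 'NY')
# gives the same answer as the join of emp and dept, which is the form
# the transformation algorithms produce. (Schema and data are my own.)
emp = [("alice", 10), ("bob", 20), ("carol", 10)]
dept = [(10, "NY"), (20, "SF")]

nested = [name for (name, d) in emp
          if d in {i for (i, city) in dept if city == "NY"}]
joined = [name for (name, d) in emp
          for (i, city) in dept if d == i and city == "NY"]

assert nested == joined        # same answer, join form is one flat scan
print(joined)                  # ['alice', 'carol']
```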

Journal ArticleDOI
Gérald Bastard1
TL;DR: In this paper, the band structure of HgTe-CdTe superlattices was investigated and it was shown that these materials can be either semiconducting or zero-gap semiconductors, i.e., behave exactly like the ternary Hg$_{1-x}$Cd$_x$Te random alloys.
Abstract: We extend our previous investigations on the band structure of superlattices by applying the envelope-function approximation to four distinct problems. We calculate the band structure of HgTe-CdTe superlattices and show that these materials can be either semiconducting or zero-gap semiconductors, i.e., behave exactly like the ternary Hg$_{1-x}$Cd$_x$Te random alloys. We analyze the superlattice dispersion relations in the layer planes (Landau superlattice subbands) and we compare the longitudinal and transverse effective masses of semiconducting InAs-GaSb superlattices. We calculate the general equation for the bound states due to aperiodic layers, taking account of the band structure of the host materials. We finally derive the dispersion relations of polytype (ABC or ABCD) superlattices.

Patent
30 Jun 1982
TL;DR: In this paper, a binary DC balanced code and an encoder circuit for effecting same is described, which translates an 8 bit byte of information into 10 binary digits for transmission over electromagnetic or optical transmission lines subject to timing and low frequency constraints.
Abstract: A binary DC balanced code and an encoder circuit for effecting same is described, which translates an 8 bit byte of information into 10 binary digits for transmission over electromagnetic or optical transmission lines subject to timing and low frequency constraints. The significance of this code is that it combines a low circuit count for implementation with excellent performance near the theoretical limits, when measured with the commonly accepted criteria. The 8B/10B coder is partitioned into a 5B/6B plus a 3B/4B coder. The input code points are assigned to the output code points so the number of bit changes required for translation is minimized and can be grouped into a few classes.
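The disparity mechanics behind such a code can be sketched with a made-up 3-bit-to-4-bit table (my own toy code, not the patent's 5B/6B and 3B/4B assignments): unbalanced codewords come in complementary pairs, and the encoder picks whichever alternative drives the running disparity back toward zero.

```python
# Illustrative sketch of the running-disparity rule behind a DC-balanced
# code (toy 3b->4b table of my own invention, NOT the patent's tables).
TABLE = {                      # value -> (positive-RD word, negative-RD word)
    0: ("0011", "0011"),       # balanced words: both alternatives identical
    1: ("0101", "0101"),
    2: ("0110", "0110"),
    3: ("1001", "1001"),
    4: ("1010", "1010"),
    5: ("1100", "1100"),
    6: ("0111", "1000"),       # unbalanced complementary pair, disparity +2/-2
    7: ("1011", "0100"),
}

def encode(values):
    rd, out = -1, []           # running disparity starts negative
    for v in values:
        pos, neg = TABLE[v]
        word = pos if rd < 0 else neg   # choose so the line stays balanced
        rd += word.count("1") - word.count("0")
        out.append(word)
    return out

stream = encode([6, 7, 6, 6])
print(stream, sum(w.count("1") - w.count("0") for w in stream))
# total disparity 0: the running disparity stays bounded, so the
# transmitted signal has no DC component and keeps enough transitions.
```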

Journal ArticleDOI
King-Ning Tu1, R. D. Thompson1
TL;DR: In this paper, the formation of Cu6Sn5 between Cu and Sn thin films at room temperature and of Cu3Sn between Cu6Sn5 and Cu at temperatures from 115° to 150°C were measured by Rutherford backscattering spectroscopy and glancing-incidence X-ray diffraction.

Journal ArticleDOI
TL;DR: It is shown that a constrained run length algorithm is well suited to partition most documents into areas of text lines, solid black lines, and rectangular boxes enclosing graphics and halftone images.

Journal ArticleDOI
Williams1, Parker
TL;DR: The different techniques of design for testability are discussed in detail, including techniques which can be applied to today's technologies and techniques which have been recently introduced and will soon appear in new designs.
Abstract: This paper discusses the basics of design for testability. A short review of testing is given along with some reasons why one should test. The different techniques of design for testability are discussed in detail. These include techniques which can be applied to today's technologies and techniques which have been recently introduced and will soon appear in new designs.

Journal ArticleDOI
TL;DR: It is shown that the proposed hybrid ARQ scheme provides both high system throughput and high system reliability; it is particularly attractive for error control in high-speed data communication systems with significant roundtrip delays, such as satellite channels.
Abstract: This paper presents a new type of hybrid ARQ scheme for error control in data communication systems. The new scheme is based on the concept that the parity-check digits for error correction are sent to the receiver only when they are needed. Normally, data blocks with some parity-check bits for error detection are transmitted. When a data block D is detected in error, the retransmissions are not simply repetitions of D, but alternate repetitions of a parity block P(D) and D. The parity block P(D) is formed based on D and a half-rate invertible code which is capable of correcting t or fewer errors and simultaneously detecting d (d > t) or fewer errors. When a parity block is received, it is used to recover the originally transmitted data block either by inversion or by a decoding operation. The repetitions of the parity block P(D) and the data block D are alternately stored in the receiver buffer for error correction until D is recovered. We show that the proposed hybrid ARQ scheme provides both high system throughput and high system reliability. It is particularly attractive for error control in high-speed data communication systems with significant roundtrip delays, such as satellite channels.
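The control flow of the scheme can be sketched as follows (my own skeleton; the half-rate invertible code's detection, inversion, and combined-correction operations are abstracted behind stubs a real implementation would supply):

```python
# Skeleton of the receiver-side retransmission discipline: the first
# transmission is D; on failure the sender alternates P(D) and D, and
# the receiver keeps the latest copy of each for combined correction.
def arq_receive(channel, detect_ok, invert, correct):
    """channel(kind) returns the next received block of that kind."""
    stored = {"data": None, "parity": None}
    kind = "data"                           # first transmission is D
    while True:
        rx = channel(kind)
        if detect_ok(rx):                   # error-detection bits pass
            return rx if kind == "data" else invert(rx)  # P(D) inverts to D
        stored[kind] = rx
        if stored["data"] is not None and stored["parity"] is not None:
            fixed = correct(stored["data"], stored["parity"])
            if fixed is not None:           # combined t-error correction
                return fixed
        kind = "parity" if kind == "data" else "data"    # alternate D, P(D)

# Noiseless demo with trivial stubs: the first block arrives intact.
print(arq_receive(channel=lambda kind: b"payload",
                  detect_ok=lambda b: True,
                  invert=lambda b: b,
                  correct=lambda d, p: None))   # b'payload'
```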

Proceedings ArticleDOI
G. Jaeschke1, Hans-Jörg Schek1
29 Mar 1982
TL;DR: An extension of the relational model is proposed consisting of Non First Normal Form (NF2) relations, with the relational algebra enriched mainly by so-called nest and unnest operations which transform between NF2 relations and the usual ones.
Abstract: Usually, the first normal form condition of the relational model of data is imposed. Presently, a broader class of database applications like office information systems is considered where this restriction is not convenient. Therefore, an extension of the relational model is proposed consisting of Non First Normal Form (NF2) relations. The relational algebra is enriched mainly by so-called nest and unnest operations which transform between NF2 relations and the usual ones. We state some properties of these operations and some rules which occur in combination with the operations of the usual relational algebra. Since we propose to use the NF2 model also for the internal data model, these rules are important not only for theoretical reasons but also for a practical implementation.
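A toy illustration of the two operations (relation and values are my own example, not from the paper): nest groups an atomic attribute into a set-valued one, and unnest flattens it back to an ordinary first-normal-form relation.

```python
# Toy nest/unnest pair on an NF2 ("non-first-normal-form") relation.
def nest(rel):
    grouped = {}
    for key, val in rel:
        grouped.setdefault(key, set()).add(val)
    return {(k, frozenset(vs)) for k, vs in grouped.items()}

def unnest(nf2_rel):
    return {(k, v) for k, vs in nf2_rel for v in vs}

flat = {("smith", "proj1"), ("smith", "proj2"), ("jones", "proj1")}
nested = nest(flat)
assert unnest(nested) == flat   # unnest undoes nest on this relation
print(nested)
# {('jones', frozenset({'proj1'})), ('smith', frozenset({'proj1', 'proj2'}))}
```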

Journal ArticleDOI
G.B. Street1, T. C. Clarke1, M. Krounbi1, K. Keiji Kanazawa1, Victor Y. Lee1, P. Pfluger1, J. C. Scott1, G. Weiser1 
TL;DR: In this article, ESR studies of both the electrochemically oxidized and the neutral polymer suggest the presence of highly mobile spins, and NMR studies are consistent with the α,α' bonding in these polymers.
Abstract: Oxidized and neutral films of polypyrrole have been prepared electrochemically in the absence of oxygen and water. The neutral films are insulating and can be readily oxidized by chemical oxidizing agents to give films of greater conductivity than can be achieved by electrochemical oxidation. Optical spectroscopy provides evidence for the similarity of the polymeric carbonium ion produced by both types of oxidation. NMR studies are consistent with the α,α’ bonding in these polymers; they also show the expected downfield shifts relative to the neutral polymer on both chemical and electrochemical oxidation. ESR studies of both the electrochemically oxidized and the neutral polymer suggest the presence of highly mobile spins.

Journal ArticleDOI
Ronald Fagin1
TL;DR: A new concept is introduced, called "faithfulness (with respect to direct product)," which enables powerful results to be proved about the existence of "Armstrong relations" in the presence of these new dependencies.
Abstract: Certain first-order sentences, called "dependencies," about relations in a database are defined and studied. These dependencies seem to include all previously defined dependencies as special cases. A new concept is introduced, called "faithfulness (with respect to direct product)," which enables powerful results to be proved about the existence of "Armstrong relations" in the presence of these new dependencies. (An Armstrong relation is a relation that obeys precisely those dependencies that are the logical consequences of a given set of dependencies.) Results are also obtained about characterizing the class of projections of those relations that obey a given set of dependencies.


Journal ArticleDOI
E. F. Codd1
TL;DR: This chapter discusses a series of arguments to support the claim that relational database technology offers dramatic improvements in productivity both for end users and for application programmers.
Abstract: It is well known that the growth in demands from end users for new applications is outstripping the capability of data processing departments to implement the corresponding application programs. There are two complementary approaches to attacking this problem (and both approaches are needed): one is to put end users into direct touch with the information stored in computers; the other is to increase the productivity of data processing professionals in the development of application programs. It is less well known that a single technology, relational database management, provides a practical foundation for both approaches. It is explained why this is so. While developing this productivity theme, it is noted that the time has come to draw a very sharp line between relational and nonrelational database systems, so that the label "relational" will not be used in misleading ways. The key to drawing this line is something called a "relational processing capability."

Journal ArticleDOI
TL;DR: This system has successfully detected all but a few timing problems for the IBM 3081 Processor Unit (consisting of almost 800 000 circuits) prior to the hardware debugging of timing.
Abstract: Timing Analysis is a design automation program that assists computer design engineers in locating problem timing in a clocked, sequential machine. The program is effective for large machines because, in part, the running time is proportional to the number of circuits. This is in contrast to alternative techniques such as delay simulation, which requires large numbers of test patterns, and path tracing, which requires tracing of all paths. The output of Timing Analysis includes "Slack" at each block to provide a measure of the severity of any timing problem. The program also generates standard deviations for the times so that a statistical timing design can be produced rather than a worst case approach. This system has successfully detected all but a few timing problems for the IBM 3081 Processor Unit (consisting of almost 800 000 circuits) prior to the hardware debugging of timing. The 3081 is characterized by a tight statistical timing design.
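The block-oriented computation that makes the running time linear in circuit count can be sketched in a few lines (toy netlist and delays of my own; the real program additionally handles clocking and statistical delay distributions): arrival times propagate forward, required times backward, and slack = required - arrival flags the critical logic without enumerating paths.

```python
# Minimal sketch of block-oriented timing analysis on a DAG (my toy
# example, not the IBM program). Each node visited once per sweep, so
# the cost is linear in the number of blocks, not the number of paths.
delays = {"in": 0, "g1": 2, "g2": 3, "out": 1}
fanin = {"in": [], "g1": ["in"], "g2": ["in"], "out": ["g1", "g2"]}
order = ["in", "g1", "g2", "out"]           # topological order
cycle_time = 7

arrival = {}
for n in order:                              # forward sweep
    arrival[n] = delays[n] + max((arrival[p] for p in fanin[n]), default=0)

required = {n: cycle_time for n in order}
for n in reversed(order):                    # backward sweep
    for p in fanin[n]:
        required[p] = min(required[p], required[n] - delays[n])

for n in order:
    print(n, "slack =", required[n] - arrival[n])
# The smallest-slack blocks (here in -> g2 -> out) form the critical path.
```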

Journal ArticleDOI
Bo N. J. Persson, Norton D. Lang1
TL;DR: In this paper, the nonradiative damping of a dipole outside a metal surface is calculated using a realistic surface potential for the metal conduction electrons in contrast to most earlier studies.
Abstract: We present a calculation of the nonradiative damping of a dipole outside a metal surface. The calculation uses a realistic surface potential for the metal conduction electrons in contrast to most earlier studies. A simple experiment is suggested to test the theoretical predictions.