
Showing papers by "IBM" published in 1991


Book
01 Jan 1991
TL;DR: The author examines the roles of entropy, information inequalities, and randomness in the design and analysis of codes for data compression and communication.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

45,034 citations


Journal ArticleDOI
Simon J. Sheather, M. C. Jones
TL;DR: The key to the success of the current procedure is the reintroduction of a non-stochastic term which was previously omitted, together with use of the bandwidth to reduce bias in estimation without inflating variance.
Abstract: We present a new method for data-based selection of the bandwidth in kernel density estimation which has excellent properties. It improves on a recent procedure of Park and Marron (which itself is a good method) in various ways. First, the new method has superior theoretical performance; second, it also has a computational advantage; third, the new method has reliably good performance for smooth densities in simulations, performance that is second to none in the existing literature. These methods are based on choosing the bandwidth to (approximately) minimize good quality estimates of the mean integrated squared error. The key to the success of the current procedure is the reintroduction of a non-stochastic term which was previously omitted, together with use of the bandwidth to reduce bias in estimation without inflating variance.
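
To make the role of the selected bandwidth concrete, here is a minimal Gaussian kernel density estimator in NumPy. The data are hypothetical and the selector shown is Silverman's rule of thumb, used only as a stand-in; it is not the Sheather-Jones plug-in procedure described above.

```python
import numpy as np

def gaussian_kde(data, grid, h):
    """Evaluate a Gaussian kernel density estimate with bandwidth h:
    f_hat(x) = (1/(n*h)) * sum_i phi((x - X_i)/h)."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

def silverman_bandwidth(data):
    """Rule-of-thumb selector, used here only as a stand-in for the
    Sheather-Jones plug-in method discussed in the abstract."""
    n = len(data)
    iqr = np.percentile(data, 75) - np.percentile(data, 25)
    sigma = min(np.std(data, ddof=1), iqr / 1.349)
    return 0.9 * sigma * n ** (-0.2)

rng = np.random.default_rng(0)
sample = rng.normal(size=200)          # hypothetical data
xs = np.linspace(-4.0, 4.0, 201)
h = silverman_bandwidth(sample)
density = gaussian_kde(sample, xs, h)  # smaller h: less bias, more variance
print(h, density.max())
```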

2,475 citations


Journal ArticleDOI
TL;DR: In this article, the authors present new algorithms that efficiently compute static single assignment forms and control dependence graphs for arbitrary control flow graphs using the concept of dominance frontiers and give analytical and experimental evidence that these data structures are usually linear in the size of the original program.
Abstract: In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
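
As an illustration of what a dominance frontier is, the sketch below uses the compact formulation later popularized by Cooper, Harvey, and Kennedy (not the algorithm of this paper), assuming immediate dominators are already known; the example CFG is hypothetical.

```python
def dominance_frontiers(preds, idom):
    """Compute DF(b) for every node b of a control flow graph.

    preds: dict mapping each node to its list of predecessors
    idom:  dict mapping each node to its immediate dominator (entry maps to itself)
    """
    df = {b: set() for b in preds}
    for b in preds:
        if len(preds[b]) >= 2:            # only join points contribute
            for p in preds[b]:
                runner = p
                while runner != idom[b]:  # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Hypothetical diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))   # DF(a) == DF(b) == {'join'}
```

In SSA construction, phi functions for a variable are then placed at the iterated dominance frontier of the blocks that define it.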

2,198 citations


Journal ArticleDOI
TL;DR: We show that the in-plane magnetoresistance of sandwiches of uncoupled ferromagnetic layers separated by ultrathin nonmagnetic metallic (Cu, Ag, Au) layers is strongly increased when the magnetizations of the two ferromagnetic layers are antiparallel.
Abstract: We show that the in-plane magnetoresistance of sandwiches of uncoupled ferromagnetic (Ni81Fe19, Ni80Co20, Ni) layers separated by ultrathin nonmagnetic metallic (Cu, Ag, Au) layers is strongly increased when the magnetizations of the two ferromagnetic layers are aligned antiparallel. Using NiFe layers, we report a relative change of resistance of 5.0% in 10 Oe at room temperature. The comparison between different ferromagnetic materials (alloys or pure elements) leads us to emphasize the role of bulk rather than interfacial spin-dependent scattering in these structures, in contrast to Fe/Cr multilayers.

1,690 citations


Journal ArticleDOI
TL;DR: The effects of thermal fluctuations, quenched disorder, and anisotropy on the phases and phase transitions in type-II superconductors are examined, focusing on linear and nonlinear transport properties.
Abstract: The effects of thermal fluctuations, quenched disorder, and anisotropy on the phases and phase transitions in type-II superconductors are examined, focusing on linear and nonlinear transport properties. In zero magnetic field there are two crossovers upon approaching T_c, first the "Ginzburg" crossover from mean-field behavior to the universality class of an uncharged superfluid, and then, much closer to T_c for strongly type-II systems, a crossover to the universality class of a charged superfluid. The primary focus of this paper is on the behavior in the presence of a penetrating magnetic field. In a clean system the vortex-lattice phase can melt due to thermal fluctuations; we estimate the phase boundary in a variety of regimes. Pinning of vortices due to impurities or other defects destroys the long-range correlations of the vortex lattice, probably replacing it with a new vortex-glass phase that has spin-glasslike off-diagonal long-range order and is truly superconducting, in contrast to conventional theories of "flux creep." The properties of this vortex-glass phase are examined, as well as the critical behavior near the transition from the vortex-glass to the vortex-fluid phase. The crossover from lattice to vortex-glass behavior for weak pinning is also examined. Linear and nonlinear conductivity measurements and other experiments on the high-T_c superconductors Y-Ba-Cu-O and Bi-Sr-Ca-Cu-O are discussed in light of the results. The latter is found to exhibit strongly two-dimensional behavior over large portions of its phase diagram.

1,523 citations


Journal ArticleDOI
TL;DR: The authors propose a computationally simple approximate expression to provide a unified metric to represent the effective bandwidth used by connections and the corresponding effective load of network links, which can then be used for efficient bandwidth management, routing, and call control procedures aimed at optimizing network usage.
Abstract: The authors propose a computationally simple approximate expression for the equivalent capacity or bandwidth requirement of both individual and multiplexed connections, based on their statistical characteristics and the desired grade-of-service (GOS). The purpose of such an expression is to provide a unified metric to represent the effective bandwidth used by connections and the corresponding effective load of network links. These link metrics can then be used for efficient bandwidth management, routing, and call control procedures aimed at optimizing network usage. While the methodology proposed can provide an exact approach to the computation of the equivalent capacity, the associated complexity makes it infeasible for real-time network traffic control applications. Hence, an approximation is required. The validity of the approximation developed is verified by comparison to both exact computations and simulation results.
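
For a sense of how such a closed-form metric can look, the sketch below codes two expressions commonly associated with the equivalent-capacity approach: a fluid-flow capacity for a single on-off source and a Gaussian (stationary) bound for aggregated traffic. The exact formulas and parameter conventions here are assumptions reconstructed from secondary accounts, not quoted from the paper, and the numbers are hypothetical.

```python
import math

def equivalent_capacity_onoff(peak_rate, utilization, mean_burst, buffer_size, eps):
    """Fluid-flow equivalent capacity of a single on-off source (assumed form).

    peak_rate in bits/s, mean_burst in seconds, buffer_size in bits,
    eps = target buffer-overflow probability, alpha = ln(1/eps).
    """
    alpha = math.log(1.0 / eps)
    a = alpha * mean_burst * (1.0 - utilization) * peak_rate - buffer_size
    c = ((a + math.sqrt(a * a + 4.0 * buffer_size * alpha * mean_burst
                        * utilization * (1.0 - utilization) * peak_rate))
         / (2.0 * alpha * mean_burst * (1.0 - utilization)))
    return min(c, peak_rate)

def stationary_capacity(mean_rates, rate_variances, eps):
    """Gaussian (stationary) approximation for aggregated traffic:
    mean aggregate rate plus a multiple of its standard deviation."""
    a_prime = math.sqrt(max(-2.0 * math.log(eps) - math.log(2.0 * math.pi), 0.0))
    return sum(mean_rates) + a_prime * math.sqrt(sum(rate_variances))

# Hypothetical connection: 10 Mb/s peak, 40% utilization, 100 ms mean bursts,
# a 1 Mb buffer, and a 1e-6 overflow target.
print(equivalent_capacity_onoff(10e6, 0.4, 0.1, 1e6, 1e-6))
```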

1,442 citations


Book ChapterDOI
01 Feb 1991
TL;DR: Initial results from an apparatus and protocol designed to implement quantum public key distribution are described, by which two users exchange a random quantum transmission, consisting of very faint flashes of polarized light, which remains secure against an adversary with unlimited computing power.
Abstract: We describe initial results from an apparatus and protocol designed to implement quantum public key distribution, by which two users, who share no secret information initially: 1) exchange a random quantum transmission, consisting of very faint flashes of polarized light; 2) by subsequent public discussion of the sent and received versions of this transmission estimate the extent of eavesdropping that might have taken place on it, and finally 3) if this estimate is small enough, can distill from the sent and received versions a smaller body of shared random information (key), which is certifiably secret in the sense that any third party's expected information on it is an exponentially small fraction of one bit. Because the system depends on the uncertainty principle of quantum physics, instead of usual mathematical assumptions such as the difficulty of factoring, it remains secure against an adversary with unlimited computing power.
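
The protocol described is the scheme commonly known as BB84. The toy simulation below reproduces only its classical skeleton (random bits and bases followed by basis reconciliation), with an ideal, eavesdropper-free channel standing in for the faint light pulses; it is an illustration, not the experimental protocol.

```python
import secrets

def bb84_sift(n):
    """Toy BB84 run over a noiseless, eavesdropper-free 'channel'."""
    alice_bits = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.choice("+x") for _ in range(n)]   # rectilinear or diagonal
    bob_bases = [secrets.choice("+x") for _ in range(n)]

    # Ideal rule: measuring in the same basis reproduces the bit,
    # measuring in the other basis gives a uniformly random outcome.
    bob_bits = [a if ab == bb else secrets.randbelow(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Public discussion: keep only the positions where the bases agree.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

alice_key, bob_key = bb84_sift(64)
assert alice_key == bob_key   # holds only in this idealized, noise-free toy
```

In the real protocol a random subset of the sifted bits is compared publicly to estimate eavesdropping before the remainder is distilled into a shared secret key.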

1,390 citations


Proceedings ArticleDOI
Danny Dolev, Cynthia Dwork, Moni Naor
03 Jan 1991
TL;DR: Non-malleable schemes are presented for encryption, string commitment, and zero-knowledge proofs of possession of knowledge; the schemes assume no trusted center, and a user need not know anything about the number or identity of other system users.
Abstract: The notion of non-malleable cryptography, an extension of semantically secure cryptography, is defined. Informally, the additional requirement is that given the ciphertext it is impossible to generate a different ciphertext so that the respective plaintexts are related. The same concept makes sense in the contexts of string commitment and zero-knowledge proofs of possession of knowledge. Non-malleable schemes for each of these three problems are presented. The schemes do not assume a trusted center; a user need not know anything about the number or identity of other system users.

1,180 citations


Journal ArticleDOI
TL;DR: It is shown that Co slabs are indirectly exchange coupled via thin Cu layers with a coupling that alternates back and forth between antiferromagnetic and ferromagnetic, confirming theoretical predictions more than 25 years old.
Abstract: Confirming theoretical predictions more than 25 years old, we show that Co slabs are indirectly exchange coupled via thin Cu layers with a coupling that alternates back and forth between antiferromagnetic and ferromagnetic. Four oscillations are observed with a period of ≃10 Å. Moreover, the antiferromagnetically coupled Co/Cu superlattices exhibit extraordinarily large saturation magnetoresistances at 300 K of more than 65%.

1,141 citations


Journal ArticleDOI
29 Nov 1991-Science
TL;DR: Recent developments in scanning tunneling microscopy have made atom and molecular manipulation a reality; single atoms can be placed at selected positions and structures can be built to a particular design atom-by-atom.
Abstract: The prospect of manipulating matter on the atomic scale has fascinated scientists for decades. This fascination may be motivated by scientific and technological opportunities, or from a curiosity about the consequences of being able to place atoms in a particular location. Advances in scanning tunneling microscopy have made this prospect a reality; single atoms can be placed at selected positions and structures can be built to a particular design atom-by-atom. Atoms and molecules may be manipulated in a variety of ways by using the interactions present in the tunnel junction of a scanning tunneling microscope. Some of these recent developments and some of the possible uses of atomic and molecular manipulation as a tool for science are discussed.

952 citations


Journal ArticleDOI
Stuart S. P. Parkin
TL;DR: The exchange-coupling strength is found to increase systematically from the 5d to 4d to 3d metals and exponentially with increasing number of d electrons along each period.
Abstract: We show that oscillatory indirect magnetic exchange coupling via transition metals sandwiched between ferromagnetic layers of Fe, Co, Ni, or Ni alloys is a general phenomenon. Surprisingly, the oscillation period is approximately the same, ≃10 Å, in all the transition metals in which we observe coupling, with the single exception of Cr, for which it is much longer. Furthermore, the exchange-coupling strength is found to increase systematically from the 5d to 4d to 3d metals and exponentially with increasing number of d electrons along each period.

Journal ArticleDOI
Moni Naor
TL;DR: It is shown how a pseudorandom generator can provide a bit-commitment protocol; an analysis of the number of bits communicated when parties commit to many bits simultaneously shows that the existence of pseudorandom generators suffices to ensure amortized O(1) bits of communication per bit commitment.
Abstract: We show how a pseudorandom generator can provide a bit-commitment protocol. We also analyze the number of bits communicated when parties commit to many bits simultaneously, and show that the assumption of the existence of pseudorandom generators suffices to assure amortized O(1) bits of communication per bit commitment.
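
A toy rendition of a commitment built this way is sketched below. The "generator" is a hash-based stand-in chosen purely for illustration (it is assumed, not a proven pseudorandom generator), and the parameter sizes are arbitrary.

```python
import hashlib, secrets

SEED_LEN, OUT_LEN = 16, 48   # arbitrary toy sizes (output three times the seed)

def prg(seed: bytes, out_len: int) -> bytes:
    """Hash-based stand-in for a pseudorandom generator (illustration only)."""
    out, counter = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def commit(bit: int, receiver_r: bytes):
    """Commit to a bit: send G(seed) for 0, G(seed) XOR r for 1; keep the seed."""
    seed = secrets.token_bytes(SEED_LEN)
    stream = prg(seed, OUT_LEN)
    return (stream if bit == 0 else xor(stream, receiver_r)), seed

def verify(commitment: bytes, seed: bytes, bit: int, receiver_r: bytes) -> bool:
    stream = prg(seed, OUT_LEN)
    return commitment == (stream if bit == 0 else xor(stream, receiver_r))

receiver_r = secrets.token_bytes(OUT_LEN)   # receiver's public random string
c, opening = commit(1, receiver_r)
assert verify(c, opening, 1, receiver_r)
```

Intuitively, the receiver's long random string is what makes the committed value binding, while hiding rests on the pseudorandomness of the generator's output.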

Proceedings ArticleDOI
20 May 1991
TL;DR: In this article, the authors adapted the techniques of mathematical epidemiology to the study of computer virus propagation and concluded that an imperfect defense against computer viruses can still be highly effective in preventing their widespread proliferation, provided that the infection rate does not exceed a well-defined critical epidemic threshold.
Abstract: The strong analogy between biological viruses and their computational counterparts has motivated the authors to adapt the techniques of mathematical epidemiology to the study of computer virus propagation. In order to allow for the most general patterns of program sharing, a standard epidemiological model is extended by placing it on a directed graph and a combination of analysis and simulation is used to study its behavior. The conditions under which epidemics are likely to occur are determined, and, in cases where they do, the dynamics of the expected number of infected individuals are examined as a function of time. It is concluded that an imperfect defense against computer viruses can still be highly effective in preventing their widespread proliferation, provided that the infection rate does not exceed a well-defined critical epidemic threshold.
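
A minimal simulation in this spirit is sketched below: an SIS (susceptible-infected-susceptible) process on a random directed graph, where the threshold behavior appears as the infection ("birth") rate is varied against the cure rate. The graph model and parameter names are illustrative assumptions, not the paper's exact setup.

```python
import random

def simulate_sis(n=200, out_degree=4, birth=0.05, cure=0.1, steps=300, seed=1):
    """SIS dynamics on a random directed graph.

    birth: per-step probability an infected node infects each out-neighbor
    cure:  per-step probability an infected node is cured
    Returns the fraction of infected nodes at each step.
    """
    rng = random.Random(seed)
    out_edges = {i: [rng.randrange(n) for _ in range(out_degree)] for i in range(n)}
    infected = {0}                      # a single initial infection
    history = []
    for _ in range(steps):
        nxt = set(infected)
        for node in infected:
            for neighbor in out_edges[node]:
                if rng.random() < birth:
                    nxt.add(neighbor)
            if rng.random() < cure:
                nxt.discard(node)
        infected = nxt
        history.append(len(infected) / n)
    return history

# Below the rough threshold (birth * out_degree well under cure) the outbreak dies out;
# well above it, a sizable fraction of nodes stays infected.
print(simulate_sis(birth=0.01)[-1], simulate_sis(birth=0.10)[-1])
```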

Book
01 May 1991
TL;DR: Using a combined experimental-theoretical approach unique in neuroscience, the authors present important new techniques for the physiological reconstruction of a large biological neuronal network in the CA3 hippocampal region in vitro.
Abstract: From the Publisher: The questions of how a large population of neurons in the brain functions, how synchronized firing of neurons is achieved, and what factors regulate how many and which neurons fire under different conditions form the central theme of this book. Using a combined experimental-theoretical approach unique in neuroscience, the authors present important new techniques for the physiological reconstruction of a large biological neuronal network. They begin by discussing experimental studies of the CA3 hippocampal region in vitro, focusing on single-cell and synaptic electrophysiology, particularly the effects a single neuron exerts on its neighbors. This is followed by a description of a computer model of the system, first for individual cells then for the entire detailed network, and the model is compared with experiments under a variety of conditions. The results shed significant light into the mechanisms of epilepsy, electroencephalograms, and biological oscillations and provide an excellent test case for theories of neural networks.

Journal ArticleDOI
15 Aug 1991-Nature
TL;DR: The operation of a bistable switch that derives its function from the motion of a single atom is reported; the switch is a prototype of a new class of potentially very small electronic devices, called atom switches.
Abstract: The scanning tunnelling microscope [1] (STM) has been employed in recent years in attempts to develop atomic-scale electronic devices, both by examining device-like characteristics in preexisting structures [2,3] and by creating new structures by the precise manipulation of atoms and molecules with the STM tip [4-6]. Here we report the operation of a bistable switch that derives its function from the motion of a single atom. A xenon atom is moved reversibly between stable positions on each of two stationary conducting 'leads', corresponding to the STM tip and a nickel surface. The state of the switch is set (that is, the xenon atom is moved to the desired location) by the application of a voltage pulse of the appropriate sign across the leads. The state of the switch is identified by measuring the conductance across the leads. This switch is a prototype of a new class of potentially very small electronic devices which we will call atom switches.

Journal ArticleDOI
TL;DR: After these alterations, tonic depolarization of the soma leads to adapting repetitive firing, whereas stimulation of the distal dendrites leads to bursting, and a critical set of parameters concerns the regulation of the pool of intracellular [Ca2+] that interacts with membrane channels (gK(C) and gK(AHP)), particularly in the dendrite.
Abstract: 1. We have developed a 19-compartment cable model of a guinea pig CA3 pyramidal neuron. Each compartment is allowed to contain six active ionic conductances: gNa, gCa, gK(DR) (where DR stands for delayed rectifier), gK(A), gK(AHP), and gK(C). The conductance gCa is of the high-voltage activated type. The model kinetics for the first five of these conductances incorporate voltage-clamp data obtained from isolated hippocampal pyramidal neurons. The kinetics of gK(C) are based on data from bullfrog sympathetic neurons. The time constant for decay of submembrane calcium derives from optical imaging of Ca signals in Purkinje cell dendrites. 2. To construct the model from available voltage-clamp data, we first reproduced current-clamp records from a model isolated neuron (soma plus proximal dendrites). We next assumed that ionic channel kinetics in the dendrites were the same as in the soma. In accord with dendritic recordings and calcium-imaging data, we also assumed that significant gCa occurs in dendrites. We then attached sections of basilar and apical dendritic cable. By trial and error, we found a distribution (not necessarily unique) of ionic conductance densities that was consistent with current-clamp records from the soma and dendrites of whole neurons and from isolated apical dendrites. 3. The resulting model reproduces the Ca(2+)-dependent spike depolarizing afterpotential (DAP) recorded after a stimulus subthreshold for burst elicitation. 4. The model also reproduces the behavior of CA3 pyramidal neurons injected with increasing somatic depolarizing currents: low-frequency (0.3-1.0 Hz) rhythmic bursting for small currents, with burst frequency increasing with current magnitude; then more irregular bursts followed by afterhyperpolarizations (AHPs) interspersed with brief bursts without AHPs; and finally, rhythmic action potentials without bursts. 5. The model predicts the existence of still another firing pattern during tonic depolarizing dendritic stimulation: brief bursts at less than 1 to approximately 12 Hz, a pattern not observed during somatic stimulation. These bursts correspond to rhythmic dendritic calcium spikes. 6. The model CA3 pyramidal neuron can be made to resemble functionally a CA1 pyramidal neuron by increasing gK(DR) and decreasing dendritic gCa and gK(C). Specifically, after these alterations, tonic depolarization of the soma leads to adapting repetitive firing, whereas stimulation of the distal dendrites leads to bursting. 7. A critical set of parameters concerns the regulation of the pool of intracellular [Ca2+] that interacts with membrane channels (gK(C) and gK(AHP)), particularly in the dendrites. (ABSTRACT TRUNCATED AT 400 WORDS)
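
For readers unfamiliar with compartmental models, the sketch below shows only the passive skeleton of such a model: a few compartments coupled by axial conductances and integrated with forward Euler. All of the active conductances listed above are omitted, and every number is an illustrative assumption rather than a parameter of the 19-compartment model.

```python
import numpy as np

def passive_cable(n_comp=5, t_stop=50.0, dt=0.01, i_inj=0.1):
    """Forward-Euler integration of a passive multi-compartment cable.

    Leak conductance plus nearest-neighbor axial coupling only; the active
    Na, Ca, and K conductances of the full model are deliberately left out.
    All parameters are in arbitrary toy units.
    """
    c_m, g_leak, e_leak, g_axial = 1.0, 0.05, -65.0, 0.5
    v = np.full(n_comp, e_leak)
    record = []
    for _ in range(int(t_stop / dt)):
        i_leak = g_leak * (e_leak - v)
        i_axial = np.zeros(n_comp)
        i_axial[:-1] += g_axial * (v[1:] - v[:-1])   # coupling to the right neighbor
        i_axial[1:] += g_axial * (v[:-1] - v[1:])    # coupling to the left neighbor
        i_ext = np.zeros(n_comp)
        i_ext[0] = i_inj                             # constant current into the "soma"
        v = v + dt * (i_leak + i_axial + i_ext) / c_m
        record.append(v.copy())
    return np.array(record)

voltages = passive_cable()
print(voltages[-1])   # depolarization decays with distance from the injected compartment
```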

Journal ArticleDOI
Rao Tummala
TL;DR: In this paper, a broad overview of packaging involving interconnecting, powering, protecting, and cooling semiconductor chips to meet a variety of computer system needs is presented, both for high-performance and low-performance applications.
Abstract: A broad overview of packaging involving interconnecting, powering, protecting, and cooling semiconductor chips to meet a variety of computer system needs is presented. The general requirements for ceramics in terms of thermal, mechanical, electrical, and dimensional control are presented, both for high-performance and low-performance applications. Glass-ceramics are identified as the best candidates for high-performance systems, and aluminum nitride, alumina, or mullite are identified for low-performance systems. Glass-ceramic/copper substrate technology is discussed as an example of high-performance ceramic packaging for use in the 1990s. Lower-dielectric-constant ceramics such as composites of silica, borosilicate, and cordierite, with or without polymers and porosity, are projected as potential ceramic substrate materials by the year 2000.


Journal ArticleDOI
TL;DR: It is shown that the necessary and sufficient condition for optimal successive refinement is that the solutions of the rate distortion problem can be written as a Markov chain; all finite-alphabet signals with Hamming distortion satisfy this requirement.
Abstract: The successive refinement of information consists of first approximating data using a few bits of information, then iteratively improving the approximation as more and more information is supplied. The goal is to achieve an optimal description at each stage. In general, an ongoing description which is rate-distortion optimal whenever it is interrupted is sought. It is shown that in order to achieve optimal successive refinement the necessary and sufficient condition is that the solutions of the rate distortion problem can be written as a Markov chain. In particular, all finite alphabet signals with Hamming distortion satisfy this requirement. It is also shown that the same is true for Gaussian signals with squared error distortion and for Laplacian signals with absolute error distortion. A simple counterexample with absolute error distortion and a symmetric source distribution which shows that successive refinement is not always achievable is presented.
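
In the standard formulation of this result (notation assumed here, not quoted from the paper), the condition can be stated as follows.

```latex
% Successive refinement from distortion D_1 down to D_2 <= D_1 is achievable
% iff there exist reproductions that are individually rate-distortion optimal,
\[
  \mathbb{E}\,d(X,\hat{X}_i) \le D_i, \qquad I(X;\hat{X}_i) = R(D_i), \qquad i = 1,2,
\]
% and that form a Markov chain, i.e. the coarse description is a degraded
% version of the fine one:
\[
  \hat{X}_1 \;\to\; \hat{X}_2 \;\to\; X,
  \qquad\text{equivalently}\qquad
  p(x,\hat{x}_2,\hat{x}_1) = p(x)\, p(\hat{x}_2 \mid x)\, p(\hat{x}_1 \mid \hat{x}_2).
\]
```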

Journal ArticleDOI
18 Oct 1991-Science
TL;DR: Electron diffraction patterns of the fullerene C60 in the gaseous state have been obtained by volatilizing it from a newly designed oven-nozzle at 730°C; least-squares refinement of a model incorporating all possible interatomic distances yields the bond lengths.
Abstract: Electron diffraction patterns of the fullerene C60 in the gaseous state have been obtained by volatilizing it from a newly designed oven-nozzle at 730°C. The many peaks of the experimental radial distribution curve calculated from the scattered intensity are completely consistent with icosahedral symmetry for the free molecule. On the basis of this symmetry assumption, least-squares refinement of a model incorporating all possible interatomic distances led to the values r_g(C1-C2) = 1.458(6) angstroms (Å) for the thermal average bond length within the five-member ring (that is, for the bond fusing five- and six-member rings) and r_g(C1-C6) = 1.401(10) Å for that connecting five-member rings (the bond fusing six-member rings). The weighted average of the two bond lengths and the difference between them are 1.439(2) Å and 0.057(6) Å, respectively. The diameter of the icosahedral sphere is 7.113(10) Å. The uncertainties in parentheses are estimated 2σ values.

Journal ArticleDOI
Daniel Rugar, Peter Grutter
TL;DR: A mechanical degenerate parametric amplifier has been devised which greatly increases the motional response of a microcantilever for small harmonic force excitations and can improve force detection sensitivity for measurements dominated by sensor noise or backaction effects.
Abstract: A mechanical degenerate parametric amplifier has been devised which greatly increases the motional response of a microcantilever for small harmonic force excitations. The amplifier can improve force detection sensitivity for measurements dominated by sensor noise or backaction effects and can also produce mechanical squeezed states. In an initial squeezing demonstration, the thermal noise (Brownian motion) of the cantilever was reduced in one phase by 4.9 dB.

Journal ArticleDOI
TL;DR: A queuing-theoretical formulation of the imprecise scheduling problem is presented and workload models that quantify the tradeoff between result quality and computation time are reviewed.
Abstract: The imprecise computation technique, which prevents timing faults and achieves graceful degradation by giving the user an approximate result of acceptable quality whenever the system cannot produce the exact result in time, is considered. Different approaches for scheduling imprecise computations in hard real-time environments are discussed. Workload models that quantify the tradeoff between result quality and computation time are reviewed. Scheduling algorithms that exploit this tradeoff are described. These include algorithms for scheduling to minimize total error, scheduling periodic jobs, and scheduling parallelizable tasks. A queuing-theoretical formulation of the imprecise scheduling problem is presented.
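
The mandatory/optional task structure underlying these workload models can be made concrete with the toy scheduler below: each task must receive its mandatory time, leftover capacity is handed out greedily by error weight, and unserved optional work contributes weighted error. This is an illustrative sketch with assumed task attributes, not one of the algorithms surveyed.

```python
from dataclasses import dataclass

@dataclass
class ImpreciseTask:
    name: str
    mandatory: float   # processing time that must be completed
    optional: float    # extra time that refines the result
    weight: float      # error per unit of optional work left undone

def schedule(tasks, capacity):
    """Grant every mandatory part, then spend the remaining capacity on
    optional parts in decreasing order of error weight (greedy)."""
    used = sum(t.mandatory for t in tasks)
    if used > capacity:
        raise ValueError("mandatory workload alone exceeds capacity")
    slack = capacity - used
    grants = {t.name: t.mandatory for t in tasks}
    for t in sorted(tasks, key=lambda t: t.weight, reverse=True):
        give = min(t.optional, slack)
        grants[t.name] += give
        slack -= give
    total_error = sum(t.weight * (t.optional - (grants[t.name] - t.mandatory))
                      for t in tasks)
    return grants, total_error

tasks = [ImpreciseTask("A", 2, 4, 3.0),
         ImpreciseTask("B", 1, 5, 1.0),
         ImpreciseTask("C", 3, 2, 2.0)]
print(schedule(tasks, capacity=10))   # leftover time goes to the heaviest-weighted task
```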

Journal ArticleDOI
Massimo V. Fischetti
TL;DR: In this article, Monte Carlo simulations of electron transport in seven semiconductors of the diamond and zinc-blende structure (Ge, Si, GaAs, InP, AlAs, InAs, GaP) and some of their alloys were performed at two lattice temperatures (77 and 300 K).
Abstract: Monte Carlo simulations of electron transport in seven semiconductors of the diamond and zinc-blende structure (Ge, Si, GaAs, InP, AlAs, InAs, GaP) and some of their alloys (AlxGa1-xAs, InxGa1-xAs, GaxIn1-xP) and hole transport in Si were performed at two lattice temperatures (77 and 300 K). The model uses band structures obtained from local empirical pseudopotential calculations and particle-lattice scattering rates computed from the Fermi golden rule to account for band-structure effects. Intervalley deformation potentials significantly lower than those which have been previously reported are needed to reproduce available experimental data. This is attributed to the more complicated band structures, particularly around the L- and X-symmetry points in most materials. Satisfactory agreement is obtained between Monte Carlo results and some experiments.

Journal ArticleDOI
TL;DR: The magnetic response of single, isolated, micron-size Au loops is found to contain a component which oscillates with the applied magnetic flux with a fundamental period of Φ0 = h/e, which corresponds to a persistent current of ≃(0.3-2.0) e·v_F/L.
Abstract: Measurements have been made of the low-temperature magnetic response of single, isolated, micron-size Au loops. The magnetic response is found to contain a component which oscillates with the applied magnetic flux with a fundamental period of Φ0 = h/e. The amplitude of the oscillatory component corresponds to a persistent current of ≃(0.3-2.0) e·v_F/L, 1 to 2 orders of magnitude larger than predicted by current theories.

Journal ArticleDOI
TL;DR: In this paper, the vibrational Raman and infrared spectra for chemically separated C60 and C70 fullerenes were reported and compared with the calculated spectrum for buckminsterfullerene.

Proceedings ArticleDOI
18 Jun 1991
TL;DR: This paper describes a statistical technique for aligning sentences with their translations in two parallel corpora and shows that even without the benefit of anchor points the correlation between the lengths of aligned sentences is strong enough that it should be expected to achieve an accuracy of between 96% and 97%.
Abstract: In this paper we describe a statistical technique for aligning sentences with their translations in two parallel corpora. In addition to certain anchor points that are available in our data, the only information about the sentences that we use for calculating alignments is the number of tokens that they contain. Because we make no use of the lexical details of the sentence, the alignment computation is fast and therefore practical for application to very large collections of text. We have used this technique to align several million sentences in the English-French Hansard corpora and have achieved an accuracy in excess of 99% in a randomly selected set of 1000 sentence pairs that we checked by hand. We show that even without the benefit of anchor points the correlation between the lengths of aligned sentences is strong enough that we should expect to achieve an accuracy of between 96% and 97%. Thus, the technique may be applicable to a wider variety of texts than we have yet tried.
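
A stripped-down version of length-based alignment can be written as a dynamic program over token counts, considering 1-1, 1-0, 0-1, 2-1, and 1-2 pairings of sentences. The cost function below is a crude squared-difference placeholder rather than the statistical model of the paper, and the length sequences are hypothetical.

```python
def align_by_length(src_lens, tgt_lens):
    """Dynamic-programming alignment of two sentence-length sequences."""
    INF = float("inf")
    skip_penalty, merge_penalty = 50.0, 10.0

    def cost(s, t, penalty=0.0):
        return (s - t) ** 2 / 100.0 + penalty   # crude placeholder score

    m, n = len(src_lens), len(tgt_lens)
    best = [[INF] * (n + 1) for _ in range(m + 1)]
    back = [[None] * (n + 1) for _ in range(m + 1)]
    best[0][0] = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            if best[i][j] == INF:
                continue
            moves = []
            if i < m and j < n:
                moves.append((i + 1, j + 1, cost(src_lens[i], tgt_lens[j])))   # 1-1
            if i < m:
                moves.append((i + 1, j, skip_penalty))                         # 1-0
            if j < n:
                moves.append((i, j + 1, skip_penalty))                         # 0-1
            if i + 1 < m and j < n:
                moves.append((i + 2, j + 1, cost(src_lens[i] + src_lens[i + 1],
                                                 tgt_lens[j], merge_penalty)))  # 2-1
            if i < m and j + 1 < n:
                moves.append((i + 1, j + 2, cost(src_lens[i],
                                                 tgt_lens[j] + tgt_lens[j + 1],
                                                 merge_penalty)))               # 1-2
            for ni, nj, c in moves:
                if best[i][j] + c < best[ni][nj]:
                    best[ni][nj] = best[i][j] + c
                    back[ni][nj] = (i, j)
    # Recover the alignment path from the backpointers.
    path, cell = [], (m, n)
    while cell != (0, 0):
        path.append(cell)
        cell = back[cell[0]][cell[1]]
    return list(reversed(path))

print(align_by_length([10, 12, 8, 20], [11, 11, 9, 19]))   # mostly 1-1 beads here
```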

Journal ArticleDOI
Mark N. Wegman, F. Kenneth Zadeck
TL;DR: Four algorithms are presented, all conservative in the sense that not all constants may be found, but each constant found is constant over all possible executions of the program.
Abstract: Constant propagation is a well-known global flow analysis problem. The goal of constant propagation is to discover values that are constant on all possible executions of a program and to propagate these constant values as far forward through the program as possible. Expressions whose operands are all constants can be evaluated at compile time and the results propagated further. Using the algorithms presented in this paper can produce smaller and faster compiled programs. The same algorithms can be used for other kinds of analyses (e.g., type determination). We present four algorithms in this paper, all conservative in the sense that not all constants may be found, but each constant found is constant over all possible executions of the program. These algorithms are among the simplest, fastest, and most powerful global constant propagation algorithms known. We also present a new algorithm that performs a form of interprocedural data flow analysis in which aliasing information is gathered in conjunction with constant propagation. Several variants of this algorithm are considered.
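
The basic idea, tracking for each variable whether it is a known constant and folding expressions whose operands are all known, can be shown with a very small sketch over an assumed toy three-address IR; it is far simpler than the four algorithms described above.

```python
# Toy three-address IR: (dest, op, arg1, arg2); arguments are ints or variable names.
# A variable's lattice value is an int ("known constant") or None ("not a constant").

def const_prop(instructions):
    env = {}

    def value(arg):
        return arg if isinstance(arg, int) else env.get(arg)

    folded = []
    for dest, op, a, b in instructions:
        va, vb = value(a), value(b)
        if va is not None and vb is not None:
            result = {"add": va + vb, "sub": va - vb, "mul": va * vb}[op]
            env[dest] = result
            folded.append((dest, "const", result, None))   # evaluated at "compile time"
        else:
            env[dest] = None
            folded.append((dest, op, a, b))
    return folded, env

program = [("x", "add", 2, 3),          # x = 5
           ("y", "mul", "x", 4),        # y = 20
           ("z", "add", "y", "input"),  # not foldable: 'input' is unknown
           ("w", "sub", "x", 1)]        # w = 4
print(const_prop(program))
```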

Journal ArticleDOI
TL;DR: In this article, a chemical-mechanical polish process for metal interconnects is described in terms of competition between an etching reaction which dissolves W and a passivation reaction which re-forms a passivating film on the surface of the W.
Abstract: Interconnect features of W metal, recessed in a dielectric, can be formed using a novel chemical-mechanical polish process. Mechanical action, to continually disrupt a surface passivating film on W, and chemical action, to remove W, appear to be requirements for workability of the process. A trial process chemistry using a ferricyanide etchant is described. Removal of the W is discussed in terms of competition between an etching reaction which dissolves W and a passivation reaction which re-forms the passivating film on the surface of the W. This novel processing technology is compared with earlier methods of fabricating metal interconnect structures.

Journal ArticleDOI
TL;DR: In this paper, the authors reported giant values of saturation magnetoresistance in sputtered antiferromagnetic Co/Cu multilayers containing thin Co and Cu layers 8-10 Å thick.
Abstract: We report giant values of saturation magnetoresistance in sputtered antiferromagnetic Co/Cu multilayers containing thin Co and Cu layers 8-10 Å thick. We discuss the key importance of the buffer layer in controlling the growth of flat Co and Cu layers. As shown by cross-section transmission electron microscopy, high-quality structures are found for growth on Fe buffer layers. Such structures display saturation magnetoresistance at 300 K of more than 65% with saturation fields of ≈10 kOe. These values are several times larger than previously found for any magnetic material at room temperature.

Journal ArticleDOI
Joachim Krug
TL;DR: Steady states of driven lattice gases with open boundaries are investigated, and two types of phase transitions involving nonanalytic changes in the density profiles and the particle-number fluctuation spectra are encountered upon varying the feeding rate and the particle interactions.
Abstract: Steady states of driven lattice gases with open boundaries are investigated. Particles are fed into the system at one edge, travel under the action of an external field, and leave the system at the opposite edge. Two types of phase transitions involving nonanalytic changes in the density profiles and the particle number fluctuation spectra are encountered upon varying the feeding rate and the particle interactions, and associated diverging length scales are identified. The principle governing the transitions is the tendency of the system to maximize the transported current.
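
The simplest concrete model of this kind is the totally asymmetric exclusion process with open boundaries, in which particles are injected on the left with rate alpha, hop to the right under exclusion, and exit on the right with rate beta. The sketch below simulates it and exhibits the boundary-driven change in the bulk density; it is a standard illustrative model with assumed parameters, not necessarily the specific model studied in the paper.

```python
import random

def tasep(length=100, alpha=0.6, beta=0.3, sweeps=5000, seed=0):
    """Totally asymmetric exclusion process with open boundaries.

    alpha: probability of injecting a particle at site 0 when it is empty
    beta:  probability of removing a particle from the last site
    Particles hop only to the right and exclude each other.
    Returns the time-averaged density profile over the second half of the run.
    """
    rng = random.Random(seed)
    lattice = [0] * length
    profile = [0.0] * length
    samples = 0
    for sweep in range(sweeps):
        for _ in range(length + 1):
            site = rng.randrange(-1, length)
            if site == -1:                                   # injection at the left edge
                if lattice[0] == 0 and rng.random() < alpha:
                    lattice[0] = 1
            elif site == length - 1:                         # extraction at the right edge
                if lattice[-1] == 1 and rng.random() < beta:
                    lattice[-1] = 0
            elif lattice[site] == 1 and lattice[site + 1] == 0:
                lattice[site], lattice[site + 1] = 0, 1      # hop to the right
        if sweep >= sweeps // 2:
            samples += 1
            for i, occ in enumerate(lattice):
                profile[i] += occ
    return [p / samples for p in profile]

# alpha > beta: the exit limits the current and a high-density phase builds up;
# swapping the rates gives a low-density phase (a boundary-driven transition).
high = sum(tasep(alpha=0.6, beta=0.3)) / 100
low = sum(tasep(alpha=0.3, beta=0.6)) / 100
print(high, low)
```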