
Showing papers published by "Bell Labs" in 1985


Book
Bjarne Stroustrup1
01 Jan 1985
TL;DR: Bjarne Stroustrup makes C++ even more accessible to those new to the language, while adding advanced information and techniques that even expert C++ programmers will find invaluable.
Abstract: From the Publisher: Written by Bjarne Stroustrup, the creator of C++, this is the world's most trusted and widely read book on C++. For this special hardcover edition, two new appendixes on locales and standard library exception safety have been added. The result is complete, authoritative coverage of the C++ language, its standard library, and key design techniques. Based on the ANSI/ISO C++ standard, The C++ Programming Language provides current and comprehensive coverage of all C++ language features and standard library components. For example: abstract classes as interfaces; class hierarchies for object-oriented programming; templates as the basis for type-safe generic software; exceptions for regular error handling; namespaces for modularity in large-scale software; run-time type identification for loosely coupled systems; the C subset of C++ for C compatibility and system-level work; standard containers and algorithms; standard strings, I/O streams, and numerics; C compatibility, internationalization, and exception safety. Bjarne Stroustrup makes C++ even more accessible to those new to the language, while adding advanced information and techniques that even expert C++ programmers will find invaluable.

6,795 citations


Journal ArticleDOI
TL;DR: Results of computer simulations of a network designed to solve a difficult but well-defined optimization problem, the Traveling-Salesman Problem, are presented and used to illustrate the computational power of the networks.
Abstract: Highly-interconnected networks of nonlinear analog neurons are shown to be extremely effective in computing. The networks can rapidly provide a collectively-computed solution (a digital output) to a problem on the basis of analog input information. The problems to be solved must be formulated in terms of desired optima, often subject to constraints. The general principles involved in constructing networks to solve specific problems are discussed. Results of computer simulations of a network designed to solve a difficult but well-defined optimization problem, the Traveling-Salesman Problem, are presented and used to illustrate the computational power of the networks. Good solutions to this problem are collectively computed within an elapsed time of only a few neural time constants. The effectiveness of the computation involves both the nonlinear analog response of the neurons and the large connectivity among them. Dedicated networks of biological or microelectronic neurons could provide the computational capabilities described for a wide class of problems having combinatorial complexity. The power and speed naturally displayed by such collective networks may contribute to the effectiveness of biological information processing.
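The collective-computation scheme is compact enough to sketch in code. Below is a minimal illustration, assuming a generic random symmetric weight matrix rather than the paper's specific TSP energy encoding; the gain, time constant, and iteration count are arbitrary demo values.

```python
import numpy as np

# Minimal continuous Hopfield network relaxing toward a minimum of the
# energy E = -1/2 v^T W v - b^T v. Illustrative only: W and b are random
# stand-ins, not the paper's Traveling-Salesman encoding.
rng = np.random.default_rng(0)

n = 16
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                # symmetric weights, as the model requires
np.fill_diagonal(W, 0.0)         # no self-connections
b = rng.normal(size=n)

def energy(v):
    return -0.5 * v @ W @ v - b @ v

u = rng.normal(size=n)           # internal "voltages" of the analog neurons
dt, tau, gain = 0.05, 1.0, 5.0   # assumed integration and gain parameters

for _ in range(2000):
    v = np.tanh(gain * u)            # nonlinear analog response of each neuron
    u += dt * (-u / tau + W @ v + b) # collective analog dynamics du/dt

print("final energy:", energy(np.tanh(gain * u)))
```

The digital answer is read off by thresholding the analog outputs v once the network settles.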

5,328 citations


Journal ArticleDOI
TL;DR: A model potential-energy function comprising both two- and three-atom contributions is proposed to describe interactions in solid and liquid forms of Si, suggesting a temperature-independent inherent structure underlies the liquid phase, just as for "simple" liquids with only pair interactions.
Abstract: A model potential-energy function comprising both two- and three-atom contributions is proposed to describe interactions in solid and liquid forms of Si. Implications of this potential are then explored by molecular-dynamics computer simulation, using 216 atoms with periodic boundary conditions. Starting with the diamond-structure crystal at low temperature, heating causes spontaneous nucleation and melting. The resulting liquid structurally resembles the real Si melt. By carrying out steepest-descent mappings of system configurations onto potential-energy minima, two main conclusions emerge: (1) a temperature-independent inherent structure underlies the liquid phase, just as for "simple" liquids with only pair interactions; (2) the Lindemann melting criterion for the crystal apparently can be supplemented by a freezing criterion for the liquid, where both involve critical values of appropriately defined mean displacements from potential minima.
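For reference, the two- and three-body terms introduced here (the form now known as the Stillinger-Weber potential) are usually quoted in reduced units roughly as follows; A, B, p, q, λ, and γ are fitted constants, a is the cutoff, and both terms vanish smoothly for r ≥ a:

```latex
% Two-body term (reduced units, zero for r >= a):
v_2(r) = A \left( B\,r^{-p} - r^{-q} \right) \exp\!\left[ (r - a)^{-1} \right]

% Three-body term, summed over bond pairs at each atom:
v_3(r_{ij}, r_{ik}, \theta_{jik}) = \lambda\,
  \exp\!\left[ \gamma (r_{ij} - a)^{-1} + \gamma (r_{ik} - a)^{-1} \right]
  \left( \cos\theta_{jik} + \tfrac{1}{3} \right)^{2}
```

The (cos θ + 1/3)² factor vanishes at the tetrahedral angle, so the three-body term penalizes departures from diamond-like local order.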

4,345 citations


Journal ArticleDOI
Jerry Tersoff1, D. R. Hamann1
TL;DR: In this paper, a metal tip is scanned along the surface while adjusting its height to maintain constant vacuum tunneling current, and a contour map of the surface is generated.
Abstract: The recent development of the “scanning tunneling microscope” (STM) by Binnig et al. [8.1–5] has made possible the direct real-space imaging of surface topography. In this technique, a metal tip is scanned along the surface while adjusting its height to maintain constant vacuum tunneling current. The result is essentially a contour map of the surface. This contribution reviews the theory [8.6–8] of STM, with illustrative examples. Because the microscopic structure of the tip is unknown, the tip wave functions are modeled as s-wave functions in the present approach [8.6, 7]. This approximation works best for small effective tip size. The tunneling current is found to be proportional to the surface local density of states (at the Fermi level), evaluated at the position of the tip. The effective resolution is roughly [2 Å(R+d)]^(1/2), where R is the effective tip radius and d is the gap distance. When applied to the 2×1 and 3×1 reconstructions of the Au(110) surface, the theory gives excellent agreement with experiment [8.4] if a 9 Å tip radius is assumed. For dealing with more complex or aperiodic surfaces, a crude but convenient calculational technique based on atom charge superposition is introduced; it reproduces the Au(110) results reasonably well. This method is used to test the structure-sensitivity of STM. The Au(110) image is found to be rather insensitive to the position of atoms beyond the first atomic layer.
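The central result quoted above is worth writing out. In the s-wave tip model the tunneling current is, to lowest order,

```latex
I \;\propto\; \sum_{\nu} \left| \psi_{\nu}(\mathbf{r}_0) \right|^{2} \delta(E_{\nu} - E_F) \;=\; \rho(\mathbf{r}_0, E_F)
```

so constant-current contours trace surfaces of constant local density of states ρ at the Fermi level, evaluated at the tip center r₀.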

3,192 citations


Journal ArticleDOI
L. J. Cimini, Jr.1
TL;DR: The analysis and simulation of a technique for combating the effects of multipath propagation and cochannel interference on a narrow-band digital mobile channel using the discrete Fourier transform to orthogonally frequency multiplex many narrow subchannels, each signaling at a very low rate, into one high-rate channel is discussed.
Abstract: This paper discusses the analysis and simulation of a technique for combating the effects of multipath propagation and cochannel interference on a narrow-band digital mobile channel. This system uses the discrete Fourier transform to orthogonally frequency multiplex many narrow subchannels, each signaling at a very low rate, into one high-rate channel. When this technique is used with pilot-based correction, the effects of flat Rayleigh fading can be reduced significantly. An improvement in signal-to-interference ratio of 6 dB can be obtained over the bursty Rayleigh channel. In addition, with each subchannel signaling at a low rate, this technique can provide added protection against delay spread. To enhance the behavior of the technique in a heavily frequency-selective environment, interpolated pilots are used. A frequency offset reference scheme is employed for the pilots to improve protection against cochannel interference.
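The DFT-based multiplexing step is easy to demonstrate. A minimal sketch, assuming 64 QPSK-modulated subchannels and an ideal channel; the paper's pilot-based correction, fading, and cochannel interference are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                   # number of narrow subchannels (assumed)

# One QPSK symbol per subcarrier: each subchannel signals at a low rate.
bits = rng.integers(0, 2, size=(N, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: the inverse DFT multiplexes N subchannels into one
# high-rate time-domain block on orthogonal carrier frequencies.
tx = np.fft.ifft(symbols) * np.sqrt(N)

# Receiver: the forward DFT separates the orthogonal subchannels again.
rx = np.fft.fft(tx) / np.sqrt(N)

assert np.allclose(rx, symbols)          # orthogonality gives perfect recovery
print("max subchannel error:", np.max(np.abs(rx - symbols)))
```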

2,627 citations


Journal ArticleDOI
TL;DR: This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.
Abstract: In this article we study the amortized efficiency of the “move-to-front” and similar rules for dynamically maintaining a linear list. Under the assumption that accessing the ith element from the front of the list takes t(i) time, we show that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules. Other natural heuristics, such as the transpose and frequency count rules, do not share this property. We generalize our results to show that move-to-front is within a constant factor of optimum as long as the access cost is a convex function. We also study paging, a setting in which the access cost is not convex. The paging rule corresponding to move-to-front is the “least recently used” (LRU) replacement rule. We analyze the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule (Belady's MIN algorithm) by a factor that depends on the size of fast memory. No on-line paging algorithm has better amortized performance.
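Both heuristics are short enough to state as code. A minimal sketch under the paper's cost model (the function and class names are mine; the LRU cache stands in for the paging rule):

```python
from collections import OrderedDict

def access_mtf(lst, x):
    """Move-to-front: pay the position of x (1-based), then move it to the front."""
    i = lst.index(x)                 # access cost t(i+1) in the paper's model
    lst.insert(0, lst.pop(i))
    return i + 1

class LRUCache:
    """'Least recently used' paging, the analogue of move-to-front."""
    def __init__(self, capacity):
        self.capacity, self.slots = capacity, OrderedDict()

    def access(self, page):
        if page in self.slots:               # hit: mark most recently used
            self.slots.move_to_end(page)
            return True
        if len(self.slots) >= self.capacity:
            self.slots.popitem(last=False)   # evict least recently used page
        self.slots[page] = None
        return False                         # miss (page fault)

lst = list("abcde")
print([access_mtf(lst, x) for x in "eec"])   # repeated items become cheap: [5, 1, 4]
cache = LRUCache(2)
print([cache.access(p) for p in "abab"])     # [False, False, True, True]
```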

2,378 citations


Journal ArticleDOI
TL;DR: A λ-calculus-based model for type systems that allows us to explore the interaction among the concepts of type, data abstraction, and polymorphism in a simple setting, unencumbered by complexities of production programming languages is developed.
Abstract: Our objective is to understand the notion of type in programming languages, present a model of typed, polymorphic programming languages that reflects recent research in type theory, and examine the relevance of recent research to the design of practical programming languages. Object-oriented languages provide both a framework and a motivation for exploring the interaction among the concepts of type, data abstraction, and polymorphism, since they extend the notion of type to data abstraction and since type inheritance is an important form of polymorphism. We develop a λ-calculus-based model for type systems that allows us to explore these interactions in a simple setting, unencumbered by complexities of production programming languages. The evolution of languages from untyped universes to monomorphic and then polymorphic type systems is reviewed. Mechanisms for polymorphism such as overloading, coercion, subtyping, and parameterization are examined. A unifying framework for polymorphic type systems is developed in terms of the typed λ-calculus augmented to include binding of types by quantification as well as binding of values by abstraction. The typed λ-calculus is augmented by universal quantification to model generic functions with type parameters, existential quantification and packaging (information hiding) to model abstract data types, and bounded quantification to model subtypes and type inheritance. In this way we obtain a simple and precise characterization of a powerful type system that includes abstract data types, parametric polymorphism, and multiple inheritance in a single consistent framework. The mechanisms for type checking for the augmented λ-calculus are discussed. The augmented typed λ-calculus is used as a programming language for a variety of illustrative examples. We christen this language Fun because fun instead of λ is the functional abstraction keyword and because it is pleasant to deal with. Fun is mathematically simple and can serve as a basis for the design and implementation of real programming languages with type facilities that are more powerful and expressive than those of existing programming languages. In particular, it provides a basis for the design of strongly typed object-oriented languages.
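The quantification mechanisms map loosely onto constructs in modern languages. A rough, hedged illustration using Python's typing module (the names and the Shape example are mine; Fun's existential types have no exact Python counterpart):

```python
from typing import TypeVar

# Universal quantification: a generic function with a type parameter.
T = TypeVar("T")

def identity(x: T) -> T:          # forall T. T -> T
    return x

# Bounded quantification: the type parameter ranges over subtypes of a
# bound, modeling subtyping and type inheritance.
class Shape:
    def area(self) -> float:
        return 0.0

S = TypeVar("S", bound=Shape)     # forall S <: Shape

def largest(a: S, b: S) -> S:
    return a if a.area() >= b.area() else b

# Existential quantification corresponds to information hiding: clients of
# `largest` see only the Shape interface, never the representation.
```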

1,875 citations


Journal ArticleDOI
TL;DR: Detailed calculations of the shift of exciton peaks are presented including (i) exact solutions for single particles in infinite wells, (ii) tunneling resonance calculations for finite wells, and (iii) variational calculations ofexciton binding energy in a field.
Abstract: We report experiments and theory on the effects of electric fields on the optical absorption near the band edge in GaAs/AlGaAs quantum-well structures. We find distinct physical effects for fields parallel and perpendicular to the quantum-well layers. In both cases, we observe large changes in the absorption near the exciton peaks. In the parallel-field case, the excitons broaden with field, disappearing at fields ~10⁴ V/cm; this behavior is in qualitative agreement with previous theory and in order-of-magnitude agreement with direct theoretical calculations of field ionization rates reported in this paper. This behavior is also qualitatively similar to that seen with three-dimensional semiconductors. For the perpendicular-field case, we see shifts of the exciton peaks to lower energies by up to 2.5 times the zero-field binding energy with the excitons remaining resolved at up to ~10⁵ V/cm: This behavior is qualitatively different from that of bulk semiconductors and is explained through a mechanism previously briefly described by us [D. A. B. Miller et al., Phys. Rev. Lett. 53, 2173 (1984)] called the quantum-confined Stark effect. In this mechanism the quantum confinement of carriers inhibits the exciton field ionization. To support this mechanism we present detailed calculations of the shift of exciton peaks including (i) exact solutions for single particles in infinite wells, (ii) tunneling resonance calculations for finite wells, and (iii) variational calculations of exciton binding energy in a field. We also calculate the tunneling lifetimes of particles in the wells to check the inhibition of field ionization. The calculations are performed using both the 85:15 split of band-gap discontinuity between conduction and valence bands and the recently proposed 57:43 split. Although the detailed calculations differ in the two cases, the overall shift of the exciton peaks is not very sensitive to the split ratio. We find excellent agreement with experiment with no fitted parameters.
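For orientation, the "exact solutions for single particles in infinite wells" mentioned above are the textbook levels for a well of width L_z and effective mass m:

```latex
E_n = \frac{n^{2} \pi^{2} \hbar^{2}}{2 m L_z^{2}}, \qquad n = 1, 2, 3, \ldots
```

A perpendicular field skews the well, pulling the electron and hole wave functions toward opposite walls and lowering the transition energy; that red shift, with ionization inhibited by the confining walls, is the quantum-confined Stark effect described above.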

1,731 citations


Book
William S. Cleveland1
01 Jan 1985
TL;DR: The book covers the power of graphs, how to improve them, the history of graphical methods, principles of graph construction, and graphical perception.
Abstract: The power of graphs, improving graphs, and history. Principles of graph construction. Graphical methods. Graphical perception.

1,421 citations


Journal ArticleDOI
TL;DR: The splay tree, a self-adjusting form of binary search tree, is developed and analyzed and is found to be as efficient as balanced trees when total running time is the measure of interest.
Abstract: The splay tree, a self-adjusting form of binary search tree, is developed and analyzed. The binary search tree is a data structure for representing tables and lists so that accessing, inserting, and deleting items is easy. On an n-node splay tree, all the standard search tree operations have an amortized time bound of O(log n) per operation, where by “amortized time” is meant the time per operation averaged over a worst-case sequence of operations. Thus splay trees are as efficient as balanced trees when total running time is the measure of interest. In addition, for sufficiently long access sequences, splay trees are as efficient, to within a constant factor, as static optimum search trees. The efficiency of splay trees comes not from an explicit structural constraint, as with balanced trees, but from applying a simple restructuring heuristic, called splaying, whenever the tree is accessed. Extensions of splaying give simplified forms of two other data structures: lexicographic or multidimensional search trees and link/cut trees.
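The splaying heuristic itself fits in a short sketch. A hedged recursive version covering the zig, zig-zig, and zig-zag cases (insertion, deletion, and the amortized analysis are omitted):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Move the node holding `key` (or the last node on its search path)
    to the root via zig, zig-zig, and zig-zag rotations."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                       # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                     # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return rotate_right(root) if root.left else root
    else:
        if root.right is None:
            return root
        if key > root.right.key:                      # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                    # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return rotate_left(root) if root.right else root

def insert(root, key):                                # naive BST insert, for the demo
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for k in [50, 30, 70, 20, 40]:
    root = insert(root, k)
root = splay(root, 40)
print(root.key)                                       # 40 is now the root
```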

1,321 citations


Journal ArticleDOI
R. E. Slusher1, Leo W. Hollberg1, Bernard Yurke1, Jerome Mertz1, J. F. Valley1 
TL;DR: In this paper, a balanced homodyne detector was used to measure the optical noise in the cavity, consisting primarily of vacuum fluctuations and a small component of spontaneous emission from the pumped Na atoms.
Abstract: Squeezed states of the electromagnetic field have been generated by nondegenerate four-wave mixing due to Na atoms in an optical cavity. The optical noise in the cavity, consisting primarily of vacuum fluctuations and a small component of spontaneous emission from the pumped Na atoms, is amplified in one quadrature of the optical field and deamplified in the other quadrature. These quadrature components are measured with a balanced homodyne detector. The total noise level in the deamplified quadrature drops below the vacuum noise level.

Book
01 Jan 1985
TL;DR: The book's coverage of CMOS circuit and logic design runs from the DC characteristics of the complementary CMOS inverter through clocking and structured design strategies.
Abstract: Introduction to CMOS Circuits. Introduction. MOS Transistors. MOS Transistor Switches. CMOS Logic. Circuit Representations. CMOS Summary. MOS Transistor Theory. Introduction. MOS Device Design Equations. The Complementary CMOS Inverter-DC Characteristics. Alternate CMOS Inverters. The Differential Stage. The Transmission Gate. Bipolar Devices. CMOS Processing Technology. Silicon Semiconductor Technology: An Overview. CMOS Technologies. Layout Design Rules. CAD Issues. Circuit Characterization and Performance Estimation. Introduction. Resistance Estimation. Capacitance Estimation. Inductance. Switching Characteristics. CMOS Gate Transistor Sizing. Power Consumption. Determination of Conductor Size. Charge Sharing. Design Margining. Yield. Scaling of MOS Transistor Dimensions. CMOS Circuit and Logic Design. Introduction. CMOS Logic Structures. Basic Physical Design of Simple Logic Gates. Clocking Strategies. Physical and Electrical Design of Logic Gates. I/O Structures. Structured Design Strategies. Introduction. Design Economics. Design Strategies. Design Methods. CMOS Chip Design Options. Design Capture Tools. Design Verification Tools. CMOS Test Methodologies. Introduction. Fault Models. Design for Testability. Automatic Test Pattern Generation. Design for Manufacturability. CMOS Subsystem Design. Introduction. Adders and Related Functions. Binary Counters. Multipliers and Filter Structures. Random Access and Serial Memory. Datapaths. FIR and IIR Filters. Finite State Machines. Programmable Logic Arrays. Random Control Logic.

Book ChapterDOI
Jerry Tersoff1, D. R. Hamann1
01 Jan 1985
TL;DR: A theory for tunneling between a real surface and a model probe tip, applicable to the recently developed "scanning tunneling microscope," is presented, and it is concluded that for the Au(110) measurements the experimental "image" is relatively insensitive to the positions of atoms beyond the first atomic layer.
Abstract: The recent development of the “scanning tunneling microscope” (STM) by Binnig et al. [8.1–5] has made possible the direct real-space imaging of surface topography. In this technique, a metal tip is scanned along the surface while adjusting its height to maintain constant vacuum tunneling current. The result is essentially a contour map of the surface. This contribution reviews the theory [8.6–8] of STM, with illustrative examples. Because the microscopic structure of the tip is unknown, the tip wave functions are modeled as s-wave functions in the present approach [8.6, 7]. This approximation works best for small effective tip size. The tunneling current is found to be proportional to the surface local density of states (at the Fermi level), evaluated at the position of the tip. The effective resolution is roughly [2 Å(R+d)]^(1/2), where R is the effective tip radius and d is the gap distance. When applied to the 2×1 and 3×1 reconstructions of the Au(110) surface, the theory gives excellent agreement with experiment [8.4] if a 9 Å tip radius is assumed. For dealing with more complex or aperiodic surfaces, a crude but convenient calculational technique based on atom charge superposition is introduced; it reproduces the Au(110) results reasonably well. This method is used to test the structure-sensitivity of STM. The Au(110) image is found to be rather insensitive to the position of atoms beyond the first atomic layer.

Journal ArticleDOI
TL;DR: In this article, the authors propose a simple model of equilibrium asset pricing in which there are differences in the amounts of information available for developing inferences about the returns parameters of alternative securities.
Abstract: We propose a simple model of equilibrium asset pricing in which there are differences in the amounts of information available for developing inferences about the returns parameters of alternative securities. In contrast with earlier work, we show that parameter uncertainty, or estimation risk, can have an effect upon market equilibrium. Under reasonable conditions, securities for which there is relatively little information are shown to have relatively higher systematic risk when that risk is properly measured, ceteris paribus. The initially very limited model is shown to be robust with respect to relaxation of a number of its principal assumptions. We provide theoretical support for the empirical examination of at least three proxies for relative information: period of listing, number of security returns observations available, and divergence of analyst opinion.

Journal ArticleDOI
TL;DR: For instance, this article found that infants seek out and use facial expressions to disambiguate situations by 12 months of age, and that facial expressions regulate behavior most clearly in contexts of uncertainty.
Abstract: Facial expressions of emotion are not merely responses indicative of internal states, they are also stimulus patterns that regulate the behavior of others. A series of four studies indicate that, by 12 months of age, human infants seek out and use such facial expressions to disambiguate situations. The deep side of a visual cliff was adjusted to a height that produced no clear avoidance and much referencing of the mother. If a mother posed joy or interest while her infant referenced, most infants crossed the deep side. If a mother posed fear or anger, very few infants crossed. If a mother posed sadness, an intermediate number crossed. These findings are not interpretable as a discrepancy reaction to an odd pose: in the absence of any depth whatsoever, few infants referenced the mother and those who did while the mother was posing fear hesitated but crossed nonetheless. The latter finding suggests that facial expressions regulate behavior most clearly in contexts of uncertainty.

Journal ArticleDOI
TL;DR: In this article, a mathematical model tailored to the physics of positron emissions is presented, and the model is used to describe the image reconstruction problem of PET as a standard problem in statistical estimation from incomplete data.
Abstract: Positron emission tomography (PET)—still in its research stages—is a technique that promises to open new medical frontiers by enabling physicians to study the metabolic activity of the body in a pictorial manner. Much as in X-ray transmission tomography and other modes of computerized tomography, the quality of the reconstructed image in PET is very sensitive to the mathematical algorithm to be used for reconstruction. In this article, we tailor a mathematical model to the physics of positron emissions, and we use the model to describe the basic image reconstruction problem of PET as a standard problem in statistical estimation from incomplete data. We describe various estimation procedures, such as the maximum likelihood (ML) method (using the EM algorithm), the method of moments, and the least squares method. A computer simulation of a PET experiment is then used to demonstrate the ML and the least squares reconstructions. The main purposes of this article are to report on what we believe is an...
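The ML reconstruction via the EM algorithm reduces to a famously short multiplicative update. A toy sketch for the Poisson emission model y ~ Poisson(Aλ), with a random stand-in detection matrix A rather than a real tomograph geometry:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy emission model: counts y ~ Poisson(A @ lam). Rows of A are detector
# tubes, columns are source pixels; entries are detection probabilities.
n_tubes, n_pixels = 40, 10
A = rng.uniform(size=(n_tubes, n_pixels))
A /= A.sum(axis=0, keepdims=True)        # each emission lands in some tube

lam_true = rng.uniform(1.0, 10.0, size=n_pixels)
y = rng.poisson(A @ lam_true)

# EM iteration for this incomplete-data problem:
#   lam <- lam * [A^T (y / (A lam))] / (A^T 1)
lam = np.ones(n_pixels)
for _ in range(500):
    forward = np.maximum(A @ lam, 1e-12)
    lam *= (A.T @ (y / forward)) / A.sum(axis=0)

print("estimate:", np.round(lam, 2))
print("truth:   ", np.round(lam_true, 2))
```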

Journal ArticleDOI
Steven Chu1, Leo W. Hollberg1, John E. Bjorkholm1, Alex E. Cable1, Arthur Ashkin1 
TL;DR: The confinement and cooling of atoms with laser light is reported, in which the atoms are localized in a 0.2 cm volume for a time in excess of 0.1 second and cooled to a temperature of T = 2.4 × 10⁻⁴ K.
Abstract: The scattering force due to resonance radiation pressure was first detected by Frisch in 1933.[1] Later, Ashkin[2] pointed out that laser light can exert a substantial force suitable for the optical manipulation of atoms, and numerous proposals to cool and trap neutral atoms with laser light followed.[3] Atoms in an atomic beam have been stopped by light,[4] with a final velocity spread corresponding to a temperature of 50–100 mK. We report here the confinement and cooling of atoms with laser light, in which the atoms are localized in a 0.2 cm volume for a time in excess of 0.1 second and cooled to a temperature of T = 2.4 × 10⁻⁴ K.[5]
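The reported temperature sits at the Doppler cooling limit for a two-level atom, a standard result worth writing out; with sodium's natural linewidth Γ/2π ≈ 10 MHz it gives roughly 2.4 × 10⁻⁴ K, matching the value above:

```latex
k_B T_{\min} = \frac{\hbar \Gamma}{2}
```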

Journal ArticleDOI
Peter McCullagh1
TL;DR: Methods for the analysis of ordinal categorical data are presented.

Journal ArticleDOI
TL;DR: In this article, the authors considered a class of M/G/1 queueing models with a server who is unavailable for occasional intervals of time and showed that the stationary number of customers present in the system at a random point in time is distributed as the sum of two or more independent random variables.
Abstract: This paper considers a class of M/G/1 queueing models with a server who is unavailable for occasional intervals of time. As has been noted by other researchers, for several specific models of this type, the stationary number of customers present in the system at a random point in time is distributed as the sum of two or more independent random variables, one of which is the stationary number of customers present at a random point in time in the standard M/G/1 queue (i.e., one in which the server is always available). In this paper we demonstrate that this type of decomposition holds, in fact, for a very general class of M/G/1 queueing models. The arguments employed are both direct and intuitive. In the course of this work, moreover, we obtain two new results that can lead to remarkable simplifications when solving complex M/G/1 queueing models.
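In generating-function form, the decomposition discussed above is usually stated as follows (a hedged paraphrase; Π denotes the probability generating function of the stationary number in system):

```latex
\Pi(z) = \Pi_{M/G/1}(z)\,\Pi_I(z), \qquad
\Pi_{M/G/1}(z) = \frac{(1-\rho)(1-z)\,\beta(\lambda - \lambda z)}{\beta(\lambda - \lambda z) - z}
```

where β is the Laplace-Stieltjes transform of the service-time distribution, ρ = λE[S], and Π_I is the generating function of the number of customers present at an arbitrary epoch during a non-serving interval.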

Journal ArticleDOI
TL;DR: The crystal structure of α-(AlMnSi) is very close to that of the icosahedral Al-Mn-Si alloys, as shown by using a modification of the "projection" method to generate icosahedral structures from six-dimensional lattices.
Abstract: We show that the α-(AlMnSi) crystal structure is closely (and systematically) related to that of the icosahedral Al-Mn-Si alloys. Using a modification of the "projection" method of generating icosahedral structures from six-dimensional lattices, we find a simple description of the α-(AlMnSi) structure. This structure, and (we conjecture) the icosahedral one, can also be described as a packing of 54-atom icosahedral clusters.
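The projection method is easiest to see one dimension down: projecting the square-lattice points that fall inside a strip of irrational slope yields the quasiperiodic Fibonacci chain. A hedged sketch (box size and window placement are arbitrary demo choices):

```python
import numpy as np

tau = (1 + np.sqrt(5)) / 2                   # golden ratio; sets the cut slope

# All integer lattice points in a finite box of Z^2.
n = np.arange(-40, 41)
pts = np.array([(i, j) for i in n for j in n], dtype=float)

# Orthonormal "parallel" (physical) and "perpendicular" directions.
e_par = np.array([tau, 1.0]) / np.hypot(tau, 1.0)
e_perp = np.array([-1.0, tau]) / np.hypot(tau, 1.0)

# Accept points whose perpendicular coordinate lies in a window as wide as
# the unit square's perpendicular extent, then project onto parallel space.
width = (1.0 + tau) / np.hypot(tau, 1.0)
mask = np.abs(pts @ e_perp) < width / 2
proj = np.sort(pts[mask] @ e_par)[5:-5]      # trim edge effects of the finite box

gaps = np.diff(proj)
print(np.unique(np.round(gaps, 6)))          # exactly two spacings, L and S
print("L/S =", gaps.max() / gaps.min())      # equals tau: quasiperiodic order
```

The 6-D → 3-D icosahedral construction in the paper is the same idea in higher dimension.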

Journal ArticleDOI
TL;DR: The effects of an exciton gas and an electron-hole plasma on the excitonic optical absorption in a two-dimensional semiconductor are calculated and compared with recent experimental results on absorption saturation in single- and multiple-quantum-well structures.
Abstract: We present theoretical results for the effects of an exciton gas and an electron-hole plasma on the excitonic optical absorption in a two-dimensional semiconductor and compare these with recent experimental results on absorption saturation in single- and multiple-quantum-well structures. A simple theoretical description of the nonlinear optical properties of these microstructures is developed for the case of low-density optical excitation near and above the band edge. We argue that the effects of Coulomb screening of excitons by the plasma are relatively weak in these structures but that the consequences of phase-space filling and exchange are significant in each case. We are able to explain the recent unexpected experimental result that "cold" excitons are more effective than "hot" carriers in saturating the excitonic absorption. Good agreement with the experimental data is obtained without adjustable parameters.

Journal ArticleDOI
James C. Candy1
TL;DR: A modulator that employs double integration and two-level quantization is easy to implement and is tolerant of parameter variation.
Abstract: Sigma delta modulation is viewed as a technique that employs integration and feedback to move quantization noise out of baseband. This technique may be iterated by placing feedback loop around feedback loop, but when three or more loops are used the circuit can latch into undesirable overloading modes. In the desired mode, a simple linear theory gives a good description of the modulation even when the quantization has only two levels. A modulator that employs double integration and two-level quantization is easy to implement and is tolerant of parameter variation. At sampling rates of 1 MHz it provides resolution equivalent to 16 bit PCM for voiceband signals. Digital filters that are suitable for converting the modulation to PCM are also described.
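The double-integration loop is compact in discrete time. A hedged behavioral sketch (unit loop coefficients and a moving-average decimator are demo simplifications; practical designs scale the loop for robustness):

```python
import numpy as np

def sigma_delta_2nd(x):
    """Second-order sigma-delta: two integrators around a two-level quantizer."""
    i1 = i2 = 0.0
    fb = 0.0                            # previous two-level output, fed back
    y = np.empty_like(x)
    for k, xk in enumerate(x):
        i1 += xk - fb                   # first integration of the error
        i2 += i1 - fb                   # second integration (double loop)
        fb = 1.0 if i2 >= 0 else -1.0   # two-level quantization
        y[k] = fb
    return y

fs, f0 = 1_000_000, 1000                # 1 MHz sampling; a voiceband test tone
t = np.arange(50_000) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)

bits = sigma_delta_2nd(x)
# A crude moving average stands in for the digital decimation filters that
# convert the two-level modulation to PCM.
pcm = np.convolve(bits, np.ones(256) / 256, mode="same")
print("rms error after filtering:", np.sqrt(np.mean((pcm - x) ** 2)))
```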

Book ChapterDOI
Raghu N. Kackar1
TL;DR: Through a designed experiment, the Ina Tile Company found that increasing the content of lime in the tile formulation from 1% to 5% reduced the tile size variation by a factor of ten, a breakthrough for the ceramic tile industry.
Abstract: A Japanese ceramic tile manufacturer knew in 1953 that it is more costly to control causes of manufacturing variations than to make a process insensitive to these variations. The Ina Tile Company knew that an uneven temperature distribution in the kiln caused variation in the size of the tiles. Since uneven temperature distribution was an assignable cause of variation, a process quality control approach would have increased manufacturing cost. The company wanted to reduce the size variation without increasing cost. Therefore, instead of controlling temperature distribution they tried to find a tile formulation that reduced the effect of uneven temperature distribution on the uniformity of tiles. Through a designed experiment, the Ina Tile Company found a cost-effective method for reducing tile size variation caused by uneven temperature distribution in the kiln. The company found that increasing the content of lime in the tile formulation from 1% to 5% reduced the tile size variation by a factor of ten. This discovery was a breakthrough for the ceramic tile industry.

Journal ArticleDOI
Veit Elser1
TL;DR: Various features of quasicrystal diffraction patterns are discussed and the projection scheme is used throughout and applied in some detail to the pattern formed by icosahedral Al-Mn.
Abstract: Various features of quasicrystal diffraction patterns are discussed. The projection scheme is used throughout and applied in some detail to the pattern formed by icosahedral Al-Mn. Comparison with the diffraction pattern formed by the vertices of a three-dimensional Penrose tiling leads to the value 4.60 Å for the rhombohedron edge length.

Journal ArticleDOI
30 Aug 1985-Science
TL;DR: The computer graphics revolution has stimulated the invention of many graphical methods for analyzing and presenting scientific data, such as box plots, two-tiered error bars, scatterplot smoothing, dot charts, and graphing on a log base 2 scale.
Abstract: Graphical perception is the visual decoding of the quantitative and qualitative information encoded on graphs. Recent investigations have uncovered basic principles of human graphical perception that have important implications for the display of data. The computer graphics revolution has stimulated the invention of many graphical methods for analyzing and presenting scientific data, such as box plots, two-tiered error bars, scatterplot smoothing, dot charts, and graphing on a log base 2 scale.

Journal ArticleDOI
TL;DR: Randomly distributed impurities that modify the local exchange couplings, but create no random fields and do not destroy long-range order, roughen the domain walls of Ising systems of dimensionality 5/3 < d < 5.
Abstract: Randomly placed impurities that alter the local exchange couplings, but do not generate random fields or destroy the long-range order, roughen domain walls in Ising systems for dimensionality 5/3 < d < 5. They also pin (localize) the walls in energetically favorable positions. This drastically slows down the kinetics of ordering. The pinned domain wall is a new critical phenomenon governed by a zero-temperature fixed point. For d = 2, the critical exponents for domain-wall pinning energies and roughness as a function of length scale are estimated from numerically generated ground states.

Journal ArticleDOI
TL;DR: In this article, the quantum well self-electrooptic effect devices with a CW laser diode as the light source were shown to have bistability at room temperature with 18 nW of incident power, or with 30 ns switching time at 1.6 mW with a reciprocal relation between switching power and speed.
Abstract: We report extended experimental and theoretical results for the quantum well self-electrooptic effect devices. Four modes of operation are demonstrated: 1) optical bistability, 2) electrical bistability, 3) simultaneous optical and electronic self-oscillation, and 4) self-linearized modulation and optical level shifting. All of these can be observed at room temperature with a CW laser diode as the light source. Bistability can be observed with 18 nW of incident power, or with 30 ns switching time at 1.6 mW, with a reciprocal relation between switching power and speed. We also now report bistability with low electrical bias voltages (e.g., 2 V) using a constant current load. Negative resistance self-oscillation is observed with an inductive load; this imposes a self-modulation on the transmitted optical beam. With current bias, self-linearized modulation is obtained, with absorbed optical power linearly proportional to current. This is extended to demonstrate light-by-light modulation and incoherent-to-incoherent conversion using a separate photodiode. The nature of the optoelectronic feedback underlying the operation of the devices is discussed, and the physical mechanisms which give rise to the very low optical switching energy (~4 fJ/μm²) are examined.

Journal ArticleDOI
Andrew T. Ogielski1
TL;DR: The dynamic scaling hypothesis and finite-size scaling explain well the observed temperature and size dependence of the data, and the functional form of the correlation functions is compatible with the scaling form if corrections to scaling are taken into account.
Abstract: I present an analysis of the dynamic behavior of short-range Ising spin glasses observed in stochastic simulations. The time dependence of the order parameter q(t) = ⟨S_x(0)S_x(t)⟩ (disorder-averaged), which is the same as that of the structure factor, and the time dependence of the related dynamic correlation functions have been recorded with good statistics and very long observation times. The spin-glass model with a symmetric distribution of discrete nearest-neighbor ±J interactions on a simple-cubic lattice was used. Simulations were performed with a special fast computer, allowing for the first-time investigation of the equilibrium dynamics for a wide range of temperatures (0.7 ≤ kT/J ≤ 5.0) and lattice sizes (8³, 16³, 32³, and 64³). I have found that the empirical formula q(t) = ct⁻ˣ exp(−ωt^β), with temperature-dependent exponents x(T) and β(T), describes the decay very well at all temperatures above the spin-glass transition. In the spin-glass phase, only the algebraic decay q(t) = ct⁻ˣ could be observed, with different temperature dependences of the exponent x(T). The dynamic scaling hypothesis and finite-size scaling explain well the observed temperature and size dependence of the data, and the functional form of the correlation functions is compatible with the scaling form if corrections to scaling are taken into account. The scaling behavior and the dynamic and static critical exponents found in my simulations are in reasonable agreement with recent experiments performed on insulating spin glasses, showing that despite its simplicity the discrete model of spin glasses analyzed in this work displays behavior similar to that seen in nature.

Journal ArticleDOI
Yung-Terng Wang1, R. J. T. Morris2
TL;DR: A taxonomy of load sharing algorithms is proposed that draws a basic dichotomy between source-initiative and server-initiative approaches, and a performance metric called the Q-factor (quality of load sharing) is defined which summarizes both overall efficiency and fairness of an algorithm.
Abstract: An important part of a distributed system design is the choice of a load sharing or global scheduling strategy. A comprehensive literature survey on this topic is presented. We propose a taxonomy of load sharing algorithms that draws a basic dichotomy between source-initiative and server-initiative approaches. The taxonomy enables ten representative algorithms to be selected for performance evaluation. A performance metric called the Q-factor (quality of load sharing) is defined which summarizes both overall efficiency and fairness of an algorithm and allows algorithms to be ranked by performance. We then evaluate the algorithms using both mathematical and simulation techniques. The results of the study show that: i) the choice of load sharing algorithm is a critical design decision; ii) for the same level of scheduling information exchange, server-initiative has the potential of outperforming source-initiative algorithms (whether this potential is realized depends on factors such as communication overhead); iii) the Q-factor is a useful yardstick; iv) some algorithms, which have previously received little attention, e.g., multiserver cyclic service, may provide effective solutions.

Journal ArticleDOI
A.E. Dunlop1, B.W. Kernighan2
TL;DR: A method of automatic placement for standard cells (polycells) that yields areas within 10-20 percent of careful hand placements is described, based on graph partitioning to identify groups of modules that ought to be close to each other.
Abstract: This paper describes a method of automatic placement for standard cells (polycells) that yields areas within 10-20 percent of careful hand placements. The method is based on graph partitioning to identify groups of modules that ought to be close to each other, and a technique for properly accounting for external connections at each level of partitioning. The placement procedure is in production use as part of an automated design system; it has been used in the design of more than 40 chips, in CMOS, NMOS, and bipolar technologies.
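The min-cut partitioning at the heart of the method can be sketched as a greedy pairwise-swap pass in the spirit of Kernighan-Lin bisection. The tiny netlist and single-level heuristic below are illustrative, not the production algorithm:

```python
import random

def cut_size(edges, side):
    """Number of (weighted) nets crossing the partition."""
    return sum(w for (u, v), w in edges.items() if side[u] != side[v])

def greedy_bisect(nodes, edges, seed=0):
    """Split nodes into two equal halves, then repeatedly make the single
    pairwise swap that most reduces the cut, until no swap helps."""
    order = nodes[:]
    random.Random(seed).shuffle(order)
    side = {u: i % 2 for i, u in enumerate(order)}
    improved = True
    while improved:
        improved = False
        base = cut_size(edges, side)
        best_gain, best_pair = 0, None
        for a in [u for u in nodes if side[u] == 0]:
            for b in [u for u in nodes if side[u] == 1]:
                side[a], side[b] = 1, 0          # tentative swap
                gain = base - cut_size(edges, side)
                side[a], side[b] = 0, 1          # undo
                if gain > best_gain:
                    best_gain, best_pair = gain, (a, b)
        if best_pair is not None:
            a, b = best_pair
            side[a], side[b] = 1, 0
            improved = True
    return side

nodes = list("abcdef")
edges = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 1,   # one tight cluster
         ("d", "e"): 1, ("e", "f"): 1, ("d", "f"): 1,   # another cluster
         ("c", "d"): 1}                                 # single net between them
side = greedy_bisect(nodes, edges)
print(side, "cut =", cut_size(edges, side))             # clusters grouped, cut = 1
```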