
Showing papers by "Bell Labs" published in 1996


Journal ArticleDOI
Gerard J. Foschini1
TL;DR: This paper addresses digital communication in a Rayleigh fading environment when the channel characteristic is unknown at the transmitter but is known (tracked) at the receiver, with the aim of leveraging the already highly developed 1-D codec technology.
Abstract: This paper addresses digital communication in a Rayleigh fading environment when the channel characteristic is unknown at the transmitter but is known (tracked) at the receiver. Inventing a codec architecture that can realize a significant portion of the great capacity promised by information theory is essential to a standout long-term position in highly competitive arenas like fixed and indoor wireless. Use (n_T, n_R) to express the number of antenna elements at the transmitter and receiver. An (n, n) analysis shows that despite the n received waves interfering randomly, capacity grows linearly with n and is enormous. With n = 8 at 1% outage and 21-dB average SNR at each receiving element, 42 b/s/Hz is achieved. The capacity is more than 40 times that of a (1, 1) system at the same total radiated transmitter power and bandwidth. Moreover, in some applications, n could be much larger than 8. In striving for significant fractions of such huge capacities, the question arises: Can one construct an (n, n) system whose capacity scales linearly with n, using as building blocks n separately coded one-dimensional (1-D) subsystems of equal capacity? With the aim of leveraging the already highly developed 1-D codec technology, this paper reports just such an invention. In this new architecture, signals are layered in space and time as suggested by a tight capacity bound.
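
The linear-in-n capacity claim can be checked numerically from the (n, n) capacity formula used in the paper, C = log2 det(I_n + (rho/n) H H†), with H an i.i.d. complex Gaussian channel matrix known only at the receiver. A minimal Monte-Carlo sketch, with parameters chosen to match the 8-antenna, 21-dB, 1%-outage example (the trial count and seed are my choices, and the exact figure will vary with them):

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr_db, trials = 8, 21.0, 20000
rho = 10 ** (snr_db / 10)           # average SNR at each receiving element

caps = np.empty(trials)
for t in range(trials):
    # i.i.d. complex Gaussian channel, unit average power per entry
    H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    # equal power per transmit antenna, channel known at the receiver only
    caps[t] = np.log2(np.linalg.det(np.eye(n) + (rho / n) * H @ H.conj().T).real)

print("1%% outage capacity ~ %.1f b/s/Hz" % np.quantile(caps, 0.01))
```

The 1% quantile of the simulated distribution should land near the 42 b/s/Hz figure quoted in the abstract.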

6,812 citations


Proceedings ArticleDOI
Lov K. Grover1
01 Jul 1996
TL;DR: This paper presents a fast quantum mechanical algorithm for searching an unsorted database of N items, finding a marked item in O(sqrt(N)) steps, whereas any classical algorithm requires on the order of N steps.
Abstract: Quantum mechanical computers were proposed in the early 1980's [Benioff80] and shown to be at least as powerful as classical computers, an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80's and early 90's [Deutsch85][BB92][BV93][Yao93], and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers: integer factorization, i.e., finding the factors of a given integer N, in a time which is a finite power of O(log N).

6,335 citations


Proceedings Article
03 Dec 1996
TL;DR: This work compares support vector regression (SVR) with a committee regression technique (bagging) based on regression trees and with ridge regression done in feature space, and expects SVR to have advantages in high-dimensional spaces because SVR optimization does not depend on the dimensionality of the input space.

Abstract: A new regression technique based on Vapnik's concept of support vectors is introduced. We compare support vector regression (SVR) with a committee regression technique (bagging) based on regression trees and with ridge regression done in feature space. On the basis of these experiments, we expect SVR to have advantages in high-dimensional spaces, because SVR optimization does not depend on the dimensionality of the input space.
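
As a rough illustration of the comparison described above, here is a hedged sketch using scikit-learn's SVR and bagged regression trees; the synthetic dataset and all hyperparameters are placeholders of mine, not the paper's benchmarks:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# synthetic regression data standing in for the paper's experiments
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)
bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50,
                       random_state=0).fit(X_tr, y_tr)

for name, model in [("SVR", svr), ("bagged trees", bag)]:
    print(name, "test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```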

4,009 citations


Proceedings Article
03 Dec 1996
TL;DR: This presentation reports results of applying the Support Vector method to problems of estimating regressions, constructing multidimensional splines, and solving linear operator equations.
Abstract: The Support Vector (SV) method was recently proposed for estimating regressions, constructing multidimensional splines, and solving linear operator equations [Vapnik, 1995]. In this presentation we report results of applying the SV method to these problems.

2,632 citations


Journal ArticleDOI
Wim Sweldens1
TL;DR: In this paper, a lifting scheme is proposed for constructing compactly supported wavelets with compactly supported duals; the scheme can also speed up the fast wavelet transform and is useful for building wavelets from interpolating scaling functions.
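
To show the flavor of lifting, here is a minimal sketch of one lifting step for the Haar wavelet (split, predict, update); this is the textbook special case, not the paper's general construction:

```python
import numpy as np

def haar_lift(x):
    """One lifting step of the Haar wavelet: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict: each odd sample from its even neighbor
    s = even + d / 2        # update: preserve the running average (coarse signal)
    return s, d

def haar_unlift(s, d):
    """Invert by undoing the update, then the predict; exact by construction."""
    even = s - d / 2
    odd = even + d
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 3.0, 1.0, 7.0])
s, d = haar_lift(x)
assert np.allclose(haar_unlift(s, d), x)   # perfect reconstruction, in place
print("coarse:", s, "detail:", d)
```

Because every lifting step is invertible by simply reversing its substeps, the transform is exactly reconstructing regardless of the predict and update operators chosen, which is what makes the framework attractive for building custom wavelets.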

2,322 citations


Proceedings ArticleDOI
06 May 1996
TL;DR: This paper presents a comprehensive approach to trust management, based on a simple language for specifying trusted actions and trust relationships, and describes a prototype implementation of a new trust management system, called PolicyMaker, that will facilitate the development of security features in a wide range of network services.
Abstract: We identify the trust management problem as a distinct and important component of security in network services. Aspects of the trust management problem include formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Existing systems that support security in networked applications, including X.509 and PGP, address only narrow subsets of the overall trust management problem and often do so in a manner that is appropriate to only one application. This paper presents a comprehensive approach to trust management, based on a simple language for specifying trusted actions and trust relationships. It also describes a prototype implementation of a new trust management system, called PolicyMaker, that will facilitate the development of security features in a wide range of network services.
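
As a purely hypothetical illustration of the flavor of such a system, the sketch below binds actions to trusted keys and lets credentials delegate trust; the encoding and semantics here are invented for this sketch and are not PolicyMaker's actual language:

```python
# Hypothetical trust-management toy: a policy names the keys locally trusted
# for an action, credentials delegate trust between keys, and a request is
# approved if the requester's key chains back to a policy root.
policy = {"sign_email": {"root_key"}}                    # locally trusted roots
credentials = [("root_key", "alice_key", "sign_email"),  # root delegates to alice
               ("alice_key", "bob_key", "sign_email")]   # alice delegates to bob

def approved(requester, action):
    trusted = set(policy.get(action, set()))
    changed = True
    while changed:                                       # propagate delegations
        changed = False
        for issuer, subject, act in credentials:
            if act == action and issuer in trusted and subject not in trusted:
                trusted.add(subject)
                changed = True
    return requester in trusted

print(approved("bob_key", "sign_email"))   # True via root -> alice -> bob
```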

2,247 citations


Posted Content
Lov K. Grover1
TL;DR: In early 1994, it was demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers: integer factorization, i.e., finding the factors of a given integer N, in a time which is a finite power of O(log N).
Abstract: Imagine a phone directory containing N names arranged in completely random order. In order to find someone's phone number with a 50% probability, any classical algorithm (whether deterministic or probabilistic) will need to look at a minimum of N/2 names. Quantum mechanical systems can be in a superposition of states and simultaneously examine multiple names. By properly adjusting the phases of various operations, successful computations reinforce each other while others interfere randomly. As a result, the desired phone number can be obtained in only O(sqrt(N)) steps. The algorithm is within a small constant factor of the fastest possible quantum mechanical algorithm.
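
The O(sqrt(N)) behavior can be illustrated by classically simulating the algorithm's two alternating steps, a phase flip of the marked entry followed by inversion about the mean, on an explicit amplitude vector (a simulation for intuition only; the directory size and marked index are arbitrary choices):

```python
import numpy as np

N, target = 1024, 137                      # unsorted "directory" of N entries
amp = np.full(N, 1 / np.sqrt(N))           # uniform superposition over all names
iters = int(np.floor(np.pi / 4 * np.sqrt(N)))

for _ in range(iters):
    amp[target] *= -1                      # oracle: flip the phase of the marked item
    amp = 2 * amp.mean() - amp             # diffusion: inversion about the mean

print(iters, "iterations; success probability = %.4f" % amp[target] ** 2)
```

For N = 1024 this runs about 25 iterations and leaves nearly all probability on the marked entry, versus the roughly N/2 = 512 lookups a classical search would need.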

1,481 citations


Book
15 Dec 1996

1,322 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: It turns out that the numbers F_0, F_1, and F_2 can be approximated in logarithmic space, whereas the approximation of F_k for k ≥ 6 requires n^Ω(1) space.
Abstract: The frequency moments of a sequence containing m_i elements of type i, for 1 ≤ i ≤ n, are the numbers F_k = Σ_{i=1}^{n} m_i^k. We consider the space complexity of randomized algorithms that approximate the numbers F_k when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F_0, F_1, and F_2 can be approximated in logarithmic space, whereas the approximation of F_k for k ≥ 6 requires n^Ω(1) space. Applications to databases are mentioned as well.
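
For intuition, here is a hedged sketch of the classic F_2 estimator from this line of work: a counter keeps one running sum z of random ±1 signs over the stream, and z² is an unbiased estimate of F_2. The sign table below is stored explicitly for simplicity; the logarithmic-space guarantee relies on drawing the signs from a small 4-wise independent hash family, and the paper combines means and a median where this sketch takes only a median:

```python
import random
from statistics import median

def f2_estimate(stream, universe, k=9):
    """Median of k independent AMS-style estimators for F_2 = sum_i m_i^2."""
    estimates = []
    for seed in range(k):
        rnd = random.Random(seed)
        sign = {i: rnd.choice((-1, 1)) for i in universe}  # random +/-1 per element
        z = sum(sign[a] for a in stream)   # one pass; real version keeps O(log n) bits
        estimates.append(z * z)            # E[z^2] = F_2 since signs are zero-mean
    return median(estimates)

stream = [1, 2, 1, 3, 1, 2, 4, 1]          # frequencies m = {1: 4, 2: 2, 3: 1, 4: 1}
print(f2_estimate(stream, universe=range(1, 5)), "vs exact", 4**2 + 2**2 + 1 + 1)
```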

1,279 citations


Journal ArticleDOI
David Lee1, Mihalis Yannakakis1
01 Aug 1996
TL;DR: The fundamental problems in testing finite state machines and techniques for solving them are reviewed, tracing progress in the area from its inception to the present state of the art.
Abstract: With advanced computer technology, systems are getting larger to fulfill more complicated tasks; however, they are also becoming less reliable. Consequently, testing is an indispensable part of system design and implementation; yet it has proved to be a formidable task for complex systems. This motivates the study of testing finite state machines to ensure the correct functioning of systems and to discover aspects of their behavior. A finite state machine contains a finite number of states and produces outputs on state transitions after receiving inputs. Finite state machines are widely used to model systems in diverse areas, including sequential circuits, certain types of programs, and, more recently, communication protocols. In a testing problem we have a machine about which we lack some information; we would like to deduce this information by providing a sequence of inputs to the machine and observing the outputs produced. Because of its practical importance and theoretical interest, the problem of testing finite state machines has been studied in different areas and at various times. The earliest published literature on this topic dates back to the 1950's. Activities in the 1960's and early 1970's were motivated mainly by automata theory and sequential circuit testing. The area seemed to have mostly died down until a few years ago, when the testing problem was resurrected and is now being studied anew due to its applications to conformance testing of communication protocols. While some old problems which had been open for decades were resolved recently, new concepts and more intriguing problems from new applications emerge. We review the fundamental problems in testing finite state machines and techniques for solving these problems, tracing progress in the area from its inception to the present state of the art. In addition, we discuss extensions of finite state machines and some other topics related to testing.
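
The basic setup, feeding input sequences to a black-box machine and comparing the observed outputs against a specification, can be sketched as follows. The two-state Mealy machine, the seeded transfer fault, and the ad-hoc test suite are all illustrative choices of mine, not one of the paper's systematic test-generation methods:

```python
# Specification of a Mealy machine: (state, input) -> (next_state, output).
spec = {
    ("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
    ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0),
}

def run(machine, start, inputs):
    """Feed an input sequence and collect the output sequence."""
    state, outputs = start, []
    for x in inputs:
        state, out = machine[(state, x)]
        outputs.append(out)
    return outputs

# A faulty implementation: a transfer fault on ("s1", "b") that leaves the
# output unchanged, so only a subsequent input can expose it.
impl = dict(spec)
impl[("s1", "b")] = ("s0", 0)

for test in ["ab", "ba", "aab", "abb", "bab"]:
    if run(spec, "s0", test) != run(impl, "s0", test):
        print("fault detected by input sequence:", test)   # "abb" exposes it
```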

1,273 citations


Journal ArticleDOI
02 Feb 1996-Science
TL;DR: In this article, the transition from the quasi-long-range order in a chain of antiferromagnetically coupled S = 1/2 spins to the true long-range order that occurs in a plane is shown to be not at all smooth.
Abstract: To make the transition from the quasi-long-range order in a chain of antiferromagnetically coupled S = 1/2 spins to the true long-range order that occurs in a plane, one can assemble chains to make ladders of increasing width. Surprisingly, this crossover between one and two dimensions is not at all smooth. Ladders with an even number of legs have purely short-range magnetic order and a finite energy gap to all magnetic excitations. Predictions of this ground state have now been verified experimentally. Holes doped into these ladders are predicted to pair and possibly superconduct.

Book
Michael R. Lyu1
30 Apr 1996
TL;DR: Covers technical foundations (software reliability and system reliability, the operational profile, a survey of software reliability modelling, model evaluation and recalibration techniques) and practices and experiences (best current practice of SRE, software reliability measurement experience).

Abstract: Technical foundations: introduction; software reliability and system reliability; the operational profile; software reliability modelling survey; model evaluation and recalibration techniques. Practices and experiences: best current practice of SRE; software reliability measurement experience; measurement-based analysis of software reliability; software fault and failure classification techniques; trend analysis in validation and maintenance; software reliability and field data analysis; software reliability process assessment. Emerging techniques: software reliability prediction metrics; software reliability and testing; fault-tolerant SRE; software reliability using fault trees; software reliability process simulation; neural networks and software reliability. Appendices: software reliability tools; software failure data set repository.

Journal ArticleDOI
TL;DR: A model which incorporates the physics of dynamic Jahn-Teller and double-exchange effects is presented and solved via a dynamical mean-field approximation, reproducing the behavior of the resistivity and magnetic transition temperature observed in La_{1-x}Sr_xMnO_3.
Abstract: A model for the doped rare-earth manganites such as La_{1-x}Sr_xMnO_3, incorporating the physics of dynamic Jahn-Teller and double-exchange effects, is presented and solved via a dynamical mean field approximation. The interplay of these two effects as the electron-phonon coupling is varied reproduces the observed behavior of the resistivity and magnetic transition temperature.

Journal ArticleDOI
Behzad Razavi1
TL;DR: In this paper, the phase noise in two inductorless CMOS oscillators is analyzed, leading to a noise-shaping function and a new definition of Q; two prototypes fabricated in a 0.5-µm CMOS technology are used to investigate the accuracy of the theoretical predictions.
Abstract: This paper presents a study of phase noise in two inductorless CMOS oscillators. First-order analysis of a linear oscillatory system leads to a noise shaping function and a new definition of Q. A linear model of CMOS ring oscillators is used to calculate their phase noise, and three phase noise phenomena, namely, additive noise, high-frequency multiplicative noise, and low-frequency multiplicative noise, are identified and formulated. Based on the same concepts, a CMOS relaxation oscillator is also analyzed. Issues and techniques related to simulation of noise in the time domain are described, and two prototypes fabricated in a 0.5-µm CMOS technology are used to investigate the accuracy of the theoretical predictions. Compared with the measured results, the calculated phase noise values of a 2-GHz ring oscillator and a 900-MHz relaxation oscillator at 5-MHz offset have an error of approximately 4 dB.

Journal ArticleDOI
TL;DR: In this paper, the authors describe the technology necessary to perform terahertz "T-ray" imaging, novel imaging techniques, and commercial applications of T-ray imaging.
Abstract: The use of terahertz pulses for imaging has opened new possibilities for scientific and industrial applications in the terahertz frequency range. In this article, we describe the technology necessary to perform terahertz "T-ray" imaging, novel imaging techniques, and commercial applications of T-ray imaging.

Journal ArticleDOI
TL;DR: The focus of the paper is on studying subjective measures of interestingness, which are classified into actionable and unexpected, and the relationship between them is examined.
Abstract: One of the central problems in the field of knowledge discovery is the development of good measures of interestingness of discovered patterns. Such measures of interestingness are divided into objective measures, those that depend only on the structure of a pattern and the underlying data used in the discovery process, and subjective measures, those that also depend on the class of users who examine the pattern. The focus of the paper is on studying subjective measures of interestingness. These measures are classified into actionable and unexpected, and the relationship between them is examined. The unexpected measure of interestingness is defined in terms of the belief system that the user has. Interestingness of a pattern is expressed in terms of how it affects the belief system. The paper also discusses how this unexpected measure of interestingness can be used in the discovery process.
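
As a purely hypothetical illustration of unexpectedness relative to a belief system (the triple encoding and scoring rule below are invented for this sketch and are not the paper's formalism):

```python
# Beliefs as (condition, expected outcome, confidence) triples: a discovered
# pattern is the more unexpected the more confident the beliefs it contradicts.
beliefs = [
    ("weekend", "sales_up", 0.9),
    ("holiday", "sales_up", 0.7),
    ("rain",    "sales_down", 0.4),
]

def unexpectedness(pattern_condition, pattern_outcome):
    score = 0.0
    for cond, outcome, conf in beliefs:
        if cond == pattern_condition and outcome != pattern_outcome:
            score += conf          # contradicting a firm belief is interesting
    return score

# A discovered pattern "weekend -> sales_down" contradicts a 0.9 belief.
print(unexpectedness("weekend", "sales_down"))   # 0.9
```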

Journal ArticleDOI
TL;DR: This article describes conventional A/D conversion, as well as its performance modeling, and examines the use of sigma-delta converters to convert narrowband bandpass signals with high resolution.
Abstract: Using sigma-delta A/D methods, high resolution can be obtained for only low to medium signal bandwidths. This article describes conventional A/D conversion, as well as its performance modeling. We then look at the technique of oversampling, which can be used to improve the resolution of classical A/D methods. We discuss how sigma-delta converters use the technique of noise shaping in addition to oversampling to allow high resolution conversion of relatively low bandwidth signals. We examine the use of sigma-delta converters to convert narrowband bandpass signals with high resolution. Several parallel sigma-delta converters, which offer the potential of extending high resolution conversion to signals with higher bandwidths, are also described.
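
The noise-shaping idea fits in a few lines: a first-order sigma-delta modulator integrates the input-minus-feedback error and quantizes to a single bit, and averaging over the oversampling ratio recovers the input. A minimal sketch (first-order loop with a boxcar decimator; practical converters use higher-order loops and proper decimation filters):

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order sigma-delta: integrate the input-minus-feedback error and
    quantize to one bit; quantization noise is pushed to high frequencies."""
    integrator, y = 0.0, np.empty(len(x))
    for i, sample in enumerate(x):
        integrator += sample - (y[i - 1] if i else 0.0)
        y[i] = 1.0 if integrator >= 0 else -1.0
    return y

osr = 64                                        # oversampling ratio
t = np.arange(osr * 256)
x = 0.5 * np.sin(2 * np.pi * t / (osr * 32))    # slow sine, well below f_s/2
bits = sigma_delta_1bit(x)

# crude decimation: average each block of `osr` one-bit samples
decimated = bits.reshape(-1, osr).mean(axis=1)
target = x.reshape(-1, osr).mean(axis=1)
print("max reconstruction error: %.3f" % np.max(np.abs(decimated - target)))
```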

Proceedings ArticleDOI
02 Dec 1996
TL;DR: A new algorithm for path profiling is described, which selects and places profile instrumentation to minimize run-time overhead and identifies longer paths than a previous technique, which predicted paths from edge profiles.
Abstract: A path profile determines how many times each acyclic path in a routine executes. This type of profiling subsumes the more common basic block and edge profiling, which only approximate path frequencies. Path profiles have many potential uses in program performance tuning, profile-directed compilation, and software test coverage. This paper describes a new algorithm for path profiling. This simple, fast algorithm selects and places profile instrumentation to minimize run-time overhead. Instrumented programs run with overhead comparable to the best previous profiling techniques. On the SPEC95 benchmarks, path profiling overhead averaged 31%, as compared to 16% for efficient edge profiling. Path profiling also identifies longer paths than a previous technique, which predicted paths from edge profiles (average of 88, versus 34 instructions). Moreover, profiling shows that the SPEC95 train input datasets covered most of the paths executed in the ref datasets.
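
The heart of the numbering idea can be sketched on an acyclic control-flow graph: give each vertex a path count and assign each edge an increment so that summing the increments along any entry-to-exit path yields a unique index. This is a minimal sketch of the basic numbering on a toy graph; the paper additionally handles loop back edges and places the increments to minimize instrumentation cost, which is omitted here:

```python
# DAG of basic blocks: vertex -> list of successors (an acyclic CFG).
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E", "F"], "E": [], "F": []}

num_paths, increment = {}, {}

def count(v):
    """Number of acyclic paths from v to an exit; records edge increments."""
    if v in num_paths:
        return num_paths[v]
    succs = cfg[v]
    if not succs:
        num_paths[v] = 1
    else:
        total = 0
        for s in succs:
            increment[(v, s)] = total   # paths through earlier siblings come first
            total += count(s)
        num_paths[v] = total
    return num_paths[v]

print("distinct acyclic paths from A:", count("A"))   # 4
for edge, inc in sorted(increment.items()):
    print(edge, "+%d" % inc)           # e.g. A->C gets +2, D->F gets +1
```

At run time, each instrumented edge adds its increment into a register, and the sum at the exit indexes a counter array with exactly one counter per acyclic path.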

Proceedings ArticleDOI
24 Mar 1996
TL;DR: RMTP provides sequenced, lossless delivery of bulk data from one sender to a group of receivers by using a packet-based selective repeat retransmission scheme, in which each acknowledgment packet carries a sequence number and a bitmap.
Abstract: This paper describes the design and implementation of a multicast transport protocol called RMTP. RMTP provides sequenced, lossless delivery of bulk data from one sender to a group of receivers. RMTP achieves reliability by using a packet-based selective repeat retransmission scheme, in which each acknowledgment (ACK) packet carries a sequence number and a bitmap. ACK handling is based on a multi-level hierarchical approach, in which the receivers are grouped into a hierarchy of local regions, with a designated receiver (DR) in each local region. Receivers in each local region periodically send ACKs to their corresponding DR, DRs send ACKs to the higher-level DRs, until the DRs in the highest level send ACKs to the sender, thereby avoiding the ACK-implosion problem. DRs cache received data and respond to retransmission requests of the receivers in their corresponding local regions, thereby decreasing end-to-end latency and improving resource usage. This paper also provides the measurements of RMTP's performance with receivers located at various sites in the Internet.
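
A hedged sketch of the ACK format described above: a sequence number marking the left window edge plus a bitmap of packets received beyond it, from which the sender derives exactly the holes to retransmit. The window size and field layout below are invented for illustration, not RMTP's wire format:

```python
def make_ack(received, window=8):
    """received: set of sequence numbers delivered so far at one receiver."""
    base = 0
    while base in received:
        base += 1                     # first missing packet = left window edge
    bitmap = [(base + 1 + i) in received for i in range(window)]
    return base, bitmap

def holes(base, bitmap):
    """Sequence numbers the sender should selectively retransmit."""
    return [base] + [base + 1 + i for i, got in enumerate(bitmap) if not got]

received = {0, 1, 2, 4, 5, 7}
base, bitmap = make_ack(received)
print("ACK: seq =", base, "bitmap =", bitmap)
print("sender retransmits:", holes(base, bitmap))   # [3, 6, ...]
```

A real sender would also cap the hole list at the highest sequence number actually transmitted; the fixed window here ignores that detail.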

Journal ArticleDOI
TL;DR: In this article, a temporal language that can constrain the time difference between events only with finite, yet arbitrary, precision is introduced and shown to be EXPSPACE-complete.
Abstract: The most natural, compositional, way of modeling real-time systems uses a dense domain for time. The satisfiability of timing constraints that are capable of expressing punctuality in this model, however, is known to be undecidable. We introduce a temporal language that can constrain the time difference between events only with finite, yet arbitrary, precision and show the resulting logic to be EXPSPACE-complete. This result allows us to develop an algorithm for the verification of timing properties of real-time systems with a dense semantics.

Journal ArticleDOI
TL;DR: The first experimental observation of nonlinear propagation effects in fiber Bragg gratings, resulting in nonlinear optical pulse compression and soliton propagation, is reported.
Abstract: We report the first experimental observation of nonlinear propagation effects in fiber Bragg gratings, resulting in nonlinear optical pulse compression and soliton propagation. The solitons occur at frequencies near the photonic band gap of the grating; they are due to a combination of the negative dispersion of the grating, which dominates the material dispersion, and self-phase modulation. The solitons propagate at velocities well below the speed of light in the uniform medium.

Journal ArticleDOI
TL;DR: The connection between cliques and efficient multi-prover interactive proofs is shown to yield hardness results on the complexity of approximating the size of the largest clique in a graph.

Abstract: The contribution of this paper is two-fold. First, a connection is established between approximating the size of the largest clique in a graph and multi-prover interactive proofs. Second, an efficient multi-prover interactive proof for NP languages is constructed, where the verifier uses very few random bits and communication bits. Together, these results yield hardness results on the complexity of approximating the size of the largest clique in a graph. Of independent interest is our proof of correctness for the multilinearity test of functions.

Journal ArticleDOI
TL;DR: Long-period fiber gratings are used to flatten the gain spectrum of erbium-doped fiber amplifiers, and it is shown that a chain of amplifiers can be equalized, leading to a bandwidth enhancement by a factor of 3.
Abstract: Long-period fiber gratings are used to flatten the gain spectrum of erbium-doped fiber amplifiers. A broadband amplifier with <0.2-dB gain variation over 30 nm is presented. We also show that a chain of amplifiers can be equalized, leading to a bandwidth enhancement by a factor of 3.

Journal ArticleDOI
TL;DR: A model-checking procedure and its implementation in the Cornell Hybrid Technology tool, HyTech, are presented; the procedure applies to hybrid automata whose continuous dynamics is governed by linear constraints on the variables and their derivatives.
Abstract: Presents a model-checking procedure and its implementation for the automatic verification of embedded systems. The system components are described as hybrid automata-communicating machines with finite control and real-valued variables that represent continuous environment parameters such as time, pressure and temperature. The system requirements are specified in a temporal logic with stop-watches, and verified by symbolic fixpoint computation. The verification procedure-implemented in the Cornell Hybrid Technology tool, HyTech-applies to hybrid automata whose continuous dynamics is governed by linear constraints on the variables and their derivatives. We illustrate the method and the tool by checking safety, liveness, time-bounded and duration requirements of digital controllers, schedulers and distributed algorithms.

Book ChapterDOI
01 Jan 1996
TL;DR: In this paper, the authors review the history of local regression and discuss four basic components that must be chosen in using local regression in practice: the weight function, the parametric family that is fitted locally, the bandwidth, and the assumptions about the distribution of the response.
Abstract: Local regression is an old method for smoothing data, having origins in the graduation of mortality data and the smoothing of time series in the late 19th century and the early 20th century. Still, new work in local regression continues at a rapid pace. We review the history of local regression. We discuss four of its basic components that must be chosen in using local regression in practice: the weight function, the parametric family that is fitted locally, the bandwidth, and the assumptions about the distribution of the response. A major theme of the paper is that these choices represent a modeling of the data; different data sets deserve different choices. We describe polynomial mixing, a method for enlarging polynomial parametric families. We introduce an approach to adaptive fitting, assessment of parametric localization. We describe the use of this approach to design two adaptive procedures: one automatically chooses the mixing degree of mixing polynomials at each x using cross-validation, and the other chooses the bandwidth at each x using C_p. Finally, we comment on the efficacy of using asymptotics to provide guidance for methods of local regression.
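
Three of the four components named above (a tricube weight function, a locally fitted linear family, and a fixed bandwidth) appear in this short sketch of local linear regression at a single point; these are illustrative fixed choices, not the paper's adaptive procedures:

```python
import numpy as np

def local_linear(x0, x, y, bandwidth):
    """Weighted least-squares line fit around x0, evaluated at x0."""
    u = np.abs(x - x0) / bandwidth
    w = np.where(u < 1, (1 - u**3) ** 3, 0.0)        # tricube weight function
    X = np.column_stack([np.ones_like(x), x - x0])   # local linear family
    beta, *_ = np.linalg.lstsq(X * w[:, None] ** 0.5,
                               y * w ** 0.5, rcond=None)
    return beta[0]                                   # fitted value at x0

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.2, 200)
fit = np.array([local_linear(x0, x, y, bandwidth=1.0) for x0 in x])
print("max abs error vs sin(x): %.2f" % np.max(np.abs(fit - np.sin(x))))
```

Changing the weight function, the polynomial degree, or letting the bandwidth vary with x changes the fit, which is exactly the paper's point that these choices constitute a model of the data.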

Proceedings Article
22 Jan 1996
TL;DR: In this article, the authors modify an interrupt-driven networking implementation to eliminate receive livelock without degrading other aspects of system performance, and present measurements demonstrating the success of their approach.
Abstract: Most operating systems use interface interrupts to schedule network tasks. Interrupt-driven systems can provide low overhead and good latency at low offered load, but degrade significantly at higher arrival rates unless care is taken to prevent several pathologies. These are various forms of receive livelock, in which the system spends all its time processing interrupts, to the exclusion of other necessary tasks. Under extreme conditions, no packets are delivered to the user application or the output of the system. To avoid livelock and related problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution. We modified an interrupt-driven networking implementation to do so; this eliminates receive livelock without degrading other aspects of system performance. We present measurements demonstrating the success of our approach.
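
The remedy can be sketched schematically: after the first interrupt, disable further receive interrupts and process packets in a polling loop with a quota, re-enabling interrupts only when the queue drains, so packet processing cannot starve other work. This is a toy model under my own simplifications, not the authors' kernel code:

```python
from collections import deque

class Nic:
    """Toy NIC: a receive queue plus an interrupt-enable flag."""
    def __init__(self, packets):
        self.queue = deque(packets)
        self.rx_interrupts_enabled = True
    def next_packet(self):
        return self.queue.popleft() if self.queue else None

def poll(nic, quota=16):
    """Process at most `quota` packets per round; re-enable interrupts only
    when the queue is empty, so heavy load degrades to bounded polling."""
    for _ in range(quota):
        if nic.next_packet() is None:
            nic.rx_interrupts_enabled = True
            return False                  # drained: back to interrupt mode
    return True                           # still busy: caller reschedules poll

nic = Nic(range(100))
nic.rx_interrupts_enabled = False         # the first interrupt switched us to polling
rounds = 0
while poll(nic):
    rounds += 1                           # other tasks get to run between rounds
print("poll rounds:", rounds, "interrupts re-enabled:", nic.rx_interrupts_enabled)
```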

Journal ArticleDOI
TL;DR: This paper describes architectural techniques for energy efficient implementation of programmable computation, particularly focussing on the computation needed in portable devices where event-driven user interfaces, communication protocols, and signal processing play a dominant role.
Abstract: With the popularity of portable devices such as personal digital assistants and personal communicators, as well as with increasing awareness of the economic and environmental costs of power consumption by desktop computers, energy efficiency has emerged as an important issue in the design of electronic systems. While power efficient ASIC's with dedicated architectures have addressed the energy efficiency issue for niche applications such as DSP, much of the computation continues to be implemented as software running on programmable processors such as microprocessors, microcontrollers, and programmable DSP's. Not only is this true for general purpose computation on personal computers and workstations, but also for portable devices, application-specific systems etc. In fact, firmware and embedded software executing on RISC and DSP processor cores that are embedded in ASIC's has emerged as a leading implementation methodology for speech coding, modem functionality, video compression, communication protocol processing etc. This paper describes architectural techniques for energy efficient implementation of programmable computation, particularly focussing on the computation needed in portable devices where event-driven user interfaces, communication protocols, and signal processing play a dominant role. Two key approaches described here are predictive system shutdown and extended voltage scaling. Results indicate that a large reduction in power consumption can be achieved over current day solutions with little or no loss in system performance.
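
As a hedged illustration of predictive shutdown, one of the two approaches named above: shut a component down when the predicted idle period exceeds the break-even time at which the shutdown-plus-wakeup energy cost is repaid. The trace, threshold, and exponentially weighted predictor below are invented for this sketch; the paper's predictor differs in detail:

```python
BREAK_EVEN_MS = 50.0         # assumed break-even idle time for shutdown to pay off
ALPHA = 0.5                  # weight of the most recent observation

def should_shut_down(predicted_idle_ms):
    return predicted_idle_ms > BREAK_EVEN_MS

prediction, shutdowns = 0.0, 0
idle_trace_ms = [5, 8, 120, 200, 150, 4, 6, 300, 280, 10]   # made-up idle periods
for observed in idle_trace_ms:
    if should_shut_down(prediction):
        shutdowns += 1
    # exponentially weighted moving average of past idle periods
    prediction = ALPHA * observed + (1 - ALPHA) * prediction

print("shutdown decisions taken:", shutdowns)
```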


Journal ArticleDOI
Thomas Ball1, Stephen G. Eick1
TL;DR: The invisible nature of software hides system complexity, particularly for large team-oriented projects, and four innovative visual representations of code have evolved to help solve this problem: line representation; pixel representation; file summary representation; and hierarchical representation.
Abstract: The invisible nature of software hides system complexity, particularly for large team-oriented projects. The authors have evolved four innovative visual representations of code to help solve this problem: line representation; pixel representation; file summary representation; and hierarchical representation. We first describe these four visual code representations and then discuss the interaction techniques for manipulating them. We illustrate our software visualization techniques through five case studies. The first three focus on software history and static software characteristics; the last two discuss execution behavior. The software library and its implementation are then described. Finally, we briefly review some related work and compare and contrast our different techniques for visualizing software.

Journal ArticleDOI
12 Apr 1996-Science
TL;DR: In this paper, the authors used far-field microscopy to measure the room-temperature optical properties of single dye molecules located on a polymer-air interface, and found that the lifetime dependence on dipole orientation was a consequence of the electromagnetic boundary conditions on the fluorescent radiation at the polymer air interface.
Abstract: Far-field microscopy was used to noninvasively measure the room-temperature optical properties of single dye molecules located on a polymer-air interface. Shifts in the fluorescence spectrum, due to perturbation by the locally varying molecular environment, and the orientation of the transition dipole moment were correlated to variation in the excited-state lifetime. The lifetime dependence on spectral shift is argued to result from the frequency dependence of the spontaneous emission rate; the lifetime dependence on dipole orientation was found to be a consequence of the electromagnetic boundary conditions on the fluorescent radiation at the polymer-air interface.