
Showing papers by "Bell Labs" published in 1999


Journal ArticleDOI
TL;DR: Using this joint space-time approach, spectral efficiencies ranging from 20 to 40 bit/s/Hz have been demonstrated in the laboratory under flat fading conditions at indoor fading rates.
Abstract: The signal detection algorithm of the vertical BLAST (Bell Laboratories Layered Space-Time) wireless communications architecture is briefly described. Using this joint space-time approach, spectral efficiencies ranging from 20 to 40 bit/s/Hz have been demonstrated in the laboratory under flat fading conditions at indoor fading rates. Early results are presented.
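
A minimal sketch of the nulling-and-cancellation idea behind V-BLAST detection, in a zero-forcing variant: null out the undetected substreams, slice the strongest one, subtract its contribution, and repeat. The ordering rule, QPSK slicer, and all names here are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of zero-forcing successive cancellation in the V-BLAST spirit.
import numpy as np

def vblast_zf_sic(H, y, slicer):
    """Detect the M transmitted symbols in y = H s + n, one substream at a time."""
    active = list(range(H.shape[1]))
    s_hat = np.zeros(H.shape[1], dtype=complex)
    y = y.copy()
    while active:
        G = np.linalg.pinv(H[:, active])                   # nulling matrix
        k = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1))) # best post-detection SNR first
        z = G[k] @ y                                       # null out the other substreams
        s = slicer(z)                                      # slice to nearest constellation point
        s_hat[active[k]] = s
        y = y - H[:, active[k]] * s                        # cancel the detected substream
        active.pop(k)
    return s_hat

qpsk = lambda z: (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)
M, N = 4, 6
H = (np.random.randn(N, M) + 1j * np.random.randn(N, M)) / np.sqrt(2)
s = qpsk(np.random.randn(M) + 1j * np.random.randn(M))
print(np.allclose(vblast_zf_sic(H, H @ s, qpsk), s))       # noiseless check: True
```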

1,791 citations


Journal ArticleDOI
TL;DR: Analysis of a mobile wireless link comprising M transmitter and N receiver antennas operating in a Rayleigh flat-fading environment concludes that, for a fixed number of antennas, as the length of the coherence interval increases, the capacity approaches the capacity obtained as if the receiver knew the propagation coefficients.
Abstract: We analyze a mobile wireless link comprising M transmitter and N receiver antennas operating in a Rayleigh flat-fading environment. The propagation coefficients between pairs of transmitter and receiver antennas are statistically independent and unknown; they remain constant for a coherence interval of T symbol periods, after which they change to new independent values which they maintain for another T symbol periods, and so on. Computing the link capacity, associated with channel coding over multiple fading intervals, requires an optimization over the joint density of T·M complex transmitted signals. We prove that there is no point in making the number of transmitter antennas greater than the length of the coherence interval: the capacity for M>T is equal to the capacity for M=T. Capacity is achieved when the T×M transmitted signal matrix is equal to the product of two statistically independent matrices: a T×T isotropically distributed unitary matrix times a certain T×M random matrix that is diagonal, real, and nonnegative. This result enables us to determine capacity for many interesting cases. We conclude that, for a fixed number of antennas, as the length of the coherence interval increases, the capacity approaches the capacity obtained as if the receiver knew the propagation coefficients.
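
The structure result stated in words above can be restated compactly (notation ours):

```latex
% Capacity-achieving transmit signal for the unknown-channel Rayleigh model:
S \;=\; \Phi\, V, \qquad
\Phi \in \mathbb{C}^{T \times T}\ \text{isotropically distributed unitary}, \quad
V \in \mathbb{R}^{T \times M}\ \text{diagonal, real, nonnegative},
```

with Φ and V statistically independent, and with the capacity for M > T equal to that for M = T.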

1,480 citations


Proceedings ArticleDOI
23 Mar 1999
TL;DR: This work develops a robust hierarchical clustering algorithm, ROCK, that employs links and not distances when merging clusters, and shows that ROCK not only generates better quality clusters than traditional algorithms, but also exhibits good scalability properties.
Abstract: We study clustering algorithms for data with Boolean and categorical attributes. We show that traditional clustering algorithms that use distances between points for clustering are not appropriate for Boolean and categorical attributes. Instead, we propose a novel concept of links to measure the similarity/proximity between a pair of data points. We develop a robust hierarchical clustering algorithm, ROCK, that employs links and not distances when merging clusters. Our methods naturally extend to non-metric similarity measures that are relevant in situations where a domain expert/similarity table is the only source of knowledge. In addition to presenting detailed complexity results for ROCK, we also conduct an experimental study with real-life as well as synthetic data sets. Our study shows that ROCK not only generates better quality clusters than traditional algorithms, but also exhibits good scalability properties.
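
A hedged sketch of the "links" notion: two points are linked through the neighbors they share, where neighborhood is defined by a similarity threshold. The Jaccard measure and the toy data are our illustrative choices, not necessarily the paper's.

```python
# Links between pairs of categorical points: number of common neighbors,
# with neighbors defined by Jaccard similarity >= theta (illustrative choice).
def jaccard(a, b):
    return len(a & b) / len(a | b)

def links(points, theta):
    """points: list of sets of categorical values."""
    n = len(points)
    nbr = [{j for j in range(n) if j != i and jaccard(points[i], points[j]) >= theta}
           for i in range(n)]
    return {(i, j): len(nbr[i] & nbr[j]) for i in range(n) for j in range(i + 1, n)}

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"milk", "eggs"},
           {"beer", "chips"}, {"beer", "chips", "salsa"}, {"chips", "salsa"}]
L = links(baskets, theta=0.5)
print({p: v for p, v in L.items() if v})  # e.g. (0,2) and (3,5): linked via a shared neighbor
```

The point of the example: points 0 and 2 are not similar enough to be neighbors themselves, yet they are linked through point 1, which is exactly the cluster-merging signal distances alone would miss.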

1,322 citations


Journal ArticleDOI
TL;DR: Robust wireless communication in high-scattering propagation environments is investigated using multi-element antenna arrays (MEAs) at both transmit and receive sites; a simplified but highly spectrally efficient space-time communication processing method is presented, and it is shown that, for a large number of antennas, a more complex architecture can offer no more than about 40% more capacity than the simple architecture presented.
Abstract: We investigate robust wireless communication in high-scattering propagation environments using multi-element antenna arrays (MEAs) at both transmit and receive sites. A simplified, but highly spectrally efficient space-time communication processing method is presented. The user's bit stream is mapped to a vector of independently modulated equal bit-rate signal components that are simultaneously transmitted in the same band. A detection algorithm similar to multiuser detection is employed to detect the signal components in white Gaussian noise (WGN). For a large number of antennas, a more efficient architecture can offer no more than about 40% more capacity than the simple architecture presented. A testbed that is now being completed operates at 1.9 GHz with up to 16 quadrature amplitude modulation (QAM) transmitters and 16 receive antennas. Under ideal operation at 18 dB signal-to-noise ratio (SNR), using 12 transmit antennas and 16 receive antennas (even with uncoded communication), the theoretical spectral efficiency is 36 bit/s/Hz, whereas the Shannon capacity is 71.1 bit/s/Hz. The 36 bits per vector symbol, which corresponds to over 200 billion constellation points, assumes a 5% block error rate (BLER) for 100 vector symbol bursts.
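
For context, the quoted Shannon capacity follows from the classical log-det formula for an i.i.d. Rayleigh channel; the short Monte-Carlo sketch below (modeling assumptions are ours) reproduces a figure close to the abstract's 71.1 bit/s/Hz at 18 dB SNR with 12 transmit and 16 receive antennas.

```python
# Monte-Carlo estimate of C = E[ log2 det(I + (rho/M) H H^*) ], i.i.d. Rayleigh H.
import numpy as np

def mimo_capacity(M, N, snr_db, trials=2000, rng=np.random.default_rng(0)):
    rho = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
        caps.append(np.log2(np.linalg.det(np.eye(N) + (rho / M) * H @ H.conj().T)).real)
    return float(np.mean(caps))

print(mimo_capacity(12, 16, 18.0))  # roughly 70 bit/s/Hz, in line with the abstract
```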

1,258 citations


Journal ArticleDOI
TL;DR: A finite-state Markov channel model representing Rayleigh fading channels is developed, along with a methodology for partitioning the received signal-to-noise ratio (SNR) into a finite number of states according to the time duration of each state.
Abstract: We form a finite-state Markov channel model to represent Rayleigh fading channels. We develop and analyze a methodology to partition the received signal-to-noise ratio (SNR) into a finite number of states according to the time duration of each state. Each state corresponds to a different channel quality indicated by the bit-error rate (BER). The number of states and SNR partitions are determined by the fading speed of the channel. Computer simulations are performed to verify the accuracy of the model.
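
A heavily hedged sketch of one standard construction for such models, assuming Rayleigh statistics and level-crossing-rate approximations for the adjacent-state transition probabilities; the paper's own partitioning rule (by state duration) may differ in detail.

```python
# Assumed construction: exponential SNR distribution gives steady-state
# probabilities; Rayleigh level-crossing rates give transition probabilities.
import numpy as np

def fsmc(thresholds, avg_snr, doppler_hz, symbol_time):
    A = np.asarray(thresholds, dtype=float)     # boundaries A_0 < ... < A_K (A_K may be inf)
    pi = np.exp(-A[:-1] / avg_snr) - np.exp(-A[1:] / avg_snr)   # P(state k)
    A_f = np.where(np.isfinite(A), A, 0.0)
    lcr = np.sqrt(2 * np.pi * A_f / avg_snr) * doppler_hz * np.exp(-A_f / avg_snr)
    lcr[~np.isfinite(A)] = 0.0                  # the infinite top boundary is never crossed
    K = len(A) - 1
    P = np.zeros((K, K))
    for k in range(K):
        if k < K - 1:
            P[k, k + 1] = lcr[k + 1] * symbol_time / pi[k]  # crossings up, per symbol
        if k > 0:
            P[k, k - 1] = lcr[k] * symbol_time / pi[k]      # crossings down, per symbol
        P[k, k] = 1.0 - P[k].sum()
    return pi, P

pi, P = fsmc([0.0, 3.0, 7.0, 15.0, np.inf], avg_snr=10.0, doppler_hz=50.0, symbol_time=1e-3)
print(pi.round(3))
print(P.round(3))
```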

871 citations


Proceedings ArticleDOI
21 Mar 1999
TL;DR: A taxonomy of multicast scenarios on the Internet is presented, together with a source-authentication scheme that can be regarded as a 'midpoint' between traditional message authentication codes and digital signatures, and an improved solution to the key revocation problem.
Abstract: Multicast communication is becoming the basis for a growing number of applications. It is therefore critical to provide sound security mechanisms for multicast communication. Yet, existing security protocols for multicast offer only partial solutions. We first present a taxonomy of multicast scenarios on the Internet and point out relevant security concerns. Next we address two major security problems of multicast communication: source authentication, and key revocation. Maintaining authenticity in multicast protocols is a much more complex problem than for unicast; in particular, known solutions are prohibitively inefficient in many cases. We present a solution that is reasonable for a range of scenarios. This approach can be regarded as a 'midpoint' between traditional message authentication codes and digital signatures. We also present an improved solution to the key revocation problem.
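
As a hedged illustration of a 'midpoint' of the kind described: one natural construction MACs each message under many keys while each receiver holds only a subset, so no small coalition of receivers can forge messages for everyone else. All parameters and names below are our assumptions, not the paper's concrete scheme.

```python
# Illustrative asymmetric-MAC flavor: sender knows L keys, receivers hold subsets.
import hmac, hashlib, secrets

L = 16
keys = [secrets.token_bytes(16) for _ in range(L)]      # sender's key set

def sign(msg: bytes):
    return [hmac.new(k, msg, hashlib.sha256).digest() for k in keys]

def make_receiver(indices):
    held = {i: keys[i] for i in indices}                # this receiver's key subset
    def verify(msg: bytes, tags) -> bool:
        return all(hmac.compare_digest(tags[i], hmac.new(k, msg, hashlib.sha256).digest())
                   for i, k in held.items())
    return verify

verify = make_receiver([1, 4, 7, 11])                   # e.g. a random subset per receiver
tags = sign(b"multicast payload")
print(verify(b"multicast payload", tags))               # True
print(verify(b"tampered payload", tags))                # False
```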

833 citations


Proceedings ArticleDOI
21 Mar 1999
TL;DR: It is shown that candidate flows thus identified indeed have a high posterior probability of taking a larger than average amount of bandwidth, and that the mechanism can be used to identify flows that may be misbehaving.
Abstract: This paper describes a mechanism we call "SRED" (stabilized random early drop). Like RED (random early detection), SRED pre-emptively discards packets with a load-dependent probability when a buffer in a router in the Internet or an intranet seems congested. SRED has an additional feature that over a wide range of load levels helps it stabilize its buffer occupation at a level independent of the number of active connections. SRED does this by estimating the number of active connections or flows. This estimate is obtained without collecting or analyzing state information on individual flows. The same mechanism can be used to identify flows that may be misbehaving, i.e., are taking more than their fair share of bandwidth. Since the mechanism is statistical in nature, the next step must be to collect state information of the candidates for "misbehaving", and to analyze that information. We show that candidate flows thus identified indeed have a high posterior probability of taking a larger than average amount of bandwidth.
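
A hedged sketch of the stateless flow-count estimate described above, under our reading of it: compare each arriving packet against a random entry of a small list of recently seen flow identifiers; the hit frequency is roughly 1/N for N active flows, so its reciprocal estimates N. The constants and the overwrite policy here are guesses.

```python
# Estimate the number of active flows without per-flow state (illustrative).
import random

class FlowEstimator:
    def __init__(self, size=1000, alpha=0.001):
        self.zombies = [None] * size     # small cache of recently seen flow IDs
        self.p_hit = 0.0                 # EWMA of the hit indicator
        self.alpha = alpha

    def packet(self, flow_id):
        slot = random.randrange(len(self.zombies))
        hit = self.zombies[slot] == flow_id
        if not hit:                      # simple overwrite-on-miss policy (a guess)
            self.zombies[slot] = flow_id
        self.p_hit += self.alpha * (hit - self.p_hit)
        return 1.0 / self.p_hit if self.p_hit > 0 else float("inf")

est = FlowEstimator()
flows = list(range(200))                 # 200 equally active flows
for _ in range(200000):
    n_hat = est.packet(random.choice(flows))
print(round(n_hat))                      # roughly 200
```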

746 citations


Book ChapterDOI
Volker Heine1
01 Jan 1999
TL;DR: It is shown that virtual or resonance surface states can exist which behave for practical purposes in the same way as true surface states; they are really the tails of the metal wave functions rather than separate states.
Abstract: The properties of metal-to-semiconductor junctions and of free semiconductor surfaces are usually explained on the basis of surface states. The theory of the metal contacts is discussed critically, because strictly speaking localized surface states cannot exist in such junctions. However, it is shown that virtual or resonance surface states can exist which behave for practical purposes in the same way. They are really the tails of the metal wave functions rather than separate states. In the past, the length of this tail has often been ignored. Some estimates of its length are made and its consequences pointed out. A semiquantitative discussion is given of various recent data, including the effect of an oxide layer on barrier height, the variation of barrier height with the metal, the work function of a free surface at high doping, and the effect of a cesium layer on the work function.

654 citations


Proceedings ArticleDOI
01 Jul 1999
TL;DR: This work generalizes basic signal processing tools to irregular connectivity triangle meshes through the design of a non-uniform relaxation procedure whose weights depend on the geometry, and shows its superiority over existing schemes whose weights depend only on connectivity.
Abstract: We generalize basic signal processing tools such as downsampling, upsampling, and filters to irregular connectivity triangle meshes. This is accomplished through the design of a non-uniform relaxation procedure whose weights depend on the geometry and we show its superiority over existing schemes whose weights depend only on connectivity. This is combined with known mesh simplification methods to build subdivision and pyramid algorithms. We demonstrate the power of these algorithms through a number of application examples including smoothing, enhancement, editing, and texture mapping.
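
A hedged sketch of geometry-dependent relaxation on a triangle mesh, using inverse edge lengths as one simple geometric weighting; the paper's actual weights come from its own relaxation design, so treat this only as the flavor of the idea.

```python
# Non-uniform Laplacian relaxation: each vertex moves toward a weighted
# average of its neighbors, with weights depending on geometry.
import numpy as np

def relax(verts, neighbors, lam=0.5, iters=10):
    """verts: (n,3) array; neighbors: list of neighbor index lists."""
    V = verts.astype(float).copy()
    for _ in range(iters):
        Vn = V.copy()
        for i, nbrs in enumerate(neighbors):
            d = np.linalg.norm(V[nbrs] - V[i], axis=1)
            w = 1.0 / np.maximum(d, 1e-12)          # geometric (inverse-length) weights
            w /= w.sum()
            Vn[i] = (1 - lam) * V[i] + lam * (w[:, None] * V[nbrs]).sum(axis=0)
        V = Vn
    return V

# tiny example: a noisy fan of vertices around a displaced center vertex
verts = np.array([[0, 0, 0.5], [1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]])
neighbors = [[1, 2, 3, 4], [0, 2, 4], [0, 1, 3], [0, 2, 4], [0, 1, 3]]
print(relax(verts, neighbors)[0])   # center vertex pulled toward the plane
```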

572 citations


MonographDOI
01 May 1999
TL;DR: In this article, the authors present a unified mathematical framework for a wide range of problems in estimation and control, and discuss the two most commonly used methodologies: the stochastic H^2 approach and the deterministic (worst-case) H^∞ approach.
Abstract: This monograph presents a unified mathematical framework for a wide range of problems in estimation and control. The authors discuss the two most commonly used methodologies: the stochastic H^2 approach and the deterministic (worst-case) H^∞ approach. Despite the fundamental differences in the philosophies of these two approaches, the authors have discovered that, if indefinite metric spaces are considered, they can be treated in the same way and are essentially the same. The benefits and consequences of this unification are pursued in detail, with discussions of how to generalize well-known results from H^2 theory to the H^∞ setting, as well as new results and insight, the development of new algorithms, and applications to adaptive signal processing.

548 citations


Proceedings Article
L.K. Grover1
01 Jan 1999
TL;DR: This paper introduces quantum mechanics and shows how it can be used for computation.
Abstract: As device structures get smaller, quantum mechanical effects predominate. About twenty years ago it was shown that it was possible to redesign devices so that they could still carry out the same functions. Recently it has been shown that the processing speed of computers based on quantum mechanics is indeed far superior to that of their classical counterparts for some important applications. This paper introduces quantum mechanics and shows how this can be used for computation.
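
The abstract stays high level; as one concrete instance of the speedup it mentions, here is a plain state-vector simulation of the author's well-known search algorithm. This is purely our illustration, not content from the paper.

```python
# Grover search over 2^8 items: O(sqrt(N)) iterations of oracle + diffusion.
import numpy as np

n, marked = 8, 45
N = 2 ** n
psi = np.full(N, 1 / np.sqrt(N))               # uniform superposition
iters = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~12 iterations for N = 256
for _ in range(iters):
    psi[marked] *= -1                          # oracle: flip the marked amplitude
    psi = 2 * psi.mean() - psi                 # inversion about the mean (diffusion)
print(iters, float(abs(psi[marked]) ** 2))     # success probability close to 1
```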

Proceedings Article
07 Sep 1999
TL;DR: In this article, the use of Regular Expressions (REs) as a flexible constraint specification tool that enables user-controlled focus to be incorporated into the pattern mining process is proposed.
Abstract: Discovering sequential patterns is an important problem in data mining with a host of application domains including medicine, telecommunications, and the World Wide Web. Conventional mining systems provide users with only a very restricted mechanism (based on minimum support) for specifying patterns of interest. In this paper, we propose the use of Regular Expressions (REs) as a flexible constraint specification tool that enables user-controlled focus to be incorporated into the pattern mining process. We develop a family of novel algorithms (termed SPIRIT ‐ Sequential Pattern mIning with Regular expressIon consTraints) for mining frequent sequential patterns that also satisfy user-specified RE constraints. The main distinguishing factor among the proposed schemes is the degree to which the RE constraints are enforced to prune the search space of patterns during computation. Our solutions provide valuable insights into the tradeoffs that arise when constraints that do not subscribe to nice properties (like anti-monotonicity) are integrated into the mining process. A quantitative exploration of these tradeoffs is conducted through an extensive experimental study on synthetic and real-life data sets.
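
A hedged, naive sketch of the interface: mine frequent subsequences level-wise and keep only those matching a user regular expression. The real SPIRIT algorithms push the RE into candidate generation to prune the search space; this version only illustrates the constraint semantics.

```python
# RE-constrained frequent-subsequence mining (naive enumeration, for illustration).
import re
from itertools import combinations

def subseqs(seq, k):
    return set(combinations(seq, k))           # ordered k-subsequences

def mine(db, min_sup, pattern, max_len=4):
    constraint = re.compile(pattern)
    results = {}
    for k in range(1, max_len + 1):
        counts = {}
        for seq in db:
            for cand in subseqs(seq, k):
                counts[cand] = counts.get(cand, 0) + 1
        for cand, c in counts.items():
            if c >= min_sup and constraint.fullmatch("".join(cand)):
                results[cand] = c
    return results

db = ["abcd", "abd", "acd", "bcd"]
print(mine(db, min_sup=2, pattern="a.*d"))      # frequent subsequences matching a.*d
```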

Journal ArticleDOI
TL;DR: The new wireless LAN standards developed by IEEE 802.11, ETSI BRAN, and MMAC are targeting data rates up to 11 Mb/s in the 2.4 GHz band and up to 54 Mb/s in the 5 GHz band.
Abstract: After the IEEE 802.11 standardization group established the first wireless LAN, several efforts were started to increase data rates and also to use other bands. This article describes the new wireless LAN standards developed by IEEE 802.11, ETSI BRAN, and MMAC. The new standards are targeting data rates up to 11 Mb/s in the 2.4 GHz band and up to 54 Mb/s in the 5 GHz band.

Proceedings ArticleDOI
01 Jan 1999
TL;DR: It is argued that there is a central notion of dependency common to these settings that can be captured within a single calculus, the Dependency Core Calculus (DCC), a small extension of Moggi's computational lambda calculus.
Abstract: Notions of program dependency arise in many settings: security, partial evaluation, program slicing, and call-tracking. We argue that there is a central notion of dependency common to these settings that can be captured within a single calculus, the Dependency Core Calculus (DCC), a small extension of Moggi's computational lambda calculus. To establish this thesis, we translate typed calculi for secure information flow, binding-time analysis, slicing, and call-tracking into DCC. The translations help clarify aspects of the source calculi. We also define a semantic model for DCC and use it to give simple proofs of noninterference results for each case.
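
A loose, hedged analogy to the dependency idea (not DCC's formal semantics): values carry a level, and a monadic bind joins the levels of everything an answer depends on, so a "public" result computed from a "secret" input stays secret.

```python
# Dependency tracking via a level-tagged monad (illustrative analogy only).
from dataclasses import dataclass
from typing import Callable

LEVELS = {"public": 0, "secret": 1}

@dataclass(frozen=True)
class Tagged:                # a value protected at some level, like T_l(a)
    level: str
    value: object

def bind(t: Tagged, f: Callable[[object], Tagged]) -> Tagged:
    out = f(t.value)
    hi = max(t.level, out.level, key=LEVELS.get)   # answer depends on both levels
    return Tagged(hi, out.value)

salary = Tagged("secret", 90000)
bonus = bind(salary, lambda s: Tagged("public", round(s * 0.1)))
print(bonus)   # Tagged(level='secret', value=9000): the dependency is recorded
```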

Journal ArticleDOI
01 Jun 1999
TL;DR: This paper proposes join synopses as an effective solution for this problem and shows how precomputing just one join synopsis for each relation suffices to significantly improve the quality of approximate answers for arbitrary queries with foreign key joins.
Abstract: In large data warehousing environments, it is often advantageous to provide fast, approximate answers to complex aggregate queries based on statistical summaries of the full data. In this paper, we demonstrate the difficulty of providing good approximate answers for join-queries using only statistics (in particular, samples) from the base relations. We propose join synopses as an effective solution for this problem and show how precomputing just one join synopsis for each relation suffices to significantly improve the quality of approximate answers for arbitrary queries with foreign key joins. We present optimal strategies for allocating the available space among the various join synopses when the query workload is known and identify heuristics for the common case when the workload is not known. We also present efficient algorithms for incrementally maintaining join synopses in the presence of updates to the base relations. Our extensive set of experiments on the TPC-D benchmark database shows the effectiveness of join synopses and various other techniques proposed in this paper.
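
A hedged sketch of the join-synopsis idea on an invented schema: sample one relation and store each sampled tuple pre-joined along its foreign key, then answer join aggregates by scaling up over the synopsis.

```python
# Precomputed join synopsis for a foreign-key join (schema and names invented).
import random

orders = [{"oid": i, "cust": i % 100, "amount": float(i % 7)} for i in range(10000)]
customers = {c: {"cust": c, "region": "EU" if c % 2 else "US"} for c in range(100)}

def join_synopsis(rel, fk_lookup, fk, size, rng=random.Random(0)):
    sample = rng.sample(rel, size)
    return [{**t, **fk_lookup[t[fk]]} for t in sample]  # tuple joined with its FK match

syn = join_synopsis(orders, customers, "cust", size=500)

# approximate: SELECT SUM(amount) FROM orders JOIN customers ... WHERE region = 'EU'
scale = len(orders) / len(syn)
approx = scale * sum(t["amount"] for t in syn if t["region"] == "EU")
exact = sum(o["amount"] for o in orders if customers[o["cust"]]["region"] == "EU")
print(round(approx, 1), exact)   # the scaled synopsis aggregate tracks the exact one
```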

Journal ArticleDOI
TL;DR: In this article, a case study clearly demonstrates how common but unanticipated events can stretch project communication to the breaking point, and how project schedules can fall apart, particularly during integration.
Abstract: Geographically distributed development teams face extraordinary communication and coordination problems. The authors' case study clearly demonstrates how common but unanticipated events can stretch project communication to the breaking point. Project schedules can fall apart, particularly during integration. Modular design is necessary, but not sufficient to avoid this fate.

Proceedings ArticleDOI
30 Aug 1999
TL;DR: The proportional differentiation model aims to provide the network operator with the 'tuning knobs' for adjusting the quality spacing between classes, independent of the class loads; this cannot be achieved with other relative differentiation models, such as strict prioritization or capacity differentiation.
Abstract: Internet applications and users have very diverse service expectations, making the current same-service-to-all model inadequate and limiting. In the relative differentiated services approach, the network traffic is grouped in a small number of service classes which are ordered based on their packet forwarding quality, in terms of per-hop metrics for the queueing delays and packet losses. The users and applications, in this context, can adaptively choose the class that best meets their quality and pricing constraints, based on the assurance that higher classes will be better, or at least no worse, than lower classes. In this work, we propose the proportional differentiation model as a way to refine and quantify this basic premise of relative differentiated services. The proportional differentiation model aims to provide the network operator with the 'tuning knobs' for adjusting the quality spacing between classes, independent of the class loads; this cannot be achieved with other relative differentiation models, such as strict prioritization or capacity differentiation. We apply the proportional model to queueing-delay differentiation only, leaving the problem of coupled delay and loss differentiation for future work. We discuss the dynamics of the proportional delay differentiation model and state the conditions under which it is feasible. Then, we identify and evaluate (using simulations) two packet schedulers that approximate the proportional differentiation model in heavy-load conditions, even in short timescales. Finally, we demonstrate that such per-hop and class-based mechanisms can provide consistent end-to-end differentiation to individual flows from different classes, independently of the network path and flow characteristics.
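
One classical scheduler in this space is waiting-time priority (WTP): serve the backlogged class whose head packet has the largest waiting time normalized by the class delay parameter. The simulation below uses our own assumptions (Poisson arrivals, unit service times) and only illustrates the flavor; under heavy load the average delays settle roughly at the configured ratios.

```python
# Waiting-time-priority scheduling sketch for proportional delay differentiation.
import random

def simulate(deltas, lam=0.9, service=1.0, n_pkts=100000, rng=random.Random(1)):
    t, arrivals = 0.0, []
    for _ in range(n_pkts):                       # Poisson arrivals, random class
        t += rng.expovariate(lam)
        arrivals.append((t, rng.choice(list(deltas))))
    queues = {c: [] for c in deltas}              # per-class FIFO of arrival times
    waits = {c: [] for c in deltas}
    clock, i = 0.0, 0
    while i < len(arrivals) or any(queues.values()):
        while i < len(arrivals) and arrivals[i][0] <= clock:
            queues[arrivals[i][1]].append(arrivals[i][0])
            i += 1
        backlogged = [c for c in deltas if queues[c]]
        if not backlogged:                        # idle until the next arrival
            clock = arrivals[i][0]
            continue
        c = max(backlogged, key=lambda c: (clock - queues[c][0]) / deltas[c])
        waits[c].append(clock - queues[c].pop(0))
        clock += service
    return {c: sum(w) / max(len(w), 1) for c, w in waits.items()}

avg = simulate({"gold": 1.0, "silver": 2.0, "bronze": 4.0})
print({c: round(w, 2) for c, w in avg.items()})   # delays roughly in ratio 1:2:4
```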

Book ChapterDOI
Conyers Herring1
01 Jan 1999
TL;DR: In this paper, the authors provide a broad and logically precise formulation of certain physical laws which underlie the interpretation of some of the simplest experiments of this type, which not only provide clues toward the elucidation of more complicated metallurgical phenomena but also throw light on some fundamental fields of solid-state physics.
Abstract: The sintering together of powder particles into a dense solid mass at temperatures below the melting point of the particles is a process whose rate and end result are known to be influenced by many factors, e.g., particle size, distribution of particle sizes, compacting pressure, temperature of sintering, surrounding atmosphere, gas dissolved in the particles, etc. Because of the many factors involved, it is difficult to draw from practical metallurgical results any reliable conclusions regarding the detailed laws governing the processes occurring. Recently, however, interest has been growing in attempts to isolate the physical processes likely to be important and to study them in experiments designed for unambiguous interpretation. Such experiments have a twofold interest, in that they not only provide clues toward the elucidation of more complicated metallurgical phenomena but also throw light on some fundamental fields of solid-state physics. This chapter undertakes to provide a broad and logically precise formulation of certain physical laws which underlie the interpretation of some of the simplest experiments of this type.

Posted Content
TL;DR: This paper proposes three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic that incorporates novel optimizations that improve efficiency greatly.
Abstract: Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive and explored a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.
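
A hedged sketch of the greedy idea: repeatedly materialize the shared sub-expression that most reduces total estimated cost, stopping when no candidate helps. The cost model and inputs are invented for illustration; the paper's heuristic works over full Volcano plan spaces.

```python
# Greedy selection of sub-expressions to materialize (toy cost model).
def greedy_mqo(queries, candidates, cost):
    """cost(queries, materialized_set) -> total estimated evaluation cost."""
    materialized = set()
    best = cost(queries, materialized)
    while True:
        gains = []
        for s in candidates - materialized:
            gains.append((best - cost(queries, materialized | {s}), s))
        gain, s = max(gains, default=(0, None))
        if gain <= 0:
            return materialized, best
        materialized.add(s)
        best -= gain

# toy model: 10 per unshared sub-expression use, 1 per materialized use,
# plus a one-time materialization cost of 12 each
USES = {"Q1": {"A", "B"}, "Q2": {"A", "C"}, "Q3": {"A", "B"}}
def cost(queries, mat):
    return sum(1 if s in mat else 10 for q in queries for s in USES[q]) + 12 * len(mat)

print(greedy_mqo(list(USES), {"A", "B", "C"}, cost))  # picks {A, B}, not C
```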

Journal ArticleDOI
TL;DR: In this paper, a single-mode optical fiber switch which routes individual signals into and out of a wavelength multiplexed data stream without interrupting the remaining channels is described, and the total fiber-to-fiber insertion loss for the packaged switch is 5 dB for passed signals and 8 dB for added and dropped signals, with 0.2 dB polarization dependence.
Abstract: This paper describes a single-mode optical fiber switch which routes individual signals into and out of a wavelength multiplexed data stream without interrupting the remaining channels. The switch uses free-space optical wavelength multiplexing and a column of micromechanical tilt-mirrors to switch 16 channels at 200 GHz spacing from 1531 to 1556 nm. The electrostatically actuated tilt mirrors use an 80 V peak-to-peak 300 kHz sinusoidal drive signal to switch between ±10° with a 20 μs response. The total fiber-to-fiber insertion loss for the packaged switch is 5 dB for the passed signals and 8 dB for added and dropped signals, with 0.2 dB polarization dependence. Switching contrast was 30 dB or more for all 16 channels and all input and output states. We demonstrate operation by switching 622 Mb/s data on eight wavelength channels between the two input and output ports with negligible eye closure.

Journal ArticleDOI
01 Jul 1999
TL;DR: In this paper, the authors describe and compare several mechanisms for marking documents by repositioning or modifying elements of text such as lines, words, or characters, and several other mechanisms for decoding the marks after documents have been subjected to common types of distortion.
Abstract: Each copy of a text document can be made different in a nearly invisible way by repositioning or modifying the appearance of different elements of text, i.e., lines, words, or characters. A unique copy can be registered with its recipient, so that subsequent unauthorized copies that are retrieved can be traced back to the original owner. In this paper we describe and compare several mechanisms for marking documents and several other mechanisms for decoding the marks after documents have been subjected to common types of distortion. The marks are intended to protect documents of limited value that are owned by individuals who would rather possess a legal than an illegal copy if they can be distinguished. We describe attacks that remove the marks and countermeasures to those attacks. An architecture is described for distributing a large number of copies without burdening the publisher with creating and transmitting the unique documents. The architecture also allows the publisher to determine the identity of a recipient who has illegally redistributed the document, without compromising the privacy of individuals who are not operating illegally. Two experimental systems are described. One was used to distribute an issue of the IEEE Journal on Selected Areas in Communications, and the second was used to mark copies of company private memoranda.
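
A hedged numeric illustration of one marking mechanism of the kind compared in the paper, line-shift coding: a marked line is moved slightly up or down to encode a bit, and decoding measures its baseline against the unmarked neighbors. All numbers and the decision rule here are invented.

```python
# Encode bits as small vertical shifts of alternate lines; decode from baselines.
import numpy as np

SHIFT = 0.5                                    # shift in points; nominal spacing 12

def embed(n_lines, bits, rng=np.random.default_rng(0)):
    baselines = 12.0 * np.arange(n_lines)
    for i, b in enumerate(bits):
        baselines[2 * i + 1] += SHIFT if b else -SHIFT   # mark every other line
    return baselines + rng.normal(0, 0.05, n_lines)      # print/scan noise

def decode(baselines, n_bits):
    bits = []
    for i in range(n_bits):
        k = 2 * i + 1                          # marked line between unmarked neighbors
        offset = baselines[k] - (baselines[k - 1] + baselines[k + 1]) / 2
        bits.append(1 if offset > 0 else 0)
    return bits

msg = [1, 0, 1, 1, 0]
print(decode(embed(11, msg), len(msg)) == msg)  # True under mild noise
```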

Journal ArticleDOI
Bharat T. Doshi1, Subrahmanyam Dravida1, P. Harshavardhana1, Oded Hauser1, Yufei Wang1 
TL;DR: This paper reports test results for large carrier-scale networks that indicate that subsecond restoration, high capacity efficiency, and scalability can be achieved without fault isolation and with moderate processing.
Abstract: The explosion of data traffic and the availability of enormous bandwidth via dense wavelength division multiplexing (DWDM) and optical amplifier (OA) technologies make it important to study optical layer networking and restoration. This paper is concerned with fast distributed restoration and provisioning for generic mesh-based optical networks. We consider two problems of practical importance: determining the best restoration route for each wavelength demand, given the network topology and the capacities and primary routes of all demands, and determining primary and restoration routes for each wavelength demand to minimize network capacity and cost. The approach we propose for both problems is based on precomputing. For each problem, we describe specific algorithms used for computing routes. We also describe endpoint-based failure detection, message flows, and cross-connect actions for execution of fast restorations. Finally, we report test results for large carrier-scale networks that include both the computational performance of the optimization algorithms and the restoration speed obtained by simulation. Our results indicate that subsecond restoration, high capacity efficiency, and scalability can be achieved without fault isolation and with moderate processing. We also discuss methods for scaling algorithms to problems with very large numbers of demands. The wavelength routing and restoration algorithms, the failure detection, and the message exchange and activation architectures we propose are collectively known as WaveStar™ advanced routing platform.

Journal ArticleDOI
TL;DR: By the attachment of interacting laser dyes to the chain ends and focal point of a dendritic macromolecule it is possible to funnel energy harvested by the large peripheral antenna of the dendrimer efficiently and directly to the central fluorescent core by a process that does not involve the dendrimer inner backbone.
Abstract: By the attachment of interacting laser dyes to the chain ends and focal point of a dendritic macromolecule it is possible to funnel energy harvested by the large peripheral antenna of the dendrimer efficiently and directly to the central fluorescent core by a process that does not involve the dendrimer inner backbone. The energy is then emitted as a narrow band of fluorescent radiation from the core.

Journal ArticleDOI
TL;DR: The thesis is that Web users should have the ability to limit what information is revealed about them and to whom it is revealed.
Abstract: Web server log files are riddled with information about the users who visit them. Obviously, a server can record the content that each visitor accesses. In addition, however, the server can record the user's IP address—and thus often the user's Internet domain name, workplace, and/or approximate location—the type of computing platform she is using, the Web page that referred her to this site and, with some effort, the server that she visits next [7]. Even when the user's IP address changes between browsing sessions (for example, the IP address is assigned dynamically using DHCP [4]), a Web server can link multiple sessions by the same user by planting a unique cookie in the user's browser during the first browsing session, and retrieving that cookie in subsequent sessions. Moreover, virtually the same monitoring capabilities are available to other parties, for example, the user's ISP or local gateway administrator who can observe all communication in which the user participates. The user profiling made possible by such monitoring capabilities is viewed as a useful tool by many businesses and consumers; it makes it possible for a Web server to personalize its content for its users, and for businesses to monitor employee activities. However, the negative ramifications for user privacy are considerable. While a lack of privacy has, in principle, always characterized Internet communication, never before has a type of Internet communication been logged so universally and revealed so much about the personal tastes of its users. Thus, our thesis is that Web users should have the ability to limit what information is revealed about them and to whom it is revealed.

Proceedings ArticleDOI
01 Jul 1999
TL;DR: This work presents a new method for user controlled morphing of two homeomorphic triangle meshes of arbitrary topology using the MAPS algorithm to parameterize both meshes over simple base domains and an additional harmonic map bringing the latter into correspondence.
Abstract: We present a new method for user controlled morphing of two homeomorphic triangle meshes of arbitrary topology. In particular we focus on the problem of establishing a correspondence map between source and target meshes. Our method employs the MAPS algorithm to parameterize both meshes over simple base domains and an additional harmonic map bringing the latter into correspondence. To control the mapping the user specifies any number of feature pairs, which control the parameterizations produced by the MAPS algorithm. Additional controls are provided through a direct manipulation interface allowing the user to tune the mapping between the base domains. We give several examples of aesthetically pleasing morphs which can be created in this manner with little user input. Additionally we demonstrate examples of temporal and spatial control over the morph.

Proceedings ArticleDOI
01 May 1999
TL;DR: This paper presents a join signature scheme based on tug-of-war signatures that provides guarantees on join size estimation as a function of the self-join sizes of the joining relations; this scheme can significantly improve upon the sampling scheme.
Abstract: Query optimizers rely on fast, high-quality estimates of result sizes in order to select between various join plans. Self-join sizes of relations provide bounds on the join size of any pairs of such relations. The self-join size also indicates the degree of skew in the data, and has been advocated for several estimation procedures. Exact computation of the self-join size requires storage proportional to the number of distinct attribute values, which may be prohibitively large. In this paper, we study algorithms for tracking (approximate) self-join sizes in limited storage in the presence of insertions and deletions to the relations. Such algorithms detect changes in the degree of skew without an expensive recomputation from the base data. We show that an algorithm based on a tug-of-war approach provides a more accurate estimation than one based on a sample-and-count approach, which is in turn more accurate than a sampling-only approach. Next, we study algorithms for tracking (approximate) join sizes in limited storage; the goal is to maintain a small signature of each relation such that join sizes can be accurately estimated between any pairs of relations. We show that taking random samples for join signatures can lead to inaccurate estimation unless the sample size is quite large; moreover, by a lower bound we show, no other signature scheme can significantly improve upon sampling without further assumptions. These negative results are shown to hold even in the presence of sanity bounds. On the other hand, we present a join signature scheme based on tug-of-war signatures that provides guarantees on join size estimation as a function of the self-join sizes of the joining relations; this scheme can significantly improve upon the sampling scheme.
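
A minimal sketch of the tug-of-war estimator: keep counters Z_j = Σ_v f(v)·ξ_j(v) with random ±1 hashes ξ_j, updated under both inserts and deletes; each Z_j² is an unbiased estimate of the self-join size Σ_v f(v)². The hashing below is simplified (the actual analysis requires 4-wise independent ξ).

```python
# Tug-of-war sketch of self-join size, supporting insertions and deletions.
import hashlib, statistics

def xi(j, v):
    h = hashlib.sha256(f"{j}:{v}".encode()).digest()
    return 1 if h[0] & 1 else -1               # seeded +/-1 hash (simplified)

class TugOfWar:
    def __init__(self, counters=64):
        self.Z = [0] * counters
    def update(self, v, delta=1):              # delta = -1 handles deletions
        for j in range(len(self.Z)):
            self.Z[j] += delta * xi(j, v)
    def self_join(self):                       # median-of-means estimate
        sq = [z * z for z in self.Z]
        means = [sum(sq[i:i + 8]) / 8 for i in range(0, len(sq), 8)]
        return statistics.median(means)

s = TugOfWar()
for v in [1, 1, 1, 2, 2, 3]:                   # frequencies f = (3, 2, 1)
    s.update(v)
s.update(3, delta=-1)                          # a deletion: f = (3, 2, 0)
print(s.self_join())                           # close to 3^2 + 2^2 = 13
```

For join sizes, the corresponding estimate multiplies the counters of the two relations, Z_j^A · Z_j^B, which is where the self-join sizes enter the error guarantee.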

Journal ArticleDOI
TL;DR: In this paper, a comprehensive numerical and experimental study of 40 Gbit/s RZ transmission is presented which reveals two new forms of nonlinear interactions that limit the speed of high-speed systems.
Abstract: A comprehensive numerical and experimental study of 40 Gbit/s RZ transmission is presented which reveals two new forms of nonlinear interactions that limit the speed of high-speed systems. Both limitations originate from nonlinear interactions among neighbouring bits. The first interaction involves cross-phase modulation and leads to timing fluctuations while the second interaction originates from four-wave mixing and leads to the creation of shadow pulses.

Journal ArticleDOI
TL;DR: The class of event-clock automata, which contain both event-recording and event-predicting clocks, is shown to be a suitable specification language for real-time properties, and an algorithm is provided for checking whether a timed automaton meets a specification given as an event-clock automaton.

Journal ArticleDOI
TL;DR: This paper presents a new approach to an auditory model for robust speech recognition in noisy environments that consists of cochlear bandpass filters and nonlinear operations in which frequency information of the signal is obtained by zero-crossing intervals.
Abstract: This paper presents a new approach to an auditory model for robust speech recognition in noisy environments. The proposed model consists of cochlear bandpass filters and nonlinear operations in which frequency information of the signal is obtained by zero-crossing intervals. Intensity information is also incorporated by a peak detector and a compressive nonlinearity. The robustness of the zero-crossings in spectral estimation is verified by analyzing the variance of the level-crossing intervals as a function of the crossing level values. Compared with other auditory models, the proposed auditory model is computationally efficient, free from many unknown parameters, and able to serve as a robust front-end for speech recognition in noisy environments. Experimental results of speech recognition demonstrate the robustness of the proposed method in various types of noisy environments.
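
A minimal sketch of the core operation described above: within one bandpass channel, estimate the dominant frequency from the mean interval between successive zero crossings. The filterbank, peak detector, and compressive nonlinearity are omitted here.

```python
# Frequency estimation from zero-crossing intervals (single channel).
import numpy as np

def zc_frequency(x, fs):
    signs = np.signbit(x)
    crossings = np.where(signs[1:] != signs[:-1])[0]   # sample indices of crossings
    intervals = np.diff(crossings) / fs                # seconds between crossings
    return 1.0 / (2.0 * intervals.mean())              # two crossings per period

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t) + 0.02 * np.random.default_rng(0).standard_normal(t.size)
print(round(zc_frequency(tone, fs)))                   # approximately 440
```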

Journal ArticleDOI
Gerhard Kramer1
TL;DR: Several techniques for improving the bounds are developed: (1) causally conditioned entropy and directed information simplify the inner bounds, (2) code trellises serve as simple code trees, (3) superposition coding and binning with code trees improves rates.
Abstract: A discrete memoryless network (DMN) is a memoryless multiterminal channel with discrete inputs and outputs. A sequence of inner bounds to the DMN capacity region is derived by using code trees. Capacity expressions are given for three classes of DMNs: (1) a single-letter expression for a class with a common output, (2) a two-letter expression for a binary-symmetric broadcast channel (BC) with partial feedback, and (3) a finite-letter expression for push-to-talk DMNs. The first result is a consequence of a new capacity outer bound for common output DMNs. The third result demonstrates that the common practice of using a time-sharing random variable does not include all time-sharing possibilities, namely, time sharing of channels. Several techniques for improving the bounds are developed: (1) causally conditioned entropy and directed information simplify the inner bounds, (2) code trellises serve as simple code trees, (3) superposition coding and binning with code trees improves rates. Numerical computations show that the last technique enlarges the best known rate regions for a multiple-access channel (MAC) and a BC, both with feedback. In addition to the rate bounds, a sequence of inner bounds to the DMN reliability function is derived. A numerical example for a two-way channel illustrates the behavior of the error exponents.