
Showing papers by "Bell Labs" published in 2002


Journal ArticleDOI
TL;DR: This work introduces pricing of transmit powers in order to obtain Pareto improvement of the noncooperative power control game, i.e., to obtain improvements in user utilities relative to the case with no pricing.
Abstract: A major challenge in the operation of wireless communications systems is the efficient use of radio resources. One important component of radio resource management is power control, which has been studied extensively in the context of voice communications. With the increasing demand for wireless data services, it is necessary to establish power control algorithms for information sources other than voice. We present a power control solution for wireless data in the analytical setting of a game theoretic framework. In this context, the quality of service (QoS) a wireless terminal receives is referred to as the utility and distributed power control is a noncooperative power control game where users maximize their utility. The outcome of the game results in a Nash (1951) equilibrium that is inefficient. We introduce pricing of transmit powers in order to obtain Pareto improvement of the noncooperative power control game, i.e., to obtain improvements in user utilities relative to the case with no pricing. Specifically, we consider a pricing function that is a linear function of the transmit power. The simplicity of the pricing function allows a distributed implementation where the price can be broadcast by the base station to all the terminals. We see that pricing is especially helpful in a heavily loaded system.

1,416 citations
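The pricing mechanism lends itself to a small numerical sketch. The utility below (log(1 + SINR) minus a linear power price) and all channel gains, noise levels, and the price c are illustrative stand-ins, not the paper's exact utility function; the point is that best-response dynamics with a positive price settle on lower transmit powers than the unpriced game:

```python
import numpy as np

def sinr(p, h, i, noise=1e-3):
    # SINR of user i: own received power over noise plus interference
    interference = sum(h[j] * p[j] for j in range(len(p)) if j != i)
    return h[i] * p[i] / (noise + interference)

def best_response(p, h, i, c, grid):
    # user i maximizes its utility minus a linear power price c * p_i
    best_p, best_u = 0.0, -np.inf
    for pi in grid:
        q = p.copy()
        q[i] = pi
        u = np.log(1.0 + sinr(q, h, i)) - c * pi
        if u > best_u:
            best_p, best_u = pi, u
    return best_p

def equilibrium(h, c, n_iter=50):
    # iterate best responses until the noncooperative game settles
    grid = np.linspace(0.0, 1.0, 201)
    p = np.full(len(h), 0.5)
    for _ in range(n_iter):
        for i in range(len(h)):
            p[i] = best_response(p, h, i, c, grid)
    return p
```

With c = 0 every user's utility is increasing in its own power, so all users transmit at the maximum; a positive price pulls the equilibrium powers down, which is the mechanism behind the Pareto improvement.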


Journal ArticleDOI
TL;DR: Results show that empirical capacities converge to the limit capacity predicted from the asymptotic theory even at moderate n = 16; the assumption of separable transmit/receive correlations is analyzed via simulations based on a ray-tracing propagation model.
Abstract: Previous studies have shown that single-user systems employing n-element antenna arrays at both the transmitter and the receiver can achieve a capacity proportional to n, assuming independent Rayleigh fading between antenna pairs. We explore the capacity of dual-antenna-array systems under correlated fading via theoretical analysis and ray-tracing simulations. We derive and compare expressions for the asymptotic growth rate of capacity with n antennas for both independent and correlated fading cases; the latter is derived under some assumptions about the scaling of the fading correlation structure. In both cases, the theoretic capacity growth is linear in n but the growth rate is 10-20% smaller in the presence of correlated fading. We analyze our assumption of separable transmit/receive correlations via simulations based on a ray-tracing propagation model. Results show that empirical capacities converge to the limit capacity predicted from our asymptotic theory even at moderate n = 16. We present results for both the cases when the transmitter does and does not know the channel realization.

1,039 citations
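The linear capacity growth in the i.i.d. Rayleigh case is easy to reproduce with a Monte Carlo sketch. This is a generic check of the independent-fading baseline only (equal power allocation, no correlation model); the SNR and trial count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def mimo_capacity(H, snr):
    # C = log2 det(I + (snr/n) H H^H), power split equally across n antennas
    n = H.shape[0]
    G = np.eye(n) + (snr / n) * (H @ H.conj().T)
    sign, logdet = np.linalg.slogdet(G)
    return logdet / np.log(2.0)

def mean_capacity(n, snr=10.0, trials=200):
    caps = []
    for _ in range(trials):
        # i.i.d. Rayleigh fading: entries are CN(0, 1)
        H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        caps.append(mimo_capacity(H, snr))
    return float(np.mean(caps))
```

Doubling n roughly doubles the ergodic capacity, matching the linear growth the asymptotic theory predicts for independent fading.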


Journal ArticleDOI
TL;DR: A set of real-world problems to random labelings of points is compared and it is found that real problems contain structures in this measurement space that are significantly different from the random sets.
Abstract: We studied a number of measures that characterize the difficulty of a classification problem, focusing on the geometrical complexity of the class boundary. We compared a set of real-world problems to random labelings of points and found that real problems contain structures in this measurement space that are significantly different from the random sets. Distributions of problems in this space show that there exist at least two independent factors affecting a problem's difficulty. We suggest using this space to describe a classifier's domain of competence. This can guide static and dynamic selection of classifiers for specific problems as well as subproblems formed by confinement, projection, and transformations of the feature vectors.

650 citations


Journal ArticleDOI
TL;DR: The measurement results confirm that the majority of the multipath components can be determined from image based ray tracing techniques for line-of-sight (LOS) applications and can be used as empirical values for broadband wireless system design for 60-GHz short-range channels.
Abstract: This article presents measurement results and models for 60-GHz channels. Multipath components were resolved in time by using a sliding correlator with 10-ns resolution and in space by sweeping a directional antenna with 7/spl deg/ half power beamwidth in the azimuthal direction. Power delay profiles (PDPs) and power angle profiles (PAPs) were measured in various indoor and short-range outdoor environments. Detailed multipath structure was retrieved from PDPs and PAPs and was related to site-specific environments. Results show an excellent correlation between the propagation environments and the multipath channel structures. The measurement results confirm that the majority of the multipath components can be determined from image based ray tracing techniques for line-of-sight (LOS) applications. For non-LOS (NLOS) propagation through walls, the metallic structure of composite walls must be considered. From the recorded PDPs and PAPs, received signal power and statistical parameters of angle-of-arrival and time-of-arrival were also calculated. These parameters accurately describe the spatial and temporal properties of millimeter-wave channels and can be used as empirical values for broadband wireless system design for 60-GHz short-range channels.

650 citations


Journal ArticleDOI
01 Jun 2002
TL;DR: HAWAII, as discussed by the authors, uses specialized path setup schemes that install host-based forwarding entries in specific routers to support intra-domain micromobility, reducing mobility-related disruption to user applications.
Abstract: Mobile IP is the current standard for supporting macromobility of mobile hosts. However, in the case of micromobility support, there are several competing proposals. We present the design, implementation and performance evaluation of HAWAII (handoff-aware wireless access Internet infrastructure), a domain-based approach for supporting mobility. HAWAII uses specialized path setup schemes which install host-based forwarding entries in specific routers to support intra-domain micromobility. These path setup schemes deliver excellent performance by reducing mobility related disruption to user applications. Also, mobile hosts retain their network address while moving within the domain, simplifying quality-of-service (QoS) support. Furthermore, reliability is achieved through maintaining soft-state forwarding entries for the mobile hosts and leveraging fault detection mechanisms built in existing intra-domain routing protocols. HAWAII defaults to using Mobile IP for macromobility, thus providing a comprehensive solution for mobility support in wide-area wireless networks.

650 citations


Journal ArticleDOI
TL;DR: This work defines the simple path-vector protocol (SPVP), a distributed algorithm for solving the stable paths problem that is intended to capture the dynamic behavior of BGP at an abstract level, and shows that SPVP will converge to the unique solution of an instance of the stable paths problem if no dispute wheel exists.
Abstract: Dynamic routing protocols such as RIP and OSPF essentially implement distributed algorithms for solving the shortest paths problem. The border gateway protocol (BGP) is currently the only interdomain routing protocol deployed in the Internet. BGP does not solve a shortest paths problem since any interdomain protocol is required to allow policy-based metrics to override distance-based metrics and enable autonomous systems to independently define their routing policies with little or no global coordination. It is then natural to ask if BGP can be viewed as a distributed algorithm for solving some fundamental problem. We introduce the stable paths problem and show that BGP can be viewed as a distributed algorithm for solving this problem. Unlike a shortest path tree, such a solution does not represent a global optimum, but rather an equilibrium point in which each node is assigned its local optimum. We study the stable paths problem using a derived structure called a dispute wheel, representing conflicting routing policies at various nodes. We show that if no dispute wheel can be constructed, then there exists a unique solution for the stable paths problem. We define the simple path vector protocol (SPVP), a distributed algorithm for solving the stable paths problem. SPVP is intended to capture the dynamic behavior of BGP at an abstract level. If SPVP converges, then the resulting state corresponds to a stable paths solution. If there is no solution, then SPVP always diverges. In fact, SPVP can even diverge when a solution exists. We show that SPVP will converge to the unique solution of an instance of the stable paths problem if no dispute wheel exists.

536 citations
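The SPVP dynamics described above can be sketched directly. The tiny instance below is invented for illustration (it has no dispute wheel, so the protocol converges); `permitted` lists each node's ranked permitted paths to destination 0, best first:

```python
# Each node ranks its permitted paths to destination 0 (best first).
# A path (u, v, ..., 0) is adoptable only if its next hop v currently
# uses exactly the path (v, ..., 0).
permitted = {
    1: [(1, 3, 0), (1, 0)],   # node 1 prefers routing through 3
    2: [(2, 1, 0), (2, 0)],   # node 2 prefers routing through 1
    3: [(3, 0)],
}

def spvp(permitted, rounds=20):
    path = {u: () for u in permitted}  # () means "no route"
    path[0] = (0,)
    for _ in range(rounds):
        changed = False
        for u, ranked in permitted.items():
            # best permitted path consistent with the next hop's current path
            new = ()
            for p in ranked:
                if path.get(p[1], ()) == p[1:]:
                    new = p
                    break
            if new != path[u]:
                path[u] = new
                changed = True
        if not changed:
            return path  # a stable paths solution: every node at its local optimum
    return None  # did not stabilize within the round budget
```

The fixed point is an equilibrium, not a shortest-path tree: node 2 ends up on its second choice (2, 0) because its preferred path through 1 is inconsistent with the path node 1 actually uses.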


Journal ArticleDOI
TL;DR: It is shown here that degenerate channel phenomena called "keyholes" may arise under realistic assumptions: channels with zero correlation between the entries of the channel matrix H and yet only a single degree of freedom.
Abstract: Multielement system capacities are usually thought of as limited only by correlations between elements. It is shown here that degenerate channel phenomena called "keyholes" may arise under realistic assumptions which have zero correlation between the entries of the channel matrix H and yet only a single degree of freedom. Canonical physical examples of keyholes are presented. For outdoor environments, it is shown that roof edge diffraction is perceived as a "keyhole" by a vertical base array that may be avoided by employing instead a horizontal base array.

524 citations
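A canonical keyhole can be simulated with outer products. Here H = a b^H with independent complex Gaussian vectors a and b, a standard mathematical model of a keyhole rather than the paper's specific physical examples; the entries of H are pairwise uncorrelated, yet every realization has rank 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def cn(shape):
    # circularly symmetric complex Gaussian entries with unit variance
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

n, trials = 4, 5000
acc = 0.0
ranks = []
for _ in range(trials):
    a, b = cn(n), cn(n)
    H = np.outer(a, b.conj())          # keyhole: every path funnels through one mode
    acc += H[0, 0] * np.conj(H[0, 1])  # sample cross-correlation of two entries
    ranks.append(np.linalg.matrix_rank(H))

corr = abs(acc) / trials               # empirically near zero
full_rank = np.linalg.matrix_rank(cn((n, n)))  # i.i.d. H by contrast: rank n
```

So zero correlation between channel entries does not by itself guarantee n spatial degrees of freedom, which is exactly the degeneracy the paper exhibits.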


Posted Content
Christopher A. Fuchs1
TL;DR: In this regard, no tool appears better calibrated for a direct assault than quantum information theory as discussed by the authors, and this method holds promise precisely because a large part of the structure of quantum theory has always concerned information.
Abstract: In this paper, I try once again to cause some good-natured trouble. The issue remains: when will we ever stop burdening the taxpayer with conferences devoted to the quantum foundations? The suspicion is expressed that no end will be in sight until a means is found to reduce quantum theory to two or three statements of crisp physical (rather than abstract, axiomatic) significance. In this regard, no tool appears better calibrated for a direct assault than quantum information theory. Far from a strained application of the latest fad to a time-honored problem, this method holds promise precisely because a large part--but not all--of the structure of quantum theory has always concerned information. It is just that the physics community needs reminding. This paper, though taking quant-ph/0106166 as its core, corrects one mistake and offers several observations beyond the previous version. In particular, I identify one element of quantum mechanics that I would not label a subjective term in the theory--it is the integer parameter D traditionally ascribed to a quantum system via its Hilbert-space dimension.

460 citations


Proceedings ArticleDOI
19 May 2002
TL;DR: Expertise Browser (ExB), as discussed by the authors, is a tool that uses data from change management systems to locate people with desired expertise; it uses a quantification of experience and presents evidence to validate this quantification as a measure of expertise.
Abstract: Finding relevant expertise is a critical need in collaborative software engineering, particularly in geographically distributed developments. We introduce a tool, called Expertise Browser (ExB), that uses data from change management systems to locate people with desired expertise. It uses a quantification of experience, and presents evidence to validate this quantification as a measure of expertise. The tool enables developers, for example, to easily distinguish someone who has worked only briefly in a particular area of the code from someone who has more extensive experience, and to locate people with broad expertise throughout large parts of the product, such as modules or even subsystems. In addition, it allows a user to discover expertise profiles for individuals or organizations. Data from a deployment of the tool in a large software development organization shows that newer, remote sites tend to use the tool for expertise location more frequently. Larger, more established sites used the tool to find expertise profiles for people or organizations. We conclude by describing extensions that provide continuous awareness of ongoing work and an interactive, quantitative resume/spl acute/.

443 citations
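The experience quantification can be sketched as a simple count of deltas per developer and module. The change records, module convention, and function names below are all hypothetical illustrations; ExB's actual data model is richer:

```python
from collections import Counter

# hypothetical change-log records: (developer, file) pairs from a
# change management system
changes = [
    ("alice", "net/tcp.c"), ("alice", "net/udp.c"), ("alice", "net/tcp.c"),
    ("bob", "ui/menu.c"), ("bob", "net/tcp.c"),
    ("carol", "ui/menu.c"), ("carol", "ui/dialog.c"), ("carol", "ui/menu.c"),
]

def module(path):
    # crude module convention: top-level directory of the changed file
    return path.split("/")[0]

def experience(changes):
    # ExB-style experience measure: number of deltas per (developer, module)
    return Counter((dev, module(path)) for dev, path in changes)

def experts(exp, mod):
    # developers ranked by experience in a module, most experienced first
    ranked = sorted(((c, d) for (d, m), c in exp.items() if m == mod), reverse=True)
    return [d for c, d in ranked]
```

The counts distinguish someone who touched a module once from someone who has worked in it repeatedly, which is the distinction the tool is built to surface.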


Journal ArticleDOI
TL;DR: In this paper, a quasi-Yagi antenna based on the classic Yagi-Uda dipole array is presented, which achieves a measured 48% bandwidth for VSWR <2, better than 12 dB front-to-back ratio, smaller than -15 dB cross polarization, 3-5 dB absolute gain and a nominal efficiency of 93% across the operating bandwidth.
Abstract: A novel broadband planar antenna based on the classic Yagi-Uda dipole array is presented. This "quasi-Yagi" antenna achieves a measured 48% bandwidth for VSWR <2, better than 12 dB front-to-back ratio, smaller than -15 dB cross polarization, 3-5 dB absolute gain and a nominal efficiency of 93% across the operating bandwidth. Finite-difference time-domain simulation is used for optimization of the antenna and the results agree very well with measurements. Additionally, a gain-enhanced design is presented, where higher gain has been achieved at the cost of reduced bandwidth. These quasi-Yagi antennas are realized on a high dielectric constant substrate and are completely compatible with microstrip circuitry and solid-state devices. The excellent radiation properties of this antenna make it ideal as either a stand-alone antenna with a broad pattern or as an array element. The antenna should find wide applications in wireless communication systems, power combining, phased arrays and active arrays, as well as millimeter-wave imaging arrays.

385 citations


Proceedings ArticleDOI
03 Jun 2002
TL;DR: This paper relies on randomizing techniques that compute small "sketch" summaries of the streams that can be used to provide approximate answers to aggregate queries with provable guarantees on the approximation error, and results indicate that sketches provide significantly more accurate answers compared to histograms for aggregate queries.
Abstract: Recent years have witnessed an increasing interest in designing algorithms for querying and analyzing streaming data (i.e., data that is seen only once in a fixed order) with only limited memory. Providing (perhaps approximate) answers to queries over such continuous data streams is a crucial requirement for many application environments; examples include large telecom and IP network installations where performance data from different parts of the network needs to be continuously collected and analyzed. In this paper, we consider the problem of approximately answering general aggregate SQL queries over continuous data streams with limited memory. Our method relies on randomizing techniques that compute small "sketch" summaries of the streams that can then be used to provide approximate answers to aggregate queries with provable guarantees on the approximation error. We also demonstrate how existing statistical information on the base data (e.g., histograms) can be used in the proposed framework to improve the quality of the approximation provided by our algorithms. The key idea is to intelligently partition the domain of the underlying attribute(s) and, thus, decompose the sketching problem in a way that provably tightens our guarantees. Results of our experimental study with real-life as well as synthetic data streams indicate that sketches provide significantly more accurate answers compared to histograms for aggregate queries. This is especially true when our domain partitioning methods are employed to further boost the accuracy of the final estimates.
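One classical instance of such randomized sketches is the AMS/AGMS sketch for self-join sizes (sums of squared frequencies). The version below is a generic textbook sketch, not the paper's exact construction, and the ±1 hash functions are simulated by lazily cached random signs:

```python
import random

def make_sketch(stream, depth=5, width=50, seed=0):
    # depth x width atomic counters X[d][w] = sum_i f_i * xi_dw(i), where
    # each xi_dw maps items to {-1, +1}; signs are drawn lazily and cached
    # per (counter, item) pair to mimic independent hash functions
    rng = random.Random(seed)
    signs = [[{} for _ in range(width)] for _ in range(depth)]
    X = [[0.0] * width for _ in range(depth)]
    for item in stream:
        for d in range(depth):
            for w in range(width):
                s = signs[d][w]
                if item not in s:
                    s[item] = rng.choice((-1, 1))
                X[d][w] += s[item]
    return X

def self_join_estimate(X):
    # estimate of sum_i f_i^2: median (over rows) of the means of X^2,
    # the standard "median of means" trick for provable error guarantees
    means = [sum(x * x for x in row) / len(row) for row in X]
    means.sort()
    return means[len(means) // 2]
```

For a stream with frequencies 10, 5, 2 the true self-join size is 129; the sketch recovers it to within the usual relative-error bounds while storing only depth × width counters.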

Proceedings ArticleDOI
07 Aug 2002
TL;DR: This paper proposes a novel index structure, termed XTrie, that supports the efficient filtering of XML documents based on XPath expressions and offers several novel features that, it believes, make it especially attractive for large-scale publish/subscribe systems.
Abstract: We propose a novel index structure, termed XTrie, that supports the efficient filtering of XML documents based on XPath expressions. Our XTrie index structure offers several novel features that make it especially attractive for large scale publish/subscribe systems. First, XTrie is designed to support effective filtering based on complex XPath expressions (as opposed to simple, single-path specifications). Second, our XTrie structure and algorithms are designed to support both ordered and unordered matching of XML data. Third, by indexing on sequences of element names organized in a trie structure and using a sophisticated matching algorithm, XTrie is able to both reduce the number of unnecessary index probes as well as avoid redundant matchings, thereby providing extremely efficient filtering. Our experimental results over a wide range of XML document and XPath expression workloads demonstrate that our XTrie index structure outperforms earlier approaches by wide margins.

Proceedings ArticleDOI
03 Jun 2002
TL;DR: This paper asks if the traditional relational query acceleration techniques of summary tables and covering indexes have analogs for branching path expression queries over tree- or graph-structured XML data and shows that the forward-and-backward index already proposed in the literature can be viewed as a structure analogous to a summary table or covering index.
Abstract: In this paper, we ask if the traditional relational query acceleration techniques of summary tables and covering indexes have analogs for branching path expression queries over tree- or graph-structured XML data. Our answer is yes --- the forward-and-backward index already proposed in the literature can be viewed as a structure analogous to a summary table or covering index. We also show that it is the smallest such index that covers all branching path expression queries. While this index is very general, our experiments show that it can be so large in practice as to offer little performance improvement over evaluating queries directly on the data. Likening the forward-and-backward index to a covering index on all the attributes of several tables, we devise an index definition scheme to restrict the class of branching path expressions being indexed. The resulting index structures are dramatically smaller and perform better than the full forward-and-backward index for these classes of branching path expressions. This is roughly analogous to the situation in multidimensional or OLAP workloads, in which more highly aggregated summary tables can service a smaller subset of queries but can do so at increased performance. We evaluate the performance of our indexes on both relational decompositions of XML and a native storage technique. As expected, the performance benefit of an index is maximized when the query matches the index definition.

Journal ArticleDOI
TL;DR: This algorithm combines the benefits of the well-known reduced constellation algorithm (RCA) and constant modulus algorithm (CMA) with more flexibility and is better suited to take advantage of the symbol statistics of certain types of signal constellations.
Abstract: This paper presents a new blind equalization algorithm called multimodulus algorithm (MMA). This algorithm combines the benefits of the well-known reduced constellation algorithm (RCA) and constant modulus algorithm (CMA). In addition, MMA provides more flexibility than RCA and CMA, and is better suited to take advantage of the symbol statistics of certain types of signal constellations, such as nonsquare constellations, very dense constellations, and some wrong solutions.
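The MMA tap update can be sketched as a stochastic-gradient equalizer. The filter length, step size, and test channel below are illustrative choices; the key line is the error term, which, unlike CMA, penalizes the real and imaginary parts of the output separately and therefore also recovers carrier phase:

```python
import numpy as np

def mma_equalize(x, n_taps=11, mu=1e-3, R=1.0):
    # Blind multimodulus equalizer minimizing
    #   J = E[(y_R^2 - R)^2] + E[(y_I^2 - R)^2]
    # via a stochastic-gradient update of the tap vector w.
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0              # center-spike initialization
    out = []
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]     # regressor, most recent sample first
        y = w @ u
        # multimodulus error: real and imaginary parts treated separately
        e = y.real * (y.real**2 - R) + 1j * y.imag * (y.imag**2 - R)
        w = w - mu * e * u.conj()     # LMS-style tap update
        out.append(y)
    return np.array(out), w
```

On a clean unit QPSK input the error is identically zero and the equalizer passes the signal through unchanged; on a dispersive channel the update drives the output dispersion down.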

Proceedings ArticleDOI
07 Aug 2002
TL;DR: LegoDB, as discussed by the authors, is a cost-based XML storage mapping engine that explores a space of possible XML-to-relational mappings and selects the best mapping for a given application.
Abstract: As Web applications manipulate an increasing amount of XML, there is a growing interest in storing XML data in relational databases. Due to the mismatch between the complexity of XML's tree structure and the simplicity of flat relational tables, there are many ways to store the same document in an RDBMS, and a number of heuristic techniques have been proposed. These techniques typically define fixed mappings and do not take application characteristics into account. However, a fixed mapping is unlikely to work well for all possible applications. In contrast, LegoDB is a cost-based XML storage mapping engine that explores a space of possible XML-to-relational mappings and selects the best mapping for a given application. LegoDB leverages current XML and relational technologies: (1) it models the target application with an XML Schema, XML data statistics, and an XQuery workload; (2) the space of configurations is generated through XML-Schema rewritings; and (3) the best among the derived configurations is selected using cost estimates obtained through a standard relational optimizer. We describe the LegoDB storage engine and provide experimental results that demonstrate the effectiveness of this approach.

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive review of recent developments that will likely enable important advances in areas such as optical communications, ultra-high resolution spectroscopy and applications to ultrahigh sensitivity gas-sensing systems.
Abstract: Following an introduction to the history of the invention of the quantum cascade (QC) laser and of the band-structure engineering advances that have led to laser action over most of the mid-infrared (IR) and part of the far-IR spectrum, the paper provides a comprehensive review of recent developments that will likely enable important advances in areas such as optical communications, ultrahigh resolution spectroscopy and applications to ultrahigh sensitivity gas-sensing systems. We discuss the experimental observation of the remarkably different frequency response of QC lasers compared to diode lasers, i.e., the absence of relaxation oscillations, their high-speed digital modulation, and results on mid-IR optical wireless communication links, which demonstrate the possibility of reliably transmitting complex multimedia data streams. Ultrashort pulse generation by gain switching and active and passive modelocking is subsequently discussed. Recent data on the linewidth of free-running QC lasers (/spl sim/150 kHz) and their frequency stabilization down to 10 kHz are presented. Experiments on the relative frequency stability (/spl sim/5 Hz) of two QC lasers locked to optical cavities are discussed. Finally, developments in metallic waveguides with surface plasmon modes, which have enabled extension of the operating wavelength to the far IR, are reported.

Proceedings ArticleDOI
07 Aug 2002
TL;DR: The A(k)-indices are introduced, a family of approximate structural summaries based on the concept of k-bisimilarity, in which nodes are grouped based on local structure, i.e., the incoming paths of length up to k, which ranges from being very efficient for simple queries to competitive for most complex queries, while using significantly less space than comparable structures.
Abstract: XML and other semi-structured data may have partially specified or missing schema information, motivating the use of a structural summary which can be automatically computed from the data. These summaries also serve as indices for evaluating the complex path expressions common to XML and semi-structured query languages. However, to answer all path queries accurately, summaries must encode information about long, seldom-queried paths, leading to increased size and complexity with little added value. We introduce the A(k)-indices, a family of approximate structural summaries. They are based on the concept of k-bisimilarity, in which nodes are grouped based on local structure, i.e., the incoming paths of length up to k. The parameter k thus smoothly varies the level of detail (and accuracy) of the A(k)-index. For small values of k, the size of the index is substantially reduced. While smaller, the A(k) index is approximate, and we describe techniques for efficiently extracting exact answers to regular path queries. Our experiments show that, for moderate values of k, path evaluation using the A(k)-index ranges from being very efficient for simple queries to competitive for most complex queries, while using significantly less space than comparable structures.
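k-bisimilarity over incoming edges can be computed by iterated refinement. The sketch below is a naive (non-optimized) version with an invented six-node document; real A(k)-index construction uses efficient partition-refinement algorithms:

```python
# k-bisimulation on incoming edges: two nodes fall in the same A(k) class
# iff they have the same label and, recursively, their parents cover the
# same set of A(k-1) classes.
def ak_classes(labels, parents, k):
    cls = {v: labels[v] for v in labels}          # A(0): group nodes by label
    for _ in range(k):
        cls = {v: (cls[v], frozenset(cls[p] for p in parents[v]))
               for v in labels}
    return cls

# hypothetical document containing the paths a/b/c and x/b/c
labels = {1: "a", 2: "b", 3: "c", 4: "x", 5: "b", 6: "c"}
parents = {1: [], 2: [1], 3: [2], 4: [], 5: [4], 6: [5]}
```

The two c elements are merged by A(1) (each has a b parent) but separated by A(2), which sees the differing grandparents; small k thus trades answer accuracy for a smaller index, which is the tuning knob the paper exploits.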

Journal ArticleDOI
01 Sep 2002
TL;DR: Minimum distances, distance distributions, and error exponents on a binary-symmetric channel (BSC) are given for typical codes from Shannon's random code ensemble and from a random linear code ensemble.
Abstract: Minimum distances, distance distributions, and error exponents on a binary-symmetric channel (BSC) are given for typical codes from Shannon's random code ensemble and for typical codes from a random linear code ensemble. A typical random code of length N and rate R is shown to have minimum distance N/spl delta//sub GV/(2R), where /spl delta//sub GV/(R) is the Gilbert-Varshamov (GV) relative distance at rate R, whereas a typical linear code (TLC) has minimum distance N/spl delta//sub GV/(R). Consequently, a TLC has a better error exponent on a BSC at low rates, namely, the expurgated error exponent.
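The Gilbert-Varshamov relative distance delta_GV(R) is the root of H2(delta) = 1 - R on [0, 1/2], so the comparison between the two ensembles can be checked numerically; the bisection below is a generic sketch:

```python
import math

def H2(x):
    # binary entropy function, in bits
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def delta_gv(R):
    # Gilbert-Varshamov relative distance: solve H2(d) = 1 - R on [0, 1/2]
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if H2(mid) < 1.0 - R:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Since delta_GV is decreasing in rate, a typical linear code at rate R = 0.25 has relative distance delta_gv(0.25), strictly larger than the delta_gv(0.5) of a typical random code at the same rate, which is the source of the better low-rate error exponent.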

Journal ArticleDOI
TL;DR: Simulations show that the Cayley codes allow efficient and effective high-rate data transmission in multiantenna communication systems without knowing the channel.
Abstract: One method for communicating with multiple antennas is to encode the transmitted data differentially using unitary matrices at the transmitter, and to decode differentially without knowing the channel coefficients at the receiver. Since channel knowledge is not required at the receiver, differential schemes are ideal for use on wireless links where channel tracking is undesirable or infeasible, either because of rapid changes in the channel characteristics or because of limited system resources. Although this basic principle is well understood, it is not known how to generate good-performing constellations of unitary matrices, for any number of transmit and receive antennas and for any rate. This is especially true at high rates where the constellations must be rapidly encoded and decoded. We propose a class of Cayley codes that works with any number of antennas, and has efficient encoding and decoding at any rate. The codes are named for their use of the Cayley transform, which maps the highly nonlinear Stiefel manifold of unitary matrices to the linear space of skew-Hermitian matrices. This transformation leads to a simple linear constellation structure in the Cayley transform domain and to an information-theoretic design criterion based on emulating a Cauchy random matrix. Moreover, the resulting Cayley codes allow polynomial-time near-maximum-likelihood (ML) decoding based on either successive nulling/canceling or sphere decoding. Simulations show that the Cayley codes allow efficient and effective high-rate data transmission in multiantenna communication systems without knowing the channel.
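The Cayley transform at the heart of these codes is easy to verify numerically: it maps a skew-Hermitian matrix (a linear space, convenient for encoding data) to a unitary matrix. This is a generic check of that property, not the paper's full encoding pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# random skew-Hermitian matrix: A^H = -A
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M - M.conj().T) / 2

def cayley(A):
    # Cayley transform: V = (I + A)^{-1} (I - A) is unitary when A^H = -A
    I = np.eye(A.shape[0])
    return np.linalg.solve(I + A, I - A)

V = cayley(A)

# differential transmission with unitary V preserves signal power
s = np.ones(n) / np.sqrt(n)
```

Because a skew-Hermitian matrix has purely imaginary eigenvalues, I + A is always invertible, so every data matrix A yields a valid unitary codeword.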

Journal ArticleDOI
Alexander Barg1, D.Yu. Nogin
TL;DR: The Gilbert-Varshamov and Hamming bounds for packings of spheres (codes) in the Grassmann manifolds over R and C are derived.
Abstract: We derive the Gilbert-Varshamov and Hamming bounds for packings of spheres (codes) in the Grassmann manifolds over R and C. Asymptotic expressions are obtained for the geodesic metric and projection Frobenius (chordal) metric on the manifold.

Journal ArticleDOI
TL;DR: The dynamic parallel-access scheme presented in this paper does not require any modifications to servers or content and can be easily included in browsers, peer-to-peer applications or content distribution networks to speed up delivery of popular content.
Abstract: Popular content is frequently replicated in multiple servers or caches in the Internet to offload origin servers and improve end-user experience. However, choosing the best server is a nontrivial task and a bad choice may provide poor end user experience. In contrast to retrieving a file from a single server, we propose a parallel-access scheme where end users access multiple servers at the same time, fetching different portions of that file from different servers and reassembling them locally. The amount of data retrieved from a particular server depends on the resources available at that server or along the path from the user to the server. Faster servers will deliver bigger portions of a file while slower servers will deliver smaller portions. If the available resources at a server or along the path change during the download of a file, a dynamic parallel access will automatically shift the load from congested locations to less loaded parts (server and links) of the Internet. The end result is that users experience significant speedups and very consistent response times. Moreover, there is no need for complicated server selection algorithms and load is dynamically shared among all servers. The dynamic parallel-access scheme presented in this paper does not require any modifications to servers or content and can be easily included in browsers, peer-to-peer applications or content distribution networks to speed up delivery of popular content.
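The load-balancing effect can be illustrated with a toy event-driven simulation: each server fetches one block at a time and, when done, immediately requests the next unassigned block, so faster servers automatically deliver more of the file. The block-level scheduling and server rates below are invented for illustration:

```python
import heapq

def parallel_download(n_blocks, server_rates):
    # event queue of (finish_time, server, block) for in-flight requests;
    # each server takes 1/rate time units per block
    next_block = 0
    fetched = {}  # block -> server that delivered it
    events = []
    for s, rate in enumerate(server_rates):
        if next_block < n_blocks:
            heapq.heappush(events, (1.0 / rate, s, next_block))
            next_block += 1
    t = 0.0
    while events:
        t, s, b = heapq.heappop(events)
        fetched[b] = s
        if next_block < n_blocks:
            # the freed server immediately pulls the next unassigned block
            heapq.heappush(events, (t + 1.0 / server_rates[s], s, next_block))
            next_block += 1
    return t, fetched
```

With rates 4:1 and ten blocks, the fast server ends up delivering 8 of the 10 blocks and the transfer finishes at t = 2.0, sooner than the 2.5 the fast server would need alone; no explicit server-selection algorithm is involved.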

Journal ArticleDOI
Qi Li1, Jinsong Zheng1, A. Tsai1, Qiru Zhou1
TL;DR: The experiments show that the batch-mode algorithm can detect endpoints as accurately as using HMM forced alignment while the proposed one has much less computational complexity.
Abstract: When automatic speech recognition (ASR) and speaker verification (SV) are applied in adverse acoustic environments, endpoint detection and energy normalization can be crucial to the functioning of both systems. In low signal-to-noise ratio (SNR) and nonstationary environments, conventional approaches to endpoint detection and energy normalization often fail and ASR performances usually degrade dramatically. The purpose of this paper is to address the endpoint problem. For ASR, we propose a real-time approach. It uses an optimal filter plus a three-state transition diagram for endpoint detection. The filter is designed utilizing several criteria to ensure accuracy and robustness. It has almost invariant response at various background noise levels. The detected endpoints are then applied to energy normalization sequentially. Evaluation results show that the proposed algorithm significantly reduces the string error rates in low SNR situations. The reduction rates even exceed 50% in several evaluated databases. For SV, we propose a batch-mode approach. It uses the optimal filter plus a two-mixture energy model for endpoint detection. The experiments show that the batch-mode algorithm can detect endpoints as accurately as using HMM forced alignment while the proposed one has much less computational complexity.

Journal ArticleDOI
TL;DR: In this article, the theory of parametric amplifiers driven by two pump waves is developed, and an amplifier that produces uniform exponential gain over a range of wavelengths that extends at least 30 nm on either side of the average pump wavelength is presented.
Abstract: The theory of parametric amplifiers driven by two pump waves is developed. By choosing the pump wavelengths judiciously, one can design an amplifier that produces uniform exponential gain over a range of wavelengths that extends at least 30 nm on either side of the average pump wavelength.

Proceedings ArticleDOI
17 Mar 2002
TL;DR: In this paper, the authors reported 2.5 Tb/s (64 × 42.7 Gb/s) WDM transmission over 4000 km (forty 100-km spans) of nonzero dispersion-shifted fiber.
Abstract: We report 2.5 Tb/s (64 × 42.7 Gb/s) WDM transmission over 4000 km (forty 100-km spans) of non-zero dispersion-shifted fiber. This capacity × distance record of 10 petabit-km/s for 40-Gb/s systems is achieved in a single 53-nm extended L band using return-to-zero differential-phase-shift-keyed modulation, balanced detection, and distributed Raman amplification.

Book ChapterDOI
Kousha Etessami1
20 Aug 2002
TL;DR: In this article, the authors define and provide algorithms for computing a hierarchy of simulation relations on the state-spaces of ordinary transition systems, finite automata, and Büchi automata.
Abstract: We define and provide algorithms for computing a natural hierarchy of simulation relations on the state-spaces of ordinary transition systems, finite automata, and Büchi automata. These simulations enrich ordinary simulation and can be used to obtain greater reduction in the size of automata by computing the automaton quotient with respect to their underlying equivalence. State reduction for Büchi automata is useful for making explicit-state model checking run faster ([EH00, SB00, EWS01]). We define k-simulations, where 1-simulation corresponds to ordinary simulation and its variants for Büchi automata ([HKR97, EWS01]), and k-simulations, for k > 1, generalize the game definition of 1-simulation by allowing the Duplicator to use k pebbles instead of 1 (to "hedge its bets") in response to the Spoiler's move of a single pebble. As k increases, k-simulations are monotonically non-decreasing relations. Indeed, when k reaches n, the number of states of the automaton, the n-simulations defined for finite automata and for labeled transition systems correspond precisely to language containment and trace containment, respectively. But for each fixed k, the maximal k-simulation relation is computable in polynomial time: n^O(k). This provides a mechanism with which to trade off increased computing time for larger simulation relation size, and more potential reduction in automaton size. We provide algorithms for computing k-simulations using a natural generalization of a prior efficient algorithm based on parity games ([EWS01]) for computing various simulations. Lastly, we observe the relationship between k-simulations and a k-variable interpretation of modal logic.
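As a concrete anchor for the k = 1 case, ordinary simulation on a labeled transition system can be computed as a greatest-fixpoint refinement: start from all label-matching pairs and delete (p, q) whenever some move of p cannot be matched by q. The k-pebble generalization and the parity-game algorithm of the paper are beyond this sketch, and the state/label/successor encoding below is a hypothetical minimal representation:

```python
# Maximal ordinary simulation (the k = 1 case of the hierarchy) on a
# labeled transition system, computed as a greatest fixpoint.

def simulation(states, label, succ):
    """Return the maximal simulation relation as a set of pairs (p, q),
    read as 'q simulates p'."""
    # Start from all pairs with matching labels.
    rel = {(p, q) for p in states for q in states if label[p] == label[q]}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # Every successor of p must be simulated by some successor of q.
            if any(all((p2, q2) not in rel for q2 in succ[q])
                   for p2 in succ[p]):
                rel.discard((p, q))
                changed = True
    return rel
```

The quotient mentioned in the abstract identifies p and q whenever both (p, q) and (q, p) are in the returned relation, shrinking the automaton without changing its language.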

Journal ArticleDOI
TL;DR: The definition of keys for XML documents is discussed, paying particular attention to the concept of a relative key, which is commonly used in hierarchically structured documents and scientific databases.

Proceedings ArticleDOI
03 Jun 2002
TL;DR: In these situations, algorithms are needed that can summarize the data stream in a concise but reasonably accurate synopsis that fits in the allotted (small) amount of memory and can be used to provide approximate answers to user queries, along with reasonable guarantees on the quality of the approximation.
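The TL;DR does not name a specific synopsis, but reservoir sampling is one classical example of the kind of structure described: a fixed-size, small-memory summary of an unbounded stream that supports approximate answers to aggregate queries. This sketch is illustrative and is not drawn from the paper itself:

```python
import random

# Reservoir sampling: maintain a uniform random sample of fixed size k
# over an unbounded stream, then answer aggregate queries approximately
# from the sample. One classical instance of a small-memory synopsis.

class Reservoir:
    def __init__(self, k, seed=None):
        self.k = k
        self.n = 0                       # items seen so far
        self.sample = []
        self.rng = random.Random(seed)

    def add(self, item):
        self.n += 1
        if len(self.sample) < self.k:
            self.sample.append(item)
        else:
            # Keep the new item with probability k / n.
            j = self.rng.randrange(self.n)
            if j < self.k:
                self.sample[j] = item

    def estimate_mean(self):
        """Approximate the mean of the whole stream from the sample."""
        return sum(self.sample) / len(self.sample)
```

The memory footprint is fixed at k items regardless of stream length, and the sampling error of the mean estimate shrinks roughly as 1/√k, which is the kind of accuracy guarantee the TL;DR alludes to.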

Journal ArticleDOI
TL;DR: The capacity of wireless communication architectures equipped with multiple transmit and receive antennas and impaired by both noise and cochannel interference is studied and a closed-form solution is found for the capacity in the limit of a large number of antennas.
Abstract: The capacity of wireless communication architectures equipped with multiple transmit and receive antennas and impaired by both noise and cochannel interference is studied. We find a closed-form solution for the capacity in the limit of a large number of antennas. This asymptotic solution, which is a sole function of the relative number of transmit and receive antennas and the signal-to-noise and signal-to-interference ratios (SNR and SIR), is then particularized to a number of cases of interest. By verifying that antenna diversity can substitute for time and/or frequency diversity in providing ergodicity, we show that these asymptotic solutions approximate the ergodic capacity very closely even when the number of antennas is very small.
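For context, the capacity being analyzed is the familiar log-det expression C = log2 det(I + (SNR/n) H H^H). The sketch below estimates its ergodic value for a 2×2 i.i.d. Rayleigh channel by Monte Carlo, with noise only (no cochannel interference) and `snr` given as a linear ratio; the paper's closed-form large-n asymptotic solution is not reproduced here:

```python
import math
import random

# Monte Carlo estimate of the ergodic capacity of a 2x2 i.i.d. Rayleigh
# MIMO channel, C = log2 det(I_2 + (snr/2) H H^H), noise only.
# Illustrative sketch, not the paper's asymptotic analysis.

def capacity_2x2(h11, h12, h21, h22, snr):
    """log2 det(I_2 + a*G) with a = snr/2 and G = H H^H, using the 2x2
    identity det(I + a*G) = 1 + a*tr(G) + a^2*det(G)."""
    a = snr / 2.0
    tr_g = sum(abs(h) ** 2 for h in (h11, h12, h21, h22))   # ||H||_F^2
    det_g = abs(h11 * h22 - h12 * h21) ** 2                 # |det H|^2
    return math.log2(1.0 + a * tr_g + a * a * det_g)

def ergodic_capacity(snr, trials=20000, seed=1):
    """Average capacity over random i.i.d. CN(0, 1) channel draws."""
    rng = random.Random(seed)
    def g():                         # one CN(0, 1) entry
        return complex(rng.gauss(0.0, math.sqrt(0.5)),
                       rng.gauss(0.0, math.sqrt(0.5)))
    total = 0.0
    for _ in range(trials):
        total += capacity_2x2(g(), g(), g(), g(), snr)
    return total / trials
```

Averaging over channel draws is what the abstract's ergodicity argument justifies: with enough antenna (or time/frequency) diversity, a single realization already behaves like this ensemble average.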

01 Jan 2002
TL;DR: VeriWeb is presented, a tool for automatically discovering and systematically exploring Web-site execution paths that can be followed by a user in a Web application that can navigate automatically through dynamic components of Web sites, including form submissions and execution of client-side script.
Abstract: Web sites are becoming increasingly complex as more and more services and information are made available over the Internet and intranets. At the same time, the correct behavior of sites has become crucial to the success of businesses and organizations and thus should be tested thoroughly and frequently. Although traditional software testing is already a notoriously hard, time-consuming and expensive process, testing Web sites presents even greater challenges: Web interfaces are very dynamic; the environment of Web applications is more complex than that of typical monolithic or client-server applications; Web applications, most notably e-commerce sites, have a large number of users who have no training on how to use the application and hence are more likely to exercise it in unpredictable ways. Existing testing tools for automating the process of testing dynamic Web sites require the specification of test scenarios, which results in limited test coverage. In this paper, we present an overview of VeriWeb, a tool for automatically discovering and systematically exploring Web-site execution paths that can be followed by a user in a Web application. Unlike traditional crawlers which are limited to the exploration of static links, VeriWeb can navigate automatically through dynamic components of Web sites, including form submissions and execution of client-side script.

Journal ArticleDOI
TL;DR: This paper presents a class of layered space-time receivers devised for frequency-selective channels, which offer various performance and complexity tradeoffs and are compared and evaluated in the context of a typical urban channel with excellent results.
Abstract: Results in information theory have demonstrated the enormous potential of wireless communication systems with antenna arrays at both the transmitter and receiver. To exploit this potential, a number of layered space-time architectures have been proposed. These layered space-time systems transmit parallel data streams, simultaneously and on the same frequency, in a multiple-input multiple-output fashion. With rich multipath propagation, these different streams can be separated at the receiver because of their distinct spatial signatures. However, the analyses of these techniques presented thus far have mostly been strictly narrowband. In order to enable high-data-rate applications, it might be necessary to utilize signals whose bandwidth exceeds the coherence bandwidth of the channel, which brings in the issue of frequency selectivity. In this paper, we present a class of layered space-time receivers devised for frequency-selective channels. These new receivers, which offer various performance and complexity tradeoffs, are compared and evaluated in the context of a typical urban channel with excellent results.