
Showing papers by "Bell Labs" published in 2005


Journal ArticleDOI
TL;DR: The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted.
Abstract: Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally, and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation even though the transmitting antennas are not colocated. The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multisource and multidestination networks such as multiaccess and broadcast relay channels.

2,842 citations


Journal ArticleDOI
TL;DR: A simple encoding algorithm is introduced that achieves near-capacity at sum rates of tens of bits/channel use and regularization is introduced to improve the condition of the inverse and maximize the signal-to-interference-plus-noise ratio at the receivers.
Abstract: Recent theoretical results describing the sum capacity when using multiple antennas to communicate with multiple users in a known rich scattering environment have not yet been followed with practical transmission schemes that achieve this capacity. We introduce a simple encoding algorithm that achieves near-capacity at sum rates of tens of bits/channel use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the power of the transmitted signal. This work comprises two parts. In this first part, we show that while the sum capacity grows linearly with the minimum of the number of antennas and users, the sum rate of channel inversion does not. This poor performance is due to the large spread in the singular values of the channel matrix. We introduce regularization to improve the condition of the inverse and maximize the signal-to-interference-plus-noise ratio at the receivers. Regularization enables linear growth and works especially well at low signal-to-noise ratios (SNRs), but as we show in the second part, an additional step is needed to achieve near-capacity performance at all SNRs.

1,796 citations
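The regularization step described in the abstract can be sketched numerically (a minimal illustration assuming NumPy; the channel matrix and the regularization constant alpha are invented for the example, whereas the paper ties the best choice of alpha to the number of users and the SNR):

```python
import numpy as np

def precode(H, s, alpha=0.0):
    # (Regularized) channel inversion: x = H^H (H H^H + alpha*I)^{-1} s
    K = H.shape[0]
    return H.conj().T @ np.linalg.solve(H @ H.conj().T + alpha * np.eye(K), s)

# An ill-conditioned 2-user channel (singular values 1 and 0.01): the
# large singular-value spread is what makes plain inversion perform poorly.
H = np.array([[1.0, 0.0],
              [0.0, 0.01]])
s = np.array([1.0, 1.0])   # unit-energy user symbols

p_plain = np.linalg.norm(precode(H, s)) ** 2             # plain inversion
p_reg = np.linalg.norm(precode(H, s, alpha=0.1)) ** 2    # regularized

print(p_plain, p_reg)  # regularization drastically reduces transmit power
```

Plain inversion pours power into the weak singular direction of the channel; the regularized inverse accepts a little residual interference in exchange for bounded transmit power, which is the effect the paper exploits to restore linear sum-rate growth.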


Journal ArticleDOI
TL;DR: A simple encoding algorithm is introduced that achieves near-capacity at sum-rates of tens of bits/channel use and a certain perturbation of the data using a "sphere encoder" can be chosen to further reduce the energy of the transmitted signal.
Abstract: Recent theoretical results describing the sum-capacity when using multiple antennas to communicate with multiple users in a known rich scattering environment have not yet been followed with practical transmission schemes that achieve this capacity. We introduce a simple encoding algorithm that achieves near-capacity at sum-rates of tens of bits/channel use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the energy of the transmitted signal. The paper comprises two parts. In this second part, we show that, after the regularization of the channel inverse introduced in the first part, a certain perturbation of the data using a "sphere encoder" can be chosen to further reduce the energy of the transmitted signal. The performance difference with and without this perturbation is shown to be dramatic. With the perturbation, we achieve excellent performance at all signal-to-noise ratios. The results of both uncoded and turbo-coded simulations are presented.

972 citations
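The perturbation idea can be illustrated with an exhaustive search standing in for the sphere encoder (a sketch assuming NumPy; the channel, the modulus tau, and the search range are invented for the example):

```python
import itertools
import numpy as np

def precode(H, s):
    # Plain channel inversion: x = H^{-1} s (square, invertible H).
    return np.linalg.solve(H, s)

def perturb(H, s, tau=2.0):
    # Stand-in for the sphere encoder: exhaustively pick the integer
    # offset vector l minimizing the transmit energy ||x(s + tau*l)||^2.
    # The receiver undoes the offset with a modulo-tau operation.
    best_x, best_e = None, float("inf")
    for l in itertools.product([-1, 0, 1], repeat=len(s)):
        x = precode(H, s + tau * np.array(l, dtype=float))
        e = float(np.linalg.norm(x) ** 2)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

H = np.array([[1.0, 1.0],
              [1.0, 1.01]])   # nearly singular channel
s = np.array([1.0, -1.0])

e_inv = float(np.linalg.norm(precode(H, s)) ** 2)
_, e_pert = perturb(H, s)
print(e_inv, e_pert)  # the perturbed signal needs far less energy
```

On this nearly singular channel the unperturbed inverse needs enormous energy, while a single integer offset steers the data vector almost into the channel's row space; the sphere encoder in the paper finds such offsets efficiently instead of by enumeration.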


Journal ArticleDOI
10 Jan 2005
TL;DR: Differential-phase-shift keying has recently been used to reach record distances in long-haul lightwave communication systems and theoretical as well as implementation aspects of DPSK are reviewed.
Abstract: Differential-phase-shift keying (DPSK) has recently been used to reach record distances in long-haul lightwave communication systems. This paper will review theoretical, as well as implementation, aspects of DPSK, and discuss experimental results.

949 citations


Proceedings ArticleDOI
13 Jun 2005
TL;DR: It is argued that a simpler pragmatic approach that offers coordinated, spatially aggregated spectrum access via a regional spectrum broker is more attractive in the immediate future.
Abstract: The new paradigm of dynamic spectrum access (DSA) networks aims to provide opportunistic access to large parts of the underutilized spectrum. The majority of research in this area has focused on free-for-all, uncoordinated access methods common in ad-hoc military applications (Horne, W. 2003; Leaves, P. et al., 2002; Lehr, W. et al., 2002; Schafer, D.J.; Tönjes, R., 2002). We argue that a simpler pragmatic approach that offers coordinated, spatially aggregated spectrum access via a regional spectrum broker is more attractive in the immediate future. We first introduce two new concepts, coordinated access band (CAB) and statistically multiplexed access (SMA), to the spectrum. We describe their implementation in the new DIMSUMnet (dynamic intelligent management of spectrum for ubiquitous mobile-access network) architecture, consisting of four elements: base stations; clients; a radio access network manager (RAN-MAN) that obtains spectrum leases; and a per-domain spectrum broker that controls spectrum access. We also discuss in detail various issues in the design of spectrum brokers and spectrum allocation policies and algorithms.

527 citations


Journal ArticleDOI
23 Sep 2005-Science
TL;DR: Observations from Voyager 1 are interpreted as evidence that V1 was crossed by the TS on 2004/351 (during a tracking gap) at 94.0 astronomical units, evidently as the shock was moving radially inward in response to decreasing solar wind ram pressure, and that V 1 has remained in the heliosheath until at least mid-2005.
Abstract: Voyager 1 (V1) began measuring precursor energetic ions and electrons from the heliospheric termination shock (TS) in July 2002. During the ensuing 2.5 years, average particle intensities rose as V1 penetrated deeper into the energetic particle foreshock of the TS. Throughout 2004, V1 observed even larger, fluctuating intensities of ions from 40 kiloelectron volts (keV) to ≥50 megaelectron volts per nucleon and of electrons from >26 keV to ≥350 keV. On day 350 of 2004 (2004/350), V1 observed an intensity spike of ions and electrons that was followed by a sustained factor-of-10 increase at the lowest energies and lesser increases at higher energies, larger than any intensities since V1 was at 15 astronomical units in 1982. The estimated solar wind radial flow speed was positive (outward) at approximately +100 kilometers per second (km s⁻¹) from 2004/352 until 2005/018, when the radial flows became predominantly negative (sunward) and fluctuated between approximately −50 and 0 km s⁻¹ until about 2005/110; they then became more positive, with recent values (2005/179) of approximately +50 km s⁻¹. The energetic proton spectrum averaged over the postshock period is apparently dominated by strongly heated interstellar pickup ions. We interpret these observations as evidence that V1 was crossed by the TS on 2004/351 (during a tracking gap) at 94.0 astronomical units, evidently as the shock was moving radially inward in response to decreasing solar wind ram pressure, and that V1 has remained in the heliosheath until at least mid-2005.

424 citations


Journal ArticleDOI
TL;DR: This paper shows that taking α = 5π/6 is a necessary and sufficient condition to guarantee that network connectivity is preserved, and proposes a set of optimizations that further reduce power consumption, proving that they retain network connectivity.
Abstract: The topology of a wireless multi-hop network can be controlled by varying the transmission power at each node. In this paper, we give a detailed analysis of a cone-based distributed topology-control (CBTC) algorithm. This algorithm does not assume that nodes have GPS information available; rather, it depends only on directional information. Roughly speaking, the basic idea of the algorithm is that a node u transmits with the minimum power p_{u,α} required to ensure that in every cone of degree α around u, there is some node that u can reach with power p_{u,α}. We show that taking α = 5π/6 is a necessary and sufficient condition to guarantee that network connectivity is preserved. More precisely, if there is a path from s to t when every node communicates at maximum power then, if α ≤ 5π/6, there is still a path in the smallest symmetric graph G_α containing all edges (u,v) such that u can communicate with v using power p_{u,α}. On the other hand, if α > 5π/6, connectivity is not necessarily preserved. We also propose a set of optimizations that further reduce power consumption and prove that they retain network connectivity. Dynamic reconfiguration in the presence of failures and mobility is also discussed. Simulation results are presented to demonstrate the effectiveness of the algorithm and the optimizations.

336 citations
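The per-node cone condition is easy to check directly: after collecting the directions of the neighbors reachable at the current power, every cone of angle α around the node contains a neighbor exactly when the largest angular gap between consecutive neighbor directions is at most α (an illustrative sketch; the function names are invented, and the full algorithm also grows the power until the test passes):

```python
import math

def cones_covered(neighbor_angles, alpha):
    # True iff every cone of angle `alpha` around the node contains a
    # neighbor, i.e. the largest gap between consecutive neighbor
    # directions (including the wrap-around gap) is at most alpha.
    if not neighbor_angles:
        return False
    a = sorted(t % (2 * math.pi) for t in neighbor_angles)
    gaps = [hi - lo for hi, lo in zip(a[1:], a[:-1])]
    gaps.append(2 * math.pi - a[-1] + a[0])   # wrap-around gap
    return max(gaps) <= alpha

ALPHA = 5 * math.pi / 6   # the paper's critical angle

# three evenly spread neighbors: largest gap 2*pi/3 <= 5*pi/6 -> covered
print(cones_covered([0.0, 2 * math.pi / 3, 4 * math.pi / 3], ALPHA))
# two nearby neighbors leave a gap of ~7*pi/4 -> not covered, so the
# node would keep increasing its transmit power
print(cones_covered([0.0, math.pi / 4], ALPHA))
```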


Journal ArticleDOI
TL;DR: This paper advocates a more refined characterization whereby the high-SNR capacity is expanded as an affine function where the impact of channel features such as antenna correlation, unfaded components, etc., resides in the zero-order term or power offset.
Abstract: The analysis of the multiple-antenna capacity in the high-SNR regime has hitherto focused on the high-SNR slope (or maximum multiplexing gain), which quantifies the multiplicative increase as a function of the number of antennas. This traditional characterization is unable to assess the impact of prominent channel features since, for a majority of channels, the slope equals the minimum of the number of transmit and receive antennas. Furthermore, a characterization based solely on the slope captures only the scaling but has no notion of the power required for a certain capacity. This paper advocates a more refined characterization whereby, as a function of the SNR in decibels, the high-SNR capacity is expanded as an affine function where the impact of channel features such as antenna correlation, unfaded components, etc., resides in the zero-order term or power offset. The power offset, for which we find insightful closed-form expressions, is shown to play a chief role for SNR levels of practical interest.

305 citations


Proceedings ArticleDOI
23 Oct 2005
TL;DR: An O(log OPT) approximation is obtained for a generalization of the orienteering problem in which the profit for visiting each node may vary arbitrarily with time and the implications for the approximability of several basic optimization problems are interesting.
Abstract: Given an arc-weighted directed graph G = (V, A, ℓ) and a pair of nodes s, t, we seek to find an s-t walk of length at most B that maximizes some given function f of the set of nodes visited by the walk. The simplest case is when we seek to maximize the number of nodes visited: this is called the orienteering problem. Our main result is a quasi-polynomial time algorithm that yields an O(log OPT) approximation for this problem when f is a given submodular set function. We then extend it to the case when a node v is counted as visited only if the walk reaches v in its time window [R(v), D(v)]. We apply the algorithm to obtain several new results. First, we obtain an O(log OPT) approximation for a generalization of the orienteering problem in which the profit for visiting each node may vary arbitrarily with time. This captures the time window problem considered earlier, for which, even in undirected graphs, the best approximation ratio known [Bansal, N et al. (2004)] is O(log² OPT). The second application is an O(log² k) approximation for the k-TSP problem in directed graphs (satisfying the asymmetric triangle inequality). This is the first non-trivial approximation algorithm for this problem. The third application is an O(log² k) approximation (in quasi-polynomial time) for the group Steiner problem in undirected graphs, where k is the number of groups. This improves earlier ratios (Garg, N et al.) by a logarithmic factor and almost matches the inapproximability threshold on trees (Halperin and Krauthgamer, 2003). This connection to group Steiner trees also enables us to prove that the problem we consider is hard to approximate to a ratio better than Ω(log^{1−ε} OPT), even in undirected graphs. Even though our algorithm runs in quasi-polynomial time, we believe that the implications for the approximability of several basic optimization problems are interesting.

272 citations
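For intuition about the objective, the basic orienteering problem (maximize distinct nodes visited on a length-bounded s-t route) can be solved by brute force on tiny instances. The sketch below enumerates simple paths only, which suffices for this toy graph; the paper's quasi-polynomial algorithm handles the general walk version:

```python
def best_orienteering(arcs, s, t, B):
    # Maximize the number of distinct nodes on an s-t path of length <= B.
    # arcs: dict mapping (u, v) -> arc length; brute force over simple paths.
    best = 0

    def dfs(u, length, visited):
        nonlocal best
        if u == t:
            best = max(best, len(visited))
        for (a, b), w in arcs.items():
            if a == u and b not in visited and length + w <= B:
                dfs(b, length + w, visited | {b})

    dfs(s, 0.0, {s})
    return best

arcs = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0}
print(best_orienteering(arcs, s=0, t=3, B=3.0))  # budget 3: take 0-1-2-3, 4 nodes
print(best_orienteering(arcs, s=0, t=3, B=1.0))  # budget 1: only 0-3 fits, 2 nodes
```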


Journal ArticleDOI
TL;DR: This study examines the verification of linear time properties of RSMs, and easily derive algorithms for linear time temporal logic model checking with the same complexity in the model.
Abstract: Recursive state machines (RSMs) enhance the power of ordinary state machines by allowing vertices to correspond either to ordinary states or to potentially recursive invocations of other state machines. RSMs can model the control flow in sequential imperative programs containing recursive procedure calls. They can be viewed as a visual notation extending Statecharts-like hierarchical state machines, where concurrency is disallowed but recursion is allowed. They are also related to various models of pushdown systems studied in the verification and program analysis communities. After introducing RSMs and comparing their expressiveness with other models, we focus on whether verification can be efficiently performed for RSMs. Our first goal is to examine the verification of linear time properties of RSMs. We begin this study by dealing with two key components for algorithmic analysis and model checking, namely, reachability (Is a target state reachable from initial states?) and cycle detection (Is there a reachable cycle containing an accepting state?). We show that both these problems can be solved in time O(nθ²) and space O(nθ), where n is the size of the recursive machine and θ is the maximum, over all component state machines, of the minimum of the number of entries and the number of exits of each component. From this, we easily derive algorithms for linear time temporal logic model checking with the same complexity in the model. We then turn to properties in the branching time logic CTL*, and again demonstrate a bound linear in the size of the state machine, but only for the case of RSMs with a single exit node.

252 citations


Journal ArticleDOI
Hong Jiang1, Paul A. Wilford1
TL;DR: Analysis is performed to show the tradeoff between the bit rate of the data in the secondary constellation and the performance penalty in receiving the basic constellation.
Abstract: A hierarchical modulation scheme is proposed to upgrade an existing digital broadcast system, such as satellite TV or satellite radio, by adding more data to its transmission. The hierarchical modulation consists of a basic constellation, which is the same as in the original system, and a secondary constellation, which carries the additional data for the upgraded system. The upgraded system with the hierarchical modulation is backward compatible in the sense that receivers that have been deployed in the original system can continue receiving data in the basic constellation. New receivers can be designed to receive data carried in the secondary constellation, as well as in the basic constellation. Analysis is performed to show the tradeoff between the bit rate of the data in the secondary constellation and the performance penalty in receiving the basic constellation.
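The backward-compatibility argument can be made concrete with a toy constellation: the basic layer is QPSK, the secondary layer is a small QPSK offset added on top, and a legacy receiver that only slices the quadrant never notices the offset (a sketch; the bit mapping and the offset size d are invented for the example, not the paper's parameters):

```python
def modulate(basic_bits, secondary_bits, d=0.25):
    # Basic QPSK point, plus a small secondary QPSK offset on top of it.
    b = complex(1 - 2 * basic_bits[0], 1 - 2 * basic_bits[1])
    s = d * complex(1 - 2 * secondary_bits[0], 1 - 2 * secondary_bits[1])
    return b + s

def demod_basic(y):
    # Legacy receiver: quadrant decision only; the offset never flips it.
    return (int(y.real < 0), int(y.imag < 0))

def demod_secondary(y, d=0.25):
    # New receiver: subtract the sliced basic point, then slice the residue.
    b = complex(1 - 2 * (y.real < 0), 1 - 2 * (y.imag < 0))
    r = y - b
    return (int(r.real < 0), int(r.imag < 0))

y = modulate((0, 1), (1, 0))
print(demod_basic(y), demod_secondary(y))  # both layers recovered
```

Making d larger raises the secondary bit rate's noise margin but pushes the composite points closer to the quadrant boundaries, which is exactly the basic-constellation performance penalty the analysis in the paper quantifies.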

Proceedings ArticleDOI
16 May 2005
TL;DR: Simulation results show the proposed architecture can meet the QoS requirement in terms of bandwidth and fairness for all types of traffic.
Abstract: A fair and efficient service flow management architecture for IEEE 802.16 broadband wireless access (BWA) systems operating in TDD mode is proposed. Compared with traditional fixed bandwidth allocation, the proposed architecture adjusts uplink and downlink bandwidth dynamically to achieve higher throughput for unbalanced traffic. A deficit fair priority queue scheduling algorithm is deployed to serve different types of service flows in both uplink and downlink, which provides more fairness to the system. Simulation results show the proposed architecture can meet the QoS requirements in terms of bandwidth and fairness for all types of traffic.

Proceedings ArticleDOI
13 Jun 2005
TL;DR: A finer analysis of the problem for particular classes of DTDs is given, exploring the impact of various DTD constructs, identifying tractable cases, as well as providing the complexity in the query size alone.
Abstract: We study the satisfiability problem associated with XPath in the presence of DTDs. This is the problem of determining, given a query p in an XPath fragment and a DTD D, whether or not there exists an XML document T such that T conforms to D and the answer of p on T is nonempty. We consider a variety of XPath fragments widely used in practice, and investigate the impact of different XPath operators on satisfiability analysis. We first study the problem for negation-free XPath fragments with and without upward axes, recursion and data-value joins, identifying which factors lead to tractability and which to NP-completeness. We then turn to fragments with negation but without data values, establishing lower and upper bounds in the absence and in the presence of upward modalities and recursion. We show that with negation the complexity ranges from PSPACE to EXPTIME. Moreover, when both data values and negation are in place, we find that the complexity ranges from NEXPTIME to undecidable. Finally, we give a finer analysis of the problem for particular classes of DTDs, exploring the impact of various DTD constructs, identifying tractable cases, as well as providing the complexity in the query size alone.

Proceedings ArticleDOI
14 Jun 2005
TL;DR: This work presents the first known distributed-tracking schemes for maintaining accurate quantile estimates with provable approximation guarantees, while simultaneously optimizing the storage space at each remote site as well as the communication cost across the network.
Abstract: While traditional database systems optimize for performance on one-shot queries, emerging large-scale monitoring applications require continuous tracking of complex aggregates and data-distribution summaries over collections of physically-distributed streams. Thus, effective solutions have to be simultaneously space efficient (at each remote site), communication efficient (across the underlying communication network), and provide continuous, guaranteed-quality estimates. In this paper, we propose novel algorithmic solutions for the problem of continuously tracking complex holistic aggregates in such a distributed-streams setting --- our primary focus is on approximate quantile summaries, but our approach is more broadly applicable and can handle other holistic-aggregate functions (e.g., "heavy-hitters" queries). We present the first known distributed-tracking schemes for maintaining accurate quantile estimates with provable approximation guarantees, while simultaneously optimizing the storage space at each remote site as well as the communication cost across the network. In a nutshell, our algorithms employ a combination of local tracking at remote sites and simple prediction models for local site behavior in order to produce highly communication- and space-efficient solutions. We perform extensive experiments with real and synthetic data to explore the various tradeoffs and understand the role of prediction models in our schemes. The results clearly validate our approach, revealing significant savings over naive solutions as well as our analytical worst-case guarantees.
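The flavor of the local-tracking-plus-prediction idea can be shown for the simpler problem of tracking a distributed sum: each site stays silent while a trivial ("static") prediction of its value is within a threshold theta, so the coordinator's estimate is always within k·theta of the truth at a fraction of the communication (an illustrative sketch with invented class names; the paper's quantile summaries build on the same principle but are considerably more refined):

```python
class Coordinator:
    # Holds the last reported value from each site; its estimate of the
    # global sum is simply the sum of those reports.
    def __init__(self):
        self.reports = {}

    def update(self, sid, value):
        self.reports[sid] = value

    def estimate(self):
        return sum(self.reports.values())

class Site:
    # A remote site reports to the coordinator only when its local count
    # drifts more than `theta` from its last report.
    def __init__(self, coordinator, sid, theta):
        self.co, self.sid, self.theta = coordinator, sid, theta
        self.count, self.reported = 0, 0

    def observe(self, n=1):
        self.count += n
        if abs(self.count - self.reported) > self.theta:
            self.reported = self.count
            self.co.update(self.sid, self.count)   # one message

co = Coordinator()
sites = [Site(co, i, theta=5) for i in range(3)]
for s in sites:
    for _ in range(12):
        s.observe()

true_total = sum(s.count for s in sites)
print(true_total, co.estimate())  # estimate within 3 * theta of the truth
```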

Proceedings ArticleDOI
28 Aug 2005
TL;DR: Corsac is presented, a cooperation-optimal protocol consisting of a routing protocol and a forwarding protocol that addresses the challenge in wireless ad-hoc networks that a link's cost is determined by two nodes together.
Abstract: In many applications, wireless ad-hoc networks are formed by devices belonging to independent users. Therefore, a challenging problem is how to provide incentives to stimulate cooperation. In this paper, we study ad-hoc games---the routing and packet forwarding games in wireless ad-hoc networks. Unlike previous work which focuses either on routing or on forwarding, this paper investigates both routing and forwarding. We first uncover an impossibility result---there does not exist a protocol such that following the protocol to always forward others' traffic is a dominant action. Then we define a novel solution concept called cooperation-optimal protocols. We present Corsac, a cooperation-optimal protocol consisting of a routing protocol and a forwarding protocol. The routing protocol of Corsac integrates VCG with a novel cryptographic technique to address the challenge in wireless ad-hoc networks that a link's cost (i.e., its type) is determined by two nodes together. Corsac also applies efficient cryptographic techniques to design a forwarding protocol to enforce the routing decision, such that fulfilling the routing decision is the optimal action of each node in the sense that it brings the maximum utility to the node. Additionally, we extend our framework to a practical radio propagation model where a transmission is successful with some probability. We evaluate our protocols using simulations. Our evaluations demonstrate that our protocols provide incentives for nodes to forward packets.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: The dynamics of user throughputs under the GMR algorithm are studied, and it is shown that GMR is asymptotically optimal.
Abstract: We consider the problem of scheduling multiple users sharing a time-varying wireless channel. (As an example, this is a model of scheduling in 3G wireless technologies, such as CDMA2000 3G1xEV-DO downlink scheduling.) We introduce an algorithm which seeks to optimize a concave utility function Σ_i H_i(R_i) of the user throughputs R_i, subject to certain lower and upper throughput bounds: R_i^min ≤ R_i ≤ R_i^max. The algorithm, which we call the gradient algorithm with minimum/maximum rate constraints (GMR), uses a token counter mechanism, which modifies an algorithm solving the corresponding unconstrained problem to produce an algorithm solving the problem with throughput constraints. Two important special cases of the utility functions are Σ_i log R_i and Σ_i R_i, corresponding to the common proportional fairness and throughput maximization objectives. We study the dynamics of user throughputs under the GMR algorithm, and show that GMR is asymptotically optimal in the following sense: if, under an appropriate scaling, the throughput vector R(t) converges to a fixed vector R⁺ as time t → ∞, then R⁺ is an optimal solution to the optimization problem described above. We also present simulation results showing the algorithm's performance.
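Stripped of the token-counter mechanism that enforces the min/max rate bounds, the underlying gradient scheduler is compact: each slot serves the user maximizing the utility gradient times its current feasible rate, which for the proportional-fairness utility Σ_i log R_i reduces to serving argmax_i r_i(t)/R_i(t) (a sketch; the channel model and the averaging constant are invented for the example):

```python
import random

def gradient_schedule(rates_fn, n_users, T, beta=0.01):
    # Proportional-fair gradient scheduling for utility sum_i log R_i:
    # each slot, serve the user with the largest r_i(t) / R_i(t).
    R = [1e-3] * n_users            # exponentially smoothed throughputs
    for t in range(T):
        r = rates_fn(t)             # instantaneous feasible rates
        i = max(range(n_users), key=lambda j: r[j] / R[j])
        for j in range(n_users):
            served = r[j] if j == i else 0.0
            R[j] = (1 - beta) * R[j] + beta * served
    return R

random.seed(0)
# user 0's channel is on average twice as good as user 1's
R = gradient_schedule(lambda t: [2.0 * random.random(), random.random()],
                      n_users=2, T=20000)
print(R)  # both users receive service; neither is starved
```

The stronger user ends up with higher throughput, but the weaker user still gets a substantial share of the slots; the token counters in GMR sit on top of exactly this rule to pull each R_i back inside its [R_i^min, R_i^max] band.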

Kenneth L. Clarkson1
01 Jan 2005
TL;DR: Several measures of dimension can be estimated using nearest-neighbor searching, while others can be used to estimate the cost of that searching.
Abstract: Given a set S of points in a metric space with distance function D, the nearest-neighbor searching problem is to build a data structure for S so that for an input query point q, the point s ∈ S that minimizes D(s, q) can be found quickly. We survey approaches to this problem, and its relation to concepts of metric space dimension. Several measures of dimension can be estimated using nearest-neighbor searching, while others can be used to estimate the cost of that searching. In recent years, several data structures have been proposed that are provably good for low-dimensional spaces, for some particular measures of dimension. These and other data structures for nearest-neighbor searching are surveyed.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper investigates practically realizable candidate algorithms for spectrum allocation for homogeneous CDMA networks based on important spectrum management concepts of scope, access fairness, "stickiness" and spectrum utilization.
Abstract: This paper focuses on spectrum management in next generation cellular networks that employ coordinated dynamic spectrum access (DSA). In our model, a spectrum broker controls and provides time-bounded access to a band of spectrum to wireless service providers and/or end users and implements the spectrum pricing and allocation schemes and policies. We introduce several concepts that are central to the design of spectrum management algorithms. These include: (1) demand processing model (batched vs. online), (2) spectrum pricing models (merchant mode, simple bidding, and iterative bidding), (3) different network infrastructure options such as shared base stations with collocated antennas, non-shared base stations with collocated antennas, and non-shared base stations with non-collocated antennas, and (4) important spectrum management concepts of scope, access fairness, "stickiness" and spectrum utilization. Based on these concepts, we investigate practically realizable candidate algorithms for spectrum allocation for homogeneous CDMA networks.

Journal ArticleDOI
TL;DR: This paper addresses the problem of distributed routing of restoration paths and introduces the concept of "backtracking" to bound the restoration latency, using a link cost model that captures bandwidth sharing among links using various types of aggregate link-state information.
Abstract: The emerging multiprotocol label switching (MPLS) networks enable network service providers to route bandwidth-guaranteed paths between customer sites. This basic label switched path (LSP) routing is often enhanced using restoration routing, which sets up alternate LSPs to guarantee uninterrupted connectivity in case network links or nodes along the primary path fail. We address the problem of distributed routing of restoration paths, which can be defined as follows: given a request for a bandwidth-guaranteed LSP between two nodes, find a primary LSP, and a set of backup LSPs that protect the links along the primary LSP. A routing algorithm that computes these paths must optimize the restoration latency and the amount of bandwidth used. We introduce the concept of "backtracking" to bound the restoration latency. We consider three different cases characterized by a parameter called the backtracking distance D: 1) no backtracking (D = 0); 2) limited backtracking (D = k); and 3) unlimited backtracking (D = ∞). We use a link cost model that captures bandwidth sharing among links using various types of aggregate link-state information. We first show that joint optimization of primary and backup paths is NP-hard in all cases. We then consider algorithms that compute primary and backup paths in two separate steps. Using link cost metrics that capture bandwidth sharing, we devise heuristics for each case. Our simulation study shows that these algorithms offer a way to trade off bandwidth to meet a range of restoration latency requirements.

Journal ArticleDOI
TL;DR: A centralized downlink scheduling scheme in a cellular network with a small number of relays is proposed and it is found that, with four relays deployed in each sector, it is possible to achieve significant throughput gain including the signaling overhead.
Abstract: Future cellular wireless networks could include multihop transmission through relays. We propose a centralized downlink scheduling scheme in a cellular network with a small number of relays. The scheduling scheme has the property that it guarantees stability of the user queues for the largest set of arrival rates. We obtain throughput results by simulation for various scenarios and study the effect of number of relays, relay transmit power relative to the base station (BS) power, and the effect of distributing a given total power between the BS and different numbers of relays. We also present results for the case without channel fading to determine what fraction of the throughput gain is achieved from diversity reception. We find that, with four relays deployed in each sector, it is possible to achieve significant throughput gain including the signaling overhead.

Journal ArticleDOI
TL;DR: In this article, the design and performance of several generations of wavelength-selective 1×K switches are reviewed, which combine the functionality of a demultiplexer, per-wavelength switch, and multiplexer in a single, low-loss unit.
Abstract: The design and performance of several generations of wavelength-selective 1×K switches are reviewed. These optical subsystems combine the functionality of a demultiplexer, per-wavelength switch, and multiplexer in a single, low-loss unit. Free-space optics is utilized for spatially separating the constituent wavelength division multiplexing (WDM) channels as well as for space-division switching from an input optical fiber to one of K output fibers (1×K functionality) on a channel-by-channel basis using a microelectromechanical system (MEMS) micromirror array. The switches are designed to provide wide and flat passbands for minimal signal distortion. They can also provide spectral equalization and channel blocking functionality, making them well suited for use in transparent WDM optical mesh networks.

Journal ArticleDOI
25 Feb 2005-Science
TL;DR: ENA imaging has revealed a radiation belt that resides inward of the D ring and is probably the result of double charge exchange between the main radiation belt and the upper layers of Saturn's exosphere.
Abstract: The Magnetospheric Imaging Instrument (MIMI) onboard the Cassini spacecraft observed the saturnian magnetosphere from January 2004 until Saturn orbit insertion (SOI) on 1 July 2004. The MIMI sensors observed frequent energetic particle activity in interplanetary space for several months before SOI. When the imaging sensor was switched to its energetic neutral atom (ENA) operating mode on 20 February 2004, at approximately 10³ times Saturn's radius R_S (0.43 astronomical units), a weak but persistent signal was observed from the magnetosphere. About 10 days before SOI, the magnetosphere exhibited a day-night asymmetry that varied with an approximately 11-hour periodicity. Once Cassini entered the magnetosphere, in situ measurements showed high concentrations of H⁺, H₂⁺, O⁺, OH⁺, and H₂O⁺ and low concentrations of N⁺. The radial dependence of ion intensity profiles implies neutral gas densities sufficient to produce high loss rates of trapped ions from the middle and inner magnetosphere. ENA imaging has revealed a radiation belt that resides inward of the D ring and is probably the result of double charge exchange between the main radiation belt and the upper layers of Saturn's exosphere.

Journal ArticleDOI
TL;DR: This paper classifies wireless networks with orthogonal channels into two types, half duplex and full duplex, considers the problem of jointly routing the flows and scheduling transmissions to achieve a given rate vector, and develops tight necessary and sufficient conditions for the achievability of the rate vector.
Abstract: This paper considers the problem of determining the achievable rates in multi-hop wireless mesh networks with orthogonal channels. We classify wireless networks with orthogonal channels into two types, half duplex and full duplex, and consider the problem of jointly routing the flows and scheduling transmissions to achieve a given rate vector. We develop tight necessary and sufficient conditions for the achievability of the rate vector. We develop efficient and easy-to-implement fully polynomial-time approximation schemes for solving the routing problem. The scheduling problem is solved as a graph edge-coloring problem. We show that this approach guarantees that the solution obtained is within 50% of the optimal solution in the worst case (within 67% of the optimal solution in a common special case) and, in practice, is close to 90% of the optimal solution on the average. The approach that we use is quite flexible and can be extended to handle more sophisticated interference conditions, and routing with diversity requirements.
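The edge-coloring step can be illustrated with a minimal greedy sketch (not the paper's approximation algorithm): assign each link the smallest time slot not already used at either endpoint. Links that share a node (i.e., a radio) always receive different slots, and greedy uses at most 2Δ - 1 colors, where Δ is the maximum node degree.

```python
from collections import defaultdict

def greedy_edge_coloring(edges):
    """Assign each edge the smallest color not already in use at
    either endpoint. Uses at most 2*Delta - 1 colors, where Delta is
    the maximum node degree (Vizing guarantees Delta or Delta + 1 is
    achievable, but greedy is far simpler)."""
    used = defaultdict(set)   # node -> colors on its incident edges
    coloring = {}
    for (u, v) in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

# Illustrative mesh: edges sharing a node get distinct time slots.
links = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
slots = greedy_edge_coloring(links)
```

In the half-duplex setting, each color class is a set of links that can be activated simultaneously, so the number of colors bounds the schedule length.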

Book ChapterDOI
TL;DR: It is argued that an architecture that maximises compatibility with existing languages, in particular RDF and OWL, will benefit the development of the Semantic Web, and still allow for forms of closed world assumption and negation as failure.
Abstract: We discuss language architecture for the Semantic Web, and in particular different proposals for extending this architecture with a rules component. We argue that an architecture that maximises compatibility with existing languages, in particular RDF and OWL, will benefit the development of the Semantic Web, and still allow for forms of closed world assumption and negation as failure.

Proceedings ArticleDOI
13 Jun 2005
TL;DR: This work studies streams of edges in massive communication multigraphs, defined by (source, destination) pairs, and finds that cascaded summaries are highly effective, enabling massive multigraph streams to be summarized to answer queries of interest with high accuracy using only a small amount of space.
Abstract: The challenge of monitoring massive amounts of data generated by communication networks has led to the interest in data stream processing. We study streams of edges in massive communication multigraphs, defined by (source, destination) pairs. The goal is to compute properties of the underlying graph while using small space (much smaller than the number of communicants), and to avoid bias introduced because some edges may appear many times, while others are seen only once. We give results for three fundamental problems on multigraph degree sequences: estimating frequency moments of degrees, finding the heavy hitter degrees, and computing range sums of degree values. In all cases we are able to show space bounds for our summarizing algorithms that are significantly smaller than storing complete information. We use a variety of data stream methods: sketches, sampling, hashing and distinct counting, but a common feature is that we use cascaded summaries: nesting multiple estimation techniques within one another. In our experimental study, we see that such summaries are highly effective, enabling massive multigraph streams to be effectively summarized to answer queries of interest with high accuracy using only a small amount of space.
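One building block of such cascaded summaries is a distinct counter, which removes the bias introduced by repeated edges. The sketch below is an illustrative k-minimum-values (KMV) estimator, not the authors' exact construction: because duplicates hash identically, repeated edges cannot inflate a node's estimated degree.

```python
import hashlib

class KMVDistinctCounter:
    """k-minimum-values distinct-count estimator: keep the k smallest
    hash values seen; duplicate items hash identically, so repeated
    edges add no bias. Estimate = (k - 1) / (k-th smallest hash)."""
    def __init__(self, k=64):
        self.k = k
        self.mins = []  # sorted k smallest hashes, normalized to [0, 1)

    def add(self, item):
        h = int(hashlib.md5(str(item).encode()).hexdigest(), 16) / 2**128
        if h in self.mins:
            return  # duplicate edge: no effect on the summary
        if len(self.mins) < self.k:
            self.mins.append(h)
            self.mins.sort()
        elif h < self.mins[-1]:
            self.mins[-1] = h
            self.mins.sort()

    def estimate(self):
        if len(self.mins) < self.k:
            return len(self.mins)  # exact while under capacity
        return (self.k - 1) / self.mins[-1]

# Degree of one source node in a multigraph stream: 1000 distinct
# destinations, each edge repeated 5 times.
kmv = KMVDistinctCounter(k=64)
for dst in range(1000):
    for _ in range(5):
        kmv.add(dst)
est = kmv.estimate()
```

Nesting such a distinct counter inside a heavy-hitter or range-sum structure is the "cascaded summary" idea: the outer summary locates interesting nodes, the inner one counts their distinct neighbors.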

Journal ArticleDOI
TL;DR: Structural properties of each of the main sublanguages of navigational XPath (W3c Recommendation) commonly used in practice are studied, providing sound and complete axiom systems and normal forms for several of these fragments.

Journal ArticleDOI
TL;DR: It is shown that delayed simulation---unlike fair simulation---preserves the automaton language upon quotienting and allows substantially better state space reduction than direct simulation.
Abstract: We give efficient algorithms, improving the best known bounds, for computing a variety of simulation relations on the state space of a Büchi automaton. Our algorithms are derived via a unified and simple parity-game framework. This framework incorporates previously studied notions like fair and direct simulation, but also a new natural notion of simulation called delayed simulation, which we introduce for the purpose of state space reduction. We show that delayed simulation---unlike fair simulation---preserves the automaton language upon quotienting and allows substantially better state space reduction than direct simulation. Using our parity-game approach, which relies on an algorithm by Jurdziński, we give efficient algorithms for computing all of the above simulations. In particular, we obtain an O(mn^3)-time and O(mn)-space algorithm for computing both the delayed and the fair simulation relations. The best prior algorithm for fair simulation requires time and space O(n^6). Our framework also allows one to compute bisimulations: we compute the fair bisimulation relation in O(mn^3) time and O(mn) space, whereas the best prior algorithm for fair bisimulation requires time and space O(n^10).
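For a feel of what a simulation relation is, direct simulation on a tiny Büchi automaton can be computed by a naive greatest-fixpoint refinement. This sketch does not use the paper's parity-game framework and has far worse asymptotic bounds; it only illustrates the relation being computed.

```python
def direct_simulation(states, trans, accepting):
    """Greatest-fixpoint computation of direct simulation on a Buchi
    automaton: q simulates p if (p accepting => q accepting) and every
    move of p can be matched by a move of q into a pair that is again
    in the relation. Naive refinement, not the parity-game algorithm.
    trans maps each state to its set of successors."""
    rel = {(p, q) for p in states for q in states
           if (p not in accepting) or (q in accepting)}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            matched = all(any((ps, qs) in rel for qs in trans[q])
                          for ps in trans[p])
            if not matched:
                rel.discard((p, q))
                changed = True
    return rel  # (p, q) in rel means q simulates p

# Tiny example: b has every behaviour of a, so b simulates a;
# the accepting state c is simulated only by accepting states.
states = {"a", "b", "c"}
trans = {"a": {"c"}, "b": {"c", "a"}, "c": {"c"}}
accepting = {"c"}
sim = direct_simulation(states, trans, accepting)
```

Quotienting the automaton by (bi)simulation-equivalent pairs found this way is the state space reduction the paper optimizes.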

Journal ArticleDOI
TL;DR: In this article, a wide-tuning-range optical delay line is demonstrated in high (2%) index contrast waveguides; it integrates four-stage ring resonator all-pass filters (APFs) with cascaded fixed spiral-type delay waveguides, where each fixed delay path varies in length by a factor of two from the previous stage.
Abstract: A wide-tuning-range optical delay line is demonstrated in high (2%) index contrast waveguides. This device integrates four-stage ring resonator all-pass filters (APFs) with cascaded fixed spiral-type delay waveguides; each fixed delay path varies in length by a factor of two from the previous stage. A 2×2 switch separates each fixed delay and the tunable parts of the delay line. The APF allows for continuous delay tuning. This device enables coherent switching and continuous tuning ranges up to 2.56 ns.
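The binary cascade of fixed delays can be sketched numerically. The 0.16-ns base delay below is an assumed value, chosen so that four doubling stages plus one base step of APF fine tuning reproduce the quoted 2.56-ns range; it is not a measured device parameter.

```python
def total_delay_ns(switch_bits, apf_fine_ns, base_ns=0.16):
    """Total delay of a binary-cascade tunable delay line.
    switch_bits[i] selects whether stage i's fixed spiral delay
    (base_ns * 2**i) is in the path; apf_fine_ns is the continuous
    contribution of the ring-resonator all-pass filter, which must
    span at least one base step so the overall tuning is gapless."""
    coarse = sum(b * base_ns * 2**i for i, b in enumerate(switch_bits))
    return coarse + apf_fine_ns

# Four stages of 0.16, 0.32, 0.64, and 1.28 ns give 2.40 ns of
# coarse delay with all switches on; the APF tunes continuously
# on top, for a total range of 2.56 ns under these assumptions.
```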

Book ChapterDOI
Chuanhai Liu1
14 Jul 2005
TL;DR: In this article, the maximum likelihood estimators of the robit model with a known number of degrees of freedom were shown to be robust to outliers, and the authors proposed a Data Augmentation (DA) algorithm for Bayesian inference with the Robit regression model.
Abstract: Logistic and probit regression models are commonly used in practice to analyze binary response data, but the maximum likelihood estimators of these models are not robust to outliers. This paper considers a robit regression model, which replaces the normal distribution in the probit regression model with a t-distribution with a known or unknown number of degrees of freedom. It is shown that (i) the maximum likelihood estimators of the robit model with a known number of degrees of freedom are robust; (ii) the robit link with about seven degrees of freedom provides an excellent approximation to the logistic link; and (iii) the robit link with a large number of degrees of freedom approximates the probit link. The maximum likelihood estimates can be obtained using efficient EM-type algorithms. EM-type algorithms also provide information that can be used to identify outliers, to which the maximum likelihood estimates of the logistic and probit regression coefficient would be sensitive. The EM algorithms for robit regression are easily modified to obtain efficient Data Augmentation (DA) algorithms for Bayesian inference with the robit regression model. The DA algorithms for robit regression model are much simpler to implement than the existing Gibbs sampler for the logistic regression model. A numerical example illustrates the methodology.
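Claim (ii), that the robit link with about seven degrees of freedom approximates the logistic link, can be checked numerically. The sketch below uses the closed-form CDF of the t-distribution with 7 df (odd df admits an arctan/polynomial form) and a variance-matching scale factor of our own choosing; the paper's calibration may differ.

```python
import math

def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

def t7_cdf(t):
    """Closed-form CDF of Student's t with 7 degrees of freedom:
    F(t) = 1/2 + (1/pi) * [atan(x) + x/s + (2/3)x/s^2 + (8/15)x/s^3]
    with x = t/sqrt(7) and s = 1 + x^2."""
    x = t / math.sqrt(7.0)
    s = 1.0 + x * x
    inner = (math.atan(x) + x / s + (2.0 / 3.0) * x / s**2
             + (8.0 / 15.0) * x / s**3)
    return 0.5 + inner / math.pi

def robit7_link(z):
    """Robit link with 7 df, rescaled so the t variance (7/5)
    matches the logistic variance (pi^2 / 3)."""
    scale = math.sqrt((math.pi**2 / 3.0) / (7.0 / 5.0))
    return t7_cdf(z / scale)

# Largest gap between the two links over a grid of linear predictors.
grid = [i / 10.0 for i in range(-80, 81)]
max_gap = max(abs(robit7_link(z) - logistic_cdf(z)) for z in grid)
```

Under this variance matching, the two link functions agree to within a few thousandths everywhere, which is what makes the robit a practical drop-in replacement for the logistic link.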

Proceedings ArticleDOI
Holger Claussen1
11 Sep 2005
TL;DR: It is shown that taking a small number of neighbouring values into account is sufficient to achieve relatively small correlation errors in the generated shadow fading maps.
Abstract: For simulating mobility of both user terminals and base stations in wireless access systems, it is often advantageous to generate environment "maps" of the channel attenuations including path loss and shadow fading. However, the generation of a large number of spatially correlated shadow fading values, using for example the Cholesky decomposition of the corresponding correlation matrix, is computationally complex and requires a large amount of memory. In this paper an efficient low complexity alternative is proposed, where each new fading value is generated based only on the correlation with selected neighbouring values in the map. This results in a significant reduction of the computational complexity and memory requirements. It is shown that taking a small number of neighbouring values into account is sufficient to achieve relatively small correlation errors in the generated shadow fading maps.
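A minimal sketch of the idea, assuming an exponential correlation model exp(-d/d_corr) and conditioning each new cell on just its left and upper neighbours (the paper considers a configurable number of neighbours, and all parameter values below are illustrative):

```python
import math
import random

def generate_shadow_map(rows, cols, sigma_db=8.0, d_corr=50.0,
                        step=10.0, seed=1):
    """Generate a spatially correlated shadow fading map (in dB) at
    low cost: each new cell is drawn from its Gaussian conditional
    distribution given only its left and upper neighbours, instead
    of Cholesky-factoring the full correlation matrix.
    Correlation model: exp(-distance / d_corr), grid spacing `step`."""
    rng = random.Random(seed)
    r = math.exp(-step / d_corr)                   # adjacent cells
    r2 = math.exp(-step * math.sqrt(2) / d_corr)   # diagonal cells
    w = r / (1.0 + r2)          # conditional weight of each neighbour
    cond_sd = sigma_db * math.sqrt(1.0 - 2.0 * w * r)
    edge_sd = sigma_db * math.sqrt(1.0 - r * r)    # one-neighbour case
    grid = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if i == 0 and j == 0:
                grid[i][j] = rng.gauss(0.0, sigma_db)
            elif i == 0:   # first row: condition on left neighbour
                grid[i][j] = r * grid[i][j - 1] + rng.gauss(0.0, edge_sd)
            elif j == 0:   # first column: condition on cell above
                grid[i][j] = r * grid[i - 1][j] + rng.gauss(0.0, edge_sd)
            else:          # interior: condition on left and up
                mean = w * (grid[i][j - 1] + grid[i - 1][j])
                grid[i][j] = mean + rng.gauss(0.0, cond_sd)
    return grid

fading = generate_shadow_map(50, 50)
```

With more neighbours taken into account the conditional mean and variance come from a small linear solve per cell, which is still far cheaper than the full Cholesky decomposition; that is the trade-off the paper quantifies.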