
Showing papers by "Bell Labs" published in 2003


BookDOI
01 Jan 2003
TL;DR: The Description Logic Handbook provides a thorough account of the subject, covering all aspects of research in this field, namely: theory, implementation, and applications; it can also be used for self-study or as a reference for knowledge representation and artificial intelligence courses.
Abstract: Description logics are embodied in several knowledge-based systems and are used to develop various real-life applications. Now in paperback, The Description Logic Handbook provides a thorough account of the subject, covering all aspects of research in this field, namely: theory, implementation, and applications. Its appeal will be broad, ranging from more theoretically oriented readers, to those with more practically oriented interests who need a sound and modern understanding of knowledge representation systems based on description logics. As well as general revision throughout the book, this new edition presents a new chapter on ontology languages for the semantic web, an area of great importance for the future development of the web. In sum, the book will serve as a unique resource for the subject, and can also be used for self-study or as a reference for knowledge representation and artificial intelligence courses.

5,644 citations


Journal ArticleDOI
TL;DR: This work computes a lower bound on the capacity of a channel that is learned by training, and maximizes the bound as a function of the received signal-to-noise ratio (SNR), fading coherence time, and number of transmitter antennas.
Abstract: Multiple-antenna wireless communication links promise very high data rates with low error probabilities, especially when the wireless channel response is known at the receiver. In practice, knowledge of the channel is often obtained by sending known training symbols to the receiver. We show how training affects the capacity of a fading channel-too little training and the channel is improperly learned, too much training and there is no time left for data transmission before the channel changes. We compute a lower bound on the capacity of a channel that is learned by training, and maximize the bound as a function of the received signal-to-noise ratio (SNR), fading coherence time, and number of transmitter antennas. When the training and data powers are allowed to vary, we show that the optimal number of training symbols is equal to the number of transmit antennas-this number is also the smallest training interval length that guarantees meaningful estimates of the channel matrix. When the training and data powers are instead required to be equal, the optimal number of symbols may be larger than the number of antennas. We show that training-based schemes can be optimal at high SNR, but suboptimal at low SNR.
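The training trade-off lends itself to quick numerical exploration. Below is a minimal Monte Carlo sketch of a capacity lower bound of this flavor, assuming equal training and data power, orthogonal training, MMSE channel estimation, and the common device of treating estimation error as worst-case Gaussian noise; the parameter values and the exact effective-SNR expression are our assumptions, not the paper's optimized bound.

```python
import numpy as np

# Sketch: lower-bound throughput of a training-based MIMO link vs. number of
# training symbols T_tau.  Assumptions (ours, not the paper's): equal training
# and data power rho, block-fading coherence time T, i.i.d. Rayleigh channel,
# and the standard "estimation error as worst-case Gaussian noise" bound.
rng = np.random.default_rng(0)

def bound(M, N, T, T_tau, rho, trials=2000):
    # Per-entry MMSE estimation error variance after T_tau orthogonal
    # training symbols of power rho: sigma_e^2 = 1 / (1 + rho*T_tau/M).
    sig_e2 = 1.0 / (1.0 + rho * T_tau / M)
    # Effective SNR when the estimation error acts as extra noise.
    rho_eff = rho * (1 - sig_e2) / (1 + rho * sig_e2)
    rate = 0.0
    for _ in range(trials):
        Hhat = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
        rate += np.log2(np.linalg.det(np.eye(N) + (rho_eff / M) * Hhat @ Hhat.conj().T).real)
    return (1 - T_tau / T) * rate / trials  # fraction of the block left for data

M = N = 4; T = 24; rho = 10.0  # 10 dB
for T_tau in range(M, T, 4):
    print(f"T_tau={T_tau:2d}  bound={bound(M, N, T, T_tau, rho):6.2f} bits/channel use")
```

Sweeping T_tau should show the bound peaking at short training lengths (near the number of transmit antennas) and decaying as training eats into the data phase.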

2,466 citations


Journal ArticleDOI
TL;DR: This work provides a simple method to iteratively detect and decode any linear space-time mapping combined with any channel code that can be decoded using so-called "soft" inputs and outputs, and shows that excellent performance at very high data rates can be attained with either simple convolutional or powerful turbo channel codes.
Abstract: Recent advancements in iterative processing of channel codes and the development of turbo codes have allowed the communications industry to achieve near-capacity on a single-antenna Gaussian or fading channel with low complexity. We show how these iterative techniques can also be used to achieve near-capacity on a multiple-antenna system where the receiver knows the channel. Combining iterative processing with multiple-antenna channels is particularly challenging because the channel capacities can be a factor of ten or more higher than their single-antenna counterparts. Using a "list" version of the sphere decoder, we provide a simple method to iteratively detect and decode any linear space-time mapping combined with any channel code that can be decoded using so-called "soft" inputs and outputs. We exemplify our technique by directly transmitting symbols that are coded with a channel code; we show that iterative processing with even this simple scheme can achieve near-capacity. We consider both simple convolutional and powerful turbo channel codes and show that excellent performance at very high data rates can be attained with either. We compare our simulation results with Shannon capacity limits for ergodic multiple-antenna channels.
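In such a receiver, the detector must produce per-bit soft outputs (LLRs) from a list of candidate symbol vectors. A hedged sketch of that combining step, using the max-log approximation, BPSK, and a brute-force candidate list standing in for a real sphere search; in the full iterative loop, a priori LLRs from the channel decoder would be added to each candidate's metric.

```python
import itertools
import numpy as np

# Sketch of list-based soft MIMO detection (max-log approximation).  For
# brevity the "list" is the full BPSK enumeration rather than the output of
# an actual sphere decoder; the LLR combining step is the same in spirit.
rng = np.random.default_rng(1)
M, sigma2 = 3, 0.5                      # transmit antennas, noise variance
H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
bits = rng.integers(0, 2, M)
s = 2.0 * bits - 1.0                    # BPSK mapping: 0 -> -1, 1 -> +1
y = H @ s + np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

cands = [np.array(c, float) for c in itertools.product([-1.0, 1.0], repeat=M)]
metric = {tuple(c): -np.linalg.norm(y - H @ c) ** 2 / sigma2 for c in cands}

for k in range(M):
    best1 = max(m for c, m in metric.items() if c[k] > 0)   # best with bit k = 1
    best0 = max(m for c, m in metric.items() if c[k] < 0)   # best with bit k = 0
    print(f"bit {k}: LLR = {best1 - best0:+6.2f}  (true bit = {bits[k]})")
```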

2,291 citations


Journal ArticleDOI
TL;DR: A result of Johnson and Lindenstrauss shows that a set of n points in high dimensional Euclidean space can be mapped into an O(log n/ε²)-dimensional Euclidean space such that the distance between any two points changes by only a factor of (1 ± ε).

Abstract: A result of Johnson and Lindenstrauss [13] shows that a set of n points in high dimensional Euclidean space can be mapped into an O(log n/ε²)-dimensional Euclidean space such that the distance between any two points changes by only a factor of (1 ± ε). In this note, we prove this theorem using elementary probabilistic techniques.
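The statement is easy to check empirically. A small sketch using a Gaussian random projection, with n, d, and ε (and the constant 8) chosen arbitrarily for illustration:

```python
import numpy as np

# Empirical check of the Johnson-Lindenstrauss statement: project n points
# from dimension d down to k = O(log n / eps^2) and look at the worst-case
# pairwise-distance distortion.  Parameters are illustrative only.
rng = np.random.default_rng(2)
n, d, eps = 200, 2000, 0.3
k = int(np.ceil(8 * np.log(n) / eps**2))        # the constant 8 is arbitrary

X = rng.standard_normal((n, d))
P = rng.standard_normal((d, k)) / np.sqrt(k)    # E[|xP|^2] = |x|^2
Y = X @ P

def pdist(A):
    # All pairwise Euclidean distances via the Gram matrix.
    sq = (A ** 2).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
    return np.sqrt(np.maximum(d2, 0))

iu = np.triu_indices(n, 1)
ratio = pdist(Y)[iu] / pdist(X)[iu]
print(f"k = {k}, distortion range: [{ratio.min():.3f}, {ratio.max():.3f}]")
```

For these parameters, every pairwise distance typically lands within the (1 ± ε) window.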

1,036 citations


Book
06 Mar 2003
TL;DR: The first edition made a number of predictions, explicitly or implicitly, about the growth of the Web and the vast increase in patterns of Internet connectivity; it warned of issues posed by home LANs and of problems caused by roaming laptops.
Abstract: From the Book: But after a time, as Frodo did not show any sign of writing a book on the spot, the hobbits returned to their questions about doings in the Shire. Lord of the Rings —J.R.R. TOLKIEN

The first printing of the First Edition appeared at the Las Vegas Interop in May, 1994. At that same show appeared the first of many commercial firewall products. In many ways, the field has matured since then: You can buy a decent firewall off the shelf from many vendors. The problem of deploying that firewall in a secure and useful manner remains. We have studied many Internet access arrangements in which the only secure component was the firewall itself—it was easily bypassed by attackers going after the “protected” inside machines. Before the trivestiture of AT&T/Lucent/NCR, there were over 300,000 hosts behind at least six firewalls, plus special access arrangements with some 200 business partners.

Our first edition did not discuss the massive sniffing attacks discovered in the spring of 1994. Sniffers had been running on important Internet Service Provider (ISP) machines for months—machines that had access to a major percentage of the ISP’s packet flow. By some estimates, these sniffers captured over a million host name/user name/password sets from passing telnet, ftp, and rlogin sessions. There were also reports of increased hacker activity on military sites. It’s obvious what must have happened: If you are a hacker with a million passwords in your pocket, you are going to look for the most interesting targets, and .mil certainly qualifies.

Since the First Edition, we have been slowly losing the Internet arms race. The hackers have developed and deployed tools for attacks we had been anticipating for years. IP spoofing [Shimomura, 1996] and TCP hijacking are now quite common, according to the Computer Emergency Response Team (CERT). ISPs report that attacks on the Internet’s infrastructure are increasing. There was one attack we chose not to include in the First Edition: the SYN-flooding denial-of-service attack that seemed to be unstoppable. Of course, the Bad Guys learned about the attack anyway, making us regret that we had deleted that paragraph in the first place. We still believe that it is better to disseminate this information, informing saints and sinners at the same time. The saints need all the help they can get, and the sinners have their own channels of communication.

Crystal Ball or Bowling Ball?

The first edition made a number of predictions, explicitly or implicitly. Was our foresight accurate? Our biggest failure was neglecting to foresee how successful the Internet would become. We barely mentioned the Web and declined a suggestion to use some weird syntax when listing software resources. The syntax, of course, was the URL... Concomitant with the growth of the Web, the patterns of Internet connectivity vastly increased. We assumed that a company would have only a few external connections—few enough that they’d be easy to keep track of, and to firewall. Today’s spaghetti topology was a surprise. We didn’t realize that PCs would become Internet clients as soon as they did. We did, however, warn that as personal machines became more capable, they’d become more vulnerable. Experience has proved us very correct on that point. We did anticipate high-speed home connections, though we spoke of ISDN, rather than cable modems or DSL. (We had high-speed connectivity even then, though it was slow by today’s standards.) We also warned of issues posed by home LANs, and we warned about the problems caused by roaming laptops. We were overly optimistic about the deployment of IPv6 (which was called IPng back then, as the choice hadn’t been finalized). It still hasn’t been deployed, and its future is still somewhat uncertain. We were correct, though, about the most fundamental point we made: Buggy host software is a major security issue. In fact, we called it the “fundamental theorem of firewalls”: Most hosts cannot meet our requirements: they run too many programs that are too large. Therefore, the only solution is to isolate them behind a firewall if you wish to run any programs at all. If anything, we were too conservative.

Our Approach

This book is nearly a complete rewrite of the first edition. The approach is different, and so are many of the technical details. Most people don’t build their own firewalls anymore. There are far more Internet users, and the economic stakes are higher. The Internet is a factor in warfare. The field of study is also much larger—there is too much to cover in a single book. One reviewer suggested that Chapters 2 and 3 could be a six-volume set. (They were originally one mammoth chapter.) Our goal, as always, is to teach an approach to security. We took far too long to write this edition, but one of the reasons why the first edition survived as long as it did was that we concentrated on the concepts, rather than details specific to a particular product at a particular time. The right frame of mind goes a long way toward understanding security issues and making reasonable security decisions. We’ve tried to include anecdotes, stories, and comments to make our points. Some complain that our approach is too academic, or too UNIX-centric, that we are too idealistic, and don’t describe many of the most common computing tools. We are trying to teach attitudes here more than specific bits and bytes. Most people have hideously poor computing habits and network hygiene. We try to use a safer world ourselves, and are trying to convey how we think it should be.

The chapter outline follows, but we want to emphasize the following: It is OK to skip the hard parts. If we dive into detail that is not useful to you, feel free to move on. The introduction covers the overall philosophy of security, with a variety of time-tested maxims. As in the first edition, Chapter 2 discusses most of the important protocols, from a security point of view. We moved material about higher-layer protocols to Chapter 3. The Web merits a chapter of its own. The next part discusses the threats we are dealing with: the kinds of attacks in Chapter 5, and some of the tools and techniques used to attack hosts and networks in Chapter 6. Part III covers some of the tools and techniques we can use to make our networking world safer. We cover authentication tools in Chapter 7, and safer network servicing software in Chapter 8. Part IV covers firewalls and virtual private networks (VPNs). Chapter 9 introduces various types of firewalls and filtering techniques, and Chapter 10 summarizes some reasonable policies for filtering some of the more essential services discussed in Chapter 2. If you don’t find advice about filtering a service you like, we probably think it is too dangerous (refer to Chapter 2). Chapter 11 covers a lot of the deep details of firewalls, including their configuration, administration, and design. It is certainly not a complete discussion of the subject, but should give readers a good start. VPN tunnels, including holes through firewalls, are covered in some detail in Chapter 12. There is more detail in Chapter 18. In Part V, we apply these tools and lessons to organizations. Chapter 13 examines the problems and practices on modern intranets. See Chapter 14 for information about deploying a hacking-resistant host, which is useful in any part of an intranet. Though we don’t especially like intrusion detection systems (IDSs), they do play a role in security, and are discussed in Chapter 15. The last part offers a couple of stories and some further details. The Berferd chapter is largely unchanged, and we have added “The Taking of Clark,” a real-life story about a minor break-in that taught useful lessons. Chapter 18 discusses secure communications over insecure networks, in quite some detail. For even further detail, Appendix A has a short introduction to cryptography. The conclusion offers some predictions by the authors, with justifications. If the predictions are wrong, perhaps the justifications will be instructive. (We don’t have a great track record as prophets.) Appendix B provides a number of resources for keeping up in this rapidly changing field.

Errata and Updates

Everyone and everything seems to have a Web site these days; this book is no exception. Our “official” Web site is . We’ll post an errata list there; we’ll also keep an up-to-date list of other useful Web resources. If you find any errors—we hope there aren’t many—please let us know via e-mail at .

Acknowledgments

For many kindnesses, we’d like to thank Joe Bigler, Steve “Hollywood” Branigan, Hal Burch, Brian Clapper, David Crocker, Tom Dow, Phil Edwards and the Internet Public Library, Anja Feldmann, Karen Gettman, Brian Kernighan, David Korman, Tom Limoncelli, Norma Loquendi, Cat Okita, Robert Oliver, Vern Paxson, Marcus Ranum, Eric Rescorla, Guido van Rooij, Luann Rouff (a most excellent copy editor), Abba Rubin, Peter Salus, Glenn Sieb, Karl Siil (we’ll always have Boston), Irina Strizhevskaya, Rob Thomas, Win Treese, Dan Wallach, Avishai Wool, Karen Yannetta, and Michal Zalewski, among many others.

BILL CHESWICK
STEVE BELLOVIN
AVI RUBIN

730 citations


Journal ArticleDOI
TL;DR: A statistical signal processing technique based on abrupt change detection is described and shown to be effective at detecting several network anomalies; the approach has great potential to enhance the field and thereby improve the reliability of IP networks.
Abstract: Network anomaly detection is a vibrant research area. Researchers have approached this problem using various techniques such as artificial intelligence, machine learning, and state machine modeling. In this paper, we first review these anomaly detection methods and then describe in detail a statistical signal processing technique based on abrupt change detection. We show that this signal processing technique is effective at detecting several network anomalies. Case studies from real network data that demonstrate the power of the signal processing approach to network anomaly detection are presented. The application of signal processing techniques to this area is still in its infancy, and we believe that it has great potential to enhance the field, and thereby improve the reliability of IP networks.
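As a generic illustration of abrupt change detection (a classic one-sided CUSUM test, not necessarily the authors' specific statistic), consider a synthetic traffic series with a mean shift:

```python
import numpy as np

# Generic CUSUM abrupt-change detector on a synthetic traffic series.
# Illustrative only: the series, drift, and threshold are made up; the
# paper's own statistic may differ.
rng = np.random.default_rng(3)
series = np.concatenate([rng.normal(100, 5, 300),    # normal load
                         rng.normal(130, 5, 100)])   # anomaly: mean shift

mu = series[:100].mean()                 # baseline from a training window
drift, threshold = 2.0, 25.0
g = 0.0
for t, x in enumerate(series):
    g = max(0.0, g + (x - mu - drift))   # one-sided CUSUM statistic
    if g > threshold:
        print(f"change detected at sample {t}")
        break
```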

557 citations


Book ChapterDOI
01 Jan 2003
TL;DR: A new transistor sizing algorithm, which couples synchronous timing analysis with convex optimization techniques, is presented; because the underlying programs are convex, any point found to be locally optimal is certain to be globally optimal.

Abstract: A new transistor sizing algorithm, which couples synchronous timing analysis with convex optimization techniques, is presented. Let A be the sum of transistor sizes, T the longest delay through the circuit, and K a positive constant. Using a distributed RC model, each of the following three programs is shown to be convex: 1) minimize A subject to T < K; 2) minimize T subject to A < K; 3) minimize AT^K. The convex equations describing T are a particular class of functions called posynomials. Convex programs have many pleasant properties, and chief among these is the fact that any point found to be locally optimal is certain to be globally optimal. TILOS (Timed Logic Synthesizer) is a program that sizes transistors in CMOS circuits. Preliminary results of TILOS's transistor sizing algorithm are presented.
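For background, and not specific to this chapter: a posynomial in the sizes x_1, ..., x_n is a function of the form

```latex
f(x) = \sum_{k} c_k \prod_{i=1}^{n} x_i^{a_{ik}}, \qquad c_k > 0, \; a_{ik} \in \mathbb{R},
```

and under the substitution x_i = e^{y_i} every posynomial becomes a convex function of y, which is why local optima of the three programs above are global.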

542 citations


Journal ArticleDOI
TL;DR: An approach that provides analytic expressions for the statistics of the mutual information of multiple-antenna systems with arbitrary correlations, interferers, and noise is presented, and a method to analytically optimize over the input signal covariance is developed.
Abstract: The use of multiple-antenna arrays in both transmission and reception promises huge increases in the throughput of wireless communication systems. It is therefore important to analyze the capacities of such systems in realistic situations, which may include spatially correlated channels and correlated noise, as well as correlated interferers with known channel at the receiver. Here, we present an approach that provides analytic expressions for the statistics, i.e., the moments of the distribution, of the mutual information of multiple-antenna systems with arbitrary correlations, interferers, and noise. We assume that the channels of the signal and the interference are Gaussian with arbitrary covariance. Although this method is valid formally for large antenna numbers, it produces extremely accurate results even for arrays with as few as two or three antennas. We also develop a method to analytically optimize over the input signal covariance, which enables us to calculate analytic capacities when the transmitter has knowledge of the statistics of the channel (i.e., the channel covariance). In many cases of interest, this capacity is very close to the full closed-loop capacity, in which the transmitter has instantaneous channel knowledge. We apply this analytic approach to a number of examples and we compare our results with simulations to establish the validity of this approach. This method provides a simple tool to analyze the statistics of throughput for arrays of any size. The emphasis of this paper is on elucidating the novel mathematical methods used.
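For reference, the random quantity whose moments are being characterized is the standard MIMO mutual information; in our notation, with n_t transmit and n_r receive antennas, SNR ρ, channel H, and normalized input covariance Q,

```latex
I = \log_2 \det\!\left( \mathbf{I}_{n_r} + \frac{\rho}{n_t}\, \mathbf{H} \mathbf{Q} \mathbf{H}^{\dagger} \right),
```

and the "statistics of the mutual information" are the mean, variance, and higher moments of I over the channel ensemble.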

441 citations


Proceedings ArticleDOI
20 May 2003
TL;DR: A central notion of a "conversation", a sequence of messages observed by a global watcher, is studied, and conversation specifications are proposed as a formalism to define the conversations allowed by an e-service composition.
Abstract: This paper introduces a framework for modeling and specifying the global behavior of e-service compositions. Under this framework, peers (individual e-services) communicate through asynchronous messages and each peer maintains a queue for incoming messages. A global "watcher" keeps track of messages as they occur. We propose and study a central notion of a "conversation", which is a sequence of (classes of) messages observed by the watcher. We consider the case where the peers are represented by Mealy machines (finite state machines with input and output). The sets of conversations exhibit unexpected behaviors. For example, there exists a composite e-service based on Mealy peers whose set of conversations is not context free (and not regular). (The set of conversations is always context sensitive.) One cause for this is the queuing of messages; we introduce an operator "prepone" that simulates queue delays from a global perspective and show that the set of conversations of each Mealy e-service is closed under prepone. We illustrate that the global prepone fails to completely capture the queue delay effects and refine prepone to a "local" version on conversations seen by individual peers. On the other hand, Mealy implementations of a composite e-service will always generate conversations whose "projections" are consistent with individual e-services. We use projection-join to reflect such situations. However, there are still Mealy peers whose set of conversations is not the local prepone and projection-join closure of any regular language. Therefore, we propose conversation specifications as a formalism to define the conversations allowed by an e-service composition. We give two technical results concerning the interplay between the local behaviors of Mealy peers and the global behaviors of their compositions. One result shows that for each regular language, its local prepone and projection-join closure corresponds to the set of conversations by some Mealy peers effectively constructed from it. The second result gives a condition on the shape of a composition which guarantees that the set of conversations that can be realized is the local prepone and projection-join closure of a regular language.
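A toy rendering of the setting may help fix ideas: two peers modeled as Mealy machines exchange messages through FIFO queues while a global watcher logs each send. The machines and message names below are invented for illustration, and the sketch only shows how different interleavings yield different conversations; it does not implement prepone or projection-join.

```python
from collections import deque
import itertools

# Toy version of the framework: peers are Mealy machines exchanging
# asynchronous messages via FIFO queues; a global watcher records each
# message as it is sent.  Transitions: (state, input) -> (next_state,
# output); None means epsilon (no input consumed / no output produced).
A = {("s0", None): ("s1", "order"),        # A spontaneously sends an order
     ("s1", "receipt"): ("s2", None)}      # ...then consumes the receipt
B = {("t0", None): ("t1", "promo"),        # B may send a promo first...
     ("t0", "order"): ("t2", "receipt"),   # ...or consume the order
     ("t1", "order"): ("t2", "receipt")}

def run(schedule):
    """Execute the peers under one interleaving; return the conversation."""
    state, queue = {"A": "s0", "B": "t0"}, {"A": deque(), "B": deque()}
    conversation = []
    for peer in schedule:
        machine, other = (A, "B") if peer == "A" else (B, "A")
        head = queue[peer][0] if queue[peer] else None
        key = (state[peer], head) if (state[peer], head) in machine \
              else (state[peer], None)
        if key in machine:
            if key[1] is not None:
                queue[peer].popleft()
            state[peer], out = machine[key]
            if out is not None:
                queue[other].append(out)
                conversation.append(out)   # the watcher observes the send
    return tuple(conversation)

# Different interleavings yield different conversations, e.g. both
# ('order', 'receipt') and ('promo', 'order', 'receipt'); prefixes of
# unfinished runs also appear.
print({run(s) for s in itertools.product("AB", repeat=6)})
```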

401 citations


Proceedings ArticleDOI
09 Jul 2003
TL;DR: This paper addresses the problem of integrating these two classes of networks to offer such seamless connectivity, describing two possible integration approaches, namely tight integration and loose integration, and advocating the latter as the preferred approach.
Abstract: The third-generation (3G) wide area wireless networks and 802.11 local area wireless networks possess complementary characteristics. 3G networks promise to offer always-on, ubiquitous connectivity with relatively low data rates. 802.11 offers much higher data rates, comparable to wired networks, but can cover only smaller areas, suitable for hot-spot applications in hotels and airports. The performance and flexibility of wireless data services would be dramatically improved if users could seamlessly roam across the two networks. In this paper, we address the problem of integration of these two classes of networks to offer such seamless connectivity. Specifically, we describe two possible integration approaches - namely tight integration and loose integration and advocate the latter as the preferred approach. Our realization of the loose integration approach consists of two components: a new network element called IOTA gateway deployed in 802.11 networks, and a new client software. The IOTA gateway is composed of several software modules, and with cooperation from the client software offers integrated 802.11/3G wireless data services that support seamless intertechnology mobility, Quality of Service (QoS) guarantees and multiprovider roaming agreements. We describe the design and implementation of the IOTA gateway and the client software in detail and present experimental performance results that validate our architectural approach.

399 citations


Journal ArticleDOI
TL;DR: This work generalizes the zero-forcing beamforming technique to the multiple receive antennas case and uses this as the baseline for the packet data throughput evaluation, and examines the long-term average throughputs that can be achieved using the proportionally fair scheduling algorithm.
Abstract: Recently, the capacity region of a multiple-input multiple-output (MIMO) Gaussian broadcast channel, with Gaussian codebooks and known-interference cancellation through dirty paper coding, was shown to equal the union of the capacity regions of a collection of MIMO multiple-access channels. We use this duality result to evaluate the system capacity achievable in a cellular wireless network with multiple antennas at the base station and multiple antennas at each terminal. Some fundamental properties of the rate region are exhibited and algorithms for determining the optimal weighted rate sum and the optimal covariance matrices for achieving a given rate vector on the boundary of the rate region are presented. These algorithms are then used in a simulation study to determine potential capacity enhancements to a cellular system through known-interference cancellation. We study both the circuit data scenario in which each user requires a constant data rate in every frame and the packet data scenario in which users can be assigned a variable rate in each frame so as to maximize the long-term average throughput. In the case of circuit data, the outage probability as a function of the number of active users served at a given rate is determined through simulations. For the packet data case, long-term average throughputs that can be achieved using the proportionally fair scheduling algorithm are determined. We generalize the zero-forcing beamforming technique to the multiple receive antennas case and use this as the baseline for the packet data throughput evaluation.
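For orientation, here is a minimal zero-forcing beamforming sketch for the single-receive-antenna special case (the baseline the paper generalizes to multiple receive antennas); the channel model and power split are our assumptions.

```python
import numpy as np

# Minimal zero-forcing beamforming for K single-antenna users served by an
# M-antenna base station (the single-receive-antenna special case; the paper
# generalizes this baseline to multiple receive antennas).
rng = np.random.default_rng(4)
M, K, P = 4, 4, 10.0                       # antennas, users, total power
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # pseudoinverse: H @ W = I
W = W / np.linalg.norm(W, axis=0, keepdims=True) # unit-norm beam per user
G = H @ W                                        # effective channel matrix

# Column scaling keeps G = H @ W diagonal, so inter-user interference is
# nulled; with an equal power split, user k sees SNR |G[k,k]|^2 * (P / K).
for k in range(K):
    leak = np.abs(np.delete(G[k], k)).max()
    print(f"user {k}: gain {abs(G[k, k]):.3f}, max leakage {leak:.2e}")
```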

Journal ArticleDOI
TL;DR: It is shown that the performance of recently proposed quasi-orthogonal space-time codes can be improved by phase-shifting the constellations of the symbols constituting the code; the optimal rotation increases the minimum distance of the corresponding space-time codewords.
Abstract: In this letter, we show that the performance of recently proposed quasi-orthogonal space-time codes can be improved by phase-shifting the constellations of the symbols constituting the code. The optimal rotation of the symbols increases the minimum distance of the corresponding space-time codewords, leading to substantially improved performance.
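The effect is easiest to see under the rank/determinant design criterion, since the Frobenius distance between quasi-orthogonal codewords is invariant to a common rotation of the second symbol pair. A sketch, assuming the ABBA-style quasi-orthogonal construction and QPSK (the letter's exact code and optimization may differ), scans the rotation angle and tracks the minimum |det| of codeword differences:

```python
import itertools
import numpy as np

# Minimum |det| of codeword differences for a quasi-orthogonal space-time
# code (ABBA construction) when the second symbol pair is drawn from a
# rotated QPSK constellation.  Construction and criterion are illustrative.
QPSK = np.exp(1j * np.pi * (2 * np.arange(4) + 1) / 4)

def alamouti(s1, s2):
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def codeword(s, phi):
    a = alamouti(s[0], s[1])
    b = alamouti(s[2] * np.exp(1j * phi), s[3] * np.exp(1j * phi))
    return np.block([[a, b], [b, a]])

for phi in np.linspace(0, np.pi / 2, 7):
    words = [codeword(s, phi) for s in itertools.product(QPSK, repeat=4)]
    dmin = min(abs(np.linalg.det(u - v))
               for u, v in itertools.combinations(words, 2))
    print(f"phi = {phi:5.3f} rad  min |det| = {dmin:.4f}")
```

The minimum determinant is zero at φ = 0 and becomes strictly positive at intermediate rotations, which is the mechanism behind the improved performance.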

Journal ArticleDOI
TL;DR: Analysis of the impact of antenna correlation, Ricean factors, polarization diversity, and out-of-cell interference on multiple-antenna capacity in the regime of low signal-to-noise ratio yields practical design lessons for arbitrary numbers of antennas in the transmit and receive arrays.
Abstract: This paper provides analytical characterizations of the impact on the multiple-antenna capacity of several important features that fall outside the standard multiple-antenna model, namely: (i) antenna correlation, (ii) Ricean factors, (iii) polarization diversity, and (iv) out-of-cell interference; all in the regime of low signal-to-noise ratio. The interplay of rate, bandwidth, and power is analyzed in the region of energy per bit close to its minimum value. The analysis yields practical design lessons for arbitrary number of antennas in the transmit and receive arrays.
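For background, the low-SNR regime is naturally phrased in the energy-per-bit framework, where, with capacity C(SNR) measured in nats and Ċ, C̈ its first two derivatives at SNR = 0,

```latex
\left.\frac{E_b}{N_0}\right|_{\min} = \frac{\ln 2}{\dot{C}(0)}, \qquad
S_0 = \frac{2\,[\dot{C}(0)]^{2}}{-\ddot{C}(0)};
```

correlation, Ricean factors, polarization, and interference all enter the analysis through these two quantities (the minimum energy per bit and the wideband slope).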

Proceedings ArticleDOI
09 Jul 2003
TL;DR: This paper identifies four different regions of TCP unfairness that depend on the buffer availability at the base station, with some regions exhibiting significant unfairness of over 10 in terms of throughput ratio between upstream and downstream TCP flows.
Abstract: As local area wireless networks based on the IEEE 802.11 standard see increasing public deployment, it is important to ensure that access to the network by different users remains fair. While fairness issues in 802.11 networks have been studied before, this paper is the first to focus on TCP fairness in 802.11 networks in the presence of both mobile senders and receivers. In this paper, we evaluate extensively through analysis, simulation, and experimentation the interaction between the 802.11 MAC protocol and TCP. We identify four different regions of TCP unfairness that depend on the buffer availability at the base station, with some regions exhibiting significant unfairness: throughput ratios between upstream and downstream TCP flows can exceed 10. We also propose a simple solution that can be implemented at the base station above the MAC layer that ensures that different TCP flows share the 802.11 bandwidth equitably irrespective of the buffer availability at the base station.

Journal ArticleDOI
TL;DR: Narrowband multiple-input multiple-output (MIMO) measurements using 16 transmitters and 16 receivers at 2.11 GHz were carried out in Manhattan; the base transmitter antennas were found to be largely uncorrelated even at separations as small as two wavelengths.
Abstract: Narrowband multiple-input-multiple-output (MIMO) measurements using 16 transmitters and 16 receivers at 2.11 GHz were carried out in Manhattan. High capacities were found for full, as well as smaller array configurations, all within 80% of the fully scattering channel capacity. Correlation model parameters are derived from data. Spatial MIMO channel capacity statistics are found to be well represented by the separate transmitter and receiver correlation matrices, with a median relative error in capacity of 3%, in contrast with the 18% median relative error observed by assuming the antennas to be uncorrelated. A reduced parameter model, consisting of 4 parameters, has been developed to statistically represent the channel correlation matrices. These correlation matrices are, in turn, used to generate H matrices with capacities that are consistent within a few percent of those measured in New York. The spatial channel model reported allows simulations of H matrices for arbitrary antenna configurations. These channel matrices may be used to test receiver algorithms in system performance studies. These results may also be used for antenna array design, as the decay of mobile antenna correlation with antenna separation has been reported here. An important finding for the base transmitter array was that the antennas were largely uncorrelated even at antenna separations as small as two wavelengths.
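The "separate transmitter and receiver correlation matrices" representation amounts to a Kronecker-style channel model. Below is a sketch of how such matrices can be used to synthesize H realizations and capacity statistics; the exponential correlation values are invented, not the Manhattan estimates.

```python
import numpy as np

# Synthesize MIMO channel realizations from separate transmit and receive
# correlation matrices (Kronecker-style model) and compare capacity
# statistics with the uncorrelated case.  Correlation values are invented.
rng = np.random.default_rng(5)
n, rho, trials = 16, 10.0, 2000

def exp_corr(n, r):
    idx = np.arange(n)
    return r ** np.abs(idx[:, None] - idx[None, :])

Rt_h = np.linalg.cholesky(exp_corr(n, 0.4))   # transmit correlation root
Rr_h = np.linalg.cholesky(exp_corr(n, 0.2))   # receive correlation root

def median_capacity(correlated):
    caps = []
    for _ in range(trials):
        G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        H = Rr_h @ G @ Rt_h.conj().T if correlated else G
        _, logdet = np.linalg.slogdet(np.eye(n) + (rho / n) * H @ H.conj().T)
        caps.append(logdet / np.log(2))
    return np.median(caps)

print(f"median capacity, correlated:   {median_capacity(True):6.1f} bits/s/Hz")
print(f"median capacity, uncorrelated: {median_capacity(False):6.1f} bits/s/Hz")
```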

Proceedings ArticleDOI
F. Zane, Girija Narlikar, A. Basu
09 Jul 2003
TL;DR: The proposed architectures and algorithms for making TCAM-based routing tables more power efficient are simple to implement, use commodity TCAMs, and provide worst-case power consumption guarantees (independent of routing table contents).
Abstract: Ternary content-addressable memories (TCAMs) are becoming very popular for designing high-throughput forwarding engines on routers: they are fast, cost-effective and simple to manage. However, a major drawback of TCAMs is their high power consumption. This paper presents architectures and algorithms for making TCAM-based routing tables more power efficient. The proposed architectures and algorithms are simple to implement, use commodity TCAMs, and provide worst-case power consumption guarantees (independent of routing table contents).
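The flavor of such schemes can be conveyed by a toy partitioned lookup: split the table into 2^k buckets keyed on the first k bits so that a search powers up only one bucket, replicating short prefixes where needed. This is only illustrative; the paper's architectures and worst-case guarantees are more sophisticated.

```python
# Toy illustration of TCAM power reduction by partitioning: prefixes are
# split into 2^K buckets keyed on their first K bits, so a lookup searches
# (powers up) only one bucket.  Short prefixes are replicated into every
# bucket they cover.  Table contents and K are invented.
K = 3
table = ["0*", "01*", "0101*", "1*", "110*", "11011*", "*"]

buckets = {i: [] for i in range(2 ** K)}
for prefix in table:
    bits = prefix.rstrip("*")
    if len(bits) >= K:                     # falls into exactly one bucket
        buckets[int(bits[:K], 2)].append(prefix)
    else:                                  # replicate into covered buckets
        span = 2 ** (K - len(bits))
        base = int(bits, 2) * span if bits else 0
        for i in range(base, base + span):
            buckets[i].append(prefix)

def lookup(addr_bits):
    b = buckets[int(addr_bits[:K], 2)]     # only this bucket is powered up
    matches = [p for p in b if addr_bits.startswith(p.rstrip("*"))]
    return max(matches, key=lambda p: len(p.rstrip("*")))  # longest match

print(lookup("01011010"))                  # -> 0101*
print(f"largest bucket: {max(len(b) for b in buckets.values())} entries "
      f"vs {len(table)} in a monolithic TCAM")
```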

Proceedings ArticleDOI
Sem Borst
09 Jul 2003
TL;DR: This paper shows that in certain cases the user-level performance may be evaluated by means of a multiclass Processor-Sharing model where the total service rate varies with the total number of users, and shows that, in the presence of channel variations, greedy, myopic strategies which maximize throughput in a static scenario may result in sub-optimal throughput performance for a dynamic user configuration and cause potential instability effects.
Abstract: Channel-aware scheduling strategies, such as the Proportional Fair algorithm for the CDMA 1xEV-DO system, provide an effective mechanism for improving throughput performance in wireless data networks by exploiting channel fluctuations. The performance of channel-aware scheduling algorithms has mostly been explored at the packet level for a static user population, often assuming infinite backlogs. In the present paper, we focus on the performance at the flow level in a dynamic setting with random finite-size service demands. We show that in certain cases the user-level performance may be evaluated by means of a multiclass Processor-Sharing model where the total service rate varies with the total number of users. The latter model provides explicit formulas for the distribution of the number of active users of the various classes, the mean response times, the blocking probabilities, and the mean throughput. In addition we show that, in the presence of channel variations, greedy, myopic strategies which maximize throughput in a static scenario may result in sub-optimal throughput performance for a dynamic user configuration and cause potential instability effects.
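For reference, the channel-aware rule in question (Proportional Fair, as in CDMA 1xEV-DO) serves in each slot the user with the largest ratio of instantaneous feasible rate to smoothed past throughput. A minimal sketch with invented rate statistics:

```python
import numpy as np

# Minimal Proportional Fair scheduler: in each time slot, serve the user
# maximizing (instantaneous feasible rate) / (smoothed past throughput).
# Rate statistics and the EWMA constant are invented for illustration.
rng = np.random.default_rng(6)
users, slots, tc = 4, 10_000, 1000            # tc: EWMA time constant
mean_rate = np.array([1.0, 1.0, 2.0, 4.0])    # heterogeneous channels

T = np.full(users, 1e-3)                      # smoothed throughputs
served = np.zeros(users)
for _ in range(slots):
    r = rng.exponential(mean_rate)            # instantaneous feasible rates
    k = int(np.argmax(r / T))                 # PF selection rule
    served[k] += r[k]
    T = (1 - 1 / tc) * T
    T[k] += r[k] / tc                         # EWMA throughput update

print("long-run throughput per user:", np.round(served / slots, 3))
```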

Journal ArticleDOI
S. ten Brink, Gerhard Kramer
TL;DR: Extrinsic information transfer (EXIT) charts are used to design systematic and nonsystematic repeat-accumulate (RA) codes for iterative detection and decoding; the resulting codes are shown to operate close to capacity.
Abstract: Extrinsic information transfer (EXIT) charts are used to design systematic and nonsystematic repeat-accumulate (RA) codes for iterative detection and decoding. The convergence problems of nonsystematic RA codes are solved by introducing a biregular, or doped, layer of check nodes. As examples, such nonsystematic codes are designed for multi-input/multi-output (MIMO) fading channels and are shown to operate close to capacity.
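EXIT charts track how the mutual information between code bits and LLR messages grows across iterations. A Monte Carlo sketch of a single transfer curve, assuming a degree-d_v repetition (variable) node and consistent-Gaussian a priori LLRs under the all-(+1) convention; this illustrates the chart machinery, not the paper's RA code designs.

```python
import numpy as np

# Monte Carlo sketch of one EXIT transfer curve: a degree-dv variable
# (repetition) node with consistent-Gaussian a priori LLRs, all-(+1)
# transmission convention.
rng = np.random.default_rng(7)
dv, samples = 3, 200_000

def mutual_info(llr):
    # I(X; L) for symmetric LLR densities under the all-(+1) convention.
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-llr)))

for sigma in [0.5, 1.0, 1.5, 2.0, 3.0]:
    # Consistent Gaussian a priori LLRs: mean sigma^2/2, variance sigma^2.
    La = sigma**2 / 2 + sigma * rng.standard_normal((samples, dv - 1))
    Ia = mutual_info(La[:, 0])
    Le = La.sum(axis=1)            # extrinsic = sum of the other inputs
    print(f"I_A = {Ia:.3f}  ->  I_E = {mutual_info(Le):.3f}")
```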

Proceedings ArticleDOI
09 Jul 2003
TL;DR: It is demonstrated that in the case of asymmetric traffic distribution, where load imbalance is most pronounced, significant throughput gains can be obtained while the gains in the symmetric case are modest.
Abstract: Third generation code-division multiple access (CDMA) systems propose to provide packet data service through a high speed shared channel with intelligent and fast scheduling at the base-stations. In the current approach base-stations schedule independently of other base-stations. We consider scheduling schemes in which scheduling decisions are made jointly for a cluster of cells thereby enhancing performance through interference avoidance and dynamic load balancing. We consider algorithms that assume complete knowledge of the channel quality information from each of the base-stations to the terminals at the centralized scheduler as well as a two-tier scheduling strategy that assumes only the knowledge of the long term channel conditions at the centralized scheduler. We demonstrate that in the case of asymmetric traffic distribution, where load imbalance is most pronounced, significant throughput gains can be obtained while the gains in the symmetric case are modest. Since the load balancing is achieved through centralized scheduling, our scheme can adapt to time-varying traffic patterns dynamically.

Journal ArticleDOI
TL;DR: Some of the most basic architectural superstructures for wireless links with multiple antennas: M at the transmit site and N at the receive site are discussed, and those structures that can be composed using spatially one dimensional coders and decoders are emphasized.
Abstract: In this paper, we discuss some of the most basic architectural superstructures for wireless links with multiple antennas: M at the transmit site and N at the receive site. Toward leveraging the gains of the last half century of coding theory, we emphasize those structures that can be composed using spatially one dimensional coders and decoders. These structures are investigated primarily under a probability of outage constraint. The random matrix channel is assumed to hold steady for such a large number of M-dimensional vector symbol transmission times that an infinite time horizon Shannon analysis provides useful insights. The resulting extraordinary capacities are contrasted for architectures that differ in the way that they manage self-interference in the presence of additive receiver noise. A universally optimal architecture with a diagonal space-time layering is treated, as is an architecture with horizontal space-time layering and an architecture with a single outer code. Some capacity asymptotes for large numbers of antennas are also included. Some results for frequency selective channels are presented: It is only necessary to feed back M rates, one per transmit antenna, to attain capacity. Also, capacity of an (M,N) link is, in a certain sense, invariant with respect to signaling format.

Journal ArticleDOI
TL;DR: An achievable region for the L-channel multiple description coding problem is presented and a new outer bound on the rate-distortion (RD) region for memoryless Gaussian sources with mean squared error distortion is derived.
Abstract: An achievable region for the L-channel multiple description coding problem is presented. This region generalizes two-channel results of El Gamal and Cover (1982) and of Zhang and Berger (1987). It further generalizes three-channel results of Gray and Wyner (1974) and of Zhang and Berger. A source that is successively refinable on chains is shown to be successively refinable on trees. A new outer bound on the rate-distortion (RD) region for memoryless Gaussian sources with mean squared error distortion is also derived. The achievable region meets this outer bound for certain symmetric cases.
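For orientation, the two-channel El Gamal-Cover region being generalized can be stated informally as follows: rates (R_1, R_2) are achievable with distortions (D_0, D_1, D_2) if there exist reproductions X̂_0, X̂_1, X̂_2 meeting the distortion constraints such that

```latex
R_1 > I(X;\hat{X}_1), \qquad R_2 > I(X;\hat{X}_2), \qquad
R_1 + R_2 > I(X;\hat{X}_0,\hat{X}_1,\hat{X}_2) + I(\hat{X}_1;\hat{X}_2).
```

This is background only; see the paper for the precise L-channel formulation.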

Journal ArticleDOI
TL;DR: This work proposes a family of space-time codes that are especially designed for the case of four-transmitter antennas and that are shown to allow the attainment of a significant fraction of the open-loop Shannon capacity of the (4,1) channel.
Abstract: The design of space-time codes that are capable of approaching the capacity of multiple-input-single-output (MISO) antenna systems is a challenging problem, yet one of high practical importance. While a remarkably simple scheme of Alamouti (1998) is capable of attaining the channel capacity in the case of two-transmitter and one-receiver antennas (2,1), no such schemes are known for the case of more than two transmitter antennas. We propose a family of space-time codes that are especially designed for the case of four-transmitter antennas and that are shown to allow the attainment of a significant fraction of the open-loop Shannon capacity of the (4,1) channel.

Journal ArticleDOI
06 Jun 2003
TL;DR: This work introduces minimally invasive multiphoton fluorescence microendoscopes (350-1000 μm in diameter) based on compound microlenses and demonstrates multiphoton endoscopy in live animals by visualizing rodent hippocampal neurons and dendrites.

Abstract: We introduce minimally invasive multiphoton fluorescence microendoscopes (350-1000 μm in diameter) based on compound microlenses. We demonstrate multiphoton endoscopy in live animals by visualizing rodent hippocampal neurons and dendrites.

Journal ArticleDOI
TL;DR: GGobi is a direct descendant of a data visualization system called XGobi that has been around since the early 1990s, and its new features include multiple plotting windows, a color lookup table manager, and an Extensible Markup Language file format for data.

Journal ArticleDOI
TL;DR: This work proposes a very simple and efficient algorithm that reduces the complexity of a layered space-time wireless system by a factor of M.
Abstract: Bell Laboratories layered space-time (BLAST) wireless systems are multiple-antenna communication schemes that can achieve very high spectral efficiencies in scattering environments with no increase in bandwidth or transmitted power. The most popular and, by far, the most practical architecture is the so-called vertical BLAST (V-BLAST). The signal detection algorithm of a V-BLAST system is computationally very intensive. If the number of transmitters is M and is equal to the number of receivers, this complexity is proportional to M^4 at each sample time. We propose a very simple and efficient algorithm that reduces the complexity by a factor of M.
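The O(M^4)-per-sample cost of the standard detector comes from recomputing a pseudoinverse at each of the M layers. A sketch of that baseline ordered successive interference cancellation (nulling, ordering, slicing, cancelling), assuming BPSK and a square system; the paper's fast algorithm itself is not reproduced here.

```python
import numpy as np

# Baseline V-BLAST detection (ordered successive interference cancellation):
# at each of the M layers, recompute the pseudoinverse of the deflated
# channel; this is the O(M^4)-per-sample cost the paper reduces.
rng = np.random.default_rng(8)
M, sigma = 4, 0.1
H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
s = rng.choice([-1.0, 1.0], M)                     # BPSK symbols
y = H @ s + sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

Hw, active, s_hat = H.copy(), list(range(M)), np.zeros(M)
while active:
    G = np.linalg.pinv(Hw)                         # recomputed every layer
    k = int(np.argmin(np.linalg.norm(G, axis=1)))  # strongest stream first
    idx = active[k]
    s_hat[idx] = 1.0 if (G[k] @ y).real >= 0 else -1.0  # slice to BPSK
    y = y - Hw[:, k] * s_hat[idx]                  # cancel its contribution
    Hw = np.delete(Hw, k, axis=1)                  # deflate the channel
    active.pop(k)

print("sent:    ", s)
print("detected:", s_hat, " errors:", int(np.sum(s != s_hat)))
```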

Book ChapterDOI
08 Jan 2003
TL;DR: This work characterizes the expressive power of these language fragments in terms of both logics and tree patterns, and investigates closure properties, focusing on the ability to perform basic Boolean operations while remaining within the fragment.
Abstract: We study structural properties of each of the main sublanguages of XPath [8] commonly used in practice. First, we characterize the expressive power of these language fragments in terms of both logics and tree patterns. Second, we investigate closure properties, focusing on the ability to perform basic Boolean operations while remaining within the fragment. We give a complete picture of the closure properties of these fragments, treating XPath expressions both as functions of arbitrary nodes in a document tree, and as functions that are applied only at the root of the tree. Finally, we provide sound and complete axiom systems and normal forms for several of these fragments. These results are useful for simplification of XPath expressions and optimization of XML queries.

Journal ArticleDOI
TL;DR: In this article, a microelectromechanical system-based beam steering optical crossconnect switch core with port count exceeding 1100 was presented, featuring mean fiber-to-fiber insertion loss of 2.1 dB and maximum insertion loss 4.0 dB across all possible connections.
Abstract: We present a microelectromechanical systems-based beam steering optical crossconnect switch core with port count exceeding 1100, featuring mean fiber-to-fiber insertion loss of 2.1 dB and maximum insertion loss of 4.0 dB across all possible connections. The challenge of efficient measurement and optimization of all possible connections was met by an automated testing facility. The resulting connections feature optical loss stability of better than 0.2 dB over days, without any feedback control under normal laboratory conditions.

Journal ArticleDOI
Yiteng Huang, Jacob Benesty
TL;DR: Simulations show that the frequency-domain adaptive approaches perform as well as or better than their time-domain counterparts and the cross-relation (CR) batch method in most practical cases.
Abstract: We extend our previous studies on adaptive blind channel identification from the time domain into the frequency domain. A class of frequency-domain adaptive approaches, including the multichannel frequency-domain LMS (MCFLMS) and constrained/unconstrained normalized multichannel frequency-domain LMS (NMCFLMS) algorithms, are proposed. By utilizing the fast Fourier transform (FFT) and overlap-save techniques, the convolution and correlation operations that are computationally intensive when performed by the time-domain multichannel LMS (MCLMS) or multichannel Newton (MCN) methods are efficiently implemented in the frequency domain, and the MCFLMS is rigorously derived. In order to achieve independent and uniform convergence for each filter coefficient and, therefore, accelerate the overall convergence, the coefficient updates are properly normalized at each iteration, and the NMCFLMS algorithms are developed. Simulations show that the frequency-domain adaptive approaches perform as well as or better than their time-domain counterparts and the cross-relation (CR) batch method in most practical cases. It is remarkable that for a three-channel acoustic system with long impulse responses (256 taps in each channel) excited by a male speech signal, only the proposed NMCFLMS algorithm succeeds in determining a reasonably accurate channel estimate, which is good enough for applications such as time delay estimation.
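The cross-relation behind this family of algorithms: for a common input s, x_1 * h_2 = x_2 * h_1 (both equal s * h_1 * h_2), so the error e = x_1 * ĥ_2 - x_2 * ĥ_1 can drive an adaptive update, with a unit-norm constraint excluding the trivial zero solution. A time-domain sketch (the paper's frequency-domain NMCFLMS is a fast normalized variant and is not reproduced here):

```python
import numpy as np

# Time-domain sketch of adaptive blind channel identification via the
# cross-relation: x1*h2 = x2*h1, so e = x1*h2_hat - x2*h1_hat -> 0 at the
# true channels (up to scale).  A unit-norm constraint avoids the trivial
# zero solution.  Channel lengths, step size, and signals are invented.
rng = np.random.default_rng(9)
L, n_samp, mu = 8, 20_000, 0.01
h1, h2 = rng.standard_normal(L), rng.standard_normal(L)
s = rng.standard_normal(n_samp)
x1, x2 = np.convolve(s, h1), np.convolve(s, h2)

h = rng.standard_normal(2 * L)
h /= np.linalg.norm(h)                        # h = [h1_hat, h2_hat]
for t in range(L, n_samp):
    u1, u2 = x1[t - L:t][::-1], x2[t - L:t][::-1]
    e = u1 @ h[L:] - u2 @ h[:L]               # cross-relation error
    grad = np.concatenate([-u2, u1]) * e      # gradient of e^2 (up to 2x)
    h -= mu * grad / (u1 @ u1 + u2 @ u2 + 1e-6)   # normalized LMS step
    h /= np.linalg.norm(h)                    # project back to unit norm

true = np.concatenate([h1, h2]); true /= np.linalg.norm(true)
print("alignment |<h_hat, h_true>| =", round(abs(h @ true), 4))
```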

Journal ArticleDOI
Hoon Kim, Alan H. Gnauck
TL;DR: In this article, the performance degradation of differential phase-shift-keying transmission systems due to nonlinear phase noise was observed and studied in a 600-km nonzero dispersion-shifted fiber (NZDSF) link.
Abstract: It is known that amplitude fluctuations caused by amplified spontaneous emission noise can be converted into phase noise by Kerr nonlinearity, and this nonlinear phase noise can limit the performance of phase-shift-keying transmission systems. In this letter, we experimentally observe and study the performance degradation of differential phase-shift-keying transmission systems due to nonlinear phase noise. In order to clearly observe the effect, we intentionally add ASE noise to the DPSK signal at the transmitter, and then transmit the signal over a 600-km nonzero dispersion-shifted fiber (NZDSF) link. The results show that the probability density function of nonlinear phase noise deviates from the Gaussian distribution, and this characteristic negates the benefit of a balanced receiver.
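The conversion mechanism (the Gordon-Mollenauer effect) can be stated compactly: after N amplified spans, the accumulated nonlinear phase is roughly

```latex
\phi_{\mathrm{NL}} = \gamma L_{\mathrm{eff}} \sum_{k=1}^{N} \Big| E + \sum_{i=1}^{k} n_i \Big|^{2},
```

where γ is the fiber nonlinearity coefficient, L_eff the effective length per span, E the signal field, and n_i the ASE added at amplifier i; the squared magnitudes convert additive amplitude noise into phase noise with distinctly non-Gaussian statistics, which is what the measurements in this letter exhibit.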

Journal ArticleDOI
TL;DR: Binary fingerprinting codes secure against size-t coalitions are presented which enable the distributor (decoder) to recover at least one of the users from the coalition with probability of error exp(-Ω(n)) for M = exp(Ω(n)).

Abstract: We consider a general fingerprinting problem of digital data under which coalitions of users can alter or erase some bits in their copies in order to create an illegal copy. Each user is assigned a fingerprint which is a word in a fingerprinting code of size M (the total number of users) and length n. We present binary fingerprinting codes secure against size-t coalitions which enable the distributor (decoder) to recover at least one of the users from the coalition with probability of error exp(-Ω(n)) for M = exp(Ω(n)). This is an improvement over the best known schemes that provide the error probability no better than exp(-Ω(n^{1/2})) and for this probability support at most exp(O(n^{1/2})) users. The construction complexity of codes is polynomial in n. We also present versions of these constructions that afford identification algorithms of complexity poly(n) = polylog(M), improving over the best previously known complexity of Ω(M). For the case t = 2, we construct codes of exponential size with even stronger performance, namely, for which the distributor can either recover both users from the coalition with probability 1 - exp(-Ω(n)), or identify one traitor with probability 1.