
Showing papers by "Bell Labs" published in 2006


Journal ArticleDOI
TL;DR: This paper relates the general Volterra representation to the classical Wiener, Hammerstein, Wiener-Hammerstein, and parallel Wiener structures, describes some state-of-the-art predistortion models based on memory polynomials, and proposes a new generalized memory polynomial that achieves the best performance to date.
Abstract: Conventional radio-frequency (RF) power amplifiers operating with wideband signals, such as wideband code-division multiple access (WCDMA) in the Universal Mobile Telecommunications System (UMTS), must be backed off considerably from their peak power level in order to control out-of-band spurious emissions, also known as "spectral regrowth." Adapting these amplifiers to wideband operation therefore entails larger size and higher cost than would otherwise be required for the same power output. An alternative solution, which is gaining widespread popularity, is to employ digital baseband predistortion ahead of the amplifier to compensate for the nonlinearity effects, hence allowing it to run closer to its maximum output power while maintaining low spectral regrowth. Recent improvements to the technique have included memory effects in the predistortion model, which are essential as the bandwidth increases. In this paper, we relate the general Volterra representation to the classical Wiener, Hammerstein, Wiener-Hammerstein, and parallel Wiener structures, and go on to describe some state-of-the-art predistortion models based on memory polynomials. We then propose a new generalized memory polynomial that achieves the best performance to date, as demonstrated herein with experimental results obtained from a testbed using an actual 30-W, 2-GHz power amplifier.
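To make the model class concrete, here is a minimal sketch of a plain memory-polynomial predistorter in Python; the paper's generalized memory polynomial additionally includes cross-terms between the signal and lagging/leading envelope samples. The variable names and toy coefficients are our own illustration, not the authors' implementation.

```python
import numpy as np

def memory_polynomial(x, a):
    """Baseband memory-polynomial model:
       y(n) = sum_{k=1..K} sum_{q=0..Q} a[k-1, q] * x(n-q) * |x(n-q)|^(k-1)
    x: complex baseband samples; a: (K, Q+1) complex coefficient matrix."""
    K, Q1 = a.shape
    y = np.zeros_like(x, dtype=complex)
    for q in range(Q1):
        # delayed copy x(n-q), zero-padded at the start
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:len(x) - q]])
        for k in range(1, K + 1):
            y += a[k - 1, q] * xq * np.abs(xq) ** (k - 1)
    return y

# Toy usage: a mild 3rd-order nonlinearity with one tap of memory (illustrative values).
x = np.exp(1j * 2 * np.pi * np.random.rand(8))
a = np.array([[1.0, 0.05], [0.0, 0.0], [-0.1, 0.01]], dtype=complex)  # K=3, Q=1
print(np.round(memory_polynomial(x, a), 3))
```

In a predistortion application, the coefficients a would be fitted (e.g., by least squares) so that the cascade of this model and the amplifier is approximately linear.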

1,305 citations


Journal ArticleDOI
TL;DR: This article considers network coordination as a means to provide spectrally efficient communications in cellular downlink systems and describes how the antenna outputs are chosen in ways to minimize the out-of-cell interference, and hence to increase the downlink system capacity.
Abstract: In this article we consider network coordination as a means to provide spectrally efficient communications in cellular downlink systems. When network coordination is employed, all base antennas act together as a single network antenna array, and each mobile may receive useful signals from nearby base stations. Furthermore, the antenna outputs are chosen in ways that minimize the out-of-cell interference, and hence increase the downlink system capacity. When the out-of-cell interference is mitigated, the links can operate in the high signal-to-noise ratio regime. This enables the cellular network to enjoy the great spectral efficiency improvement associated with using multiple antennas.
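A minimal way to state the idea, in our notation rather than the article's: stack the channels from all coordinated base antennas to all users into one matrix and precode across the whole network, for example with zero forcing:

```latex
\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \qquad
\mathbf{x} = \mathbf{H}^{\dagger}\left(\mathbf{H}\mathbf{H}^{\dagger}\right)^{-1}\mathbf{s}
\;\Rightarrow\; \mathbf{y} = \mathbf{s} + \mathbf{n}
```

Here H collects the links from every coordinated base antenna to every user, so out-of-cell interference is nulled by construction; per-base-station power constraints, which a real design must respect, are omitted from this sketch.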

1,074 citations


Journal ArticleDOI
TL;DR: Simulations show that adding gossiping to AODV results in significant performance improvement, even in networks as small as 150 nodes, and suggest that the improvement should be even more significant in larger networks.
Abstract: Many ad hoc routing protocols are based on some variant of flooding. Despite various optimizations of flooding, many routing messages are propagated unnecessarily. We propose a gossiping-based approach, where each node forwards a message with some probability, to reduce the overhead of the routing protocols. Gossiping exhibits bimodal behavior in sufficiently large networks: in some executions, the gossip dies out quickly and hardly any node gets the message; in the remaining executions, a substantial fraction of the nodes gets the message. The fraction of executions in which most nodes get the message depends on the gossiping probability and the topology of the network. In the networks we have considered, using gossiping probability between 0.6 and 0.8 suffices to ensure that almost every node gets the message in almost every execution. For large networks, this simple gossiping protocol uses up to 35% fewer messages than flooding, with improved performance. Gossiping can also be combined with various optimizations of flooding to yield further benefits. Simulations show that adding gossiping to AODV results in significant performance improvement, even in networks as small as 150 nodes. Our results suggest that the improvement should be even more significant in larger networks.
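A sketch of the basic gossip primitive, in our simplified form (plain probabilistic forwarding on first receipt; the paper also studies refinements such as always flooding the first few hops):

```python
import random
from collections import deque

def gossip(adj, source, p):
    """Probabilistic flooding: each node, on first receiving the message,
    rebroadcasts it with probability p (the source always sends)."""
    received = {source}
    q = deque([source])
    while q:
        u = q.popleft()
        if u == source or random.random() < p:
            for v in adj.get(u, []):
                if v not in received:
                    received.add(v)
                    q.append(v)
    return received

# Toy 4-node ring; with p between 0.6 and 0.8 most runs reach every node.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(len(gossip(adj, 0, 0.7)))
```

Running this many times over a large random topology reproduces the bimodal behavior described above: executions either die out early or reach most of the network.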

828 citations


Journal ArticleDOI
05 Jun 2006
TL;DR: This paper discusses the generation and detection of multigigabit/s intensity- and phase-modulated formats, and highlights their resilience to key impairments found in optical networking, such as optical amplifier noise, multipath interference, chromatic dispersion, and polarization-mode dispersion.
Abstract: Fiber-optic communication systems form the high-capacity transport infrastructure that enables global broadband data services and advanced Internet applications. The desire for higher per-fiber transport capacities and, at the same time, the drive for lower costs per end-to-end transmitted information bit has led to optically routed networks with high spectral efficiencies. Among other enabling technologies, advanced optical modulation formats have become key to the design of modern wavelength division multiplexed (WDM) fiber systems. In this paper, we review optical modulation formats in the broader context of optically routed WDM networks. We discuss the generation and detection of multigigabit/s intensity- and phase-modulated formats, and highlight their resilience to key impairments found in optical networking, such as optical amplifier noise, multipath interference, chromatic dispersion, polarization-mode dispersion, WDM crosstalk, concatenated optical filtering, and fiber nonlinearity.

772 citations


Journal ArticleDOI
TL;DR: A solution is developed that optimizes the overall network throughput subject to fairness constraints on allocation of scarce wireless capacity among mobile clients, and the performance of the algorithms is within a constant factor of that of any optimal algorithm for the joint channel assignment and routing problem.
Abstract: Multihop infrastructure wireless mesh networks offer increased reliability, coverage, and reduced equipment costs over their single-hop counterpart, wireless local area networks. Equipping wireless routers with multiple radios further improves the capacity by transmitting over multiple radios simultaneously using orthogonal channels. Efficient channel assignment and routing is essential for throughput optimization of mesh clients. Efficient channel assignment schemes can greatly relieve the interference effect of close-by transmissions; effective routing schemes can alleviate potential congestion on any gateways to the Internet, thereby improving per-client throughput. Unlike previous heuristic approaches, we mathematically formulate the joint channel assignment and routing problem, taking into account the interference constraints, the number of channels in the network, and the number of radios available at each mesh router. We then use this formulation to develop a solution for our problem that optimizes the overall network throughput subject to fairness constraints on allocation of scarce wireless capacity among mobile clients. We show that the performance of our algorithms is within a constant factor of that of any optimal algorithm for the joint channel assignment and routing problem. Our evaluation demonstrates that our algorithm can effectively exploit the increased number of channels and radios, and it performs much better than the theoretical worst-case bounds.
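The flavor of such a formulation, in heavily simplified form and our own notation (flow conservation plus an interference budget; the paper's actual program also encodes channel counts and per-router radio limits):

```latex
\max\ \lambda \quad \text{s.t.} \quad
\sum_{e \in \delta^{+}(v)} f_e \;-\; \sum_{e \in \delta^{-}(v)} f_e \;=\; \lambda\, d_v \quad \forall v,
\qquad
\sum_{e' \in I(e)} \frac{f_{e'}}{c_{e'}} \;\le\; 1 \quad \forall e,
\qquad f_e \ge 0
```

Here λ scales every client's demand d_v equally (the fairness constraint), f_e is the flow on link e, c_e its transmission rate, and I(e) the set of links that interfere with e; maximizing λ maximizes throughput subject to fair allocation.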

679 citations


Journal ArticleDOI
TL;DR: This paper studies the quantitative performance behavior of the Wiener filter in the context of noise reduction and shows that in the single-channel case the a posteriori signal-to-noise ratio (SNR) is greater than or equal to the a priori SNR (defined before the Wiener filter), indicating that the Wiener filter is always able to achieve noise reduction.
Abstract: The problem of noise reduction has attracted a considerable amount of research attention over the past several decades. Among the numerous techniques that were developed, the optimal Wiener filter can be considered as one of the most fundamental noise reduction approaches, which has been delineated in different forms and adopted in various applications. Although it is not a secret that the Wiener filter may cause some detrimental effects to the speech signal (appreciable or even significant degradation in quality or intelligibility), few efforts have been reported to show the inherent relationship between noise reduction and speech distortion. By defining a speech-distortion index to measure the degree to which the speech signal is deformed and two noise-reduction factors to quantify the amount of noise being attenuated, this paper studies the quantitative performance behavior of the Wiener filter in the context of noise reduction. We show that in the single-channel case the a posteriori signal-to-noise ratio (SNR) (defined after the Wiener filter) is greater than or equal to the a priori SNR (defined before the Wiener filter), indicating that the Wiener filter is always able to achieve noise reduction. However, the amount of noise reduction is in general proportional to the amount of speech degradation. This may seem discouraging as we always expect an algorithm to have maximal noise reduction without much speech distortion. Fortunately, we show that speech distortion can be better managed in three different ways. If we have some a priori knowledge (such as the linear prediction coefficients) of the clean speech signal, this a priori knowledge can be exploited to achieve noise reduction while maintaining a low level of speech distortion. When no a priori knowledge is available, we can still achieve a better control of noise reduction and speech distortion by properly manipulating the Wiener filter, resulting in a suboptimal Wiener filter. If we have multiple microphone sensors, the multiple observations of the speech signal can be used to reduce noise with less or even no speech distortion.
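For reference, the frequency-domain form of the noncausal Wiener gain, in standard textbook notation rather than the paper's exact formulation:

```latex
H_{W}(\omega) \;=\; \frac{\phi_{ss}(\omega)}{\phi_{ss}(\omega) + \phi_{vv}(\omega)}
\;=\; \frac{\mathrm{SNR}(\omega)}{1 + \mathrm{SNR}(\omega)}
```

The gain attenuates most where the local SNR is lowest, which is what raises the fullband a posteriori SNR while simultaneously distorting the speech: exactly the trade-off the paper quantifies.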

563 citations


Proceedings ArticleDOI
Thomas L. Marzetta
01 Oct 2006
TL;DR: In this paper, the authors assume that the base station derives its channel estimate from τ_rp pilot symbols which the terminals transmit on the reverse link, and they determine the optimum number of terminals to serve and the optimum number of reverse pilot symbols to employ by choosing these parameters to maximize a lower bound on the net sum-throughput.
Abstract: An M-element antenna array (the base station) transmits, on the downlink, K ≤ M sequences of QAM symbols selectively and simultaneously to K autonomous single-antenna terminals through a linear pre-coder that is the pseudo-inverse of an estimate of the forward channel matrix. We assume time-division duplex (TDD) operation, so the base station derives its channel estimate from τ_rp pilot symbols which the terminals transmit on the reverse link. A coherence interval of T symbols is expended as follows: τ_rp reverse pilot symbols, one symbol for computations, and (T − 1 − τ_rp) forward QAM symbols for each terminal. For a given coherence interval, number of base station antennas, and forward and reverse SINRs, we determine the optimum number of terminals to serve simultaneously and the optimum number of reverse pilot symbols to employ by choosing these parameters to maximize a lower bound on the net sum-throughput. The lower bound rigorously accounts for channel estimation error, and is valid for all SINRs. Surprisingly, it is always advantageous to increase the number of base station antennas, even when the reverse SINR is low and the channel estimate poor: greater numbers of antennas enable us to climb out of the noise and to serve more terminals. Even within short coherence intervals (T = 10 symbols) and with low SINRs (−10.0 dB reverse, 0.0 dB forward), given large numbers of base station antennas (M ≥ 16) it is both feasible and advantageous to learn the channel and to serve a multiplicity of terminals simultaneously.
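A toy numerical sketch of the pre-coder itself (Python; the dimensions and the perfect-estimate shortcut are our illustrative assumptions, and channel estimation error, which the paper's bound accounts for, is omitted):

```python
import numpy as np

M, K = 16, 4                                   # base-station antennas, single-antenna terminals
H = (np.random.randn(K, M) + 1j * np.random.randn(K, M)) / np.sqrt(2)  # forward channel
H_hat = H                                      # pretend the reverse-link pilot estimate is perfect
A = np.linalg.pinv(H_hat)                      # M x K pre-coder: pseudo-inverse of the estimate
s = (np.sign(np.random.randn(K)) + 1j * np.sign(np.random.randn(K))) / np.sqrt(2)  # QPSK symbols
x = A @ s
x /= np.linalg.norm(x)                         # unit total transmit power
y = H @ x                                      # noiseless receive: each terminal sees its own symbol
print(np.round(y * np.linalg.norm(A @ s), 3))  # undo the power scaling; recovers s exactly
```

With an imperfect estimate, H @ A is no longer the identity and residual inter-terminal interference appears; trading pilot symbols against that interference is precisely the optimization the paper carries out.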

546 citations


Journal ArticleDOI
TL;DR: This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions, and admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution and allows retaining some of its intuition.
Abstract: The mutual information of independent parallel Gaussian-noise channels is maximized, under an average power constraint, by independent Gaussian inputs whose power is allocated according to the waterfilling policy. In practice, discrete signaling constellations with limited peak-to-average ratios (m-PSK, m-QAM, etc.) are used in lieu of the ideal Gaussian signals. This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions. Such a policy admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution and allows retaining some of its intuition. The relationship between mutual information of Gaussian channels and nonlinear minimum mean-square error (MMSE) proves key to solving the power allocation problem.
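For the Gaussian-input special case, the policy reduces to classic waterfilling, sketched below in Python; the paper's mercury/waterfilling generalization replaces the 1/g term with a correction computed from the MMSE of the actual constellation.

```python
import numpy as np

def waterfill(g, P, iters=60):
    """Classic waterfilling: p_i = max(0, mu - 1/g_i) with sum(p_i) = P,
    found by bisection on the water level mu. g: channel gains, P: power budget."""
    lo, hi = 0.0, P + 1.0 / g.min()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > P:
            hi = mu          # water level too high: pouring more than P
        else:
            lo = mu
    return np.maximum(0.0, mu - 1.0 / g)

print(waterfill(np.array([2.0, 1.0, 0.25]), 3.0))  # strongest channel gets the most power
```

In this example the weakest channel (gain 0.25) receives no power at all, illustrating the threshold behavior that mercury/waterfilling reshapes for discrete constellations.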

542 citations


Journal ArticleDOI
TL;DR: The generation and detection of multigigabit/second intensity- and phase-modulated formats are reviewed to highlight their resilience to key impairments found in optical networking, such as optical amplifier noise, chromatic dispersion, and polarization-mode dispersion.
Abstract: Advanced optical modulation formats have become a key ingredient to the design of modern wavelength-division-multiplexed (WDM) optically routed networks. In this paper, we review the generation and detection of multigigabit/second intensity- and phase-modulated formats and highlight their resilience to key impairments found in optical networking, such as optical amplifier noise, chromatic dispersion, polarization-mode dispersion, WDM crosstalk, concatenated optical filtering, and fiber nonlinearity.

490 citations


Journal ArticleDOI
31 Jul 2006
TL;DR: It is observed that CCT mutes intercell interference enough that the enormous spectral efficiency improvement associated with using multiple antennas in isolated communication links occurs for the base-to-user links in a cellular network as well.
Abstract: Intercell interference limits the capacity of wireless networks. To mitigate this interference, we explore coherently coordinated transmission (CCT) from multiple base stations to each user. To treat users fairly, we explore equal rate (ER) networks. We evaluate the downlink network efficiency of CCT as compared to serving each user with single base transmission (SBT), with a separate base uniquely assigned to each user. Efficiency of ER networks is measured as total network throughput relative to the number of network antennas at 10% user outage. Efficiency is compared relative to the baseline of single base transmission with power control (ER-SBT), where base antenna transmissions are not coordinated and, apart from power control and the assignment of 10% of the users to outage, nothing is done to mitigate interference. We control the transmit power of ER systems to maximize the common rate for ER-SBT, ER-CCT based on zero forcing, and ER-CCT employing dirty paper coding. We do so for (number of transmit antennas per base, number of receive antennas per user) equal to (1,1), (2,2), and (4,4). We observe that CCT mutes intercell interference enough that the enormous spectral efficiency improvement associated with using multiple antennas in isolated communication links occurs for the base-to-user links in a cellular network as well.

396 citations


Journal ArticleDOI
TL;DR: It is shown that the time occupied in frequency-duplex CSI transfer is generally less than one might expect and falls as the number of antennas increases, and the advantages of having more antennas at the base station extend from having network gains to learning the channel information.
Abstract: Knowledge of accurate and timely channel state information (CSI) at the transmitter is becoming increasingly important in wireless communication systems. While it is often assumed that the receiver (whether base station or mobile) needs to know the channel for accurate power control, scheduling, and data demodulation, it is now known that the transmitter (especially the base station) can also benefit greatly from this information. For example, recent results in multiantenna multiuser systems show that large throughput gains are possible when the base station uses multiple antennas and a known channel to transmit distinct messages simultaneously and selectively to many single-antenna users. In time-division duplex systems, where the base station and mobiles share the same frequency band for transmission, the base station can exploit reciprocity to obtain the forward channel from pilots received over the reverse channel. Frequency-division duplex systems are more difficult because the base station transmits and receives on different frequencies and therefore cannot use the received pilot to infer anything about the multiantenna transmit channel. Nevertheless, we show that the time occupied in frequency-duplex CSI transfer is generally less than one might expect and falls as the number of antennas increases. Thus, although the total amount of channel information increases with the number of antennas at the base station, the burden of learning this information at the base station paradoxically decreases. Thus, the advantages of having more antennas at the base station extend from having network gains to learning the channel information. We quantify our gains using linear analog modulation which avoids digitizing and coding the CSI and therefore can convey information very rapidly and can be readily analyzed. The old paradigm that it is not worth the effort to learn channel information at the transmitter should be revisited since the effort decreases and the gain increases with the number of antennas.

Proceedings ArticleDOI
23 Apr 2006
TL;DR: A simple online algorithm, which assigns a newly arrived user to a base station that improves the generalized proportional fairness objective the most without changing existing users’ association, is very close to the offline optimal solution.
Abstract: In 3G data networks, network operators would like to balance system throughput while serving users in a fair manner. This is achieved using the notion of proportional fairness. However, so far, proportional fairness has been applied at each base station independently. Such an approach can result in non-Pareto optimal bandwidth allocation when considering the network as a whole. Therefore, it is important to consider proportional fairness in a network-wide context with user associations to base stations governed by optimizing a generalized proportional fairness objective. In this paper, we take the first step in formulating and studying this problem rigorously. We show that the general problem is NP-hard and it is also hard to obtain a close-to-optimal solution. We then consider a special case where multi-user diversity only depends on the number of users scheduled together. We propose efficient offline optimal algorithms and heuristic-based greedy online algorithms to solve this problem. Using detailed simulations based on the base station layout of a large service provider in the U.S., we show that our simple online algorithm, which assigns a newly arrived user to a base station that improves the generalized proportional fairness objective the most without changing existing users’ association, is very close to the offline optimal solution. The greedy algorithm can achieve significantly better throughput and fairness in heterogeneous user distributions, when compared to the approach that assigns a user to the base station with the best signal strength.
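A sketch of that online rule under its simplest special case, with our own simplifying assumption that a base station time-shares equally among its users (so a base with n users gives each a 1/n share); the code and names are illustrative, not the paper's:

```python
import math

def assign_new_user(num_users, peak_rate):
    """Greedy online association: place the arriving user on the base station that
    most increases the sum-log (proportional fairness) objective, without moving
    existing users. num_users[b]: users already at base b; peak_rate[b]: the new
    user's achievable rate at base b."""
    def delta(b):
        n, r = num_users[b], peak_rate[b]
        # new user's log-utility minus the loss from shrinking existing users' shares
        return math.log(r / (n + 1)) - (n * math.log((n + 1) / n) if n else 0.0)
    return max(range(len(num_users)), key=delta)

# Base 0 is fast but crowded; base 1 is slower but empty.
print(assign_new_user([10, 0], [10.0, 2.0]))   # -> 1: the greedy rule avoids the crowd
```

This illustrates why the greedy rule beats best-signal-strength association: the best-signal base (base 0) would cost ten existing users part of their share, which outweighs the new user's higher peak rate.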

Proceedings ArticleDOI
27 Jun 2006
TL;DR: This paper proposes to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds, and explores algorithms in two categories: static and adaptive thresholds.
Abstract: Monitoring is an issue of primary concern in current and next-generation networked systems. For example, the objective of sensor networks is to monitor their surroundings for a variety of applications, such as atmospheric conditions, wildlife behavior, and troop movements, among others. Similarly, monitoring in data networks is critical not only for accounting and management, but also for detecting anomalies and attacks. Such monitoring applications are inherently continuous and distributed, and must be designed to minimize the communication overhead that they introduce. In this context we introduce and study a fundamental class of problems called "thresholded counts" where we must return the aggregate frequency count of an event that is continuously monitored by distributed nodes with a user-specified accuracy whenever the actual count exceeds a given threshold value. In this paper we propose to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds. We explore algorithms in two categories: static and adaptive thresholds. In the static case, we consider thresholds based on a linear combination of two alternate strategies, and show that there exists an optimal blend of the two strategies that results in minimum communication overhead. We further show that this optimal blend can be found using a steepest-descent search. In the adaptive case, we propose algorithms that adjust the local thresholds based on the observed distributions of updated information. We use extensive simulations not only to verify the accuracy of our algorithms and validate our theoretical results, but also to evaluate the performance of our algorithms. We find that both approaches yield significant savings over the naive approach of centralized processing.
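A minimal sketch of the local-threshold idea, under our own simplified protocol: each node reports only when its count has grown by its slack since its last report, so the coordinator's view lags the truth by at most the sum of slacks. The paper's static and adaptive schemes choose these slacks far more carefully.

```python
class Monitor:
    """One distributed monitoring node with a static local threshold (slack)."""
    def __init__(self, slack):
        self.count = 0          # true local count
        self.reported = 0       # last value sent to the coordinator
        self.slack = slack

    def observe(self, k=1):
        """Count k new events; return an update for the coordinator, or None."""
        self.count += k
        if self.count - self.reported >= self.slack:
            self.reported = self.count
            return self.count
        return None

# Coordinator's estimate = sum of last reports; it undercounts by less than the
# sum of slacks, so the global threshold T can be flagged with bounded error.
nodes = [Monitor(slack=5) for _ in range(4)]
estimate = 0
for i in range(37):
    if nodes[i % 4].observe() is not None:
        estimate += nodes[i % 4].slack   # each report conveys 'slack' new events
print(estimate, "of", 37)
```

Larger slacks mean fewer messages but a looser estimate; tuning that trade-off is exactly what the static blend and the adaptive adjustment above optimize.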

Proceedings ArticleDOI
22 Mar 2006
TL;DR: The results prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst-case error for the class of such signals.
Abstract: In sparse approximation theory, the fundamental problem is to reconstruct a signal A ∈ ℝⁿ from linear measurements ⟨A, ψᵢ⟩ with respect to a dictionary of ψᵢ's. Recently, there is focus on the novel direction of compressed sensing where the reconstruction can be done with very few, O(k log n), linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, the results prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst-case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying mathematics and because of its potential applications. In this paper, we address outstanding open problems in compressed sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, log n, 1/ε, we can reconstruct compressible signals. This is the first known polynomial-time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1 + ε and improves the reconstruction time from poly(n) to poly(k log n). Our second result is a randomized construction of O(k polylog(n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on compressed sensing does not provide such per-instance approximation guarantees; our result improves the best number of measurements known from prior work in other areas including learning theory, streaming algorithms, and complexity theory for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement.
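In symbols, the recovery guarantee being improved has the standard form below (our notation; x_k denotes the best k-term approximation of the signal x under the dictionary):

```latex
\mathbf{y} = \Phi \mathbf{x}, \quad \Phi \in \mathbb{R}^{m \times n},\;
m = \mathrm{poly}(k, \log n, 1/\varepsilon),
\qquad
\|\mathbf{x} - \hat{\mathbf{x}}\| \;\le\; (1+\varepsilon)\,\|\mathbf{x} - \mathbf{x}_k\|
```

That is, the explicit matrix Φ and its reconstruction algorithm recover any compressible signal nearly as well as keeping its k largest dictionary coefficients.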

Proceedings ArticleDOI
12 Nov 2006
TL;DR: A novel multiple string matching algorithm that can process multiple characters at a time thus achieving multi-gigabit rate search speeds and an architecture for an efficient implementation on TCAM-based hardware are proposed.
Abstract: The phenomenal growth of the Internet in the last decade and society's increasing dependence on it have brought along a flood of security attacks on the networking and computing infrastructure. Intrusion detection/prevention systems provide defenses against these attacks by monitoring headers and payload of packets flowing through the network. Multiple string matching, which can compare hundreds of string patterns simultaneously, is a critical component of these systems and is a well-studied problem. Most string matching solutions today are based on the classic Aho-Corasick algorithm, which has an inherent limitation: they can process only one input character per cycle. As memory speed is not growing at the same pace as network speed, this limitation has become a bottleneck in current networks, which have speeds of tens of gigabits per second. In this paper, we propose a novel multiple string matching algorithm that can process multiple characters at a time, thus achieving multigigabit-rate search speeds. We also propose an architecture for an efficient implementation on TCAM-based hardware. We additionally propose novel optimizations that make use of the properties of TCAMs to significantly reduce the memory requirements of the proposed algorithm. We finally present extensive simulation results of network-based virus/worm detection using real signature databases to illustrate the effectiveness of the proposed scheme.
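For context, here is a compact Python sketch of the classic one-character-per-step Aho-Corasick matcher that these systems build on; its single-character state transitions are the memory-bound bottleneck the paper's multi-character TCAM scheme removes. This is the baseline algorithm, not the paper's.

```python
from collections import deque

def build_ac(patterns):
    """Aho-Corasick automaton as goto/fail/output tables (state 0 is the root)."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                         # build the pattern trie
        s = 0
        for c in p:
            if c not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][c] = len(goto) - 1
            s = goto[s][c]
        out[s].add(p)
    q = deque(goto[0].values())                # depth-1 states fail to the root
    while q:                                   # BFS to fill failure links
        s = q.popleft()
        for c, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and c not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(c, 0)
            out[t] |= out[fail[t]]
    return goto, fail, out

def search(text, goto, fail, out):
    """Scan text one character per step -- the inner loop the paper accelerates."""
    s, hits = 0, []
    for i, c in enumerate(text):
        while s and c not in goto[s]:
            s = fail[s]
        s = goto[s].get(c, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits

tables = build_ac(["he", "she", "his", "hers"])
print(search("ushers", *tables))   # [(1, 'she'), (2, 'he'), (2, 'hers')]
```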

Journal ArticleDOI
TL;DR: In this article, the authors provide a quantum analysis of a cavity parametric amplifier employing a Kerr-like nonlinearity that is accompanied by a two-photon absorptive loss, which can degrade the performance of amplifiers and mixers.
Abstract: Two-photon loss mechanisms often accompany a Kerr nonlinearity. The kinetic inductance exhibited by superconducting transmission lines provides an example of a Kerr-like nonlinearity that is accompanied by a nonlinear resistance of the two-photon absorptive type. Such nonlinear dissipation can degrade the performance of amplifiers and mixers employing a Kerr-like nonlinearity as the gain or mixing medium. As an aid for parametric-amplifier design, the authors provide a quantum analysis of a cavity parametric amplifier employing a Kerr nonlinearity that is accompanied by a two-photon absorptive loss. Because of their usefulness in diagnostics, we obtain expressions for the pump amplitude within the cavity, the reflection coefficient for the pump amplitude reflected off of the cavity, the parametric gain, and the intermodulation gain. Expressions by which the degree of squeezing can be computed are also presented. Although the focus here is on providing aids for the design of kinetic-inductance parametric amplifiers, much of what is presented is directly applicable to analogous optical and mechanical amplifiers.

Proceedings ArticleDOI
Gerhard Kramer
21 Feb 2006
TL;DR: Several of the fundamental results known about ICs are reviewed with special emphasis on Gaussian channels, and four recent results including an improvement of a standard achievable rate region are discussed.
Abstract: Interference affects many types of communication channels including digital subscriber lines and wireless links. A basic model for studying coding for such scenarios is known as the interference channel (IC). Several of the fundamental results known about ICs are reviewed with special emphasis on Gaussian channels. We further discuss four recent results including an improvement of a standard achievable rate region.

Journal ArticleDOI
H.R. Stuart, A. Pidwerbetsky
TL;DR: In this article, a quasi-static analysis of the resonant properties of a sub-wavelength negative permittivity sphere was performed and it was shown that such a resonator will have a Q-factor that is only 1.5 times the Chu limit, matching the performance of other known electrically small spherical antenna designs such as the folded spherical helix and the spherical capped dipole.
Abstract: We show how resonators composed of negative permittivity materials can form the basis of effective small antenna elements. A quasi-static analysis of the resonant properties of a sub-wavelength negative permittivity sphere predicts that such a resonator will have a Q-factor that is only 1.5 times the Chu limit, matching the performance of other known electrically small spherical antenna designs, such as the folded spherical helix and the spherical capped dipole. Finite element simulation is used to demonstrate an impedance-matched radiating structure formed by coupling the resonator (a half-sphere above a ground plane) to a 50-ohm coaxial transmission line, where the coupling is mediated by a small conducting stub extending partially into the half-sphere. The resulting antenna has ka < 0.5, and its bandwidth and efficiency performance corresponds well to that predicted by the quasi-static analysis of the resonator.
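For a concrete sense of scale, the Chu lower bound on the radiation Q of an antenna enclosed in a sphere of radius a at wavenumber k is the standard expression below (our addition, to which the abstract's factor of 1.5 applies):

```latex
Q_{\mathrm{Chu}} = \frac{1}{(ka)^{3}} + \frac{1}{ka},
\qquad
Q_{\mathrm{resonator}} \approx 1.5\, Q_{\mathrm{Chu}}
```

For example, at ka = 0.5 this gives Q_Chu = 8 + 2 = 10, so the negative-permittivity sphere's predicted Q is about 15; lower Q at fixed size means usable bandwidth closer to the physical limit.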

Journal ArticleDOI
TL;DR: A linear programming framework for determining optimum routing and scheduling of flows that maximizes throughput in a wireless mesh network and accounts for the effect of interference and variable-rate transmission is developed.
Abstract: Wireless backhaul communication is expected to play a significant role in providing the necessary backhaul resources for future high-rate wireless networks. Mesh networking, in which information is routed from source to destination over multiple wireless links, has potential advantages over traditional single-hop networking, especially for backhaul communication. We develop a linear programming framework for determining optimum routing and scheduling of flows that maximizes throughput in a wireless mesh network and accounts for the effect of interference and variable-rate transmission. We then apply this framework to examine the throughput and range capabilities for providing wireless backhaul to a hexagonal grid of base stations, for both single-hop and multihop transmissions for various network scenarios. We then discuss the application of mesh networking for load balancing of wired backhaul traffic under unequal access traffic conditions. Numerical results show a significant benefit for mesh networking under unbalanced loading.

Proceedings ArticleDOI
05 Jun 2006
TL;DR: In this article, large-area superhydrophobic test surfaces have been fabricated and tested in a water tunnel, measuring drag in both the laminar and transitional regimes at velocities up to 1.4 m/s.
Abstract: Superhydrophobic surfaces are known to exhibit reduced viscous drag due to "slip" associated with a layer of air trapped at the liquid-solid interface. It is expected that this slip will lead to reduced turbulent skin-friction drag in external flows at higher Reynolds numbers in both the laminar and turbulent regimes. Results are presented from experiments exploring this effect. Large-area superhydrophobic test surfaces have been fabricated and tested in a water tunnel, measuring drag in both the laminar and transitional regimes at velocities up to 1.4 m/s. Drag reduction of approximately 50% is observed for laminar flow. Lower levels of drag reduction are observed at higher speeds after the flow has transitioned to turbulence.

Journal ArticleDOI
TL;DR: This work examines unitary encoded space-time transmission of MIMO systems and derives the received signal distribution when the channel matrix is correlated at the transmitter end.
Abstract: A promising new method from the field of representations of Lie groups is applied to calculate integrals over unitary groups, which are important for multiantenna communications. To demonstrate the power and simplicity of this technique, a number of recent results are rederived, using only a few simple steps. In particular, we derive the joint probability distribution of eigenvalues of the matrix GG†, with G a nonzero-mean or a semicorrelated Gaussian random matrix. These joint probability distribution functions can then be used to calculate the moment generating function of the mutual information for Gaussian multiple-input multiple-output (MIMO) channels with these probability distributions of their channel matrices G. We then turn to the previously unsolved problem of calculating the moment generating function of the mutual information of MIMO channels, which are correlated at both the receiver and the transmitter. From this moment generating function we obtain the ergodic average of the mutual information and study the outage probability. These methods can be applied to a number of other problems. As a particular example, we examine unitary encoded space-time transmission of MIMO systems and we derive the received signal distribution when the channel matrix is correlated at the transmitter end.

Journal ArticleDOI
TL;DR: Models to quantify the interdependencies of critical infrastructures in the U.S. and evaluate plans to compensate for vulnerabilities are presented to enhance public safety and infrastructure resiliency.
Abstract: One of the top 10 priorities of the U.S. Department of Homeland Security is protection of our critical national infrastructures including power, communications, transportation, and water. This paper presents models to quantify the interdependencies of critical infrastructures in the U.S. and evaluate plans to compensate for vulnerabilities. Communications is a key infrastructure, central to all others, so that understanding and modeling the risk due to communications disruptions is a high priority in order to enhance public safety and infrastructure resiliency. This paper discusses reliability modeling and analysis at a higher level than usual. Reliability analysis typically deals at the component or sub-system level and talks about "mean time to failure" and "mean time to repair" to derive availability estimates of equipment. Here, we deal with aggregate scales of failures, restoration, and mitigation across national infrastructures. This aggregate scale is useful when examining multiple infrastructures simultaneously with their interdependencies. System dynamics simulation models have been created for both communication networks and for the infrastructure interaction models that quantify these interactions using a risk-informed decision process for the evaluation of alternate protective measures and investment strategies in support of critical infrastructure protection. We describe an example development of these coupled infrastructure consequence models and their application to the analysis of a power disruption and its cascading effect on the telecommunications infrastructure as well as the emergency services infrastructure. The results show significant impacts across infrastructures that can become increasingly exacerbated if the consumer population moves more and more to telecom services without a power lifeline.

Journal ArticleDOI
R. J. Archer, M. M. Atalla
TL;DR: The indexed excerpt is the paper's symbol nomenclature rather than a summary.
Abstract: [Nomenclature list; truncated in the source.] Symbols defined include: the applied reverse bias; the negative of the intercept on the V axis of a plot of 1/C² versus V; the energy gap in the semiconductor (1.100 eV for silicon); the potential drop across the separation between metal and semiconductor, at equilibrium and with applied bias; the work function of the metal; the electronegativity of the semiconductor; the charge in the space-charge region of the semiconductor; and the charge in surface states on the semiconductor (positive for donor states; text truncated).

Journal ArticleDOI
Christophe Dorrer1
TL;DR: In this paper, a survey of characterization techniques used to measure statistical representations of data-encoded sources and complete representations of periodic sources is presented and complemented by the description of experimental implementations and results.
Abstract: The field of high-speed measurements for optical telecommunication systems is reviewed. An emphasis is placed on diagnostics evaluating the temporal electric field of light, i.e., broadly speaking, those that temporally resolve the shape of the optical wave in these systems. Sensitivity to the electric field can be obtained via sampling techniques operating directly in the time domain or via indirect approaches. A survey of characterization techniques used to measure statistical representations of data-encoded sources and complete representations of periodic sources is presented and complemented by the description of experimental implementations and results.

Proceedings ArticleDOI
26 Jun 2006
TL;DR: This work presents the first deterministic algorithms for answering biased quantiles queries accurately with small—sublinear in the input size—space and time bounds in one pass, and shows it uses less space than existing methods in many practical settings, and is fast to maintain.
Abstract: Skew is prevalent in data streams, and should be taken into account by algorithms that analyze the data. The problem of finding "biased quantiles"—that is, approximate quantiles which must be more accurate for more extreme values—is a framework for summarizing such skewed data on data streams. We present the first deterministic algorithms for answering biased quantiles queries accurately with small—sublinear in the input size—space and time bounds in one pass. The space bound is near-optimal, and the amortized update cost is close to constant, making it practical for handling high speed network data streams. We not only demonstrate theoretical properties of the algorithm, but also show it uses less space than existing methods in many practical settings, and is fast to maintain.
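Concretely, the biased-quantile guarantee tightens the usual uniform one near the extremes (this is the standard formulation of the problem, stated in our notation):

```latex
\left|\operatorname{rank}(\hat{q}_{\phi}) - \phi n\right| \le \varepsilon\,\phi n
\quad \text{(biased)}
\qquad \text{vs.} \qquad
\left|\operatorname{rank}(\hat{q}_{\phi}) - \phi n\right| \le \varepsilon\, n
\quad \text{(uniform)}
```

For example, a query for the 0.1st percentile (φ = 0.001) of n items must be answered within a rank error of 0.001 εn rather than εn, which is what makes the summary useful for skewed network data where the tails matter most.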

Journal ArticleDOI
TL;DR: This paper proposes a localized greedy algorithm that discovers for each multicast receiver the proxy with the highest 3G downlink channel rate, and derives a polynomial-time 4-approximation algorithm for the construction of the multicast forest.
Abstract: In third generation (3G) wireless data networks, multicast throughput decreases with the increase in multicast group size, since a conservative strategy for the base station is to use the lowest data rate of all the receivers so that the receiver with the worst downlink channel condition can decode the transmission correctly. This paper proposes ICAM, integrated cellular and ad hoc multicast, to increase 3G multicast throughput through opportunistic use of ad hoc relays. In ICAM, a 3G base station delivers packets to proxy mobile devices with better 3G channel quality. The proxy then forwards the packets to the receivers through an IEEE 802.11-based ad hoc network. In this paper, we first propose a localized greedy algorithm that discovers for each multicast receiver the proxy with the highest 3G downlink channel rate. We discover that due to capacity limitations and interference of the ad hoc relay network, maximizing the 3G downlink data rate of each multicast receiver's proxy does not lead to maximum throughput for the multicast group. We then show that the optimal ICAM problem is NP-hard, and derive a polynomial-time 4-approximation algorithm for the construction of the multicast forest. This bound holds when the underlying wireless MAC supports broadcast or unicast, single rate or multiple rates (a 4(1 + ε)-approximation scheme for the latter), and even when there are multiple simultaneous multicast sessions. Through both analysis and simulations, we show that our algorithms achieve throughput gains up to 840 percent for 3G downlink multicast with modest overhead on the 3G uplink.
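The localized greedy step is simple enough to state in a few lines (illustrative Python, our own names; it deliberately ignores relay capacity and interference, which is exactly why the paper replaces it with the 4-approximation forest construction):

```python
def pick_proxies(receivers, rate_3g, reachable):
    """For each multicast receiver, pick as proxy the ad hoc-reachable node
    (possibly itself) with the highest 3G downlink rate."""
    return {r: max(reachable[r] | {r}, key=lambda v: rate_3g[v])
            for r in receivers}

# Receiver 'a' can reach relay 'c', which has a much better 3G channel.
rate_3g = {"a": 0.4, "b": 1.2, "c": 2.4}
reachable = {"a": {"c"}, "b": set()}
print(pick_proxies(["a", "b"], rate_3g, reachable))  # {'a': 'c', 'b': 'b'}
```

When many receivers pick the same well-placed relay, the 802.11 links around it saturate; accounting for that contention is what turns proxy selection into the NP-hard forest-construction problem above.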

Journal ArticleDOI
David T. Neilson
TL;DR: How photonics' primary role will be to provide interconnection between racks of electronic routing elements and how fast wavelength switching can provide a high-capacity distributed switch fabric that will allow these packet routers to scale to higher capacities are discussed.
Abstract: The ongoing growth of data traffic from existing and new applications poses a challenge to the packet-switched network infrastructure. High-capacity transport can be achieved in such networks by using dense wavelength-division-multiplexed systems, and reconfigurable optical add-drop multiplexers allow the optical layer to provision wavelength-based circuits between routing nodes. However, construction of the high-capacity packet routers provides significant scaling issues due to complexity, interconnect, and thermal limits. In this paper we will not seek to cover all aspects of optical packet switching and routing but outline some of the challenges facing the construction of such future routers and describe the role photonics can have in overcoming some of these issues. We will discuss how photonics' primary role will be to provide interconnection between racks of electronic routing elements and describe how fast wavelength switching can provide a high-capacity distributed switch fabric that will allow these packet routers to scale to higher capacities. The fast wavelength switching can be seen as the packet analog of wavelength-based circuit switching of today's transparent optical networks.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the melting temperature of a 2D solid formed by electrons in a semiconductor sample under a strong perpendicular magnetic field is determined by the quantum correlation between the electrons through the Landau level filling factor ν = nh/eB (where h is the Planck constant and e is the electronic charge).
Abstract: The melting temperature Tm of a solid is generally determined by its solid–liquid transition on being heated at a fixed pressure, usually ambient pressure. It is also determined indirectly by the density n by means of the equation of state. This remains true even for solid helium [1], in which quantum effects often lead to unusual properties [2]. Here, we present experimental evidence to show that for a two-dimensional (2D) solid formed by electrons in a semiconductor sample under a strong perpendicular magnetic field B [3], Tm is not controlled by n, but effectively by the quantum correlation between the electrons through the Landau level filling factor ν = nh/eB (where h is the Planck constant and e is the electronic charge). Such melting behaviour, different from that of all other known solids (including a classical 2D electron solid at zero magnetic field [4]), suggests the quantum nature of the magnetic-field-induced electron solid. Moreover, Tm increases with the strength of the sample-dependent disorder that tends to pin the electron solid in place.

Proceedings ArticleDOI
22 Mar 2006
TL;DR: A general distributed scheme is proposed that uses dynamic link weights to "move" the link-throughput allocation within the Pareto boundary to a desired point optimizing a specific objective, along with an algorithm that seeks to optimize a weighted proportional fairness objective subject to minimum link-throughput constraints.
Abstract: We consider a model for random-access communication in networks of arbitrary topology. We characterize the efficient (Pareto) boundary of the network throughput region as the family of solutions optimizing weighted proportional fairness objective, parameterized by link weights. Based on this characterization we propose a general distributed scheme that uses dynamic link weights to "move" the link-throughput allocation within the Pareto boundary to a desired point optimizing a specific objective. As a specific application of the general scheme, we propose an algorithm seeking to optimize weighted proportional fairness objective subject to minimum link-throughput constraints. We study asymptotic behavior of the algorithm and show that link throughputs converge to optimal values as long as link dynamic weights converge. Finally, we present simulation experiments that show good performance of the algorithm.

Journal ArticleDOI
TL;DR: The algorithm is a recursive greedy algorithm adapted from the greedy algorithm for the directed Steiner tree problem and gives an O((log Σᵢ|gᵢ|)^(1+ε) · log m) approximation in polynomial time for every fixed constant ε > 0.