
Showing papers by "Bell Labs" published in 2009


Posted Content
TL;DR: In this paper, a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission is considered, where the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner.
Abstract: This paper considers a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission. For precoding, channel state information (CSI) is essential at the base stations. A popular technique for obtaining this CSI in time division duplex (TDD) systems is uplink training by utilizing the reciprocity of the wireless medium. This paper mathematically characterizes the impact that uplink training has on the performance of such multi-cell multiple antenna systems. When non-orthogonal training sequences are used for uplink training, the paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner. This paper analyzes this fundamental problem of pilot contamination in multi-cell systems. Furthermore, it develops a new multi-cell MMSE-based precoding method that mitigates this problem. In addition to being a linear precoding method, this precoding method has a simple closed-form expression that results from an intuitive optimization problem formulation. Numerical results show significant performance gains compared to certain popular single-cell precoding methods.
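
To make the contamination effect concrete, here is a minimal numerical sketch (our own illustration: two cells, a shared pilot, and simple conjugate beamforming standing in for the paper's MMSE precoder; all values are synthetic):

```python
# Minimal sketch (assumptions: 2 cells, M-antenna base stations, single-antenna
# users sharing the SAME uplink pilot; names and values are illustrative).
import numpy as np

rng = np.random.default_rng(0)
M = 8                                                            # base-station antennas
h_own = rng.standard_normal(M) + 1j * rng.standard_normal(M)     # BS1 <-> its own user
h_other = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # BS1 <-> user in cell 2

# Uplink training: both users send the same (non-orthogonal) pilot, so BS1's
# least-squares channel estimate is the SUM of both channels plus noise.
noise = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
h_est = h_own + h_other + noise          # pilot-contaminated estimate

# A precoder built from h_est (conjugate beamforming here, for illustration)
# is "corrupted" by h_other: it also beamforms toward the other cell's user,
# causing inter-cell interference of the same order as the desired signal.
w = np.conj(h_est) / np.linalg.norm(h_est)
print("desired-signal gain  :", abs(h_own @ w))
print("leakage toward cell 2:", abs(h_other @ w))
```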

1,040 citations


Journal ArticleDOI
TL;DR: By establishing many of the basic attributes of monolayer graphene resonators, the groundwork for applications of these devices, including high-sensitivity mass detectors, is put in place.
Abstract: The enormous stiffness and low density of graphene make it an ideal material for nanoelectromechanical applications. Here, we demonstrate the fabrication and electrical readout of monolayer graphene resonators, and test their response to changes in mass and temperature. The devices show resonances in the megahertz range, and the strong dependence of resonant frequency on applied gate voltage can be fitted to a membrane model to yield the mass density and built-in strain of the graphene. Following the removal and addition of mass, changes in both density and strain are observed, indicating that adsorbates impart tension to the graphene. On cooling, the frequency increases, and the shift rate can be used to measure the unusual negative thermal expansion coefficient of graphene. The quality factor increases with decreasing temperature, reaching approximately 1 × 10^4 at 5 K. By establishing many of the basic attributes of monolayer graphene resonators, the groundwork for applications of these devices, including high-sensitivity mass detectors, is put in place.
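
As a rough sanity check on the reported megahertz resonances, a back-of-envelope sketch of a doubly clamped membrane-under-tension model (a textbook form, not the paper's full gate-voltage fit; all numbers illustrative) gives:

```python
# Back-of-envelope sketch: f0 = (1/2L) * sqrt(T / rho_2D) for a doubly clamped
# graphene ribbon under built-in tension T per unit width (assumed textbook
# form; device dimensions and strain below are illustrative, not the paper's).
import math

E_2D = 340.0       # 2D elastic stiffness of graphene, N/m (literature value)
rho_2D = 7.4e-7    # 2D mass density of graphene, kg/m^2 (literature value)
L = 2e-6           # suspended length, m (illustrative)
strain = 1e-4      # built-in strain (illustrative)

T = E_2D * strain                        # tension per unit width, N/m
f0 = (1.0 / (2 * L)) * math.sqrt(T / rho_2D)
print(f"f0 ≈ {f0 / 1e6:.1f} MHz")        # lands in the MHz range, as reported
```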

955 citations


Journal ArticleDOI
Ward Whitt
TL;DR: Approximations for a basic queueing model, which has m identical servers in parallel, unlimited waiting room, and the first-come first-served queue discipline, are developed and evaluated; they are useful supplements to algorithms for computing the exact values that have been developed in recent years.
Abstract: Queueing models can usefully represent production systems experiencing congestion due to irregular flows, but exact analyses of these queueing models can be difficult. Thus it is natural to seek relatively simple approximations that are suitably accurate for engineering purposes. Here approximations for a basic queueing model are developed and evaluated. The model is the GI/G/m queue, which has m identical servers in parallel, unlimited waiting room, and the first-come first-served queue discipline, with service and interarrival times coming from independent sequences of independent and identically distributed random variables with general distributions. The approximations depend on the general interarrival-time and service-time distributions only through their first two moments. The main focus is on the expected waiting time and the probability of having to wait before beginning service, but approximations are also developed for other congestion measures, including the entire distributions of waiting time, queue-length and number in system. These relatively simple approximations are useful supplements to algorithms for computing the exact values that have been developed in recent years. The simple approximations can serve as starting points for developing approximations for more complicated systems for which exact solutions are not yet available. These approximations are especially useful for incorporating GI/G/m models in larger models, such as queueing networks, wherein the approximations can be components of rapid modeling tools.
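
For a flavor of the two-moment style of approximation the paper develops, here is a minimal sketch of the classic heuristic E[W(GI/G/m)] ≈ ((ca^2 + cs^2)/2) · E[W(M/M/m)], with the M/M/m wait computed exactly via Erlang C (Whitt's refinements are not reproduced here):

```python
# Two-moment approximation sketch for the GI/G/m mean wait in queue.
# ca2, cs2: squared coefficients of variation of interarrival and service times.
import math

def erlang_c(m: int, rho: float) -> float:
    """Probability of waiting in an M/M/m queue with per-server utilization rho."""
    a = m * rho                                   # offered load
    terms = [a**k / math.factorial(k) for k in range(m)]
    tail = a**m / (math.factorial(m) * (1 - rho))
    return tail / (sum(terms) + tail)

def gi_g_m_wait(lam: float, mu: float, m: int, ca2: float, cs2: float) -> float:
    """Approximate mean wait in queue for GI/G/m via the two-moment heuristic."""
    rho = lam / (m * mu)
    assert 0 < rho < 1, "system must be stable"
    w_mmm = erlang_c(m, rho) / (m * mu - lam)     # exact M/M/m mean wait
    return ((ca2 + cs2) / 2.0) * w_mmm

# Example: 4 servers, arrival rate 3.6, service rate 1, bursty arrivals
# (ca^2 = 2) and deterministic service (cs^2 = 0).
print(gi_g_m_wait(lam=3.6, mu=1.0, m=4, ca2=2.0, cs2=0.0))
```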

362 citations


Book ChapterDOI
23 Nov 2009
TL;DR: This work considers generic Garbled Circuit-based techniques for Secure Function Evaluation (SFE) in the semi-honest model and describes efficient GC constructions for addition, subtraction, multiplication, and comparison functions.
Abstract: We consider generic Garbled Circuit (GC)-based techniques for Secure Function Evaluation (SFE) in the semi-honest model. We describe efficient GC constructions for addition, subtraction, multiplication, and comparison functions. Our circuits for subtraction and comparison are approximately two times smaller (in terms of garbled tables) than previous constructions. This implies corresponding computation and communication improvements in SFE of functions using our efficient building blocks. The techniques rely on the recently proposed "free XOR" GC technique. Further, we present concrete and detailed improved GC protocols for the problem of secure integer comparison, and related problems of auctions, minimum selection, and minimal distance. Performance improvement comes both from building on our efficient basic blocks and several problem-specific GC optimizations. We provide precise cost evaluation of our constructions, which serves as a baseline for future protocols.
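
The "free XOR" technique the constructions build on is easy to illustrate: if every wire's 1-label equals its 0-label XORed with a global secret offset R, an XOR gate needs no garbled table at all. A minimal sketch (illustrative, 128-bit labels):

```python
# Free-XOR sketch (Kolesnikov-Schneider): every wire's 1-label is its 0-label
# XOR a global secret offset R, so the evaluator computes an XOR gate's output
# label by XORing the two input labels it holds. No garbled table is needed.
import secrets

R = secrets.token_bytes(16)                 # global offset, known to garbler only
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

def fresh_wire():
    """Return (label_for_0, label_for_1) satisfying the free-XOR invariant."""
    l0 = secrets.token_bytes(16)
    return l0, xor(l0, R)

a0, a1 = fresh_wire()
b0, b1 = fresh_wire()
c0 = xor(a0, b0)                            # garbler defines the XOR output wire
c1 = xor(c0, R)

# Evaluator holds one label per input wire and simply XORs them:
assert xor(a0, b1) == c1                    # 0 XOR 1 = 1
assert xor(a1, b1) == c0                    # 1 XOR 1 = 0
print("free-XOR gate evaluated correctly")
```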

267 citations


Proceedings Article
01 Sep 2009
TL;DR: In this paper, the authors demonstrate the generation of a 1.2-Tb/s NGI-CO-OFDM superchannel comprising 24 frequency-locked 12.5-GHz spaced PDM-QPSK carriers, and transmit it over 72×100 km of ultra-large-area fiber, achieving 3.7-b/s/Hz channel spectral efficiency.
Abstract: We demonstrate the generation of a novel 1.2-Tb/s NGI-CO-OFDM superchannel comprising 24 frequency-locked 12.5-GHz spaced PDM-QPSK carriers, and transmit it over 72×100 km of ultra-large-area fiber, achieving 3.7-b/s/Hz channel spectral efficiency (SE) and a record SE-distance product of 27,000 km·b/s/Hz.

264 citations


Journal ArticleDOI
TL;DR: This paper investigates the impact of e-waste regulation on new product introduction in a stylized model of the electronics industry and finds that existing "fee-upon-sale" types of e-waste regulation fail to motivate manufacturers to design for recyclability.
Abstract: This paper investigates the impact of e-waste regulation on new product introduction in a stylized model of the electronics industry. Manufacturers choose the development time and expenditure for each new version of a durable product, which together determine its quality. Consumers purchase the new product and dispose of the last-generation product, which becomes e-waste. The price of a new product strictly increases with its quality and consumers' rational expectation about the time until the next new product will be introduced. “Fee-upon-sale” types of e-waste regulation cause manufacturers to increase their equilibrium development time and expenditure, and thus the incremental quality for each new product. As new products are introduced (and disposed of) less frequently, the quantity of e-waste decreases and, even excluding the environmental benefits, social welfare may increase. Consumers pay a higher price for each new product because they anticipate using it for longer, which increases manufacturers' profits. Unfortunately, existing “fee-upon-sale” types of e-waste regulation fail to motivate manufacturers to design for recyclability. In contrast, “fee-upon-disposal” types of e-waste regulation such as individual extended producer responsibility motivate design for recyclability but, in competitive product categories, fail to reduce the frequency of new product introduction.

237 citations


Journal ArticleDOI
TL;DR: Network MIMO coordination is found to increase throughput by a factor of 1.8 with intra-site coordination among antennas belonging to the same cell site, and intra-site coordination performs almost as well as a highly sectorized system with 12 sectors per site.
Abstract: Single-user, multiuser, and network MIMO performance is evaluated for downlink cellular networks with 12 antennas per site, sectorization, universal frequency reuse, scheduled packet-data, and a dense population of stationary users. Compared to a single-user MIMO baseline system with 3 sectors per site, network MIMO coordination is found to increase throughput by a factor of 1.8 with intra-site coordination among antennas belonging to the same cell site. Intra-site coordination performs almost as well as a highly sectorized system with 12 sectors per site. Increasing the coordination cluster size from 1 to 7 sites increases the throughput gain factor to 2.5.

233 citations


Journal ArticleDOI
TL;DR: The results demonstrate the central role of crystallinity and purity in photogeneration processes and will constrain the design of future photovoltaic devices.
Abstract: We present a comparative study of ultrafast photoconversion dynamics in tetracene (Tc) and pentacene (Pc) single crystals and Pc films using optical pump-probe spectroscopy. Photoinduced absorption in Tc and Pc crystals is activated and temperature-independent, respectively, demonstrating dominant singlet-triplet exciton fission. In Pc films (as well as C60-doped films) this decay channel is suppressed by electron trapping. These results demonstrate the central role of crystallinity and purity in photogeneration processes and will constrain the design of future photovoltaic devices.

227 citations


Proceedings ArticleDOI
19 Apr 2009
TL;DR: It is shown that maximizing the number of supported connections is NP-hard, even when there is no background noise, in contrast to the problem of determining whether or not a given set of connections is feasible since that problem can be solved via linear programming.
Abstract: In this paper we consider the problem of maximizing the number of supported connections in arbitrary wireless networks, where a transmission is supported if and only if the signal-to-interference-plus-noise ratio (SINR) at the receiver is greater than some threshold. The aim is to choose transmission powers for each connection so as to maximize the number of connections for which this threshold is met. We believe that analyzing this problem is important both in its own right and also because it arises as a subproblem in many other areas of wireless networking. We study both the complexity of the problem and also present some game-theoretic results regarding the capacity that is achieved by completely distributed algorithms. We also feel that this problem is intriguing since it involves both continuous aspects (i.e., choosing the transmission powers) as well as discrete aspects (i.e., which connections should be supported). Our results are as follows. First, we show that maximizing the number of supported connections is NP-hard, even when there is no background noise. This is in contrast to the problem of determining whether or not a given set of connections is feasible, since that problem can be solved via linear programming. Second, we present a number of approximation algorithms for the problem. All of these approximation algorithms run in polynomial time and have an approximation ratio that is independent of the number of connections. Third, we examine a completely distributed algorithm and analyze it as a game in which a connection receives a positive payoff if it is successful and a negative payoff if it is unsuccessful while transmitting with nonzero power. We show that in this game there is not necessarily a pure Nash equilibrium, but if such an equilibrium does exist, the corresponding price of anarchy is independent of the number of connections. We also show that a mixed Nash equilibrium corresponds to a probabilistic transmission strategy, and in this case such an equilibrium always exists and has a price of anarchy that is independent of the number of connections.
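
The feasibility observation is straightforward to demonstrate: for a fixed set of connections and a fixed threshold, the SINR constraints are linear in the power vector. A hedged sketch of the linear-programming check (illustrative formulation using scipy):

```python
# Feasibility of a FIXED set of connections under an SINR threshold beta is a
# linear program: SINR_i >= beta is linear in the power vector p. Illustrative
# formulation; G, beta, noise, and p_max below are toy values.
import numpy as np
from scipy.optimize import linprog

def feasible(G: np.ndarray, beta: float, noise: float, p_max: float) -> bool:
    """G[i, j]: gain from transmitter j to receiver i; returns True if some
    power vector 0 <= p <= p_max meets SINR >= beta on every connection."""
    n = G.shape[0]
    # SINR_i >= beta  <=>  -G[i,i]*p_i + beta * sum_{j != i} G[i,j]*p_j <= -beta*noise
    A = np.where(np.eye(n, dtype=bool), -G, beta * G)
    b = np.full(n, -beta * noise)
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b, bounds=[(0, p_max)] * n)
    return res.status == 0               # 0 = optimum found, i.e., feasible

G = np.array([[1.0, 0.1],                # toy 2-link network
              [0.2, 1.0]])
print(feasible(G, beta=2.0, noise=0.05, p_max=1.0))
```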

220 citations


Proceedings ArticleDOI
19 Apr 2009
TL;DR: This work proposes algorithms that automatically create efficient, soft fractional frequency reuse (FFR) patterns for enhancing performance of orthogonal frequency division multiple access (OFDMA) based cellular systems for forward link best effort traffic.
Abstract: Self-optimization of the network, for the purposes of improving overall capacity and/or cell edge data rates, is an important objective for next generation cellular systems. We propose algorithms that automatically create efficient, soft fractional frequency reuse (FFR) patterns for enhancing performance of orthogonal frequency division multiple access (OFDMA) based cellular systems for forward link best effort traffic. The Multi-sector Gradient (MGR) algorithm adjusts the transmit powers of the different sub-bands by systematically pursuing maximization of the overall network utility. We show that the maximization can be done by sectors operating in a semi-autonomous way, with only some gradient information exchanged periodically by neighboring sectors. The Sector Autonomous (SA) algorithm adjusts its transmit powers in each sub-band independently in each sector using a non-trivial heuristic to achieve out-of-cell interference mitigation. This algorithm is completely autonomous and requires no exchange of information between sectors. Through extensive simulations, we demonstrate that both algorithms provide substantial performance improvements. In particular, they can improve the cell edge data throughputs significantly, by up to 66% in some cases for the MGR, while maintaining the overall sector throughput at the same level as that achieved by the traditional approach. The simulations also show that both algorithms lead the system to "self-organize" into efficient, soft FFR patterns with no a priori frequency planning.
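
A simplified sketch of the gradient-style update at the heart of MGR (our own reduction to a single sector maximizing a sum-rate utility; the paper's exact utility, rate model, and inter-sector gradient exchange are not reproduced):

```python
# Projected-gradient sketch: one sector ascends a sum-rate utility over its
# per-sub-band powers, renormalizing to a total power budget each step.
# Gains and interference levels are illustrative.
import numpy as np

g = np.array([1.0, 0.5, 0.2])   # per-sub-band channel gains (illustrative)
I = np.array([0.1, 0.4, 0.8])   # per-sub-band interference levels (illustrative)

def mgr_step(p, lr=0.2, budget=3.0):
    grad = g / ((I + g * p) * np.log(2))   # d/dp of sum_i log2(1 + g_i p_i / I_i)
    p = np.maximum(p + lr * grad, 1e-9)    # ascend, keep powers positive
    return p * budget / p.sum()            # renormalize to the power budget

p = np.full(3, 1.0)                        # equal initial power on 3 sub-bands
for _ in range(300):
    p = mgr_step(p)
print(p)   # most power ends up on the sub-band with the best gain-to-interference ratio
```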

220 citations


Journal ArticleDOI
TL;DR: This paper presents a new load balancing technique by controlling the size of WLAN cells (i.e., AP's coverage range), which is conceptually similar to cell breathing in cellular networks, and develops a set of polynomial time algorithms that find the optimal beacon power settings which minimize the load of the most congested AP.
Abstract: Maximizing network throughput while providing fairness is one of the key challenges in wireless LANs (WLANs). This goal is typically achieved when the load of access points (APs) is balanced. Recent studies on operational WLANs, however, have shown that AP load is often substantially uneven. To alleviate such imbalance of load, several load balancing schemes have been proposed. These schemes commonly require proprietary software or hardware at the user side for controlling the user-AP association. In this paper we present a new load balancing technique that controls the size of WLAN cells (i.e., an AP's coverage range), which is conceptually similar to cell breathing in cellular networks. The proposed scheme requires no modification to the users or to the IEEE 802.11 standard; it only requires the ability to dynamically change the transmission power of the AP beacon messages. We develop a set of polynomial time algorithms that find the optimal beacon power settings which minimize the load of the most congested AP. We also consider the problem of network-wide min-max load balancing. Simulation results show that the performance of the proposed method is comparable with or superior to the best existing association-based methods.
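
The cell-breathing intuition can be shown with a toy greedy loop (not the paper's optimal polynomial-time algorithms): users associate with the strongest received beacon, so stepping down the congested AP's beacon power sheds its edge users onto neighbors.

```python
# Toy cell-breathing sketch; all gains and power levels are illustrative.
import numpy as np

# gain[u, a]: path gain from AP a to user u (four users sit closest to AP 0)
gain = np.array([[1.00, 0.05],
                 [0.80, 0.10],
                 [0.60, 0.50],
                 [0.55, 0.50],
                 [0.10, 0.90]])
power = np.array([1.0, 1.0])        # beacon powers (data power can stay fixed)

def loads(power):
    assoc = np.argmax(gain * power, axis=1)     # strongest-beacon association
    return np.bincount(assoc, minlength=len(power))

for _ in range(10):                             # greedy min-max descent
    l = loads(power)
    worst = int(np.argmax(l))
    trial = power.copy()
    trial[worst] *= 0.8                         # shrink the congested AP's cell
    if loads(trial).max() < l.max():
        power = trial
    else:
        break

print(loads(power), power)                      # load goes from [4 1] to [2 3]
```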

Proceedings ArticleDOI
28 Jun 2009
TL;DR: A multi-cell MMSE-based precoding is proposed that, when combined with frequency/time/pilot reuse techniques, mitigates this problem of pilot contamination.
Abstract: This paper considers a multi-cell multiple antenna system with precoding at the base stations for downlink transmission. To enable precoding, channel state information (CSI) is obtained via uplink training. This paper mathematically characterizes the impact that uplink training has on the performance of multi-cell multiple antenna systems. When non-orthogonal training sequences are used for uplink training, it is shown that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells. This problem of pilot contamination is analyzed in this paper. A multi-cell MMSE-based precoding is proposed that, when combined with frequency/time/pilot reuse techniques, mitigates this problem.

Journal Article
TL;DR: This work comprises an advancement of the FA that was initially proposed within the end-to-end reconfigurability - phase 2 (E2R II) project and has been enhanced to incorporate cognitive and self-x capabilities to address emerging B3G challenges.
Abstract: In this article, we propose and describe a functional architecture (FA) for the efficient radio and spectrum resource management of the anticipated future compound communication systems. This work comprises an advancement of the FA that was initially proposed within the end-to-end reconfigurability - phase 2 (E2R II) project and has been enhanced to incorporate cognitive and self-x capabilities to address emerging B3G challenges. Furthermore, the proposed FA is currently being elaborated within Working Group 3 (WG3) of the Reconfigurable Radio Systems Technical Committee (RRS TC). This committee was created by the European Telecommunications Standards Institute (ETSI) Board with the aim to study the feasibility of standardization activities related to reconfigurable radio systems (including software defined and cognitive radios). It should also be mentioned that a relevant functional architecture for optimized radio resource usage in heterogeneous wireless networks is currently under standardization within IEEE.

Journal ArticleDOI
TL;DR: The capacity to perform interferometry on 5/2 excitations is demonstrated and properties important for understanding this state and its excitations are revealed.
Abstract: A standing problem in low-dimensional electron systems is the nature of the 5/2 fractional quantum Hall (FQH) state: Its elementary excitations are a focus for both elucidating the state's properties and as candidates in methods to perform topological quantum computation. Interferometric devices may be used to manipulate and measure quantum Hall edge excitations. Here we use a small-area edge state interferometer designed to observe quasiparticle interference effects. Oscillations consistent in detail with the Aharonov–Bohm effect are observed for integer quantum Hall and FQH states (filling factors ν = 2, 5/3, and 7/3) with periods corresponding to their respective charges and magnetic field positions. With these factors as charge calibrations, periodic transmission through the device consistent with quasiparticle charge e/4 is observed at ν = 5/2 and at lowest temperatures. The principal finding of this work is that, in addition to these e/4 oscillations, periodic structures corresponding to e/2 are also observed at ν = 5/2 and at lowest temperatures. Properties of the e/4 and e/2 oscillations are examined with the device sensitivity sufficient to observe temperature evolution of the 5/2 quasiparticle interference. In the model of quasiparticle interference, this presence of an effective e/2 period may empirically reflect an e/2 quasiparticle charge or may reflect multiple passes of the e/4 quasiparticle around the interferometer. These results are discussed within a picture of e/4 quasiparticle excitations potentially possessing non-Abelian statistics. These studies demonstrate the capacity to perform interferometry on 5/2 excitations and reveal properties important for understanding this state and its excitations.

Proceedings ArticleDOI
13 Sep 2009
TL;DR: A power efficient transceiver will be developed that adapts to changing traffic load for an energy efficient operation in mobile radio systems and will enable a sustainable increase of mobile data rates.
Abstract: EARTH is a major new European research project starting in 2010 with 15 partners from 10 countries. Its main technical objective is to achieve a reduction of the overall energy consumption of mobile broadband networks by 50%. In contrast to previous efforts, EARTH regards both network aspects and individual radio components from a holistic point of view. Considering that the signal strength strongly decreases with the distance to the base station, small cells are more energy efficient than large cells. EARTH will develop corresponding deployment strategies as well as management algorithms and protocols on the network level. On the component level, the project focuses on base station optimizations as power amplifiers consume the most energy in the system. A power efficient transceiver will be developed that adapts to changing traffic load for an energy efficient operation in mobile radio systems. With these results EARTH will reduce energy costs and carbon dioxide emissions and will thus enable a sustainable increase of mobile data rates.

Journal ArticleDOI
TL;DR: A significant asymmetry in the optical conductivity upon electrostatic doping of electrons and holes arises from a marked asymmetry between the valence and conduction bands, which is mainly due to the inequivalence of the two sublattices within the graphene layer and the next-nearest-neighbor interlayer coupling.
Abstract: We report on infrared spectroscopy of bilayer graphene integrated in gated structures. We observe a significant asymmetry in the optical conductivity upon electrostatic doping of electrons and holes. We show that this finding arises from a marked asymmetry between the valence and conduction bands, which is mainly due to the inequivalence of the two sublattices within the graphene layer and the next-nearest-neighbor interlayer coupling. From the conductivity data, the energy difference of the two sublattices and the interlayer coupling energy are directly determined.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of multiple multicast sessions with intra-session network coding in time-varying networks and proposed dynamic algorithms for multicast routing, network coding, power allocation, session scheduling, and rate allocation across correlated sources.
Abstract: The problem of multiple multicast sessions with intra-session network coding in time-varying networks is considered. The network-layer capacity region of input rates that can be stably supported is established. Dynamic algorithms for multicast routing, network coding, power allocation, session scheduling, and rate allocation across correlated sources, which achieve stability for rates within the capacity region, are presented. This work builds on the back-pressure approach introduced by Tassiulas, extending it to network coding and correlated sources. In the proposed algorithms, decisions on routing, network coding, and scheduling between different sessions at a node are made locally at each node based on virtual queues for different sinks. For correlated sources, the sinks locally determine and control transmission rates across the sources. The proposed approach yields a completely distributed algorithm for wired networks. In the wireless case, power control among different transmitters is centralized while routing, network coding, and scheduling between different sessions at a given node are distributed.
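
A minimal sketch of the underlying back-pressure dynamics (plain store-and-forward, one sink, no network coding or correlated sources; topology and arrival rate are illustrative):

```python
# Back-pressure sketch in the spirit of the Tassiulas-style dynamics the paper
# extends: nodes keep one virtual queue per sink, and each slot every link
# serves the commodity with the largest positive queue differential.
import random

nodes, sinks = ["a", "b", "c"], ["c"]
links = [("a", "b"), ("b", "c"), ("a", "c")]     # each link moves 1 packet/slot
Q = {(n, s): 0 for n in nodes for s in sinks}    # per-sink virtual queues

random.seed(1)
for _ in range(1000):
    Q[("a", "c")] += random.random() < 0.6       # Bernoulli(0.6) arrivals at source a
    for i, j in links:
        diffs = {s: Q[(i, s)] - (0 if j == s else Q[(j, s)]) for s in sinks}
        s = max(diffs, key=diffs.get)            # back-pressure commodity choice
        if diffs[s] > 0:
            Q[(i, s)] -= 1
            if j != s:                           # packets reaching their sink depart
                Q[(j, s)] += 1

print(Q)   # queues remain small: rate 0.6 is inside the capacity region
```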

Book ChapterDOI
19 Aug 2009
TL;DR: A more flexible family of differential paths and a new variable birthdaying search space are described, leading to just three pairs of near-collision blocks to generate the collision, enabling construction of RSA moduli that are sufficiently short to be accepted by current CAs.
Abstract: We present a refined chosen-prefix collision construction for MD5 that allowed creation of a rogue Certification Authority (CA) certificate, based on a collision with a regular end-user website certificate provided by a commercial CA. Compared to the previous construction from Eurocrypt 2007, this paper describes a more flexible family of differential paths and a new variable birthdaying search space. Combined with a time-memory trade-off, these improvements lead to just three pairs of near-collision blocks to generate the collision, enabling construction of RSA moduli that are sufficiently short to be accepted by current CAs. The entire construction is fast enough to allow for adequate prediction of certificate serial number and validity period: it can be made to require about 2^49 MD5 compression function calls. Finally, we improve the complexity of identical-prefix collisions for MD5 to about 2^16 MD5 compression function calls and use it to derive a practical single-block chosen-prefix collision construction of which an example is given.

Journal ArticleDOI
TL;DR: In this paper, the authors show that modulating a two-dimensional electron gas with a long-wavelength periodic potential with honeycomb symmetry can lead to the creation of isolated massless Dirac points with tunable Fermi velocity.
Abstract: At low energy, electrons in doped graphene sheets behave like massless Dirac fermions with a Fermi velocity which does not depend on carrier density. Here we show that modulating a two-dimensional electron gas with a long-wavelength periodic potential with honeycomb symmetry can lead to the creation of isolated massless Dirac points with tunable Fermi velocity. We provide detailed theoretical estimates to realize such an artificial graphene-like system and discuss an experimental realization in a modulation-doped GaAs quantum well. Ultrahigh-mobility electrons with linearly dispersing bands might open new avenues for the studies of Dirac-fermion physics in semiconductors.

Journal ArticleDOI
TL;DR: In this article, the authors study coordination mechanisms for four classes of multiprocessor machine scheduling problems and derive upper and lower bounds on the price of anarchy of these mechanisms, and prove that the system converges to a pure-strategy Nash equilibrium in a linear number of rounds.

Journal ArticleDOI
TL;DR: In this paper, a multistage narrowband optical pole-zero notch filter is presented, which allows for reconfigurable and independent tuning of the center frequency, null depth, and bandwidth for one or more notches simultaneously.
Abstract: We present a fully tunable multistage narrowband optical pole-zero notch filter that is fabricated in a silicon complementary metal oxide semiconductor (CMOS) foundry. The filter allows for reconfigurable and independent tuning of the center frequency, null depth, and bandwidth for one or more notches simultaneously. It is constructed using a Mach-Zehnder interferometer (MZI) with cascaded tunable all-pass filter (APF) ring resonators in its arms. The measured filter nulling response exhibits an ultranarrow notch 3-dB bandwidth of 0.635 GHz and a nulling depth of 33 dB. The filter is compact, integrated in an area of 1.75 mm^2. Using this device, we demonstrate a novel method to cancel undesired bands of less than 910 MHz 3-dB bandwidth in microwave-photonic systems. The ultranarrow filter response was realized using our low-propagation-loss silicon channel waveguide and tunable ring-resonator designs, which experimentally yielded losses of 0.25 dB/cm and 0.18 dB per round trip, respectively.
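
A toy transfer-function sketch of the pole-zero idea (a standard all-pass ring formula is assumed, with illustrative parameters rather than the fabricated device's): the ring sweeps its phase rapidly through resonance, so amplitude-balanced interference at the MZI combiner yields a narrow, deep notch.

```python
# Sketch: all-pass ring in one MZI arm, amplitude/phase-balanced second arm.
# t: ring through-coupling, a: round-trip amplitude (illustrative values).
import numpy as np

t, a = 0.95, 0.98
phi = np.linspace(-np.pi, np.pi, 20001)   # round-trip phase detuning
H = (t - a * np.exp(1j * phi)) / (1 - t * a * np.exp(1j * phi))  # all-pass ring

m = len(phi) // 2                                  # index of exact resonance
r, theta = np.abs(H[m]), np.pi + np.angle(H[m])    # balance/bias the second arm
out = np.abs(H + r * np.exp(1j * theta)) ** 2 / 4  # interference at the combiner

notch = phi[out <= out.max() / 2]                  # region suppressed by >= 3 dB
print(f"3-dB notch width: {notch[-1] - notch[0]:.4f} rad of round-trip phase")
print(f"null depth: {10 * np.log10((out[m] + 1e-15) / out.max()):.0f} dB "
      "(numerically limited; ideally a perfect null when amplitudes balance)")
```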

Book ChapterDOI
19 Aug 2009
TL;DR: A study was carried out to examine the effectiveness of using Support Vector Machines to accurately identify if a mobile phone should be allowed access to a local cellular base station using differences imbued upon the signal as it passes through the analogue stages of its radio transmitter.
Abstract: Mobile phone proliferation and increasing broadband penetration presents the possibility of placing small cellular base stations within homes to act as local access points. This can potentially lead to a very large increase in authentication requests hitting the centralized authentication infrastructure unless access is mediated at a lower protocol level. A study was carried out to examine the effectiveness of using Support Vector Machines to accurately identify if a mobile phone should be allowed access to a local cellular base station using differences imbued upon the signal as it passes through the analogue stages of its radio transmitter. Whilst allowing prohibited transmitters to gain access at the local level is undesirable and costly, denying service to a permitted transmitter is simply unacceptable. Two different learning approaches were employed, the first using One Class Classifiers (OCCs) and the second using customized ensemble classifiers. OCCs were found to perform poorly, with a true positive (TP) rate of only 50% (where TP refers to correctly identifying a permitted transmitter) and a true negative (TN) rate of 98% (where TN refers to correctly identifying a prohibited transmitter). The customized ensemble classifier approach was found to considerably outperform the OCCs with a 97% TP rate and an 80% TN rate.
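
For readers unfamiliar with the one-class setup, here is a hedged sketch using scikit-learn's OneClassSVM on synthetic stand-in features (the paper's actual analogue-impairment features, data, and tuning are not reproduced):

```python
# One-class classification sketch: train only on the permitted transmitter's
# "fingerprint" features, then test acceptance/rejection. Features are synthetic.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
permitted = rng.normal(0.0, 1.0, size=(200, 4))    # enrolled phone's features
prohibited = rng.normal(1.5, 1.2, size=(200, 4))   # other transmitters

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(permitted)
tp = (clf.predict(permitted) == 1).mean()          # permitted accepted
tn = (clf.predict(prohibited) == -1).mean()        # prohibited rejected
print(f"TP rate {tp:.2f}, TN rate {tn:.2f}")
```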

Proceedings ArticleDOI
17 Aug 2009
TL;DR: This paper describes a preliminary prototype system, built using OpenFlow components, that demonstrates the feasibility of this architecture in enabling seamless migration of virtual machines and in enhancing delivery of cloud-based services.
Abstract: It is envisaged that services and applications will migrate to a cloud-computing paradigm where thin clients on user devices access, over the network, applications hosted in data centers by application service providers. Examples are cloud-based gaming applications and cloud-supported virtual desktops. For good performance and efficiency, it is critical that these services are delivered from locations that are best for the current (dynamically changing) set of users. To achieve this, we expect that services will be hosted on virtual machines in interconnected data centers and that these virtual machines will migrate dynamically to locations best suited for the current user population. A basic network infrastructure need is then the ability to migrate virtual machines across multiple networks without losing service continuity. In this paper, we develop mechanisms to accomplish this using a network-virtualization architecture that relies on a set of distributed forwarding elements with centralized control (borrowing on several recent proposals in a similar vein). We describe a preliminary prototype system, built using OpenFlow components, that demonstrates the feasibility of this architecture in enabling seamless migration of virtual machines and in enhancing delivery of cloud-based services.

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This work proposes the Distributed and Load Balanced Bloom Filters to address the drawbacks of Bloom filter-based IP lookup algorithms and develops a practical IP lookup algorithm for use in 100-Gbps line cards.
Abstract: Internet line speeds are expected to reach 100 Gbps in a few years. To match these line rates, a single router line card needs to forward more than 150 million packets per second. This requires a corresponding amount of longest prefix match operations. Furthermore, the increased use of IPv6 requires core routers to perform the longest prefix match on several hundred thousand prefixes varying in length up to 64 bits. It is a challenge to scale existing algorithms simultaneously in the three dimensions of increased throughput, table size, and prefix length. Recently, Bloom filter-based IP lookup algorithms have been proposed. While these algorithms can take advantage of hardware parallelism and fast on-chip memory to achieve high performance, they have significant drawbacks (discussed in the paper) that impede their use in practice. In this paper, we present the Distributed and Load Balanced Bloom Filters to address these drawbacks. We develop a practical IP lookup algorithm for use in 100-Gbps line cards. The regular and modular hardware architecture of our scheme maps directly to state-of-the-art ASICs and FPGAs with reasonable resource consumption. Also, our scheme outperforms TCAMs on most metrics, including cost, power dissipation, and board footprint.
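
The generic idea that such schemes build on (one Bloom filter per prefix length, probed from longest to shortest, with an exact table confirming probable hits) can be sketched as follows; this is illustrative Python, not the paper's distributed, load-balanced hardware design:

```python
# Bloom-filter-assisted longest prefix match sketch for IPv4 (toy scale).
import hashlib

class Bloom:
    def __init__(self, bits=1 << 16, k=4):
        self.bits, self.k, self.arr = bits, k, bytearray(bits // 8)
    def _idx(self, key):
        d = hashlib.sha256(key.encode()).digest()
        return [int.from_bytes(d[4*i:4*i+4], "big") % self.bits for i in range(self.k)]
    def add(self, key):
        for i in self._idx(key):
            self.arr[i // 8] |= 1 << (i % 8)
    def __contains__(self, key):
        return all(self.arr[i // 8] & (1 << (i % 8)) for i in self._idx(key))

table = {"10.0.0.0/8": "hop-A", "10.1.0.0/16": "hop-B"}   # toy FIB
blooms, exact = {}, {}
for pfx, nh in table.items():
    net, ln = pfx.split("/")
    ln = int(ln)
    blooms.setdefault(ln, Bloom()).add(f"{net}/{ln}")     # on-chip filters
    exact[f"{net}/{ln}"] = nh                             # off-chip exact table

def lookup(addr):
    bits = "".join(f"{int(o):08b}" for o in addr.split("."))
    for ln in sorted(blooms, reverse=True):               # longest length first
        net_bits = bits[:ln] + "0" * (32 - ln)
        net = ".".join(str(int(net_bits[i:i+8], 2)) for i in range(0, 32, 8))
        key = f"{net}/{ln}"
        if key in blooms[ln] and key in exact:            # filter, then verify
            return exact[key]
    return "default"

print(lookup("10.1.2.3"), lookup("10.9.9.9"), lookup("8.8.8.8"))
```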

Journal ArticleDOI
24 Jun 2009
TL;DR: The objective of this study is to analyze the strengths and weaknesses of Mashup tools with respect to the data integration aspect.
Abstract: Mashup is a new application development approach that allows users to aggregate multiple services to create a service for a new purpose. Even though the Mashup approach opens new and broader opportunities for data/service consumers, the development process still requires the users to know not only how to write code using programming languages, but also how to use the different Web APIs from different services. In order to solve this problem, there is increasing effort put into developing tools designed to support users with little programming knowledge in developing Mashup applications. The objective of this study is to analyze the strengths and weaknesses of these Mashup tools with respect to the data integration aspect.

Journal ArticleDOI
TL;DR: Using geometric and probabilistic analysis of an idealized model, it is proved that the achievable spatial resolution in localizing a target's trajectory is of the order of 1/ρR, where R is the sensing radius and ρ is the sensor density per unit area.
Abstract: We explore fundamental performance limits of tracking a target in a two-dimensional field of binary proximity sensors, and design algorithms that attain those limits while providing minimal descriptions of the estimated target trajectory. Using geometric and probabilistic analysis of an idealized model, we prove that the achievable spatial resolution in localizing a target's trajectory is of the order of 1/ρR, where R is the sensing radius and ρ is the sensor density per unit area. We provide a geometric algorithm for computing an economical (in descriptive complexity) piecewise linear path that approximates the trajectory within this fundamental limit of accuracy. We employ analogies between binary sensing and sampling theory to contend that only a "lowpass" approximation of the trajectory is attainable, and explore the implications of this observation for estimating the target's velocity. We also consider nonideal sensing, employing particle filters to average over noisy sensor observations, and geometric postprocessing of the particle filter output to provide an economical piecewise linear description of the trajectory. In addition to simulation results validating our approaches for both idealized and nonideal sensing, we report on lab-scale experiments using motes with acoustic sensors.

Proceedings ArticleDOI
04 Apr 2009
TL;DR: This work proposes to discriminate, among thumb gestures, those it calls MicroRolls, characterized by zero tangential velocity of the skin relative to the screen surface, and shows that at least 16 elemental gestures can be automatically recognized.
Abstract: The input vocabulary for touch-screen interaction on handhelds is dramatically limited, especially when the thumb must be used. To enrich that vocabulary we propose to discriminate, among thumb gestures, those we call MicroRolls, characterized by zero tangential velocity of the skin relative to the screen surface. Combining four categories of thumb gestures, Drags, Swipes, Rubbings and MicroRolls, with other classification dimensions, we show that at least 16 elemental gestures can be automatically recognized. We also report the results of two experiments showing that the roll vs. slide distinction facilitates thumb input in a realistic copy and paste task, relative to existing interaction techniques.

Journal ArticleDOI
TL;DR: This work experimentally investigates the performance of a spectrally efficient multi-carrier channel consisting of two or more optical carriers spaced around the baud rate, with each carrier modulated with the polarization division multiplexed (PDM) quadrature phase shift keyed (QPSK) format, and finds that 4× oversampling, together with a constant modulus algorithm (CMA) based digital equalizer having multiple quarter-symbol spaced taps, gives much better overall performance.
Abstract: We experimentally investigate the performance of a spectrally efficient multi-carrier channel consisting of two or more optical carriers spaced around the baud rate, with each carrier modulated with the polarization division multiplexed (PDM) quadrature phase shift keyed (QPSK) format. We first study the performance of a 100-Gb/s 2-carrier PDM-QPSK channel with each carrier modulated at 12.5 Gbaud as a function of various design parameters such as the time alignment between the modulated carriers, the frequency separation between the carriers, the oversampling factor at the receiver, and the bandwidth of the digital pre-filter used for carrier separation. While the measurements confirm the previously reported observations, they also reveal some interesting additional features. The coherent crosstalk between the modulated carriers is found to be minimized when these carriers are symbol aligned. Spacing the carriers at the baud rate, corresponding to the orthogonal frequency-division multiplexing (OFDM) condition, leads to a local maximum in performance only for some specific cases where large oversampling (>2×) is applied. It is found that 4× oversampling, together with a constant modulus algorithm (CMA) based digital equalizer having multiple quarter-symbol (T/4) spaced taps, gives much better overall performance than 2× oversampling with a CMA-based equalizer having T/2 spaced taps. In addition, using a T/4-delay-and-add filter (DAF) as a pre-filter to assist carrier separation is found to give better performance than the commonly used T/2-DAF. It is also possible to set the carrier spacing to be as small as 80% of the baud rate while incurring negligible penalty at a BER of approximately 10^-3. 3-carrier and 5-carrier PDM-QPSK channels at 12.5 Gbaud with frequency-locked carriers spaced at 12.5 GHz and 4× oversampling are also studied, and shown to perform reasonably well with small relative penalties. Finally, increasing the baud rate of the 2-carrier PDM-QPSK channel to 25 Gbaud and 28 Gbaud is investigated. It is found that with a fixed sampling speed of 50 Gsamples/s, scaling from 12.5 Gbaud to 25 and 28 Gbaud causes excess crosstalk penalties of about 2.8 dB and 4.8 dB, respectively, indicating the need to increase the sampling speed and transmitter bandwidth in order to support these high-data-rate channels without excessive coherent crosstalk.

Book ChapterDOI
21 Sep 2009
TL;DR: In this paper, the authors present new, more efficient privacy-protecting protocols for remote evaluation of such classification/diagnostic programs; in addition to efficiency improvements, they generalize previous solutions by securely evaluating private linear branching programs (LBP), a useful generalization of branching programs that they introduce.
Abstract: Diagnostic and classification algorithms play an important role in data analysis, with applications in areas such as health care, fault diagnostics, or benchmarking. Branching programs (BPs) are a popular representation model for describing the underlying classification/diagnostics algorithms. Typical application scenarios involve a client who provides data and a service provider (server) whose diagnostic program is run on the client's data. Both parties need to keep their inputs private. We present new, more efficient privacy-protecting protocols for remote evaluation of such classification/diagnostic programs. In addition to efficiency improvements, we generalize previous solutions - we securely evaluate private linear branching programs (LBP), a useful generalization of BPs that we introduce. We show the practicality of our solutions: we apply our protocols to the privacy-preserving classification of medical ElectroCardioGram (ECG) signals and present implementation results. Finally, we discover and fix a subtle security weakness of the most recent remote diagnostic proposal, which allowed malicious clients to learn partial information about the program.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of characterizing the throughput scaling in extended wireless networks with arbitrary node placement and propose a novel, more general cooperative communication scheme that works for arbitrarily placed nodes.
Abstract: In recent work, Ozgur, Leveque, and Tse (2007) obtained a complete characterization of throughput scaling for random extended wireless networks (i.e., n nodes are placed uniformly at random in a square region of area n). They showed that for small path-loss exponents α ∈ (2,3], cooperative communication is order optimal, and for large path-loss exponents α > 3, multihop communication is order optimal. However, their results (both the communication scheme and the proof technique) are strongly dependent on the regularity induced with high probability by the random node placement. In this paper, we consider the problem of characterizing the throughput scaling in extended wireless networks with arbitrary node placement. As a main result, we propose a novel, more general cooperative communication scheme that works for arbitrarily placed nodes. For small path-loss exponents α ∈ (2,3], we show that our scheme is order optimal for all node placements and achieves exactly the same throughput scaling as in the work of Ozgur et al. This shows that the regularity of the node placement does not affect the scaling of the achievable rates for α ∈ (2,3]. The situation is, however, markedly different for large path-loss exponents α > 3. We show that in this regime the scaling of the achievable per-node rates depends crucially on the regularity of the node placement. We then present a family of schemes that smoothly "interpolate" between multihop and cooperative communication, depending upon the level of regularity in the node placement. We establish the order optimality of these schemes under adversarial node placement for α > 3.