
Showing papers by "Bell Labs" published in 2008


Journal ArticleDOI
TL;DR: In this article, the authors address both the mathematical underpinnings of topological quantum computation and the physics of the subject, using the ν=5/2 fractional quantum Hall state as the archetype of a non-Abelian topological state enabling fault-tolerant quantum computation.
Abstract: Topological quantum computation has emerged as one of the most exciting approaches to constructing a fault-tolerant quantum computer. The proposal relies on the existence of topological states of matter whose quasiparticle excitations are neither bosons nor fermions, but are particles known as non-Abelian anyons, meaning that they obey non-Abelian braiding statistics. Quantum information is stored in states with multiple quasiparticles, which have a topological degeneracy. The unitary gate operations that are necessary for quantum computation are carried out by braiding quasiparticles and then measuring the multiquasiparticle states. The fault tolerance of a topological quantum computer arises from the nonlocal encoding of the quasiparticle states, which makes them immune to errors caused by local perturbations. To date, the only such topological states thought to have been found in nature are fractional quantum Hall states, most prominently the ν=5/2 state, although several other prospective candidates have been proposed in systems as disparate as ultracold atoms in optical lattices and thin-film superconductors. In this review article, current research in this field is described, focusing on the general theoretical concepts of non-Abelian statistics as it relates to topological quantum computation, on understanding non-Abelian quantum Hall states, on proposed experiments to detect non-Abelian anyons, and on proposed architectures for a topological quantum computer. Both the mathematical underpinnings of topological quantum computation and the physics of the subject are addressed, using the ν=5/2 fractional quantum Hall state as the archetype of a non-Abelian topological state enabling fault-tolerant quantum computation.

4,457 citations


Journal ArticleDOI
TL;DR: In this article, an infrared spectromicroscopy study of charge dynamics in graphene integrated in gated devices is presented, which reveals significant departures of the quasiparticle dynamics from predictions made for Dirac fermions in idealized, free-standing graphene.
Abstract: A remarkable manifestation of the quantum character of electrons in matter is offered by graphene, a single atomic layer of graphite. Unlike conventional solids where electrons are described with the Schrodinger equation, electronic excitations in graphene are governed by the Dirac hamiltonian. Some of the intriguing electronic properties of graphene, such as massless Dirac quasiparticles with linear energy-momentum dispersion, have been confirmed by recent observations. Here, we report an infrared spectromicroscopy study of charge dynamics in graphene integrated in gated devices. Our measurements verify the expected characteristics of graphene and, owing to the previously unattainable accuracy of infrared experiments, also uncover significant departures of the quasiparticle dynamics from predictions made for Dirac fermions in idealized, free-standing graphene. Several observations reported here indicate the relevance of many-body interactions to the electromagnetic response of graphene.

1,137 citations


Journal ArticleDOI
TL;DR: It is shown that many aspects of OWL have been thoroughly reengineered in OWL 2, thus producing a robust platform for future development of the language.

897 citations


Book ChapterDOI
07 Jul 2008
TL;DR: In this one-round protocol, XOR gates are evaluated "for free", which results in the corresponding improvement over the best garbled circuit implementations (e.g. Fairplay) and improves integer addition and equality testing by a factor of up to 2.
Abstract: We present a new garbled circuit construction for two-party secure function evaluation (SFE). In our one-round protocol, XOR gates are evaluated "for free", which results in the corresponding improvement over the best garbled circuit implementations (e.g. Fairplay [19]). We build permutation networks [26] and Universal Circuits (UC) [25] almost exclusively of XOR gates; this results in a factor of up to 4 improvement (in both computation and communication) of their SFE. We also improve integer addition and equality testing by a factor of up to 2. We rely on the Random Oracle (RO) assumption. Our constructions are proven secure in the semi-honest model.
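
To make the "free XOR" idea concrete, here is a minimal Python sketch (our illustration, not code from the paper): every wire carries two labels that differ by a fixed global offset R, so an XOR gate needs no garbled table and is evaluated by simply XORing the incoming labels. All names are hypothetical, and the handling of non-XOR gates (which still require encrypted tables) is omitted.

```python
# A minimal sketch of the free-XOR idea (simplified; real garbled-circuit
# implementations also garble non-XOR gates with a cipher).
import os

KEY_LEN = 16  # wire-label length in bytes

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Global offset R, secret to the circuit garbler.
R = os.urandom(KEY_LEN)

def new_wire():
    """Label pair (W0, W1) with the free-XOR invariant W1 = W0 xor R."""
    w0 = os.urandom(KEY_LEN)
    return (w0, xor_bytes(w0, R))

def garble_xor(wire_a, wire_b):
    """XOR gate: output zero-label is W_a0 xor W_b0; no garbled table needed."""
    c0 = xor_bytes(wire_a[0], wire_b[0])
    return (c0, xor_bytes(c0, R))

def eval_xor(label_a, label_b):
    """The evaluator just XORs the two active labels -- the 'free' evaluation."""
    return xor_bytes(label_a, label_b)

# Check: for every input combination, evaluation yields the correct output label.
a, b = new_wire(), new_wire()
c = garble_xor(a, b)
for bit_a in (0, 1):
    for bit_b in (0, 1):
        assert eval_xor(a[bit_a], b[bit_b]) == c[bit_a ^ bit_b]
```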

817 citations


Journal ArticleDOI
TL;DR: Evidence is provided that the ensemble size required for a successful particle filter scales exponentially with the problem size; in a simple independent-Gaussian example, simulations indicate that the required ensemble size grows exponentially with the state dimension.
Abstract: Particle filters are ensemble-based assimilation schemes that, unlike the ensemble Kalman filter, employ a fully nonlinear and non-Gaussian analysis step to compute the probability distribution function (pdf) of a system’s state conditioned on a set of observations. Evidence is provided that the ensemble size required for a successful particle filter scales exponentially with the problem size. For the simple example in which each component of the state vector is independent, Gaussian, and of unit variance and the observations are of each state component separately with independent, Gaussian errors, simulations indicate that the required ensemble size scales exponentially with the state dimension. In this example, the particle filter requires at least 10^11 members when applied to a 200-dimensional state. Asymptotic results, following the work of Bengtsson, Bickel, and collaborators, are provided for two cases: one in which each prior state component is independent and identically distributed, and ...
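
The weight-collapse phenomenon behind this exponential scaling is easy to reproduce numerically. The sketch below (our illustration of the paper's independent-Gaussian setup, not the authors' code) draws a prior ensemble, computes Gaussian likelihood weights for one analysis step, and prints the largest normalized weight, which approaches 1 as the dimension grows:

```python
# Toy demonstration of particle-filter weight collapse in high dimensions.
import numpy as np

rng = np.random.default_rng(0)

def max_weight(dim, n_particles=1000):
    """Largest normalized importance weight after one analysis step."""
    truth = rng.standard_normal(dim)                     # true state ~ N(0, I)
    obs = truth + rng.standard_normal(dim)               # unit-variance obs errors
    particles = rng.standard_normal((n_particles, dim))  # prior ensemble
    log_w = -0.5 * ((obs - particles) ** 2).sum(axis=1)  # Gaussian likelihood
    log_w -= log_w.max()                                 # stabilize the exponentials
    w = np.exp(log_w)
    w /= w.sum()
    return w.max()

for dim in (1, 10, 50, 100, 200):
    print(dim, max_weight(dim))  # max weight -> 1: a single particle dominates
```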

654 citations


Journal ArticleDOI
TL;DR: It is proposed that copper(II) oxides (containing Cu2+ ions) having large magnetic superexchange interactions can be good candidates for induced-multiferroics with high Curie temperature (TC), and ferroelectricity is demonstrated with TC=230 K in cupric oxide, CuO (tenorite), which is known as a starting material for the synthesis of high-Tc (critical temperature) superconductors.
Abstract: Induced multiferroics, where ferroelectricity arises through the magnetic order, have attracted significant interest, despite maximum Curie temperatures of only 40 K. The discovery of multiferroic coupling up to 230 K in CuO therefore represents a major advance towards high-TC multiferroics. Materials that combine coupled electric and magnetic dipole order are termed ‘magnetoelectric multiferroics’ [1,2,3,4]. In the past few years, a new class of such materials, ‘induced-multiferroics’, has been discovered [5,6], wherein non-collinear spiral magnetic order breaks inversion symmetry, thus inducing ferroelectricity [7,8,9]. Spiral magnetic order often arises from the existence of competing magnetic interactions that reduce the ordering temperature of a more conventional collinear phase [10]. Hence, spiral-phase-induced ferroelectricity tends to exist only at temperatures lower than ∼40 K. Here, we propose that copper(II) oxides (containing Cu2+ ions) having large magnetic superexchange interactions [11] can be good candidates for induced-multiferroics with high Curie temperature (TC). In fact, we demonstrate ferroelectricity with TC=230 K in cupric oxide, CuO (tenorite), which is known as a starting material for the synthesis of high-Tc (critical temperature) superconductors. Our result provides an important contribution to the search for high-temperature magnetoelectric multiferroics.

417 citations


Journal IssueDOI
TL;DR: A user-deployed femtocell solution based on the base station router (BSR) flat Internet Protocol (IP) cellular architecture is presented that addresses the scaling problems of the femtocell, and several aspects of the proposed solution are discussed.
Abstract: The femtocell concept aims to combine fixed-line broadband access with cellular telephony using the deployment of ultra-low-cost, low-power third generation (3G) base stations in the subscribers' homes or premises. It enables operators to address new markets and introduce new high-speed services and disruptive pricing strategies to capture wireline voice minutes and to grow revenues. One of the main design challenges of the femtocell is that the hierarchical architecture and manual cell planning processes used in macrocell networks do not scale to support millions of femtocells. In this paper, a user-deployed femtocell solution based on the base station router (BSR) flat Internet Protocol (IP) cellular architecture is presented that addresses these problems, and several aspects of the proposed solution are discussed. The overall concept and key requirements are presented in detail. The auto-configuration and self-optimization process from purchase by the end user to the integration into an existing macrocellular network is described. Then the theoretical performance of a co-channel femtocell deployment is analyzed and its impact on the macrocell underlay is assessed. Finally, a financial analysis of a femtocellular home base station deployment in a macrocellular network is presented. It is shown that in urban areas, the deployment of publicly accessible home base stations with slightly increased coverage can significantly reduce the operator's annual network costs (up to 70 percent in the investigated scenario) compared to a pure macrocellular network.

393 citations


Proceedings ArticleDOI
09 Jun 2008
TL;DR: This is the first work to compute graph summaries using the MDL principle, and use the summaries (along with corrections) to compress graphs with bounded error.
Abstract: We propose a highly compact two-part representation of a given graph G consisting of a graph summary and a set of corrections. The graph summary is an aggregate graph in which each node corresponds to a set of nodes in G, and each edge represents the edges between all pairs of nodes in the two sets. On the other hand, the corrections portion specifies the list of edge-corrections that should be applied to the summary to recreate G. Our representations allow for both lossless and lossy graph compression with bounds on the introduced error. Further, in combination with the MDL principle, they yield highly intuitive coarse-level summaries of the input graph G. We develop algorithms to construct highly compressed graph representations with small sizes and guaranteed accuracy, and validate our approach through an extensive set of experiments with multiple real-life graph data sets. To the best of our knowledge, this is the first work to compute graph summaries using the MDL principle, and use the summaries (along with corrections) to compress graphs with bounded error.
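
The flavor of the two-part representation can be shown with a toy sketch (our simplification, not the paper's MDL algorithm): given a fixed grouping of nodes, a summary edge between two groups stands for all cross pairs, and '+'/'-' corrections patch the differences. Intra-group edges are ignored for brevity.

```python
# Toy two-part graph representation: summary edges plus edge corrections.
from itertools import combinations, product

def summarize(edges, groups):
    """groups: list of node sets partitioning the graph's vertices."""
    eset = {frozenset(e) for e in edges}
    summary, corrections = set(), []
    for ga, gb in combinations(range(len(groups)), 2):
        pairs = [frozenset(p) for p in product(groups[ga], groups[gb])]
        present = [p for p in pairs if p in eset]
        # Majority rule: add a summary edge if it shortens the correction list.
        if len(present) > len(pairs) - len(present):
            summary.add((ga, gb))
            corrections += [('-', p) for p in pairs if p not in eset]
        else:
            corrections += [('+', p) for p in present]
    return summary, corrections

edges = [(0, 2), (0, 3), (1, 2), (1, 3), (2, 4)]
summary, corr = summarize(edges, [{0, 1}, {2, 3}, {4}])
print(summary, corr)  # one summary edge covers the 4-edge biclique {0,1}x{2,3}
```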

352 citations


Proceedings ArticleDOI
24 Apr 2008
TL;DR: It is shown that the proposed mobility event based self-optimization can significantly outperform simpler methods that aim to achieve a constant cell radius.
Abstract: In femtocell deployments, leakage of the pilot signal to the outside of a house can result in a highly increased signalling load to the core network as a result of the higher number of mobility events caused by passing users. This effect can be minimized by reducing the pilot power. However this approach could lead to insufficient indoor coverage, which also causes an increase in mobility events. In this paper, coverage adaptation is proposed for femtocell deployments that uses information on mobility events of passing and indoor users to optimize the femtocell coverage in order to minimize the increase in core network mobility signalling. Different coverage adaptation methods are discussed and it is shown that the proposed mobility event based self-optimization can significantly outperform simpler methods that aim to achieve a constant cell radius.

299 citations


Book
Gerhard Kramer
25 Jun 2008
TL;DR: This survey builds up knowledge on random coding, binning, superposition coding, and capacity converses by introducing progressively more sophisticated tools for a selection of source and channel models.
Abstract: This survey reviews fundamental concepts of multi-user information theory. Starting with typical sequences, the survey builds up knowledge on random coding, binning, superposition coding, and capacity converses by introducing progressively more sophisticated tools for a selection of source and channel models. The problems addressed include: Source Coding; Rate-Distortion and Multiple Descriptions; Capacity-Cost; The Slepian–Wolf Problem; The Wyner-Ziv Problem; The Gelfand-Pinsker Problem; The Broadcast Channel; The Multiaccess Channel; The Relay Channel; The Multiple Relay Channel; and The Multiaccess Channel with Generalized Feedback. The survey also includes a review of basic probability and information theory.

290 citations


Journal ArticleDOI
TL;DR: This work designed a microfluidic device to reliably change the environment of single cells over a range of frequencies and measured the bandwidth of the Saccharomyces cerevisiae signaling pathway that responds to high osmolarity, finding that the two-component Ssk1 branch of this pathway is capable of fast signal integration, whereas the kinase Ste11 branch is not.
Abstract: Signaling pathways relay information about changes in the external environment so that cells can respond appropriately. How much information a pathway can carry depends on its bandwidth. We designed a microfluidic device to reliably change the environment of single cells over a range of frequencies. Using this device, we measured the bandwidth of the Saccharomyces cerevisiae signaling pathway that responds to high osmolarity. This prototypical pathway, the HOG pathway, is shown to act as a low-pass filter, integrating the signal when it changes rapidly and following it faithfully when it changes more slowly. We study the dependence of the pathway's bandwidth on its architecture. We measure previously unknown bounds on all of the in vivo reaction rates acting in this pathway. We find that the two-component Ssk1 branch of this pathway is capable of fast signal integration, whereas the kinase Ste11 branch is not. Our experimental techniques can be applied to other signaling pathways, allowing the measurement of their in vivo kinetics and the quantification of their information capacity.
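
The low-pass behavior reported for the pathway can be illustrated with a first-order filter analogy (our sketch, not the paper's fitted model): a system with time constant tau follows slow square-wave inputs faithfully but averages out fast ones, with the swing of its response shrinking above the cutoff frequency.

```python
# First-order low-pass analogy for a signaling pathway's frequency response.
import numpy as np

dt, tau = 0.01, 1.0                         # time step; pathway time constant

def response(freq, t_end=50.0):
    t = np.arange(0, t_end, dt)
    u = (np.sin(2 * np.pi * freq * t) > 0).astype(float)  # square-wave input
    y = np.zeros_like(t)
    for i in range(1, len(t)):
        y[i] = y[i - 1] + dt / tau * (u[i] - y[i - 1])     # dy/dt = (u - y)/tau
    return y[len(t) // 2:]                  # discard the initial transient

for f in (0.01, 0.1, 1.0, 10.0):
    y = response(f)
    print(f, y.max() - y.min())  # swing collapses above the cutoff ~1/(2*pi*tau)
```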

Journal ArticleDOI
TL;DR: It is shown that in the context of noise reduction the squared PCC has many appealing properties and can be used as an optimization cost function to derive many optimal and suboptimal noise-reduction filters.
Abstract: Noise reduction, which aims at estimating a clean speech from noisy observations, has attracted a considerable amount of research and engineering attention over the past few decades. In the single-channel scenario, an estimate of the clean speech can be obtained by passing the noisy signal picked up by the microphone through a linear filter/transformation. The core issue, then, is how to find an optimal filter/transformation such that, after the filtering process, the signal-to-noise ratio (SNR) is improved but the desired speech signal is not noticeably distorted. Most of the existing optimal filters (such as the Wiener filter and subspace transformation) are formulated from the mean-square error (MSE) criterion. However, with the MSE formulation, many desired properties of the optimal noise-reduction filters such as the SNR behavior cannot be seen. In this paper, we present a new criterion based on the Pearson correlation coefficient (PCC). We show that in the context of noise reduction the squared PCC (SPCC) has many appealing properties and can be used as an optimization cost function to derive many optimal and suboptimal noise-reduction filters. The clear advantage of using the SPCC over the MSE is that the noise-reduction performance (in terms of the SNR improvement and speech distortion) of the resulting optimal filters can be easily analyzed. This shows that, as far as noise reduction is concerned, the SPCC-based cost function serves as a more natural criterion to optimize as compared to the MSE.
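
As a small numerical illustration of the criterion (our sketch, not the paper's derivation), the SPCC between a clean signal and a processed observation can be computed directly. Note that a pure gain cannot change the SPCC, whereas a smoothing filter that suppresses noise while preserving the slowly varying signal raises it.

```python
# Squared Pearson correlation coefficient (SPCC) as a noise-reduction metric.
import numpy as np

rng = np.random.default_rng(1)

def spcc(x, z):
    """Squared Pearson correlation coefficient between signals x and z."""
    x = x - x.mean()
    z = z - z.mean()
    return (x @ z) ** 2 / ((x @ x) * (z @ z))

n = 10_000
clean = np.sin(0.05 * np.arange(n))               # stand-in for clean speech
noisy = clean + 0.8 * rng.standard_normal(n)      # microphone observation

# A 9-tap moving average: suppresses white noise, barely distorts the signal.
smoothed = np.convolve(noisy, np.ones(9) / 9, mode='same')

print(spcc(clean, noisy))      # SPCC before filtering
print(spcc(clean, smoothed))   # higher SPCC after noise reduction
print(spcc(clean, 2 * noisy))  # a pure gain leaves the SPCC unchanged
```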

Journal ArticleDOI
Xiang Liu, Fred Buchali
TL;DR: An efficient channel estimation method for coherent optical OFDM (CO-OFDM) based on intra-symbol frequency-domain averaging (ISFA) and the subsequent channel compensation are found to be highly robust against transmission impairments in typical optical transport systems.
Abstract: We present an efficient channel estimation method for coherent optical OFDM (CO-OFDM) based on intra-symbol frequency-domain averaging (ISFA), and systematically study its robustness against transmission impairments such as optical noise, chromatic dispersion (CD), polarization-mode dispersion (PMD), polarization-dependent loss (PDL), and fiber nonlinearity. Numerical simulations are performed for a 112-Gb/s polarization-division multiplexed (PDM) CO-OFDM signal, and the ISFA-based channel estimation and the subsequent channel compensation are found to be highly robust against these transmission impairments in typical optical transport systems.
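
The ISFA idea reduces to averaging per-subcarrier least-squares channel estimates over a window of adjacent subcarriers, exploiting the channel's smoothness in frequency. Below is a simplified single-polarization sketch (our illustration with hypothetical parameters, not the authors' simulation setup):

```python
# ISFA-style channel estimation: average LS estimates across subcarriers.
import numpy as np

rng = np.random.default_rng(2)
n_sc, m = 128, 2                      # subcarriers; half-window for averaging

# Smooth channel frequency response from a short (4-tap) impulse response.
h_time = rng.standard_normal(4) + 1j * rng.standard_normal(4)
H = np.fft.fft(h_time, n_sc)

x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), n_sc)  # QPSK pilots
noise = 0.3 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
y = H * x + noise

H_ls = y / x                                     # per-subcarrier LS estimate
kernel = np.ones(2 * m + 1) / (2 * m + 1)
H_isfa = np.convolve(H_ls, kernel, mode='same')  # average 2m+1 adjacent subcarriers

mse = lambda a, b: np.mean(np.abs(a - b) ** 2)
print(mse(H_ls, H), mse(H_isfa, H))              # ISFA cuts the estimation MSE
```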

Journal ArticleDOI
TL;DR: A conservative estimate of the "fiber channel" capacity in an optically routed network is presented and it is shown that the fiber capacity per unit bandwidth for a given distance significantly exceeds current record experimental demonstrations.
Abstract: The instantaneous optical Kerr effect in optical fibers is a nonlinear phenomenon that can impose limits on the ability of fiber-optic communication systems to transport information. We present here a conservative estimate of the "fiber channel" capacity in an optically routed network. We show that the fiber capacity per unit bandwidth for a given distance significantly exceeds current record experimental demonstrations.

Proceedings ArticleDOI
13 Apr 2008
TL;DR: An algorithm for sub-carrier and power allocation that achieves out-of-cell interference avoidance through dynamic fractional frequency reuse (FFR) in downlink of cellular systems based on orthogonal frequency division multiple access (OFDMA).
Abstract: We describe an algorithm for sub-carrier and power allocation that achieves out-of-cell interference avoidance through dynamic fractional frequency reuse (FFR) in the downlink of cellular systems based on orthogonal frequency division multiple access (OFDMA). The focus is on constant-bit-rate (CBR) traffic flows (e.g., VoIP). Our approach is based on the continuous "selfish" optimization of resource allocation by each sector. No a priori frequency planning and/or inter-cell coordination is required. We show, both analytically (on a simple illustrative example) and by simulations (of a more realistic system), that the algorithm leads the system to "self-organize" into efficient frequency reuse patterns.
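
The self-organization effect can be caricatured in a few lines (our toy, not the paper's algorithm): if each of three mutually interfering sectors repeatedly grabs the least-interfered sub-carriers it needs, the allocations typically settle into a disjoint reuse pattern with no coordination at all.

```python
# Selfish best-response sub-carrier selection self-organizing into reuse.
import numpy as np

rng = np.random.default_rng(7)
n_sec, n_sc, need = 3, 6, 2
alloc = {s: set(rng.choice(n_sc, need, replace=False)) for s in range(n_sec)}

for _ in range(10):                      # rounds of selfish re-optimization
    for s in range(n_sec):
        # Interference seen on each sub-carrier from the other sectors.
        interference = [sum(sc in alloc[o] for o in range(n_sec) if o != s)
                        for sc in range(n_sc)]
        alloc[s] = set(np.argsort(interference, kind='stable')[:need])

print(alloc)  # typically disjoint: an emergent frequency-reuse pattern
```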

Proceedings ArticleDOI
13 Apr 2008
TL;DR: These algorithms are the first approximation algorithms in the literature with a tight worst-case guarantee for the NP-hard problem and can obtain an aggregate throughput which can be as much as 2.3 times more than that of the max-min fair allocation in 802.11b.
Abstract: In multi-rate wireless LANs, throughput-based fair bandwidth allocation can lead to drastically reduced aggregate throughput. To balance aggregate throughput while serving users in a fair manner, proportional fair or time-based fair scheduling has been proposed to apply at each access point (AP). However, since a realistic deployment of wireless LANs can consist of a network of APs, this paper considers proportional fairness in this much wider setting. Our technique is to intelligently associate users with APs to achieve optimal proportional fairness in a network of APs. We propose two approximation algorithms for periodical offline optimization. Our algorithms are the first approximation algorithms in the literature with a tight worst-case guarantee for the NP-hard problem. Our simulation results demonstrate that our algorithms can obtain an aggregate throughput which can be as much as 2.3 times more than that of the max-min fair allocation in 802.11b. While maintaining aggregate throughput, our approximation algorithms outperform the default user-AP association method in the 802.11b standard significantly in terms of fairness.

Journal ArticleDOI
TL;DR: It is shown that any point set has an ε-core-set of size ⌈1/ε⌉, and this bound is tight in the worst case.
Abstract: Given a set of points P ⊆ R^d and a value ε > 0, an ε-core-set S ⊆ P has the property that the smallest ball containing S has radius within 1+ε of the radius of the smallest ball containing P. This paper shows that any point set has an ε-core-set of size ⌈1/ε⌉, and this bound is tight in the worst case. Some experimental results are also given, comparing this algorithm with a previous one, and with a more powerful, but slower one.
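
A simple farthest-point iteration makes the core-set idea concrete. The sketch below is our illustration, not the paper's algorithm: the paper's analysis concerns the tighter ⌈1/ε⌉ bound, while this simple variant is only guaranteed after O(1/ε²) steps. It maintains an approximate center and collects the farthest points it visits.

```python
# Farthest-point iteration for an approximate minimum enclosing ball.
import numpy as np

rng = np.random.default_rng(3)

def approx_meb(points, eps):
    """(1+eps)-approximate minimum enclosing ball via farthest-point updates."""
    c = points[0].copy()
    coreset = {0}
    for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = int(np.argmax(np.linalg.norm(points - c, axis=1)))
        coreset.add(far)
        c += (points[far] - c) / (i + 1)   # step toward the farthest point
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius, coreset

pts = rng.standard_normal((2000, 5))
c, r, s = approx_meb(pts, eps=0.1)
print(r, len(s))  # ball B(c, r) covers all points; few distinct points visited
```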

Book
08 Jan 2008
TL;DR: This tutorial deals with wireless and content distribution networks, considered to be the most likely applications of network coding, and it also reviews emerging applications ofnetwork coding such as network monitoring and management.
Abstract: Network coding is an elegant and novel technique introduced at the turn of the millennium to improve network throughput and performance. It is expected to be a critical technology for networks of the future. This tutorial deals with wireless and content distribution networks, considered to be the most likely applications of network coding, and it also reviews emerging applications of network coding such as network monitoring and management. Multiple unicasts, security, networks with unreliable links, and quantum networks are also addressed. The preceding companion deals with theoretical foundations of network coding.
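
The classic butterfly network illustrates the core idea in a few lines (our illustration; this example is folklore in the network coding literature): coding the two bits on the shared bottleneck link lets both sinks recover both bits, which plain routing cannot do at the same rate.

```python
# Butterfly network, the textbook example: sources s1, s2 each produce one
# bit; the single bottleneck link carries their XOR instead of forwarding
# one bit and starving the other sink.
b1, b2 = 1, 0
coded = b1 ^ b2            # transmitted on the shared bottleneck link
# Sink 1 hears b1 directly plus the coded bit; sink 2 hears b2 plus coded.
assert coded ^ b1 == b2    # sink 1 recovers b2
assert coded ^ b2 == b1    # sink 2 recovers b1
```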

Journal IssueDOI
TL;DR: The literature on social networking sites is examined and studies of how students on American college campuses engage in social networking are conducted to find out how these sites affect individual relationships.
Abstract: Social networks and the need to communicate are universal human conditions. A general assumption is that communication technologies help to increase and strengthen social ties. The Internet provides many social networking opportunities. But how do social networking sites affect individual relationships? Do people use social networking sites to expand their personal networks, to find people who have had similar experiences, to discuss a common hobby, for the potential of offline dating? Or, do people spend time on networking sites to deepen their existing personal networks and stay connected to old friends or distant family? What is the nature of the communications that transpire on social networking sites? Is it personal, emotional, private, and important; or trivial, informal, and public? We examined the literature on social networking sites and conducted our own studies of how students on American college campuses engage in social networking.

Journal ArticleDOI
TL;DR: The problem of a nomadic terminal sending information to a remote destination via agents with lossless connections to the destination is investigated and the Gaussian codebook capacity is characterized for the deterministic channel.
Abstract: The problem of a nomadic terminal sending information to a remote destination via agents with lossless connections to the destination is investigated. Such a setting suits, e.g., access points of a wireless network where each access point is connected by a wire to a wireline-based network. The Gaussian codebook capacity for the case where the agents do not have any decoding ability is characterized for the Gaussian channel. This restriction is demonstrated to be severe, and allowing the nomadic transmitter to use other signaling improves the rate. For both general and degraded discrete memoryless channels, lower and upper bounds on the capacity are derived. An achievable rate with unrestricted agents, which are capable of decoding, is also given and then used to characterize the capacity for the deterministic channel.

Proceedings ArticleDOI
13 Apr 2008
TL;DR: A wireless greedy primal dual algorithm for combined congestion control and scheduling is presented that aims to solve the problem of jointly performing scheduling and congestion control in mobile ad-hoc networks, and is shown to significantly outperform standard protocols such as 802.11 operating in conjunction with TCP.
Abstract: In this paper we study the problem of jointly performing scheduling and congestion control in mobile ad-hoc networks so that network queues remain bounded and the resulting flow rates satisfy an associated network utility maximization problem. In recent years a number of papers have presented theoretical solutions to this problem that are based on combining differential-backlog scheduling algorithms with utility-based congestion control. However, this work typically does not address a number of issues such as how signaling should be performed and how the new algorithms interact with other wireless protocols. In this paper we address such issues. In particular:
- We define a specific network utility maximization problem that we believe is appropriate for mobile ad-hoc networks.
- We describe a wireless greedy primal dual (wGPD) algorithm for combined congestion control and scheduling that aims to solve this problem.
- We show how the wGPD algorithm and its associated signaling can be implemented in practice with minimal disruption to existing wireless protocols.
- We show via OPNET simulation that wGPD significantly outperforms standard protocols such as 802.11 operating in conjunction with TCP.
This work was supported by the DARPA CBMANET program.
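
The differential-backlog principle underlying algorithms of this kind is compact enough to sketch (our toy, generic backpressure plus log-utility congestion control, not the wGPD protocol itself): scheduling favors the link whose queue-length difference times rate is largest, while the source admits traffic at a rate inversely proportional to its backlog.

```python
# Toy joint congestion control and backpressure scheduling on s -> m -> d.
import numpy as np

rng = np.random.default_rng(4)
Q = {'s': 0.0, 'm': 0.0}                 # per-node backlogs toward destination d
links = [('s', 'm'), ('m', 'd')]         # interfering: one transmits per slot
beta = 0.1                               # congestion-control aggressiveness

for t in range(5000):
    # Congestion control: for U(x) = log(x), admit x = beta / Q_s (capped).
    admit = min(1.0, beta / max(Q['s'], 1e-6))
    Q['s'] += admit

    # Scheduling: activate the link with the largest differential backlog.
    rate = {('s', 'm'): rng.uniform(0, 1), ('m', 'd'): rng.uniform(0, 1)}
    def weight(link):
        a, b = link
        back_b = Q[b] if b in Q else 0.0   # the destination holds no backlog
        return (Q[a] - back_b) * rate[link]
    a, b = max(links, key=weight)
    served = min(Q[a], rate[(a, b)])
    Q[a] -= served
    if b in Q:
        Q[b] += served

print(Q)  # backlogs stay bounded while the source rate adapts
```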

Journal ArticleDOI
TL;DR: Inner and outer bounds are established on the capacity region of two-sender, two-receiver interference channels where one transmitter knows both messages and the transmitter with this extra message knowledge is referred to as being cognitive.
Abstract: Inner and outer bounds are established on the capacity region of two-sender, two-receiver interference channels (IC) where one transmitter knows both messages. The transmitter with this extra message knowledge is referred to as being cognitive. The inner bound is based on strategies that generalise prior work, and includes rate-splitting, Gel’fand–Pinsker (GP) coding and cooperative transmission. A general outer bound is based on the Nair–El Gamal outer bound for broadcast channels (BCs). A simpler bound is presented for the case in which one of the decoders can decode both messages. The bounds are evaluated and compared for Gaussian channels.

Journal ArticleDOI
TL;DR: This work studies the performance of a large dense network with one mobile relay and shows that network lifetime improves over that of a purely static network by up to a factor of four and constructs a joint mobility and routing algorithm which can yield a network lifetime close to the upper bound.
Abstract: We investigate the benefits of a heterogeneous architecture for wireless sensor networks (WSNs) composed of a few resource rich mobile relay nodes and a large number of simple static nodes. The mobile relays have more energy than the static sensors. They can dynamically move around the network and help relieve sensors that are heavily burdened by high network traffic, thus extending the latter's lifetime. We first study the performance of a large dense network with one mobile relay and show that network lifetime improves over that of a purely static network by up to a factor of four. Also, the mobile relay needs to stay only within a two-hop radius of the sink. We then construct a joint mobility and routing algorithm which can yield a network lifetime close to the upper bound. The advantage of this algorithm is that it only requires a limited number of nodes in the network to be aware of the location of the mobile relay. Our simulation results show that one mobile relay can at least double the network lifetime in a randomly deployed WSN. By comparing the mobile relay approach with various static energy-provisioning methods, we demonstrate the importance of node mobility for resource provisioning in a WSN.

Journal ArticleDOI
TL;DR: In this article, the properties of single-layer graphene are investigated, demonstrating the existence of unusual charge carriers, analogous to massless, chiral Dirac particles having a Berry's phase of π and resulting in a new, half-integer quantum Hall effect.
Abstract: Experiments into the properties of single layer graphene have demonstrated the existence of unusual charge carriers, analogous to massless, chiral Dirac particles having a Berry's phase of π and resulting in a new, half-integer quantum Hall effect (1,2). Bilayer graphene has shown equally exciting properties, creating a system of massive chiral particles with a Berry's phase of 2π and exhibiting another, distinct quantum Hall effect (3). Both phenomena derive from the unusual energy dispersion of graphene leading to the presence of a distinctive Landau level (LL) at zero energy (4-7). The peculiar dispersion relations of graphene create unique LL spectra in the presence of a magnetic field, B. In single layer graphene, the linear dispersion leads to distinctive ...

Journal ArticleDOI
TL;DR: An analytic characterization of the achievable throughput in the case of many users is provided and it is shown how additional receive antennas or higher multiuser diversity can reduce the required feedback rate to achieve a target throughput.
Abstract: We consider a MIMO broadcast channel where both the transmitter and receivers are equipped with multiple antennas. Channel state information at the transmitter (CSIT) is obtained through limited (i.e., finite-bandwidth) feedback from the receivers that index a set of precoding vectors contained in a predefined codebook. We propose a novel transceiver architecture based on zero-forcing beamforming and linear receiver combining. The receiver combining and quantization for CSIT feedback are jointly designed in order to maximize the expected SINR for each user. We provide an analytic characterization of the achievable throughput in the case of many users and show how additional receive antennas or higher multiuser diversity can reduce the required feedback rate to achieve a target throughput. We also propose a design methodology for generating codebooks tailored for arbitrary spatial correlation statistics. The resulting codebooks have a tree structure that can be utilized in time-correlated MIMO channels to significantly reduce feedback overhead. Simulation results show the effectiveness of the overall transceiver design strategy and codebook design methodology compared to prior techniques in a variety of correlation environments.
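
A stripped-down version of the limited-feedback pipeline (our sketch with a random codebook and plain zero-forcing, not the paper's jointly designed quantizer and combiner) shows where the finite-rate feedback enters: users quantize their channel directions to codewords, and the residual quantization error reappears as multiuser interference.

```python
# Limited-feedback zero-forcing beamforming with a random codebook.
import numpy as np

rng = np.random.default_rng(5)
n_tx, n_users, bits = 4, 4, 6
codebook = (rng.standard_normal((2 ** bits, n_tx))
            + 1j * rng.standard_normal((2 ** bits, n_tx)))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

H = rng.standard_normal((n_users, n_tx)) + 1j * rng.standard_normal((n_users, n_tx))

# Each user feeds back the index of the codeword closest to its channel direction.
dirs = H / np.linalg.norm(H, axis=1, keepdims=True)
idx = np.argmax(np.abs(dirs.conj() @ codebook.T), axis=1)
H_hat = codebook[idx]

# Transmitter: zero-forcing beamformers computed on the quantized channels.
W = np.linalg.pinv(H_hat)                     # columns are beam directions
W /= np.linalg.norm(W, axis=0, keepdims=True)

G = np.abs(H @ W) ** 2                        # actual channel gains
signal = np.diag(G)
interf = G.sum(axis=1) - signal               # residual from quantization error
print(signal / (interf + 1.0))                # per-user SINR at unit noise power
```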

Proceedings ArticleDOI
13 Apr 2008
TL;DR: This work models the multicast resource allocation problem in WiMAX and demonstrates it to be NP-hard, and presents a fast greedy algorithm that is provably within a constant approximation of the optimal solution and performs within 87-95% of the ideal solution, as demonstrated by realistic simulations.
Abstract: IEEE 802.16e WiMAX is a promising new technology for broadband access networks. Amongst the class of applications that can be supported is real time video services (such as IPTV, broadcast of live events etc.). These applications are bandwidth hungry and have stringent delay constraints. Thus, scalable support for such applications is a challenging problem. To address this challenge, we consider a combination of approaches using multicast, layer encoded video and adaptive modulation of transmissions. Using these, we develop algorithms to ensure efficient, fair and timely delivery of video in WiMAX networks. The corresponding resource allocation problem is challenging because scheduling decisions (within a WiMAX base station) are performed in real-time across two dimensions, time and frequency. Moreover, combining layered video with appropriate modulation calls for novel MAC algorithms. We model the multicast resource allocation problem in WiMAX and demonstrate this problem to be NP-hard. We present a fast greedy algorithm that (i) is provably within a constant approximation of the optimal solution (based on a metric that reflects video quality as perceived by the user), and (ii) performs within 87-95% of the optimal as demonstrated by realistic simulations. We also demonstrate that our algorithm offers a 25% improvement over a naive algorithm. Moreover, in terms of the average rate received by each user, our algorithm outperforms the naive algorithm by more than 50%.
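
The greedy flavor of such an allocation can be sketched abstractly (our toy with made-up numbers, not the paper's algorithm or its approximation guarantee): each session's video layers must be granted in order, and the scheduler repeatedly funds the layer with the best utility-per-slot ratio until the frame budget is exhausted.

```python
# Greedy allocation of frame slots to layered multicast video sessions.
sessions = {
    # session: list of (layer_utility, slots_needed), base layer first
    'A': [(10, 3), (4, 2), (2, 2)],
    'B': [(8, 2), (5, 4)],
    'C': [(6, 1), (3, 3)],
}
budget = 10                                  # free slots in the frame
progress = {s: 0 for s in sessions}          # next undecided layer per session

while True:
    best, best_ratio = None, 0.0
    for s, layers in sessions.items():
        i = progress[s]
        if i < len(layers) and layers[i][1] <= budget:
            ratio = layers[i][0] / layers[i][1]   # utility per slot
            if ratio > best_ratio:
                best, best_ratio = s, ratio
    if best is None:
        break                                # nothing else fits in the budget
    util, slots = sessions[best][progress[best]]
    budget -= slots
    progress[best] += 1

print(progress, budget)  # layers granted per session and leftover slots
```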

Book ChapterDOI
01 May 2008
TL;DR: A new simple and efficient UC construction that appears to be the best fit for many practical PF-SFE and results in corresponding performance improvement of SFE of (small) private functions.
Abstract: We consider general secure function evaluation (SFE) of private functions (PF-SFE). Recall, privacy of functions is often most efficiently achieved by general SFE [18,19,10] of a Universal Circuit (UC). Our main contribution is a new simple and efficient UC construction. Our circuit UC_k, universal for circuits of k gates, has size ~1.5 k log^2 k and depth ~k log k. It is up to 50% smaller than the best UC (of Valiant [16], of size ~19 k log k) for circuits of size up to ≈ 5000 gates. Our improvement results in a corresponding performance improvement of SFE of (small) private functions. Since, due to cost, only small circuits (i.e. < 5000 gates) are practical for PF-SFE, our construction appears to be the best fit for many practical PF-SFE. We implement PF-SFE based on our UC and the Fairplay SFE system [11].

Proceedings ArticleDOI
08 Dec 2008
TL;DR: The benefits of a joint macro- and picocell deployment will increase further as both technologies mature, and will result in a significant reduction of the network energy consumption as the user demand for high data rates increases.
Abstract: Rising energy costs and the recent international focus on climate change issues has resulted in a high interest in improving the energy efficiency in the telecommunications industry. In this paper the effects of a joint deployment of macrocells for area coverage and publicly accessible residential picocells on the total energy consumption of the network is investigated. The decreasing energy efficiency of today's macrocellular technologies with increasing user demand for high data rates is discussed. It is shown that a joint deployment of macro- and publicly accessible residential picocells can reduce the total energy consumption by up to 60% in an urban area with today's technology. Furthermore the impact of future technologies for both macro- and femtocells on the energy efficiency is investigated. It is shown that the benefits of a joint macro- and picocell deployment will increase further as both technologies mature, and will result in a significant reduction of the network energy consumption as the user demand for high data rates increases.

Proceedings ArticleDOI
24 Oct 2008
TL;DR: A novel technique for radio transmitter identification based on frequency domain characteristics that is the first to propose the use of discriminatory classifiers based on steady state spectral features and achieves 97% accuracy in laboratory experiments.
Abstract: We present a novel technique for radio transmitter identification based on frequency domain characteristics. Our technique detects the unique features imbued in a signal as it passes through a transmit chain. We are the first to propose the use of discriminatory classifiers based on steady state spectral features. In laboratory experiments, we achieve 97% accuracy at 30 dB SNR and 66% accuracy at 0 dB SNR based on eight identical universal software radio peripherals (USRP) transmitters. Our technique can be implemented using today's low-cost, high-volume receivers and requires no manual performance tuning.
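
The approach can be caricatured end-to-end on synthetic data (our sketch; the paper's features, classifier, and 97%/66% figures come from real USRP hardware): give each "transmitter" a slightly different nonlinearity in its transmit chain, use the magnitude spectrum as the feature vector, and classify by nearest centroid.

```python
# Toy transmitter fingerprinting from steady-state spectral features.
import numpy as np

rng = np.random.default_rng(6)
n = 1024

def transmit(dev_gain2, dev_gain3, snr_db=30):
    """The same tone through a device-specific, mildly nonlinear transmit chain."""
    t = np.arange(n)
    x = np.sin(2 * np.pi * 0.05 * t)
    y = x + dev_gain2 * x ** 2 + dev_gain3 * x ** 3      # device nonlinearity
    y += 10 ** (-snr_db / 20) * rng.standard_normal(n)   # receiver noise
    return np.abs(np.fft.rfft(y))                        # spectral features

# Three hypothetical devices with slightly different harmonic distortion.
devices = [(0.01, 0.002), (0.03, 0.006), (0.02, 0.012)]
train = {i: np.mean([transmit(*d) for _ in range(20)], axis=0)
         for i, d in enumerate(devices)}                 # per-device centroids

correct = 0
for i, d in enumerate(devices):
    for _ in range(50):
        f = transmit(*d)
        guess = min(train, key=lambda k: np.linalg.norm(f - train[k]))
        correct += (guess == i)
print(correct / 150)  # classification accuracy on fresh transmissions
```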

Posted Content
TL;DR: In this article, the authors present a method of analyzing a series of independent cross-sectional surveys in which some questions are not answered in some surveys and some respondents do not answer some of the questions posed.
Abstract: We present a method of analyzing a series of independent cross-sectional surveys in which some questions are not answered in some surveys and some respondents do not answer some of the questions posed. The method is also applicable to a single survey in which different questions are asked or different sampling methods are used in different strata or clusters. Our method involves multiply imputing the missing items and questions by adding to existing methods of imputation designed for single surveys a hierarchical regression model that allows covariates at the individual and survey levels. Information from survey weights is exploited by including in the analysis the variables on which the weights are based, and then reweighting individual responses (observed and imputed) to estimate population quantities. We also develop diagnostics for checking the fit of the imputation model based on comparing imputed data to nonimputed data. We illustrate with the example that motivated this project: a study of pre-election public opinion polls in which not all the questions of interest are asked in all the surveys, so that it is infeasible to impute within each survey separately.