
Showing papers by "David Tse published in 2012"


Journal ArticleDOI
TL;DR: In this paper, the authors show that in a MIMO broadcast channel with K transmit antennas and K receivers, each with one receive antenna, K/(1 + 1/2 + ··· + 1/K) (> 1) degrees of freedom are achievable even when the fed back channel state is completely independent of the current channel state.
Abstract: Transmitter channel state information (CSIT) is crucial for the multiplexing gains offered by advanced interference management techniques such as multiuser multiple-input multiple-output (MIMO) and interference alignment. Such CSIT is usually obtained by feedback from the receivers, but the feedback is subject to delays. The usual approach is to use the fed back information to predict the current channel state and then apply a scheme designed assuming perfect CSIT. When the feedback delay is large compared to the channel coherence time, such a prediction approach completely fails to achieve any multiplexing gain. In this paper, we show that even in this case, the completely stale CSI is still very useful. More concretely, we show that in a MIMO broadcast channel with K transmit antennas and K receivers, each with one receive antenna, K/(1 + 1/2 + ··· + 1/K) (> 1) degrees of freedom are achievable even when the fed back channel state is completely independent of the current channel state. Moreover, we establish that if all receivers have independent and identically distributed channels, then this is the optimal number of degrees of freedom achievable. In the optimal scheme, the transmitter uses the fed back CSI to learn the side information that the receivers receive from previous transmissions rather than to predict the current channel state. Our result can be viewed as the first example of feedback providing a degree-of-freedom gain in memoryless channels.
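As a quick numerical check on the formula above, the achievable sum degrees of freedom K/(1 + 1/2 + ··· + 1/K) can be evaluated exactly; the helper name `mat_dof` is ours, not the paper's:

```python
from fractions import Fraction

def mat_dof(K):
    """Sum degrees of freedom K / (1 + 1/2 + ... + 1/K) achievable with
    completely stale CSIT, per the abstract.  Exact rational arithmetic
    via Fraction avoids floating-point error in the harmonic sum."""
    harmonic = sum(Fraction(1, k) for k in range(1, K + 1))
    return K / harmonic

for K in (2, 3, 10):
    print(K, float(mat_dof(K)))
```

For K = 2 this gives 4/3 ≈ 1.33, and the value exceeds 1 for every K ≥ 2, consistent with the "(> 1)" claim in the abstract.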

525 citations


Journal ArticleDOI
TL;DR: In this paper, an adaptive SC-List decoder for polar codes with CRC was proposed, which iteratively increases the list size until at least one survival path can pass CRC.
Abstract: In this letter, we propose an adaptive SC (Successive Cancellation)-List decoder for polar codes with CRC. This adaptive SC-List decoder iteratively increases the list size until at least one surviving path passes the CRC. Simulation shows that the adaptive SC-List decoder provides significant complexity reduction. We also demonstrate that the (2048, 1024) polar code with 24-bit CRC, decoded by our proposed adaptive SC-List decoder with a very large maximum list size, can achieve a frame error rate FER ≤ 10^{-3} at Eb/No = 1.1 dB, which is about 0.25 dB from the information-theoretic limit at this block length.
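The adaptive control loop described in the abstract can be sketched as follows; `sc_list_decode` and `crc_check` are hypothetical stand-ins for a real polar-code SC-List decoder and CRC checker, which are not part of this sketch:

```python
def adaptive_sc_list_decode(llrs, crc_check, sc_list_decode, L_max=1024):
    """Adaptive SC-List decoding sketch: double the list size until some
    surviving path passes the CRC, or give up at L_max.

    `sc_list_decode(llrs, L)` is assumed to return the L surviving paths
    of an SC-List decode of the channel LLRs; `crc_check(path)` is
    assumed to return True for a CRC-passing path.  Both are placeholders
    for a real implementation."""
    L = 1
    while L <= L_max:
        for path in sc_list_decode(llrs, L):
            if crc_check(path):
                return path          # CRC-passing codeword found
        L *= 2                       # escalate decoding effort
    return None                      # declare a frame error
```

Because most frames already succeed at L = 1, the average complexity stays close to plain SC decoding while the worst case retains the performance of a very large list, which is the complexity reduction the abstract reports.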

316 citations


Posted Content
TL;DR: It is demonstrated that a polar code with 24-bit CRC decoded by the proposed adaptive SC-List decoder with a very large maximum list size can achieve a frame error rate FER ≤ 10^{-3} at Eb/No = 1.1 dB, which is about 0.25 dB from the information-theoretic limit at this block length.
Abstract: In this letter, we propose an adaptive SC (Successive Cancellation)-List decoder for polar codes with CRC. This adaptive SC-List decoder iteratively increases the list size until the output list contains at least one surviving path that passes the CRC. Simulation shows that the adaptive SC-List decoder provides significant complexity reduction. We also demonstrate that the (2048, 1024) polar code with 24-bit CRC, decoded by our proposed adaptive SC-List decoder with a very large list size, can achieve a frame error rate FER = 0.001 at Eb/No = 1.1 dB, which is about 0.2 dB from the information-theoretic limit at this block length.

245 citations


Proceedings ArticleDOI
01 Dec 2012
TL;DR: In this paper, a primal and a dual algorithm are proposed to coordinate the smaller subproblems decomposed from the convexified OPF, which can be solved sequentially and cumulatively in a central node or in parallel in distributed nodes.
Abstract: Optimal power flow (OPF) is an important problem for power generation and it is in general non-convex. With the employment of renewable energy, it will be desirable if OPF can be solved very efficiently so that its solution can be used in real time. With some special network structure, e.g. trees, the problem has been shown to have a zero duality gap and the convex dual problem yields the optimal solution. In this paper, we propose a primal and a dual algorithm to coordinate the smaller subproblems decomposed from the convexified OPF. We can arrange the subproblems to be solved sequentially and cumulatively in a central node or solved in parallel in distributed nodes. We test the algorithms on IEEE radial distribution test feeders, some random tree-structured networks, and the IEEE transmission system benchmarks. Simulation results show that the computation time can be improved dramatically with our algorithms over the centralized approach of solving the problem without decomposition, especially in tree-structured problems. The computation time grows linearly with the problem size with the cumulative approach while the distributed one can have size-independent computation time.
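The coordination idea — nodes solving local subproblems that a price signal ties back together — can be illustrated with a generic dual decomposition on a toy separable problem. This is a heavily simplified stand-in for the paper's algorithms, which operate on the convexified OPF rather than this toy objective:

```python
def dual_decomposition(a, d, steps=2000, alpha=0.05):
    """Toy dual decomposition: minimize sum_i a_i * x_i^2 subject to
    sum_i x_i = d.  At each iteration every node i solves its own
    one-variable subproblem (closed form here) given the current price
    lam; the coordinator then nudges lam toward satisfying the coupling
    constraint.  The subproblem solves can run in parallel, which is the
    distributed flavor described in the abstract."""
    lam = 0.0
    for _ in range(steps):
        # node i's subproblem: min_x a_i x^2 - lam*x  =>  x = lam / (2 a_i)
        x = [lam / (2 * ai) for ai in a]
        lam += alpha * (d - sum(x))   # dual (price) subgradient step
    return lam, x
```

With costs a = [1, 2] and demand d = 3, the iteration converges to the price lam = 4 and the allocation x = [2, 1], which indeed satisfies the coupling constraint and equalizes marginal costs.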

149 citations


Posted Content
23 Apr 2012
TL;DR: This paper addresses the problem of voltage regulation in distribution networks with deep penetration of distributed energy resources (DERs), e.g., renewable-based generation, and storage-capable loads such as plug-in hybrid electric vehicles, with an efficient distributed algorithm.
Abstract: This paper addresses the problem of voltage regulation in power distribution networks with deep penetration of distributed energy resources (DERs), e.g., renewable-based generation, and storage-capable loads such as plug-in hybrid electric vehicles. We cast the problem as an optimization program, where the objective is to minimize the losses in the network subject to constraints on bus voltage magnitudes, limits on active and reactive power injections, transmission line thermal limits and losses. We provide sufficient conditions under which the optimization problem can be solved via its convex relaxation. Using data from existing networks, we show that these sufficient conditions are expected to be satisfied by most networks. We also provide an efficient distributed algorithm to solve the problem. The algorithm adheres to a communication topology described by a graph that is the same as the graph that describes the electrical network topology. We illustrate the operation of the algorithm, including its robustness against communication link failures, through several case studies involving 5-, 34-, and 123-bus power distribution systems.

The first two authors contributed equally to this work. B. Zhang is with the Department of Civil and Environmental Engineering and Management Science & Engineering at Stanford University. E-mail: baosen.zhang@gmail.com. A.Y.S. Lam is with the Department of Computer Science at Hong Kong Baptist University. E-mail: ayslam@comp.hkbu.edu.hk. D. Tse is with the Department of Electrical Engineering and Computer Sciences of the University of California at Berkeley. E-mail: dtse@eecs.berkeley.edu. A. Domínguez-García is with the Department of Electrical and Computer Engineering of the University of Illinois at Urbana-Champaign. E-mail: aledan@illinois.edu. The work of B. Zhang and David Tse was supported in part by the National Science Foundation (NSF) under grant CCF-0830796. B. Zhang was also supported by a Natural Sciences and Engineering Research Council of Canada Postgraduate Scholarship. The work of A.Y.S. Lam was supported in part by the Croucher Foundation. The work of A. Domínguez-García was supported in part by NSF under grant ECCS-1135598 and CAREER Award ECCS-0954420, and by the Consortium for Electric Reliability Technology Solutions (CERTS).

103 citations


Proceedings ArticleDOI
22 Jul 2012
TL;DR: The optimal power flow problem can be convexified and efficiently solved, and this result improves upon earlier works since it does not make any assumptions about the active bus power constraints.
Abstract: We investigate the problem of power flow and its relationship to optimization in tree networks. We show that due to the tree topology of the network, the general optimal power flow problem simplifies greatly. Our approach is to look at the injection region of the power network. The injection region is simply the set of all vectors of bus power injections that satisfy the network and operation constraints. The geometrical object of interest is the set of Pareto-optimal points of the injection region, since they are the solutions to the minimization of increasing functions. We view the injection region as a linear transformation of the higher dimensional power flow region, which is the set of all feasible power flows, one for each direction of each line. We show that if the voltage magnitudes are fixed, then the injection region becomes a product of two-bus power flow regions, one for each line in the network. Using this decomposition, we show that under the practical condition that the angle difference across each line is not too large, the set of Pareto-optimal points of the injection region remains unchanged by taking the convex hull. Therefore, the optimal power flow problem can be convexified and efficiently solved. This result improves upon earlier works since it does not make any assumptions about the active bus power constraints. We also obtain some partial results for the variable voltage magnitude case.

79 citations


Journal ArticleDOI
TL;DR: It is shown that a simple coding scheme where active senders transmit a single message is optimum for a binary-expansion deterministic channel and achieves within one bit of the optimum in the case of a Gaussian channel.
Abstract: This paper considers a random access system where each sender is in one of two possible states, active or not active, and the states are only known to the common receiver. Active senders encode data into independent information streams, a subset of which is decoded depending on the collective interference. An information-theoretic formulation of the problem is presented and the set of achievable rates is characterized with a guaranteed gap to optimality. Inner and outer bounds on the capacity region of a two-sender system are tight in the case of a binary-expansion deterministic channel and differ by less than one bit in the case of a Gaussian channel. In systems with an arbitrary number of senders, the symmetric scenario of equal access probabilities and received power constraints is studied and the system throughput, i.e., the maximum achievable expected sum rate, is characterized. It is shown that a simple coding scheme where active senders transmit a single message is optimum for a binary-expansion deterministic channel and achieves within one bit of the optimum in the case of a Gaussian channel. Finally, a comparison with the slotted ALOHA protocol is provided, showing that encoding rate adaptation at the transmitters achieves constant (rather than zero) throughput as the number of users tends to infinity.

78 citations


Proceedings ArticleDOI
01 Jul 2012
TL;DR: This work finds that the backward IC can be used more efficiently for feedback than for independent backward-message transmission, and shows that feedback can provide a net increase in capacity even when the feedback cost is taken into consideration.
Abstract: We consider two-way interference channels (ICs) where forward and backward channels are ICs but not necessarily the same. We first consider a scenario where there are only two forward messages and feedback is offered through the backward IC for aiding forward-message transmission. For a linear deterministic model of this channel, we develop inner and outer bounds that match for a wide range of channel parameters. We find that the backward IC can be more efficiently used for feedback rather than if it were used for independent backward-message transmission. As a consequence, we show that feedback can provide a net increase in capacity even if feedback cost is taken into consideration. Moreover we extend this to a more general scenario with two additional independent backward messages, from which we find that interaction can provide an arbitrarily large gain in capacity.

50 citations


Proceedings ArticleDOI
01 Jul 2012
TL;DR: In this paper, the authors studied the problem of compressing a source sequence in the presence of side-information that is related to the source via insertions, deletions and substitutions.
Abstract: We study the problem of compressing a source sequence in the presence of side-information that is related to the source via insertions, deletions and substitutions. We propose a simple algorithm to compress the source sequence when the side-information is present at both the encoder and decoder. A key attribute of the algorithm is that it encodes the edits contained in runs of different extents separately. For small insertion and deletion probabilities, the compression rate of the algorithm is shown to be asymptotically optimal.

19 citations


Posted Content
28 Mar 2012
TL;DR: The sequencing capacity is calculated explicitly for a simple statistical model of the DNA sequence and the read process, using an analogy between the DNA sequencing problem and the classic communication problem to formulate an information-theoretic notion of sequencing capacity.
Abstract: DNA sequencing is the basic workhorse of modern day biology and medicine. Shotgun sequencing is the dominant technique used: many randomly located short fragments called reads are extracted from the DNA sequence, and these reads are assembled to reconstruct the original sequence. A basic question is: given a sequencing technology and the statistics of the DNA sequence, what is the minimum number of reads required for reliable reconstruction? This number provides a fundamental limit to the performance of any assembly algorithm. By drawing an analogy between the DNA sequencing problem and the classic communication problem, we formulate this question in terms of an information theoretic notion of sequencing capacity. This is the asymptotic ratio of the length of the DNA sequence to the minimum number of reads required to reconstruct it reliably. We compute the sequencing capacity explicitly for a simple statistical model of the DNA sequence and the read process. Using this framework, we also study the impact of noise in the read process on the sequencing capacity.

17 citations


Proceedings ArticleDOI
TL;DR: In this paper, it is found that the backward IC can be used more efficiently for feedback than for sending its own independent backward messages, and it is shown that feedback can provide a net increase in capacity even when the feedback cost is taken into consideration.
Abstract: We consider two-way interference channels (ICs) where forward and backward channels are ICs but not necessarily the same. We first consider a scenario where there are only two forward messages and feedback is offered through the backward IC for aiding forward-message transmission. For a linear deterministic model of this channel, we develop inner and outer bounds that match for a wide range of channel parameters. We find that the backward IC can be more efficiently used for feedback rather than if it were used for sending its own independent backward messages. As a consequence, we show that feedback can provide a net increase in capacity even if feedback cost is taken into consideration. Moreover we extend this to a more general scenario with two additional independent backward messages, from which we find that interaction can provide an arbitrarily large gain in capacity.

Proceedings ArticleDOI
01 Jul 2012
TL;DR: By drawing an analogy between the DNA sequencing problem and the classic communication problem, an information theoretic notion of sequencing capacity is defined, which is the maximum number of DNA base pairs that can be resolved reliably per read.
Abstract: DNA sequencing is the basic workhorse of modern day biology and medicine. Shotgun sequencing is the dominant technique used: many randomly located short fragments called reads are extracted from the DNA sequence, and these reads are assembled to reconstruct the original sequence. By drawing an analogy between the DNA sequencing problem and the classic communication problem, we define an information theoretic notion of sequencing capacity. This is the maximum number of DNA base pairs that can be resolved reliably per read, and provides a fundamental limit to the performance that can be achieved by any assembly algorithm. We compute the sequencing capacity explicitly for a simple statistical model of the DNA sequence and the read process.

Posted Content
TL;DR: In this article, the authors consider a two-stage stochastic economic dispatch problem and derive the price of uncertainty, a number that characterizes the intrinsic impact of uncertainty on the integration cost of renewables.
Abstract: Increased uncertainty due to high penetration of renewables imposes significant costs on system operators. The added costs depend on several factors, including market design, the performance of renewable generation forecasting, and the specific dispatch procedure. Quantifying these costs has been limited to small-sample Monte Carlo approaches applied to specific dispatch algorithms. The computational complexity and accuracy of these approaches have limited the understanding of tradeoffs between different factors. In this work we consider a two-stage stochastic economic dispatch problem. Our goal is to provide an analytical quantification and an intuitive understanding of the effects of uncertainty and network congestion on the dispatch procedure and the optimal cost. We first consider an uncongested network and calculate the risk limiting dispatch. In addition, we derive the price of uncertainty, a number that characterizes the intrinsic impact of uncertainty on the integration cost of renewables. We then extend the results to a network where one link can become congested. Under mild conditions, we calculate the price of uncertainty even in this case. We show that the risk limiting dispatch is given by a set of deterministic equilibrium equations. The dispatch solution yields an important insight: congested links do not create isolated nodes, even in a two-node network. In fact, the network can support backflows on congested links, which are useful for reducing uncertainty by averaging supply across the network. We demonstrate the performance of our approach on standard IEEE benchmark networks.
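A stripped-down intuition for the two-stage structure is the classic newsvendor tradeoff: commit to a forward purchase now, pay a higher real-time price for any shortfall later. This is our illustrative simplification, not the paper's network-constrained formulation, and the function name is ours:

```python
def stage1_purchase(net_demand_samples, c1, c2):
    """Toy two-stage dispatch: buy s units forward at price c1; any
    shortfall relative to realized net demand is bought in real time at
    price c2 > c1 (surplus is discarded at no cost in this toy model).
    Under these assumptions the cost-minimizing s is the (1 - c1/c2)
    empirical quantile of net demand -- a newsvendor-style sketch of how
    uncertainty statistics enter the first-stage decision."""
    q = 1 - c1 / c2
    xs = sorted(net_demand_samples)
    return xs[min(int(q * len(xs)), len(xs) - 1)]
```

Note how the decision depends on the forecast error distribution only through a quantile, which is the kind of closed-form, distribution-driven characterization the abstract refers to as deterministic equilibrium equations.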

Proceedings ArticleDOI
01 Oct 2012
TL;DR: The network size can be reduced tremendously by carefully considering the possible congestions in the network and the key insight is to use first stage forecast values of renewables to predict the likely real-time congestions.
Abstract: Increased uncertainty due to high penetration of renewables imposes significant costs on system operators. The added costs depend on several factors, including market design, the performance of renewable generation forecasting, and the specific dispatch procedure. Quantifying these costs has been limited to small-sample Monte Carlo approaches applied to specific dispatch algorithms. The computational complexity and accuracy of these approaches have limited the understanding of tradeoffs between different factors. In this work we follow a different approach by considering a two-stage stochastic economic dispatch problem, where the optimal dispatch is called risk limiting dispatch. First we consider an uncongested network and derive the price of uncertainty, a number that characterizes the intrinsic impact of uncertainty on the integration cost of renewables. Then we extend the results to a two-bus network where a transmission line may become congested. We demonstrate the existence of the price of uncertainty even in this case, under mild assumptions. We show that the risk limiting dispatch is given by a set of deterministic equilibrium equations. The dispatch solution yields an important insight: congested links do not create isolated nodes, even in a two-node network. In fact, the network can support backflows on congested links, which are useful for reducing uncertainty by averaging supply across the network.

Posted Content
TL;DR: In this article, the problem of voltage regulation in power distribution networks with deep-penetration of distributed energy resources, e.g., renewable-based generation, and storage-capable loads such as plug-in hybrid electric vehicles, is addressed.
Abstract: This paper addresses the problem of voltage regulation in power distribution networks with deep-penetration of distributed energy resources, e.g., renewable-based generation, and storage-capable loads such as plug-in hybrid electric vehicles. We cast the problem as an optimization program, where the objective is to minimize the losses in the network subject to constraints on bus voltage magnitudes, limits on active and reactive power injections, transmission line thermal limits and losses. We provide sufficient conditions under which the optimization problem can be solved via its convex relaxation. Using data from existing networks, we show that these sufficient conditions are expected to be satisfied by most networks. We also provide an efficient distributed algorithm to solve the problem. The algorithm adheres to a communication topology described by a graph that is the same as the graph that describes the electrical network topology. We illustrate the operation of the algorithm, including its robustness against communication link failures, through several case studies involving 5-, 34-, and 123-bus power distribution systems.

Posted Content
19 Apr 2012
TL;DR: In this paper, the geometry of injection regions and its relationship to optimization of power flows in tree networks were investigated, and it was shown that under the practical condition that the angle difference across each line is not too large, the set of Pareto-optimal points of the injection region remains unchanged by taking the convex hull.
Abstract: We investigate the geometry of injection regions and its relationship to optimization of power flows in tree networks. The injection region is the set of all vectors of bus power injections that satisfy the network and operation constraints. The geometrical object of interest is the set of Pareto-optimal points of the injection region. If the voltage magnitudes are fixed, the injection region of a tree network can be written as a linear transformation of the product of two-bus injection regions, one for each line in the network. Using this decomposition, we show that under the practical condition that the angle difference across each line is not too large, the set of Pareto-optimal points of the injection region remains unchanged by taking the convex hull. Moreover, the resulting convexified optimal power flow problem can be efficiently solved via semi-definite programming or second-order cone relaxations. These results improve upon earlier works by removing the assumptions on active power lower bounds. It is also shown that our practical angle assumption guarantees two other properties: (i) the uniqueness of the solution of the power flow problem, and (ii) the non-negativity of the locational marginal prices. Partial results are presented for the case when the voltage magnitudes are not fixed but can lie within certain bounds.

Posted Content
TL;DR: In this paper, it was shown that if the read length is below a threshold, reconstruction is impossible no matter how many reads are observed, and if the length is above the threshold, having enough reads to cover the DNA sequence is sufficient to reconstruct.
Abstract: DNA sequencing is the basic workhorse of modern day biology and medicine. Shotgun sequencing is the dominant technique used: many randomly located short fragments called reads are extracted from the DNA sequence, and these reads are assembled to reconstruct the original sequence. A basic question is: given a sequencing technology and the statistics of the DNA sequence, what is the minimum number of reads required for reliable reconstruction? This number provides a fundamental limit to the performance of any assembly algorithm. For a simple statistical model of the DNA sequence and the read process, we show that the answer exhibits a critical phenomenon in the asymptotic limit of long DNA sequences: if the read length is below a threshold, reconstruction is impossible no matter how many reads are observed, and if the read length is above the threshold, having enough reads to cover the DNA sequence is sufficient to reconstruct. The threshold is computed in terms of the Rényi entropy rate of the DNA sequence. We also study the impact of noise in the read process on the performance.
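The two quantities the abstract pivots on — a read-length threshold set by the Rényi entropy rate, and a coverage condition on the number of reads — can be put into rough numbers. The specific threshold form L* = 2 ln G / H2 and the Lander-Waterman-style coverage estimate below are our assumed illustrations, not formulas quoted from the abstract:

```python
import math

def threshold_read_length(G, H2):
    """Assumed form of the critical read length for a length-G sequence:
    L* = 2 ln(G) / H2, where H2 is the Renyi entropy rate of order two
    in nats.  For i.i.d. uniform bases H2 = ln 4, so L* = 2 log_4 G --
    the read length at which repeated L-mers stop being likely."""
    return 2 * math.log(G) / H2

def coverage_reads(G, L):
    """Lander-Waterman-style estimate (a standard approximation, not a
    claim from the abstract) of how many length-L reads are needed so
    that uniformly random read positions cover the whole sequence with
    high probability: roughly (G / L) * ln(G)."""
    return math.ceil(G / L * math.log(G))
```

For a uniform i.i.d. model at human-genome scale (G ≈ 3 × 10^9), the assumed threshold comes out near 31 bases, while coverage (not the threshold) dictates the read count once reads are longer than that.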

Proceedings ArticleDOI
TL;DR: A simple algorithm to compress the source sequence when the side-information is present at both the encoder and decoder is proposed, which encodes the edits contained in runs of different extents separately.
Abstract: We study the problem of compressing a source sequence in the presence of side-information that is related to the source via insertions, deletions and substitutions. We propose a simple algorithm to compress the source sequence when the side-information is present at both the encoder and decoder. A key attribute of the algorithm is that it encodes the edits contained in runs of different extents separately. For small insertion and deletion probabilities, the compression rate of the algorithm is shown to be asymptotically optimal.

Patent
09 Nov 2012
TL;DR: An embossing device includes an electric drive system to provide high-quality embossing and can be used with any conventional die; in a preferred embodiment, an embossing folder allows embossing and debossing of a material in a single pass through the device.
Abstract: An embossing device includes an electric drive system to provide for high quality embossing. The device may be used with any conventional die. In a preferred embodiment, an embossing folder is provided that allows for embossing and debossing of a material in a single pass through the embossing device.

Journal ArticleDOI
TL;DR: In this article, a low-complexity coding scheme and system design framework for the half-duplex relay channel based on the Quantize-Map-and-Forward (QMF) relaying scheme is proposed.
Abstract: In this paper we develop a low-complexity coding scheme and system design framework for the half-duplex relay channel based on the Quantize-Map-and-Forward (QMF) relaying scheme. The proposed framework allows linear complexity operations at all network terminals. We propose the use of binary LDPC codes for encoding at the source and LDGM codes for mapping at the relay. We express joint decoding at the destination as a belief propagation algorithm over a factor graph. This graph has the LDPC and LDGM codes as subgraphs connected via probabilistic constraints that model the QMF relay operations. We show that this coding framework extends naturally to the high SNR regime using bit interleaved coded modulation (BICM). We develop density evolution analysis tools for this factor graph and demonstrate the design of practical codes for the half-duplex relay channel that perform within 1 dB of the information-theoretic QMF threshold.