Author

Yao Li

Bio: Yao Li is an academic researcher from Rutgers University. The author has contributed to research in the topics of decoding methods and low-density parity-check codes, has an h-index of 8, and has co-authored 24 publications receiving 322 citations. Previous affiliations of Yao Li include the University of California, Los Angeles and Akamai Technologies.

Papers
Journal ArticleDOI
TL;DR: This work models coding over generations with random generation scheduling as a coupon collector's brotherhood problem, derives the expected number of coded packets needed for successful decoding of the entire content as well as the probability of decoding failure, and quantifies the tradeoff between computational complexity and throughput.
Abstract: To reduce computational complexity and delay in randomized network coded content distribution, and for some other practical reasons, coding is not performed simultaneously over all content blocks, but over much smaller, possibly overlapping subsets of these blocks, known as generations. A penalty of this strategy is throughput reduction. To analyze the throughput loss, we model coding over generations with random generation scheduling as a coupon collector's brotherhood problem. This model enables us to derive the expected number of coded packets needed for successful decoding of the entire content as well as the probability of decoding failure (the latter only when generations do not overlap) and further, to quantify the tradeoff between computational complexity and throughput. Interestingly, with a moderate increase in the generation size, throughput quickly approaches link capacity. Overlaps between generations can further improve throughput substantially for relatively small generation sizes.
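
A minimal Monte Carlo sketch of this model, under simplifying assumptions of our own (non-overlapping generations, each received packet drawn from a uniformly random generation, and a generation of size g treated as decodable once g of its packets arrive, ignoring rare rank deficiencies over a large field):

```python
import random

def packets_until_decode(num_gens, gen_size, trials=2000):
    """Monte Carlo estimate of the number of coded packets a receiver
    must collect before every generation is decodable.

    Model (an assumption for this sketch): each packet belongs to a
    uniformly random generation, and a generation of size gen_size is
    decodable once gen_size of its packets have arrived -- the coupon
    collector's brotherhood problem with gen_size copies of each of
    num_gens coupons.
    """
    total = 0
    for _ in range(trials):
        counts = [0] * num_gens
        undecoded, received = num_gens, 0
        while undecoded > 0:
            g = random.randrange(num_gens)
            counts[g] += 1
            if counts[g] == gen_size:
                undecoded -= 1
            received += 1
        total += received
    return total / trials

# Coding over the whole file would need exactly 256 packets; smaller
# generations pay a collector's overhead that shrinks as gen_size grows.
for g in (1, 4, 16, 64):
    need = packets_until_decode(256 // g, g)
    print(f"gen_size={g:3d}: ~{need:7.1f} packets (overhead {need/256 - 1:.1%})")
```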

92 citations

Journal ArticleDOI
TL;DR: This paper proposes and analyzes a simple scheme for detecting permanent errors that exploits the parity check equations of the code itself and reuses the existing hardware to locate permanent errors in memory blocks.
Abstract: This paper studies the performance of a noisy Gallager B decoder for regular LDPC codes. We assume that the noisy decoder is subject to both transient processor errors and permanent memory errors. We permit different error rates at different functional components. In addition, for the sake of generality, we allow asymmetry in the permanent error rates of component outputs, and thus we model error propagation in the decoder via a suitable asymmetric channel. We then develop a density evolution-type analysis on this asymmetric channel. The recursive expression for the bit error probability is derived as a function of the code parameters (node degrees), codeword weight, transmission error rate, and the error rates of the permanent and the transient errors. Based on this analysis, we then derive the residual error of the Gallager B decoder for the regime where the transmission error rate and the processing error rates are small. In this regime, we further observe that the residual error rate can be well approximated by a suitable combination of the transient error rate and the permanent error rate at variable nodes, provided that the check node degree is large enough. Based on this insight, we then propose and analyze a scheme for detecting permanent errors and correcting detected residual errors. The scheme exploits the parity check equations of the code and reuses the existing hardware to locate permanent errors in memory blocks. Performance analysis and simulation results show that, with high probability, the detection scheme discovers correct locations of permanent memory errors, while, with low probability, it mislabels the functional memory as being defective. The proposed error detection-and-correction scheme can be implemented in-circuit and is useful in combating failures arising from aging.
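
A simplified density-evolution sketch in the spirit of this analysis, under assumptions of ours that depart from the paper's model (a (3, dc)-regular ensemble and symmetric transient flips applied to every message, with no permanent-error component):

```python
def noisy_gallager_b_de(p0, eps, dc=6, iters=50):
    """Simplified density evolution for Gallager B decoding of a
    (3, dc)-regular LDPC ensemble with transient message flips.

    p0  : BSC crossover (transmission error) probability
    eps : probability that any computed message is flipped by a transient
          hardware error -- a symmetric simplification of the paper's
          asymmetric transient/permanent model, assumed for this sketch
    Returns the variable-to-check message error probability per iteration.
    """
    flip = lambda p: p * (1 - eps) + (1 - p) * eps  # add transient noise
    p, trace = p0, []
    for _ in range(iters):
        # Check node: the XOR of dc-1 incoming messages is wrong w.p. q.
        q = flip((1 - (1 - 2 * p) ** (dc - 1)) / 2)
        # Variable node (dv = 3): flip the received channel value only if
        # both other incoming check messages disagree with it.
        p = flip((1 - p0) * q ** 2 + p0 * (1 - (1 - q) ** 2))
        trace.append(p)
    return trace

# Noise-free decoding drives the error to zero; with eps > 0 it floors
# at a residual level, as in the small-error regime analyzed above.
print(noisy_gallager_b_de(0.02, 0.0)[-1])
print(noisy_gallager_b_de(0.02, 1e-3)[-1])
```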

55 citations

Journal ArticleDOI
TL;DR: The performance of a popular statistical inference algorithm, belief propagation on probabilistic graphical models, implemented on noisy hardware is investigated, and two robust implementations of the BP algorithm targeting different computation noise distributions are proposed.
Abstract: The wide recognition that emerging nano-devices will be inherently unreliable motivates the evaluation of information processing algorithms running on noisy hardware as well as the design of robust schemes for reliable performance against hardware errors of varied characteristics. In this paper, we investigate the performance of a popular statistical inference algorithm, belief propagation (BP) on probabilistic graphical models, implemented on noisy hardware, and we propose two robust implementations of the BP algorithm targeting different computation noise distributions. We assume that the BP messages are subject to zero-mean transient additive computation noise. We focus on graphical models satisfying the contraction mapping condition that guarantees the convergence of the noise-free BP. We first upper bound the distances between the noisy BP messages and the fixed point of (noise-free) BP as a function of the iteration number. Next, we propose two implementations of BP, namely, censoring BP and averaging BP, that are robust to computation noise. Censoring BP rejects incorrect computations to keep the algorithm on the right track to convergence, while averaging BP takes the average of the messages in all iterations up to date to mitigate the effects of computation noise. Censoring BP works effectively when, with high probability, the computation noise is exactly zero, and averaging BP, although having a slightly larger overhead, works effectively for general zero-mean computation noise distributions. Sufficient conditions on the convergence of censoring BP and averaging BP are derived. Simulations on the Ising model demonstrate that the two proposed implementations successfully converge to the fixed point achieved by noise-free BP. Additionally, we apply averaging BP to a BP-based image denoising algorithm and as a BP decoder for LDPC codes. In the image denoising application, averaging BP successfully denoises an image even when nominal BP fails to do so in the presence of computation noise. In the BP LDPC decoder application, the power of averaging BP is manifested by the reduction in the residual error rates compared with the nominal BP decoder.
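
The averaging idea can be illustrated on a toy fixed-point iteration; this is a scalar stand-in of ours for the BP message update (the contraction F, the Gaussian noise, and all constants below are illustrative assumptions, not the paper's Ising setup):

```python
import numpy as np

def noisy_iteration(F, x0, sigma, iters, rng):
    """Iterate x <- F(x) + zero-mean Gaussian computation noise and keep
    both the raw iterates and their running averages: the averaging-BP
    idea of averaging the messages of all iterations to date."""
    x, acc = x0, np.zeros_like(x0)
    raw, avg = [], []
    for t in range(1, iters + 1):
        x = F(x) + rng.normal(0.0, sigma, size=x.shape)
        acc += x
        raw.append(x)
        avg.append(acc / t)
    return raw, avg

# A toy contraction (Lipschitz constant 0.5) standing in for the BP
# message update on a model satisfying the contraction-mapping condition.
F = lambda x: 0.5 * np.tanh(x) + 0.2
xstar = np.zeros(1)
for _ in range(200):          # noise-free fixed point, for reference
    xstar = F(xstar)

raw, avg = noisy_iteration(F, np.zeros(1), sigma=0.1, iters=500,
                           rng=np.random.default_rng(0))
print("last raw iterate error:", abs(raw[-1] - xstar)[0])
print("running-average error :", abs(avg[-1] - xstar)[0])
```

The raw iterate keeps fluctuating at a scale set by the noise, while the running average settles much closer to the noise-free fixed point, which is the effect averaging BP exploits.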

42 citations

Proceedings ArticleDOI
30 Sep 2009
TL;DR: This paper proposes several performance measures, optimizes the performance of the rateless code used at the server through the design of the code degree distribution, and reports simulation experiments confirming the usability of the optimization results obtained for the asymptotic regime as a guideline for finite-length code design.
Abstract: We investigate the performance of rateless codes for single-server streaming to diverse users, assuming that diversity in users is present not only because they have different channel conditions, but also because they demand different amounts of information and have different decoding capabilities. The LT encoding scheme is employed. While some users accept output symbols of all degrees and decode using belief propagation, others only collect degree-1 output symbols and run no decoding algorithm. We propose several performance measures, and optimize the performance of the rateless code used at the server through the design of the code degree distribution. Optimization problems are formulated for the asymptotic regime and solved as linear programming problems. Optimized performance shows great improvement in total bandwidth consumption over using the conventional ideal soliton distribution, or simply sending separately encoded streams to different types of user nodes. Simulation experiments confirm the usability of the optimization results obtained for the asymptotic regime as a guideline for finite-length code design.
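
A minimal LT-encoder sketch matching the setup above, with a pluggable degree distribution (the example distribution at the bottom is illustrative, not the optimized design from the paper):

```python
import random

def lt_encode(blocks, degree_dist, rng=random):
    """Generate one LT-coded output symbol: draw a degree d from
    degree_dist (a list of (degree, probability) pairs), pick d distinct
    source blocks uniformly at random, and XOR them together.

    Degree-1 symbols are verbatim copies of source blocks, which is what
    lets users that run no decoding algorithm still collect useful data.
    """
    r, cum = rng.random(), 0.0
    d = degree_dist[-1][0]
    for deg, prob in degree_dist:
        cum += prob
        if r <= cum:
            d = deg
            break
    neighbors = rng.sample(range(len(blocks)), d)
    symbol = 0
    for i in neighbors:
        symbol ^= blocks[i]
    return neighbors, symbol

# Example distribution skewed toward degree 1 so non-decoding users are
# served too -- illustrative numbers, not the paper's optimized design.
dist = [(1, 0.3), (2, 0.4), (3, 0.2), (4, 0.1)]
blocks = [random.getrandbits(32) for _ in range(100)]
neighbors, symbol = lt_encode(blocks, dist)
```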

21 citations

Proceedings ArticleDOI
29 Jun 2012
TL;DR: It is shown that coding at the application layer brings about a significant increase in net data throughput, and thereby a reduction in energy consumption due to reduced communication time; on devices with constrained computing resources, however, heavy coding operations cause packet drops in higher layers and negatively affect the net throughput.
Abstract: We consider three types of application layer coding for streaming over lossy links: random linear coding, systematic random linear coding, and structured coding. The file being streamed is divided into sub-blocks (generations). Code symbols are formed by combining data belonging to the same generation, and transmitted in a round-robin fashion. We compare the schemes based on delivery packet count, net throughput, and energy consumption for a range of generation sizes. We determine these performance measures both analytically and in an experimental configuration. We find our analytical predictions to match the experimental results. We show that coding at the application layer brings about a significant increase in net data throughput, and thereby reduction in energy consumption due to reduced communication time. On the other hand, on devices with constrained computing resources, heavy coding operations cause packet drops in higher layers and negatively affect the net throughput. We find from our experimental results that low-rate MDS codes are best for small generation sizes, whereas systematic random linear coding has the best net throughput and lowest energy consumption for larger generation sizes due to its low decoding complexity.
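
A sketch of the (systematic) random linear coding scheme over one generation, restricted to GF(2) for brevity (an assumption of this sketch, not a constraint of the paper's schemes; coefficient vectors are assumed to travel with each symbol):

```python
import random

def rlc_symbols(generation, systematic=True, rng=random):
    """Yield (coefficient vector, symbol) pairs for one generation of
    int-valued blocks over GF(2). A systematic pass first sends the g
    source blocks verbatim (cheap to decode); every further symbol is
    the XOR of a uniformly random subset of the generation."""
    g = len(generation)
    if systematic:
        for i, block in enumerate(generation):
            yield [int(j == i) for j in range(g)], block
    while True:
        coeffs = [rng.randrange(2) for _ in range(g)]
        symbol = 0
        for c, block in zip(coeffs, generation):
            if c:
                symbol ^= block
        yield coeffs, symbol

# Symbols are sent round-robin across generations in the scheme above;
# here we just draw from a single generation of g = 16 blocks.
gen = [random.getrandbits(64) for _ in range(16)]
stream = rlc_symbols(gen)
first_20 = [next(stream) for _ in range(20)]  # 16 systematic + 4 coded
```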

21 citations


Cited by
Journal ArticleDOI
TL;DR: A new architecture is presented based on distributed caching of the content in femto base stations, called helper nodes, which have small or nonexistent backhaul capacity but considerable storage space; it improves video throughput without the deployment of any additional infrastructure.
Abstract: We present a new architecture to handle the ongoing explosive increase in the demand for video content in wireless networks. It is based on distributed caching of the content in femto base stations with small or nonexistent backhaul capacity but with considerable storage space, called helper nodes. We also consider using the wireless terminals themselves as caching helpers, which can distribute video through device-to-device communications. This approach allows an improvement in the video throughput without deployment of any additional infrastructure. The new architecture can improve video throughput by one to two orders of magnitude.
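
To make the caching idea concrete, here is a Monte Carlo sketch of the fraction of requests that nearby helper caches can absorb; every modeling choice in it (Zipf popularity, popularity-biased random caching, the parameter values) is an illustrative assumption of ours, not the paper's system model:

```python
import random

def zipf_weights(n, s):
    """Zipf popularity weights for a library of n files, exponent s."""
    w = [1.0 / k ** s for k in range(1, n + 1)]
    total = sum(w)
    return [x / total for x in w]

def helper_hit_rate(library=1000, cache=50, helpers=4, s=0.8, reqs=20000):
    """Fraction of requests served by a nearby helper cache instead of
    the backhaul. Each helper independently caches `cache` distinct
    files drawn from the popularity distribution (a hypothetical
    caching policy chosen for this sketch)."""
    pop = zipf_weights(library, s)
    files = range(library)
    caches = []
    for _ in range(helpers):
        cached = set()
        while len(cached) < cache:
            cached.add(random.choices(files, weights=pop)[0])
        caches.append(cached)
    hits = sum(any(f in c for c in caches)
               for f in random.choices(files, weights=pop, k=reqs))
    return hits / reqs

print(f"helper hit rate ~ {helper_hit_rate():.2f}")
```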

690 citations

Journal ArticleDOI
TL;DR: In this paper, the authors compare the performance of D2D caching and coded multicasting with conventional unicasting and harmonic broadcasting in terms of the throughput scaling laws of wireless networks.
Abstract: As wireless video is the fastest growing form of data traffic, methods for spectrally efficient on-demand wireless video streaming are essential to both service providers and users. A key property of video on demand is the asynchronous content reuse, such that a few popular files account for a large part of the traffic but are viewed by users at different times. Caching of content on wireless devices in conjunction with device-to-device (D2D) communications makes it possible to exploit this property, and provides a network throughput that is significantly in excess of both the conventional approach of unicasting from cellular base stations and the traditional D2D networks for “regular” data traffic. This paper presents in a tutorial and concise form some recent results on the throughput scaling laws of wireless networks with caching and asynchronous content reuse, contrasting the D2D approach with other alternative approaches such as conventional unicasting, harmonic broadcasting, and a novel coded multicasting approach based on caching in the user devices and network-coded transmission from the cellular base station only. Somewhat surprisingly, the D2D scheme with spatial reuse and simple decentralized random caching achieves the same near-optimal throughput scaling law as coded multicasting. Both schemes achieve an unbounded throughput gain (in terms of scaling law) with respect to conventional unicasting and harmonic broadcasting, in the relevant regime where the number of video files in the library is smaller than the total size of the distributed cache capacity in the network. To better understand the relative merits of these competing approaches, we consider a holistic D2D system design incorporating traditional microwave (2 GHz) and millimeter-wave (mm-wave) D2D links; the direct connections to the base station can be used to serve those rare video requests that cannot be found in local caches. We provide extensive simulation results under a variety of system settings and compare our scheme with systems that exploit transmission from the base station only. We show that, also in realistic conditions and nonasymptotic regimes, the proposed D2D approach offers very significant throughput gains.

617 citations

Journal ArticleDOI
TL;DR: The state of the art in energy-harvesting WSNs for environmental monitoring applications, including Animal Tracking, Air Quality Monitoring, Water Quality Monitoring, and Disaster Monitoring, is reviewed, with the aim of improving the ecosystem and human life.
Abstract: Wireless Sensor Networks (WSNs) are crucial in supporting continuous environmental monitoring, where sensor nodes are deployed and must remain operational to collect and transfer data from the environment to a base-station. However, sensor nodes have limited energy in their primary power storage unit, and this energy may be quickly drained if the sensor node remains operational over long periods of time. Therefore, the idea of harvesting ambient energy from the immediate surroundings of the deployed sensors, to recharge the batteries and to directly power the sensor nodes, has recently been proposed. The deployment of energy harvesting in environmental field systems eliminates the dependency of sensor nodes on battery power, drastically reducing the maintenance costs required to replace batteries. In this article, we review the state-of-the-art in energy-harvesting WSNs for environmental monitoring applications, including Animal Tracking, Air Quality Monitoring, Water Quality Monitoring, and Disaster Monitoring to improve the ecosystem and human life. In addition to presenting the technologies for harvesting energy from ambient sources and the protocols that can take advantage of the harvested energy, we present challenges that must be addressed to further advance energy-harvesting-based WSNs, along with some future work directions to address these challenges.

274 citations

Book ChapterDOI
13 May 2011
TL;DR: This paper introduces the Kodo network coding library, an open source C++ library intended to be used in practical studies of network coding algorithms, and acquaints potential users with the goals, the structure, and the use of the library.
Abstract: This paper introduces the Kodo network coding library. Kodo is an open source C++ library intended to be used in practical studies of network coding algorithms. The target users for the library are researchers working with or interested in network coding. To provide a research-friendly library, Kodo offers a number of algorithms and building blocks with which new and experimental algorithms can be implemented and tested. In this paper, we introduce potential users to the goals, the structure, and the use of the library. To demonstrate the use of the library, we provide a number of simple programming examples. It is our hope that network coding practitioners will use Kodo as a starting point and, in time, contribute by improving and extending the functionality of Kodo.

161 citations