scispace - formally typeset

Showing papers by "Yin Sun published in 2020"


Posted Content
TL;DR: The current state of the art in the design and optimization of low-latency cyberphysical systems and applications, in which sources send time-stamped status updates to interested recipients, is described, along with AoI timeliness metrics and general methods of AoI evaluation.
Abstract: We summarize recent contributions in the broad area of age of information (AoI). In particular, we describe the current state of the art in the design and optimization of low-latency cyberphysical systems and applications in which sources send time-stamped status updates to interested recipients. These applications desire status updates at the recipients to be as timely as possible; however, this is typically constrained by limited system resources. We describe AoI timeliness metrics and present general methods of AoI evaluation that are applicable to a wide variety of sources and systems. Starting from elementary single-server queues, we apply these AoI methods to a range of increasingly complex systems, including energy harvesting sensors transmitting over noisy channels, parallel server systems, queueing networks, and various single-hop and multi-hop wireless networks. We also explore how update age is related to MMSE methods of sampling, estimation, and control of stochastic processes. The paper concludes with a review of efforts to employ age optimization in cyberphysical applications.
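For readers new to the metric: the age at time t is Δ(t) = t − u(t), where u(t) is the generation time of the freshest update delivered by time t, and the time-average AoI is the normalized area under the resulting sawtooth curve. A minimal sketch of that computation (the event format and function name are illustrative, not from the survey):

```python
def time_average_aoi(events, horizon):
    """Time-average age of information over [0, horizon].

    events: (delivery_time, generation_time) pairs sorted by delivery
    time, with generation_time <= delivery_time. Assumes an update with
    generation time 0 is delivered at time 0, so the age starts at 0.
    """
    area = 0.0
    last_t, last_gen = 0.0, 0.0   # last delivery time, freshest generation time
    for d, g in events:
        if d > horizon:
            break
        # Between deliveries the age grows linearly: trapezoid area.
        a0, a1 = last_t - last_gen, d - last_gen
        area += 0.5 * (a0 + a1) * (d - last_t)
        if g > last_gen:          # only a fresher update resets the age
            last_gen = g
        last_t = d
    # Tail segment from the last delivery up to the horizon.
    a0, a1 = last_t - last_gen, horizon - last_gen
    area += 0.5 * (a0 + a1) * (horizon - last_t)
    return area / horizon
```

For unit-spaced updates delivered with zero delay, this returns 0.5, the average of a sawtooth that ramps from 0 to 1 each slot.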

265 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider a problem of sampling a Wiener process, with samples forwarded to a remote estimator over a channel that is modeled as a queue, and study the optimal online sampling strategy that minimizes the mean square estimation error subject to a sampling rate constraint.
Abstract: In this paper, we consider a problem of sampling a Wiener process, with samples forwarded to a remote estimator over a channel that is modeled as a queue. The estimator reconstructs an estimate of the real-time signal value from causally received samples. We study the optimal online sampling strategy that minimizes the mean square estimation error subject to a sampling rate constraint. We prove that the optimal sampling strategy is a threshold policy, and find the optimal threshold. This threshold is determined by how much the Wiener process varies during the random service time and the maximum allowed sampling rate. Further, if the sampling times are independent of the observed Wiener process, the above sampling problem for minimizing the estimation error is equivalent to a sampling problem for minimizing the age of information. This reveals an interesting connection between the age of information and remote estimation error. Our comparisons show that the estimation error achieved by the optimal sampling policy can be much smaller than those of age-optimal sampling, zero-wait sampling, and periodic sampling.
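The threshold structure can be illustrated with a small Monte Carlo sketch. This is illustrative only: it assumes a fixed service time and a user-supplied threshold, whereas the paper derives the optimal threshold from the service-time distribution and the sampling-rate constraint:

```python
import random

def simulate_threshold_sampling(beta, service_time, horizon, dt=1e-3, seed=0):
    """Sketch of threshold-based sampling of a Wiener process.

    After the previous sample is delivered, a new sample is taken as soon
    as the estimation error |W_t - est| exceeds `beta` (a signal-aware
    policy in the spirit of the paper). `service_time` is a fixed
    transmission delay here for simplicity. Returns the empirical
    mean-square estimation error.
    """
    rng = random.Random(seed)
    w = 0.0            # Wiener process value
    est = 0.0          # estimator: value of the last delivered sample
    in_service = None  # (delivery_time, sampled_value) or None
    t, sq_err, n = 0.0, 0.0, 0
    while t < horizon:
        w += rng.gauss(0.0, dt ** 0.5)     # Brownian increment
        t += dt
        if in_service is not None and t >= in_service[0]:
            est = in_service[1]            # sample delivered
            in_service = None
        if in_service is None and abs(w - est) >= beta:
            in_service = (t + service_time, w)   # take a new sample
        sq_err += (w - est) ** 2
        n += 1
    return sq_err / n
```

With zero service time the error process resets to zero whenever it reaches ±beta, so the empirical MSE stays well below beta squared.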

108 citations


Proceedings ArticleDOI
11 Oct 2020
TL;DR: A low-complexity solution is devised for the problem of optimizing the freshness of status updates that are sent from a large number of low-power source nodes to a common access point, and the solution is proved to be within a small gap from the optimum AoI performance.
Abstract: In this paper, we consider the problem of optimizing the freshness of status updates that are sent from a large number of low-power source nodes to a common access point. The source nodes utilize carrier sensing to reduce collisions and adopt an asynchronous sleep-wake strategy to achieve an extended battery lifetime (e.g., 10-15 years). We use age of information (AoI) to measure the freshness of status updates, and design the sleep-wake parameters for minimizing the weighted-sum peak AoI of the sources, subject to per-source battery lifetime constraints. When the sensing time is zero, this sleep-wake design problem can be solved by resorting to a two-layer nested convex optimization procedure; however, for positive sensing times, the problem is non-convex. We devise a low-complexity solution to solve this problem and prove that, for practical sensing times that are short and positive, the solution is within a small gap from the optimum AoI performance. Our numerical and NS-3 simulation results show that our solution can indeed extend the battery lifetime of information sources, while providing competitive AoI performance.

30 citations


Posted Content
TL;DR: In this paper, a joint sampling and scheduling problem for optimizing data freshness in multi-source systems is considered, where all the sources have the same age-penalty function.
Abstract: We consider a joint sampling and scheduling problem for optimizing data freshness in multi-source systems. Data freshness is measured by a non-decreasing penalty function of age of information, where all sources have the same age-penalty function. Sources take turns to generate update packets, and forward them to their destinations one-by-one through a shared channel with random delay. There is a scheduler that chooses the update order of the sources, and a sampler that determines when a source should generate a new packet in its turn. We aim to find the optimal scheduler-sampler pairs that minimize the total-average age-penalty at delivery times (Ta-APD) and the total-average age-penalty (Ta-AP). We prove that the Maximum Age First (MAF) scheduler and the zero-wait sampler are jointly optimal for minimizing the Ta-APD. Meanwhile, the MAF scheduler and a relative value iteration with reduced complexity (RVI-RC) sampler are jointly optimal for minimizing the Ta-AP. The RVI-RC sampler is based on a relative value iteration algorithm whose complexity is reduced by exploiting a threshold property in the optimal sampler. Finally, a low-complexity threshold-type sampler is devised via an approximate analysis of Bellman's equation. This threshold-type sampler reduces to a simple water-filling sampler for a linear age-penalty function.
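The water-filling sampler for a linear age-penalty function can be sketched under the classic structure in which, after a delivery that took service time y, the sampler waits max(0, beta − y) before generating the next sample, making the inter-sample interval max(beta, y). The helper below finds the water level beta empirically by bisection; the names are illustrative, and the paper obtains beta from an approximate analysis of Bellman's equation rather than from an empirical sample:

```python
def waterfilling_beta(delays, max_rate, tol=1e-9):
    """Water level beta such that the empirical average inter-sample
    time mean(max(beta, y)) equals 1/max_rate.

    With waiting time w = max(0, beta - y) after a delivery whose
    service time was y, the inter-sample interval is y + w = max(beta, y),
    so the sampling-rate constraint pins down beta.
    """
    target = 1.0 / max_rate
    avg = lambda b: sum(max(b, y) for y in delays) / len(delays)
    if avg(0.0) >= target:        # rate constraint already slack: no waiting
        return 0.0
    lo, hi = 0.0, target          # avg(target) >= target, so a root exists here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if avg(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since mean(max(beta, y)) is continuous and non-decreasing in beta, bisection on [0, 1/max_rate] always converges to the water level.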

29 citations


Journal ArticleDOI
TL;DR: This paper develops a low-complexity closed-form policy named Large-arraY Reliability and Rate Control (LYRRC), which is proven to be asymptotically latency-optimal as the number of antennas increases.
Abstract: One fundamental challenge in 5G URLLC is how to optimize massive MIMO systems for achieving low latency and high reliability. A natural design choice to maximize reliability and minimize retransmission is to select the lowest allowed target error rate. However, the overall latency is the sum of queueing latency and retransmission latency, hence choosing the lowest target error rate does not always minimize the overall latency. In this paper, we minimize the overall latency by jointly designing the target error rate and transmission rate adaptation, which leads to a fundamental tradeoff point between queueing and retransmission latency. This design problem can be formulated as a Markov decision process, which is theoretically optimal, but its complexity is prohibitively high for real-system deployments. We managed to develop a low-complexity closed-form policy named Large-arraY Reliability and Rate Control (LYRRC), which is proven to be asymptotically latency-optimal as the number of antennas increases. In LYRRC, the transmission rate is twice of the arrival rate, and the target error rate is a function of the antenna number, arrival rate, and channel estimation error. With simulated and measured channels, our evaluations find LYRRC satisfies the latency and reliability requirements of URLLC in all the tested scenarios.

15 citations


Posted Content
TL;DR: Learning algorithms based on the UCB principle are developed that utilize these additional side observations appropriately while performing the exploration-exploitation trade-off in the classical multi-armed bandit problem.
Abstract: We study a variant of the classical multi-armed bandit problem (MABP) which we call Multi-Armed Bandits with dependent arms. More specifically, multiple arms are grouped together to form a cluster, and the reward distributions of arms belonging to the same cluster are known functions of an unknown parameter that is a characteristic of the cluster. Thus, pulling an arm $i$ not only reveals information about its own reward distribution, but also about all those arms that share the same cluster with arm $i$. This "correlation" amongst the arms complicates the exploration-exploitation trade-off that is encountered in the MABP because the observation dependencies allow us to simultaneously test multiple hypotheses regarding the optimality of an arm. We develop learning algorithms based on the UCB principle which utilize these additional side observations appropriately while performing the exploration-exploitation trade-off. We show that the regret of our algorithms grows as $O(K\log T)$, where $K$ is the number of clusters. In contrast, for an algorithm such as the vanilla UCB that is optimal for the classical MABP and does not utilize these dependencies, the regret scales as $O(M\log T)$, where $M$ is the number of arms.
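A toy version of this idea: observations are pooled within a cluster, so confidence widths shrink with the cluster's total pull count rather than the arm's. The linear link functions (arm mean = a·theta + b) and rewards in [0, 1] are simplifying assumptions for illustration; the paper allows general known functions of the cluster parameter, and this is not the paper's exact algorithm:

```python
import math, random

def clustered_ucb(arms, T, seed=0):
    """Sketch of a UCB rule that shares observations within a cluster.

    arms: (cluster_id, a, b, sample) tuples. The arm's mean reward is
    a * theta_c + b for an unknown cluster parameter theta_c, and
    sample(rng) draws one reward in [0, 1]. Every pull of any arm in
    cluster c yields an unbiased estimate (r - b) / a of theta_c, which
    is why exploration cost scales with the number of clusters K rather
    than the number of arms M. Returns the average reward over T pulls.
    """
    rng = random.Random(seed)
    stats = {}                      # cluster -> (sum of theta estimates, count)
    total = 0.0
    for t in range(1, T + 1):
        best, best_idx = None, 0.0
        for arm in arms:
            c, a, b, sample = arm
            s, n = stats.get(c, (0.0, 0))
            if n == 0:
                idx = float("inf")  # pull each cluster at least once
            else:
                width = math.sqrt(2.0 * math.log(t) / n)
                idx = a * (s / n) + b + abs(a) * width
            if best is None or idx > best_idx:
                best, best_idx = arm, idx
        c, a, b, sample = best
        r = sample(rng)
        total += r
        s, n = stats.get(c, (0.0, 0))
        stats[c] = (s + (r - b) / a, n + 1)   # unbiased estimate of theta_c
    return total / T
```

In a two-arm cluster with means theta and 1 − theta, a single pooled estimate of theta ranks both arms at once, which is exactly how the regret comes to scale with the number of clusters.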

10 citations


Posted Content
TL;DR: It is shown that there exists a multi-dimensional threshold-based scheduling policy that is optimal for minimizing the age of information and a low-complexity bisection algorithm is further devised to compute the optimal thresholds.
Abstract: In this paper, we study the problem of minimizing the age of information when a source can transmit status updates over two heterogeneous channels. Our work is motivated by recent developments in 5G mmWave technology, where transmissions may occur over an unreliable but fast (e.g., mmWave) channel or a slow reliable (e.g., sub-6GHz) channel. The unreliable channel is modeled as a time-correlated Gilbert-Elliot channel, where information can be transmitted at a high rate when the channel is in the "ON" state. The reliable channel provides a deterministic but lower data rate. The scheduling strategy determines the channel to be used for transmission with the aim to minimize the time-average age of information (AoI). The optimal scheduling problem is formulated as a Markov Decision Process (MDP), which in our setting poses some significant challenges because, e.g., supermodularity does not hold for part of the state space. We show that there exists a multi-dimensional threshold-based scheduling policy that is optimal for minimizing the age. A low-complexity bisection algorithm is further devised to compute the optimal thresholds. Numerical simulations are provided to compare different scheduling policies.
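A toy discrete-time rendition of such a threshold rule (a deliberate simplification: the paper's optimal policy has multi-dimensional thresholds, whereas the sketch below uses a single age threshold, a one-slot fast channel, and a three-slot slow channel, all illustrative choices):

```python
import random

def simulate_threshold_scheduler(p_on, p_off, tau, horizon, seed=0):
    """Threshold scheduling over a Gilbert-Elliot (fast) channel and a
    reliable (slow) channel, in slots.

    The fast channel delivers an update within 1 slot when ON; the slow
    channel always delivers but takes 3 slots. Policy: transmit on the
    fast channel whenever it is ON; if it is OFF and the age exceeds
    `tau`, commit to the slow channel. p_on = P(OFF -> ON),
    p_off = P(ON -> OFF). Returns the time-average AoI.
    """
    rng = random.Random(seed)
    on = True              # state of the fast (Gilbert-Elliot) channel
    age, total = 0, 0
    slow_left = 0          # remaining slots of an ongoing slow transmission
    for _ in range(horizon):
        total += age
        age += 1
        if slow_left > 0:
            slow_left -= 1
            if slow_left == 0:
                age = 3    # sample was taken when the slow transmission began
        elif on:
            age = 1        # fast channel delivers within the slot
        elif age > tau:
            slow_left = 3  # fast channel OFF and age too high: go slow
        # Markov evolution of the fast channel state
        on = rng.random() < ((1 - p_off) if on else p_on)
    return total / horizon
```

When the fast channel is always ON the average age approaches 1 slot; when it is always OFF the policy falls back to the reliable channel and the average age settles near 4.5 slots in this toy setup.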

10 citations


Proceedings Article
15 Jun 2020
TL;DR: In this article, a notion of lexicographic age optimality, or simply lex-age-optimality, was introduced to evaluate the performance of multi-class status update policies, and a new scheduling policy named Preemptive Priority, Maximum Age First, Last-Generated, First-Served (PP-MAF-LGFS) was proposed.
Abstract: In this paper, we consider a transmission scheduling problem, in which several streams of status update packets with diverse priority levels are sent through a shared channel to their destinations. We introduce a notion of Lexicographic age optimality, or simply lex-age-optimality, to evaluate the performance of multi-class status update policies. In particular, a lex-age-optimal scheduling policy first minimizes the Age of Information (AoI) metrics for high-priority streams, and then, within the set of optimal policies for high-priority streams, achieves the minimum AoI metrics for low-priority streams. We propose a new scheduling policy named Preemptive Priority, Maximum Age First, Last-Generated, First-Served (PP-MAF-LGFS), and prove that the PP-MAF-LGFS scheduling policy is lex-age-optimal in a single exponential server setting. This result holds (i) for minimizing any time-dependent, symmetric, and non-decreasing age penalty function; (ii) for minimizing any non-decreasing functional of the stochastic process formed by the age penalty function; and (iii) for the cases where different priority classes have distinct arrival traffic patterns, age penalty functions, and age penalty functionals. For example, the PP-MAF-LGFS scheduling policy is lex-age-optimal for minimizing the mean peak age of a high-priority stream and the time-average age of a low-priority stream. Numerical results are provided to illustrate our theoretical findings.
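The three-level tie-break that gives the policy its name can be captured in a few lines (an illustrative helper, not the authors' code; preemption of an in-service lower-class packet is handled outside this selection step):

```python
def pp_maf_lgfs_pick(queue, ages):
    """Pick the next packet under the PP-MAF-LGFS rule.

    queue: (priority, stream, gen_time) tuples, lower number = higher
    priority. ages: current age of each stream. Serve (1) the highest
    priority class; (2) within it, the stream with the maximum current
    age; (3) within that stream, the last-generated packet.
    """
    return max(queue, key=lambda p: (-p[0], ages[p[1]], p[2]))
```

For example, among packets [(1, 'a', 5.0), (0, 'b', 1.0), (0, 'c', 2.0)] with ages {'a': 10, 'b': 3, 'c': 7}, class 0 wins on priority and stream 'c' wins on age, so (0, 'c', 2.0) is served.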

7 citations


Posted Content
TL;DR: A new scheduling policy named Preemptive Priority, Maximum Age First, Last-Generated, First-Served (PP-MAF-LGFS) is proposed, and it is proved that the PP-MAF-LGFS scheduling policy is lex-age-optimal in a single exponential server setting.
Abstract: In this paper, we consider a transmission scheduling problem, in which several streams of status update packets with diverse priority levels are sent through a shared channel to their destinations. We introduce a notion of Lexicographic age optimality, or simply lex-age-optimality, to evaluate the performance of multi-class status update policies. In particular, a lex-age-optimal scheduling policy first minimizes the Age of Information (AoI) metrics for high-priority streams, and then, within the set of optimal policies for high-priority streams, achieves the minimum AoI metrics for low-priority streams. We propose a new scheduling policy named Preemptive Priority, Maximum Age First, Last-Generated, First-Served (PP-MAF-LGFS), and prove that the PP-MAF-LGFS scheduling policy is lex-age-optimal. This result holds (i) for minimizing any time-dependent, symmetric, and non-decreasing age penalty function; (ii) for minimizing any non-decreasing functional of the stochastic process formed by the age penalty function; and (iii) for the cases where different priority classes have distinct arrival traffic patterns, age penalty functions, and age penalty functionals. For example, the PP-MAF-LGFS scheduling policy is lex-age-optimal for minimizing the mean peak age of a high-priority stream and the time-average age of a low-priority stream. Numerical results are provided to illustrate our theoretical findings.

7 citations


Proceedings ArticleDOI
01 Jun 2020
TL;DR: By-products of this study are connections between statistics such as Rényi entropy, Gallager's reliability function, and the concept of anytime capacity.
Abstract: We consider the problem of tracking an unstable stochastic process $X_t$ by using causal knowledge of another stochastic process $Y_t$. We obtain necessary conditions and sufficient conditions for maintaining a finite tracking error, where success is defined as order $m$ moment trackability. By-products of this study are connections between statistics such as Rényi entropy, Gallager's reliability function, and the concept of anytime capacity.
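Reading the abstract's success criterion in the standard way (an assumed formalization; the paper's precise definition may differ in detail), order $m$ moment trackability of $X_t$ by a causal estimate $\hat{X}_t$ built from $Y_t$ asks that the $m$-th moment of the tracking error stay bounded:

```latex
\sup_{t \ge 0} \; \mathbb{E}\!\left[\, \lvert X_t - \hat{X}_t \rvert^{m} \,\right] < \infty
```

Larger $m$ penalizes heavy error tails more severely, which is what links the analysis to Rényi entropy and Gallager's reliability function.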

3 citations


Posted Content
TL;DR: By-products of this study are connections between statistics such as Rényi entropy, Gallager's reliability function, and the concept of anytime capacity.
Abstract: We consider the problem of tracking an unstable stochastic process $X_t$ by using causal knowledge of another stochastic process $Y_t$. We obtain necessary conditions and sufficient conditions for maintaining a finite tracking error, where success is defined as order $m$ moment trackability. By-products of this study are connections between statistics such as Rényi entropy, Gallager's reliability function, and the concept of anytime capacity.

Journal ArticleDOI
TL;DR: The round-robin scheduler combined with max-min and waterfilling power controls is considered, showing that user scheduling combined with power allocation dampens the negative effect of channel misreporting compared to the purely physical-layer analysis of channel misreporting without scheduling.
Abstract: We study the sensitivity of multi-user scheduling performance to channel magnitude misreporting in systems with massive antennas. We consider the round-robin scheduler combined with max-min and waterfilling power controls. We show that user scheduling combined with power allocation, in general, dampens the negative effect of channel misreporting compared to the purely physical-layer analysis of channel misreporting without scheduling. We discover several interesting results. First, we observe a periodicity in rate-loss behavior as the number of misreporting users increases. Second, we find that the waterfilling power control is more robust to channel misreporting than max-min power control. Third, for homogeneous users with equal average signal-to-noise ratios (SNRs), channel underreporting is harmful but overreporting is beneficial for max-min power control; the opposite impact is found for waterfilling power control. For heterogeneous users with various average SNRs, however, both underreporting and overreporting harm the system for both power control policies, demonstrating the complex interactions across network layers due to channel misreporting.
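For reference, the water-filling building block used by the scheduler can be sketched as follows (this is the standard textbook form, solved here by bisection on the water level; the paper's formulation over massive-MIMO channels with misreported magnitudes is more involved):

```python
def waterfilling_power(gains, total_power, tol=1e-9):
    """Classic water-filling power allocation.

    User k with channel gain g_k receives p_k = max(0, mu - 1/g_k),
    where the water level mu is chosen so that sum(p_k) = total_power.
    Since the total allocated power is non-decreasing in mu, bisection
    on mu converges to the correct water level.
    """
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used < total_power:
            lo = mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

With equal gains the budget splits evenly; unequal gains push power toward stronger channels until the weakest users receive none, which is why misreported channel magnitudes can shift the whole allocation.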