
Showing papers by "Themistoklis Charalambous published in 2021"


Journal ArticleDOI
01 Jul 2021
TL;DR: This is the first work for which asymptotic convergence of consensus is proven for general linear MASs with arbitrary heterogeneous delays, and a distributed predictive observer is proposed to estimate the consensus tracking error and to construct the control input that does not involve any integral term.
Abstract: This letter studies consensus tracking control for multi-agent systems (MASs) of general linear dynamics under heterogeneous, constant, known input and communication delays, over a directed communication graph containing a spanning tree. First, for open-loop stable MASs, a distributed predictive observer is proposed to estimate the consensus tracking error and to construct a control input that involves no integral term (and is therefore computationally efficient). Then, using the generalized Nyquist criterion, we derive conditions for asymptotic convergence of the closed-loop system and show that it is delay-independent. Subsequently, another observer is designed that allows the MASs to be open-loop unstable. Next, we use the generalized Nyquist criterion to compute the observer’s gain matrix. To this end, we choose a specific structure with which the problem boils down to computing a single parameter, herein called the predictive observer parameter. Two algorithms are proposed for choosing this parameter: one for general linear systems and one for monotone systems. To the best of the authors’ knowledge, this is the first work in which asymptotic convergence of consensus is proven for general linear MASs with arbitrary heterogeneous delays. Finally, the validity of our results is demonstrated via a vehicle platooning example.
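
A toy discrete-time simulation can make the delay setting above concrete. The following Python sketch (single-integrator followers tracking the leader directly, with assumed gain and delay values; the letter's predictive observer and directed-graph coupling are not reproduced) shows followers converging despite heterogeneous constant delays:

```python
import numpy as np

T, n = 400, 4                  # time steps, number of followers
delays = [1, 3, 2, 5]          # heterogeneous constant delays (assumed)
k, leader = 0.2, 1.0           # feedback gain (assumed), constant leader state
x = np.zeros((T, n))
x[0] = np.random.randn(n)      # random initial follower states

for t in range(1, T):
    for i in range(n):
        td = max(t - 1 - delays[i], 0)   # only delayed information is available
        # follower i reacts to its delayed tracking error w.r.t. the leader
        x[t, i] = x[t - 1, i] - k * (x[td, i] - leader)

print("final tracking errors:", np.round(x[-1] - leader, 4))
```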

29 citations


Journal ArticleDOI
TL;DR: In this article, the authors show how buffering at the relays can be integrated with FD and NOMA to improve the performance of relay networks; the key points for the successful integration of buffer-aided relays with spatial sharing paradigms are presented, and buffer-state-information-based relay selection and its fully distributed implementation are discussed.
Abstract: Full duplex (FD) systems are able to transmit and receive signals over the same frequency band simultaneously, with the potential of even doubling the spectral efficiency in comparison with traditional half duplex (HD) systems. When combined with relaying, FD systems are expected to dramatically increase the throughput of future wireless networks. However, the degrading effect of self-interference (SI), due to the simultaneous transmission and reception at the relay, threatens their efficient rollout in real-world topologies. At the same time, non-orthogonal multiple access (NOMA) can further increase the spectral efficiency of the network. In this article, we show how buffering at the relays can be integrated with FD and NOMA in order to improve the performance of relay networks. More specifically, the key points for the successful integration of buffer-aided relays with spatial sharing paradigms are presented, and details on buffer-state-information-based relay selection and its fully distributed implementation are discussed. Furthermore, a mathematical framework for the analysis of FD and HD buffer-aided relay networks is rigorously discussed. Performance evaluation shows the importance of data buffering toward mitigating SI and improving the sum-rate of the network. Finally, the goal of this work is both to summarize the current state of the art and, by calling attention to open problems, to spark interest toward targeting these and related problems in relay networks.
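
To make the relay-selection idea concrete, here is a minimal Python sketch of buffer-state-information-based selection. The scoring rule and all parameters are illustrative assumptions, not the article's exact policy: a relay is preferred when it has a feasible buffer action (room to receive or packets to transmit) and a strong corresponding link.

```python
import random

L = 5  # buffer size (assumed)

def select_relay(relays):
    """relays: list of dicts with buffer occupancy and S->R / R->D link SNRs."""
    best, best_score = None, float("-inf")
    for r in relays:
        can_rx = r["buffer"] < L        # room to receive on the S->R link
        can_tx = r["buffer"] > 0        # packets to forward on the R->D link
        # score the strongest feasible action for this relay
        score = max(
            r["snr_sr"] if can_rx else float("-inf"),
            r["snr_rd"] if can_tx else float("-inf"),
        )
        if score > best_score:
            best, best_score = r, score
    return best

relays = [{"id": i, "buffer": random.randint(0, L),
           "snr_sr": random.expovariate(1.0),
           "snr_rd": random.expovariate(1.0)} for i in range(3)]
print("selected relay:", select_relay(relays)["id"])
```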

16 citations



Posted Content
01 Jul 2021
TL;DR: In this article, a machine learning-based nominal model update mechanism, which uses linear regression to update the nominal model at each ILC trial using only the current trial's information, is proposed for non-repetitive TVSs.
Abstract: The repetitive tracking task for time-varying systems (TVSs) with non-repetitive time-varying parameters, also called non-repetitive TVSs, is realized in this paper using iterative learning control (ILC). A machine learning (ML) based nominal model update mechanism, which uses linear regression to update the nominal model at each ILC trial using only the current trial's information, is proposed for non-repetitive TVSs in order to enhance the ILC performance. Given that the ML mechanism forces the model uncertainties to remain within the ILC robust tolerance, an ILC update law is proposed to deal with non-repetitive TVSs. We also describe how to tune the parameters of the ML and ILC algorithms to achieve the desired aggregate performance. The robustness and reliability of the proposed method are verified by simulations, and comparison with the current state of the art demonstrates superior control performance in terms of control precision. The main contributions of this paper are that it broadens ILC applications from time-invariant systems to non-repetitive TVSs, adopts an ML regression technique to estimate non-repetitive time-varying parameters between two ILC trials, and proposes a detailed parameter-tuning mechanism to achieve the desired performance.
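
A minimal sketch of the two ingredients described above, under strong simplifying assumptions (a scalar first-order plant with a slowly drifting parameter; the paper's robust update law and tuning mechanism are not reproduced):

```python
import numpy as np

T, trials = 50, 30
b, gamma = 1.0, 0.5
ref = np.sin(np.linspace(0, 2 * np.pi, T))     # repetitive reference
ref_prev = np.concatenate(([0.0], ref[:-1]))
u, e = np.zeros(T), np.zeros(T)

for k in range(trials):
    a_true = 0.8 + 0.05 * np.sin(k / 5.0)      # non-repetitive TV parameter
    y = np.zeros(T)
    for t in range(1, T):                      # plant: y_t = a*y_{t-1} + b*u_t
        y[t] = a_true * y[t - 1] + b * u[t]
    e = ref - y
    # (i) ML step: least-squares refit of the nominal parameter from the
    #     current trial's data only
    a_hat = np.linalg.lstsq(y[:-1, None], y[1:] - b * u[1:], rcond=None)[0][0]
    # (ii) ILC step: model-based feedforward plus error correction
    u = (ref - a_hat * ref_prev) / b + gamma * e

print("final max tracking error:", np.abs(e).max().round(4))
```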

10 citations


Posted Content
TL;DR: In this paper, the authors propose a fast distributed iterative algorithm which operates over a large-scale network of nodes and allows each of the interconnected nodes to reach agreement on an optimal solution in a finite number of time steps.
Abstract: In this paper we analyze the problem of optimal task scheduling for data centers. Given the available resources and tasks, we propose a fast distributed iterative algorithm which operates over a large-scale network of nodes and allows each of the interconnected nodes to reach agreement on an optimal solution in a finite number of time steps. More specifically, the algorithm (i) is guaranteed to converge to the exact optimal scheduling plan in a finite number of time steps and, (ii) once the goal of task scheduling is achieved, it exhibits distributed stopping capabilities (i.e., it allows the nodes to distributively determine whether they can terminate the operation of the algorithm). Furthermore, the proposed algorithm operates exclusively with quantized values (i.e., the information stored, processed and exchanged between neighboring agents is subject to deterministic uniform quantization) and relies on event-driven updates (e.g., to reduce energy consumption, communication bandwidth, network congestion, and/or processor usage). We also provide examples to illustrate the operation, performance, and potential advantages of the proposed algorithm. Finally, by using extensive empirical evaluations through simulations, we show that the proposed algorithm exhibits state-of-the-art performance.
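
As a rough illustration of quantized, event-driven agreement on a scheduling plan, the following Python sketch balances integer task counts across a small assumed topology; the paper's exact algorithm, its optimality guarantees, and its distributed stopping mechanism are not reproduced.

```python
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # assumed network topology
load = {0: 9, 1: 1, 2: 6, 3: 0}                    # integer (quantized) tasks

for step in range(100):
    moved = False
    for i, j in edges:
        # event-driven: a node only transmits a task when a real imbalance
        # (of at least 2) exists with a neighbor
        if load[i] - load[j] >= 2:
            load[i] -= 1; load[j] += 1; moved = True
        elif load[j] - load[i] >= 2:
            load[j] -= 1; load[i] += 1; moved = True
    if not moved:            # no events anywhere: balanced to within 1 task
        print("balanced after", step, "rounds:", load)
        break
```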

10 citations


Proceedings ArticleDOI
12 Jul 2021
TL;DR: In this article, the joint rate distortion function (RDF) for a tuple of correlated sources taking values in abstract alphabet spaces (i.e., continuous) subject to two individual distortion criteria is analyzed.
Abstract: In this paper we analyze the joint rate distortion function (RDF), for a tuple of correlated sources taking values in abstract alphabet spaces (i.e., continuous) subject to two individual distortion criteria. First, we derive structural properties of the realizations of the reproduction Random Variables (RVs), which induce the corresponding optimal test channel distributions of the joint RDF. Second, we consider a tuple of correlated multivariate jointly Gaussian RVs, $X_{1}:\Omega\rightarrow \mathbb{R}^{p_{1}}, X_{2}:\Omega\rightarrow \mathbb{R}^{p_{2}}$ with two square-error fidelity criteria, and we derive additional structural properties of the optimal realizations, and use these to characterize the RDF as a convex optimization problem with respect to the parameters of the realizations. We show that the computation of the joint RDF can be performed by semidefinite programming. Further, we derive closed-form expressions of the joint RDF, such that Gray's [1] lower bounds hold with equality, and verify their consistency with the semidefinite programming computations.

9 citations


Proceedings ArticleDOI
11 Aug 2021
TL;DR: In this article, the authors consider distributed estimation of linear systems when the state observations are corrupted with Gaussian noise of unbounded support and under possible random adversarial attacks, and they consider sensors equipped with single time-scale estimators and local chi-square $(\chi^{2})$ detectors.
Abstract: This paper considers distributed estimation of linear systems when the state observations are corrupted with Gaussian noise of unbounded support and under possible random adversarial attacks. We consider sensors equipped with single time-scale estimators and local chi-square $(\chi^{2})$ detectors to simultaneously observe the states, share information, fuse the noise/attack-corrupted data locally, and detect possible anomalies in their own observations. While this scheme is applicable to a wide variety of systems associated with full-rank (invertible) matrices, we discuss it within the context of distributed inference in social networks. The proposed technique outperforms existing results in the sense that: (i) we consider Gaussian noise with no simplifying upper-bound assumption on the support; (ii) all existing $\chi^{2}$-based techniques are centralized while our proposed technique is distributed, where the sensors locally detect attacks, with no central coordinator, using specific probabilistic thresholds; and (iii) no local-observability assumption at a sensor is made, which makes our method feasible for large-scale social networks. Moreover, we consider a Linear Matrix Inequalities (LMI) approach to design block-diagonal gain (estimator) matrices under appropriate constraints for isolating the attacks.
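
The local detection step can be illustrated with a one-dimensional chi-square test on the measurement innovation (the distributed estimator and the LMI gain design are not reproduced; the noise variance and threshold below are assumed values):

```python
import numpy as np
from scipy.stats import chi2

sigma2 = 0.5                     # assumed measurement-noise variance
thresh = chi2.ppf(0.99, df=1)    # 1% false-alarm probabilistic threshold

def detect(y, y_pred):
    """Flag an anomaly when the normalized innovation exceeds the threshold."""
    stat = (y - y_pred) ** 2 / sigma2   # chi-square statistic, 1 dof
    return stat > thresh

rng = np.random.default_rng(0)
y_pred = 1.0
print("nominal:", detect(y_pred + rng.normal(0, np.sqrt(sigma2)), y_pred))
print("attacked:", detect(y_pred + 5.0, y_pred))   # injected bias
```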

8 citations


Proceedings ArticleDOI
11 Apr 2021
TL;DR: In this paper, the authors combine the two notions of timely delivery of information to study their interplay; namely, deadline-constrained packet delivery due to latency constraints and freshness of information.
Abstract: In this work, we combine the two notions of timely delivery of information to study their interplay; namely, deadline-constrained packet delivery due to latency constraints and freshness of information. More specifically, we consider a two-user multiple access setup with random-access, in which user 1 is a wireless device with a queue and has external bursty traffic which is deadline-constrained, while user 2 monitors a sensor and transmits status updates to the destination. We provide analytical expressions for the throughput and drop probability of user 1, and an analytical expression for the average Age of Information (AoI) of user 2 monitoring the sensor. The relations reveal that there is a trade-off between the average AoI of user 2 and the drop rate of user 1: the lower the average AoI, the higher the drop rate, and vice versa. Simulations corroborate the validity of our theoretical results.
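
The AoI dynamics of user 2 can be illustrated with a short Monte Carlo sketch; the access and success probabilities are illustrative assumptions, not the paper's analytical model. In each slot the age grows by one and resets to one after a successful status update:

```python
import random

slots, q2, p_succ = 100_000, 0.3, 0.8   # access prob., link success prob. (assumed)
age, total_age = 1, 0
for _ in range(slots):
    total_age += age
    if random.random() < q2 and random.random() < p_succ:
        age = 1          # fresh update delivered
    else:
        age += 1
print("average AoI of user 2:", total_age / slots)
```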

6 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this paper, a bandit-based power control algorithm for full-duplex relay networks is proposed, relying on acknowledgements/negative-acknowledgements observations by the relay.
Abstract: Full-duplex relaying is an enabling technique of sixth generation (6G) mobile networks, promising tremendous rate and spectral efficiency gains. In order to improve the performance of full-duplex communications, power control is a viable way of avoiding excessive loop interference at the relay. Unfortunately, power control requires channel state information of both source-relay and relay-destination channels, as well as of the loop interference channel, thus resulting in increased overheads. Aiming to offer a low-complexity alternative for power control in full-duplex relay networks, we adopt reward-based learning in the sense of multi-armed bandits. More specifically, we provide bandit-based power control algorithms, relying on acknowledgements/negative-acknowledgements observations by the relay. The proposed algorithms avoid the need for channel state information acquisition and exchange, and can be employed in a distributed manner. Performance evaluation results in terms of outage probability, average throughput and accumulated regret over time highlight an interesting performance-complexity trade-off compared to optimal power control with full channel knowledge and significant performance gains over the cases without power control and random power level selection.
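
As an illustration of reward-based learning over power levels, the following sketch runs a standard UCB1 rule on ACK/NACK feedback. The ACK probabilities are invented for the example (the middle level is best because the highest level suffers from loop interference), and the paper's algorithms may differ in detail:

```python
import math, random

powers = [0.2, 0.5, 1.0]            # candidate relay power levels (assumed)
p_ack = [0.5, 0.8, 0.6]             # unknown ACK probabilities (assumed)
counts, rewards = [0] * 3, [0.0] * 3

for t in range(1, 5000):
    if 0 in counts:                 # play each arm once first
        a = counts.index(0)
    else:                           # UCB1: empirical mean + exploration bonus
        a = max(range(3), key=lambda i: rewards[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]))
    ack = random.random() < p_ack[a]          # observed ACK/NACK
    counts[a] += 1; rewards[a] += float(ack)

print("power level chosen most often:",
      powers[max(range(3), key=counts.__getitem__)])
```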

5 citations


Journal ArticleDOI
03 May 2021
TL;DR: In this paper, a hybrid FD DDA algorithm is presented, namely LoLa4SOR, which switches between SuR and half-duplex (HD) relaying; a low-complexity distributed version, d-LoLa4SOR, provides a trade-off between channel state information requirements and performance.
Abstract: Buffer-aided (BA) relaying improves the diversity of cooperative networks, often at the cost of increasing end-to-end packet delays. This characteristic renders BA relaying unsuitable for delay-sensitive applications. However, the increased diversity makes BA relaying appealing for ultra-reliable communications. Towards enabling ultra-reliable low-latency communication (URLLC), we aim at enhancing BA relaying for supporting delay-sensitive applications. In this paper, reliable full-duplex (FD) network operation is targeted and, for this purpose, hybrid relay selection algorithms are formulated, combining BA successive relaying (SuR) with delay- and diversity-aware (DDA) half-duplex (HD) algorithms. In this context, a hybrid FD DDA algorithm is presented, namely LoLa4SOR, switching between SuR and HD operation. Additionally, a low-complexity distributed version is given, namely d-LoLa4SOR, providing a trade-off between channel state information requirements and performance. The theoretical analysis shows that the diversity of LoLa4SOR equals two times the number of available relays $K$, i.e., $2K$, when the buffer size $L$ is greater than or equal to 3. Comparisons with other HD, SuR and hybrid algorithms reveal that LoLa4SOR offers superior outage and throughput performance, while the average delay is reduced due to SuR-based FD operation and the consideration of buffer state information for relay-pair selection. d-LoLa4SOR, as one of the few distributed algorithms in the literature, has a reasonable performance that makes it a more practical approach.

5 citations


Posted Content
TL;DR: In this article, the authors propose an asynchronous iterative scheme which allows a set of interconnected nodes to distributively reach an agreement within a pre-specified bound in a finite number of steps.
Abstract: We propose an asynchronous iterative scheme which allows a set of interconnected nodes to distributively reach an agreement within a pre-specified bound in a finite number of steps. While this scheme could be adopted in a wide variety of applications, we discuss it within the context of task scheduling for data centers. In this context, the algorithm is guaranteed to approximately converge to the optimal scheduling plan, given the available resources, in a finite number of steps. Furthermore, being asynchronous, the proposed scheme is able to take into account the uncertainty that can be introduced from straggler nodes or communication issues in the form of latency variability while still converging to the target objective. In addition, by using extensive empirical evaluation through simulations we show that the proposed method exhibits state-of-the-art performance.

Posted Content
TL;DR: In this article, the authors considered the asynchronous distributed optimization problem in which each node has its own convex cost function and can communicate directly only with its neighbors, as determined by a directed communication topology (directed graph or digraph).
Abstract: In this work, we consider the asynchronous distributed optimization problem in which each node has its own convex cost function and can communicate directly only with its neighbors, as determined by a directed communication topology (directed graph or digraph). First, we reformulate the optimization problem so that Alternating Direction Method of Multipliers (ADMM) can be utilized. Then, we propose an algorithm, herein called Asynchronous Approximate Distributed Alternating Direction Method of Multipliers (AsyAD-ADMM), using finite-time asynchronous approximate ratio consensus, to solve the multi-node convex optimization problem, in which every node performs iterative computations and exchanges information with its neighbors asynchronously. More specifically, at every iteration of AsyAD-ADMM, each node solves a local convex optimization problem for one of the primal variables and utilizes a finite-time asynchronous approximate consensus protocol to obtain the value of the other variable which is close to the optimal value, since the cost function for the second primal variable is not decomposable. If the individual cost functions are convex but not necessarily differentiable, the proposed algorithm converges at a rate of $\mathcal{O}(1/k)$, where $k$ is the iteration counter. The efficacy of AsyAD-ADMM is exemplified via a proof-of-concept distributed least-square optimization problem with different performance-influencing factors investigated.
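
The structure of the iteration can be illustrated with a synchronous consensus-ADMM sketch on scalar quadratic costs, where an exact averaging step stands in for the paper's finite-time asynchronous approximate ratio consensus:

```python
import numpy as np

a = np.array([1.0, 3.0, 5.0, 7.0])      # local data; optimum is mean(a) = 4
n, rho = len(a), 1.0
x = np.zeros(n); z = 0.0; lam = np.zeros(n)

for k in range(100):
    # local primal update: argmin_x (x - a_i)^2 + lam_i (x - z) + rho/2 (x - z)^2
    x = (2 * a - lam + rho * z) / (2 + rho)
    # global variable: here an exact average; the paper replaces this step
    # with finite-time asynchronous approximate consensus
    z = np.mean(x + lam / rho)
    lam = lam + rho * (x - z)           # dual ascent
print("consensus value:", round(z, 4), "(optimum 4.0)")
```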

Posted Content
10 Sep 2021
TL;DR: In this article, a general nonlinear 1st-order consensus-based solution for distributed constrained convex optimization is considered for applications in network resource allocation, which is used to optimize continuously-differentiable strictly convex cost functions over weakly-connected undirected multi-agent networks.
Abstract: In this paper, a general nonlinear 1st-order consensus-based solution for distributed constrained convex optimization is considered for applications in network resource allocation. The proposed continuous-time solution is used to optimize continuously-differentiable strictly convex cost functions over weakly-connected undirected multi-agent networks. The solution is anytime feasible and models various nonlinearities to account for imperfections and constraints on the (physical model of) agents in terms of their limited actuation capabilities, e.g., quantization and saturation constraints, among others. Moreover, different applications impose specific nonlinearities on the model, e.g., convergence in fixed/finite time, robustness to uncertainties, and noise-tolerant dynamics. Our proposed distributed resource allocation protocol generalizes such nonlinear models. Combining convex set analysis with the Lyapunov theorem, we provide a general technique to prove convergence (i) regardless of the particular type of nonlinearity and (ii) under a weak network-connectivity requirement (i.e., uniform connectivity). We simulate the performance of the protocol in continuous-time coordination of generators, known as the economic dispatch problem (EDP).
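
A rough Euler-discretized sketch of such a nonlinear protocol, for an economic-dispatch-style problem: agents run a saturated Laplacian flow on their marginal costs, so the total generation (the demand constraint) is preserved at every step, i.e., the trajectory stays anytime feasible. Cost coefficients, topology, and the saturation bound are assumed:

```python
import numpy as np

cost_grad = lambda x, c: 2 * c * x          # quadratic costs f_i(x_i) = c_i x_i^2
c = np.array([1.0, 2.0, 0.5, 1.5])          # assumed cost coefficients
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # undirected ring
x = np.array([10.0, 10.0, 10.0, 10.0])      # initial feasible dispatch (sum = 40)
sat = lambda u: np.clip(u, -1.0, 1.0)       # actuation nonlinearity (assumed)

for _ in range(2000):
    g = cost_grad(x, c)
    dx = np.zeros_like(x)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            dx[i] += sat(g[j] - g[i])       # nonlinear consensus on marginals
    x += 0.01 * dx                          # sum(dx) = 0, so sum(x) stays 40

print("total generation:", x.sum().round(3),
      "| marginal costs:", (2 * c * x).round(3))
```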

Proceedings ArticleDOI
11 Aug 2021
TL;DR: In this article, the correlation between the average size/number of contractions and the global clustering coefficient (GCC) of the system graph is studied, and the empirical results show that estimating systems with high GCC requires fewer measurements, and in case of measurement failure, there are fewer possible options to find substitute measurement that recovers the system's observability.
Abstract: Observability and estimation are closely tied to the system structure, which can be visualized as a system graph: a graph that captures the inter-dependencies among the state variables. For example, in social system graphs such inter-dependencies represent the social interactions of different individuals. It was recently shown that contractions, a key concept from graph theory, are critical to system observability, as (at least) one state measurement in every contraction of the system graph is necessary for observability. Thus, the size and number of contractions are critical in recovering from loss of observability. In this paper, the correlation between the average size/number of contractions and the global clustering coefficient (GCC) of the system graph is studied. Our empirical results show that estimating systems with high GCC requires fewer measurements and that, in case of measurement failure, there are fewer options for finding a substitute measurement that recovers the system’s observability. This is significant because, by tuning the GCC, we can improve the observability properties of large-scale engineered networks, such as social networks and the smart grid.
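
The GCC itself is straightforward to compute, for example with networkx (the observability and contraction analysis is not reproduced here; the random graph is a stand-in system graph):

```python
import networkx as nx

G = nx.erdos_renyi_graph(n=200, p=0.05, seed=1)   # stand-in system graph
# transitivity() is the global clustering coefficient: 3*triangles / triads
print("global clustering coefficient:", round(nx.transitivity(G), 4))
```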

Posted Content
12 Feb 2021
TL;DR: In this paper, the authors consider the problem of privacy preservation in the average consensus problem when communication among nodes is quantized and propose two privacy-preserving event-triggered quantized average consensus algorithms that can be followed by any node wishing to maintain its privacy and not reveal the initial state it contributes to the average computation.
Abstract: In this paper, we consider the problem of privacy preservation in the average consensus problem when communication among nodes is quantized. More specifically, we consider a setting where some nodes in the network are curious but not malicious and they try to identify the initial states of other nodes based on the data they receive during their operation (without interfering in the computation in any other way), while some nodes in the network want to ensure that their initial states cannot be inferred exactly by the curious nodes. We propose two privacy-preserving event-triggered quantized average consensus algorithms that can be followed by any node wishing to maintain its privacy and not reveal the initial state it contributes to the average computation. Every node in the network (including the curious nodes) is allowed to execute a privacy-preserving algorithm or its underlying average consensus algorithm. Under certain topological conditions, both algorithms allow the nodes who adopt privacy-preserving protocols to preserve the privacy of their initial quantized states and, at the same time, to obtain, after a finite number of steps, the exact average of the initial states.

Posted Content
10 Mar 2021
TL;DR: In this article, the authors propose a distributed deterministic channel access scheme for a system consisting of multiple control subsystems that close their loop over a shared wireless network, which is achieved by utilizing timers for prioritizing channel access with respect to a local cost.
Abstract: We consider the distributed channel access problem for a system consisting of multiple control subsystems that close their loop over a shared wireless network. We propose a distributed method for providing deterministic channel access without requiring explicit information exchange between the subsystems. This is achieved by utilizing timers for prioritizing channel access with respect to a local cost, which we derive by transforming the control objective cost to a form that allows its local computation. This property is then exploited for developing our distributed deterministic channel access scheme. A framework to verify the stability of the system under the resulting scheme is then proposed. Next, we consider a practical scenario in which the channel statistics are unknown. We propose algorithms for learning the parameters of imperfect communication links in order to estimate the channel quality and, hence, define the local cost as a function of this estimate and the control performance. We establish that our learning approach results in collision-free channel access. The behavior of the overall system is exemplified via a proof-of-concept illustrative example, and the efficacy of this mechanism is evaluated for large-scale networks via simulations.
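
The timer-based prioritization can be sketched in a few lines: each subsystem maps its locally computed cost to a timer, and the subsystem whose timer expires first seizes the channel, with no explicit information exchange. The cost values and the timer map below are illustrative assumptions; ties are avoided almost surely because the costs are continuous-valued:

```python
import random

costs = {f"loop{i}": random.uniform(0, 10) for i in range(5)}  # local costs

def timer(cost, T_max=1.0, c_max=10.0):
    """Higher cost (more urgent) -> shorter timer -> earlier channel access."""
    return T_max * (1.0 - cost / c_max)

timers = {k: timer(v) for k, v in costs.items()}
winner = min(timers, key=timers.get)       # first timer to expire transmits
print("channel granted to:", winner, "with cost", round(costs[winner], 2))
```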

Posted Content
TL;DR: In this article, a generalization of the estimating sequences is proposed, which allows encoding any form of information about the cost function that can aid in further accelerating the minimization process.
Abstract: We present a new accelerated gradient-based method for solving smooth unconstrained optimization problems. The goal is to embed a heavy-ball type of momentum into the Fast Gradient Method (FGM). For this purpose, we devise a generalization of the estimating sequences, which allows for encoding any form of information about the cost function that can aid in further accelerating the minimization process. In the black box framework, we propose a construction for the generalized estimating sequences, which is obtained by exploiting the history of the previously constructed estimating functions. From the viewpoint of efficiency estimates, we prove that the lower bound on the number of iterations for the proposed method is $\mathcal{O} \left(\sqrt{\frac{\kappa}{2}}\right)$. Our theoretical results are further corroborated by extensive numerical experiments on various types of optimization problems, often dealt within signal processing. Both synthetic and real-world datasets are utilized to demonstrate the efficiency of our proposed method in terms of decreasing the distance to the optimal solution, as well as in terms of decreasing the norm of the gradient.
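
For context, a generic accelerated gradient iteration with a momentum term on a strongly convex quadratic looks as follows; the paper's generalized estimating-sequence construction, which determines how the momentum coupling is built, is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)); b = rng.standard_normal(50)
H = A.T @ A + 0.1 * np.eye(20)                 # Hessian, strongly convex
grad = lambda x: H @ x - A.T @ b
L_, mu = np.linalg.eigvalsh(H)[-1], np.linalg.eigvalsh(H)[0]
beta = (np.sqrt(L_) - np.sqrt(mu)) / (np.sqrt(L_) + np.sqrt(mu))

x = x_prev = np.zeros(20)
for k in range(300):
    y = x + beta * (x - x_prev)                # momentum extrapolation step
    x_prev, x = x, y - grad(y) / L_            # gradient step from y
print("final gradient norm:", np.linalg.norm(grad(x)))
```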

Posted Content
22 May 2021
TL;DR: In this article, the authors consider distributed estimation of linear systems when the state observations are corrupted with Gaussian noise of unbounded support and under possible random adversarial attacks, and they consider sensors equipped with single time-scale estimators and local chi-square detectors to simultaneously observe the states, share information, fuse the noise/attack-corrupted data locally, and detect possible anomalies in their own observations.
Abstract: This paper considers distributed estimation of linear systems when the state observations are corrupted with Gaussian noise of unbounded support and under possible random adversarial attacks. We consider sensors equipped with single time-scale estimators and local chi-square ($\chi^2$) detectors to simultaneously observe the states, share information, fuse the noise/attack-corrupted data locally, and detect possible anomalies in their own observations. While this scheme is applicable to a wide variety of systems associated with full-rank (invertible) matrices, we discuss it within the context of distributed inference in social networks. The proposed technique outperforms existing results in the sense that: (i) we consider Gaussian noise with no simplifying upper-bound assumption on the support; (ii) all existing $\chi^2$-based techniques are centralized while our proposed technique is distributed, where the sensors \textit{locally} detect attacks, with no central coordinator, using specific probabilistic thresholds; and (iii) no local-observability assumption at a sensor is made, which makes our method feasible for large-scale social networks. Moreover, we consider a Linear Matrix Inequalities (LMI) approach to design block-diagonal gain (estimator) matrices under appropriate constraints for isolating the attacks.

Posted Content
01 Apr 2021
TL;DR: In this article, the authors considered single time-scale distributed estimation of (potentially) unstable full-rank dynamical systems via a multi-agent network subject to transmission time-delays.
Abstract: Classical distributed estimation scenarios typically assume timely and reliable exchange of information over the multi-agent network. This paper, in contrast, considers single time-scale distributed estimation of (potentially) unstable full-rank dynamical systems via a multi-agent network subject to transmission time-delays. The proposed networked estimator consists of two steps: (i) consensus on (delayed) a-priori estimates, and (ii) measurement update. The agents only share their a-priori estimates with their in-neighbors over time-delayed transmission links. Considering the most general case, the delays are assumed to be time-varying, arbitrary, unknown, but upper-bounded. In contrast to most recent distributed observers, which assume system observability in the neighborhood of each agent, our proposed estimator makes no such assumption. This may significantly reduce the communication/sensing load on agents in large-scale settings, while making the (distributed) observability analysis more challenging. Using the notions of augmented matrices and the Kronecker product, the geometric convergence of the proposed estimator over strongly-connected networks is proved irrespective of the bound on the time-delay. Simulations are provided to support our theoretical results.

Posted Content
TL;DR: In this paper, a survey of reinforcement learning-aided mobile edge caching is presented, aiming at highlighting the achieved network gains over conventional caching approaches, taking into account the heterogeneity of 6G networks in various wireless settings, such as fixed, vehicular and flying networks.
Abstract: Mobile networks are experiencing a tremendous increase in data volume and user density. An efficient technique to alleviate this issue is to bring the data closer to the users by exploiting the caches of edge network nodes, such as fixed or mobile access points and even user devices. Meanwhile, the fusion of machine learning and wireless networks offers a viable way for network optimization, as opposed to traditional optimization approaches which incur high complexity or fail to provide optimal solutions. Among the various machine learning categories, reinforcement learning operates in an online and autonomous manner without relying on large sets of historical data for training. In this survey, reinforcement learning-aided mobile edge caching is presented, aiming at highlighting the achieved network gains over conventional caching approaches. Taking into account the heterogeneity of sixth generation (6G) networks in various wireless settings, such as fixed, vehicular and flying networks, learning-aided edge caching is presented, departing from traditional architectures. Furthermore, a categorization according to the desirable performance metric, such as spectral, energy and caching efficiency, average delay, and backhaul and fronthaul offloading, is provided. Finally, several open issues are discussed, targeting to stimulate further interest in this important research field.

Posted Content
TL;DR: In this paper, it is observed that such linear feedback codes are not practical and are fragile with respect to a mismatch between the statistics of the mathematical model of the channel and the real statistics of the channel.
Abstract: In \cite{butman1976} the linear coding scheme is applied, $X_t =g_t\Big(\Theta - {\bf E}\Big\{\Theta\Big|Y^{t-1}, V_0=v_0\Big\}\Big)$, $t=2,\ldots,n$, $X_1=g_1\Theta$, with $\Theta: \Omega \to {\mathbb R}$, a Gaussian random variable, to derive a lower bound on the feedback rate, for additive Gaussian noise (AGN) channels, $Y_t=X_t+V_t, t=1, \ldots, n$, where $V_t$ is Gaussian autoregressive (AR) noise, and $\kappa \in [0,\infty)$ is the total transmitter power. For the unit memory AR noise, with parameters $(c, K_W)$, where $c\in [-1,1]$ is the pole and $K_W$ is the variance of the Gaussian noise, the lower bound is $C^{L,B} =\frac{1}{2} \log \chi^2$, where $\chi =\lim_{n\longrightarrow \infty} \chi_n$ is the positive root of $\chi^2=1+\Big(1+ \frac{|c|}{\chi}\Big)^2 \frac{\kappa}{K_W}$, and the sequence $\chi_n \triangleq \Big|\frac{g_n}{g_{n-1}}\Big|, n=2, 3, \ldots,$ satisfies a certain recursion; it was conjectured in \cite{butman1976} that $C^{L,B}$ is the feedback capacity. In this correspondence, it is observed that the nontrivial lower bound $C^{L,B}=\frac{1}{2} \log \chi^2$ with $\chi >1$ necessarily implies that the scaling coefficients of the feedback code, $g_n$, $n=1,2, \ldots$, grow unbounded, in the sense that $\lim_{n\longrightarrow\infty}|g_n| =+\infty$. The unbounded behaviour of $g_n$ follows from the ratio limit theorem of a sequence of real numbers, and it is verified by simulations. It is then concluded that such linear codes are not practical and are fragile with respect to a mismatch between the statistics of the mathematical model of the channel and the real statistics of the channel. In particular, if the error is perturbed by $\epsilon_n>0$ no matter how small, then $X_n =g_n\Big(\Theta - {\bf E}\Big\{\Theta\Big|Y^{n-1}, V_0=v_0\Big\}\Big)+g_n \epsilon_n$, and $|g_n|\epsilon_n \longrightarrow \infty$, as $n \longrightarrow \infty$.
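
The observation can be checked numerically: solving the fixed-point equation for $\chi$ with illustrative parameter values yields $\chi > 1$, and hence $|g_n| \approx |g_1|\chi^{n-1}$ diverges:

```python
from scipy.optimize import brentq

c, kappa, K_W = 0.5, 1.0, 1.0       # illustrative channel parameters
# fixed point: chi^2 = 1 + (1 + |c|/chi)^2 * kappa / K_W
f = lambda chi: chi**2 - 1 - (1 + abs(c) / chi)**2 * kappa / K_W
chi = brentq(f, 1e-6, 100.0)        # positive root
print("chi =", round(chi, 4), "> 1")

g = 1.0                              # |g_1| (illustrative)
for _ in range(59):
    g *= chi                         # |g_n| ~ |g_1| * chi^(n-1) asymptotically
print("|g_60| ~", f"{g:.3e}", "-> grows unbounded since chi > 1")
```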

Posted Content
03 May 2021
TL;DR: In this paper, a distributed consensus tracking control problem for general linear multi-agent systems (MASs) with external disturbances and heterogeneous time-varying input and communication delays under a directed communication graph topology, containing a spanning tree is investigated.
Abstract: This paper investigates the distributed consensus tracking control problem for general linear multi-agent systems (MASs) with external disturbances and heterogeneous time-varying input and communication delays under a directed communication graph topology, containing a spanning tree. First, for all agents whose state matrix has no eigenvalues with positive real parts, a communication-delay-related observer, which is used to construct the controller, is designed for followers to estimate the leader's state information. Second, by means of the output regulation theory, the results are relaxed to the case that only the leader's state matrix eigenvalues have non-positive real parts and, under these relaxed conditions, the controller is redesigned. Both cases lead to a closed-loop error system of which the stability is guaranteed via a Lyapunov-Krasovskii functional with sufficient conditions in terms of input-delay-dependent linear matrix inequalities (LMIs). An extended LMI is proposed which, in conjunction with the rest of LMIs, results in a solution with a larger upper bound on delays than what would be feasible without it. It is highlighted that the integration of communication-delay-related observer and input-delay-related LMI to construct a fully distributed controller (which requires no global information) is scalable to arbitrarily large networks. The efficacy of the proposed scheme is demonstrated via illustrative numerical examples.

Posted Content
22 May 2021
TL;DR: In this paper, the correlation between the average size/number of contractions and the global clustering coefficient (GCC) of the system graph is studied, and the empirical results show that estimating systems with high GCC requires fewer measurements, and in case of measurement failure, there are fewer possible options to find substitute measurement that recovers the system's observability.
Abstract: Observability and estimation are closely tied to the system structure, which can be visualized as a system graph: a graph that captures the inter-dependencies among the state variables. For example, in social system graphs such inter-dependencies represent the social interactions of different individuals. It was recently shown that contractions, a key concept from graph theory, are critical to system observability, as (at least) one state measurement in every contraction of the system graph is necessary for observability. Thus, the size and number of contractions are critical in recovering from loss of observability. In this paper, the correlation between the average size/number of contractions and the global clustering coefficient (GCC) of the system graph is studied. Our empirical results show that estimating systems with high GCC requires fewer measurements and that, in case of measurement failure, there are fewer options for finding a substitute measurement that recovers the system's observability. This is significant because, by tuning the GCC, we can improve the observability properties of large-scale engineered networks, such as social networks and the smart grid.

Posted Content
TL;DR: In this paper, a novel optimization problem formulation for minimizing the average AoI while satisfying the timely throughput constraints is proposed, which is cast as a Constrained Markov Decision Process (CMDP).
Abstract: In 5G and beyond systems, the notion of latency has gained great momentum in wireless connectivity as a metric for serving real-time communications requirements. However, research has pointed out that, in many applications, latency alone is insufficient for handling data-freshness requirements. Recently, the notion of Age of Information (AoI), which captures the freshness of the data, has attracted a lot of attention. In this work, we consider mixed traffic with two time-sensitive users: a deadline-constrained user and an AoI-oriented user. To develop an efficient scheduling policy, we formulate a novel optimization problem for minimizing the average AoI while satisfying the timely throughput constraints. The problem is cast as a Constrained Markov Decision Process (CMDP). We relax the constrained problem to an unconstrained Markov Decision Process (MDP) problem by utilizing Lyapunov optimization theory, and we show that the relaxed problem can be solved per frame by applying backward dynamic programming algorithms with optimality guarantees. Simulation results show that the timely throughput constraints are satisfied while the average AoI is minimized. Simulation results also show the convergence of the algorithm for different values of the weighting factor and the trade-off between the AoI and the timely throughput.
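
A drift-plus-penalty flavor of the trade-off can be sketched as follows: a virtual queue tracks the timely-throughput debt of the deadline-constrained user, and each slot the scheduler weighs serving that debt against the AoI of the update user. All parameters are illustrative, and the paper instead solves the relaxed MDP per frame via backward dynamic programming:

```python
slots, V, q_target = 10_000, 5.0, 0.6   # V trades AoI against throughput debt
Q, age, served, age_sum = 0.0, 1, 0, 0

for _ in range(slots):
    serve_user1 = Q > V * age          # weigh queue debt against AoI penalty
    if serve_user1:
        served += 1
        age += 1                       # user 2 not served: its AoI grows
    else:
        age = 1                        # user 2 delivers a fresh status update
    # virtual queue: grows by the target rate, drains when user 1 is served
    Q = max(Q + q_target - (1.0 if serve_user1 else 0.0), 0.0)
    age_sum += age

print("user-1 service rate:", served / slots, "(target", q_target, ")")
print("average AoI of user 2:", round(age_sum / slots, 3))
```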

Posted Content
TL;DR: In this paper, the joint rate distortion function (RDF) for a tuple of correlated sources taking values in abstract alphabet spaces (i.e., continuous) subject to two individual distortion criteria is analyzed.
Abstract: In this paper we analyze the joint rate distortion function (RDF), for a tuple of correlated sources taking values in abstract alphabet spaces (i.e., continuous) subject to two individual distortion criteria. First, we derive structural properties of the realizations of the reproduction Random Variables (RVs), which induce the corresponding optimal test channel distributions of the joint RDF. Second, we consider a tuple of correlated multivariate jointly Gaussian RVs, $X_1 : \Omega \rightarrow {\mathbb R}^{p_1}, X_2 : \Omega \rightarrow {\mathbb R}^{p_2}$ with two square-error fidelity criteria, and we derive additional structural properties of the optimal realizations, and use these to characterize the RDF as a convex optimization problem with respect to the parameters of the realizations. We show that the computation of the joint RDF can be performed by semidefinite programming. Further, we derive closed-form expressions of the joint RDF, such that Gray's [1] lower bounds hold with equality, and verify their consistency with the semidefinite programming computations.

Posted Content
07 Apr 2021
TL;DR: In this paper, a distributed channel triggering mechanism for wireless networked control systems (WNCSs) for conventional and smart sensors, i.e., sensors without and with computational power, respectively, is studied.
Abstract: In this paper, we study distributed channel triggering mechanisms for wireless networked control systems (WNCSs) with conventional and smart sensors, i.e., sensors without and with computational power, respectively. We first consider the case of conventional sensors, in which the state estimate is computed from the intermittent raw measurements received from the sensor, and we show that the priority measure is associated with the statistical properties of the observations, as is the case for the cost of information loss (CoIL) [1]. Next, we consider the case of smart sensors and, despite the fact that CoIL can also be deployed, we deduce that it is more beneficial to use the available measurements, and we propose a function of the value of information (VoI) [2], [3] that also incorporates the channel conditions as the priority measure. The different priority measures are discussed and compared in simple scenarios via simulations.

Posted Content
TL;DR: In this paper, a new accelerated gradient-based estimating sequence technique for solving large-scale optimization problems with composite structure is proposed, which is obtained by utilizing a tight lower bound on the objective function.
Abstract: We devise a new accelerated gradient-based estimating sequence technique for solving large-scale optimization problems with composite structure. More specifically, we introduce a new class of estimating functions, which are obtained by utilizing a tight lower bound on the objective function. Then, by exploiting the coupling between the proposed estimating functions and the gradient mapping technique, we construct a class of composite objective multi-step estimating-sequence techniques (COMET). We propose an efficient line search strategy for COMET, and prove that it enjoys an accelerated convergence rate. The established convergence results allow for step size adaptation. Our theoretical findings are supported by extensive computational experiments on various problem types and datasets. Moreover, our numerical results show evidence of the robustness of the proposed method to the imperfect knowledge of the smoothness and strong convexity parameters of the objective function.

Posted Content
TL;DR: In this paper, the authors studied the internal stability and string stability of a vehicle platoon under a constant time headway spacing policy with a varying-speed leader, using a multiple-predecessor-following strategy via vehicle-to-vehicle communication.
Abstract: This paper studies the internal stability and string stability of a vehicle platoon under a constant time headway spacing policy with a varying-speed leader, using a multiple-predecessor-following strategy via vehicle-to-vehicle communication. Unlike the common case in which the leader's speed is constant and various kinds of Proportional-Integral-Derivative controllers are implemented, here the fact that the leader has a time-varying speed necessitates the design of an observer. First, in order to estimate its position, speed and acceleration error with respect to the leader, each follower designs an observer. The observer is designed by constructing an observer matrix whose parameters must be determined. We simplify the design of the observer matrix in such a way that it boils down to choosing a scalar value. The resulting observer turns out to have third-order integrator dynamics, which simplifies the controller structure and, hence, allows us to derive conditions for string stability using a frequency-response method. A new heuristic search algorithm is developed to deduce the controller parameter conditions, given a fixed time headway, for string stability. Additionally, a bisection-like algorithm is incorporated into the above algorithm to obtain the minimum (up to some deviation tolerance) value of the time headway by fixing one controller parameter. The internal and string stability of the proposed observer-based controller are demonstrated via comparative examples.
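
The frequency-response check mentioned above can be illustrated on a textbook constant-time-headway transfer function (a PD-controlled double integrator, not the paper's observer-based design): string stability requires $|G(j\omega)| \le 1$ for all $\omega$.

```python
import numpy as np
from scipy import signal

h, kp, kd = 2.0, 1.0, 1.0        # time headway and PD gains (assumed values)
# Spacing-error propagation from vehicle i-1 to vehicle i under CTH + PD:
#   G(s) = (kd*s + kp) / ((1 + kd*h)*s^2 + (kd + kp*h)*s + kp)
G = signal.TransferFunction([kd, kp], [1 + kd * h, kd + kp * h, kp])
w, mag_db, _ = signal.bode(G, w=np.logspace(-2, 2, 500))
# bode() returns magnitude in dB, so |G(jw)| <= 1 means mag_db <= 0
print("string stable (|G(jw)| <= 1 for all w):", bool(np.all(mag_db <= 1e-9)))
```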

Posted Content
01 Apr 2021
TL;DR: In this paper, a continuous-time algorithm that incorporates network topology changes in discrete jumps is proposed to solve the binary classification problem via distributed Support-Vector-Machines (SVM), where the idea is to train a network of agents, each with a limited share of the data, to cooperatively learn the SVM classifier for the global database.
Abstract: In this paper, we consider the binary classification problem via distributed Support-Vector-Machines (SVM), where the idea is to train a network of agents, each with a limited share of the data, to cooperatively learn the SVM classifier for the global database. Agents only share processed information regarding the classifier parameters and the gradient of the local loss functions, instead of their raw data. In contrast to the existing work, we propose a continuous-time (CT) algorithm that incorporates network topology changes in discrete jumps. This hybrid nature allows us to remove the chattering that arises because of the discretization of the underlying CT process. We show that the proposed algorithm converges to the SVM classifier over time-varying weight-balanced directed graphs by using arguments from matrix perturbation theory.