
Showing papers presented at "Conference on Information Sciences and Systems in 2021"


Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, a federated learning framework is presented that handles heterogeneous client devices which may not conform to the population data distribution; it hinges on a superquantile-based objective whose parameter ranges over levels of conformity.
Abstract: We present a federated learning framework that allows one to handle heterogeneous client devices that may not conform to the population data distribution. The proposed approach hinges upon a parameterized superquantile-based objective, where the parameter ranges over levels of conformity. We introduce a stochastic optimization algorithm compatible with secure aggregation, which interleaves device filtering steps with federated averaging steps. We conclude with numerical experiments with neural networks on computer vision and natural language processing data.
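The superquantile objective the abstract mentions can be made concrete with a small sketch. This is an illustrative reading, not the paper's algorithm: the superquantile (CVaR) at conformity level theta averages the worst (1 - theta) fraction of client losses, and a filtering step keeps exactly those tail clients. The function names and rounding rule are assumptions.

```python
def superquantile(losses, theta):
    """Average of the worst (1 - theta) fraction of losses (CVaR at level theta)."""
    n = len(losses)
    k = max(1, int(round((1 - theta) * n)))   # number of tail clients kept
    tail = sorted(losses, reverse=True)[:k]
    return sum(tail) / k

def filter_clients(losses, theta):
    """Indices of the clients whose losses fall in the upper tail."""
    n = len(losses)
    k = max(1, int(round((1 - theta) * n)))
    order = sorted(range(n), key=lambda i: losses[i], reverse=True)
    return sorted(order[:k])

losses = [0.2, 1.5, 0.4, 3.0, 0.9]
print(superquantile(losses, 0.6))   # mean of the two largest losses
print(filter_clients(losses, 0.6))  # their indices
```

At theta = 0 the superquantile reduces to the plain mean, recovering standard federated averaging over all clients.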

15 citations


Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, the authors proposed a two-fold defense mechanism to withstand adversarial interference on modulated radio signals, which consists of correcting misclassifications on mild attacks and detecting the presence of an adversary on more potent attacks.
Abstract: Automatic modulation classification (AMC) is used in intelligent receivers operating in shared spectrum environments to classify the modulation constellation of radio frequency (RF) signals from received waveforms. Recently, deep learning has proven capable of enhancing AMC performance using both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). However, deep learning-based AMC models are susceptible to adversarial attacks, which can significantly degrade the performance of well-trained models by adding small amounts of interference into wireless RF signals during transmission. In this work, we present a two-fold defense mechanism to withstand adversarial interference on modulated radio signals. Specifically, our method consists of (1) correcting misclassifications on mild attacks and (2) detecting the presence of an adversary on more potent attacks. We show that our proposed defense is capable of withstanding adversarial interference injected into RF signals while maintaining false positive detection rates on CNNs and RNNs as low as 3%.

12 citations


Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the authors used ensemble learning of multiple CNN models for human activity recognition: multiple ensembles of models trained on a publicly available dataset were created, the best of which achieved an accuracy of 94%.
Abstract: Human Activity Recognition is a field concerned with the recognition of physical human activities based on the interpretation of sensor data, including one-dimensional time series data. Traditionally, hand-crafted features are relied upon to develop the machine learning models for activity recognition. However, that is a challenging task and requires a high degree of domain expertise and feature engineering. With the development in deep neural networks, it is much easier as models can automatically learn features from raw sensor data, yielding improved classification results. In this paper, we present a novel approach for human activity recognition using ensemble learning of multiple convolutional neural network (CNN) models. Three different CNN models are trained on the publicly available dataset and multiple ensembles of the models are created. The ensemble of the first two models gives an accuracy of 94% which is better than the methods available in the literature.

11 citations


Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, a fully distributed algorithm for fair and dynamic optimal transport with provable convergence is proposed using the alternating method of multipliers; each corresponding pair of resource supplier and receiver computes its own solution and iteratively updates the transport scheme through negotiation, without requiring a central planner.
Abstract: Optimal transport is a framework that facilitates the most efficient allocation of a limited amount of resources. However, the most efficient allocation scheme does not necessarily preserve the most fairness. In this paper, we establish a framework which explicitly considers the fairness of dynamic resource allocation over a network with heterogeneous participants. As computing the transport strategy in a centralized fashion requires significant computational resources, it is imperative to develop a computationally light algorithm that can be applied to large-scale problems. To this end, we develop a fully distributed algorithm for fair and dynamic optimal transport with provable convergence using the alternating method of multipliers. In the designed algorithm, each corresponding pair of resource supplier and receiver computes its own solution and iteratively updates the transport scheme through negotiation, without requiring a central planner. The distributed algorithm can yield a fair and efficient resource allocation mechanism over a network. We corroborate the obtained results through case studies.

10 citations


Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, the authors explore the problem of deriving a posteriori probabilities of being defective for the members of a population in the non-adaptive group testing framework; the technique relies on a trellis representation of the test constraints.
Abstract: We explore the problem of deriving a posteriori probabilities of being defective for the members of a population in the non-adaptive group testing framework. Both noiseless and noisy testing models are addressed. The technique, which relies on a trellis representation of the test constraints, can be applied efficiently to moderate-size populations. The complexity of the approach is discussed and numerical results on the false positive probability vs. false negative probability trade-off are presented.
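For intuition on what such a posteriori marginals look like, here is a brute-force sketch (an assumption for illustration, not the paper's trellis algorithm) that enumerates every defect pattern of a tiny population under noiseless OR-type group tests and returns the posterior probability that each item is defective.

```python
# Brute-force a posteriori defect probabilities under noiseless OR-type tests.
from itertools import product

def posterior_marginals(n, p, pools, outcomes):
    """pools: list of index lists tested together; outcomes: 1 if pool positive."""
    weights = [0.0] * n
    total = 0.0
    for x in product([0, 1], repeat=n):           # candidate defect pattern
        consistent = all(
            int(any(x[i] for i in pool)) == out    # noiseless OR test model
            for pool, out in zip(pools, outcomes))
        if not consistent:
            continue
        w = 1.0
        for xi in x:                               # i.i.d. Bernoulli(p) prior
            w *= p if xi else (1 - p)
        total += w
        for i in range(n):
            if x[i]:
                weights[i] += w
    return [w / total for w in weights]

# Pool {0,1} negative, pool {1,2} positive => item 2 must be defective.
post = posterior_marginals(3, 0.1, [[0, 1], [1, 2]], [0, 1])
print([round(q, 3) for q in post])
```

This exhaustive version costs 2^n and only illustrates the quantity of interest; the trellis representation is precisely what makes the same computation tractable for moderate-size populations.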

9 citations


Proceedings ArticleDOI
Shaohan Wu
24 Mar 2021
TL;DR: In this article, the authors derive joint MAP/ML estimators for channel and impedance matrices in closed-form and develop a design principle leveraging a trade-off between channel estimation and impedance estimation, which depends on transmit diversity.
Abstract: Antenna impedance matching significantly affects the channel capacity of compact MIMO receivers. When antenna impedance is known to the receiver, channel capacity can be optimized. However, channel capacity may diminish, when antenna impedance varies due to time-varying near-field loading. This motivates impedance estimation in real-time. In this paper, we derive joint MAP/ML estimators for channel and impedance matrices in closed-form. As one result, we develop a design principle leveraging a trade-off between channel and impedance estimation, which depends on transmit diversity.

9 citations


Proceedings ArticleDOI
24 Mar 2021
TL;DR: Wang et al. as mentioned in this paper developed a trustable and distributed coordination system for distributed energy resources (DERs) using blockchain technology to enable trust between the aggregator and DERs.
Abstract: The fast growth of distributed energy resources (DERs), such as distributed renewables (e.g., rooftop PV panels), energy storage systems, electric vehicles, and controllable appliances, drives the power system toward a decentralized system with bidirectional power flow. The coordination of DERs through an aggregator, such as a utility, system operator, or a third-party coordinator, emerges as a promising paradigm. However, it is not well understood how to enable trust between the aggregator and DERs to integrate DERs efficiently. In this paper, we develop a trustable and distributed coordination system for DERs using blockchain technology. We model various DERs and formulate a cost minimization problem for DERs to optimize their energy trading, scheduling, and demand response. We use the alternating direction method of multipliers (ADMM) to solve the problem in a distributed fashion. To implement the distributed algorithm in a trustable way, we design a smart contract to update multipliers and communicate with DERs in a blockchain network. We validate our design by experiments using real-world data, and the simulation results demonstrate the effectiveness of our algorithm.
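The ADMM decomposition can be illustrated with a minimal consensus sketch. The quadratic costs and function names below are assumptions for illustration, not the paper's DER model: each agent holds a private cost f_i(x) = (x - a_i)^2, and consensus ADMM drives the shared variable to the minimizer of the sum, i.e., the mean of the a_i.

```python
# Minimal consensus-ADMM sketch with private quadratic agent costs.
def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = [0.0] * n          # local primal variables
    z = 0.0                # global (consensus) variable
    u = [0.0] * n          # scaled dual variables, one per agent
    for _ in range(iters):
        # local x-update: argmin (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n   # averaging z-update
        u = [u[i] + x[i] - z for i in range(n)]      # dual ascent
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # converges toward the mean, 3.0
```

In the paper's setting the z-update and dual bookkeeping are what the smart contract would perform on-chain, while each DER solves only its own local subproblem.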

9 citations


Proceedings ArticleDOI
24 Mar 2021
TL;DR: This study incorporates a real-time aerial view using UAVs with three key improvements and automates UAV and ERV allocation while satisfying constraints between these vehicles using a distributed constraint optimization problem (DCOP) framework.
Abstract: While there has been significant progress on statistical theories in the information community, there is a lack of studies in information-theoretic distributed resource allocation to maximize information gain. With advanced technologies of unmanned aerial vehicles (UAVs) in response to corresponding revised FAA regulations, this study focuses on developing a new framework for utilizing UAVs in incident management. As a result of new computing technologies, predictive decision-making studies have recently improved emergency response vehicle (ERV) allocations for a sequence of incidents; however, these ground-based operations do not simultaneously capture network-wide information. This study incorporates a real-time aerial view using UAVs with three key improvements. First, aerial observations update the status of the freeway shoulder, allowing an ERV to safely travel at full speed. Second, observing parameters of the congestion shockwave provides accurate measurements of the true impact of an incident. Third, real-time information can be gathered on the clearance progress of an incident scene. We automate UAV and ERV allocation while satisfying constraints between these vehicles using a distributed constraint optimization problem (DCOP) framework. To find the optimal assignment of vehicles, the proposed model is formulated and solved using the Max-Sum approach. The system utility convergence is presented for different scenarios of grid size, number of incidents, and number of vehicles. We also present the solution of our model using the Distributed Stochastic Algorithm (DSA). DSA with exploration heuristics outperformed the Max-Sum algorithm when the probability threshold p=0.5, but degraded for higher values of p.

8 citations


Proceedings ArticleDOI
Nandi O. Leslie
24 Mar 2021
TL;DR: In this paper, an unsupervised learning approach was developed to monitor the normal behavior within the CAN bus data and detect malicious traffic in the SAE J1939 protocol for heavy-duty ground vehicles.
Abstract: In-vehicle networks remain largely unprotected from a myriad of vulnerabilities to failures caused by adversarial activities. Remote attacks on the SAE J1939 protocol based on the controller area network (CAN) bus for heavy-duty ground vehicles can lead to detectable changes in the physical characteristics of the vehicle. In this paper, I develop an unsupervised learning approach to monitor the normal behavior within the CAN bus data and detect malicious traffic. The J1939 data packets have some text-based features that I convert to numerical values. In addition, I propose an algorithm based on hierarchical agglomerative clustering that considers multiple approaches for linkages and pairwise distances between observations. I present prediction performance results to show the effectiveness of this ensemble algorithm. In addition to in-vehicle network security, this algorithm is also transferable to other cybersecurity datasets, including botnet attacks in traditional enterprise IP networks.

8 citations


Proceedings ArticleDOI
24 Mar 2021
TL;DR: This paper investigates how the probability distribution assumed by a Naive Bayes classifier and the statistical distribution of the underlying feature data affect the classifier's performance.
Abstract: This paper investigates the impact of the probability distribution of a Naive Bayes classifier and the statistical distribution of the underlying feature data on the classifier's performance. Typical Naive Bayes performance assumptions lack quantitative and rigorous evidence in the common literature, creating risk in rote application of Naive Bayes. This study investigates these performance assumptions to quantify where they are true, and the risk of maintaining those assumptions when utilizing Naive Bayes classifiers. Naive Bayes classifiers' exceptionally fast training times, performance, ease of implementation, and minimal required resources often make them candidates for early classification trials, especially in Natural Language Processing tasks such as sentiment analysis. It is frequently assumed that the performance of a Naive Bayes classifier is heavily reliant on the distribution of the underlying data. This assumption is noted in both standard documentation and academic research and has largely been accepted as truth with little verification. This paper outlines an experiment that tests this assumption with real-world sentiment analysis data. Naive Bayes classifiers were tested against non-Gaussian data, non-Gaussian feature-weighted data, Gaussian-like data, and synthetically generated Gaussian data to observe the relationship between classifier performance and data distribution. Initial findings suggested that while this assumption is partially true, there may be additional factors heavily correlated with Naive Bayes performance that are not strictly related to a feature's distribution.
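A tiny pure-Python Gaussian Naive Bayes makes the distribution assumption tangible: the classifier models each feature with a per-class Gaussian, so on non-Gaussian features that density model is misspecified. This is a generic textbook sketch, not the experiment in the paper.

```python
# Minimal Gaussian Naive Bayes: per-class feature means/variances, then
# classify by maximizing the (naive, feature-independent) Gaussian log-likelihood.
import math

def fit_gnb(X, y):
    """Per-class (mean, variance) for each feature."""
    params = {}
    for c in set(y):
        rows = [x for x, yi in zip(X, y) if yi == c]
        params[c] = []
        for col in zip(*rows):
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9
            params[c].append((mu, var))
    return params

def predict_gnb(params, x):
    def loglik(c):
        return sum(-0.5 * math.log(2 * math.pi * var)
                   - (xi - mu) ** 2 / (2 * var)
                   for xi, (mu, var) in zip(x, params[c]))
    return max(params, key=loglik)

X = [[1.0, 0.9], [1.1, 1.0], [0.9, 1.1], [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]]
y = [0, 0, 0, 1, 1, 1]
model = fit_gnb(X, y)
print(predict_gnb(model, [1.0, 1.0]), predict_gnb(model, [5.0, 5.0]))
```

Swapping the Gaussian likelihood for a different per-feature density is exactly the knob the paper's experiment probes: the "naive" independence structure stays fixed while the distributional fit varies.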

7 citations


Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the authors consider the time-varying Gaussian process bandit problem and derive regret guarantees for GP-UCB type algorithms, such as R-GP UCB and SW-GPUCB, under a Bayesian type regularity assumption.
Abstract: In this paper, we consider the time-varying Bayesian optimization problem. The unknown function at each time is assumed to lie in an RKHS (reproducing kernel Hilbert space) with a bounded norm. We adopt the general variation budget model to capture the time-varying environment, and the variation is characterized by the change of the RKHS norm. We adapt the restart and sliding window mechanism to introduce two GP-UCB type algorithms: R-GP-UCB and SW-GP-UCB, respectively. We derive the first (frequentist) regret guarantee on the dynamic regret for both algorithms. Our results not only recover previous linear bandit results when a linear kernel is used, but complement the previous regret analysis of time-varying Gaussian process bandit under a Bayesian-type regularity assumption, i.e., each function is a sample from a Gaussian process.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, the authors study the sequential resource allocation problem where a decision maker repeatedly allocates budgets between resources, and design combinatorial multi-armed bandit algorithms to solve this problem with discrete or continuous budgets.
Abstract: We study the sequential resource allocation problem where a decision maker repeatedly allocates budgets between resources. Motivating examples include allocating limited computing time or wireless spectrum bands to multiple users (i.e., resources). At each timestep, the decision maker should distribute its available budgets among different resources to maximize the expected reward, or equivalently to minimize the cumulative regret. In doing so, the decision maker should learn the value of the resources allocated for each user from feedback on each user's received reward. For example, users may send messages of different urgency over wireless spectrum bands; the reward generated by allocating spectrum to a user then depends on the message's urgency. We assume each user's reward follows a random process that is initially unknown. We design combinatorial multi-armed bandit algorithms to solve this problem with discrete or continuous budgets. We prove the proposed algorithms achieve logarithmic regrets under semi-bandit feedback.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the authors considered an unconstrained distributed optimization problem over a network of agents, in which some agents are adversarial and analyzed the effect of the adversarial agents on the convergence of the algorithm to the optimal solution.
Abstract: In this paper, we consider an unconstrained distributed optimization problem over a network of agents, in which some agents are adversarial. We solve the problem via a gradient-based distributed optimization algorithm and characterize the effect of the adversarial agents on the convergence of the algorithm to the optimal solution. In the attack model considered, agents locally perturb their iterates before broadcasting them to neighbors; we analyze both the case in which the adversarial agents cooperate in perturbing their estimates and the case where each adversarial agent acts independently. Based on the attack model adopted in the paper, we show that the solution converges to a neighborhood of the optimal solution whose size depends on the magnitude of the attack (perturbation) term. The analyses presented establish conditions under which the malicious agents have enough information to obstruct convergence of the non-adversarial agents to the optimal solution.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the authors consider the problem of detecting hardware Trojans in an Integrated Circuit (IC) from a game theoretic standpoint. And they consider the presence of multiple classes of Trojan types, with each class containing multiple Trojan types.
Abstract: In this paper, we consider the problem of detecting hardware Trojans in an Integrated Circuit (IC) from a game theoretic standpoint. The paper considers the presence of multiple classes of Trojans, with each class containing multiple Trojan types, and characterizes the Nash Equilibrium (NE) strategy for inserting a Trojan (from the perspective of a malicious entity) and detecting a Trojan (from the perspective of a defender) under consideration of the impact that an undetected Trojan has on the defender's system. The paper also models a sequential hardware Trojan testing game, where the defender tests for the presence of Trojans over time, and characterizes the NE strategy of such a game. Numerous simulation results are presented to gain insights into the game theoretic hardware Trojan testing techniques presented in the paper.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, a network-theoretic argument is presented which shows that harmonics at sufficiently high frequencies degrade quickly with spatial distance from the oscillation source, as compared to content at the fundamental frequency.
Abstract: Harmonics in synchrophasor measurements of forced-oscillation events in the power grid are used to support localization of oscillation sources. A network-theoretic argument is presented which shows that harmonics at sufficiently high frequencies degrade quickly with spatial distance from the oscillation source, as compared to content at the fundamental frequency. Then, harmonics in synchrophasor measurements are analyzed for three historical forced-oscillation events with known oscillation sources. These data analyses confirm that harmonics are measurable in forced-oscillation responses, and further that they generally show faster spatial degradation as compared to the fundamental-frequency content. Based on these observations, we suggest techniques for localizing forced-oscillation sources based on ratios between signal content at the harmonics and the fundamental frequency.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the authors outline some recent results on group testing for two models: the practical model and the theoretical model of very low prevalence with perfect tests, and show that simple algorithms can be outperformed at low prevalence and high sensitivity.
Abstract: The usual problem for group testing is this: For a given number of individuals and a given prevalence, how many tests $T^{\prime}$ are required to find every infected individual? In real life, however, the problem is usually different: For a given number of individuals, a given prevalence, and a limited number of tests $T$ much smaller than $T^{\prime}$ , how can these tests best be used? In this conference paper, we outline some recent results on this problem for two models. First, the ‘practical’ model, which is relevant for screening for COVID-19 and has tests that are highly specific but imperfectly sensitive, shows that simple algorithms can be outperformed at low prevalence and high sensitivity. Second, the ‘theoretical’ model of very low prevalence with perfect tests gives interesting new mathematical results.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: The Mean-Change Test (MCT) as discussed by the authors minimizes the worst-case detection delay over all post-change distributions as the false-alarm rate goes to zero.
Abstract: We study the problem of quickest detection of a change in the mean of an observation sequence, under the assumption that both the pre- and post-change distributions have bounded support. We first study the case where the pre-change distribution is known, and then study the extension where only the mean and variance of the pre-change distribution are known. In both cases, no knowledge of the post-change distribution is assumed other than that it has bounded support. For the case where the pre-change distribution is known, we derive a test that asymptotically minimizes the worst-case detection delay over all post-change distributions, as the false alarm rate goes to zero. We then study the limiting form of the optimal test as the gap between the pre- and post-change means goes to zero, which we call the Mean-Change Test (MCT). We show that the MCT can be designed with only knowledge of the mean and variance of the pre-change distribution. We validate our analysis through numerical results for detecting a change in the mean of a beta distribution. We also demonstrate the use of the MCT for pandemic monitoring.
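The flavor of a detector that needs only the pre-change mean and variance can be sketched with a generic one-sided CUSUM recursion. This is an illustration in the spirit of the MCT, not the paper's exact statistic, and the drift and threshold values below are arbitrary assumptions.

```python
# One-sided CUSUM-style detector for an upward mean shift, using only the
# pre-change mean and standard deviation to standardize observations.
def cusum_detect(xs, mu0, sigma0, drift=0.5, threshold=5.0):
    """Return the first index where the statistic crosses threshold, else -1."""
    w = 0.0
    for t, x in enumerate(xs):
        z = (x - mu0) / sigma0            # standardize with pre-change stats
        w = max(0.0, w + z - drift)       # CUSUM recursion with reference drift
        if w > threshold:
            return t
    return -1

stream = [0.0] * 20 + [2.0] * 10          # mean jumps from 0 to 2 at t = 20
print(cusum_detect(stream, mu0=0.0, sigma0=1.0))
```

Raising the threshold lowers the false-alarm rate at the cost of detection delay, which is exactly the asymptotic trade-off the paper optimizes over all bounded-support post-change distributions.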

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, a novel energy-efficient beamforming and DOA estimation scheme called the RLS-MUSIC algorithm is proposed; it combines a linear smart antenna array geometry (the most widely used array geometry), the Recursive Least Squares (RLS) adaptive beamforming algorithm, and the Multiple Signal Classification (MUSIC) DOA estimation technique as the reference input methods for the new energy-efficient scheme.
Abstract: This paper proposes a novel energy-efficient beamforming and DOA estimation scheme called the RLS-MUSIC algorithm. First, we study different adaptive beamforming algorithms and received-signal Direction-of-Arrival (DOA) estimation techniques for wireless communication network applications with reference to their effect on energy efficiency. Taking the linear smart antenna array geometry, which is the most widely used antenna array geometry, the Recursive Least Squares (RLS) adaptive beamforming algorithm and the Multiple Signal Classification (MUSIC) DOA estimation technique are identified as the reference input methods to model the new energy-efficient scheme. Using an energy model, which is a function of beam-width, signal transmission strength (range), and signal-to-noise-interference ratio (SNIR), we evaluate the total amount of energy spent during beam formation. Finally, the energy gain due to the RLS-MUSIC scheme is compared with that of RLS alone. The results show that RLS-MUSIC-based access-point/base-station antenna systems consume less energy during beam formation and generate beams with narrower beam-width, lower side-lobe levels, and deeper nulls.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, a mixture of object detection and attention-enriched deep learning models is used for image captioning: a convolutional neural network extracts the image features, and a Long Short-Term Memory (LSTM) network with an attention mechanism generates the caption.
Abstract: This paper focuses on developing semantic image caption generation techniques that leverage image and scene understanding. More particularly, we are interested in addressing image captioning by developing a mixture of object detection and attention-enriched deep learning models. To extract the image features, a Convolutional Neural Network (CNN) is used, and then a Long Short-Term Memory (LSTM) network, an extended form of recurrent neural network, with attention-enrichment is adopted to generate the caption. We implement image captioning by considering detected objects from the image scene, and then by integrating an attention mechanism for caption generation. This can have multiple advantages from accuracy and semantics perspectives. The objective of this paper is to introduce a combined pipeline that employs several variant models for semantic caption generation. Four variant models are proposed, all of them implemented and trained on the COCO and Flickr30k datasets, and then tested on a subset of the COCO dataset. Results of the different models were evaluated using a semantic similarity analysis between the generated captions and the actual ground truth captions. Our framework helps in a deeper understanding of images and decision making in diverse use-cases such as innovative and distinctive responses from multimodal data, and in analyzing and monitoring crowdsourced images from social media and other sources.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the authors proposed a group updating scheme, akin to group testing, which updates a central location about the status of each member of the population by appropriately grouping their individual status.
Abstract: We consider two closely related problems: anomaly detection in sensor networks and testing for infections in human populations. In both problems, we have $n$ nodes (sensors, humans), and each node exhibits an event of interest (anomaly, infection) with probability $p$ . We want to keep track of the anomaly/infection status of all nodes at a central location. We develop a group updating scheme, akin to group testing, which updates a central location about the status of each member of the population by appropriately grouping their individual status. Unlike group testing, which uses the expected number of tests as a metric, in group updating, we use the expected age of information at the central location as a metric. We determine the optimal group size to minimize the age of information. We show that, when $p$ is small, the proposed group updating policy yields smaller age compared to a sequential updating policy.
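The abstract contrasts the expected-tests metric of classical group testing with the age metric used here. As a point of comparison on the classical side, Dorfman's expected number of tests per individual can be minimized over the group size k; this sketch shows that classical calculation, not the paper's age-optimal group size.

```python
# Dorfman two-stage group testing: expected tests per individual vs. group size.
def dorfman_tests_per_item(p, k):
    """Expected tests per individual with group size k and prevalence p:
    one pooled test shared by k people, plus k retests if the pool is positive."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def best_group_size(p, kmax=64):
    """Group size (>= 2) minimizing expected tests per individual."""
    return min(range(2, kmax + 1), key=lambda k: dorfman_tests_per_item(p, k))

print(best_group_size(0.01))   # small prevalence favors larger groups
print(best_group_size(0.1))
```

The same shape of optimization, a per-item cost minimized over the group size, is what the paper carries out with expected age of information in place of expected tests.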

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, the authors proposed and developed an analytical modeling technique to characterize the AoI to efficiently support mURLLC services over 6G mobile wireless networks by developing a stochastic hybrid system (SHS) model.
Abstract: Integrating the ultra-reliable and low-latency communication (URLLC) with massive access, the massive-URLLC (mURLLC) in the sixth generation (6G) wireless networks aims at providing a wide range of delay-sensitive real-time services and applications by satisfying users' stringent requirements on the delay-bound and error rate. The age of information (AoI) theory characterizes the freshness of information, which measures the time elapsed since the generation instant of the latest received information update, and thus, has been recognized to be able to analyze the time-critical information's transmission latency in mURLLC networks. However, how to accurately characterize the AoI for 6G mURLLC networks has neither been well understood nor thoroughly studied. To overcome this challenge, in this paper we propose and develop an analytical modeling technique to characterize the AoI to efficiently support mURLLC services over 6G mobile wireless networks. First, we develop a stochastic hybrid system (SHS) model to track the AoI's evolution in the M/M/k queueing mURLLC networks. Second, we analyze AoI dynamics in the proposed SHS model, including deriving closed-form expressions for moments of the AoI and the state probability. Third, we prove the Lyapunov stability of the AoI under our proposed SHS model. Finally, we validate and evaluate our derived results of the AoI evolution in mURLLC networks through numerical analyses.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, the authors proposed the use of a Symmetric Simplicial algorithm that can be trained to perform many morphological computations and even more complex functions, and presented the training of a certain topology that uses Symmetric Simplicials instead of morphological functions, along with the classification accuracy achieved during the training process.
Abstract: Convolutional Neural Networks are capable of performing many complex tasks such as image classification. Recently, morphological functions were introduced as a replacement for the first convolutional layers in a network, using their non-linearities to achieve better accuracy for classification neural networks; in most cases, however, these functions are fixed beforehand and cannot be trained. We propose the use of a Symmetric Simplicial algorithm that can be trained to perform many morphological computations and even more complex functions. We present the training of a certain topology that uses Symmetric Simplicials instead of morphological functions and the classification accuracy achieved during the training process.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, a novel interpretation of Markov Decision Processes (MDP) from the online optimization viewpoint is provided, where the policy of the MDP is viewed as the decision variable while the corresponding value function is treated as payoff feedback from the environment.
Abstract: This work provides a novel interpretation of Markov Decision Processes (MDP) from the online optimization viewpoint. In such an online optimization context, the policy of the MDP is viewed as the decision variable while the corresponding value function is treated as payoff feedback from the environment. Based on this interpretation, we construct a Blackwell game induced by the MDP, which bridges the gap among regret minimization, Blackwell approachability theory, and learning theory for MDP. Specifically, based on the approachability theory, we propose 1) Blackwell value iteration for offline planning and 2) Blackwell Q-learning for online learning in MDP, both of which are shown to converge to the optimal solution. Our theoretical guarantees are corroborated by numerical experiments.
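For contrast with the proposed Blackwell value iteration, here is standard value iteration on a tiny two-state MDP; the MDP itself is made up for illustration and is not from the paper.

```python
# Standard value iteration: V(s) <- max_a [ R(s,a) + gamma * sum_t P(s,a,t) V(t) ].
def value_iteration(P, R, gamma=0.9, iters=500):
    """P[s][a][t] transition probs, R[s][a] rewards; returns V and greedy policy."""
    n = len(P)
    V = [0.0] * n
    for _ in range(iters):
        V = [max(R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(n))
                 for a in range(len(P[s]))) for s in range(n)]
    policy = [max(range(len(P[s])),
                  key=lambda a: R[s][a]
                  + gamma * sum(P[s][a][t] * V[t] for t in range(n)))
              for s in range(n)]
    return V, policy

# State 1 pays reward 1; action 1 moves toward state 1, action 0 moves away.
P = [[[1.0, 0.0], [0.0, 1.0]],   # from state 0: a0 stays, a1 goes to state 1
     [[1.0, 0.0], [0.0, 1.0]]]   # from state 1: a0 goes to state 0, a1 stays
R = [[0.0, 0.0], [1.0, 1.0]]     # reward 1 for any action taken in state 1
V, policy = value_iteration(P, R)
print(policy)                     # optimal: head to state 1 and stay
```

In the paper's framing, this classical fixed-point iteration is replaced by an approachability-based update; the sketch only pins down the baseline it converges to, V = [9, 10] here with gamma = 0.9.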

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, a novel architecture, Compositional Generative Networks (Compositional Nets), is proposed that is innately robust to partial occlusion and to adversarial patches introduced in the images to target the weak points of the algorithm.
Abstract: Current Computer Vision algorithms for classifying objects, such as Deep Nets, lack robustness to image changes which, although perceptible, would not fool a human observer. We quantify this by showing how the performance of Deep Nets degrades badly on images where the objects are partially occluded, and degrades even further in more challenging and adversarial situations where, for example, patches are introduced in the images to target the weak points of the algorithm. To address this problem we develop a novel architecture, called Compositional Generative Networks (Compositional Nets), which is innately robust to these types of image changes. This architecture replaces the fully connected classification head of the deep network with a generative compositional model that includes an outlier process. This enables it, for example, to localize occluders and subsequently focus on the non-occluded parts of the object. We conduct classification experiments in a variety of situations including artificially occluded images, real images of partially occluded objects from the MS-COCO dataset, and adversarial patch attacks on PASCAL3D+ and the German Traffic Sign Recognition Benchmark. Our results show that Compositional Nets are much more robust to occlusion and adversarial attacks, like patch attacks, compared to standard Deep Nets, even those which use data augmentation and adversarial training. Compositional Nets can also accurately localize these image changes, despite being trained only with class labels. We argue that testing vision algorithms in an adversarial manner which probes for the weaknesses of the algorithms, e.g., by patch attacks, is a more challenging way to evaluate them than standard methods, which simply test them on a random set of samples, and that Compositional Nets have the potential to overcome such challenges.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this paper, it is shown that a single-input, single-output (SISO) system with a Chen-Fliess series representation, whose generating series has a well-defined relative degree, admits a notion of universal zero dynamics, from which an input rendering the system's output exactly zero can be designed.
Abstract: Given a single-input, single-output (SISO) system with a Chen-Fliess series representation whose generating series has a well defined relative degree, it is shown that there is a notion of universal zero dynamics that describes a set of dynamics evolving on a locally convex (infinite dimensional) Lie group so as to render the system's output exactly zero. Minimum phase in this setting is defined in terms of the boundedness of the applied input which zeros the output. As an application, it is shown that one can design a zero dynamics attack on cyber-infrastructure using only an estimate of the plant's generating series. That is, detailed knowledge of the plant's internal dynamics is not needed.
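For context, the Chen-Fliess series referenced above is the standard input-output functional series (the general textbook definition, not anything specific to this paper's construction):

```latex
\[
  y(t) \;=\; F_c[u](t) \;=\; \sum_{\eta \in X^*} (c,\eta)\, E_\eta[u](t,t_0),
\]
where $X = \{x_0, x_1, \ldots, x_m\}$ is an alphabet, $X^*$ is the set of all
words over $X$, $(c,\eta) \in \mathbb{R}$ are the coefficients of the
generating series $c$, and the iterated integrals are defined recursively by
$E_\emptyset[u] = 1$ and
\[
  E_{x_i \bar{\eta}}[u](t,t_0) \;=\; \int_{t_0}^{t} u_i(\tau)\,
  E_{\bar{\eta}}[u](\tau,t_0)\, d\tau, \qquad u_0(\tau) := 1.
\]
```

The relative degree of $c$ generalizes the classical notion of how many times the output must be differentiated before the input appears explicitly.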

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the authors align the needs of humans and autonomous systems in a framework called Autonomy's Hierarchy of Needs, which provides a smart city ecosystem design framework for the Artemis Base Camp.
Abstract: Space habitats such as NASA's proposed Artemis Base Camp will house both astronauts and autonomous systems. The Artemis Base Camp's infrastructure could provide supporting services to its tenants to optimize their function, which calls for a smart city ecosystem. Maslow's Hierarchy of Needs has been used as a framework to inform human-centric smart city design and feature prioritization; however, autonomous systems have different needs from humans. This paper aligns the needs of humans and autonomous systems in a framework called Autonomy's Hierarchy of Needs, which provides a smart city ecosystem design framework for the Artemis Base Camp.

Proceedings ArticleDOI
24 Mar 2021
TL;DR: GCR-MHNE as mentioned in this paper employs a Multi-View Heterogeneous Network Embedding method to generate personalized recommendations, which exploits semantic relations between papers based on citations, venue information, topical relevance, authors' information, and relevant labels to learn their vector representations.
Abstract: The enormous number of research papers on the Web has motivated researchers to propose models that could assist users with personalized citation recommendations. Recently, Citation Recommendation (CR) models applying Network Representation Learning (NRL) techniques have shown promising results. Still, current NRL-based models are limited in how they employ salient factors and relations between the objects of Multi-view Heterogeneous Networks (MHNs), and hence they fail to capture researchers' preferences. Moreover, these models cannot exploit the heterogeneity in the networks and therefore suffer from sparsity problems. To overcome these problems, we propose the GCR-MHNE model, which employs a Multi-View Heterogeneous Network Embedding method to generate personalized recommendations. Specifically, it exploits semantic relations between papers based on citations, venue information, topical relevance, authors' information, and relevant labels to learn their vector representations. Moreover, the model captures the most influential features related to each semantic relation by employing an attention mechanism. Compared to its counterparts, GCR-MHNE brings 6% and 7% improvements on openly available datasets in terms of the Mean Average Precision and Normalized Discounted Cumulative Gain metrics, respectively. Furthermore, the proposed model outperforms its counterparts when the networks are sparse.
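The attention step described above, weighting relation-specific embeddings (citations, venue, topic, authors, labels) by their influence before fusing them, can be sketched generically. The query vector, dimensions, and data below are all hypothetical; this is a standard multi-view attention pattern, not GCR-MHNE's exact architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_views(view_embs, q):
    """Attention-fuse K relation-specific embeddings (K, d) into one (d,)
    node representation. `q` plays the role of a learned query vector (here
    just supplied). Returns the fused vector and the attention weights."""
    scores = view_embs @ q            # (K,) relevance of each semantic relation
    w = softmax(scores)               # attention weights, sum to 1
    return w @ view_embs, w

rng = np.random.default_rng(1)
views = rng.normal(size=(5, 8))       # e.g. citation/venue/topic/author/label views
query = rng.normal(size=8)
fused, weights = fuse_views(views, query)
```

The softmax weights make the contribution of each semantic relation explicit, which is also what lets such models report which relations mattered most for a given recommendation.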

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the authors presented a method to compute the channel capacity of an observed (partially known) discrete memoryless channel (DMC) using a probably approximately correct (PAC) bound.
Abstract: This paper presents a method to compute the channel capacity of an observed (partially known) discrete memoryless channel (DMC) using a probably approximately correct (PAC) bound. Given $N$ independently and identically distributed (i.i.d.) input-output sample pairs, we define a compound DMC with convex sublevel-sets to constrain the channel output uncertainty with high probability. We then numerically solve a ‘K-way’ convex optimization to determine an achievable information rate $R_{L}(N)$ across the channel that holds with a specified high probability. Our approach provides non-asymptotic ‘worst-case’ convergence of $R_{L}(N)$ to the channel capacity $C$ at the rate of $O(\sqrt{\log (\log (N)) / N})$.
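The achievable-rate computation above rests on evaluating capacity for a fixed DMC; the classical Blahut-Arimoto iteration is the standard routine for that inner problem. A minimal numpy sketch of the known-channel core follows; it does not implement the paper's compound-DMC/PAC machinery.

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Capacity (in nats) of a known DMC with row-stochastic transition
    matrix W[x, y] = P(y | x), via the classical Blahut-Arimoto iteration.
    (The paper wraps a capacity computation of this kind in a PAC bound over
    a channel uncertainty set; this sketch covers only the fixed channel.)"""
    n_x = W.shape[0]
    p = np.full(n_x, 1.0 / n_x)                 # input distribution, start uniform
    for _ in range(iters):
        q = p @ W                               # induced output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            kl = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
        p = p * np.exp(kl)                      # multiplicative update
        p /= p.sum()
    q = p @ W
    with np.errstate(divide="ignore", invalid="ignore"):
        kl = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
    return float(p @ kl)                        # mutual information at the fixed point

# binary symmetric channel with crossover 0.1: C = log 2 - H(0.1) nats
eps = 0.1
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
C = blahut_arimoto(W)
```

For the symmetric channel the uniform input is already optimal, so the iteration converges immediately; for general channels the returned value increases monotonically toward capacity.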

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the tradeoff between the communication cost and control performance was studied in an LQG control system where one of two feedback channels is discrete and incurs a communication cost.
Abstract: In this work, we study an LQG control system in which one of two feedback channels is discrete and incurs a communication cost. We assume that a decoder (co-located with the controller) can make noiseless measurements of a subset of the state vector (referred to as side information), while a remote encoder (co-located with a sensor) can make arbitrary measurements of the entire state vector but must convey its measurements to the decoder over a noiseless binary channel. Use of the channel incurs a communication cost, quantified as the time-averaged expected length of a prefix-free binary codeword. We study the tradeoff between the communication cost and control performance. The formulation motivates a constrained directed information minimization problem, which can be solved via convex optimization. Using this optimization, we propose a quantizer design and a subsequent achievability result.
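The rate/performance tradeoff the paper formalizes can be seen in a toy simulation: a scalar plant, a uniform quantizer at the encoder, and certainty-equivalent control at the decoder. All parameters below are assumed for illustration; this is not the paper's directed-information-optimal quantizer design.

```python
import numpy as np

def run(delta, a=1.2, w_std=0.1, T=20000, seed=0):
    """Scalar toy LQG loop x_{k+1} = a*x_k + u_k + w_k. The encoder sends a
    uniformly quantized state (step `delta`); the controller applies the
    certainty-equivalent input u = -a*xhat. Returns (average state cost,
    empirical entropy of the quantizer index in bits) as a crude proxy for
    the control-performance / communication-cost pair."""
    rng = np.random.default_rng(seed)
    x, cost, idxs = 0.0, 0.0, []
    for _ in range(T):
        i = int(np.round(x / delta))     # quantizer index sent over the channel
        idxs.append(i)
        u = -a * (i * delta)             # control computed from the decoded state
        x = a * x + u + rng.normal(0.0, w_std)
        cost += x * x
    _, counts = np.unique(idxs, return_counts=True)
    p = counts / len(idxs)
    return cost / T, float(-(p * np.log2(p)).sum())

cost_fine, rate_fine = run(delta=0.05)   # fine quantizer: low cost, high rate
cost_coarse, rate_coarse = run(delta=0.5)  # coarse quantizer: high cost, low rate
```

A finer quantizer buys lower quadratic cost at the price of a longer (higher-entropy) codeword, which is exactly the tradeoff curve the paper characterizes via directed information.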

Proceedings ArticleDOI
24 Mar 2021
TL;DR: In this article, the authors demonstrate cross-validation techniques for detecting spoofing attacks on the sensor data in autonomous driving and demonstrate the applicability of classical mobile robotics algorithms and hardware security primitives in defending autonomous vehicles from targeted cyber attacks.
Abstract: Advances in artificial intelligence, machine learning, and robotics have profoundly impacted the field of autonomous navigation and driving. However, sensor spoofing attacks can compromise critical components and the control mechanisms of mobile robots. Therefore, understanding vulnerabilities in autonomous driving and developing countermeasures remains imperative for the safety of unmanned vehicles. Hence, in this work we demonstrate cross-validation techniques for detecting spoofing attacks on sensor data in autonomous driving. First, we discuss how visual and inertial odometry (VIO) algorithms can provide a root-of-trust during navigation. Then, we develop examples of sensor data spoofing attacks using an open-source driving dataset. Next, we design an attack detection technique using VIO algorithms that cross-validates the navigation parameters using the IMU and the visual data. We then consider hardware-dependent attack survival mechanisms that support an autonomous system during an attack. Finally, we provide an example of a spoofing survival technique using on-board hardware oscillators. Our work demonstrates the applicability of classical mobile robotics algorithms and hardware security primitives in defending autonomous vehicles against targeted cyber attacks.
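The cross-validation idea above, comparing a trusted VIO track against a potentially spoofed sensor stream and alarming on large residuals, can be sketched with synthetic data. The trajectory, noise level, and threshold below are all assumed for illustration and are not taken from the paper's dataset or detector.

```python
import numpy as np

def spoof_alarm(vio_pos, gps_pos, thresh=3.0):
    """Flag time steps where the GPS track disagrees with the visual-inertial
    odometry estimate by more than `thresh` meters. A generic residual-based
    cross-validation test in the spirit of the paper's detector."""
    residual = np.linalg.norm(vio_pos - gps_pos, axis=1)
    return residual > thresh

# synthetic straight-line trajectory; the GPS stream is spoofed after t = 50
t = np.arange(100)
vio = np.stack([t * 1.0, np.zeros_like(t, dtype=float)], axis=1)  # trusted VIO
gps = vio + np.random.default_rng(2).normal(0.0, 0.5, size=vio.shape)
gps[50:, 1] += 10.0                   # injected lateral offset (the attack)

alarm = spoof_alarm(vio, gps)         # fires only on the spoofed segment
```

In practice the threshold would be set from the VIO drift and sensor noise statistics so that benign disagreement stays below it, which is what makes VIO usable as a root-of-trust here.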