Author

Eduardo L. Pasiliao

Bio: Eduardo L. Pasiliao is an academic researcher at the Air Force Research Laboratory. He has contributed to research on topics including software-defined radio and stochastic programming. He has an h-index of 20 and has co-authored 162 publications receiving 1,516 citations. Previous affiliations of Eduardo L. Pasiliao include the University of Florida and the University of Central Florida.


Papers
Journal Article
TL;DR: It is demonstrated that, for solving a class of convex composite optimization problems with linear constraints, the rate of convergence of AADMM is better than that of linearized ADMM in terms of its dependence on the Lipschitz constant of the smooth component.
Abstract: We present a novel framework, accelerated alternating direction method of multipliers (AADMM), for the acceleration of linearized ADMM. The basic idea of AADMM is to incorporate a multistep acceleration scheme into linearized ADMM. We demonstrate that, for solving a class of convex composite optimization problems with linear constraints, the rate of convergence of AADMM is better than that of linearized ADMM in terms of its dependence on the Lipschitz constant of the smooth component. Moreover, AADMM can handle an unbounded feasible region, as long as the corresponding saddle point problem has a solution. A backtracking algorithm is also proposed to improve practical performance.
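For orientation, the following is a minimal sketch of the linearized ADMM baseline that AADMM accelerates, applied to a toy lasso-type instance. The problem data, step sizes, and variable names are illustrative assumptions rather than anything taken from the paper, and the multistep acceleration scheme itself is only indicated in a comment.

```python
# Minimal sketch of linearized ADMM (the baseline that AADMM accelerates) for
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1,  written as  min f(x) + g(z)  s.t.  x = z.
# The instance, step sizes, and names below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
b = rng.standard_normal(60)
lam, rho = 0.1, 1.0

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
tau = 1.0 / L                          # proximal step size tied to L
n = A.shape[1]
x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for k in range(500):
    grad = A.T @ (A @ x - b)                              # gradient of the smooth part f
    # x-step: minimize the linearization of f plus proximal and augmented terms
    x = (x / tau + rho * (z - u) - grad) / (1.0 / tau + rho)
    z = soft_threshold(x + u, lam / rho)                  # z-step: exact prox of the l1 term
    u = u + x - z                                         # multiplier (dual) update

# AADMM wraps an update of this kind in a multistep (Nesterov-type) averaging
# scheme over the iterates, which is what weakens the dependence on L; that
# scheme is not reproduced here.
```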

228 citations

Journal Article
TL;DR: This paper designs and implements a generative model that learns the sample space of the I/Q values of known transmitters and uses the learned representation to generate signals that imitate the transmissions of these transmitters.
Abstract: Recent advances in wireless technologies have led to several autonomous deployments of wireless networks. As nodes across distributed networks must co-exist, it is important that all transmitters and receivers are aware of their radio frequency (RF) surroundings so that they can adapt their transmission and reception parameters to best suit their needs. To this end, machine learning techniques have become popular as they can learn, analyze and predict the RF signals and associated parameters that characterize the RF environment. However, in the presence of adversaries, malicious activities such as jamming and spoofing are inevitable, making most machine learning techniques ineffective in such environments. In this paper we propose the Radio Frequency Adversarial Learning (RFAL) framework for building a robust system to identify rogue RF transmitters by designing and implementing a generative adversarial net (GAN). We exploit transmitter-specific “signatures” such as the in-phase (I) and quadrature (Q) imbalance (i.e., the I/Q imbalance) present in all transmitters, by learning feature representations using a deep neural network that takes the I/Q data from received signals as input. After detection and elimination of the adversarial transmitters, RFAL further uses this learned feature embedding as a “fingerprint” for categorizing the trusted transmitters. More specifically, we implement a generative model that learns the sample space of the I/Q values of known transmitters and uses the learned representation to generate signals that imitate the transmissions of these transmitters. We program 8 universal software radio peripheral (USRP) software-defined radios (SDRs) as trusted transmitters and collect “over-the-air” raw I/Q data from them using a Realtek software-defined radio (RTL-SDR) in a laboratory setting. We also implement a discriminator model that discriminates between the trusted transmitters and the counterfeit ones with 99.9% accuracy and is trained in the GAN framework using data from the generator. Finally, after elimination of the adversarial transmitters, the trusted transmitters are classified using a convolutional neural network (CNN), a fully connected deep neural network (DNN) and a recurrent neural network (RNN) to demonstrate the building of an end-to-end robust transmitter identification system with RFAL. Experimental results reveal that the CNN, DNN, and RNN are able to correctly distinguish between the 8 trusted transmitters with 81.6%, 94.6% and 97% accuracy, respectively. We also show that better “trusted transmission” classification accuracy is achieved for all three types of neural networks when data from two different types of transmitters (different manufacturers) are used, rather than data from the same type of transmitter (same manufacturer).
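For readers unfamiliar with the GAN setup described above, the following is a minimal PyTorch sketch of the general idea only: a generator that produces synthetic I/Q feature vectors and a discriminator trained to separate captured from generated samples. The layer sizes, dimensions, and random stand-in data are placeholders and do not reflect the RFAL architectures or dataset.

```python
# Toy GAN sketch in the spirit of RFAL: a generator produces synthetic I/Q
# feature vectors and a discriminator separates real from generated samples.
# All shapes, layer sizes, and the random "I/Q" tensors are placeholders.
import torch
import torch.nn as nn

IQ_DIM, NOISE_DIM, BATCH = 256, 64, 64   # e.g. 128 complex samples -> 256 reals

G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                  nn.Linear(128, IQ_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(IQ_DIM, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))      # raw logit: real vs. generated

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_iq = torch.randn(512, IQ_DIM)        # stand-in for captured I/Q frames

for step in range(200):
    real = real_iq[torch.randint(0, 512, (BATCH,))]
    fake = G(torch.randn(BATCH, NOISE_DIM))

    # Discriminator step: push real samples toward label 1, generated toward 0.
    d_loss = (bce(D(real), torch.ones(BATCH, 1)) +
              bce(D(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make generated samples look real to D.
    g_loss = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# In RFAL, the trained discriminator is what flags counterfeit transmitters,
# and the learned feature embedding is then reused to fingerprint trusted ones.
```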

110 citations

Journal Article
01 Oct 2015 - Networks
TL;DR: This study considers a class of critical node detection problems that involves minimization of a distance‐based connectivity measure of a given unweighted graph via the removal of a subset of nodes subject to a budgetary constraint and develops an effective exact algorithm that iteratively solves a series of simpler IPs to obtain an optimal solution for the original problem.
Abstract: This study considers a class of critical node detection problems that involves minimization of a distance-based connectivity measure of a given unweighted graph via the removal of a subset of nodes (referred to as critical nodes) subject to a budgetary constraint. The distance-based connectivity measure of a graph is assumed to be a function of the actual pairwise distances between nodes in the remaining graph (e.g., graph efficiency, Harary index, characteristic path length, residual closeness), rather than simply whether nodes are connected or not, a typical assumption in the literature. We derive linear integer programming (IP) formulations, along with additional enhancements, aimed at improving the performance of standard solvers. For handling larger instances, we develop an effective exact algorithm that iteratively solves a series of simpler IPs to obtain an optimal solution for the original problem. The edge-weighted generalization is also considered, which results in some interesting implications for distance-based clique relaxations, namely, s-clubs. Finally, we conduct extensive computational experiments with real-world and randomly generated network instances under various settings that reveal interesting insights and demonstrate the advantages and limitations of the proposed approach. In particular, one important conclusion of our work is that the vulnerability of real-world networks to targeted attacks can be significantly more pronounced than what can be estimated by centrality-based heuristic methods commonly used in the literature. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(3), 170-195, 2015
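A generic statement of this problem class is sketched below; the notation is chosen here for illustration and is not the paper's IP formulation.

```latex
% Generic distance-based critical node problem (illustrative notation only):
% G = (V, E) is the graph, S the set of deleted ("critical") nodes, B the
% deletion budget, d_{ij}(G - S) the shortest-path distance between i and j
% after deletion, and h a nonincreasing function of distance
% (e.g., h(d) = 1/d recovers graph efficiency / the Harary index).
\begin{equation*}
  \min_{S \subseteq V,\ |S| \le B} \;\; \sum_{\{i,j\} \subseteq V \setminus S} h\bigl(d_{ij}(G - S)\bigr)
\end{equation*}
```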

80 citations

Journal Article
TL;DR: This study presents an integer programming framework for minimizing the connectivity and cohesiveness properties of a given graph by removing nodes and edges subject to a joint budgetary constraint and demonstrates that this approach encompasses several other models existing in the literature, including minimization of the total number of connected node pairs.
Abstract: This study presents an integer programming framework for minimizing the connectivity and cohesiveness properties of a given graph by removing nodes and edges subject to a joint budgetary constraint. The connectivity and cohesiveness metrics are assumed to be general functions of sizes of the remaining connected components and node degrees, respectively. We demonstrate that our approach encompasses, as special cases (possibly, under some mild conditions), several other models existing in the literature, including minimization of the total number of connected node pairs, minimization of the largest connected component size, and maximization of the number of connected components. We discuss computational complexity issues, derive linear mixed integer programming (MIP) formulations, and describe additional modeling enhancements aimed at improving the performance of MIP solvers. We also conduct extensive computational experiments with real-life and randomly generated network instances under various settings that reveal interesting insights and demonstrate advantages and limitations of the proposed framework.
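As a small illustration (and not the paper's MIP approach), the special-case metrics named above can be evaluated for a candidate node/edge deletion with a few lines of NetworkX; the graph and the deleted sets are arbitrary placeholders.

```python
# Evaluate the special-case metrics mentioned above after deleting a candidate
# set of nodes and edges. This only evaluates the metrics for one candidate
# deletion; the paper optimizes such objectives exactly via MIP.
import networkx as nx

G = nx.karate_club_graph()              # stand-in network
deleted_nodes = {0, 33}                 # illustrative, not an optimal choice
deleted_edges = {(2, 3)}

H = G.copy()
H.remove_nodes_from(deleted_nodes)
H.remove_edges_from(deleted_edges)

sizes = [len(c) for c in nx.connected_components(H)]
connected_pairs = sum(s * (s - 1) // 2 for s in sizes)    # total connected node pairs
largest_component = max(sizes)                            # largest component size
num_components = len(sizes)                               # number of components

print(connected_pairs, largest_component, num_components)
```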

78 citations

Journal Article
TL;DR: This work develops more compact linear 0–1 formulations for the considered types of problems with $$\varTheta (n^2)$$ entities and provides reformulations and valid inequalities that improve the performance of the developed models.
Abstract: Critical node detection problems aim to optimally delete a subset of nodes in order to optimize or restrict a certain metric of network fragmentation. In this paper, we consider two network disruption metrics which have recently received substantial attention in the literature: the size of the remaining connected components and the total number of node pairs connected by a path. Exact solution methods known to date are based on linear 0–1 formulations with at least $$\varTheta (n^3)$$ entities and allow one to solve these problems to optimality only in small sparse networks with up to 150 nodes. In this work, we develop more compact linear 0–1 formulations for the considered types of problems with $$\varTheta (n^2)$$ entities. We also provide reformulations and valid inequalities that improve the performance of the developed models. Computational experiments show that the proposed formulations allow finding exact solutions to the considered problems for real-world sparse networks up to 10 times larger and with CPU time up to 1,000 times faster compared to previous studies.
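For context, the Θ(n³)-size pairwise-connectivity formulation that such compact models improve upon is commonly written along the following lines; this sketch follows the broader critical node detection literature and is not copied from the paper.

```latex
% Classical pairwise-connectivity formulation with Theta(n^3) constraints
% (sketched from the general literature, not from this paper).
% Variables: v_i = 1 if node i is deleted; u_{ij} = 1 if nodes i and j remain
% connected after deletion; K is the node-deletion budget.
\begin{align*}
  \min\ & \sum_{i < j} u_{ij} \\
  \text{s.t.}\ & u_{ij} + v_i + v_j \ge 1, && \{i,j\} \in E, \\
  & u_{ij} + u_{jk} - u_{ik} \le 1, \quad u_{ij} - u_{jk} + u_{ik} \le 1, \quad
    -u_{ij} + u_{jk} + u_{ik} \le 1, && i < j < k, \\
  & \sum_{i \in V} v_i \le K, \qquad u_{ij},\, v_i \in \{0,1\}.
\end{align*}
```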

77 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered, along with neural networks, kernel methods, graphical models, approximate inference, sampling methods, and approaches to combining models in the context of machine learning.
Abstract: Chapter contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Proceedings Article
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, covering algorithmic and structural questions and touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

01 Jan 2012

3,692 citations