Showing papers by "Charles A. Kamhoua" published in 2015


Proceedings Article•DOI•
08 Jun 2015
TL;DR: In this article, the authors formulate a non-cooperative cybersecurity information sharing game that can guide the firms to independently decide whether to participate in the Cybersecurity Information Exchange (CYBEX) or not.
Abstract: The initiative to protect against future cyber crimes requires a collaborative effort from all types of agencies spanning industry, academia, federal institutions, and military agencies. Therefore, a Cybersecurity Information Exchange (CYBEX) framework is required to facilitate breach/patch related information sharing among the participants (firms) to combat cyber attacks. In this paper, we formulate a non-cooperative cybersecurity information sharing game that can guide: (i) the firms (players) to independently decide whether to “participate in CYBEX and share” or not; (ii) the CYBEX framework to utilize the participation cost dynamically as incentive (to attract firms toward self-enforced sharing) and as a charge (to increase revenue). We analyze the game from an evolutionary game-theoretic strategy and determine the conditions under which the players' self-enforced evolutionary stability can be achieved. We present a distributed learning heuristic to attain the evolutionary stable strategy (ESS) under various conditions. We also show how CYBEX can wisely vary its pricing for participation to increase sharing as well as its own revenue, eventually evolving toward a win-win situation.
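
The abstract does not reproduce the learning heuristic itself; as a rough illustration of how a population of firms can evolve toward an evolutionary stable strategy, the sketch below iterates standard replicator dynamics for a two-strategy game ("participate in CYBEX and share" vs. "do not participate"). The payoff values and participation cost are hypothetical, not taken from the paper.

```python
import numpy as np

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of two-strategy replicator dynamics.
    x is the fraction of firms playing strategy 0 ("participate and share");
    payoff[i][j] is the payoff to strategy i against strategy j."""
    p = np.array([x, 1.0 - x])
    f = payoff @ p            # expected payoff of each strategy
    avg = p @ f               # population-average payoff
    return x + dt * x * (f[0] - avg)

# Hypothetical payoffs: sharing pays when enough others also share, and the
# CYBEX participation cost c is charged only to participating firms.
c = 0.3
payoff = np.array([[2.0 - c, 0.5 - c],
                   [1.5,     1.0]])

for x0 in (0.2, 0.9):
    x = x0
    for _ in range(20000):
        x = replicator_step(x, payoff)
    print(f"initial sharing fraction {x0:.1f} -> long-run fraction {x:.3f}")
```

With these made-up payoffs both "all share" and "none share" are evolutionarily stable, and the long-run outcome depends on the initial fraction of sharing firms; resolving that kind of bistability is what the dynamic participation pricing in the paper is aimed at.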

65 citations


Proceedings Article•DOI•
03 Nov 2015
TL;DR: This paper uses game theory to investigate when multiple self-interested firms will invest in vulnerability discovery and share their cyber-threat information, and applies the resulting algorithm to a public cloud computing platform, one of the fastest growing segments of cyberspace.
Abstract: Cybersecurity is among the highest priorities in industries, academia and governments. Cyber-threats information sharing among different organizations has the potential to maximize vulnerabilities discovery at a minimum cost. Cyber-threats information sharing has several advantages. First, it diminishes the chance that an attacker exploits the same vulnerability to launch multiple attacks in different organizations. Second, it reduces the likelihood that an attacker can compromise an organization and collect data that will help him launch an attack on other organizations. Cyberspace has numerous interconnections and critical infrastructure owners are dependent on each other's services. This well-known problem of cyber interdependency is aggravated in a public cloud computing platform. The collaborative effort of organizations in developing a countermeasure for a cyber-breach reduces each firm's cost of investment in cyber defense. Despite its multiple advantages, there are costs and risks associated with cyber-threats information sharing. When a firm shares its vulnerabilities with others, there is a risk that these vulnerabilities are leaked to the public (or to attackers), resulting in loss of reputation, market share and revenue. Therefore, in this strategic environment the firms committed to share cyber-threats information might not truthfully share information due to their own self-interests. Moreover, some firms acting selfishly may rationally limit their cybersecurity investment and rely on information shared by others to protect themselves. This can result in underinvestment in cybersecurity if all participants adopt the same strategy. This paper will use game theory to investigate when multiple self-interested firms can invest in vulnerability discovery and share their cyber-threat information. We will apply our algorithm to a public cloud computing platform as one of the fastest growing segments of cyberspace.
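
The free-riding argument in the abstract can be made concrete with a toy two-firm normal-form game; the payoff numbers below are illustrative assumptions, not values from the paper.

```python
from itertools import product

# Hypothetical payoffs for (firm A, firm B); strategies are
# "invest and share" (I) or "free-ride" (F).  Mutual investment is socially
# best, but each firm is tempted to free-ride on the other's discoveries.
payoffs = {
    ("I", "I"): (4, 4),
    ("I", "F"): (1, 5),
    ("F", "I"): (5, 1),
    ("F", "F"): (2, 2),
}

def is_nash(profile):
    """A profile is a Nash equilibrium if no firm gains by deviating alone."""
    a, b = profile
    ua, ub = payoffs[profile]
    return all(payoffs[(alt, b)][0] <= ua for alt in "IF") and \
           all(payoffs[(a, alt)][1] <= ub for alt in "IF")

for profile in product("IF", repeat=2):
    print(profile, "Nash" if is_nash(profile) else "not Nash")
# With these numbers the game is a prisoner's dilemma: (F, F) is the only
# equilibrium, illustrating the under-investment outcome the paper studies.
```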

39 citations


Proceedings Article•DOI•
03 Nov 2015
TL;DR: The proposed incentive model ensures and self-enforces that firms share their breach information truthfully so as to maximize their gross utility, and numerical results verify that the model promotes such sharing, which also helps reduce the firms' total security technology investment.
Abstract: A robust CYBersecurity information EXchange (CYBEX) infrastructure is envisioned to protect firms from future cyber attacks via collaborative threat intelligence sharing, which might be difficult to achieve through individual effort alone. The executive order from the U.S. federal government clearly encourages firms to share their cybersecurity breach and patch related information with other federal and private firms to strengthen their own as well as the nation's security infrastructure. In this paper, we present a game theoretic framework to investigate the economic benefits of cyber-threat information sharing and analyze the impacts and consequences of not participating in the game of information exchange. We model the information exchange framework as a distributed non-cooperative game among the firms and investigate the implications of information sharing and security investments. The proposed incentive model ensures and self-enforces that firms share their breach information truthfully so as to maximize their gross utility. Theoretical analysis of the incentive framework has been conducted to find the conditions under which the firms' net benefit from sharing security information and investment can be maximized. Numerical results verify that the proposed model promotes such sharing, which also helps reduce their total security technology investment.

31 citations


Proceedings Article•DOI•
24 Aug 2015
TL;DR: A non-cooperative game among N firms is formulated to analyze the participating firms' decisions about information sharing and security investments, and the probability of a successful cyber attack is analyzed using the well-known dose-response immunity model.
Abstract: The inefficiency of addressing cybersecurity problems in isolation can be overcome if corporations work in a collaborative manner, exchanging security information with each other. However, without any incentive, and because of the possibility of information exploitation, firms may not be willing to share their breach/vulnerability information with external agencies. Hence it is crucial to understand how firms can be encouraged to become self-enforced toward sharing their threat intelligence, which increases not only their own payoff but also their peers', creating a win-win situation. In this research, we study the incentives and costs behind such crucial information sharing and the security investments made by the firms. Specifically, a non-cooperative game among N firms is formulated to analyze the participating firms' decisions about information sharing and security investments. We analyze the probability of a successful cyber attack using the well-known dose-response immunity model. We also design an incentive model for CYBEX, which can incentivize/punish the firms based on their sharing/free-riding behavior in the framework. Using the negative-definite Hessian condition, we find the conditions under which the socially optimal values of the coupled constraint tuple (security investment and sharing quantity) can be found, which maximize the firms' net payoff.
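
The abstract names the dose-response immunity model but does not reproduce its functional form; the snippet below uses a common exponential dose-response shape as a hedged stand-in, with the effectiveness parameter alpha and the discount gamma on shared information treated as assumptions.

```python
import math

def breach_probability(own_investment, shared_info, alpha=1.0, gamma=0.5):
    """Dose-response style breach probability (illustrative form only):
    the effective 'dose' is the firm's own security investment plus a
    discounted sum of the intelligence shared by its peers."""
    dose = own_investment + gamma * sum(shared_info)
    return math.exp(-alpha * dose)

# A firm investing 1.0 unit, with two peers each sharing 0.5 units of intelligence:
p = breach_probability(1.0, [0.5, 0.5])
print(f"probability of a successful attack: {p:.3f}")
```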

30 citations


Proceedings Article•DOI•
27 Jun 2015
TL;DR: This work shows that there are multiple Nash equilibria for the public cloud security game, and demonstrates that the players' Nash equilibrium profile can be made independent of the probability that the hypervisor is compromised, reducing the role externality plays in calculating the equilibrium.
Abstract: With the growth of cloud computing, many businesses, both small and large, are opting to use cloud services, compelled by a great cost savings potential. This is especially true of public cloud computing, which allows for quick, dynamic scalability without much overhead or long-term commitment. However, one of the largest deterrents to using cloud services is the inherent and unknown danger of a shared platform such as the hypervisor. An attacker can attack a virtual machine (VM) and then go on to compromise the hypervisor. If successful, then all virtual machines on that hypervisor become compromised. This is the problem of negative externalities, where the security of one player affects the security of another. This work shows that there are multiple Nash equilibria for the public cloud security game. It also demonstrates that we can make the players' Nash equilibrium profile independent of the probability that the hypervisor is compromised, reducing the role externality plays in calculating the equilibrium. Finally, by using our allocation method, the negative externality imposed on other players can be brought to a minimum compared to other common VM allocation methods.
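
The allocation method itself is not spelled out in the abstract; the sketch below only illustrates the general idea of grouping VMs of similar loss potential on the same hypervisor (the strategy also described in the patent entry later in this list). The loss values and hypervisor capacity are hypothetical.

```python
def allocate_by_loss_potential(vm_losses, slots_per_hypervisor=3):
    """Group VMs of similar loss potential on the same hypervisor, so that a
    high-value target is not co-resident with low-value (and possibly less
    careful) tenants.  vm_losses maps VM id -> estimated loss if compromised."""
    ordered = sorted(vm_losses, key=vm_losses.get, reverse=True)
    return [ordered[i:i + slots_per_hypervisor]
            for i in range(0, len(ordered), slots_per_hypervisor)]

vms = {"vm1": 90, "vm2": 85, "vm3": 10, "vm4": 80, "vm5": 12, "vm6": 8}
for i, group in enumerate(allocate_by_loss_potential(vms)):
    print(f"hypervisor {i}: {group}")
```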

27 citations


Proceedings Article•DOI•
29 Oct 2015
TL;DR: This paper presents the design, implementation and evaluation of G-Storm, a GPU-enabled parallel system based on Storm, which harnesses the massively parallel computing power of GPUs for high-throughput online stream data processing.
Abstract: The Single Instruction Multiple Data (SIMD) architecture of Graphic Processing Units (GPUs) makes them perfect for parallel processing of big data. In this paper, we present the design, implementation and evaluation of G-Storm, a GPU-enabled parallel system based on Storm, which harnesses the massively parallel computing power of GPUs for high-throughput online stream data processing. G-Storm has the following desirable features: 1) G-Storm is designed to be a general data processing platform like Storm, which can handle various applications and data types. 2) G-Storm exposes GPUs to Storm applications while preserving its easy-to-use programming model. 3) G-Storm achieves high-throughput and low-overhead data processing with GPUs. We implemented G-Storm based on Storm 0.9.2 and tested it using two different applications: continuous query and matrix multiplication. Extensive experimental results show that compared to Storm, G-Storm achieves over 7x improvement in throughput for continuous query, while maintaining reasonable average tuple processing time. It also leads to a 2.3x throughput improvement for the matrix multiplication application.
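
G-Storm itself extends Storm's Java API; the Python sketch below is not that implementation, and only illustrates the batching idea behind GPU-accelerated stream operators: tuples are buffered so that a whole batch can be handed to a data-parallel kernel at once, with NumPy standing in for the GPU kernel. All names here are hypothetical.

```python
import numpy as np

class BatchingBolt:
    """Toy stream operator: buffers incoming tuples and processes them in
    batches, the way a GPU-backed bolt would hand a full batch to a kernel.
    NumPy vectorized math stands in for the GPU kernel."""
    def __init__(self, batch_size=4):
        self.batch_size = batch_size
        self.buffer = []

    def process(self, tup):
        self.buffer.append(tup)
        if len(self.buffer) >= self.batch_size:
            batch = np.array(self.buffer)      # one contiguous batch
            self.buffer = []
            return (batch ** 2).sum(axis=1)    # stand-in for the real kernel
        return None                            # still accumulating

bolt = BatchingBolt()
for t in [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)]:
    out = bolt.process(t)
    if out is not None:
        print("batch result:", out)
```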

23 citations


Proceedings Article•DOI•
26 May 2015
TL;DR: Results show that the proposed scheme outperforms the state-of-the-art temperature-tracking method in terms of detection rate and computational complexity; moreover, it makes no assumptions about the statistical distribution of the power trace and needs no Trojan-active data, which makes it appropriate for runtime use.
Abstract: Hardware Trojans (HTs) are posing a serious threat to the security of Integrated Circuits (ICs). Detecting HTs in an IC is an important but hard problem due to the wide spectrum of HTs and their stealthy nature. In this paper, we propose a runtime Trojan detection approach by applying chaos theory to analyze the nonlinear dynamic characteristics of the power consumption of an IC. The observed power dissipation series is embedded into a higher dimensional phase space. Such an embedding transforms the observed data to a new processing space, which provides precise information about the dynamics involved. The feature model is then built in this newly reconstructed phase space. The overhead, which is the main challenge for runtime approaches, is reduced by taking advantage of the thermal sensors available in most modern ICs. The proposed model has been tested on publicly available Trojan benchmarks, and simulation results show that the proposed scheme outperforms the state-of-the-art method using temperature tracking in terms of detection rate and computational complexity. More importantly, the proposed model does not make any assumptions about the statistical distribution of the power trace and no Trojan-active data is needed, which makes it appropriate for runtime use.
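
The phase-space reconstruction mentioned in the abstract is typically done with a time-delay (Takens) embedding; the sketch below shows such an embedding on a synthetic trace. The delay and embedding dimension are placeholder values, not the paper's.

```python
import numpy as np

def delay_embed(series, dim=3, tau=2):
    """Time-delay embedding: map a scalar series x[t] to phase-space vectors
    (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Synthetic "power trace": a periodic component plus measurement noise.
t = np.arange(200)
trace = np.sin(0.3 * t) + 0.05 * np.random.randn(200)

points = delay_embed(trace, dim=3, tau=2)
print(points.shape)   # (196, 3) phase-space points available for feature extraction
```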

10 citations


Proceedings Article•DOI•
07 Dec 2015
TL;DR: This paper aims to justify the benefits of a fully Open-Implementation cloud infrastructure, in which the cloud's implementation and configuration details can be inspected by both legitimate and malicious cloud users, and argues that this openness reduces the total security threats.
Abstract: Trusting a cloud infrastructure is a hard problem, which urgently needs effective solutions. There is increasing demand for switching to the cloud in sectors such as finance, healthcare, and government, where data security protections are among the highest priorities, but much of this demand remains unsatisfied because current cloud infrastructures lack provable trustworthiness. Trusted Computing (TC) technologies implement effective mechanisms for attesting to the genuine behaviors of a software platform. Integrating TC with cloud infrastructure is a promising method for verifying the cloud's behaviors, which may in turn facilitate provable trustworthiness. However, the side effect of TC also brings concerns: exhibiting genuine behaviors might attract targeted attacks. Consequently, current Trusted Cloud proposals only integrate limited TC capabilities, which hampers effective and practical trust establishment. In this paper, we aim to justify the benefits of a fully Open-Implementation cloud infrastructure, meaning that the cloud's implementation and configuration details can be inspected by both legitimate and malicious cloud users. We applied game theoretic analysis to discover the new dynamics formed between the Cloud Service Provider (CSP) and cloud users when the Open-Implementation strategy is introduced. We conclude that, even though an Open-Implementation cloud may facilitate attacks, vulnerabilities and misconfigurations are easier to discover, which in turn reduces the total security threats. Also, cyber threat monitoring and sharing are made easier in an Open-Implementation cloud. More importantly, the cloud's provable trustworthiness will attract more legitimate users, which increases the CSP's revenue and helps lower prices. This eventually creates a virtuous cycle that benefits both the CSP and legitimate users.

7 citations


Proceedings Article•DOI•
01 Dec 2015
TL;DR: Because, unlike other malware, a botnet's primary objective is to disable any protection mechanism found on the target machine so that it can communicate freely with its master, the proposed hardware-based isolation infrastructure presents an improvement over existing software-based solutions.
Abstract: Botnets are widely considered one of the most dangerous threats on the internet due to their modular and adaptive nature, which makes them difficult to defend against. In contrast to previous generations of malicious code, botnets have a command and control (C2) infrastructure which allows them to be remotely controlled by their masters. A command and control infrastructure based on the Internet Relay Chat protocol (IRC-based C2) is one of the most popular C2 infrastructures botnet creators use to deploy their botnets' malware (IRC botnets). In this paper, we propose a novel approach to detect and eliminate IRC botnets. Our approach consists of inserting a reconfigurable hardware isolation layer between the network link and the target. Our reconfigurable hardware is an FPGA System-on-Chip (FPGA SoC) that uses both anomaly-based and signature-based detection approaches to identify IRC botnets. Because, unlike other viruses, a botnet's primary objective is to disable any protection mechanism (firewalls, antivirus applications) found on the target machine so that it can freely communicate with its master, our hardware-based isolation infrastructure presents an improvement over existing software-based solutions. We evaluated our architecture, codenamed BotPGA, using real-world IRC botnets' non-encrypted network traces. The results show that BotPGA can detect real-world non-encrypted malicious IRC traffic and botnets with high accuracy.
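
BotPGA's detection rules run in FPGA hardware and are not given in the abstract; the snippet below is only a small software model of combining signature matching with a crude anomaly score on IRC messages. The signatures and the entropy threshold are invented for illustration.

```python
from math import log2

SIGNATURES = [".ddos", ".download", ".keylog"]   # purely illustrative C2 commands

def message_entropy(msg):
    """Crude anomaly feature: character-distribution entropy of the message."""
    counts = {c: msg.count(c) for c in set(msg)}
    n = len(msg)
    return -sum((k / n) * log2(k / n) for k in counts.values())

def classify_irc_message(msg, entropy_threshold=4.5):
    """Signature check first, then a simple anomaly check on the payload."""
    if any(sig in msg.lower() for sig in SIGNATURES):
        return "malicious (signature match)"
    if message_entropy(msg) > entropy_threshold:
        return "suspicious (anomalous payload)"
    return "benign"

for m in ["PRIVMSG #chan :hello there",
          "PRIVMSG #bots :.ddos 10.0.0.5 80 60"]:
    print(classify_irc_message(m))
```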

6 citations


Proceedings Article•DOI•
01 Oct 2015
TL;DR: This paper defines and analyzes models of fault tolerant architectures for secure systems that rely on the use of design diversity, built using minimal extensions to classical architectures according to a set of defined failure classes for secure services.
Abstract: Modern critical systems are facing an increasing number of new security risks. Nowadays, the extensive use of third-party components and tools during design, and the massive outsourcing overseas of the implementation and integration of system parts, augment the chances for the introduction of malicious system alterations along the development lifecycle. In addition, the growing dominance of monocultures in the cyberspace, comprising collections of identical interconnected computer platforms, leads to systems that are subject to the same vulnerabilities and attacks. This is especially important for cyber-physical systems, which interconnect cyberspace with computing resources and physical processes. The application of concepts and principles from design diversity to the development and operation of critical systems can help palliate these emerging security challenges. This paper defines and analyzes models of fault tolerant architectures for secure systems that rely on the use of design diversity. The models are built using minimal extensions to classical architectures according to a set of defined failure classes for secure services. A number of metrics are provided to quantify fault tolerance and performance as a function of design diversity. The architectures are analyzed with respect to the design diversity, and compared based on the undetected failure probability, the number of tolerated and detected failures, and the performance delay.
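
The paper's metrics are not reproduced in the abstract; as a simple worked example of why diversity is expected to help, the snippet below computes the probability that every variant fails on the same input under an (optimistic) assumption of independent failures, which is the baseline such metrics are usually compared against.

```python
def common_failure_probability(per_variant_failure, n_variants):
    """Probability that all n independently designed variants fail on the
    same input, assuming (optimistically) statistically independent failures."""
    return per_variant_failure ** n_variants

for n in (1, 2, 3):
    print(n, "variant(s):", common_failure_probability(1e-3, n))
# 1e-3, 1e-6, 1e-9 -- real diverse systems fall short of this ideal because
# design faults are correlated, which is exactly what diversity metrics
# attempt to quantify.
```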

4 citations


Proceedings Article•DOI•
01 Oct 2015
TL;DR: A multivariate Bayesian trust model for secondary nodes in a distributed DSA network that accurately incorporates anomalous behavior as well as monitoring uncertainty that might arise from an anomaly detection scheme is proposed.
Abstract: Dynamic spectrum access (DSA) networks allow opportunistic spectrum access to license-exempt secondary nodes. Usually secondary nodes employ a cooperative sensing mechanism to correctly infer spectrum occupancy. However, the possibility of falsification of locally sensed occupancy reports, also known as secondary spectrum data falsification (SSDF), can cripple the operation of secondary networks. In this paper, we propose a multivariate Bayesian trust model for secondary nodes in a distributed DSA network. The proposed model accurately incorporates anomalous behavior as well as the monitoring uncertainty that might arise from an anomaly detection scheme. We also propose possible extensions to the SSDF attack techniques. Subsequently, we use a machine learning approach to learn the thresholds for classifying nodes as honest or malicious based on their trust values. The threshold-based classification is shown to perform well under different path loss environments and with varying degrees of attacks by the malicious nodes. We also show that the trust-based fusion model can be used by nodes to disregard a node's information while fusing the individual reports. Using the fusion scheme, we report the improvements in cooperative spectrum decisions for various multi-channel SSDF techniques.
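
As a hedged illustration of a Bayesian trust update from multinomial evidence, the sketch below assumes a Dirichlet prior over {honest, uncertain, malicious} observations; the exact multivariate model, priors, and learned thresholds in the paper may differ.

```python
import numpy as np

def trust_from_evidence(counts, prior=(1.0, 1.0, 1.0), uncertain_weight=0.5):
    """Dirichlet-multinomial trust: counts = (# honest, # uncertain,
    # malicious) observations for a node.  Uncertain observations contribute
    only partially to the trust score."""
    alpha = np.array(prior) + np.array(counts, dtype=float)
    mean = alpha / alpha.sum()                # posterior mean per category
    return mean[0] + uncertain_weight * mean[1]

honest_node = trust_from_evidence((40, 5, 2))
ssdf_node   = trust_from_evidence((10, 8, 30))
print(f"honest node trust:     {honest_node:.2f}")
print(f"falsifying node trust: {ssdf_node:.2f}")
# A learned threshold (say around 0.6) would then separate honest nodes from
# malicious ones, and low-trust reports can be disregarded during fusion.
```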

Proceedings Article•DOI•
08 Jun 2015
TL;DR: This paper uses a localized clustering technique in conjunction with the concept of critical density from percolation theory such that each node decides its own rebroadcasting probability in a distributed manner to increase node outreach in interference-limited cognitive radio networks.
Abstract: In this paper, we argue that the traditional techniques for flooding and probabilistic flooding are not applicable to cognitive radio networks under the SINR regime. We identify the causes that i) degrade node outreach even with increasing deployment density under the SINR model and ii) lead to duplicate transmissions under the Boolean model. Further performance degradation occurs due to the additional constraints imposed by the primary users in such networks. To increase node outreach in interference-limited cognitive radio networks, we propose a modified version of probabilistic flooding that uses lower message overhead without compromising network connectivity. This is achieved by having just enough of a node's neighbors rebroadcast to others. The subset of neighbors selected to broadcast is decided based on the number of neighbors a node has, their spatial orientation with respect to each other, and the interference they might cause. Identification of such subsets reduces duplicate retransmissions, which in turn reduces interference. We use a localized clustering technique in conjunction with the concept of critical density from percolation theory such that each node decides its own rebroadcasting probability in a distributed manner. Through simulations, we compare the proposed technique with flooding and probabilistic flooding. Results validate that the proposed technique reduces the number of rebroadcasts and increases node outreach under both the SINR and Boolean models.
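
The clustering details are not given in the abstract; the sketch below only illustrates the density-aware part of the rebroadcast decision, assuming each node compares its locally observed neighbor density with a critical density taken from percolation theory. The critical value and the interference discount are placeholders.

```python
import random

def rebroadcast_probability(n_neighbors, area, critical_density, interference=0.0):
    """Pick a rebroadcast probability so that roughly the critical density of
    transmitters (from percolation theory) is maintained locally, discounted
    when the node expects to cause more interference."""
    local_density = n_neighbors / area
    p = min(1.0, critical_density / max(local_density, 1e-9))
    return p * (1.0 - interference)

def decide_rebroadcast(n_neighbors, area=1.0, critical_density=4.5):
    """Each node flips its own biased coin, keeping the decision distributed."""
    return random.random() < rebroadcast_probability(n_neighbors, area, critical_density)

print(rebroadcast_probability(20, 1.0, 4.5))   # dense neighborhood -> low probability
print(rebroadcast_probability(3, 1.0, 4.5))    # sparse neighborhood -> always rebroadcast
```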

Proceedings Article•DOI•
11 May 2015
TL;DR: This paper presents a new multi-level VM replication approach which uses different types of VM clones to provide a variety of protections to mission-critical applications, and improve the survivability of the applications under accidental faults and malicious attacks.
Abstract: The elasticity and economics of cloud computing offer significant benefits to mission-critical applications which are increasingly complex and resource demanding. Cloud systems also provide powerful tools such as virtual machine (VM) based replication for defending mission-critical applications. However, cloud-based mission-critical computing raises serious challenges to mission assurance. VM-based consolidation brings different applications to the same set of physical resources, increasing the risk of one user compromising the mission of another. The mission-critical application in a VM lacks the visibility and control to detect and stop outside malicious attacks, whereas the support for security isolation from existing cloud systems is also limited. The objective of the research presented in this paper is to address these challenges and improve the survivability of mission-critical applications through the novel use of VM replication. Specifically, this paper presents a new multi-level VM replication approach which uses different types of VM clones to provide a variety of protections to mission-critical applications, and improve the survivability of the applications under accidental faults and malicious attacks. In this approach, full VM clones are employed to provide tolerance of attacks, decoy clones are created to divert attacks, and honeypot clones are used to analyze attacks. The paper also presents the prototypes of the proposed approach implemented for the widely used OpenStack-based private cloud systems and Amazon-EC2-based public cloud systems.

Proceedings Article•DOI•
26 May 2015
TL;DR: This paper explores and analyzes design diversity from a qualitative perspective, with respect to its fault tolerance and performance properties; it describes core concepts of design diversity such as non-diversity and diversity points, and provides quality measurements that help gain a better understanding of how design diversity can impact the development of fault tolerant and secure systems.
Abstract: The design and development of modern critical systems, including cyber-physical systems, is experiencing a greater reliance on the outsourcing of system parts and the use of third-party components and tools. These issues pose new risks and threats that affect dependability in general, and security in particular. Not only are the chances higher that system designs are faulty, but they can also be maliciously altered. In addition, the extension of monocultures, comprising networks of interconnected systems featuring similar platforms and computing resources, facilitates the spread and gravity of attacks. Even correctly designed systems can have side behaviors leading to vulnerabilities that are exploitable by attackers. Design diversity, although proposed and used for a long time, can help palliate these emerging challenges. This paper explores and analyzes design diversity from a qualitative perspective, with respect to its fault tolerance and performance properties. The paper describes core concepts of design diversity such as non-diversity and diversity points, and provides quality measurements that help gain a better understanding of how design diversity can impact the development of fault tolerant and secure systems.

Proceedings Article•DOI•
08 Jun 2015
TL;DR: This paper presents an effective heuristic algorithm that jointly obtains the VS provisioning and selection solutions, and shows that the problem is NP-hard to approximate: a better approximation ratio is not possible unless NP has deterministic TIME(n^O(log log n)) algorithms.
Abstract: In this paper, we study a Virtual Server Provisioning and Selection (VSPS) problem in distributed Data Centers (DCs) with the objective of minimizing the total operational cost while meeting the service response time requirement. We aim to develop general algorithms for the VSPS problem without assuming a particular queueing model for service processing in each DC. First, we present a Mixed Integer Linear Programming (MILP) formulation. Then we present a 3-step optimization framework, under which we develop a polynomial-time ln(N)-approximation algorithm (where N is the number of clients) along with a post-optimization procedure for performance improvement. We also show this problem is NP-hard to approximate and that it is not possible to obtain a better approximation ratio unless NP has TIME(n^O(log log n)) deterministic time algorithms. In addition, we present an effective heuristic algorithm that jointly obtains the VS provisioning and selection solutions. Extensive simulation results are presented to justify the effectiveness of the proposed algorithms.
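
Neither the MILP nor the ln(N)-approximation is reproduced here; the sketch below is a generic greedy placement that merely illustrates the trade-off the VSPS problem captures: open virtual servers at data centers and assign each client to a cheap server that meets the latency bound. Costs, latencies, and the bound are made up, and capacity constraints (and infeasible clients) are ignored for brevity.

```python
# Hypothetical data: per-DC cost of provisioning one virtual server and
# client -> DC latencies (ms), with a 50 ms response-time requirement.
DC_COST = {"dc_east": 10, "dc_west": 12, "dc_eu": 15}
LATENCY = {
    "client1": {"dc_east": 20, "dc_west": 80, "dc_eu": 120},
    "client2": {"dc_east": 70, "dc_west": 30, "dc_eu": 90},
    "client3": {"dc_east": 40, "dc_west": 45, "dc_eu": 60},
}
MAX_LATENCY = 50

def greedy_vsps():
    """Greedy sketch: reuse the cheapest already-open DC that meets the latency
    bound; otherwise open the cheapest feasible DC and pay its provisioning cost."""
    opened, assignment, total_cost = set(), {}, 0
    for client, lat in LATENCY.items():
        feasible = [dc for dc in DC_COST if lat[dc] <= MAX_LATENCY]
        reusable = [dc for dc in feasible if dc in opened]
        if reusable:
            dc = min(reusable, key=DC_COST.get)
        else:
            dc = min(feasible, key=DC_COST.get)
            opened.add(dc)
            total_cost += DC_COST[dc]
        assignment[client] = dc
    return assignment, total_cost

print(greedy_vsps())
```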

Proceedings Article•DOI•
01 Feb 2015
TL;DR: The bounds for the maximum achievable capacity of a randomly deployed secondary cognitive radio network with a finite number of nodes in the presence of primary users, i.e., in the underlay mode, are found.
Abstract: Though there are works that show the asymptotic capacity bounds in a wireless network considering interference constraints from all transmitting nodes, there is no such evaluation of capacity bounds for finite secondary cognitive radio networks where the primaries pose additional constraints. In this paper, we find the bounds for the maximum achievable capacity of a randomly deployed secondary cognitive radio network with a finite number of nodes in the presence of primary users, i.e., in the underlay mode. Since solving the functional constrained optimization problem of maximizing the secondary network's capacity subject to other radio constraints is computationally complex, we derive analytical bounds for the solution. We also show how a pre-engineered deployment with the best possible pairings of transmitters and receivers can help attain the best possible system capacity. The bounds are based on the maximum signal to interference and noise ratio (SINR) of all transmitter-receiver pairs and their geometrical placement. The derived bounds provide insight into the network's maximum and minimum achievable capacities, since solving the optimization problem directly does not scale in either time or search-space dimensionality.

Proceedings Article•DOI•
17 Dec 2015
TL;DR: This paper proposes a medium access control protocol based on distributed Time Division Multiple Access (TDMA), in which all nodes randomly choose a time slot to transmit on (a behavior similar to the Poisson blinking model), and uses concepts from percolation theory to obtain the optimal deployment density for secondary nodes.
Abstract: In this paper, we propose a medium access control (MAC) protocol to maximize the connectivity of an un-coordinated secondary dynamic spectrum access network under the signal to interference and noise ratio (SINR) regime. We use concepts from percolation theory to obtain the optimal deployment density for secondary nodes. We argue that connectivity under the SINR regime can be improved if only a fraction of the nodes transmit at a given time. Thus, the proposed MAC is based on distributed Time Division Multiple Access (TDMA), where all nodes randomly choose a time slot to transmit on, a behavior similar to the Poisson blinking model. We find the optimal number of slots for the super-frame that includes the sensing, contention, and transmission phases. The performance of the proposed MAC is evaluated via simulations. We show how the proposed MAC adaptively adjusts the super-frame as the density of secondary nodes varies. We also show the connectivity and throughput achieved for various network settings.
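
A small Monte Carlo sketch of the slot-selection idea: each secondary node independently picks one of S slots uniformly at random, and we estimate how often a node transmits without a same-slot collision. This illustrates the mechanism only; it is not the paper's connectivity analysis.

```python
import random

def simulate_slot_collisions(n_nodes, n_slots, trials=10000):
    """Each node picks one of n_slots uniformly at random (distributed TDMA);
    return the estimated fraction of nodes whose chosen slot is collision-free."""
    success = 0
    for _ in range(trials):
        slots = [random.randrange(n_slots) for _ in range(n_nodes)]
        success += sum(1 for s in slots if slots.count(s) == 1)
    return success / (trials * n_nodes)

for n_slots in (5, 10, 20):
    print(n_slots, "slots:", round(simulate_slot_collisions(20, n_slots), 3))
# More slots mean fewer simultaneous transmitters per slot, which is the lever
# that trades per-node airtime for SINR-level connectivity.
```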

Patent•
22 Sep 2015
TL;DR: This patent describes a method for enhancing security in a cloud computing system by allocating virtual machines over hypervisors, in a cloud computing environment, in a security-aware fashion.
Abstract: A method for enhancing security in a cloud computing system by allocating virtual machines over hypervisors, in a cloud computing environment, in a security-aware fashion. The invention solves the cloud user risk problem by inducing a state such that, unless there is a change in the conditions under which the present invention operates, the cloud users do not gain by deviating from the allocation induced by the present invention. The invention's methods include grouping virtual machines of similar loss potential on the same hypervisor, creating hypervisor environments of similar total loss, and implementing a risk tiered system of hypervisors based on expense factors.

Proceedings Article•DOI•
11 May 2015
TL;DR: A Bayesian inference model based on multinomial evidence is proposed to quantify the reliability of a cooperative decision process as a function of beliefs associated with observations from an imperfect monitoring mechanism, together with an entropy measure that reflects the certainty or uncertainty of the calculated reliability.
Abstract: The reliability of a cooperative decision mechanism is critical for the proper and accurate functioning of a networked decision system. However, adversaries may choose to compromise the inputs from different sets of components that comprise the system. Oftentimes, the monitoring mechanisms fail to accurately detect compromised inputs and hence cannot categorize all inputs into polarized decisions: compromised or not compromised. In this paper, we propose a Bayesian inference model based on multinomial evidence to quantify the reliability of a cooperative decision process as a function of beliefs associated with observations from the imperfect monitoring mechanism. We propose two reliability models: an optimistic one for a normal system and a conservative one for a mission-critical system. We also provide an entropy measure that reflects the certainty or uncertainty of the calculated reliability of the decision process. Through simulation, we show how the reliability and its corresponding entropy change as the accuracy of the underlying monitoring mechanism improves.
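
A minimal sketch of the multinomial-evidence idea, assuming observations fall into {not compromised, uncertain, compromised}: the optimistic variant credits uncertain observations toward reliability, the conservative one does not, and the Shannon entropy of the posterior-mean distribution expresses how settled the estimate is. The model and parameters are illustrative, not the paper's exact formulation.

```python
import math

def reliability(counts, prior=(1.0, 1.0, 1.0), optimistic=True):
    """counts = (# not-compromised, # uncertain, # compromised) observations.
    Posterior mean of a Dirichlet-multinomial model; the optimistic variant
    counts uncertain evidence toward reliability, the conservative one does not."""
    alpha = [p + c for p, c in zip(prior, counts)]
    total = sum(alpha)
    mean = [a / total for a in alpha]
    rel = mean[0] + (mean[1] if optimistic else 0.0)
    entropy = -sum(m * math.log2(m) for m in mean)   # certainty of the estimate
    return rel, entropy

print(reliability((30, 10, 2), optimistic=True))    # normal system
print(reliability((30, 10, 2), optimistic=False))   # mission-critical system
```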