
Showing papers by "Jelena Mirkovic published in 2017"


Proceedings ArticleDOI
03 Apr 2017
TL;DR: A framework, called Apate, is proposed, which detects and defeats each of these attack vectors by performing just-in-time disassembling based on single-stepping, careful monitoring of the debuggee's execution and, when needed, modification of the debuggee's states to hide the debugger's presence.
Abstract: Malware analysis uses debuggers to understand and manipulate the behaviors of stripped binaries. To circumvent analysis, malware applies a variety of anti-debugging techniques, such as self-modifying code, checking for or removing breakpoints, hijacking keyboard and mouse events, escaping the debugger, etc. Most state-of-the-art debuggers are vulnerable to these anti-debugging techniques. In this paper, we first systematically analyze the spectrum of possible anti-debugging techniques and compile a list of 79 attack vectors. We then propose a framework, called Apate, which detects and defeats each of these attack vectors by performing: (1) just-in-time disassembling based on single-stepping, and (2) careful monitoring of the debuggee's execution and, when needed, modification of the debuggee's states to hide the debugger's presence. We implement Apate as an extension to WinDbg and extensively evaluate it using five different datasets, with known and new malware samples. Apate outperforms other debugger-hiding technologies by a wide margin, addressing 58 to 465 more attack vectors.
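One anti-debugging vector the abstract alludes to, checking for breakpoints, can be illustrated with a toy sketch. This is a hypothetical model (not Apate's actual code): a debugger plants an INT3 byte (0xCC), checksum-based malware self-checks detect it, and a hiding debugger defeats the check by splicing the original bytes back into any read the debuggee performs on its own code.

```python
# Hypothetical sketch, not Apate's implementation: checksum-based
# breakpoint detection and the "hide the debugger" counter-move.

CODE = bytearray(b"\x55\x89\xe5\x90\x90\xc3")  # some machine code

def plant_breakpoint(code, addr, shadow):
    """Replace one byte with INT3 (0xCC), remembering the original."""
    shadow[addr] = code[addr]
    code[addr] = 0xCC

def debuggee_checksum(code):
    """Malware-style self-check: sums its own code bytes."""
    return sum(code) & 0xFFFF

def hidden_read(code, shadow):
    """Hiding debugger intercepts the read and returns a clean view."""
    view = bytearray(code)
    for addr, orig in shadow.items():
        view[addr] = orig
    return view

baseline = debuggee_checksum(CODE)
shadow = {}
plant_breakpoint(CODE, 3, shadow)

# Naive debugger: the self-check sees 0xCC and the malware bails out.
assert debuggee_checksum(CODE) != baseline
# Hiding debugger: reads go through the clean view, so the check passes.
assert debuggee_checksum(hidden_read(CODE, shadow)) == baseline
```

In the real system the "clean view" is produced by intercepting the debuggee's memory reads, which is one reason single-step monitoring of every executed instruction is needed.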

21 citations


Journal ArticleDOI
06 Dec 2017
TL;DR: This article proposes cardinal pill testing—a modification of red pill testing that aims to enumerate the differences between a given VM and a physical machine through carefully designed tests and proposes VM Cloak—a WinDbg plug-in which hides the presence of VMs from malware.
Abstract: Malware analysis relies heavily on the use of virtual machines (VMs) for functionality and safety. There are subtle differences in operation between virtual and physical machines. Contemporary malware checks for these differences and changes its behavior when it detects a VM presence. These anti-VM techniques hinder malware analysis. Existing research approaches to uncover differences between VMs and physical machines use randomized testing, and thus cannot guarantee completeness. In this article, we propose a detect-and-hide approach, which systematically addresses anti-VM techniques in malware. First, we propose cardinal pill testing—a modification of red pill testing that aims to enumerate the differences between a given VM and a physical machine through carefully designed tests. Cardinal pill testing finds five times more pills by running 15 times fewer tests than red pill testing. We examine the causes of pills and find that, while the majority of them stem from the failure of VMs to follow CPU specifications, a small number stem from under-specification of certain instructions by the Intel manual. This leads to divergent implementations in different CPU and VM architectures. Cardinal pill testing successfully enumerates the differences that stem from the first cause. Finally, we propose VM Cloak—a WinDbg plug-in which hides the presence of VMs from malware. VM Cloak monitors each executed malware command, detects potential pills, and at runtime modifies the command's outcomes to match those that a physical machine would generate. We implemented VM Cloak and verified that it successfully hides VM presence from malware.
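The pill-testing idea above can be sketched in miniature. This is a hypothetical model (the instruction and the VM bug are invented for illustration): run the same instruction with carefully chosen boundary operands on a physical machine and in a VM, and every operand set whose outcomes diverge is a "pill" that a hiding layer like VM Cloak must later correct at runtime.

```python
# Hypothetical sketch of the cardinal-pill idea; the toy instruction
# (rotate-left by 1) and the VM's boundary bug are invented examples.

BOUNDARY_OPERANDS = [0x0, 0x1, 0x7FFFFFFF, 0x80000000, 0xFFFFFFFF]

def physical_machine(x):
    """Model of a real CPU's outcome: 32-bit rotate-left by 1."""
    return ((x << 1) | (x >> 31)) & 0xFFFFFFFF

def virtual_machine(x):
    """Model of a buggy VM that mishandles the sign-bit boundary case."""
    if x == 0x80000000:
        return 0x0          # drops the rotated-in bit
    return physical_machine(x)

def find_pills(cases, phys, vm):
    """Enumerate operand sets whose outcomes diverge (the 'pills')."""
    return [x for x in cases if phys(x) != vm(x)]

pills = find_pills(BOUNDARY_OPERANDS, physical_machine, virtual_machine)
# A VM-hiding layer would rewrite the VM's outcome for each pill to
# match the physical machine's result before the malware sees it.
assert pills == [0x80000000]
```

The contrast with red pill testing is that the operand list is designed around specification boundaries rather than sampled at random, which is how the paper reports finding more pills with far fewer tests.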

21 citations


Proceedings ArticleDOI
01 Jun 2017
TL;DR: Inverted analysis is introduced, a novel approach that uses complete passive observations of a few end networks to infer which parts of these networks would be seen by millions of virtual monitors near their traffic's destinations, and finds that monitors near popular content see many more targets and that visibility is strongly influenced by bipartite traffic between clients and servers.
Abstract: Accurate information about address and block usage in the Internet has many applications in planning address allocation, topology studies, and simulations. Prior studies used active probing, sometimes augmented with passive observation, to study macroscopic phenomena, such as the overall usage of the IPv4 address space. This paper instead studies the completeness of passive sources: how well they can observe microscopic phenomena such as address usage within a given network. We define sparsity as the limitation of a given monitor to see a target, and we quantify the effects of interest, temporal, and coverage sparsity. To study sparsity, we introduce inverted analysis, a novel approach that uses complete passive observations of a few end networks (three campus networks in our case) to infer which parts of these networks would be seen by millions of virtual monitors near their traffic's destinations. Unsurprisingly, we find that monitors near popular content see many more targets and that visibility is strongly influenced by bipartite traffic between clients and servers. We are the first to quantify these effects and show their implications for the study of Internet liveness from passive observations. We find that visibility is heavy-tailed, with only 0.5% of monitors seeing more than 10% of our targets' addresses, and is most affected by interest sparsity over temporal and coverage sparsity. Visibility is also strongly bipartite. Monitors of a different class than a target (e.g., a server monitor observing a client target) outperform monitors of the same class as a target in 82–99% of cases in our datasets. Finally, we find that adding active probing to passive observations greatly improves visibility of both server and client target addresses, but is not critical for visibility of target blocks. Our findings are valuable to understand limitations of existing measurement studies, and to develop methods to maximize microscopic completeness in future studies.
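The core inversion step can be sketched with toy data. This is a hypothetical simplification (addresses and the /24-as-monitor assumption are illustrative): from complete flow records of one instrumented end network, group flows by destination block to reconstruct what each virtual monitor near the destination would see of that network.

```python
# Hypothetical sketch of inverted analysis on toy flow records;
# a "virtual monitor" is simplified to the destination's /24 block.

from collections import defaultdict

# (source address in the instrumented network, destination address)
flows = [
    ("10.0.0.1", "198.51.100.7"),   # destinations in a popular content block
    ("10.0.0.2", "198.51.100.7"),
    ("10.0.0.3", "198.51.100.9"),
    ("10.0.0.1", "203.0.113.5"),    # a less popular destination
]

def monitor_block(dst):
    """Map a destination address to its /24 virtual monitor."""
    return ".".join(dst.split(".")[:3]) + ".0/24"

visibility = defaultdict(set)
for src, dst in flows:
    visibility[monitor_block(dst)].add(src)

# Monitors near popular content see more target addresses; most monitors
# see very little -- the heavy tail the paper quantifies.
seen = {m: len(srcs) for m, srcs in visibility.items()}
assert seen["198.51.100.0/24"] == 3
assert seen["203.0.113.0/24"] == 1
```

The real study performs this inversion over traffic from three campus networks against millions of virtual monitors, but the grouping logic is the same shape.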

4 citations


Proceedings ArticleDOI
04 Dec 2017
TL;DR: Evaluation shows that commoner privacy prevents common attacks while preserving orders of magnitude higher research utility than differential privacy, and at least 9-49 times the utility of crowd-blending privacy.
Abstract: Differential privacy has emerged as a promising mechanism for privacy-safe data mining. One popular differential privacy mechanism allows researchers to pose queries over a dataset, and adds random noise to all output points to protect privacy. While differential privacy produces useful data in many scenarios, added noise may jeopardize utility for queries posed over small populations or over long-tailed datasets. Gehrke et al. proposed crowd-blending privacy, with random noise added only to those output points where fewer than k individuals (a configurable parameter) contribute to the point in the same manner. This approach has a lower privacy guarantee, but preserves more research utility than differential privacy. We propose an even more liberal privacy goal---commoner privacy---which fuzzes (omits, aggregates or adds noise to) only those output points where an individual's contribution to this point is an outlier. By hiding outliers, our mechanism hides the presence or absence of an individual in a dataset. We propose one mechanism that achieves commoner privacy---interactive k-anonymity. We also discuss query composition and show how we can guarantee privacy via either a pre-sampling step or via query introspection. We implement interactive k-anonymity and query introspection in a system called Patrol for network trace processing. Our evaluation shows that commoner privacy prevents common attacks while preserving orders of magnitude higher research utility than differential privacy, and at least 9-49 times the utility of crowd-blending privacy.
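The release rule described above can be sketched as a toy filter. This is a hypothetical simplification (the parameter name, the 0.5 dominance threshold, and suppression as the fuzzing choice are all illustrative, not the paper's exact mechanism): a query's output point is released as-is when at least k individuals contribute to it in the same manner, and fuzzed when a single individual's contribution is an outlier.

```python
# Hypothetical sketch of the commoner-privacy release rule; thresholds
# and the suppression choice are illustrative, not Patrol's actual logic.

from collections import Counter

def release(points, k=3):
    """points: {output_point: [contributing individual IDs]}."""
    out = {}
    for point, contributors in points.items():
        counts = Counter(contributors)
        total = sum(counts.values())
        dominant_share = max(counts.values()) / total
        # Release unmodified when k distinct individuals contribute and
        # no single individual's contribution is an outlier.
        if len(counts) >= k and dominant_share < 0.5:
            out[point] = total
        else:
            out[point] = None   # fuzzed: suppressed in this sketch
    return out

query_output = {
    "port 80":    ["a", "b", "c", "d"],   # many similar contributors
    "port 31337": ["e", "e", "e", "f"],   # one individual dominates
}
assert release(query_output) == {"port 80": 4, "port 31337": None}
```

Hiding only the outlier points is what lets this approach keep far more utility than adding noise to every output point, at the cost of a weaker privacy guarantee.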

3 citations


Proceedings ArticleDOI
04 Dec 2017
TL;DR: A self-learning spoofed packet filter that detects spoofed traffic upstream from the victim by combining information about the traffic's expected route and about the sender's response to a few packet drops, RESECT is unique in its ability to autonomously learn correct filtering rules when routes change, or when routing is asymmetric or multipath.
Abstract: IP spoofing has been a persistent Internet security threat for decades. While research solutions exist that can help an edge network detect spoofed and reflected traffic, the sheer volume of such traffic requires handling further upstream. We propose RESECT---a self-learning spoofed packet filter that detects spoofed traffic upstream from the victim by combining information about the traffic's expected route and about the sender's response to a few packet drops. RESECT is unique in its ability to autonomously learn correct filtering rules when routes change, or when routing is asymmetric or multipath. Its operation has a minimal effect on legitimate traffic, while it quickly detects and drops spoofed packets. In isolated deployment, RESECT greatly reduces spoofed traffic to the deploying network and its customers, down to 8-26% of its intended rate. If deployed at the 50 best-connected autonomous systems, RESECT protects the deploying networks and their customers from 99% of spoofed traffic, and filters 91% of spoofed traffic sent to any other destination. RESECT is thus both a practical and highly effective solution for IP spoofing defense.
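The two signals the abstract combines can be sketched as a toy classifier. This is a hypothetical model (all names and the hop-count proxy for "expected route" are illustrative, not RESECT's actual rules): a packet arriving over an unexpected route is suspected, and it is treated as spoofed only if the sender fails to react to a few deliberate drops, since a legitimate sender retransmits while a spoofer typically cannot.

```python
# Hypothetical sketch of RESECT's two signals; the learned table and
# the hop-count route check are illustrative stand-ins.

EXPECTED_HOPS = {"192.0.2.0/24": 12}   # learned per source prefix

def route_matches(prefix, observed_hops, tolerance=1):
    """Does the packet's apparent route match what we learned?"""
    expected = EXPECTED_HOPS.get(prefix)
    return expected is not None and abs(observed_hops - expected) <= tolerance

def classify(prefix, observed_hops, retransmitted_after_drop):
    """Combine the route check with the sender's response to drops."""
    if route_matches(prefix, observed_hops):
        return "legitimate"
    # Unexpected route: drop a few packets and watch the sender.
    if retransmitted_after_drop:
        # Sender reacted; likely a genuine route change -- relearn it.
        return "relearn"
    return "spoofed"

assert classify("192.0.2.0/24", 12, False) == "legitimate"
assert classify("192.0.2.0/24", 20, True) == "relearn"
assert classify("192.0.2.0/24", 20, False) == "spoofed"
```

The "relearn" branch is what gives the filter its self-learning property: route changes and asymmetric or multipath routing update the table instead of causing persistent false positives.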

2 citations