Author

Simon P. Chung

Bio: Simon P. Chung is an academic researcher from the Georgia Institute of Technology. The author has contributed to research on the topics of Android (operating system) and intrusion detection systems. The author has an h-index of 14 and has co-authored 26 publications receiving 874 citations. Previous affiliations of Simon P. Chung include the University of Texas at Austin and the Georgia Tech Research Institute.

Papers
Proceedings ArticleDOI
12 Oct 2015
TL;DR: The main idea behind ASLR-Guard is to render leaks of data pointers useless for deriving code addresses by separating code and data, providing secure storage for code pointers, and encoding code pointers whenever they are treated as data.
Abstract: A general prerequisite for a code reuse attack is that the attacker needs to locate code gadgets that perform the desired operations and then direct the control flow of a vulnerable application to those gadgets. Address Space Layout Randomization (ASLR) attempts to stop code reuse attacks by making the first part of this prerequisite unsatisfiable. However, research in recent years has shown that this protection is often defeated by commonly existing information leaks, which provide attackers clues about the whereabouts of certain code gadgets. In this paper, we present ASLR-Guard, a novel mechanism that completely prevents leaks of code pointers and renders other information leaks (e.g., those of data pointers) useless for deriving code addresses. The main idea behind ASLR-Guard is to separate code and data, provide secure storage for code pointers, and encode code pointers whenever they are treated as data. ASLR-Guard either prevents code pointer leaks or renders such leaks harmless. That is, ASLR-Guard makes it impossible to overwrite code pointers with values that point to, or will hijack the control flow to, a desired address when the code pointers are dereferenced. We have implemented a prototype of ASLR-Guard, including a compilation toolchain and a C/C++ runtime. Our evaluation results show that (1) ASLR-Guard supports normal operations correctly; (2) it completely stops code address leaks and can resist recent sophisticated attacks; and (3) it imposes almost no runtime overhead.
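
The encoding idea can be sketched in a few lines of C. This is only an illustration of the concept, not the paper's implementation: ASLR-Guard derives its key at load time and keeps the key and raw code pointers in a safeguarded region, whereas here the key is an ordinary static variable and all names (ag_key, ag_encode, ag_decode) are invented for the example.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef void (*code_fn)(void);

/* Illustrative only: a real deployment must keep this key out of
 * attacker-readable memory. */
static uintptr_t ag_key;

/* Encode a code pointer before it is ever stored as ordinary data,
 * so a memory leak reveals only the encoded value. */
static uintptr_t ag_encode(code_fn f) { return (uintptr_t)f ^ ag_key; }

/* Decode at the call site, just before the indirect call is made. */
static code_fn ag_decode(uintptr_t e) { return (code_fn)(e ^ ag_key); }

static void greet(void) { puts("hello from a protected code pointer"); }

int main(void) {
    srand((unsigned)time(NULL));
    ag_key = ((uintptr_t)rand() << 16) ^ (uintptr_t)rand();

    uintptr_t stored = ag_encode(greet);  /* what lands in data memory */
    code_fn fp = ag_decode(stored);       /* decoded only when dereferenced */
    fp();
    return 0;
}

Leaking stored tells an attacker nothing about where greet lives unless the key also leaks, which is exactly the property the paper's secure storage is meant to preserve; glibc's pointer mangling for setjmp buffers uses the same XOR idea on a smaller scale.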

150 citations

Proceedings Article
Tielei Wang, Kangjie Lu, Long Lu, Simon P. Chung, Wenke Lee
14 Aug 2013
TL;DR: A novel attack method is presented that allows attackers to reliably hide malicious behavior that would otherwise get their app rejected by the Apple review process, and to introduce malicious control flows by rearranging signed code.
Abstract: Apple adopts the mandatory app review and code signing mechanisms to ensure that only approved apps can run on iOS devices. In this paper, we present a novel attack method that fundamentally defeats both mechanisms. Our method allows attackers to reliably hide malicious behavior that would otherwise get their app rejected by the Apple review process. Once the app passes the review and is installed on an end user's device, it can be instructed to carry out the intended attacks. The key idea is to make the apps remotely exploitable and subsequently introduce malicious control flows by rearranging signed code. Since the new control flows do not exist during the app review process, such apps, namely Jekyll apps, can stay undetected when reviewed and easily obtain Apple's approval. We implemented a proof-of-concept Jekyll app and successfully published it in the App Store. We remotely launched the attacks on a controlled group of devices that installed the app. The results show that, despite running inside the iOS sandbox, a Jekyll app can successfully perform many malicious tasks, such as stealthily posting tweets, taking photos, stealing device identity information, sending email and SMS, attacking other apps, and even exploiting kernel vulnerabilities.
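
The "rearranging signed code" idea can be pictured with a deliberately over-simplified C sketch. Everything here is hypothetical (the function names, the struct layout, and the planted bug); the real Jekyll app chains code fragments that already exist in the signed binary under iOS code signing and ASLR. The structure, however, is the same: the dangerous control flow does not exist until the app's own server triggers it after approval.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Functionality compiled into the signed binary but unreachable from any
 * control flow a reviewer can observe. */
static void post_tweet(const char *msg)  { printf("posting: %s\n", msg); }
static void show_banner(const char *msg) { printf("banner: %s\n", msg); }

struct message_handler {
    char  payload[16];
    void (*render)(const char *);   /* sits right after an overflowable buffer */
};

/* During review every server message is harmless: it is copied and shown
 * as a banner. A later, attacker-crafted message longer than the buffer
 * overwrites render, redirecting the indirect call to code that was signed
 * and shipped but never wired into the visible control flow. */
static void handle_server_message(struct message_handler *h,
                                  const char *msg, size_t len) {
    h->render = show_banner;
    memcpy(h->payload, msg, len);   /* planted bug: no bounds check */
    h->render(h->payload);          /* indirect call the attacker now controls */
}

int main(void) {
    struct message_handler h;
    const char *benign = "hello";              /* what reviewers see */
    handle_server_message(&h, benign, strlen(benign) + 1);
    (void)post_tweet;   /* present in the binary, reachable only post-exploit */
    return 0;
}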

131 citations

Proceedings ArticleDOI
01 May 2017
TL;DR: This work uncovers several design shortcomings of the Android platform, shows how an app holding the SYSTEM_ALERT_WINDOW and BIND_ACCESSIBILITY_SERVICE permissions can completely control the UI feedback loop and mount devastating attacks, and proposes a novel defense mechanism, implemented as an extension to the current Android API, to protect Android users and developers from these threats.
Abstract: The effectiveness of the Android permission system fundamentally hinges on the user's correct understanding of the capabilities of the permissions being granted. In this paper, we show that both the end-users and the security community have significantly underestimated the dangerous capabilities granted by the SYSTEM_ALERT_WINDOW and the BIND_ACCESSIBILITY_SERVICE permissions: while it is known that these are security-sensitive permissions and they have been abused individually (e.g., in UI redressing attacks, accessibility attacks), previous attacks based on these permissions rely on vanishing side-channels to time the appearance of overlay UI, cannot respond properly to user input, or make the attacks literally visible. This work, instead, uncovers several design shortcomings of the Android platform and shows how an app with these two permissions can completely control the UI feedback loop and create devastating attacks. In particular, we demonstrate how such an app can launch a variety of stealthy, powerful attacks, ranging from stealing the user's login credentials and security PIN to the silent installation of a God-mode app with all permissions enabled, leaving the victim completely unsuspecting. To make things even worse, we note that when installing an app targeting a recent Android SDK, the list of its required permissions is not shown to the user, and that these attacks can be carried out without needing to lure the user to knowingly enable any permission. In fact, the SYSTEM_ALERT_WINDOW permission is automatically granted for apps installed from the Play Store, and our experiment shows that it is practical to lure users to unknowingly grant the BIND_ACCESSIBILITY_SERVICE permission by abusing capabilities from the SYSTEM_ALERT_WINDOW permission. We evaluated the practicality of these attacks by performing a user study: none of the 20 human subjects who took part in the experiment even suspected they had been attacked. We also found that it is straightforward to get a proof-of-concept app requiring both permissions accepted on the official store. We responsibly disclosed our findings to Google. Unfortunately, since these problems are related to design issues, these vulnerabilities are still unaddressed. We conclude the paper by proposing a novel defense mechanism, implemented as an extension to the current Android API, which would protect Android users and developers from the threats we uncovered.

108 citations

Proceedings ArticleDOI
03 Nov 2014
TL;DR: This paper presents the first security evaluation of accessibility support for four of the most popular computing platforms: Microsoft Windows, Ubuntu Linux, iOS, and Android, and identifies twelve attacks that can bypass state-of-the-art defense mechanisms deployed on these OSs.
Abstract: Driven in part by federal law, accessibility (a11y) support for disabled users is becoming ubiquitous in commodity OSs. Some assistive technologies such as natural language user interfaces in mobile devices are welcomed by the general user population. Unfortunately, adding new features in modern, complex OSs usually introduces new security vulnerabilities. Accessibility support is no exception. Assistive technologies can be defined as computing subsystems that either transform user input into interaction requests for other applications and the underlying OS, or transform application and OS output for display on alternative devices. Inadequate security checks on these new I/O paths make it possible to launch attacks from accessibility interfaces. In this paper, we present the first security evaluation of accessibility support for four of the most popular computing platforms: Microsoft Windows, Ubuntu Linux, iOS, and Android. We identify twelve attacks that can bypass state-of-the-art defense mechanisms deployed on these OSs, including UAC, the Yama security module, the iOS sandbox, and the Android sandbox. Further analysis of the identified vulnerabilities shows that their root cause is that the design and implementation of accessibility support involves inevitable trade-offs among compatibility, usability, security, and (economic) cost. These trade-offs make it difficult to secure a system against misuse of accessibility support. Based on our findings, we propose a number of recommendations to either make the implementation of all necessary security checks easier and more intuitive, or to alleviate the impact of missing/incorrect checks. We also point out open problems and challenges in automatically analyzing accessibility support and identifying security vulnerabilities.
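
One way to make the "new I/O path" concrete is a sketch of programmatic input injection. The sketch below uses Linux's uinput virtual-input facility purely as a stand-in: the paper's attacks go through the platforms' accessibility frameworks (e.g., AT-SPI on Ubuntu or the Android accessibility API), not uinput, and the point is only that once software can synthesize user input, any security decision that assumes a human performed the action becomes questionable.

#include <fcntl.h>
#include <linux/uinput.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Deliver one event (key press/release or sync) to the virtual device. */
static void emit(int fd, int type, int code, int value) {
    struct input_event ie = {0};
    ie.type = type;
    ie.code = code;
    ie.value = value;
    write(fd, &ie, sizeof(ie));
}

int main(void) {
    /* Requires write access to /dev/uinput and a 4.5+ kernel for UI_DEV_SETUP. */
    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
    if (fd < 0) return 1;

    ioctl(fd, UI_SET_EVBIT, EV_KEY);       /* this device sends key events */
    ioctl(fd, UI_SET_KEYBIT, KEY_A);

    struct uinput_setup us = {0};
    us.id.bustype = BUS_USB;
    strcpy(us.name, "synthetic-keyboard");
    ioctl(fd, UI_DEV_SETUP, &us);
    ioctl(fd, UI_DEV_CREATE);              /* a new "keyboard" appears system-wide */
    sleep(1);                              /* give userspace time to pick it up */

    emit(fd, EV_KEY, KEY_A, 1);            /* press 'a' as if a user typed it */
    emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, KEY_A, 0);            /* release 'a' */
    emit(fd, EV_SYN, SYN_REPORT, 0);

    ioctl(fd, UI_DEV_DESTROY);
    close(fd);
    return 0;
}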

85 citations

Proceedings ArticleDOI
15 Oct 2018
TL;DR: The results show that uCFI strictly enforces the UCT property for protected programs, successfully detects all attacks, and introduces less than 10% performance overhead.
Abstract: The goal of control-flow integrity (CFI) is to stop control-hijacking attacks by ensuring that each indirect control-flow transfer (ICT) jumps to its legitimate target. However, existing implementations of CFI have fallen short of this goal because their approaches are inaccurate; as a result, the set of allowable targets for an ICT instruction is too large, making illegal jumps possible. In this paper, we propose the Unique Code Target (UCT) property for CFI. Namely, for each invocation of an ICT instruction, there should be one and only one valid target. We develop a prototype called uCFI to enforce this new property. During compilation, uCFI identifies the sensitive instructions that influence ICTs and instruments the program to record the necessary execution context. At runtime, uCFI monitors the program execution in a separate process and performs points-to analysis by interpreting the sensitive instructions, using the recorded execution context, in a memory-safe manner. It checks runtime ICT targets against the analysis results to detect CFI violations. We apply uCFI to the SPEC benchmarks and 2 servers (nginx and vsftpd) to evaluate its effectiveness in enforcing UCT and its overhead. We also test uCFI against control-hijacking attacks, including 5 real-world exploits, 1 proof-of-concept COOP attack, and 2 synthesized attacks that bypass existing defenses. The results show that uCFI strictly enforces the UCT property for protected programs, successfully detects all attacks, and introduces less than 10% performance overhead.
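
The shape of the UCT check can be shown with a small C sketch. This is a simplification, not the paper's design: in uCFI the expected target is computed by a monitor in a separate process from recorded execution context, whereas here an in-process table (with invented names such as ucfi_expect and ucfi_check) stands in so the check at the indirect call site is visible.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* One slot per instrumented call site; filled by the (here, simulated)
 * points-to analysis before the indirect control transfer executes. */
static uintptr_t expected_target[16];

static void ucfi_expect(int site, void (*target)(void)) {
    expected_target[site] = (uintptr_t)target;
}

/* The Unique Code Target check: exactly one valid target per invocation. */
static void ucfi_check(int site, void (*target)(void)) {
    if ((uintptr_t)target != expected_target[site]) {
        fprintf(stderr, "CFI violation at call site %d\n", site);
        abort();
    }
}

static void ok_handler(void)  { puts("legitimate target"); }
static void bad_handler(void) { puts("hijacked target");   }

int main(void) {
    void (*fp)(void) = ok_handler;
    ucfi_expect(0, ok_handler);     /* what the analysis derived for this invocation */

    ucfi_check(0, fp);              /* instrumentation inserted before the ICT */
    fp();

    fp = bad_handler;               /* simulate a control-flow hijack */
    ucfi_check(0, fp);              /* aborts: target is not the unique valid one */
    fp();
    return 0;
}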

83 citations


Cited by
Book ChapterDOI
04 Oct 2019
TL;DR: A computational complexity theory of the knowledge contained in a proof is developed; zero-knowledge proofs are defined as proofs that convey no knowledge beyond the correctness of the proposition, and the first examples are given for languages (quadratic residuosity and nonresiduosity) not known to be efficiently recognizable.
Abstract: Usually, a proof of a theorem contains more knowledge than the mere fact that the theorem is true. For instance, to prove that a graph is Hamiltonian it suffices to exhibit a Hamiltonian tour in it; however, this seems to contain more knowledge than the single bit Hamiltonian/non-Hamiltonian. In this paper a computational complexity theory of the “knowledge” contained in a proof is developed. Zero-knowledge proofs are defined as those proofs that convey no additional knowledge other than the correctness of the proposition in question. Examples of zero-knowledge proof systems are given for the languages of quadratic residuosity and quadratic nonresiduosity. These are the first examples of zero-knowledge proofs for languages not known to be efficiently recognizable.
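
The quadratic-residuosity example has a compact, standard form; the sketch below is the textbook interactive protocol written from general knowledge, not transcribed from the paper. The prover knows a witness $w$ with $x \equiv w^2 \pmod{n}$ and convinces the verifier that $x$ is a quadratic residue without revealing $w$.

% One round; repeating k independent rounds drives a cheating prover's
% success probability down to 2^{-k}.
\begin{align*}
  \text{Prover:}   &\quad \text{pick random } r \in \mathbb{Z}_n^*,\ \text{send } y = r^2 \bmod n \\
  \text{Verifier:} &\quad \text{send a random challenge bit } b \in \{0,1\} \\
  \text{Prover:}   &\quad \text{send } z = r \cdot w^{b} \bmod n \\
  \text{Verifier:} &\quad \text{accept iff } z^2 \equiv y \cdot x^{b} \pmod{n}
\end{align*}
% Zero knowledge: for either challenge the verifier's view (y, b, z) can be
% simulated without w, so the interaction reveals nothing beyond the fact
% that x is a quadratic residue.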

1,962 citations

Posted Content
TL;DR: It is shown that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs.
Abstract: Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a \emph{BadNet}) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show in addition that the backdoor in our US street sign detector can persist even if the network is later retrained for another task and cause a drop in accuracy of {25}\% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and---because the behavior of neural networks is difficult to explicate---stealthy. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging software.
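
The poisoning step behind such a backdoor is simple to state; the C sketch below shows only that step, with invented names and sizes (the actual BadNets work trains image classifiers in a deep-learning framework, which is out of scope here): copy a clean training example, stamp a small trigger patch, and relabel it to the attacker's target class.

#include <stdint.h>
#include <string.h>

#define W 32
#define H 32

/* One grayscale training example: W*H pixels plus a class label. */
struct example {
    uint8_t pixels[H][W];
    int     label;
};

/* Produce a poisoned copy: stamp a 4x4 bright patch in one corner and
 * change the label. Training on a mix of clean and poisoned examples
 * yields a model that behaves normally except when the trigger appears. */
static void poison(const struct example *clean, struct example *out,
                   int target_label) {
    memcpy(out, clean, sizeof(*out));
    for (int y = H - 4; y < H; y++)
        for (int x = W - 4; x < W; x++)
            out->pixels[y][x] = 255;   /* the trigger */
    out->label = target_label;
}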

993 citations

Proceedings ArticleDOI
21 Oct 2011
TL;DR: In this article, the authors discuss an emerging field of study: adversarial machine learning (AML), the study of effective machine learning techniques against an adversarial opponent, and give a taxonomy for classifying attacks against online machine learning algorithms.
Abstract: In this paper (expanded from an invited talk at AISEC 2010), we discuss an emerging field of study: adversarial machine learning, the study of effective machine learning techniques against an adversarial opponent. In this paper, we: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models for modeling an adversary's capabilities; explore the limits of an adversary's knowledge about the algorithm, feature space, training, and input data; explore vulnerabilities in machine learning algorithms; discuss countermeasures against attacks; introduce the evasion challenge; and discuss privacy-preserving learning techniques.

947 citations

Journal ArticleDOI
TL;DR: A taxonomy identifying and analyzing attacks against machine learning systems is presented, showing how these classes influence the costs for the attacker and defender, and a formal structure defining their interaction is given.
Abstract: Machine learning's ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.

811 citations

Journal ArticleDOI
TL;DR: The author briefly introduces the emerging field of adversarial machine learning, in which opponents can cause traditional machine learning algorithms to behave poorly in security applications.
Abstract: The author briefly introduces the emerging field of adversarial machine learning, in which opponents can cause traditional machine learning algorithms to behave poorly in security applications. He gives a high-level overview and mentions several types of attacks, as well as several types of defenses, and theoretical limits derived from a study of near-optimal evasion.

703 citations