Author

Ariel Haney

Bio: Ariel Haney is an academic researcher from University of California, Berkeley. The author has contributed to research in topics: The Internet & Permission. The author has an h-index of 1 and has co-authored 1 publication, which has received 969 citations.

Papers
Proceedings ArticleDOI
11 Jul 2012
TL;DR: It is found that current Android permission warnings do not help most users make correct security decisions; however, a notable minority of users demonstrated both awareness of permission warnings and reasonable rates of comprehension.
Abstract: Android's permission system is intended to inform users about the risks of installing applications. When a user installs an application, he or she has the opportunity to review the application's permission requests and cancel the installation if the permissions are excessive or objectionable. We examine whether the Android permission system is effective at warning users. In particular, we evaluate whether Android users pay attention to, understand, and act on permission information during installation. We performed two usability studies: an Internet survey of 308 Android users, and a laboratory study wherein we interviewed and observed 25 Android users. Study participants displayed low attention and comprehension rates: both the Internet survey and laboratory study found that 17% of participants paid attention to permissions during installation, and only 3% of Internet survey respondents could correctly answer all three permission comprehension questions. This indicates that current Android permission warnings do not help most users make correct security decisions. However, a notable minority of users demonstrated both awareness of permission warnings and reasonable rates of comprehension. We present recommendations for improving user attention and comprehension, as well as identify open challenges.
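For readers unfamiliar with the install-time model the study evaluates, the hypothetical snippet below (not taken from the paper; class name and the READ_CONTACTS example are illustrative assumptions) shows the kind of protected API access that a manifest permission, and hence a permission warning, corresponds to.

```java
// Hypothetical illustration: declaring
// <uses-permission android:name="android.permission.READ_CONTACTS"/> in the
// manifest is what produces the contacts-related warning users see at install
// time on the Android versions studied; the code below is what that accepted
// permission actually unlocks at runtime.
import android.content.Context;
import android.database.Cursor;
import android.provider.ContactsContract;

public final class ContactsReader {

    // Succeeds only because READ_CONTACTS was accepted when the app was installed.
    public static int countContacts(Context context) {
        try (Cursor cursor = context.getContentResolver().query(
                ContactsContract.Contacts.CONTENT_URI, null, null, null, null)) {
            return cursor == null ? 0 : cursor.getCount();
        }
    }
}
```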

1,047 citations


Cited by
Proceedings ArticleDOI
17 Oct 2011
TL;DR: Stowaway, a tool that detects overprivilege in compiled Android applications, is built and applied to 940 applications, finding that about one-third are overprivileged.
Abstract: Android provides third-party applications with an extensive API that includes access to phone hardware, settings, and user data. Access to privacy- and security-relevant parts of the API is controlled with an install-time application permission system. We study Android applications to determine whether Android developers follow least privilege with their permission requests. We built Stowaway, a tool that detects overprivilege in compiled Android applications. Stowaway determines the set of API calls that an application uses and then maps those API calls to permissions. We used automated testing tools on the Android API in order to build the permission map that is necessary for detecting overprivilege. We apply Stowaway to a set of 940 applications and find that about one-third are overprivileged. We investigate the causes of overprivilege and find evidence that developers are trying to follow least privilege but sometimes fail due to insufficient API documentation.
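As a rough illustration of the overprivilege check described above, the self-contained sketch below compares declared permissions against a tiny, hypothetical API-to-permission map; Stowaway's real map is built by automated testing of the Android API and is far larger.

```java
// Illustrative only: a drastically simplified version of the kind of check
// Stowaway performs. The map and the example app data are hypothetical.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public final class OverprivilegeCheck {
    public static void main(String[] args) {
        // Hypothetical permission map: Android API call -> permission it needs.
        Map<String, String> apiToPermission = new HashMap<>();
        apiToPermission.put("LocationManager.getLastKnownLocation",
                "android.permission.ACCESS_FINE_LOCATION");
        apiToPermission.put("SmsManager.sendTextMessage",
                "android.permission.SEND_SMS");

        // Permissions the (hypothetical) app declares in its manifest.
        Set<String> declared = new HashSet<>(Set.of(
                "android.permission.ACCESS_FINE_LOCATION",
                "android.permission.SEND_SMS",
                "android.permission.READ_CONTACTS"));

        // API calls found in the app's compiled code.
        Set<String> usedApis = Set.of("LocationManager.getLastKnownLocation");

        // A declared permission is needed only if some used API maps to it.
        Set<String> needed = new HashSet<>();
        for (String api : usedApis) {
            String p = apiToPermission.get(api);
            if (p != null) needed.add(p);
        }

        declared.removeAll(needed);
        // Prints SEND_SMS and READ_CONTACTS: declared but not backed by any observed API use.
        System.out.println("Potentially unnecessary permissions: " + declared);
    }
}
```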

1,395 citations

Proceedings ArticleDOI
16 Oct 2012
TL;DR: An analysis of the permission system of the Android smartphone OS is performed, finding that a trade-off exists between enabling least-privilege security with fine-grained permissions and maintaining stability of the permission specification as the Android OS evolves.
Abstract: Modern smartphone operating systems (OSs) have been developed with a greater emphasis on security and protecting privacy. One of the mechanisms these systems use to protect users is a permission system, which requires developers to declare what sensitive resources their applications will use, has users agree to these requests when they install the application, and constrains the application to the requested resources at runtime. As these permission systems become more common, questions have arisen about their design and implementation. In this paper, we perform an analysis of the permission system of the Android smartphone OS in an attempt to begin answering some of these questions. Because the documentation of Android's permission system is incomplete and because we wanted to be able to analyze several versions of Android, we developed PScout, a tool that extracts the permission specification from the Android OS source code using static analysis. PScout overcomes several challenges, such as scalability due to Android's 3.4-million-line code base, accounting for permission enforcement across processes due to Android's use of IPC, and abstracting Android's diverse permission-checking mechanisms into a single primitive for analysis. We use PScout to analyze 4 versions of Android spanning version 2.2 up to the recently released Android 4.0. Our main findings are that while Android has over 75 permissions, there is little redundancy in the permission specification. However, if applications could be constrained to only use documented APIs, then about 22% of the non-system permissions are actually unnecessary. Finally, we find that a trade-off exists between enabling least-privilege security with fine-grained permissions and maintaining stability of the permission specification as the Android OS evolves.
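The sketch below (the service class and method are invented) shows the framework-side pattern a static permission-specification extractor like PScout looks for: a public API entry point that enforces a permission before touching a protected resource. Mapping such checks back to the APIs that reach them, including across IPC boundaries, is what yields the permission specification described above.

```java
// Minimal sketch; ExampleVibratorService is hypothetical, but
// Context.enforceCallingOrSelfPermission and the VIBRATE permission are real.
import android.Manifest;
import android.content.Context;

public final class ExampleVibratorService {
    private final Context context;

    public ExampleVibratorService(Context context) {
        this.context = context;
    }

    // The enforce call ties this entry point to android.permission.VIBRATE
    // in the extracted permission specification.
    public void vibrate(long milliseconds) {
        context.enforceCallingOrSelfPermission(
                Manifest.permission.VIBRATE, "Caller lacks VIBRATE");
        // ... actual hardware access would happen here ...
    }
}
```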

707 citations

Proceedings ArticleDOI
22 May 2016
TL;DR: This paper analyzes Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms and supports a broad range of devices including motion sensors, fire alarms, and door locks, and uncovers two intrinsic design flaws that lead to significant overprivilege in SmartApps.
Abstract: Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes, (2) stole existing door lock codes, (3) disabled vacation mode of the home, and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.
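SmartApps are written in a Groovy DSL, so the Java model below is purely illustrative (interface and names are invented) of the overprivilege described above: an app that declares a need for only limited access to a lock device still receives the device's full command surface once installed.

```java
// Illustrative model of coarse-grained capability overprivilege; not SmartThings code.
public final class CoarseCapabilityExample {

    // Simplified stand-in for the full device handle a SmartApp receives.
    interface LockDevice {
        int batteryLevel();   // the only thing this app actually needs
        void lock();          // granted anyway
        void unlock();        // granted anyway: the overprivilege
    }

    static void reportBattery(LockDevice frontDoor) {
        System.out.println("Battery: " + frontDoor.batteryLevel() + "%");
        // Nothing prevents this app from also calling frontDoor.unlock(),
        // which is the kind of overprivilege the analysis quantifies.
    }
}
```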

591 citations

Proceedings ArticleDOI
16 Oct 2012
TL;DR: MalloDroid, a tool to detect potential vulnerability to MITM attacks, is introduced; an analysis of 13,500 popular free apps downloaded from Google's Play Market revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks.
Abstract: Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated.
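The kinds of SSL/TLS misuse the paper reports typically boil down to overriding certificate or hostname validation. The deliberately broken Java sketch below shows the two classic anti-patterns (accept-all trust manager, allow-all hostname verifier) that make MITM interception trivial; it illustrates the vulnerability class and is not code from MalloDroid.

```java
// WARNING: deliberately vulnerable example of common SSL/TLS misuse.
import java.security.cert.X509Certificate;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;
import javax.net.ssl.X509TrustManager;

public final class BrokenTls {

    // Accepts any certificate chain: defeats TLS server authentication entirely.
    static final X509TrustManager TRUST_ALL = new X509TrustManager() {
        @Override public void checkClientTrusted(X509Certificate[] chain, String authType) { }
        @Override public void checkServerTrusted(X509Certificate[] chain, String authType) { }
        @Override public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    };

    // Accepts any hostname: a certificate issued for an attacker's domain
    // "verifies" for any site the app connects to.
    static final HostnameVerifier ALLOW_ALL = new HostnameVerifier() {
        @Override public boolean verify(String hostname, SSLSession session) { return true; }
    };
}
```

The safe default is simply not to install such overrides and to rely on the platform's standard certificate and hostname validation, optionally strengthened with certificate pinning.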

557 citations

Proceedings ArticleDOI
05 Sep 2012
TL;DR: A new model for privacy, privacy as expectations, is introduced: crowdsourcing is used to capture users' expectations of which sensitive resources mobile apps use, and a new privacy summary interface prioritizes and highlights places where mobile apps break people's expectations.
Abstract: Smartphone security research has produced many useful tools to analyze the privacy-related behaviors of mobile apps. However, these automated tools cannot assess people's perceptions of whether a given action is legitimate, or how that action makes them feel with respect to privacy. For example, automated tools might detect that a blackjack game and a map app both use one's location information, but people would likely view the map's use of that data as more legitimate than the game. Our work introduces a new model for privacy, namely privacy as expectations. We report on the results of using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use. We also report on a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations. We conclude with a discussion of implications for employing crowdsourcing as a privacy evaluation technique.
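As a hypothetical sketch of the expectation-mismatch idea (the app, resources, and data below are invented for illustration), one can compare crowd-sourced expectations with observed resource use and surface the difference:

```java
// Toy illustration of "privacy as expectations": highlight resource uses
// that fall outside what people expect of the app.
import java.util.HashSet;
import java.util.Set;

public final class ExpectationGap {
    public static void main(String[] args) {
        // Resources crowd workers expect a blackjack game to use (hypothetical).
        Set<String> expected = Set.of("network");

        // Resources analysis shows the app actually uses (hypothetical).
        Set<String> observed = Set.of("network", "precise location", "contact list");

        // Unexpected uses are what a privacy summary should highlight first.
        Set<String> unexpected = new HashSet<>(observed);
        unexpected.removeAll(expected);
        System.out.println("Surprising resource use: " + unexpected);
    }
}
```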

491 citations