Analyzing Inter-Application Communication in Android
Erika Chin Adrienne Porter Felt Kate Greenwood David Wagner
University of California, Berkeley
Berkeley, CA, USA
{emc, apf, kate_eli, daw}@cs.berkeley.edu
ABSTRACT
Modern smartphone operating systems support the devel-
opment of third-party applications with open system APIs.
In addition to an open API, the Android operating system
also provides a rich inter-application message passing sys-
tem. This encourages inter-application collaboration and
reduces developer burden by facilitating component reuse.
Unfortunately, message passing is also an application at-
tack surface. The content of messages can be sniffed, modi-
fied, stolen, or replaced, which can compromise user privacy.
Also, a malicious application can inject forged or otherwise
malicious messages, which can lead to breaches of user data
and violate application security policies.
We examine Android application interaction and identify
security risks in application components. We provide a tool,
ComDroid, that detects application communication vulner-
abilities. ComDroid can be used by developers to analyze
their own applications before release, by application review-
ers to analyze applications in the Android Market, and by
end users. We analyzed 20 applications with the help of
ComDroid and found 34 exploitable vulnerabilities; 12 of
the 20 applications have at least one vulnerability.
Categories and Subject Descriptors
D.2.5 [Software Engineering]: Testing and Debugging;
D.4 [Operating Systems]: Security and Protection
General Terms
Security
Keywords
Android, message passing, Intents, mobile phone security
1. INTRODUCTION
Over the past decade, mobile phones have evolved from
simple devices used for phone calls and SMS messages to so-
phisticated devices that can run third-party software. Phone
owners are no longer limited to the simple address book and
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
MobiSys’11, June 28–July 1, 2011, Bethesda, Maryland, USA.
Copyright 2011 ACM 978-1-4503-0643-0/11/06 ...$10.00.
other basic capabilities provided by the operating system
and phone manufacturer. They are free to customize their
phones by installing third-party applications of their choos-
ing. Mobile phone manufacturers support third-party app-
lication developers by providing development platforms and
software stores (e.g., Android Market, Apple App Store [1,
3]) where developers can distribute their applications.
Android’s application communication model further pro-
motes the development of rich applications. Android devel-
opers can leverage existing data and services provided by
other applications while still giving the impression of a sin-
gle, seamless application. For example, a restaurant review
application can ask other applications to display the restau-
rant’s website, provide a map with the restaurant’s location,
and call the restaurant. This communication model reduces
developer burden and promotes functionality reuse. Android
achieves this by dividing applications into components and
providing a message passing system so that components can
communicate within and across application boundaries.
Android’s message passing system can become an attack
surface if used incorrectly. In this paper, we discuss the risks
of Android message passing and identify insecure developer
practices. If a message sender does not correctly specify the
recipient, then an attacker could intercept the message and
compromise its confidentiality or integrity. If a component
does not restrict who may send it messages, then an attacker
could inject malicious messages into it.
We have seen numerous malicious mobile phone applica-
tions in the wild. For example, SMS Message Spy Pro dis-
guises itself as a tip calculator and forwards all sent and
received SMS messages to a third party [25]; similarly, Mo-
biStealth records SMS messages, call history, browser his-
tory, GPS location, and more [26, 4]. This is worrisome
because users rely on their phones to perform private and
sensitive tasks like sending e-mail, taking pictures, and per-
forming banking transactions. It is therefore important to
help developers write secure applications that do not leak or
alter user data in the presence of an adversary.
We examine the Android communication model and the
security risks it creates, including personal data loss and
corruption, phishing, and other unexpected behavior. We
present ComDroid, a tool that analyzes Android applica-
tions to detect potential instances of these vulnerabilities.
We used ComDroid to analyze 20 applications and found
34 vulnerabilities in 12 of the applications. Most of these
vulnerabilities stem from the fact that Intents can be used
for both intra- and inter-application communication, so we
provide recommendations for changing Android to help de-
velopers distinguish between internal and external messages.

2. ANDROID OVERVIEW
Android’s security model differs significantly from the stan-
dard desktop security model. Android applications are treated
as mutually distrusting principals; they are isolated from
each other and do not have access to each others’ private
data. We provide an overview of the Android security model
and inter-application communication facilities next.
Although other smartphone platforms have a similar se-
curity model, we focus on Android because it has the most
sophisticated application communication system. The com-
plexity of Android’s message passing system implies it has
the largest attack surface. In Section 6, we compare and
contrast Android to other mobile operating systems.
2.1 Threat Model
The Android Market contains a wide array of third-party
applications, and a user may install applications with vary-
ing trust levels. Users install applications from unknown
developers alongside trusted applications that handle pri-
vate information such as financial data and personal pho-
tographs. For example, a user might install both a highly
trusted banking application and a free game application.
The game should not be able to obtain access to the user’s
bank account information.
Under the Android security model, all applications are
treated as potentially malicious. Each application runs in
its own process with a low-privilege user ID, and applica-
tions can only access their own files by default. These isola-
tion mechanisms aim to protect applications with sensitive
information from malware.
Despite their default isolation, applications can optionally
communicate via message passing. Communication can be-
come an attack vector. If a developer accidentally exposes
functionality, then the application can be tricked into per-
forming an undesirable action. If a developer sends data to
the wrong recipient, then it might leak sensitive data. In
this paper, we consider how applications can prevent these
kinds of communication-based attacks.
In addition to providing inter-application isolation, the
Android security model protects the system API from mali-
cious applications. By default, applications do not have the
ability to interact with sensitive parts of the system API;
however, the user can grant an application additional per-
missions during installation. We do not consider attacks on
the operating system; instead, we focus on securing applica-
tions from each other.
2.2 Intents
Android provides a sophisticated message passing system,
in which Intents are used to link applications. An Intent is
a message that declares a recipient and optionally includes
data; an Intent can be thought of as a self-contained ob-
ject that specifies a remote procedure to invoke and includes
the associated arguments. Applications use Intents for both
inter-application communication and intra-application com-
munication. Additionally, the operating system sends In-
tents to applications as event notifications. Some of these
event notifications are system-wide events that can only be
sent by the operating system. We call these messages system
broadcast Intents.
Intents can be used for explicit or implicit communica-
tion. An explicit Intent specifies that it should be delivered
to a particular application specified by the Intent, whereas
an implicit Intent requests delivery to any application that
supports a desired operation. In other words, an explicit
Intent identifies the intended recipient by name, whereas an
implicit Intent leaves it up to the Android platform to de-
termine which application(s) should receive the Intent. For
example, consider an application that stores contact infor-
mation. When the user clicks on a contact’s street address,
the contacts application needs to ask another application to
display a map of that location. To achieve this, the con-
tacts application could send an explicit Intent directly to
Google Maps, or it could send an implicit Intent that would
be delivered to any application that says it provides map-
ping functionality (e.g., Yahoo! Maps or Bing Maps). Using
an explicit Intent guarantees that the Intent is delivered to
the intended recipient, whereas implicit Intents allow for late
runtime binding between different applications.
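To make the distinction concrete, the following minimal Java sketch shows both forms for the contacts/maps scenario above. The class names, extra key, and address are hypothetical placeholders, not code from the paper.

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

public class ContactDetailsActivity extends Activity {
    // Explicit Intent: names the recipient component directly, so only
    // MapViewerActivity (a hypothetical Activity in this application) receives it.
    void showMapExplicitly() {
        Intent i = new Intent(this, MapViewerActivity.class);
        i.putExtra("address", "2150 Shattuck Ave, Berkeley, CA");
        startActivity(i);
    }

    // Implicit Intent: only the desired operation (view a geo: URI) is given;
    // Android delivers it to whichever installed application claims to handle it.
    void showMapImplicitly() {
        Intent i = new Intent(Intent.ACTION_VIEW,
                Uri.parse("geo:0,0?q=2150+Shattuck+Ave,+Berkeley,+CA"));
        startActivity(i);
    }

    // Hypothetical in-app map screen, included only so the sketch is self-contained.
    public static class MapViewerActivity extends Activity { }
}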
2.3 Components
Intents are delivered to application components, which are
logical application building blocks. Android defines four
types of components:
Activities provide user interfaces. Activities are started
with Intents, and they can return data to their invok-
ing components upon completion. All visible portions
of applications are Activities.
Services run in the background and do not interact
with the user. Downloading a file or decompressing
an archive are examples of operations that may take
place in a Service. Other components can bind to a
Service, which lets the binder invoke methods that are
declared in the target Service’s interface. Intents are
used to start and bind to Services.
Broadcast Receivers receive Intents sent to multiple
applications. Receivers are triggered by the receipt
of an appropriate Intent and then run in the back-
ground to handle the event. Receivers are typically
short-lived; they often relay messages to Activities or
Services. There are three types of broadcast Intents:
normal, sticky, and ordered. Normal broadcasts are
sent to all registered Receivers at once, and then they
disappear. Ordered broadcasts are delivered to one
Receiver at a time; also, any Receiver in the delivery
chain of an ordered broadcast can stop its propaga-
tion. Broadcast Receivers have the ability to set their
priority level for receiving ordered broadcasts. Sticky
broadcasts remain accessible after they have been de-
livered and are re-broadcast to future Receivers.
Content Providers are databases addressable by their
application-defined URIs. They are used for both per-
sistent internal data storage and as a mechanism for
sharing information between applications.
Intents can be sent between three of the four components:
Activities, Services, and Broadcast Receivers. Intents can
be used to start Activities; start, stop, and bind Services;
and broadcast information to Broadcast Receivers. (Table 1
shows relevant method signatures.) All of these forms of
communication can be used with either explicit or implicit
Intents. By default, a component receives only internal app-
lication Intents (and is therefore not externally invocable).

To a Broadcast Receiver:
    sendBroadcast(Intent i)
    sendBroadcast(Intent i, String rcvrPermission)
    sendOrderedBroadcast(Intent i, String rcvrPermission, BroadcastReceiver receiver, ...)
    sendOrderedBroadcast(Intent i, String rcvrPermission)
    sendStickyBroadcast(Intent i)
    sendStickyOrderedBroadcast(Intent i, BroadcastReceiver receiver, ...)
To an Activity:
    startActivity(Intent i)
    startActivityForResult(Intent i, int requestCode)
To a Service:
    startService(Intent i)
    bindService(Intent i, ServiceConnection conn, int flags)
Table 1: A non-exhaustive list of Intent-sending mechanisms
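As a rough illustration of how the calls in Table 1 are used, the Java sketch below sends one Intent to each kind of component. The action string and permission name are hypothetical.

import android.app.Activity;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.ServiceConnection;
import android.os.IBinder;

public class IntentSendingExamples extends Activity {
    private final ServiceConnection conn = new ServiceConnection() {
        @Override public void onServiceConnected(ComponentName name, IBinder service) { }
        @Override public void onServiceDisconnected(ComponentName name) { }
    };

    void sendExamples() {
        Intent i = new Intent("com.example.action.SHOW_STATUS"); // hypothetical action

        startActivity(i);                      // start an Activity
        startActivityForResult(i, 1);          // start an Activity and expect a result

        startService(i);                       // start a Service
        bindService(i, conn, Context.BIND_AUTO_CREATE); // bind to a Service

        sendBroadcast(i);                      // broadcast to all matching Receivers
        sendBroadcast(i, "com.example.permission.READ_STATUS"); // delivered only to
                                               // Receivers holding this (hypothetical) permission
    }
}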
2.4 Component Declaration
To receive Intents, a component must be declared in the
application manifest. A manifest is a configuration file that
accompanies the application during installation. A devel-
oper uses the manifest to specify what external Intents (if
any) should be delivered to the application’s components.
2.4.1 Exporting a Component
For a Service or Activity to receive Intents, it must be de-
clared in the manifest. (Broadcast Receivers can be declared
in the manifest or at runtime.) A component is considered
exported, or public, if its declaration sets the EXPORTED flag
or includes at least one Intent filter. Exported components
can receive Intents from other applications, and Intent fil-
ters specify what type of Intents should be delivered to an
exported component.
Android determines which Intents should be delivered to
an exported component by matching each Intent’s fields to
the component’s declaration. An Intent can include a com-
ponent name, an action, data, a category, extra data, or
any subset thereof. A developer sends an explicit Intent by
specifying a recipient component name; the Intent is then
delivered to the component with that name. Implicit Intents
lack component names, so Android uses the other fields to
identify an appropriate recipient.
An Intent filter can constrain incoming Intents by action,
data, and category; the operating system will match Intents
against these constraints. An action specifies a general op-
eration to be performed, the data field specifies the type of
data to operate on, and the category gives additional infor-
mation about the action to execute. For example, a com-
ponent that edits images might define an Intent filter that
states it can accept any Intent with an EDIT action and data
whose MIME type is image/*. For a component to be an
eligible recipient of an Intent, it must have specified each
action, category, and data contained in the Intent in its own
Intent filter. A filter can specify more actions, data, and
categories than the Intent, but it cannot have fewer.
Multiple applications can register components that han-
dle the same type of Intent. This means that the operating
system needs to decide which component should receive the
Intent. Broadcast Receivers can specify a priority level (as
an attribute of their Intent filters) to indicate to the operat-
ing system how well-suited the component is to handle an
Intent. When ordered broadcasts are sent, the Intent filter
with the highest priority level will receive the Intent first.
Ties among Activities are resolved by asking the user to se-
lect the preferred application (if the user has not already
set a default selection). Competition between Services is
decided by randomly choosing a Service.
It is important to note that Intent filters are not a security
mechanism. A sender can assign any action, type, or cate-
gory that it wants to an Intent (with the exception of certain
actions that only the system can send), or it can bypass the
filter system entirely with an explicit Intent. Conversely, a
component can claim to handle any action, type, or cate-
gory, regardless of whether it is actually well-suited for the
desired operation.
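For instance, a Broadcast Receiver registered at runtime declares the same kind of filter in code. The sketch below, with illustrative values only, accepts EDIT actions on image MIME types and sets a priority that matters only for ordered broadcasts.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

public class ImageEditReceiverSetup {
    void register(Context context) throws IntentFilter.MalformedMimeTypeException {
        IntentFilter filter = new IntentFilter(Intent.ACTION_EDIT);
        filter.addDataType("image/*");   // match any image MIME type
        filter.setPriority(10);          // considered only for ordered broadcasts

        // Registering at runtime exposes the Receiver to other applications,
        // just like declaring an Intent filter in the manifest would.
        context.registerReceiver(new BroadcastReceiver() {
            @Override public void onReceive(Context c, Intent intent) {
                // The delivered Intent matches the action and data type above,
                // but its extras and origin are whatever the sender chose.
            }
        }, filter);
    }
}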
2.4.2 Protection
Android restricts access to the system API with permis-
sions, and applications must request the appropriate per-
missions in their manifests to gain access to protected API
calls. Applications can also use permissions to protect them-
selves. An application can specify that a caller must have a
certain permission by adding a permission requirement to a
component’s declaration in the manifest or setting a default
permission requirement for the whole application. Also, the
developer can add permission checks throughout the code.
Conversely, a broadcast Intent sender can limit who can re-
ceive the Intent by requiring the recipient to have a permis-
sion. (This protection is only available to broadcast Intents
and not available to Activity or Service Intents.) Applica-
tions can make use of existing Android permissions or define
new permissions in their manifests.
All permissions have a protection level that determines
how difficult the permission is to acquire. There are four
protection levels:
Normal permissions are granted automatically.
Dangerous permissions can be granted by the user dur-
ing installation. If the permission request is denied,
then the application is not installed.
Signature permissions are only granted if the request-
ing application is signed by the same developer that
defined the permission. Signature permissions are use-
ful for restricting component access to a small set of
applications trusted and controlled by the developer.
SignatureOrSystem permissions are granted if the app-
lication meets the Signature requirement or if the app-
lication is installed in the system applications folder.
Applications from the Android Market cannot be in-
stalled into the system applications folder. System ap-
plications must be pre-installed by the device manu-
facturer or manually installed by an advanced user.

Applications seeking strong protection can require that callers
hold permissions from the higher categories. For example,
the BRICK permission can be used to disable a device. It is a
Signature-level permission defined by the operating system,
which means that it will only be granted to applications
with the same signature as the operating system (i.e., appli-
cations signed with the phone manufacturer’s signature). If
a developer were to protect her component with the BRICK
permission, then only an application with that permission
(e.g., a Google-made application) could use that component.
In contrast, a component protected with a Normal permis-
sion is essentially unprotected because any application can
easily obtain the permission.
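In code, both directions of this protection look roughly as follows. The action and permission names are hypothetical, and the permissions themselves would be declared in a manifest with an appropriate protection level.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

public class PermissionProtectedMessaging {
    // Sender side: only applications holding RECEIVE_STATUS can receive this broadcast.
    void sendProtectedBroadcast(Context context) {
        Intent i = new Intent("com.example.action.STATUS_UPDATE");
        context.sendBroadcast(i, "com.example.permission.RECEIVE_STATUS");
    }

    // Receiver side: only applications holding SEND_STATUS can deliver
    // broadcasts to this dynamically registered Receiver.
    void registerProtectedReceiver(Context context, BroadcastReceiver receiver) {
        IntentFilter filter = new IntentFilter("com.example.action.STATUS_UPDATE");
        context.registerReceiver(receiver, filter,
                "com.example.permission.SEND_STATUS", null /* deliver on the main thread */);
    }
}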
3. INTENT-BASED ATTACK SURFACES
We examine the security challenges of Android commu-
nication from the perspectives of Intent senders and Intent
recipients. In Section 3.1, we discuss how sending an Intent
to the wrong application can leak user information. Data
can be stolen by eavesdroppers and permissions can be acci-
dentally transferred between applications. In Section 3.2, we
consider vulnerabilities related to receiving external Intents,
i.e., Intents coming from other applications. If a component
is accidentally made public, then external applications can
invoke its components in surprising ways or inject malicious
data into it. We summarize guidelines for secure Android
communication in Section 3.3.
Throughout our discussion of component security, we fo-
cus our attention on exported components. Non-exported
components are not accessible to other applications and thus
are not subject to the attacks we present here. We also ex-
clude exported components and broadcast Intents that are
protected with permissions that other applications cannot
acquire. As explained in Section 2.4.2, Normal and Dan-
gerous permissions do not offer components or Intents very
strong protection: Normal permissions are granted automat-
ically, and Dangerous permissions are granted with user ap-
proval. Signature and SignatureOrSystem permissions, how-
ever, are very difficult to obtain. We consider components
and broadcast Intents that are protected with Signature or
SignatureOrSystem permissions as private.
3.1 Unauthorized Intent Receipt
When an application sends an implicit Intent, there is no
guarantee that the Intent will be received by the intended
recipient. A malicious application can intercept an implicit
Intent simply by declaring an Intent filter with all of the
actions, data, and categories listed in the Intent. The ma-
licious application then gains access to all of the data in
any matching Intent, unless the Intent is protected by a
permission that the malicious application lacks. Intercep-
tion can also lead to control-flow attacks like denial of ser-
vice or phishing. We consider how attacks can be mounted
on Intents intended for Broadcast Receivers, Activities, and
Services. We also discuss special types of Intents that are
particularly dangerous if intercepted.
3.1.1 Broadcast Theft
Broadcasts can be vulnerable to passive eavesdropping or
active denial of service attacks (Figure 1). An eavesdropper
can silently read the contents of a broadcast Intent without
interrupting the broadcast. Eavesdropping is a risk when-
ever an application sends a public broadcast. (A public
!"#$%&
'()&
*+%&
,-.("&
!"#$%& '()&
*+%&
,-.("&
Figure 1: Broadcast Eavesdropping (left): Expected recip-
ients Bob and Carol receive the Intent, but so does Eve.
Broadcast Denial of Service for Ordered Broadcasts (right):
Eve steals the Intent and prevents Bob and Carol from re-
ceiving it.
broadcast is an implicit Intent that is not protected by a
Signature or SignatureOrSystem permission.) A malicious
Broadcast Receiver could eavesdrop on all public broadcasts
from all applications by creating an Intent filter that lists all
possible actions, data, and categories. There is no indica-
tion to the sender or user that the broadcast has been read.
Sticky broadcasts are particularly at risk for eavesdropping
because they persist and are re-broadcast to new Receivers;
consequently, there is a large temporal window for a sticky
broadcast Intent to be read. Additionally, sticky broadcasts
cannot be protected by permissions.
Furthermore, an active attacker could launch denial of
service or data injection attacks on ordered broadcasts. Or-
dered broadcasts are serially delivered to Receivers in order
of priority, and each Receiver can stop it from propagat-
ing further. If a malicious Receiver were to make itself a
preferred Receiver by registering itself as a high priority, it
would receive the Intent first and could cancel the broad-
cast. Non-ordered broadcasts are not vulnerable to denial
of service attacks because they are delivered simultaneously
to all Receivers. Ordered broadcasts can also be subject to
malicious data injection. As each Receiver processes the In-
tent, it can pass on a result to the next Receiver; after all
Receivers process the Intent, the result is returned to the
sending component. A malicious Receiver can change the
result, potentially affecting the sender and all other receiv-
ing components.
When a developer broadcasts an Intent, he or she must
consider whether the information being sent is sensitive. Ex-
plicit broadcast Intents should be used for internal applica-
tion communication, to prevent eavesdropping or denial of
service. There is no need to use implicit broadcasts for in-
ternal functionality. At the very least, the developer should
consider applying appropriate permissions to Intents con-
taining private data.
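One way to keep an internal broadcast internal is to make it explicit, as the following sketch does for a hypothetical in-app download notification.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class InternalBroadcastExample {
    // Because the Intent names its recipient, only DownloadStatusReceiver in this
    // application receives it; other applications cannot eavesdrop on or steal it.
    void notifyDownloadFinished(Context context) {
        Intent i = new Intent("com.example.action.DOWNLOAD_FINISHED"); // hypothetical action
        i.setClass(context, DownloadStatusReceiver.class);             // makes the broadcast explicit
        context.sendBroadcast(i);
    }

    // Hypothetical in-app Receiver, included only so the sketch is self-contained.
    public static class DownloadStatusReceiver extends BroadcastReceiver {
        @Override public void onReceive(Context context, Intent intent) { /* handle internally */ }
    }
}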
3.1.2 Activity Hijacking
In an Activity hijacking attack, a malicious Activity is
launched in place of the intended Activity. The malicious
Activity registers to receive another application’s implicit
Intents, and it is then started in place of the expected Ac-
tivity (Figure 2).
In the simplest form of this attack, the malicious Activ-
ity could read the data in the Intent and then immediately
relay it to a legitimate Activity. In a more sophisticated ac-
tive attack, the hijacker could spoof the expected Activity’s
user interface to steal user-supplied data (i.e., phishing). For

!"#$%& '()&
*+%&
!"#$%& '()&
*+%&
Figure 2: Activity/Service Hijacking (left): Alice acciden-
tally starts Eve’s component instead of Bob’s. False Re-
sponse (right): Eve returns a malicious result to Alice. Alice
thinks the result comes from Bob.
Figure 3: The user is prompted when an implicit Intent
resolves to multiple Activities.
example, consider a legitimate application that solicits do-
nations. When a user clicks on a “Donate Here” button, the
application uses an implicit Intent to start another Activity
that prompts the user for payment information. If a ma-
licious Activity hijacks the Intent, then the attacker could
receive information supplied by the user (e.g., passwords and
money). Phishing attacks can be mounted convincingly be-
cause the Android UI does not identify the currently running
application. Similarly, a spoofed Activity can lie to the user
about an action’s completion (e.g., telling the user that an
application was successfully uninstalled when it was not).
Activity hijacking is not always possible. When multiple
Activities match the same Intent, the user will be prompted
to choose which application the Intent should go to if a de-
fault choice has not already been set. (Figure 3 shows the
dialog.) If the secure choice is obvious, then the attack will
not succeed. However, an attacker can handle this challenge
in two ways. First, an application can provide a confusing
name for a component to fool the user into selecting the
wrong application. Second, the malicious application can
provide a useful service so that the user willingly makes it
the default application to launch. For example, a user might
opt to make a malicious browser the default browser and
never get prompted to choose between components again.
Although the visibility of the Activity chooser represents a
challenge for the attacker, the consequences of a successful
attack can be severe.
If an Activity hijacking attack is successful, the victim
component may be open to a secondary false response at-
tack. Some Activities are expected to return results upon
completion. In these cases, an Activity hijacker can return a
malicious response value to its invoker. If the victim appli-
cation trusts the response, then false information is injected
into the victim application.
3.1.3 Service Hijacking
Service hijacking occurs when a malicious Service inter-
cepts an Intent meant for a legitimate Service. The result
is that the initiating application establishes a connection
with a malicious Service instead of the one it wanted. The
malicious Service can steal data and lie about completing
requested actions. Unlike Activity hijacking, Service hijack-
ing is not apparent to the user because no user interface
is involved. When multiple Services can handle an Intent,
Android selects one at random; the user is not prompted to
select a Service.
As with Activity hijacking, Service hijacking can enable
the attacker to spoof responses (a false response attack).
Once the malicious Service is bound to the calling applica-
tion, then the attacker can return arbitrary malicious data
or simply return a successful result without taking the re-
quested action. If the calling application provides the Ser-
vice with callbacks, then the Service might be able to mount
additional attacks using the callbacks.
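A caller that must use an implicit Intent can at least check which Service would be chosen before binding. The sketch below, with a hypothetical action and package name, resolves the Intent first and binds explicitly only if the expected package would receive it.

import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.ServiceConnection;
import android.content.pm.ResolveInfo;

public class GuardedServiceBinding {
    // Returns true only if the bind request went to the package we expected.
    boolean bindToTrustedService(Context context, ServiceConnection conn) {
        Intent implicit = new Intent("com.example.action.BIND_PAYMENT"); // hypothetical action
        ResolveInfo info = context.getPackageManager().resolveService(implicit, 0);
        if (info == null || !"com.example.trustedpayments".equals(info.serviceInfo.packageName)) {
            return false; // a different (possibly malicious) Service would have been selected
        }
        Intent explicit = new Intent(implicit);
        explicit.setComponent(new ComponentName(info.serviceInfo.packageName,
                                                info.serviceInfo.name));
        return context.bindService(explicit, conn, Context.BIND_AUTO_CREATE);
    }
}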
3.1.4 Special Intents
Intents can include URIs that reference data stored in an
application’s Content Provider. In case the Intent recipi-
ent does not have the privilege to access the URI, the In-
tent sender can set the FLAG_GRANT_READ_URI_PERMISSION
or FLAG_GRANT_WRITE_URI_PERMISSION flags on the Intent.
If the Provider has allowed URI permissions to be granted
(in the manifest), this will give the Intent recipient the abil-
ity to read or write the data at the URI. If a malicious
component intercepts the Intent (in the ways previously dis-
cussed), it can access the data URI contained in the Intent.
If the data is intended to be private, then an Intent carry-
ing data privileges should be explicitly addressed to prevent
interception.
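For example, a sender granting temporary read access to a record of a hypothetical Content Provider can pin the grant to one explicitly named recipient, as in this sketch.

import android.content.Context;
import android.content.Intent;
import android.net.Uri;

public class UriGrantExample {
    void shareOneRecord(Context context) {
        Uri record = Uri.parse("content://com.example.notes.provider/notes/7"); // hypothetical URI
        Intent i = new Intent(Intent.ACTION_VIEW, record);
        i.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);   // grant read access to the recipient only
        i.setClassName("com.example.viewer",                 // hypothetical, explicitly named recipient,
                "com.example.viewer.NoteViewerActivity");    // so no other app can intercept the grant
        i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);           // needed when starting from a non-Activity Context
        context.startActivity(i);
    }
}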
Similarly, pending Intents delegate privileges. A pending
Intent is made by one application and then passed to another
application to use. The pending Intent retains all of the
permissions and privileges of its creator. The recipient of a
pending Intent can send the Intent to a third application,
and the pending Intent will still carry the authority of its
creator. If a malicious application obtains a pending Intent,
then the authority of the Intent’s creator can be abused.
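A minimal sketch of this delegation, with hypothetical names: whoever ends up holding the token below can launch the checkout screen with the creating application's identity and permissions by calling send() on it, so it should only be handed to trusted recipients.

import android.app.Activity;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;

public class PendingIntentExample {
    PendingIntent makeCheckoutToken(Context context) {
        Intent i = new Intent(context, CheckoutActivity.class); // explicit target inside this app
        // The returned token carries this application's authority wherever it travels.
        return PendingIntent.getActivity(context, 0, i, PendingIntent.FLAG_UPDATE_CURRENT);
    }

    // Hypothetical in-app Activity, included only so the sketch is self-contained.
    public static class CheckoutActivity extends Activity { }
}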
3.2 Intent Spoofing
A malicious application can launch an Intent spoofing at-
tack by sending an Intent to an exported component that
is not expecting Intents from that application (Figure 4). If
the victim application takes some action upon receipt of such
an Intent, the attack can trigger that action. For example,
this attack may be possible when a component is exported
even though it is not truly meant to be public. Although
developers can limit component exposure by setting permis-
sion requirements in the manifest or dynamically checking
the caller’s identity, they do not always do so.
3.2.1 Malicious Broadcast Injection
If an exported Broadcast Receiver blindly trusts an in-
coming broadcast Intent, it may take inappropriate action
or operate on malicious data from the broadcast Intent. Re-
ceivers often pass on commands and/or data to Services and
Activities; if this is the case, the malicious Intent can prop-
agate throughout the application.
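A Receiver can reduce this risk by validating what it receives before acting on it. The sketch below reacts only to the system-only boot broadcast (receiving it also requires the RECEIVE_BOOT_COMPLETED permission) and forwards only data it constructs itself; StartupService and the extra name are hypothetical.

import android.app.Service;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.IBinder;

public class BootReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // ACTION_BOOT_COMPLETED is a system broadcast that third-party applications
        // cannot send, so checking the action rejects injected Intents.
        if (!Intent.ACTION_BOOT_COMPLETED.equals(intent.getAction())) {
            return;
        }
        Intent work = new Intent(context, StartupService.class); // explicit, internal target
        work.putExtra("reason", "boot"); // forward only data this component controls
        context.startService(work);
    }

    // Hypothetical in-app Service stub, included only so the sketch is self-contained.
    public static class StartupService extends Service {
        @Override public IBinder onBind(Intent intent) { return null; }
    }
}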


References
Journal ArticleDOI
TL;DR: TaintDroid as mentioned in this paper is an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data by leveraging Android's virtualized execution environment.
Abstract: Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users’ private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.

2,983 citations

Proceedings ArticleDOI
04 Oct 2010
TL;DR: Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, this work found 68 instances of misappropriation of users' location and device identification information across 20 applications.
Abstract: Today's smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides realtime analysis by leveraging Android's virtualized execution environment. TaintDroid incurs only 14% performance overhead on a CPU-bound micro-benchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of potential misuse of users' private information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications.

2,379 citations

Proceedings ArticleDOI
09 Nov 2009
TL;DR: The Kirin security service for Android is proposed, which performs lightweight certification of applications to mitigate malware at install time and indicates that security configuration bundled with Android applications provides practical means of detecting malware.
Abstract: Users have begun downloading an increasingly large number of mobile phone applications in response to advancements in handsets and wireless networks. The increased number of applications results in a greater chance of installing Trojans and similar malware. In this paper, we propose the Kirin security service for Android, which performs lightweight certification of applications to mitigate malware at install time. Kirin certification uses security rules, which are templates designed to conservatively match undesirable properties in security configuration bundled with applications. We use a variant of security requirements engineering techniques to perform an in-depth security analysis of Android to produce a set of rules that match malware characteristics. In a sample of 311 of the most popular applications downloaded from the official Android Market, Kirin and our rules found 5 applications that implement dangerous functionality and therefore should be installed with extreme caution. Upon close inspection, another five applications asserted dangerous rights, but were within the scope of reasonable functional needs. These results indicate that security configuration bundled with Android applications provides practical means of detecting malware.

1,008 citations


"Analyzing inter-application communi..." refers background in this paper

  • ...Kirin [13] approaches third-party application security from the opposite perspective of our tool....

    [...]

Proceedings Article
08 Aug 2011
TL;DR: A horizontal study of popular free Android applications uncovered pervasive use/misuse of personal/ phone identifiers, and deep penetration of advertising and analytics networks, but did not find evidence of malware or exploitable vulnerabilities in the studied applications.
Abstract: The fluidity of application markets complicate smartphone security. Although recent efforts have shed light on particular security issues, there remains little insight into broader security characteristics of smartphone applications. This paper seeks to better understand smartphone application security by studying 1,100 popular free Android applications. We introduce the ded decompiler, which recovers Android application source code directly from its installation image. We design and execute a horizontal study of smartphone applications based on static analysis of 21 million lines of recovered code. Our analysis uncovered pervasive use/misuse of personal/ phone identifiers, and deep penetration of advertising and analytics networks. However, we did not find evidence of malware or exploitable vulnerabilities in the studied applications. We conclude by considering the implications of these preliminary findings and offer directions for future analysis.

947 citations


"Analyzing inter-application communi..." refers background in this paper

  • ...Broadcast Receivers have the ability to set their priority level for receiving ordered broadcasts....

    [...]

  • ...(Broadcast Receivers can be declared in the manifest or at runtime.)...

    [...]

  • ...Sticky broadcasts remain accessible after they have been de­livered and are re-broadcast to future Receivers....

    [...]

  • ...Receivers are triggered by the receipt of an appropriate Intent and then run in the back­ground to handle the event....

    [...]

  • ...Broadcast Receivers can specify a priority level (as an attribute of its Intent .lter) to indicate to the operat­ing system how well-suited the component is to handle an Intent....

    [...]

Book
06 Mar 2003
Firewalls and Internet Security: Repelling the Wily Hacker (2nd edition), by William R. Cheswick, Steven M. Bellovin, and Aviel D. Rubin. Addison-Wesley, 2003.

730 citations



Frequently Asked Questions (11)
Q1. What contributions have the authors mentioned in the paper "Analyzing Inter-Application Communication in Android"?

In addition to an open API, the Android operating system also provides a rich inter-application message passing system. The authors examine Android application interaction and identify security risks in application components. The authors provide a tool, ComDroid, that detects application communication vulnerabilities. The authors analyzed 20 applications with the help of ComDroid and found 34 exploitable vulnerabilities ; 12 of the 20 applications have at least one vulnerability. 

ComDroid specifically performs flow-sensitive, intraprocedural static analysis, augmented with limited interprocedural analysis that follows method invocations to a depth of one method call. 

Requiring Signature or SignatureOrSystem permissions is an effective way of limiting a component’s exposure to a set of trusted applications. 

The authors treat Activities and their aliases as separate components for the purpose of their analysis because an alias’s fields can increase the exposure surface of the component. 

Receivers can also be dynamically created and registered by calling registerReceiver(BroadcastReceiver receiver, IntentFilter filter). 

Their results indicate that Broadcast- and Activity-related Intents (both sending to and receiving from) play a large role in application exposure. 

Android determines which Intents should be delivered to an exported component by matching each Intent’s fields to the component’s declaration. 

iOS developers are unlikely to accidentally expose functionality because schemes are only used for public interfaces; different types of messages are used for internal communication. 

Of the 181 warnings, the authors discovered 20 definite vulnerabilities, 14 spoofing vulnerabilities, and 16 common, unintentional bugs (that are not also vulnerabilities). 

To make components more secure, developers should avoid exporting components unless the component is specifically designed to handle requests from other applications. 

A developer sends an explicit Intent by specifying a recipient component name; the Intent is then delivered to the component with that name.