
VOL. 21 NO. 4, DECEMBER, 2016
Privacy and trust attitudes in the intent to volunteer for data-tracking research
Catherine L. Smith
Abstract
Introduction. The analysis of detailed interaction records is fundamental to the development of user-centred systems. Researchers seeking such data must recruit volunteers willing to allow tracking of their interactions. This study examines privacy and trust attitudes in the intent to volunteer for research requiring installation of tracking software.
Method. A quasi-experimental survey was used to determine how privacy and trust attitudes and the intent to volunteer differ depending on whether tracking software is installed on one’s own computer or a university lab computer.
Analysis. Data from 110 valid responses were analysed using SPSS. Responses were compared between three levels of intent to volunteer (open, closed, unsure) and installation requirements.
Results. Comparing those who decided on installation in the lab to those who decided on installation on their own computers, the acceptability of data tracking differed significantly and differences in the intent to volunteer approached significance. Attitudes on technology, information privacy, trust and research participation differed only with the intent to volunteer.
Conclusion. Few people are likely to be open to volunteering when required to install data-tracking software on their own computers. Addressing privacy concerns and conditions of trust requires understanding the dependencies between these factors through further research with broader populations.
Introduction
In daily life, people use search engines, social networking sites, and other electronic resources as a matter of course. Companies that provide these services record their users’ activities for purposes of modelling and predicting needs and preferences. In exchange for valuable services, users grant companies permission to access, record (log), and analyse highly personal and detailed information such as the content of email, search engine query terms, and URLs of Websites visited (Kellar, Hawkey, Inkpen and Watters, 2008). Collected data may be anonymised, or users may grant permission for the retention of identifiable data for the construction of individualised profiles. With these data, commercial enterprises such as Google, Facebook, and Microsoft have acquired detailed and powerful views on many aspects of human information behaviour.
For academic researchers, understanding how current systems are used in the wild is fundamental. One approach is the use of ethnographic methods (Rieh, 2004), which are time-consuming to analyse and often focus on small samples that may not generalise. Research participants may be invited to a lab for observation, but the completion of assigned tasks is unlikely to reflect typical user needs and behaviour, even when participants are asked to perform their own tasks (Hearst and Degler, 2013). Beyond the need for records of interaction during authentic problem solving in domains such as health care (Mamlin and Tierney, 2016) and disaster recovery (Spence, Lachlan and Rainear, 2016), long-term longitudinal data are critical to understanding changes in usage over time. Obtaining such data requires access to shared collections (e.g., USEWOD2012, n.d.), collaborative work across industry and academia (Dumais, Jeffries, Russell, Tang and Teevan, 2014; Yang and Soboroff, 2016), or the deployment of data-tracking processes developed by and for academic researchers (Feild, Allan and Glatt, 2011).
One solution for academics is a collaborative approach such as a living laboratory (Kelly, Dumais and Pedersen, 2009; Smith, 2011, 2013). Enterprises of this type are shared among researchers and may engage volunteers in the co-design of information systems (Pallot, Trousse, Senach and Scapin, 2010). In this paper, we focus specifically on the concept of a virtual lab, where collaboration occurs online and participants are remote. Here, ideal volunteers would grant permission for the tracking of detailed interaction data across all personal digital devices. From the perspective of privacy and trust, the development of such a facility faces two interdependent challenges. First, the privacy of volunteers must be safeguarded through techniques such as anonymisation and differential privacy (Ohm, 2010; Yang and Soboroff, 2016). Second, in a chicken-and-egg problem, testing these privacy techniques requires a sufficient number of volunteers (Feild and Allan, 2013). Researchers in both the academy and industry have found it difficult to recruit volunteers willing to knowingly install tracking software on their computers (Guo, White, Zhang, Anderson and Dumais, 2011; Community Query Log Project, 2010; Russell and Oren, 2009). Challenges in recruiting research volunteers extend to other domains (Close, Smaldone, Fennoy, Reame and Grey, 2013; Koo and Skinner, 2005), but it is also likely that privacy concerns associated with tracking cause specific impediments. This paper addresses these concerns and other factors hypothesised to affect the decision to volunteer.
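To make the first of these challenges concrete, the sketch below shows one standard form of differential privacy, the Laplace mechanism, applied to a count over a hypothetical query log. It is a minimal illustration of the technique named above, not the method used in any of the systems cited; the function names, the epsilon value and the sample queries are our own assumptions.

```python
import numpy as np

def dp_count(records, predicate, epsilon=0.5):
    """Return an epsilon-differentially-private count of matching records.

    A counting query has sensitivity 1: adding or removing one
    volunteer's record changes the true count by at most 1, so
    Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical tracked queries from three volunteers
log = ["flu symptoms", "python tutorial", "flu shot near me"]
print(dp_count(log, lambda q: "flu" in q))  # true count is 2, plus noise
```

Releasing only noised aggregates of this kind would let researchers report usage statistics while bounding what any single volunteer’s log can reveal, which is one way the privacy safeguards discussed above might be realised.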
This paper is organised as follows. First, we briefly review selected literature on privacy and trust. Following this background information, we state four specific research questions and then describe the method of the study and results. We then discuss our findings and implications before concluding. The paper contributes findings on factors affecting a potential research volunteer’s decision to participate in research, with specific findings on the requirement to download and install tracking software on one’s own computer.
Background

There are many obvious considerations in a decision to volunteer for research where explicit disclosure of private information is required. Two basic aspects are one’s views on personal information privacy and trust that one’s privacy will not be violated. While these are straightforward concerns, the study of privacy and trust is not, particularly in light of the many issues raised when modern information technologies are involved (Stutzman, Gross and Acquisti, 2013). There are many studies on privacy and trust in various disciplines and social contexts: law, business, marketing, psychology, computer science, information science and so forth (see Bélanger and Crossler, 2011; Wang, Min and Han, 2016). While privacy and trust have been treated separately, recent work has examined the combined role of each in human affairs. Many useful conceptualisations flow from this work. Discussions from the law (Nissenbaum, 2001, 2004) are written with the goal of developing a theoretical framework for discussion of practical implications. In this background section we introduce central concepts of privacy and trust, starting with Nissenbaum’s views, and then present work on several major constructs.
Conceptualisation of information privacy
In introducing conceptualisations that underlie our study, we begin with Nissenbaum’s paradigm of contextual integrity (2004). In investigating factors in privacy perception, Martin and Nissenbaum (in press) hypothesised that one’s sense of privacy is dependent on three aspects of context: the specific actors involved (who is sending or receiving information), expectations on the flow of information between the actors (when and how the information will be used), and the type or content of information within that flow (what is shared). More generally, Nissenbaum’s view posits that context forms social and personal norms for privacy, and that privacy violations come about when contextual elements are misaligned.
For example, granting permission to a search engine (actor) for the recording of query terms (content) for the purpose of improving search outcomes (flow) is normative; in this context the searcher perceives some acceptable level of privacy. In contrast, if the query terms are later distributed to a third party for marketing purposes, the flow is altered in violation of the norm, and privacy is diminished.
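Stated operationally, contextual integrity treats an information flow as a combination of actor, content and purpose, with a violation arising whenever any element departs from the norms under which permission was granted. The sketch below encodes the search engine example above in this spirit; the Flow class and its field names are our own illustrative assumptions, not a formalism taken from Nissenbaum.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow: who handles what content, for what purpose."""
    actor: str    # who is sending or receiving the information
    content: str  # what is shared
    purpose: str  # when and how the information will be used

# Norms fixed by the context in which permission was granted
consented_flows = {Flow("search engine", "query terms", "improve search results")}

def violates_contextual_integrity(flow: Flow) -> bool:
    """A flow violates contextual integrity when it departs from the norm."""
    return flow not in consented_flows

# Normative flow: matches the consented context, so privacy is preserved
print(violates_contextual_integrity(
    Flow("search engine", "query terms", "improve search results")))    # False

# Same content, but altered actor and purpose: the norm is violated
print(violates_contextual_integrity(
    Flow("third-party marketer", "query terms", "targeted advertising")))  # True
```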
In the present study, we examine the specific context of a researcher recruiting volunteers for a study that requires the explicit action of downloading, installing and activating tracking software that records search interaction. In this scenario, the actors are the potential volunteer receiving a recruiting communication, the researcher sending it and the researcher’s affiliated institution. The content is the verbatim text of search queries and the URLs of Websites visited. The flow of information mirrors that expected with search providers, except that the researcher offers no exchange of services for the right to access (Richards and Hartzog, in press). As suggested by Nissenbaum’s view, privacy is a highly complex construct.
Typical factors studied in work on privacy-related decisions include general privacy concerns (e.g., Malhotra, Kim and Agarwal, 2004), context-dependent privacy concerns (e.g., Internet privacy concerns, Dinev and Hart, 2006) and other situational factors (for a comprehensive review, see Bélanger and Crossler, 2011). Recent work (Dinev, Xu, Smith and Hart, 2013; Kehr, Kowatsch, Wentzel and Fleisch, 2015) has found evidence for the subsuming construct privacy perception, which is characterised as ‘an individual state, subsuming all privacy-related considerations at a specific point in time’ (Kehr et al., 2015, para. 1). Kehr et al. found privacy perception to be antecedent to privacy-related decisions on information disclosure. Two key findings flow from this work. First, there is an interdependency of risk and benefit perceptions, whereby the perception of risk to privacy is mitigated by the perception of greater benefit from disclosure (Dinev et al., 2013; Kehr et al., 2015). Second, the same studies found that perceptions of risk and benefit vary with other factors, such as general concerns about privacy, the affective valence of communications and trust in technology infrastructure. Dinev et al. found that the perception of control over the information involved (i.e., anonymity and secrecy) affected perceptions of privacy. More generally, trust has been found to be a key factor in decisions on the disclosure of private information. Next, we introduce the general concept of trust, and then briefly review associated factors before concluding with a discussion of models that account for both privacy concerns and trust relationships.
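The interdependency of risk and benefit just described is often summarised as a privacy calculus: the intention to disclose rises with perceived benefit, falls with perceived risk, and trust dampens the risk term. The toy scoring rule below illustrates that structure only; the linear form, the variable names and the example values are our own assumptions, not coefficients estimated in the studies cited.

```python
def disclosure_intention(benefit: float, risk: float, trust: float) -> float:
    """Toy privacy-calculus score over perceptions normalised to [0, 1].

    Trust mitigates perceived risk, so the effective risk is scaled
    down as trust grows; a positive score favours disclosure.
    (Illustrative form only; not an estimated model.)
    """
    effective_risk = risk * (1.0 - trust)  # higher trust, lower felt risk
    return benefit - effective_risk

# High benefit and high trust outweigh a moderate privacy risk
print(disclosure_intention(benefit=0.8, risk=0.6, trust=0.7))  # 0.62 > 0
# The same risk with low benefit and little trust tips against disclosure
print(disclosure_intention(benefit=0.3, risk=0.6, trust=0.1))  # -0.24 < 0
```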
Conceptualisations of trust
In work on privacy, trust has been modelled as an outcome of perceptions of risk (e.g., Dinev and Hart, 2006) and as antecedent to perceptions of risks and benefits (e.g., Kehr et al., 2015). In a recent meta-analysis of research on trust in decisions on engagement in social media, Wang, Min and Han (2016) examined trust as a causal factor in perceptions of risk to information privacy and security. Given the complexity of the interdependencies between trust and privacy, in considering the role of trust in decision making we turn again to Nissenbaum (2001) for views taken from the broader and more practical vantage point of the law. Next, we summarise and paraphrase her characterisations of trust.
Generally, trust is a specific relationship between a trustee (the entity being trusted) and a trustor (the person who trusts). Trust forms over time and with experience; however, in order for trust to accrue, there must be sufficient initial trust. Trust is affected by the history and reputation of the trustee. Where the trustor has some basis for personal knowledge of a trustee, perceptions of the trustee’s personal characteristics affect trust. Within the social context in which trust is sought, a trustee assumes a role. The trustor’s knowledge of the trustee’s qualifications for that role is important to initial trust formation. More generally, the construct of social context includes norms for trustworthiness in the relationship, any penalty the trustee faces for failing to prove trustworthy, the likelihood of disclosure should there be a failure and any insurance against the trustor’s loss if the trustee proves untrustworthy. Finally, trust is most likely to develop when two parties share a mutual condition or risk and there is some expectation of reciprocity. In the context of our study, all of these factors may be involved in a potential volunteer’s decision on enabling data tracking.
As with work on privacy, trust has a large literature covering many models and conceptualisations. McKnight, Choudhury and Kacmar (2002) summarise these constructs in a review of work on trust in the context of e-commerce. We apply these concepts to the perceptions involved in volunteering as a research participant, an act which requires some level of trust, or willingness to be vulnerable (Mayer, Davis and Schoorman, 1995). Note that vulnerability implies acceptance of risk.

The study presented in this paper involves three trustees: the individual researcher seeking volunteers, the researcher’s affiliated institution and the information technologies used to communicate about and conduct the study.
The work of McKnight and others (McKnight, Carter, Thatcher and Clay, 2011; McKnight and Chervany, 2001; McKnight, Choudhury and Kacmar, 2002) suggests the following conditions for trust in e-commerce and technology-enabled contexts. With respect to the trustworthiness of individuals, the researcher must be perceived as benevolent and possessing sufficient competence and integrity to perform as promised (McKnight et al., 2001). The university, as an institution, must be perceived as providing structural assurance (mitigation of risk by social constructs such as rules and regulations) and situational normality (proper, customary, and understandable roles) (McKnight et al., 2002). Finally, the specific technologies involved must be perceived as having the functionality required to perform as promised, sufficient reliability to assure predictability and a quality of helpfulness (McKnight et al., 2011). In recruiting volunteers through online means, only electronic or digital communication is available for conveying these qualities of trustworthiness.
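Constructs of this kind are typically operationalised as batteries of Likert-scale items averaged into a score per construct. The sketch below organises the three trustees and their hypothesised trust conditions in that way; the grouping follows the McKnight summaries above, but the 1-7 scale, the item values and the scoring function are our own illustrative assumptions, not the instrument used in this study.

```python
from statistics import mean

# Trust conditions per trustee, following the McKnight summaries above
TRUST_CONDITIONS = {
    "researcher":  ["benevolence", "competence", "integrity"],
    "institution": ["structural_assurance", "situational_normality"],
    "technology":  ["functionality", "reliability", "helpfulness"],
}

def construct_scores(ratings):
    """Average a respondent's 1-7 Likert ratings for each trustee."""
    return {
        trustee: mean(ratings[trustee][c] for c in conditions)
        for trustee, conditions in TRUST_CONDITIONS.items()
    }

# One hypothetical respondent's ratings
answers = {
    "researcher":  {"benevolence": 6, "competence": 5, "integrity": 6},
    "institution": {"structural_assurance": 4, "situational_normality": 5},
    "technology":  {"functionality": 5, "reliability": 3, "helpfulness": 4},
}
print(construct_scores(answers))
# {'researcher': 5.67, 'institution': 4.5, 'technology': 4.0} (approx.)
```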
Privacy and trust in decision making
We conclude our review of privacy and trust by considering elements involved in the recruitment of research volunteers, where participation requires the installation of tracking software. We focus on two papers that have modelled privacy and trust factors in the context of engagement with specific software applications. These papers use the constructs mentioned above while introducing additional factors.
In a synthesis of prior findings on interrelated constructs of trust and risk, Wang, Min and Han (2016) conducted a meta-analysis of forty-three studies drawn from the literature on social media. In reviewing the work, the authors found that the perception of risk was often measured using privacy constructs. Their analytical framework examined associations between trust and risk, and the associations of each with data-sharing behaviour, among other outcomes. Trust was found to have a larger effect than risk. While the perception of risk was associated with diminished sharing, the larger effect of trust was associated with diminished risk perception and more sharing. In examining moderators of sharing behaviour, trust in the technology platform (the site, community, or service provider) was found to have


References

Dinev, T. and Hart, P. (2006). An extended privacy calculus model for e-commerce transactions. Information Systems Research, 17(1), 61-80.

Krippendorff, K. (1980). Content analysis: an introduction to its methodology. Beverly Hills, CA: Sage.

Malhotra, N. K., Kim, S. S. and Agarwal, J. (2004). Internet users' information privacy concerns (IUIPC): the construct, the scale, and a causal model. Information Systems Research, 15(4), 336-355.

Mayer, R. C., Davis, J. H. and Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734.

McKnight, D. H., Choudhury, V. and Kacmar, C. (2002). Developing and validating trust measures for e-commerce: an integrative typology. Information Systems Research, 13(3), 334-359.