
Showing papers in "Management Information Systems Quarterly in 2022"


Journal ArticleDOI
TL;DR: In this article, the authors conducted a qualitative study with narrative interviews of IT users who had experienced technostress. The study contributes to the technostress literature by unpacking the states in which technostress forms and can be mitigated, and to the IT affordance literature by explaining the role of affordances and their actualizations in technostress and introducing the new concept of actualization cost.
Abstract: Understanding information technology (IT) use is vital for the information systems (IS) discipline due to its substantial positive and negative consequences. In recent years, IT use for personal purposes has grown rapidly. Although personal use is voluntary and can often reflect fun, technostress is a common negative consequence of such use. When left unaddressed, technostress can cause serious harm to IT users. However, prior research has not explained how technostress forms over time or how its mitigation takes place in a personal—rather than organizational—environment. To address these research gaps, we conducted a qualitative study with narrative interviews of IT users who had experienced technostress. This study contributes to (1) the technostress literature by unpacking states in which technostress forms and can be mitigated and (2) the IT affordance literature by explaining the role of affordances and their actualizations in technostress as well as introducing the new concept of actualization cost. In terms of practice, our findings help individuals and societies identify the development of technostress, understand the activities required for its mitigation, and recognize mitigation barriers.

30 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a game-theoretic model to examine how new-media advertising affects retail platform openness and found that the availability of relatively low-cost advertising through new media plays a critical role in influencing the leading retailer to open its platform and to form a partnership with the third-party seller, a partnership that is impossible when the cost of advertising is relatively high.
Abstract: We have recently witnessed two important trends in online retailing: The advent of new media (e.g., social media and search engines) has made advertising affordable for small sellers, and large online retailers (e.g., Amazon and JD.com) have opened their platforms to allow even direct competitors to sell on their platforms. We examine how new-media advertising affects retail platform openness. We develop a game-theoretic model in which a leading retailer, who has both valuation and awareness advantages, and a third-party seller, who sells an identical product, engage in price competition. We find that the availability of relatively low-cost advertising through new media plays a critical role in influencing the leading retailer to open its platform and to form a partnership with the third-party seller, which is impossible when the cost of advertising is relatively high. Low-cost advertising can increase consumer surplus either directly via the third-party seller’s advertising or indirectly via the partnership on the leading retailer’s platform. We also find that the leading retailer has a greater incentive to open its platform and that the partnership is more likely to be formed when there are network effects, when the leading retailer can control the third-party seller’s exposure on its platform, or when the leading retailer can offer a direct advertising service to the third-party seller. Meanwhile, the constraint on the third-party seller’s advertising budget can reduce the leading retailer’s incentive to open its platform, making a partnership less likely. Our analysis offers important insights into the underlying economic incentives that help explain the emerging open retail platform trend in the era of new-media advertising.

15 citations



Journal ArticleDOI
TL;DR: In this paper, the authors define peer privacy concern as the feeling of being unable to maintain functional personal boundaries in online activities as a result of the behavior of online peers and propose a multidimensional peer-related privacy concern construct that focuses on privacy violations from online peers.
Abstract: Privacy needs on today’s internet differ from the information privacy needs in traditional e-commerce settings due to their focus on interactions among online peers rather than merely transactions with an online vendor. Peer-oriented online interactions have critical implications for an individual’s virtual presence and self-cognition. Yet existing conceptualizations of internet privacy concerns have solely focused on the control of personal information release and on online interactions with online vendors. Drawing on the theory of personal boundaries, this study revisits the theoretical foundation of online privacy and proposes a multidimensional peer-related privacy concern construct that focuses on privacy violations from online peers. We term this new construct “Peer Privacy Concern” (PrPC) and define it as the general feeling of being unable to maintain functional personal boundaries in online activities as a result of the behavior of online peers. This construct consists of four dimensions: a reconceptualization of information privacy concerns that also reflects privacy concerns with respect to peers’ handling of self-shared information and with respect to peer-shared information about oneself, and three new dimensions that tap into the privacy needs arising from virtual interactions (i.e., virtual territory privacy concern and communication privacy concern) as well as from the need to maintain psychological independence (i.e., psychological privacy concern). These new dimensions, which are rooted in the theory of personal boundaries, are prominent privacy needs in online social interactions with peers. However, they are absent from previous privacy concern conceptualizations. Scales for measuring this new construct are developed and empirically validated.

13 citations



Journal ArticleDOI
TL;DR: In this paper, the authors identify the declining information-processing speed of older workers as the cause of their reduced capacity to perform IT-enabled tasks and suggest five ways that organizations can help older users improve their capacity.
Abstract: Evidence shows that older users have lower performance levels for IT-enabled tasks than younger users. This is alarming at a time when the workforce is rapidly aging and organizational technologies are proliferating. Since the explanation for these lower performance levels remains unclear, managers are not sure how to help older users realize their full potential as contributors to organizational success. The research model presented here identifies the declining information-processing speed of older workers as the cause of their reduced capacity to perform IT-enabled tasks. According to the model, IT experience and IT self-efficacy reduce the negative impacts of this decline, whereas IT overload and the effort cost of IT use aggravate them. To test the model, data were collected using three complementary studies. The results supported the model and indicated five ways that organizations can help older users improve their capacity to perform IT-enabled tasks. Additional data collected in interviews with human resources directors confirmed the relevance of these solutions.

10 citations



Journal ArticleDOI
TL;DR: The theory of effective use (TEU) is a recent and largely untested exception to the scarcity of theory on effective system use; in this article, the authors contextualize, extend, and test TEU in the business intelligence (BI) context.
Abstract: The benefits that organizations accrue from information systems depend on how effectively the systems are used. Yet despite the importance of knowing what it takes to use information systems effectively, little theory on the topic exists. One recent and largely untested exception is the theory of effective use (TEU). We report on a contextualization, extension, and test of TEU in the business intelligence (BI) context, a context of considerable importance in which researchers have called for such studies. We used a mixed methods, three-phase approach involving instrument development (n = 218), a two-wave cross-sectional survey (n = 437), and three sets of follow-up interviews (n = 33). The paper contributes by (1) showing how TEU can be contextualized, operationalized, and extended, (2) demonstrating that many of TEU’s predictions hold in the BI context while also revealing ways to improve the theory, and (3) offering practical insights that executives can draw on to improve the use of BI in their organizations.

9 citations


Journal ArticleDOI
TL;DR: In this article, mouse-movement traces were used to detect fraud during online transactions in real time, enabling organizations to confront fraud proactively as it is happening at internet scale.
Abstract: Trace data—users’ digital records when interacting with technology—can reveal their cognitive dynamics when making decisions on websites in real time. Here, we present a trace-data method, analyzing movements captured via a computer mouse, to assess potential fraud when filling out an online form. In contrast to existing fraud-detection methods, which analyze information after submission, mouse-movement traces can capture the cognitive deliberations as possible indicators of fraud as it is happening. We report two controlled studies using different tasks, where participants could freely commit fraud to benefit themselves financially. As they performed the tasks, we captured mouse-cursor movement data and found that participants who entered fraudulent responses moved their mouse significantly more slowly and with greater deviation. We show that the extent of fraud matters such that more extensive fraud increases movement deviation and decreases movement speed. These results demonstrate the efficacy of analyzing mouse-movement traces to detect fraud during online transactions in real time, enabling organizations to confront fraud proactively as it is happening at internet scale. Our method of analyzing actual user behaviors in real time can complement other behavioral methods in the context of fraud and a variety of other contexts and settings.
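The paper's exact operationalization of "speed" and "deviation" is not given in this listing; as an illustrative sketch only, movement speed and deviation from the direct path can be computed from raw cursor samples roughly as follows (function and variable names are invented):

```python
import math

def movement_metrics(points):
    """Compute average cursor speed and mean perpendicular deviation
    from the straight line between the first and last samples.

    points: list of (x, y, t) cursor samples, t in seconds.
    """
    (x0, y0, t0), (x1, y1, t1) = points[0], points[-1]
    # Total path length actually traversed by the cursor.
    path = sum(
        math.hypot(bx - ax, by - ay)
        for (ax, ay, _), (bx, by, _) in zip(points, points[1:])
    )
    speed = path / (t1 - t0) if t1 > t0 else 0.0
    # Mean perpendicular distance of each sample from the direct line.
    direct = math.hypot(x1 - x0, y1 - y0)
    if direct == 0:
        deviation = 0.0
    else:
        deviation = sum(
            abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / direct
            for (x, y, _) in points
        ) / len(points)
    return speed, deviation
```

On this reading of the result, fraudulent entries would show a lower `speed` and a higher `deviation` than truthful ones.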

8 citations


Journal ArticleDOI
TL;DR: This study leverages the conduit brokerage perspective and the findings of a multiple case study to develop a novel framework of algorithmic conduit brokerage for understanding information dissemination by bots and the design choices that may influence their actions.
Abstract: Despite increased empirical attention, theory on bots and how they act to disseminate information on social media remains poorly understood. Our study leverages the conduit brokerage perspective and the findings of a multiple case study to develop a novel framework of algorithmic conduit brokerage for understanding information dissemination by bots and the design choices that may influence their actions. Algorithmic conduit brokerage encompasses two intertwined processes. The first process, algorithmic social alertness, relies on bot activity to curate and reconfigure information. Algorithmic social alertness is significant because it involves action triggers that dictate the kinds of information being searched, discovered, and retrieved by bots. The second process, algorithmic social transmission, relies on bot activity to embellish and distribute the information curated. Algorithmic social transmission is important because it can broaden the reach of information disseminated by bots through increased discoverability and directed targeting. The two algorithmic conduit brokerage processes we offer are unique to bots and distinct from the original conceptualization of conduit brokerage, which is rooted in human activity. First, since bots lack the human ability of sensemaking and are instead fueled by automation and action triggers rather than by emotions, algorithmic conduit brokerage is more invariant and reliable than human conduit brokerage. Second, automation increases the speed and scale of information curation and transfer, making algorithmic conduit brokerage not only more consistent but also faster and more extensive. Third, algorithmic conduit brokerage includes a set of new concepts (e.g., action triggers and rapid scaling) that are specific to bots and therefore not applicable to human conduit brokerage.

8 citations


Journal ArticleDOI
TL;DR: In this article, the authors conducted a randomized field experiment and recruited 215 freelancers in a freemium, work-related SMN and found that freelancers who had access to advanced networking features increased their social capital by 4.609% for each unit increase on the strategic networking behavior scale.
Abstract: Work-related social media networks (SMNs) like LinkedIn introduce novel networking opportunities and features that promise to help individuals establish, extend, and maintain social capital (SC). Typically, work-related SMNs offer access to advanced networking features exclusively to premium users in order to encourage basic users to become paying members. Yet little is known about whether access to these advanced networking features has a causal impact on the accumulation of SC. To close this research gap, we conducted a randomized field experiment and recruited 215 freelancers in a freemium, work-related SMN. Of these recruited participants, more than 70 received a randomly assigned voucher for a free 12-month premium membership. We observe that individuals do not necessarily accumulate more SC from their ability to access advanced networking features, as the treated freelancers did not automatically change their online networking engagement. Those features only reveal their full utility if individuals are motivated to proactively engage in networking. We found that freelancers who had access to advanced networking features increased their SC by 4.609% for each unit increase on the strategic networking behavior scale. We confirmed this finding in another study utilizing a second, individual-level panel dataset covering 52,392 freelancers. We also investigated the dynamics that active vs. passive features play in SC accumulation. Based on these findings, we introduce the “theory of purposeful feature utilization”: essentially, individuals must not only possess an efficacious “networking weapon”—they also need the intent to “shoot” it.

Journal ArticleDOI
TL;DR: In this paper, the authors examine the mutual influence between participants' product scope and product innovation over time and probe the moderating role of co-created collaborative networks, concluding that participants with more frequent new product development are more likely to expand their product scope, whereas those with more frequent existing product updates are less likely to pursue scope expansion.
Abstract: This research highlights the circulative nature of digital platform ecosystem dynamics. Investigating these dynamics, we examine the mutual influence between participants’ product scope and product innovation over time and probe the moderating role of co-created collaborative networks. We distinguish between two types of product innovation: new product development and existing product updates. Our longitudinal analysis of the Hadoop software ecosystem indicates that participants covering a broader scope of the platform’s technological layers are less likely to develop new products but more likely to update existing products. In turn, participants with more frequent new product development are more likely to expand their product scope, whereas those with more frequent existing product updates are less likely to pursue scope expansion. Participants’ centrality in the ecosystem’s collaborative network amplifies the bidirectional link between product scope and existing product updates but weakens the link between product scope and new product development. Our findings offer a theoretical and practical understanding of temporal dynamics between participants’ product scope choices and different forms of product innovations in the co-created collaborative network environment.

Journal ArticleDOI
TL;DR: In this article, the authors show that when sellers respond to product ratings by adjusting their prices, compared to the single-dimensional rating scheme, the multidimensional rating scheme does not always benefit consumers, nor does it necessarily benefit sellers or society.
Abstract: Product review platforms in online marketplaces differ with respect to the granularity of product quality information they provide. While some platforms provide a single overall rating for product quality (also referred to as the single-dimensional rating scheme), others provide a separate rating for each individual quality attribute (also referred to as the multidimensional rating scheme). The multidimensional rating scheme is superior to the single-dimensional rating scheme, ceteris paribus, in reducing consumers’ uncertainty about product quality and value. However, we show that, when sellers respond to product ratings by adjusting their prices, compared to the single-dimensional rating scheme, the multidimensional rating scheme does not always benefit consumers, nor does it necessarily benefit sellers or society. The uncertainty associated with quality attribute rating and the extent of differentiation between competing products determines whether a finer-grained multidimensional rating scheme is superior to a coarser-grained single-dimensional rating scheme from the consumer, seller, and social planner perspectives. The main driver of the results is that more (less) granular and less (more) uncertain information exposes (hides) underlying differentiation, or a lack thereof, between competing products, which, in turn, alters upstream price competition in the presence of heterogeneous consumer preferences. The results demonstrate that focusing on the information transfer aspect of rating schemes provides only a partial understanding of the true impacts of rating schemes.
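The mechanism hinges on what each scheme reveals about differentiation. A toy numeric illustration (all values invented, not from the paper) of how an overall average can hide attribute-level differentiation that a multidimensional scheme exposes:

```python
# Two competing products rated on two quality attributes (1-5 scale).
# The numbers are invented purely for illustration.
ratings = {
    "product_A": {"durability": 5.0, "ease_of_use": 3.0},
    "product_B": {"durability": 3.0, "ease_of_use": 5.0},
}

def overall(scores):
    """Single-dimensional scheme: one averaged overall rating."""
    return sum(scores.values()) / len(scores)

# Under the single-dimensional scheme the products look identical...
assert overall(ratings["product_A"]) == overall(ratings["product_B"]) == 4.0
# ...while the multidimensional scheme reveals horizontal differentiation:
# consumers who weight durability prefer A; those who weight ease of use
# prefer B. Exposing (or hiding) this differentiation is what alters
# upstream price competition in the paper's model.
```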

Journal ArticleDOI
TL;DR: In this article, an adversarial attention-based deep multisource multitask learning (AADMML) framework was proposed to detect early-stage Parkinson's disease using wearable sensor-based information systems.
Abstract: Advancing the quality of healthcare for senior citizens with chronic conditions is of great social relevance. To better manage chronic conditions, objective, convenient, and inexpensive wearable sensor-based information systems (IS) have been increasingly used by researchers and practitioners. However, existing models often focus on a single aspect of chronic conditions and are often “black boxes” with limited interpretability. In this research, we adopt the computational design science paradigm and propose a novel adversarial attention-based deep multisource multitask learning (AADMML) framework. Drawing upon deep learning, multitask learning, multisource learning, attention mechanisms, and adversarial learning, AADMML addresses limitations of existing wearable sensor-based chronic condition severity assessment methods. Choosing Parkinson’s disease (PD) as our test case because of its prevalence and societal significance, we conduct benchmark experiments to evaluate AADMML against state-of-the-art models on a large-scale dataset containing thousands of instances. We present three case studies to demonstrate the practical utility and economic benefits of AADMML by applying it to detect early-stage PD. We discuss how our work relates to the IS knowledge base and its practical implications. This work can contribute to improved quality of life for senior citizens and advance IS research in mobile health analytics.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the effects of various response strategies and response times on the predominant stakeholders affected by data breaches (customers and investors) and found that the negative effects of data breaches disappear after six months.
Abstract: Companies may face serious adverse consequences as a result of a data breach event. To repair the potential damage to relationships with stakeholders after data breaches, companies adopt a variety of response strategies. However, the effects of these response strategies on the behavior of stakeholders after a data breach are unclear; differences in response times may also affect these outcomes, depending on the notification laws that apply to each company. As part of a multimethod study, we first identified the adopted response strategies in Study 1 based on content analysis of the response letters issued by publicly traded U.S. companies (n = 204) following data breaches; these strategies include any combination of the following: corrective action, apology, and compensation. We also found that breached companies may remain silent and adopt a “no action” strategy. In Studies 2 and 3, we examined the effects of various response strategies and response times on the predominant stakeholders affected by data breaches: customers and investors. In Study 2, we focused on customers and present a moderated-moderated-mediation model based on the expectancy violation theory. To test this model, we designed a factorial survey with 15 different conditions (n = 811). In Study 3, we focused on investors and conducted an event study (n = 166) to examine their reactions to company responses to data breaches. The results indicate the presence of moderating effects of certain response strategies; surprisingly, we did not find compensation to be more effective than apology. The magnitude of the moderating effects of response strategies is contingent upon response time. We also found that the negative effects of data breaches disappear after six months. We interpret the results and provide implications for research and practice.
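Study 3's event study measures investor reaction as the firm's return shortfall relative to what the market would predict. A minimal market-model sketch (hypothetical return numbers; in practice alpha and beta are estimated by OLS over a pre-event window):

```python
def abnormal_returns(firm, market, alpha, beta):
    """Market-model abnormal returns: AR_t = R_t - (alpha + beta * Rm_t).

    alpha and beta would normally be estimated over a pre-event
    estimation window; here they are passed in for simplicity.
    """
    return [r - (alpha + beta * rm) for r, rm in zip(firm, market)]

def car(ars):
    """Cumulative abnormal return over the event window."""
    return sum(ars)

# Hypothetical daily returns around a breach announcement.
firm_r = [-0.031, -0.012, 0.004]
mkt_r = [-0.005, 0.002, 0.001]
ars = abnormal_returns(firm_r, mkt_r, alpha=0.0, beta=1.0)
cumulative = car(ars)  # the firm's shortfall vs. the market benchmark
```

A significantly negative CAR in the event window would indicate that investors penalized the breached company beyond market-wide movements.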

Journal ArticleDOI
TL;DR: This study shows that the presence of social cues is more likely to enhance users’ social perceptions when users are primed to perceive the website as trustworthy, as opposed to untrustworthy (through the presentation of trust cues such as data protection disclaimers).
Abstract: Across different domains, websites are incorporating social media features, rendering themselves interactive and community-oriented. This study suggests that these “friendly” websites may indirectly encourage users to disclose private information. To investigate this possibility, we carried out online experiments utilizing a YouTube-like video-browsing platform. This platform provides a realistic and controlled environment in which to study users’ behaviors and perceptions during their first encounter with a website. We show that the presence of social cues on a website (e.g., an environment in which users “like” or rate website content) indirectly affects users’ likelihood of disclosing private information to that website (such as full name, address, and birthdate) by enhancing users’ “social perceptions” of the website (i.e., their perceptions that the website is a place where they can socialize with others). We further show that the presence of social cues is more likely to enhance users’ social perceptions when users are primed to perceive the website as trustworthy, as opposed to untrustworthy (through the presentation of trust cues such as data protection disclaimers). Moreover, we rule out users’ privacy concerns as an alternative mechanism influencing the relationship between social cues and information disclosure. We ground our observations in goal systems and trust theories. Our insights may be beneficial both for managers and for policy makers who seek to safeguard users’ privacy.

Journal ArticleDOI
TL;DR: In this article, the authors suggest that the effects of prior high-quality ideas on the generation of subsequent ideas depend on the alignment of crowd participants' subjective quality assessments of prior ideas and subsequent problem-related contributions made by the crowd.
Abstract: Findings on how prior high-quality ideas affect the quality of subsequent ideas in online ideation contests have been mixed. Some studies find that high-quality ideas lead to subsequent high-quality ideas, while others find the opposite. Based on computationally intensive exploratory research, utilizing theory on the blending of mental spaces, we suggest that the effects of prior ideas on the generation of subsequent ideas depend on the alignment of (1) crowd participants’ subjective quality assessments of prior ideas and (2) subsequent problem-related contributions made by the crowd. When a prior idea is assessed as high-quality, this motivates the crowd to emulate that idea. When this motivation is aligned with subsequent contributions that expand the mental space of the prior idea, a new high-quality idea can be created. In contrast, when a prior idea is assessed as low-quality, it motivates the crowd to redirect away from that idea. When this motivation is aligned with subsequent contributions that shift the mental space of the prior idea, a new high-quality idea can be created. The mixed findings in the literature can then be explained by a failure to consider non-idea information contributions made by the crowd.

Journal ArticleDOI
TL;DR: The authors investigated the shared emotional responses of Twitter users in the aftermath of a massive data breach, a crisis event known as the Office of Personnel Management (OPM) data breach of 2015.
Abstract: This paper investigates the shared emotional responses of Twitter users in the aftermath of a massive data breach, a crisis event known as the Office of Personnel Management (OPM) data breach of 2015. This breach impacted the lives of several million individuals due to the exposure of sensitive and personally identifying information. We take a data exploration approach to analyzing over 18,000 tweet messages of the ensuing discussion that took place after public notification that the breach had occurred. The resulting analysis reveals that although the emotions of anxiety, anger, and sadness may initially appear erratic, at an aggregate level, the public display of these emotions corresponds to the situational awareness of the breach event. Further, our analysis finds that this relationship extends to the sharing of emotions, indicating that those participating in the conversation congregate around a sense of shared emotional experience. Finally, an in-depth analysis of the ensuing dialogue identifies the most salient conversational drivers of these emotions, revealing breach concepts most significantly related to each emotion. Based on the results, we present propositions that draw from this analysis to inform emotional response characteristics that emerge over the duration of such crisis events. The results of this study can inform organizational practices and policy making in the context of response to crisis events such as data breaches.
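The study's specific emotion-measurement pipeline is not detailed in this listing; as an illustrative sketch, per-tweet anxiety, anger, and sadness scores can be derived with a lexicon approach (the tiny word lists below are invented; real studies use validated dictionaries such as LIWC's emotion categories):

```python
# Hand-made emotion lexicons for illustration only.
LEXICON = {
    "anxiety": {"worried", "afraid", "risk", "exposed"},
    "anger": {"furious", "outrage", "blame", "incompetent"},
    "sadness": {"sad", "loss", "hurt", "grieving"},
}

def emotion_scores(tweet):
    """Share of tokens matching each emotion category in one message."""
    tokens = tweet.lower().split()
    return {
        emotion: sum(t.strip(".,!?") in words for t in tokens) / len(tokens)
        for emotion, words in LEXICON.items()
    }

scores = emotion_scores("Furious that my data was exposed, worried about the risk!")
# Aggregating such scores over time yields the emotion trajectories the
# paper relates to situational awareness of the breach.
```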

Journal ArticleDOI
TL;DR: CLHAD automatically leverages the knowledge learned from English content to detect hacker assets in non-English dark web platforms and encompasses a novel adversarial deep representation learning (ADREL) method, which generates multilingual text representations using generative adversarial networks (GANs).
Abstract: International dark web platforms operating within multiple geopolitical regions and languages host a myriad of hacker assets such as malware, hacking tools, hacking tutorials, and malicious source code. Cybersecurity analytics organizations employ machine learning models trained on human-labeled data to automatically detect these assets and bolster their situational awareness. However, the lack of human-labeled training data is prohibitive when analyzing foreign-language dark web content. In this research note, we adopt the computational design science paradigm to develop a novel IT artifact for cross-lingual hacker asset detection (CLHAD). CLHAD automatically leverages the knowledge learned from English content to detect hacker assets in non-English dark web platforms. CLHAD encompasses a novel adversarial deep representation learning (ADREL) method, which generates multilingual text representations using generative adversarial networks (GANs). Drawing upon the state of the art in cross-lingual knowledge transfer, ADREL is a novel approach to automatically extract transferable text representations and facilitate the analysis of multilingual content. We evaluate CLHAD on Russian, French, and Italian dark web platforms, demonstrate its practical utility in hacker asset profiling, and conduct a proof-of-concept case study. Our analysis suggests that cybersecurity managers may benefit more from focusing on Russian to identify sophisticated hacking assets. In contrast, financial hacker assets are scattered among several dominant dark web languages. Managerial insights for security managers are discussed at operational and strategic levels.
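The adversarial criterion behind methods like ADREL is that a language discriminator should not be able to tell which language a learned representation came from. This is not the ADREL architecture (which uses GANs over deep text representations); as a toy proxy only, a nearest-centroid "discriminator" over invented 2-d vectors shows the intuition:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def discriminator_accuracy(english, foreign):
    """Nearest-centroid language classifier. High accuracy means the two
    languages occupy separable regions of the representation space;
    chance-level (~0.5) accuracy means the space is language-invariant,
    which is what adversarial training pushes toward."""
    ce, cf = centroid(english), centroid(foreign)
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    correct = sum(dist(v, ce) < dist(v, cf) for v in english)
    correct += sum(dist(v, cf) <= dist(v, ce) for v in foreign)
    return correct / (len(english) + len(foreign))

# Separated embedding spaces: the discriminator wins easily (accuracy 1.0).
separated = discriminator_accuracy([[0, 0], [0, 1]], [[5, 5], [5, 6]])
# Aligned (adversarially trained) spaces: accuracy falls to chance (0.5).
aligned = discriminator_accuracy([[0, 0], [1, 1]], [[0, 1], [1, 0]])
```

In the GAN setting, the representation generator is trained to drive exactly this discriminator accuracy toward chance, so that a hacker-asset classifier trained on English transfers to other languages.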

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the influential role of top reviews and their valence in mitigating information overload and found that the influence of those top reviews diminishes when they too pose an overload risk but is strengthened when their signal is reaffirmed by signals from all other reviews.
Abstract: By empowering customers to make fitting purchases, user reviews play an important role in reducing inefficiencies in the provisioning of product information. Because of the abundance of reviews and the signals they provide, this information may become confusing and risks overloading customers. Consequently, review hosting platforms have adjusted their designs to feature a signal “distilled” from a selective set of “top reviews” and their valences. The expected ease with which customers process this signal is intended to increase their satisfaction, thus reducing dispersion in their subsequent review ratings. In this study, we analyze the influential role that top reviews and their valence play under various scenarios: when customers are overloaded by a large number of reviews, when top reviews themselves are not parsimonious in number, and when the signals from top reviews are not in concordance with that from all the other reviews. We find that the valence of top reviews plays a central role in mitigating information overload. However, the influence of those top reviews diminishes when they too pose an overload risk but is strengthened when their signal is reaffirmed by signals from all other reviews. Finally, the impact of top reviews is weaker for less popular products.

Journal ArticleDOI
TL;DR: In this paper, the authors trace how six independent digital ventures in the German financial services industry navigated the tension between their generative digital vision and here-and-now constraints as they created their digital market offerings, and suggest that digital ventures enact three designing mechanisms to resolve the tension: bounding the technology scope, transposing through digital objects, and probing the solution space.
Abstract: Digital ventures must navigate a key tension as they design new digital market offerings—that is, products or services that are embodied in digital technologies or enabled by them. On the one hand, digital ventures pursue a vision that builds on what might be possible through the generative potential that digital technology offers; on the other hand, they face an environment in the here and now, with existing customer preferences, extant regulations, and legacy technology. Taking a designing view, we trace how six independent digital ventures in the German financial services industry dealt with this tension as they created their digital market offerings. Our findings suggest that digital ventures enact three designing mechanisms to resolve the tension: bounding the technology scope, transposing through digital objects, and probing the solution space. Through these mechanisms, digital ventures construct a buffer—one that has functional, material, and temporal dimensions—between the vision they gradually realize through their market offering and the here-and-now conditions of the environment that digital ventures enter.

Journal ArticleDOI
TL;DR: This paper discusses and tests three alternative models underlying user comparison of competing systems: separate, crossover effect, and relative comparison processes, and shows that the relative comparison process is the most parsimonious and the best model in terms of explaining the mechanisms underlying the comparison of system use by individuals.
Abstract: Although individual adoption and use of a single system has been examined extensively, little is known about how people evaluate and compare competing systems. In this paper, we discuss and test three alternative models underlying user comparison of competing systems: separate, crossover effect, and relative comparison processes. The separate comparison process proposes that users develop separate cognitive, affective, and conative evaluations toward each system, and the between-system comparison only occurs at the point of choosing a preferred system. The crossover effect comparison process posits that users not only perform separate evaluations for each system, but also consider the competitive effects when proceeding across cognitive, affective, and conative evaluation stages. In contrast, the relative comparison process postulates that users directly compare competing systems within each of the cognitive, affective, and conative evaluation stages. Based on the IS continuance model, we tested each of these three models using data collected from users of two competing instant messaging systems. Our results showed that the relative comparison process is the most parsimonious and the best model in terms of explaining the mechanisms underlying the comparison of system use by individuals. Theoretical and practical implications are discussed.

Journal ArticleDOI
TL;DR: In this article, the authors conducted a longitudinal field study investigating Target’s data breach in 2013 that affected more than 110 million customers, and examined customers’ expectations toward compensation immediately after the breach was confirmed and their experiences after reparations were made.
Abstract: Data breaches are a major threat to organizations from both financial and customer relations perspectives. We developed a nomological network linking post-breach compensation strategies to key outcomes, namely continued shopping intentions, positive word-of-mouth, and online complaining, with the effects being mediated by customers’ justice perceptions. We conducted a longitudinal field study investigating Target’s data breach in 2013 that affected more than 110 million customers. We examined customers’ expectations toward compensation immediately after the breach was confirmed (survey 1) and their experiences after reparations were made (survey 2). Evidence from polynomial regression and response surface analyses of data collected from 388 affected customers showed that customers’ justice perceptions were influenced by the actual compensation provided as well as the type and extent of compensation an organization could and should have provided (i.e., customers’ compensation expectations). Interestingly, both positive and negative expectation disconfirmation led to less favorable justice perceptions compared to when expectations were met. Justice perceptions were, in turn, associated with key outcomes. We discuss implications for research on data security, information systems, and justice theory.
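The polynomial regression with response surface analysis mentioned above can be sketched generically: fit an outcome as a quadratic surface in two predictors and inspect its shape. This is a minimal least-squares illustration, not the authors' model; the variable roles (x = expected compensation, y = actual compensation, z = a justice-perception score) and all data are hypothetical.

```python
# Illustrative quadratic (response-surface) regression:
#   z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2,
# fitted by ordinary least squares via the normal equations.
# x, y, z are hypothetical stand-ins for expected compensation,
# actual compensation, and a justice-perception score.

def design_row(x, y):
    """Quadratic basis for one observation."""
    return [1.0, x, y, x * x, x * y, y * y]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys, zs):
    """OLS estimates of the six surface coefficients b0..b5."""
    X = [design_row(x, y) for x, y in zip(xs, ys)]
    p = 6
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    Xtz = [sum(row[i] * z for row, z in zip(X, zs)) for i in range(p)]
    return solve(XtX, Xtz)
```

In response surface analysis, the fitted coefficients are then examined along the congruence line x = y and the incongruence line x = -y, which is how one tests whether outcomes peak when expectations are exactly met.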

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the relationship between IT newness and the long-term abnormal returns to firms emphasizing a revenue enhancement (cost reduction) IT strategy and found that the purchasing of a firm’s stock by its senior executives before a major IT investment is associated with the investment’s long-term effect on firm value.
Abstract: Performance impacts of investments in information technologies (ITs) are difficult to evaluate. External investors are further constrained by their lack of visibility into the firm’s intangible, complementary actions and capabilities, creating an information asymmetry between them and the firm’s executives. Building on signaling theory and the research on senior executives’ trades in a firm’s stock, this paper addresses the following question: How are the stock trades by a firm’s senior executives before a major IT investment by the firm associated with the future value to the firm from that IT investment? The results based on data on 2,898 publicly announced IT investments from 926 firms during 2002–2016 suggest that (1) the purchasing of a firm’s stock by its senior executives before a firm’s IT investment is associated with the investment’s long-term effect on firm value; (2) such stock purchases by a firm’s senior executives are associated with a stronger positive (negative) relationship between the IT’s newness and the long-term abnormal returns to firms emphasizing a revenue enhancement (cost reduction) IT strategy; (3) for firms pursuing a hybrid strategy, purchases by CIOs but not purchases by CEOs or the newness of IT are associated with firm value; and (4) purchases made by CIOs provide greater information about the IT investment’s impact on firm value than purchases made by CEOs. We further improve our predictive model’s accuracy from 75% for a model including the fit between IT newness and IT strategy to 80% and 91% when considering purchases by CEOs or CIOs, respectively, and 92% when considering purchases by both executives.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the effects of prosocial crowdfunding on traditional microfinance institutions and find that MFIs’ sustainability improves and interest rates decrease after joining a crowdfunding platform, with the changes stemming mainly from efficiency improvements rather than an increased supply of low-cost funds.
Abstract: Online crowdfunding holds the promise of empowering entrepreneurs and small businesses as an innovative alternative financing channel. However, doubts have been expressed as to whether online crowdfunding can deliver its promise because of the lack of empirical evidence regarding its effects. In this study, we investigate the effects that prosocial crowdfunding has on traditional microfinance institutions (MFIs). Combining multiple data sources, including data from Kiva.org and the Microfinance Information Exchange Market (MIX Market), we examine how access to crowdfunding influences MFIs’ sustainability and interest rates. We find that after joining Kiva, MFIs’ sustainability improves and interest rates decrease. Further investigation suggests that the changes mainly result from efficiency improvement, rather than increased supply of low-cost funds. We propose that joining an online crowdfunding platform induces greater transparency and crowd monitoring, which motivates and empowers MFIs to improve operations and become more efficient.

Journal ArticleDOI
TL;DR: In this paper, the authors propose combining human knowledge and intelligence with machine intelligence to tackle the false news crisis via a novel framework called CAND, which first extracts relevant human and machine judgments from data sources including news features and scalable crowd intelligence, and then aggregates the extracted information with an unsupervised Bayesian aggregation model.
Abstract: The explosive spread of false news on social media has severely affected many areas such as news ecosystems, politics, economics, and public trust, especially amid the COVID-19 infodemic. Machine intelligence has met with limited success in detecting and curbing false news. Human knowledge and intelligence hold great potential to complement machine-based methods. Yet they are largely underexplored in current false news detection research, especially in terms of how to efficiently utilize such information. We observe that the crowd contributes to the challenging task of assessing the veracity of news by posting responses or reporting. We propose combining these two types of scalable crowd judgments with machine intelligence to tackle the false news crisis. Specifically, we design a novel framework called CAND, which first extracts relevant human and machine judgments from data sources including news features and scalable crowd intelligence. The extracted information is then aggregated by an unsupervised Bayesian aggregation model. Evaluation based on Weibo and Twitter datasets demonstrates the effectiveness of crowd intelligence and the superior performance of the proposed framework in comparison with the benchmark methods. The results also generate many valuable insights, such as the complementary value of human and machine intelligence, the possibility of using human intelligence for early detection, and the robustness of our approach to intentional manipulation. This research significantly contributes to relevant literature on false news detection and crowd intelligence. In practice, our proposed framework serves as a feasible and effective approach for false news detection.
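The aggregation step can be illustrated in a deliberately simplified form. The sketch below pools independent binary judgments (e.g., machine classifiers, crowd responses, user reports) by naive-Bayes log-odds updating; it assumes known per-source reliabilities and is not the unsupervised CAND model itself, which learns such parameters from the data.

```python
import math

def aggregate(prior_false, judgments):
    """Combine independent binary judgments about whether a news item is false.

    prior_false: prior probability that the item is false.
    judgments:   list of (says_false, reliability) pairs, where reliability is
                 the assumed probability that the source judges correctly.
    Returns the posterior probability that the item is false.
    Simplified naive-Bayes pooling; NOT the CAND model from the paper.
    """
    log_odds = math.log(prior_false / (1.0 - prior_false))
    for says_false, reliability in judgments:
        if says_false:
            # A "false" vote multiplies the odds by its likelihood ratio.
            log_odds += math.log(reliability / (1.0 - reliability))
        else:
            # A "true" vote pushes the odds the other way.
            log_odds += math.log((1.0 - reliability) / reliability)
    return 1.0 / (1.0 + math.exp(-log_odds))
```

Two agreeing sources with 80% reliability push a 50% prior to roughly 94%, while two disagreeing sources of equal reliability cancel out, which is the intuition behind weighting crowd and machine signals by how trustworthy each source is.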

Journal ArticleDOI
TL;DR: In this paper, the authors provide guidelines to help assess whether an existing construct warrants updating and to structure the updating task if it is undertaken, and illustrate their guidelines using computer self-efficacy (CSE) as a case study.
Abstract: In this paper, we confront a paradox in the IS literature that even though our field focuses on the rapid pace of technological change and the dramatic scale of technology-enabled organizational and societal changes, we sometimes find ourselves studying these changes using—largely without question—constructs that were developed in a vastly different IT, user, and organizational environment. We provide guidelines to help assess whether an existing construct warrants updating and to structure the updating task if it is undertaken. Our three-step process provides for a theoretically grounded and comprehensive method that ensures we balance the need for construct updating against the need to sustain our cumulative tradition. We illustrate our guidelines using computer self-efficacy (CSE) as a case study. We document each of the steps involved in analyzing, reconceptualizing, and testing the revised construct information technology self-efficacy (ITSE). Our analyses show that the new construct better explains both traditional and contemporary constructs with a traditional (postal survey) and contemporary (online panel) sample. We discuss the implications of our work both for research on self-efficacy and more broadly for future updating of other important constructs.

Journal ArticleDOI
TL;DR: In this paper, a randomized field experiment was conducted to examine the use of a herding cue as an implementation intervention to hasten adoption behaviors, and the results showed that a herding cue directly impacts the time it takes an individual to adopt a technology, amplifies the effects of peer behaviors, has no impact on the effect of subjective norm (a form of normative social influence), and dampens the effects of an individual’s private beliefs about the usefulness of a technology.
Abstract: A herding cue is a lean information signal that an individual receives about the aggregate number of others who have engaged in a behavior that may result in herd behavior. Given the ease with which they can be leveraged as implementation interventions or design features on online sites, herding cues hold the promise to provide a means to influence adoption behaviors. Yet, little attention has been devoted in the IS adoption literature to understanding the effects of herding cues. Given that herding cues are just one of several forms of social influence on adoption behaviors and are relatively lean in nature, understanding their viability as an implementation intervention necessitates understanding their effects in the presence of (1) other forms of social influence, which also serve to reduce uncertainty and signal the appropriateness of technology adoption, and (2) an individual’s own beliefs about adopting. In this vein, we conducted a randomized field experiment to examine the use of a herding cue as an implementation intervention to hasten adoption behaviors. The research model was evaluated using survival analysis by combining the data from the field experiment with two waves of surveys, and archival logs of adoption. Our results show that a herding cue (1) directly impacts the time it takes an individual to adopt a technology, (2) amplifies the effects of peer behaviors (another type of informative social influence), but has no impact on the effect of subjective norm (a form of normative social influence), and (3) dampens the effects of an individual’s private beliefs about the usefulness of a technology. Our paper disentangles herding information signals to define a herding cue as distinct from other herd behavior triggers, explores how it may interact with other forms of social influences and private beliefs to influence adoption behaviors, and, on a practical level, provides evidence of how a herding cue can be a tangible intervention to accelerate technology adoption.
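As a generic illustration of the survival-analysis setup (time-to-adoption, with some individuals censored before adopting), the sketch below compares constant-hazard (exponential) adoption rates between a cue group and a no-cue group. Real survival models such as Cox regression relax the constant-hazard assumption; the function names and all numbers here are hypothetical, not from the study.

```python
def exponential_hazard(times, adopted):
    """MLE of a constant adoption hazard: events divided by total exposure time.

    times:   observed time to adoption, or to censoring, for each person
    adopted: whether each person actually adopted (False = censored)
    """
    events = sum(adopted)          # number of observed adoptions
    exposure = sum(times)          # total person-time at risk
    return events / exposure

def hazard_ratio(treat_times, treat_adopted, ctrl_times, ctrl_adopted):
    """Ratio > 1 means the treatment (e.g., herding-cue) group adopts faster."""
    return (exponential_hazard(treat_times, treat_adopted)
            / exponential_hazard(ctrl_times, ctrl_adopted))
```

For example, a cue group adopting after 1, 2, 1, and 4 weeks has hazard 4/8 = 0.5 adoptions per person-week, against 4/20 = 0.2 for a control group adopting after 4, 4, 8, and 4 weeks, i.e., a hazard ratio of 2.5.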

Journal ArticleDOI
TL;DR: In this article, the authors unpack the diversity-coherence paradox by recasting coherence as the relatedness of innovation frames and spotlighting the role of discursive fields that circumscribe meaning.
Abstract: Innovation breakthroughs prompt sensemaking discourses that promote community learning and socially construct the innovation. Through this discourse, interested actors advance diverse frames, appealing to consumers with disparate preferences but raising concerns for the coherence of that discourse. We unpack this diversity-coherence paradox by recasting coherence as the relatedness of innovation frames and spotlighting the role of discursive fields that circumscribe meaning. Our empirical context is the first six years of blockchain discourse across seven discursive fields. Our research offers three insights in furtherance of an ecological perspective on innovation discourse. First, framing diversity emanates from discursive fields rather than from actors. Second, fields play differentiated roles in the framing process. Enactment fields comprised of actors with direct experience with the technology limit diversity. They do so by erecting walls that circumscribe discourse through imprinting on their original frame and retracting from or abandoning frames learned from other fields. In contrast, mediated fields, in which actors lack direct experience with the technology, enhance diversity. They do so by imitating or learning from other fields and foreshadowing or anticipating the frames used by other fields, thereby building bridges. Third, rather than opposing each other, diversity and coherence coevolve as the diversity induced by mediated fields increases framing redundancies, synthesizing frames into a coherent community understanding of the innovation. Our research signals to the actors who serve as innovation ambassadors and gatekeepers that diverse views of an innovation are not only inevitable, given the many discourse fields in which those views are formulated, but can also be coherent and desirable.