
Showing papers in "Management Information Systems Quarterly" in 2021


Journal ArticleDOI
TL;DR: It is argued that a new generation of “agentic” IS artifacts requires revisiting the human agency primacy assumption, and an IS delegation theoretical framework is developed, which provides a scaffolding which can guide future IS delegation theorizing and focuses on the human-agentic IS artifact dyad as the elemental unit of analysis.
Abstract: Information systems (IS) use, the dominant theoretical paradigm for explaining how users apply IS artifacts toward goal attainment, gives primacy to human agency in the user-IS artifact relationship. Models and theorizing in the IS use research stream tend to treat the IS artifact as a passive tool, lacking the ability to initiate action and accept rights and responsibilities for achieving optimal outcomes under uncertainty. We argue that a new generation of “agentic” IS artifacts requires revisiting the human agency primacy assumption. Agentic IS artifacts are no longer passive tools waiting to be used, are no longer always subordinate to the human agent, and can now assume responsibility for tasks with ambiguous requirements and for seeking optimal outcomes under uncertainty. To move our theorizing forward, we introduce delegation, based on agent interaction theories, as a foundational and powerful lens through which to understand and explain the human-agentic IS artifact relationship. While delegation has always been central to human-IS artifact interactions, it has yet to be explicitly recognized in IS use theorizing. We explicitly theorize IS delegation by developing an IS delegation theoretical framework. This framework provides a scaffolding that can guide future IS delegation theorizing and focuses on the human-agentic IS artifact dyad as the elemental unit of analysis. The framework specifically reveals the importance of agent attributes relevant to delegation (endowments, preferences, and roles) as well as foundational mechanisms of delegation (appraisal, distribution, and coordination). Guidelines are proposed to demonstrate how this theoretical framework can be applied toward the generation of testable models. We conclude by outlining a roadmap for mobilizing future research.

81 citations


Journal ArticleDOI
TL;DR: By synthesizing ecological and information perspectives, this information ecology theory identifies several key functions that digital technologies serve in providing the information needed to support the interactions and tasks for innovation in ecosystems of varying scales.
Abstract: The remarkable connectivity and embeddedness of digital technologies enable innovations undertaken by a broad set of actors, often beyond organizational and industry boundaries, whose relationships mimic those of interdependent species in a natural ecosystem. These digital innovation ecosystems, if successful, can spawn countless innovations of substantial social and economic value, but are complex and prone to often surprising failure. Aiming to understand ecosystems as a new organizational form for digital innovations, I develop a theory that addresses an underexplored but important question: In a digital innovation ecosystem, how are the efforts of autonomous parties integrated into a coherent whole and what role do digital technologies play in this integration? By synthesizing ecological and information perspectives, this information ecology theory identifies several key functions that digital technologies serve in providing the information needed to support the interactions and tasks for innovation in ecosystems of varying scales. This theory contributes to digital innovation research new insights on managing part-whole relations, the role of digital technologies in innovation, and multilevel interactions in and across digital innovation ecosystems. The theory can also inspire the development of next-generation information systems for ecosystems as a new organizational form.

71 citations


Journal ArticleDOI
TL;DR: A new theoretical framework of conceptual modeling is developed that delivers a fundamental shift in the assumptions that govern research in this area and can make traditional knowledge about conceptual modeling consistent with the emerging requirements of a digital world.
Abstract: The role of information systems (IS) as representations of real-world systems is changing in an increasingly digitalized world, suggesting that conceptual modeling is losing its relevance to the IS field. We argue the opposite: Conceptual modeling research is more relevant to the IS field than ever, but it requires an update with current theory. We develop a new theoretical framework of conceptual modeling that delivers a fundamental shift in the assumptions that govern research in this area. This move can make traditional knowledge about conceptual modeling consistent with the emerging requirements of a digital world. Our framework draws attention to the role of conceptual modeling scripts as mediators between physical and digital realities. We identify new research questions about grammars, methods, scripts, agents, and contexts that are situated in intertwined physical and digital realities. We discuss several implications for conceptual modeling scholarship that relate to the necessity of developing new methods and grammars for conceptual modeling, broadening the methodological array of conceptual modeling scholarship, and considering new dependent variables.

43 citations


Journal ArticleDOI
TL;DR: The result of the analysis is the CARE (Claims, Affronts, Response, Equilibrium) theory of dignity amid personal data digitalization, a theory that explains the relationship of personal data digitalization to human dignity.
Abstract: With the rapidly evolving permeation of digital technologies into everyday human life, we are witnessing an era of personal data digitalization. Personal data digitalization refers to the sociotechnical encounters associated with the digitization of personal data for use in digital technologies. Personal data digitalization is being applied to central attributes of human life (health, cognition, and emotion) with the purported aim of helping individuals live longer, healthier lives endowed with the requisite cognition and emotion for responding to life situations and other people in a manner that enables human flourishing. A concern taking hold in manifold fields ranging from IT, bioethics, and law, to philosophy and religion is that as personal data digitalization permeates ever more areas of human existence, humans risk becoming artifacts of technology production. This concern brings to center stage the very notion of what it means to be human, a notion encapsulated in the term human dignity, which broadly refers to the recognition that human beings possess intrinsic value and, as such, are endowed with certain rights and should be treated with respect. In this paper, we identify, describe, and transform what we know about personal data digitalization into a higher-order theoretical structure around the concept of human dignity. The result of our analysis is the CARE (Claims, Affronts, Response, Equilibrium) theory of dignity amid personal data digitalization, a theory that explains the relationship of personal data digitalization to human dignity. Building upon the CARE theory as a foundation, researchers in a variety of IS research streams could develop mid-range theories for empirical testing or could use the CARE theory as an overarching lens for interpreting emerging IS phenomena. Practitioners and government agencies can also use the CARE theory to understand the opportunities and risks of personal data digitalization and to develop policies and systems that respect the dignity of employees and citizens.

42 citations


Journal ArticleDOI
TL;DR: In this article, a two-year ethnographic study focused on how developers managed this tension when building an ML system to support the process of hiring job candidates at a large international organization, finding that developers and experts arrived at a new hybrid practice that relied on a combination of ML and domain expertise.
Abstract: The introduction of machine learning (ML) in organizations comes with the claim that algorithms will produce insights superior to those of experts by discovering the “truth” from data. Such a claim gives rise to a tension between the need to produce knowledge independent of domain experts and the need to remain relevant to the domain the system serves. This two-year ethnographic study focuses on how developers managed this tension when building an ML system to support the process of hiring job candidates at a large international organization. Despite the initial goal of getting domain experts “out of the loop,” we found that developers and experts arrived at a new hybrid practice that relied on a combination of ML and domain expertise. We explain this outcome as resulting from a process of mutual learning in which deep engagement with the technology triggered actors to reflect on how they produced knowledge. These reflections prompted the developers to iterate between excluding domain expertise from the ML system and including it. Contrary to common views that imply an opposition between ML and domain expertise, our study foregrounds their interdependence and as such shows the dialectic nature of developing ML. We discuss the theoretical implications of these findings for the literature on information technologies and knowledge work, information system development and implementation, and human–ML hybrids.

36 citations


Journal ArticleDOI
TL;DR: In this article, a field study of a major U.S. hospital shows how managers evaluated five different machine-learning (ML) based AI tools and found that none of them met expectations.
Abstract: Organizational decision-makers need to evaluate AI tools in light of increasing claims that such tools outperform human experts. Yet, measuring the quality of knowledge work is challenging, raising the question of how to evaluate AI performance in such contexts. We investigate this question through a field study of a major U.S. hospital, observing how managers evaluated five different machine-learning (ML) based AI tools. Each tool reported high performance according to standard AI accuracy measures, which were based on ground truth labels provided by qualified experts. Trying these tools out in practice, however, revealed that none of them met expectations. Searching for explanations, managers began confronting the high uncertainty of experts’ know-what knowledge captured in the ground truth labels used to train and validate ML models. In practice, experts address this uncertainty by drawing on rich know-how practices, which were not incorporated into these ML-based tools. Discovering the disconnect between AI’s know-what and experts’ know-how enabled managers to better understand the risks and benefits of each tool. This study shows the dangers of treating ground truth labels used in ML models as objective when the underlying knowledge is uncertain. We outline the implications of our study for developing, training, and evaluating AI for knowledge work.

35 citations


Journal ArticleDOI
TL;DR: This paper turns to the work of social anthropologist Tim Ingold to advance a theoretical vocabulary of flowing lines of action and their correspondences and expound three modalities of correspondence, namely: timing, attentionality, and undergoing, which together explain the dynamics of creation, sensing, and actualization of (trans)formative possibilities for action along socio-technological flows.
Abstract: Ongoing digital innovations are transforming almost every aspect of our contemporary societies—rendering our lives and work evermore fluid and dynamic. This paper is an invitation to likewise remake our theorizing of socio-technological transformation by shifting from actor-centric orientations towards a flow-oriented approach and vocabulary. Such a shift from actors to the flows of action allows us to offer an innovative theory of socio-technological transformation that does not rely on self-contained actors or technologies as originators of transformation. Instead, it allows us to foreground how contingent confluences among heterogeneous flows of action can account for the trajectories of socio-technological (trans)formation, both upstream and downstream. To do this, we turn to the work of social anthropologist Tim Ingold to advance a theoretical vocabulary of flowing lines of action and their correspondences. We expound three modalities of correspondence, namely timing, attentionality, and undergoing, which together explain the dynamics of creation, sensing, and actualization of (trans)formative possibilities for action along socio-technological flows. We demonstrate the application and utility of this vocabulary through an empirical illustration and show how it reveals novel insights for IS research vis-à-vis existing theoretical alternatives. Finally, we outline the implications of our approach for IS research and suggest some guiding principles for studying and theorizing IS phenomena through this orientation. We invite the IS community to engage with our approach to develop novel ways of understanding and theorizing IS phenomena in our increasingly fluid, dynamic, and ever-overflowing digital world.

33 citations


Journal ArticleDOI
TL;DR: This work proposes that a cross-stream CPS effect—the interaction of CPS with customers (CPS-C) and CPS with suppliers (CPS-S)—can enable firms to reinvigorate their internal knowledge for innovation by engaging customers and suppliers in filtering and interpreting market-facing information.
Abstract: A firm’s use of boundary-spanning information systems (BSIS) can be beneficial for innovation by providing access to market-facing information. At the same time, BSIS use can give rise to information overload, making it difficult for firms to leverage the most pertinent information for innovation. Although there has been progress in developing our understanding of the role of IS in innovation, it is unclear what capabilities firms need to develop to facilitate innovation in the presence of information overload from BSIS use (IO-BSIS). We maintain that firms today are increasingly experiencing IO-BSIS and therefore a thorough investigation of firm-level capabilities to facilitate innovation while coping with IO-BSIS is needed. To address this key gap, we broaden the theory of problemistic search for innovation by proposing a digitally enabled collaborative problemistic search (CPS) capability. We propose that a cross-stream CPS effect—interaction of CPS with customers (CPS-C) and CPS with suppliers (CPS-S)—enables a firm to reinvigorate its internal knowledge for innovation by engaging customers and suppliers in filtering and interpreting market-facing information. Further, we theorize that the presence or absence of IO-BSIS is a contingency that affects whether the cross-stream CPS effect is likely to be beneficial or detrimental to innovation. Based on the analysis of data collected from 227 firms, we found that the cross-stream CPS effect is beneficial for innovation when firms face IO-BSIS and detrimental to innovation when firms do not experience IO-BSIS. We thus open the black box of the digitally enabled innovation activity by shedding light on specific collaborative activities that advance innovation by enabling firms to cope with information overload.

31 citations
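
The moderation pattern reported above (a cross-stream CPS effect that is beneficial under IO-BSIS and detrimental without it) is the kind of result typically probed with an interaction model. A minimal sketch in Python, assuming hypothetical variable names (innovation, cps_c, cps_s, io_bsis) and a simplified measurement model rather than the authors' actual survey analysis:

```python
# Sketch: testing whether the cross-stream CPS effect (CPS-C x CPS-S) helps or
# hurts innovation depending on IO-BSIS. Variable and file names are
# hypothetical; the paper's survey-scale measurement model is simplified away.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_survey.csv")  # hypothetical data for 227 firms

# Three-way interaction: the cross-stream effect conditional on overload.
m = smf.ols("innovation ~ cps_c * cps_s * io_bsis", data=df).fit()

# Simple-slope style check: effect of the CPS-C x CPS-S product term when
# IO-BSIS is absent (0) vs. present (1).
b = m.params
print("cross-stream effect, IO-BSIS absent :", b["cps_c:cps_s"])
print("cross-stream effect, IO-BSIS present:",
      b["cps_c:cps_s"] + b["cps_c:cps_s:io_bsis"])
```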


Journal ArticleDOI
TL;DR: Two sets of design principles are developed for an emancipatory assistant that protects the user from informania’s oppression by engaging in an adversarial relationship with its oppressive ML platforms when necessary; these principles should encourage IS researchers to enlarge the range of possibilities for responding to the influx of ML systems.
Abstract: Widespread use of machine learning (ML) systems could result in an oppressive future of ubiquitous monitoring and behavior control that, for dialogic purposes, we call “informania.” This dystopian future results from ML systems’ inherent design: they are built from training data rather than with explicit code. To avoid this oppressive future, we develop the concept of an emancipatory assistant (EA), an ML system that engages with human users to help them understand and enact emancipatory outcomes amidst the oppressive environment of informania. Using emancipatory pedagogy as a kernel theory, we develop two sets of design principles: one for the near future and the other for the far-term future. Designers optimize the EA on emancipatory outcomes for an individual user, which protects the user from informania’s oppression by engaging in an adversarial relationship with its oppressive ML platforms when necessary. The principles should encourage IS researchers to enlarge the range of possibilities for responding to the influx of ML systems. Given the fusion of social and technical expertise that IS research embodies, we encourage other IS researchers to theorize boldly about the long-term consequences of emerging technologies on society and potentially change their trajectory.

31 citations


Journal ArticleDOI
TL;DR: This study integrates prior findings on the augmenting pathways with a new theory explaining the suppressing pathways to propose an overall inverted U-shaped relationship between IT investment and commercialized innovation performance (CIP).
Abstract: A firm’s investment in information technology (IT) has been widely considered as a key enabler of innovation. In this study, we intend to integrate prior findings for augmenting pathways (whereby IT investment supports innovation) with a new theory for suppressing pathways (whereby dynamic adjustment costs associated with IT investment can be detrimental to innovation) to propose an overall inverted U-shaped relationship between IT investment and commercialized innovation performance (CIP). To test our theory, we analyzed a unique panel dataset from the largest economy in Europe and discovered a curvilinear relationship between IT investment and CIP for firms across a broad spectrum of industries. Our research presents empirical evidence corroborating the augmenting and suppressing pathways linking IT investment and CIP. Our findings can serve as a cautionary signal to executives, discouraging overinvestment in IT.

23 citations
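
An inverted U-shape of this kind is conventionally tested by adding a quadratic term to a panel regression. A minimal sketch, assuming hypothetical column names (firm, year, it_invest, cip) rather than the paper's actual dataset or specification:

```python
# Sketch: testing an inverted-U relationship between IT investment and
# commercialized innovation performance (CIP) with a quadratic panel regression.
# Column names (firm, year, cip, it_invest) are hypothetical, not from the paper.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_panel.csv")  # hypothetical panel dataset
df["it_sq"] = df["it_invest"] ** 2

# Firm and year fixed effects via dummies; an inverted U requires a positive
# linear term and a negative quadratic term.
model = smf.ols("cip ~ it_invest + it_sq + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]}
)
print(model.params[["it_invest", "it_sq"]])

# Turning point of the curve: where marginal returns to IT investment flip sign.
turning_point = -model.params["it_invest"] / (2 * model.params["it_sq"])
print(f"Estimated turning point: {turning_point:.2f}")
```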



Journal ArticleDOI
TL;DR: In this article, a 2D interaction kernel for convolutional neural networks is proposed to leverage interactions between human and object motion sensors to recognize ADLs on different levels more accurately.
Abstract: Ensuring the health and safety of independent-living senior citizens is a growing societal concern. Activities of Daily Living (ADLs) are a common approach to monitoring these citizens’ self-care ability and disease progression. However, prevailing sensor-based ADL monitoring systems primarily rely on wearable motion sensors, capture insufficient information for accurate ADL recognition, and do not provide a comprehensive understanding of ADLs at different granularities. Current healthcare IS and mobile analytics research focuses on studying the system, device, and provided services, and lacks an end-to-end solution for comprehensively recognizing ADLs from mobile sensor data. This study adopts the design science paradigm and employs advanced deep learning algorithms to develop a novel hierarchical, multi-phase ADL recognition framework to model ADLs at different granularities. A novel 2D Interaction Kernel for convolutional neural networks is proposed to leverage interactions between human and object motion sensors. We rigorously evaluate each proposed module and the entire framework against state-of-the-art benchmarks (e.g., Support Vector Machines, DeepConvLSTM, Hidden Markov Models, and topic-modeling-based ADLR) on two real-life motion sensor datasets that consist of ADLs at varying granularities: Opportunity and INTER. Results and a case study demonstrate that our framework can recognize ADLs at different levels more accurately. We discuss how stakeholders can further benefit from our proposed framework. Beyond demonstrating the practical utility, we discuss contributions to the IS knowledge base for future design-science-based cybersecurity, healthcare, and mobile analytics applications.
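
The abstract does not spell out the 2D Interaction Kernel's design, but the core idea (convolving jointly across human and object sensor channels so that learned filters capture cross-sensor interactions) can be roughly illustrated. A toy PyTorch sketch under assumed tensor shapes and layer sizes, not the authors' actual architecture:

```python
# Illustrative sketch only: one way a 2D convolution can capture interactions
# between human and object motion sensor streams. Shapes and layer sizes are
# assumptions; the paper's actual 2D Interaction Kernel may differ.
import torch
import torch.nn as nn

class InteractionConv(nn.Module):  # hypothetical module name
    def __init__(self, n_sensors: int = 8, n_classes: int = 5):
        super().__init__()
        # Input: (batch, 1, n_sensors, time). A kernel spanning all sensor rows
        # at once mixes human and object channels, unlike per-sensor 1D convs.
        self.conv = nn.Conv2d(1, 16, kernel_size=(n_sensors, 9), padding=(0, 4))
        self.head = nn.Sequential(
            nn.ReLU(), nn.AdaptiveAvgPool2d((1, 1)),
            nn.Flatten(), nn.Linear(16, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(x))

# Stack human (rows 0-3) and object (rows 4-7) sensor signals into one "image".
batch = torch.randn(2, 1, 8, 128)  # (batch, channel, sensors, time steps)
logits = InteractionConv()(batch)
print(logits.shape)  # torch.Size([2, 5])
```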

Journal ArticleDOI
TL;DR: Empirical evidence is found that consumers are less price elastic toward movies placed in salient slots, raising the question of whether recommender systems embed mechanisms that extract excessive surplus from consumers, which may call for better scrutiny.
Abstract: Recommender systems have been introduced to help consumers navigate large sets of alternatives. They usually lead to more sales, which may increase consumer surplus and firm profit. In this paper, we ask whether firms may hurt consumers when they choose which recommender systems to use. We use data from a large-scale field experiment run on the video-on-demand system of a large telecommunications provider to measure the price elasticity of demand for movies placed in salient and non-salient slots on the TV screen. During this experiment, the firm randomized the slots in which movies were recommended to consumers as well as their prices. This setting readily allows for identifying the effects of price and slot on demand and thus computing consumer surplus. We find empirical evidence that consumers are less price elastic toward movies placed in salient slots. Using the outcomes of this experiment, we simulate how consumer surplus and welfare change when the firm implements several recommender systems, including one that maximizes profit. We show that this system hurts both consumer surplus and welfare relative to systems designed to maximize them. We also show that, at least in our setting, the profit-maximizing system does not generate less consumer surplus than some recommender systems often used in practice, such as content-based recommendations and lists of the most sold, most rated, and highest rated products. Yet how much extra rent the firm can extract from strategically placing movies in salient slots is still a function of the popularity and quality of the movies used to do so. Ultimately, our results question whether recommender systems embed mechanisms that extract excessive surplus from consumers, which may call for better scrutiny.
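
Because both prices and slots were randomized, the elasticity comparison reduces to a regression with a price-by-salience interaction. A minimal sketch, assuming hypothetical variable names (qty, price, salient_slot) and a simple log-log demand form, not the paper's actual model:

```python
# Sketch: estimating how slot salience moderates price elasticity using the
# experiment's randomization. Variable and file names (qty, price, salient_slot)
# are hypothetical stand-ins for the paper's data.
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf

df = pd.read_csv("vod_experiment.csv")  # hypothetical experimental data
df["log_qty"] = np.log1p(df["qty"])
df["log_price"] = np.log(df["price"])

# Because prices and slots were randomized, OLS identifies the elasticities.
# The interaction tests whether demand is less price elastic in salient slots.
model = smf.ols("log_qty ~ log_price * salient_slot", data=df).fit()
print(model.params)
# A positive coefficient on log_price:salient_slot (against a negative base
# log_price effect) indicates reduced price sensitivity in salient placements.
```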


Journal ArticleDOI
TL;DR: The authors found that negative reviews expressing anger tend to increase the negative influence of the review on reader attitudes and decisions; they extended current understanding of the interpersonal effects of emotion in online communication and suggested implications for review platforms, retailers, marketers, and manufacturers faced with the task of managing consumer reviews.
Abstract: A common assumption in prior research and practice is that more helpful online reviews will exert a greater impact on consumer attitudes and purchase decisions. We suggest that this assumption may not hold for reviews expressing anger. Building on Emotions as Social Information (EASI) theory, we propose that although expressions of anger in a negative review tend to decrease reader perceptions of review helpfulness, the same expressions tend to increase the negative influence of the review on reader attitudes and decisions. Results from a series of laboratory experiments provide support for our claims. Our findings challenge the widely accepted assumption that more “helpful” reviews are ultimately more persuasive, and they extend current understanding of the interpersonal effects of emotion in online communication. Our findings also suggest implications for review platforms, retailers, marketers, and manufacturers faced with the task of managing consumer reviews.

Journal ArticleDOI
TL;DR: The Information Systems research community has a complex relationship with theory and theorizing, and ideas about what theory is, who theorizes, where theory comes from, when we theorize, how theory is developed and changes, and why theory is (or isn’t) important shape the projects the authors do.
Abstract: The Information Systems research community has a complex relationship with theory and theorizing. As a community of scholars, our assumptions about theory and theorizing affect every aspect of our intellectual lives. Ideas about what theory is, who theorizes, where theory comes from, when we theorize, how theory is developed and changes, and why theory is (or isn’t) important shape the projects we do, the partnerships we have, the resources available to us, and the phenomena that we find to be significant, interesting, and novel.


Journal ArticleDOI
TL;DR: This study investigates whether consumers exhibit time habits for online shopping and whether following such time habits affects their satisfaction and revisit behavior, and employs activity-based metrics to assess individual shopping time habits.
Abstract: Little research has focused on online shopping habits, particularly concerning time, missing the opportunity to potentially improve important outcomes by the simple innovative use of time. Based on a unique dataset that includes reviews as well as pertinent purchases at the individual level from a large online retailer, this study investigates whether consumers exhibit time habits for online shopping and whether following such time habits affects their satisfaction and revisit behavior. We employ activity-based metrics to assess individual shopping time habits, with the results showing that consumers form shopping time habits, and that they obtain higher satisfaction and exhibit greater revisit behavior when the timing of shopping follows their shopping time habits. While prior works have documented that consumers exhibit time habits for physical shopping, driven mostly by time and location constraints, this study is the first, to our knowledge, to examine online shopping time habits and, most importantly, their effects on consumer satisfaction and revisit behavior. With the availability of detailed individual transaction data in online shopping and the advance of technology in providing personalized services that enable companies to act upon knowledge of individual behaviors, this research provides important practical implications for system and website design, marketing strategy, and customer relationship management.
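
The abstract does not define its activity-based metrics, but one plausible operationalization is a concentration measure over when a consumer shops. A toy sketch under that assumption (entropy of the hour-of-day distribution; the paper's actual metrics may differ):

```python
# Sketch of one plausible activity-based time-habit metric: how concentrated a
# consumer's purchases are in their habitual hours. The paper's exact metric is
# not given in the abstract; this is an illustrative assumption.
import pandas as pd
import numpy as np

tx = pd.read_csv("transactions.csv", parse_dates=["timestamp"])  # hypothetical
tx["hour"] = tx["timestamp"].dt.hour

def habit_strength(hours: pd.Series) -> float:
    """1 minus normalized entropy of the hour-of-day distribution:
    1.0 = all purchases at the same hour (strong habit), 0.0 = uniform."""
    p = hours.value_counts(normalize=True)
    entropy = -(p * np.log(p)).sum()
    return 1.0 - entropy / np.log(24)

habit = tx.groupby("customer_id")["hour"].apply(habit_strength)
print(habit.describe())
```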

Journal Article
TL;DR: In this article, the authors used a simulation to build new theory about complexity and phase change in processes that are supported by digital technologies, and used the simulation to generate a set of theoretical propositions about the effects of digitization that will be testable in empirical research.
Abstract: This paper uses a simulation to build new theory about complexity and phase change in processes that are supported by digital technologies. We know that digitized processes can drift (change incrementally over time). We simulate this phenomenon by incrementally adding and removing edges from a network that represents the process. The simulation demonstrates that incremental change can lead to a state of self-organized criticality. As the process approaches this state, further incremental change can precipitate nonlinear bursts in process complexity and significant changes in process structure. Digital technology can be designed and used to influence the likelihood and severity of these transformative phase changes. For example, the simulation predicts that systems with adaptive programming are prone to phase changes, while systems with deterministic programming are not. We use the simulation to generate a set of theoretical propositions about the effects of digitization that will be testable in empirical research.
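
The drift mechanism described above (incrementally adding and removing edges while tracking process complexity) is straightforward to prototype. A toy sketch using networkx, with an assumed complexity proxy (count of simple paths between a start and end task); the paper's actual measures and update rules may differ:

```python
# Sketch of the drift simulation described above: randomly add or remove edges
# in a process graph and track a complexity proxy. The complexity measure and
# update rules here are assumptions, not the paper's specification.
import random
import networkx as nx

random.seed(42)
n = 12
G = nx.gnp_random_graph(n, 0.2, seed=42, directed=True)

def complexity(g: nx.DiGraph) -> int:
    # Proxy: number of distinct simple paths from task 0 to task n-1.
    return sum(1 for _ in nx.all_simple_paths(g, 0, n - 1))

history = []
for step in range(200):
    u, v = random.sample(range(n), 2)
    if G.has_edge(u, v):
        G.remove_edge(u, v)   # incremental drift: drop one process link...
    else:
        G.add_edge(u, v)      # ...or add one
    history.append(complexity(G))

# Bursts appear as large jumps between consecutive complexity values.
jumps = [abs(b - a) for a, b in zip(history, history[1:])]
print("max jump:", max(jumps), "mean jump:", sum(jumps) / len(jumps))
```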

Journal Article
TL;DR: In this article, the authors define a prescriptive analytics framework that addresses the needs of a constrained decision-maker facing, ex-ante, unknown costs and benefits of multiple policy levers.
Abstract: We define a prescriptive analytics framework that addresses the needs of a constrained decision-maker facing, ex-ante, unknown costs and benefits of multiple policy levers. The framework is general in nature and can be deployed in any utility maximizing context, public or private. It relies on randomized field experiments for causal inference, machine learning for estimating heterogeneous treatment effects, and the optimization of an integer linear program for converting predictions into decisions. The net result is the discovery of individual-level targeting of policy interventions to maximize overall utility under a budget constraint. The framework is set in the context of the four pillars of analytics and is especially valuable for companies that already have an existing practice of running A/B tests. The key contribution in this work is to develop and operationalize a framework to exploit both within- and between-treatment arm heterogeneity in the utility response function, in order to derive benefits from future (optimized) prescriptions. We demonstrate the value of this framework as compared to benchmark practices (i.e., the use of the average treatment effect, uplift modeling, as well as an extension to contextual bandits) in two different settings. Unlike these standard approaches, our framework is able to recognize, adapt to, and exploit the (potential) presence of different subpopulations that experience varying costs and benefits within a treatment arm, while also exhibiting differential costs and benefits across treatment arms. As a result, we find a targeting strategy that produces an order of magnitude improvement in expected total utility, for the case where significant within- and between-treatment arm heterogeneity exists.
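
The framework's final step (converting predicted per-individual costs and benefits into decisions under a budget) maps onto a standard integer linear program. A minimal sketch with toy numbers and the PuLP library as an assumed solver choice; in the framework, the utility and cost estimates would come from ML models fit on experimental data:

```python
# Sketch of the final step: assigning individuals to policy levers under a
# budget via an integer linear program. Utilities and costs are toy numbers;
# in the framework they would be ML estimates from randomized experiments.
import pulp

n_users, n_arms, budget = 4, 2, 50.0
utility = [[5.0, 9.0], [3.0, 4.0], [8.0, 8.5], [1.0, 6.0]]  # est. benefit per arm
cost = [[10.0, 30.0]] * n_users                             # est. cost per arm

prob = pulp.LpProblem("targeting", pulp.LpMaximize)
x = [[pulp.LpVariable(f"x_{i}_{k}", cat="Binary") for k in range(n_arms)]
     for i in range(n_users)]

# Objective: total expected utility of the chosen assignments.
prob += pulp.lpSum(utility[i][k] * x[i][k]
                   for i in range(n_users) for k in range(n_arms))
for i in range(n_users):
    prob += pulp.lpSum(x[i][k] for k in range(n_arms)) <= 1  # one arm per user
prob += pulp.lpSum(cost[i][k] * x[i][k]
                   for i in range(n_users) for k in range(n_arms)) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in range(n_users):
    for k in range(n_arms):
        if x[i][k].value() > 0.5:
            print(f"user {i} -> arm {k}")
```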