
Showing papers in "Information Systems Research in 2014"


Journal ArticleDOI
TL;DR: A theory of communication visibility, based on a field study of the implementation of a new enterprise social networking site in a large financial services organization, suggests that once previously invisible communication between others in the organization becomes visible to third parties, those third parties can improve their metaknowledge (i.e., knowledge of who knows what and who knows whom).
Abstract: This paper offers a theory of communication visibility based on a field study of the implementation of a new enterprise social networking site in a large financial services organization. The emerging theory suggests that once invisible communication occurring between others in the organization becomes visible to third parties, those third parties can improve their metaknowledge (i.e., knowledge of who knows what and who knows whom). Communication visibility, in this case made possible by the enterprise social networking site, leads to enhanced awareness of who knows what and whom through two interrelated mechanisms: message transparency and network translucence. Seeing the contents of others' messages helps third-party observers make inferences about coworkers' knowledge. Similarly, seeing the structure of coworkers' communication networks helps third-party observers make inferences about those with whom coworkers regularly communicate. The emerging theory further suggests that enhanced metaknowledge can lead to more innovative products and services and less knowledge duplication if employees learn to work in new ways. By learning vicariously rather than through experience, workers can more effectively recombine existing ideas into new ideas and avoid duplicating work. Moreover, they can begin to proactively aggregate information perceived daily rather than engaging in reactive search after confronting a problem. I discuss the important implications of this emerging theory of communication visibility for work in the knowledge economy.

530 citations


Journal ArticleDOI
TL;DR: This work addresses key questions related to the explosion of interest in the emerging fields of big data, analytics, and data science, and the strengths that the information systems (IS) community brings to this discourse.
Abstract: We address key questions related to the explosion of interest in the emerging fields of big data, analytics, and data science. We discuss the novelty of the fields and whether the underlying questions are fundamentally different, the strengths that the information systems (IS) community brings to this discourse, interesting research questions for IS scholars, the role of predictive and explanatory modeling, and how research in this emerging area should be evaluated for contribution and significance.

524 citations


Journal ArticleDOI
TL;DR: The results show that the decomposed TAM provides a better understanding of the contexts by revealing the direct and interaction effects of context-specific factors on behavioral intention that are not mediated by the TAM constructs of perceived usefulness and perceived ease of use.
Abstract: This paper discusses the value of context in theory development in information systems (IS) research. We examine how prior research has incorporated context in theorizing and develop a framework to classify existing approaches to contextualization. In addition, we expound on a decomposition approach to contextualization and put forth a set of guidelines for developing context-specific models. We illustrate the application of the guidelines by constructing and comparing various context-specific variations of the technology acceptance model (TAM): the decomposed TAM that incorporates interaction effects between context-specific factors, the extended TAM with context-specific antecedents, and the integrated TAM that incorporates mediated moderation and moderated mediation effects of context-specific factors. We tested the models on 972 individuals in two technology usage contexts: a digital library and an agile Web portal. The results show that the decomposed TAM provides a better understanding of the contexts by revealing the direct and interaction effects of context-specific factors on behavioral intention that are not mediated by the TAM constructs of perceived usefulness and perceived ease of use. This work contributes to the ongoing discussion about the importance of context in theory development and provides guidance for context-specific theorizing in IS research.

410 citations


Journal ArticleDOI
TL;DR: It is found that as users become more popular, they produce more reviews and more objective reviews; however, their numeric ratings also systematically change and become more negative and more varied.
Abstract: Online product reviews are increasingly important for consumer decisions, yet we still know little about how reviews are generated in the first place. In an effort to gather more reviews, many websites encourage user interactions such as allowing one user to subscribe to another. Do these interactions actually facilitate the generation of product reviews? More importantly, what kind of reviews do such interactions induce? We study these questions using data from one of the largest product review websites where users can subscribe to one another. By applying both panel data and a flexible matching method, we find that as users become more popular, they produce more reviews and more objective reviews; however, their numeric ratings also systematically change and become more negative and more varied. Such a trade-off has not been previously documented and has important implications for both product review and other user-generated content websites.

299 citations


Journal ArticleDOI
TL;DR: It is found that patients benefit from learning from others and that their participation in the online community helps them to improve their health and to better engage in their disease self-management process.
Abstract: In this paper, we investigate whether social support exchanged in an online healthcare community benefits patients' mental health. We propose a nonhomogeneous partially observed Markov decision process (POMDP) model to examine the latent health outcomes for online health community members. The transition between different health states is modeled as a probability function that incorporates different forms of social support that patients exchange via discussion board posts. We find that patients benefit from learning from others and that their participation in the online community helps them to improve their health and to better engage in their disease self-management process. Our results also reveal differences in the influence of various forms of social support exchanged on the evolution of patients' health conditions. We find evidence that informational support is the most prevalent type in the online healthcare community. Nevertheless, emotional support plays the most significant role in helping patients move to a healthier state. Overall, the influence of social support is found to vary depending on patients' health conditions. Finally, we demonstrate that our proposed POMDP model can provide accurate predictions for patients' health states and can be used to recover missing or unavailable information on patients' health conditions.
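
As a rough illustration of the modeling idea, the hypothetical Python sketch below shows a transition-probability function over latent health states driven by counts of informational and emotional support posts; the state labels, covariates, softmax link, and coefficients are illustrative assumptions, not the authors' specification.

```python
# Hypothetical sketch of a nonhomogeneous hidden-state transition model of the
# kind described in the abstract: transition probabilities between latent
# health states depend on the social support a patient exchanged that period.
# State labels, covariates, and coefficients are illustrative assumptions.
import numpy as np

STATES = ["worse", "stable", "better"]

def transition_probs(informational_posts, emotional_posts, coefs):
    """Return a probability vector over next-period health states.

    coefs: array of shape (len(STATES), 3) with an intercept and one weight
    per support type for each destination state (softmax link assumed).
    """
    x = np.array([1.0, informational_posts, emotional_posts])
    scores = coefs @ x                      # one score per destination state
    exp_scores = np.exp(scores - scores.max())
    return exp_scores / exp_scores.sum()

# Example: emotional support weighted more heavily toward the "better" state,
# loosely mirroring the abstract's finding (the numbers are made up).
coefs = np.array([
    [0.2, -0.05, -0.10],   # worse
    [0.5,  0.02,  0.03],   # stable
    [0.1,  0.04,  0.15],   # better
])
probs = transition_probs(informational_posts=3, emotional_posts=2, coefs=coefs)
print(dict(zip(STATES, probs.round(3))))
```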

288 citations


Journal ArticleDOI
TL;DR: It is shown that if manufacturers do not respond strategically to reviews, reviews never hurt the retailer and the manufacturer with favorable reviews and never benefit the manufacturer with unfavorable reviews, a finding that demonstrates why reviews' effect on upstream competition is critical for firms in online marketplaces.
Abstract: This paper studies the effect of online product reviews on different players in a channel structure. We consider a retailer selling two substitutable products produced by different manufacturers, and the products differ in both their qualities and fits to consumers' needs. Online product reviews provide additional information for consumers to mitigate the uncertainty about the quality of a product and about its fit to consumers' needs. We show that the effect of reviews on the upstream competition between the manufacturers is critical in understanding which firms gain and which firms lose. The upstream competition is affected in fundamentally different ways by quality information and fit information, and each information type has different implications for the retailer and manufacturers. Quality information homogenizes consumers' perceived utility differences between the two products and increases the upstream competition, which benefits the retailer but hurts the manufacturers. Fit information heterogenizes consumers' estimated fits to the products and softens the upstream competition, which hurts the retailer but benefits the manufacturers. Furthermore, reviews may also alter the nature of upstream competition from one in which consumers' own assessment of the quality dimension plays a dominant role in consumers' comparative evaluation of products to one in which the fit dimension plays a dominant role. If manufacturers do not respond strategically to reviews and keep the same wholesale prices regardless of reviews (i.e., the upstream competition is assumed to be unaffected by reviews), then we show that reviews never hurt the retailer and the manufacturer with favorable reviews, and never benefit the manufacturer with unfavorable reviews, a finding that demonstrates why reviews' effect on upstream competition is critical for firms in online marketplaces.

281 citations


Journal ArticleDOI
TL;DR: The results distinguish product fit uncertainty and product quality uncertainty as two distinct dimensions of product uncertainty and, interestingly, show that, relative to product quality uncertainty, product fit uncertainty has a significantly stronger effect on product returns.
Abstract: Product fit uncertainty, defined as the degree to which a consumer cannot assess whether a product's attributes match her preferences, is proposed to be a major impediment to online markets with costly product returns and lack of consumer satisfaction. We conceptualize the nature of product fit uncertainty as an information problem and theorize its distinct effect on product returns and consumer satisfaction versus product quality uncertainty, particularly for experience (versus search) goods without product familiarity. To reduce product fit uncertainty, we propose two Internet-enabled systems, website media (visualization systems) and online product forums (collaborative shopping systems), that are hypothesized to attenuate the effect of product type (experience versus search goods) on product fit uncertainty. Hypotheses that link experience goods to product returns through the mediating role of product fit uncertainty are tested with analyses of a unique data set composed of secondary data matched with primary data collected directly from numerous consumers who had recently participated in buy-it-now auctions. The results show the distinction between product fit uncertainty and product quality uncertainty as two distinct dimensions of product uncertainty and, interestingly, show that, relative to product quality uncertainty, product fit uncertainty has a significantly stronger effect on product returns. Notably, whereas product quality uncertainty is mainly driven by the experience attributes of a product, product fit uncertainty is driven by both experience attributes and lack of product familiarity. The results also suggest that Internet-enabled systems are differentially used to reduce product fit and quality uncertainty. Notably, the use of online product forums is shown to moderate the effect of experience goods on product fit uncertainty, and website media are shown to attenuate the effect of experience goods on product quality uncertainty. The results are robust to econometric specifications and estimation methods. The paper concludes by stressing the importance of reducing the increasingly prevalent information problem of product fit uncertainty in online markets with the aid of Internet-enabled systems.

245 citations


Journal ArticleDOI
TL;DR: This work constructed and tested a framework that demonstrates what engagement is, where it comes from, and how it powerfully explains both knowledge contribution and word of mouth, and found signs that the contributions of the most knowledgeable users are not purely from engagement, but also from a competing sense of self-efficacy.
Abstract: Online communities are new social structures dependent on modern information technology, and they face equally modern challenges. Although satisfied members regularly consume content, it is considerably harder to coax them to contribute new content and help recruit others because they face unprecedented social comparison and criticism. We propose that engagement, a concept only abstractly alluded to in information systems research, is the key to active participation in these unique sociotechnical environments. We constructed and tested a framework that demonstrates what engagement is, where it comes from, and how it powerfully explains both knowledge contribution and word of mouth. Our results show that members primarily contribute to and revisit an online community from a sense of engagement. Nonetheless, word of mouth is partly influenced by prior satisfaction. Therefore, engagement and satisfaction appear to be parallel mediating forces at work in online communities. Both mediators arise from a sense of communal identity and knowledge self-efficacy, but engagement also emerges from validation of self-identity. Nevertheless, we also found signs that the contributions of the most knowledgeable users are not purely from engagement, but also from a competing sense of self-efficacy. Our findings significantly contribute to the area of information systems by highlighting that engagement is a concrete phenomenon on its own, and it can be directly modeled and must be carefully managed.

206 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the trade-off between investing in high platform performance versus reducing investment in order to facilitate third-party content development, and provide insights on the optimum investment in platform performance and demonstrate how conventional wisdom about product development may be misleading in the presence of strong cross-network externalities.
Abstract: Managers of emerging platforms must decide what level of platform performance to invest in at each product development cycle in markets that exhibit two-sided network externalities. High performance is a selling point for consumers, but in many cases it requires developers to make large investments to participate. Abstracting from an example drawn from the video game industry, we build a strategic model to investigate the trade-off between investing in high platform performance versus reducing investment in order to facilitate third-party content development. We carry out a full analysis of three distinct settings: monopoly, price-setting duopoly, and price-taking duopoly. We provide insights on the optimum investment in platform performance and demonstrate how conventional wisdom about product development may be misleading in the presence of strong cross-network externalities. In particular, we show that, contrary to the conventional wisdom about “winner-take-all” markets, heavily investing in the core performance of a platform does not always yield a competitive edge. We characterize the conditions under which offering a platform with lower performance but greater availability of content can be a winning strategy.

140 citations


Journal ArticleDOI
TL;DR: Analyzing a detailed transactional data set with more than 1,800,000 bids corresponding to 270,000 projects posted between 2001 and 2010 in a leading online intermediary for software development services, it is concluded that participants in this market are very responsive to the numerical reputation score and also to the unstructured reputational information, which behaves in a similar way to the structured numerical reputation score.
Abstract: Online service marketplaces allow service buyers to post their project requests and service providers to bid for them. To reduce the transactional risks, marketplaces typically track and publish previous seller performance. By analyzing a detailed transactional data set with more than 1,800,000 bids corresponding to 270,000 projects posted between 2001 and 2010 in a leading online intermediary for software development services, we empirically study the effects of the reputation system on market outcomes. We consider both a structured measure summarized in a numerical reputation score and an unstructured measure based on the verbal praise left by previous buyers, which we encode using text mining techniques. We find that buyers trade off reputation (both structured and unstructured) and price and are willing to accept higher bids posted by more reputable bidders. Sellers also respond to changes in their own reputation through three different channels. They increase their bids with their reputation score (price effect) but primarily use a superior reputation to increase their probability of being selected (volume effect), as opposed to increasing their bid prices. Negative shocks in seller reputation are associated with an increase in the probability of seller exit (exit effect), but this effect is moderated by the investment that the seller has made in the site. We conclude that participants in this market are very responsive to the numerical reputation score and also to the unstructured reputational information, which behaves in a similar way to the structured numerical reputation score but provides complementary information.
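
The hypothetical sketch below illustrates the general approach of combining a structured reputation score with a text-mined measure of verbal praise when modeling bid selection; the keyword list, features, toy data, and logistic model are assumptions for illustration only, not the paper's econometric specification.

```python
# Hypothetical sketch: encode unstructured buyer feedback into a numeric
# "praise" feature via simple text mining, then relate structured and
# unstructured reputation (and bid price) to whether a bid was selected.
# The keyword list, features, and data are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

feedback = [
    "excellent work, delivered early and great communication",
    "good job overall",
    "poor quality, missed the deadline",
    "outstanding developer, will hire again",
]
praise_terms = ["excellent", "great", "outstanding", "good"]

# Unstructured reputation: count of praise terms per feedback text.
vec = CountVectorizer(vocabulary=praise_terms)
praise_score = vec.fit_transform(feedback).sum(axis=1).A1

# Structured reputation score, bid price, and outcome (all made up).
rep_score = np.array([9.8, 8.5, 6.0, 9.9])
bid_price = np.array([500, 450, 300, 550])
selected = np.array([1, 1, 0, 1])          # whether the bid won the project

X = np.column_stack([rep_score, praise_score, bid_price])
model = LogisticRegression().fit(X, selected)
print(dict(zip(["rep_score", "praise_score", "bid_price"],
               model.coef_[0].round(3))))
```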

138 citations


Journal ArticleDOI
TL;DR: This paper proposes an analytical lens for studying social status production processes across a wide variety of user-generated content (UGC) platforms and introduces the notion of an online field and associated sociological concepts to help explain how diverse types of producers and consumers of content jointly generate unique power relations online.
Abstract: In this paper, we propose an analytical lens for studying social status production processes across a wide variety of user-generated content (UGC) platforms. Various streams of research, including those focused on social network analysis in social media, online communities, reputation systems, blogs, and multiplayer games, have discussed social status production online in ways that are diverse and incompatible. Drawing on Bourdieu's theory of fields of cultural production, we introduce the notion of an online field and associated sociological concepts to help explain how diverse types of producers and consumers of content jointly generate unique power relations online. We elaborate on what role external resources and status markers may play in shaping social dynamics in online fields. Using this unifying theory, we are able to integrate previous research findings and propose an explanation of the social processes behind both the similarity across UGC platforms, which all offer multiple ways of pursuing distinction through content production, and the differences across such platforms in terms of which distinctions matter. We elaborate on what role platform design choices play in shaping which forms of distinction count and how they are pursued, as well as the implications these have for status-gaining strategies. We conclude the paper by suggesting how our theory can be used in future qualitative and quantitative research studies.

Journal ArticleDOI
TL;DR: Results indicated a disparity in levels of danger presented by different influence techniques used in phishing attacks, which clarifies significant vulnerabilities and lays the foundation for individuals and organizations to combat phishing through awareness and training efforts.
Abstract: Phishing is a major threat to individuals and organizations. Along with billions of dollars lost annually, phishing attacks have led to significant data breaches, loss of corporate secrets, and espionage. Despite the significant threat, potential phishing targets have little theoretical or practical guidance on which phishing tactics are most dangerous and require heightened caution. The current study extends persuasion and motivation theory to postulate why certain influence techniques are especially dangerous when used in phishing attacks. We evaluated our hypotheses using a large field experiment that involved sending phishing messages to more than 2,600 participants. Results indicated a disparity in levels of danger presented by different influence techniques used in phishing attacks. Specifically, participants were less vulnerable to phishing influence techniques that relied on fictitious prior shared experience and were more vulnerable to techniques offering a high level of self-determination. By extending persuasion and motivation theory to explain the relative efficacy of phishers' influence techniques, this work clarifies significant vulnerabilities and lays the foundation for individuals and organizations to combat phishing through awareness and training efforts.

Journal ArticleDOI
TL;DR: Crowd information quality (crowd IQ) is defined, implications of class-based modeling approaches for crowd IQ are empirically examined, and a path for improving crowd IQ using instance-and-attribute-based modeling is offered.
Abstract: User-generated content (UGC) is becoming a valuable organizational resource, as it is seen in many cases as a way to make more information available for analysis. To make effective use of UGC, it is necessary to understand information quality (IQ) in this setting. Traditional IQ research focuses on corporate data and views users as data consumers. However, as users with varying levels of expertise contribute information in an open setting, current conceptualizations of IQ break down. In particular, the practice of modeling information requirements in terms of fixed classes, such as an Entity-Relationship diagram or relational database tables, unnecessarily restricts the IQ of user-generated data sets. This paper defines crowd information quality (crowd IQ), empirically examines implications of class-based modeling approaches for crowd IQ, and offers a path for improving crowd IQ using instance-and-attribute-based modeling. To evaluate the impact of modeling decisions on IQ, we conducted three experiments. Results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level. In addition, we found greater overall accuracy when participants could provide free-form data compared to a condition in which they selected from constrained choices. We further demonstrate that, relative to attribute-based data collection, information loss occurs when class-based models are used. Our findings have significant implications for information quality, information modeling, and UGC research and practice.
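
The short sketch below illustrates the modeling contrast the paper examines, using an invented bird-sighting example: a fixed, class-based record versus an instance-and-attribute (triple-style) record; the schema and field names are hypothetical.

```python
# Hypothetical illustration of the modeling contrast discussed in the abstract:
# a fixed, class-based record forces contributors to pick from predefined
# classes and attributes, while an instance-and-attribute (triple-style)
# representation lets them record whatever they observed. The schema and the
# bird-sighting example are invented for illustration.

# Class-based: the schema fixes the class and its attributes up front.
class_based_record = {
    "class": "Sparrow",          # contributor must choose a predefined class
    "wing_color": "brown",
    "location": "Central Park",
}

# Instance-and-attribute-based: free-form attribute/value pairs about an instance.
instance_attribute_record = [
    ("sighting_42", "observed_kind", "small brown bird"),  # general level, no forced class
    ("sighting_42", "wing_color", "brown with white streaks"),
    ("sighting_42", "behavior", "feeding near the pond"),
    ("sighting_42", "location", "Central Park"),
]

# Information loss example: the class-based record cannot store "behavior"
# unless the schema is changed; the triple form captures it directly.
print([t for t in instance_attribute_record if t[1] == "behavior"])
```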

Journal ArticleDOI
TL;DR: This paper investigates a web-based medical research network that relies on patient self-reporting to collect and analyze data on the health status of patients, mostly suffering from severe conditions, and argues that the network could be seen as a harbinger of new models of organizing medical knowledge creation and medical work in the digital age.
Abstract: This paper investigates a web-based, medical research network that relies on patient self-reporting to collect and analyze data on the health status of patients, mostly suffering from severe conditions. The network organizes patient participation in ways that break with the strong expert culture of medical research. Patient data entry is largely unsupervised. It relies on a data architecture that encodes medical knowledge and medical categories, yet remains open to capturing details of patient life that have as a rule remained outside the purview of medical research. The network thus casts the pursuit of medical knowledge in a web-based context, marked by the pivotal importance of patient experience captured in the form of patient data. The originality of the network owes much to the innovative amalgamation of networking and computational functionalities built into a potent social media platform. The arrangements the network epitomizes could be seen as a harbinger of new models of organizing medical knowledge creation and medical work in the digital age, and a complement or alternative to established models of medical research.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a framework to elaborate the concept of IT-enabled business models and identify areas for future research that will enhance our understanding of the subject and introduce the idea that two business-to-business B2B IT capabilities (dyadic IT customization and network IT standardization) are mediating execution mechanisms between the strategic intent of interfirm collaboration and the reconfiguration of BMs to both create and appropriate value.
Abstract: There is growing recognition that firms' information technology (IT)-enabled business models (i.e., how interfirm transactions with suppliers, customers, and partners are structured and executed) are a distinctive source of value creation and appropriation. However, the concept of business models' (BMs') “IT enablement” remains coarse in the information systems and strategic management literatures. Our objectives are to introduce a framework to elaborate the concept of IT-enabled BMs and to identify areas for future research that will enhance our understanding of the subject. We introduce the idea that two business-to-business (B2B) IT capabilities, dyadic IT customization and network IT standardization, are the mediating execution mechanisms between the strategic intent of interfirm collaboration and the reconfiguration of BMs to both create and appropriate value. We develop the logic that B2B IT capabilities for BM reconfiguration operate at two levels (IT customization at the dyadic relationship level and IT standardization at the interfirm network level) that together provide the complementary IT capabilities for firms to exchange content, govern relationships, and structure interconnections between products and processes with a diverse set of customers, suppliers, and partners. We discuss how these two complementary B2B IT capabilities are pivotal for firms to pursue different sources of value creation and appropriation. We identify how a firm's governance choices to engage in interfirm collaboration and its interfirm networks coevolve with its B2B IT capabilities as fruitful areas for future research.

Journal ArticleDOI
TL;DR: This research used a discrete choice model to analyze 682,781 messages on Yahoo! Finance message boards for 29 Dow Jones stocks and revealed that, despite the benefits from heterophily, investors are not immune to the allure of homophily in interactions in VICs.
Abstract: Millions of people participate in online social media to exchange and share information. Presumably, such information exchange could improve decision making and provide instrumental benefits to the participants. However, to benefit from the information access provided by online social media, the participant will have to overcome the allure of homophily, which refers to the propensity to seek interactions with others of similar status (e.g., religion, education, income, occupation) or values (e.g., attitudes, beliefs, and aspirations). This research assesses the extent to which social media participants exhibit homophily versus heterophily in a unique context: virtual investment communities (VICs). We study the propensity of investors to seek interactions with others with similar sentiments in VICs and identify theoretically important and meaningful conditions under which homophily is attenuated. To address this question, we used a discrete choice model to analyze 682,781 messages on Yahoo! Finance message boards for 29 Dow Jones stocks and assess how investors select a particular thread to respond to. Our results revealed that, despite the benefits from heterophily, investors are not immune to the allure of homophily in interactions in VICs. The tendency to exhibit homophily is attenuated by an investor's experience in VICs and the amount of information in the thread, but amplified by stock volatility. The paper discusses important implications for practice.
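
A minimal sketch of the kind of discrete choice (conditional logit) setup described here is shown below, with thread choice probabilities driven by sentiment similarity and the amount of information in each thread; the utility specification and coefficients are illustrative assumptions rather than the paper's estimated model.

```python
# Hypothetical conditional-logit sketch: an investor chooses which thread to
# reply to, and utility rises with sentiment similarity (homophily) and the
# amount of information in the thread. Coefficients and thread data are made
# up for illustration.
import numpy as np

def choice_probs(own_sentiment, thread_sentiments, thread_msg_counts,
                 beta_homophily=1.5, beta_info=0.3):
    """Conditional-logit probabilities over candidate threads."""
    similarity = -np.abs(own_sentiment - np.asarray(thread_sentiments))
    info = np.log1p(np.asarray(thread_msg_counts))
    utility = beta_homophily * similarity + beta_info * info
    exp_u = np.exp(utility - utility.max())
    return exp_u / exp_u.sum()

# An investor with bullish sentiment (+1) facing three threads whose average
# sentiments are bullish, neutral, and bearish.
probs = choice_probs(own_sentiment=1.0,
                     thread_sentiments=[0.9, 0.0, -0.8],
                     thread_msg_counts=[12, 40, 5])
print(probs.round(3))   # homophily pulls probability toward the bullish thread
```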

Journal ArticleDOI
TL;DR: This study shows that IT-enabled operations and sensemaking are key enablers of IOR ambidexterity and that vendors should combine these IT capabilities with relationship-specific knowledge that accumulates with relationship duration.
Abstract: Contextual ambidexterity of an interorganizational relationship (IOR) is the ability of its management system to align partners' activities and resources for short-term goals and adapt partners' cognitions and actions for long-term viability. It is an alternative to structural ambidexterity, in which separate units of the IOR pursue short- and long-term goals. We theorize that, when utilized to coordinate the IOR, information technology (IT)-enabled operations and sensemaking, along with interdependent decision making, promote the IOR's contextual ambidexterity. We test our hypotheses on both sides of a customer-vendor relationship using data collected from (1) the account executives of one of the world's largest supply chain vendors (n = 76) and (2) its customers (n = 238). We find commonalities and differences in the influence coordination mechanisms have on contextual ambidexterity from the vendor's and the customer's perspectives. For both customers and vendors, contextual ambidexterity improves the quality and performance of the relationship, and decision interdependence promotes contextual ambidexterity. For customers, using operations support systems (OSSs) and interpretation support systems (ISSs) enhances contextual ambidexterity. For vendors, the impact of both OSS use and ISS use on contextual ambidexterity depends on the duration of the relationship. Our study shows that IT-enabled operations and sensemaking are key enablers of IOR ambidexterity and that vendors should combine these IT capabilities with relationship-specific knowledge that accumulates with relationship duration.

Journal ArticleDOI
TL;DR: The proposed iterative approach is used to improve treatment strategies by predicting and eliminating treatment failures, i.e., insufficient or excessive treatment actions, based on information that is available in electronic medical record systems.
Abstract: Decision strategies in dynamic environments do not always succeed in producing desired outcomes, particularly in complex, ill-structured domains. Information systems often capture large amounts of data about such environments. We propose a domain-independent, iterative approach that (a) applies data mining classification techniques to the collected data in order to discover the conditions under which dynamic decision-making strategies produce undesired or suboptimal outcomes and (b) uses this information to improve the decision strategy under these conditions. In this paper, we formally develop this approach and illustrate it by providing detailed examples of its application to a chronic disease care problem in a healthcare management organization, specifically the treatment of patients with type 2 diabetes mellitus. In particular, the proposed iterative approach is used to improve treatment strategies by predicting and eliminating treatment failures, i.e., insufficient or excessive treatment actions, based on information that is available in electronic medical record systems. We also apply the proposed approach to a manufacturing task, resulting in substantial decision strategy improvements, which further demonstrates the generality and flexibility of the proposed approach.
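
The hypothetical sketch below illustrates the two steps of the iterative approach on synthetic data: (a) a classifier discovers conditions under which the current strategy produces failures, and (b) the strategy is overridden under those conditions; the features, labels, and override rule are invented for illustration and are not the authors' clinical model.

```python
# Hypothetical sketch of the iterative idea: (a) train a classifier on logged
# decision outcomes to find conditions under which the current strategy fails,
# then (b) override the strategy under those conditions. Feature names, the
# failure labels, and the override rule are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
# Synthetic "electronic record" features: blood glucose level and current dose.
glucose = rng.uniform(5, 15, n)
dose = rng.uniform(0, 10, n)
# Assume (for the sketch) the current strategy under-treats when glucose is
# high and the dose is low: those cases are labeled treatment failures.
failure = ((glucose > 11) & (dose < 4)).astype(int)

# Step (a): discover the failure conditions from the data.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    np.column_stack([glucose, dose]), failure)
print(export_text(tree, feature_names=["glucose", "dose"]))

# Step (b): patch the decision strategy under the discovered conditions.
def revised_strategy(g, d, base_action="keep dose"):
    if tree.predict([[g, d]])[0] == 1:     # predicted failure region
        return "increase dose"             # illustrative override
    return base_action

print(revised_strategy(12.5, 2.0))
```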

Journal ArticleDOI
TL;DR: In this article, the authors explore the economics of free under perpetual licensing and derive the equilibria for each model and identify optimality regions, where consumers significantly underestimate the value of functionality and cross-module synergies are weak.
Abstract: In this paper, we explore the economics of free under perpetual licensing. In particular, we focus on two emerging software business models that involve a free component: feature-limited freemium (FLF) and uniform seeding (S). Under FLF, the firm offers the basic software version for free, while charging for premium features. Under S, the firm gives away the full product for free to a percentage of the addressable market, uniformly across consumer types. We benchmark their performance against a conventional business model under which software is sold as a bundle (labeled “charge for everything,” or CE) without free offers. In the context of consumer bounded rationality and information asymmetry, we develop a unified two-period consumer valuation learning framework that accounts for both word-of-mouth (WOM) effects and experience-based learning, and use it to compare and contrast the three business models. Under both constant and dynamic pricing, for moderate strength of WOM signals, we derive the equilibria for each model and identify optimality regions. In particular, S is optimal when consumers significantly underestimate the value of functionality and cross-module synergies are weak. When either cross-module synergies are stronger or initial priors are higher, the firm decides between CE and FLF. Furthermore, we identify nontrivial switching dynamics from one optimality region to another depending on the initial consumer beliefs about the value of the embedded functionality. For example, there are regions where, ceteris paribus, FLF is optimal when the prior on premium functionality is either relatively low or high, but not in between. We also demonstrate the robustness of our findings with respect to various parameterizations of cross-module synergies, strength of WOM effects, and number of periods. We find that stronger WOM effects or more periods lead to an expansion of the seeding optimality region in parallel with a decrease in the seeding ratio. Moreover, under CE and dynamic pricing, the second-period price may be decreasing in the initial consumer valuation beliefs when WOM effects are strong and the prior is relatively low. However, this is not the case under weak WOM effects. We also discuss regions where price skimming and penetration pricing are optimal. Our results provide key managerial insights that are useful to firms in their business model search and implementation.

Journal ArticleDOI
TL;DR: This work proposes that heuristic theorizing is a useful alternative to established theorizing approaches (i.e., reasoning-based approaches) for proactive design theorizing, and illustrates the effectiveness of the framework through a detailed example of a multiyear design science research program in which a design theory was proactively generated for solving problems in the area of intelligent information management and so-called big data in the finance domain.
Abstract: Design theories provide explicit prescriptions, such as principles of form and function, for constructing an artifact that is designed to meet a set of defined requirements and solve a problem. Design theory generation is increasing in importance because of the increasing number and diversity of problems that require the participation and proactive involvement of academic researchers to build and test artifact-based solutions. However, we have little understanding of how design theories are generated. Drawing on key contributions by Herbert A. Simon, including the ideas of satisficing and bounded rationality, and reviewing a large body of information systems and problem-solving literature, we develop a normative framework for proactive design theorizing based on the notion of heuristic theorizing. Heuristics are rules of thumb that provide a plausible aid in structuring the problem at hand or in searching for a satisficing artifact design. An example of a problem-structuring heuristic is problem decomposition, and an example of an artifact design heuristic is analogical design. We define heuristic theorizing as the process of proactively generating design theory for prescriptive purposes from problem-solving experiences and prior theory by constantly iterating between the search for a satisficing problem solution (i.e., heuristic search) and the synthesis of new information that is generated during heuristic search (i.e., heuristic synthesis). Heuristic search involves alternating between structuring the problem at hand and generating new artifact design components, whereas heuristic synthesis involves different ways of thinking, including reflection and learning and forms of reasoning, that complement the use of heuristics for theorizing purposes. We illustrate the effectiveness of our heuristic theorizing framework through a detailed example of a multiyear design science research program in which we proactively generated a design theory for solving problems in the area of intelligent information management and so-called big data in the finance domain. We propose that heuristic theorizing is a useful alternative to established theorizing approaches (i.e., reasoning-based approaches). Heuristic theorizing is particularly relevant for proactive design theorizing, which emphasizes problem solving as being highly intertwined with theorizing, involves a greater variety of ways of thinking than other theorizing approaches, and assumes an engaged relationship between academics and practitioners.

Journal ArticleDOI
TL;DR: Examination of how individuals filter knowledge encountered in online forums, a common platform for knowledge exchange in an electronic network of practice, shows that peripheral cues (source expertise and validation) have a greater influence on knowledge filtering decisions than does the content quality of the solution.
Abstract: Electronic networks of practice have become a prevalent means for acquiring new knowledge. Knowledge seekers commonly turn to online repositories constructed by these networks to find solutions to domain-specific problems and questions. Yet little is understood about the process by which such knowledge is evaluated and adopted by knowledge seekers. This study examines how individuals filter knowledge encountered in online forums, a common platform for knowledge exchange in an electronic network of practice. Drawing on dual process theory, we develop research hypotheses regarding both central and peripheral evaluation of knowledge. These hypotheses are examined in a field experiment in which participants evaluate online solutions for computer programming problems. Results show that peripheral cues (source expertise and validation) have a greater influence on knowledge filtering decisions than does the content quality of the solution. Moreover, elaboration increases the effect of content quality but does not seem to attenuate the effect of peripheral cues. Implications for research and practice are discussed.

Journal ArticleDOI
TL;DR: The study provides unique empirical evidence of the importance of PMs' PI in software offshore outsourcing projects and theorizes that project complexity and familiarity contribute to a project's information constraints and the likelihood of critical incidents, thereby moderating the relationship between PMs' PI and project performance.
Abstract: This study examines the role of project managers' (PMs') practical intelligence (PI) in the performance of software offshore outsourcing projects. Based on the extant literature, we conceptualize PI for PMs as their capability to resolve project-related work problems, given their long-range and short-range goals; PI is targeted at resolving unexpected and difficult situations, which often cannot be resolved using established processes and frameworks. We then draw on the information processing literature to argue that software offshore outsourcing projects are prone to severe information constraints that lead to unforeseen critical incidents that must be resolved adequately for the projects to succeed. We posit that PMs can use PI to effectively address and resolve such incidents, and therefore the level of PMs' PI positively affects project performance. We further theorize that project complexity and familiarity contribute to its information constraints and the likelihood of critical incidents in a project, thereby moderating the relationship between PMs' PI and project performance. To evaluate our hypotheses, we analyze longitudinal data collected in an in-depth field study of a leading software vendor organization in India. Our data include project- and personnel-level archival data on 530 projects completed by 209 PMs. We employ the critical incidents methodology to assess the PI of the PMs who led these projects. Our findings indicate that PMs' PI has a significant and positive impact on project performance. Further, projects with higher complexity or lower familiarity benefit even more from PMs' PI. Our study extends the literatures on project management and outsourcing by conceptualizing and measuring PMs' PI, by theorizing its relationship with project performance, and by positing how that relationship is moderated by project complexity and familiarity. Our study provides unique empirical evidence of the importance of PMs' PI in software offshore outsourcing projects. Given that PMs with high PI are scarce resources, our findings also have practical implications for the optimal resource allocation and training of PMs in software offshore services companies.

Journal ArticleDOI
TL;DR: The results suggest that firm information strategy should take into account consumers' characteristics, their past observed behaviors, and the impact of consumer informedness, and show that price informedness is more influential among consumers in the commodity segment.
Abstract: Consumer informedness plays a critical role in determining consumer choice in the presence of information technology deployed by competing firms in the marketplace. This paper develops a new theory of consumer informedness. Using data collected through a series of stated choice experiments in two different research contexts, we examine how consumer characteristics and observed behaviors moderate the influence of price and product informedness on consumer choice. The results indicate that different types of consumer informedness amplify different consumer behaviors in specific consumer segments. In particular, we found that price informedness is more influential among consumers in the commodity segment. They exhibit greater trading down behavior, which represents stronger preferences for choosing the products that provide the best price. In contrast, product informedness is more influential among consumers in the differentiated segment. This group exhibits greater trading out behavior, involving stronger preferences for choosing products that best suit their specific needs. These results suggest that firm information strategy should take into account consumers' characteristics, their past observed behaviors, and the impact of consumer informedness. We also discuss the theoretical contributions of this research and its broader implications for firm-level information strategy.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the dynamics of blog reading behavior of employees in an enterprise blogosphere and identify a variety-seeking behavior of blog readers where they dynamically switch from reading on one set of topics to another.
Abstract: We investigate the dynamics of blog reading behavior of employees in an enterprise blogosphere. A dynamic model is developed and calibrated using longitudinal data from a Fortune 1,000 IT services firm. Our modeling framework allows us to segregate the impact of the textual characteristics (sentiment and quality) of a post on attracting readers from its impact on retaining them. We find that the textual characteristics that appeal to the sentiment of the reader affect both reader attraction and retention. However, textual characteristics that reflect only the quality of the posts affect only reader retention. We identify a variety-seeking behavior of blog readers where they dynamically switch from reading on one set of topics to another. The modeling framework and findings of this study highlight opportunities for the firm to influence the blog-reading behavior of its employees to align it with its goals. Overall, this study contributes to an improved understanding of the reading behavior of individuals in communities formed around user-generated content.

Journal ArticleDOI
TL;DR: Results show that service leadership and customization-personalization control have significant direct impacts on ICT service providers' brand equity, and brand equity has significant impacts on consumers' affective loyalty and conative loyalty, but not on cognitive loyalty.
Abstract: This paper examines the effects of information and communication technology (ICT) service innovation and its complementary strategies on brand equity and customer loyalty toward ICT service providers. We draw from research on brand equity and customer loyalty, ICT innovation management, and strategy complementarity to propose a model that includes new constructs representing ICT service innovation (i.e., service leadership) and its two complementary strategies (i.e., customization-personalization control and technology leadership), and how their interactions influence customer loyalty through customer-based brand equity. We test our model using data from an online survey of 1,210 customers of mobile data services. The results show that service leadership and customization-personalization control have significant direct impacts on ICT service providers' brand equity. Moreover, when either the level of technology leadership or the level of customization-personalization control is high, the impact of service leadership on brand equity is enhanced. In turn, brand equity has significant impacts on consumers' affective loyalty and conative loyalty, but not on cognitive loyalty. Our study contributes to the literature on service management and service science, and in particular to the management of ICT service innovation in a consumer technology market.

Journal ArticleDOI
TL;DR: The Model of Online Service Technologies (MOST) is proposed, theorizing that the capacity of a service provider to accommodate the variability of customer inputs into the service process is the key difference among various types of service technologies, and the impact of service technologies with different capacities to accommodate input variability on efficiency and personalization, the two competing goals of service adoption, is empirically investigated.
Abstract: Online retailers are increasingly providing service technologies, such as technology-based and human-based services, to assist customers with their shopping. Despite the prevalence of these service technologies and the scholarly recognition of their importance, surprisingly little empirical research has examined the fundamental differences among them. Consequently, little is known about the factors that may favor the use of one type of service technology over another. In this paper, we propose the Model of Online Service Technologies (MOST) to theorize that the capacity of a service provider to accommodate the variability of customer inputs into the service process is the key difference among various types of service technologies. We posit two types of input variability: Service Provider-Elicited Variability (SPEV), where variability is determined in advance by the service provider; and User-Initiated Variability (UIV), where customers determine variability in the service process. We also theorize about the role of task complexity in changing the effectiveness of service technologies. We then empirically investigate the impact of service technologies that possess different capacities to accommodate input variability on efficiency and personalization, the two competing goals of service adoption. Our empirical approach attempts to capture both the perspective of the vendor who may deploy such technologies, as well as the perspective of customers who might choose among service technology alternatives. Our findings reveal that SPEV technologies (i.e., technologies that can accommodate SPEV) are more efficient, but less personalized, than SPEUIV technologies (i.e., technologies that can accommodate both SPEV and UIV). However, when task complexity is high (vs. low), the superior efficiency of SPEV technologies is less prominent, while both SPEV and SPEUIV technologies have higher personalization. We also find that when given a choice, a majority of customers tend to choose to use both types of technologies. The results of this study further our understanding of the differences in efficiency and personalization experienced by customers when using various types of online service technologies. The results also inform practitioners when and how to implement these technologies in the online shopping environment to improve efficiency and personalization for customers.

Journal ArticleDOI
TL;DR: This is the first study of a large-scale dynamic network that shows that a product network contains useful distributed information for demand prediction, and the economic implications of algorithmically predicting demand for large numbers of products are significant.
Abstract: We define an economic network as a linked set of entities, where links are created by actual realizations of shared economic outcomes between entities. We analyze the predictive information contained in a specific type of economic network, namely, a product network, where the links between products reflect aggregated information on the preferences of large numbers of individuals to co-purchase pairs of products. The product network therefore reflects a simple “smoothed” model of demand for related products. Using a data set containing more than 70 million observations of a nonstatic co-purchase network over a period of two years, we predict network entities' future demand by augmenting data on their historical demand with data on the demand for their immediate neighbors, in addition to network properties, specifically, local clustering and PageRank. To our knowledge, this is the first study of a large-scale dynamic network that shows that a product network contains useful distributed information for demand prediction. The economic implications of algorithmically predicting demand for large numbers of products are significant.
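
A minimal sketch of the feature construction described here appears below: a product's own lagged demand is augmented with the average demand of its co-purchase neighbors plus local clustering and PageRank before fitting a simple predictive model; the toy graph, demand numbers, and linear model are illustrative assumptions.

```python
# Hypothetical sketch of the feature construction the abstract describes:
# augment a product's own lagged demand with the demand of its co-purchase
# neighbors plus local clustering and PageRank, then fit a simple predictive
# model. The toy graph and demand numbers are invented for illustration.
import networkx as nx
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy co-purchase network: an edge means the two products are often co-bought.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])
demand_t = {"A": 120, "B": 80, "C": 200, "D": 40, "E": 15}    # current period
demand_t1 = {"A": 130, "B": 85, "C": 210, "D": 55, "E": 20}   # next period (target)

pagerank = nx.pagerank(G)
clustering = nx.clustering(G)

def features(p):
    """Own demand, average neighbor demand, local clustering, PageRank."""
    neighbor_demand = np.mean([demand_t[n] for n in G.neighbors(p)])
    return [demand_t[p], neighbor_demand, clustering[p], pagerank[p]]

products = list(G.nodes)
X = np.array([features(p) for p in products])
y = np.array([demand_t1[p] for p in products])

model = LinearRegression().fit(X, y)
print(dict(zip(["own_demand", "neighbor_demand", "clustering", "pagerank"],
               model.coef_.round(2))))
```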

Journal ArticleDOI
TL;DR: Aguirre-Urreta and Marakas' (2014) study does not support valid inference about the behavior of PLS path modeling with respect to endogenous formatively measured constructs.
Abstract: Aguirre-Urreta and Marakas [Aguirre-Urreta MI, Marakas GM (2014) Research note-Partial least squares and models with formatively specified endogenous constructs: A cautionary note. Inform. Systems Res. 25(4):761-778] aim to evaluate the performance of partial least squares (PLS) path modeling when estimating models with formative endogenous constructs, but their ability to reach valid conclusions is compromised by three major flaws in their research design. First, their population data generation model does not represent "formative measurement" as researchers generally understand that term. Second, their design involves a PLS path model that is misspecified with respect to their population model. Third, although their aim is to estimate a composite-based PLS path model, their design uses simulation data generated via a factor analytic procedure. In consequence of these flaws, Aguirre-Urreta and Marakas' (2014) study does not support valid inference about the behavior of PLS path modeling with respect to endogenous formatively measured constructs.
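
The hypothetical sketch below illustrates the data-generation distinction at issue, contrasting indicators generated from a common latent factor with a construct defined as a weighted composite of its indicators; loadings, weights, and sample size are arbitrary illustrative choices and do not reproduce the original simulation design.

```python
# Hypothetical illustration of the data-generation distinction the comment
# raises: indicators generated from a common latent factor (factor-analytic
# model) versus a construct defined as a weighted composite of its indicators.
# Loadings, weights, and sample size are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Factor-analytic generation: a latent variable causes the indicators.
eta = rng.normal(size=n)                         # latent factor
loadings = np.array([0.8, 0.7, 0.6])
errors = rng.normal(size=(n, 3)) * np.sqrt(1 - loadings**2)
indicators_factor = eta[:, None] * loadings + errors

# Composite-based generation: the construct is built from the indicators.
indicators_raw = rng.normal(size=(n, 3))
weights = np.array([0.5, 0.3, 0.2])
composite = indicators_raw @ weights             # construct is a weighted sum

# The correlation structures differ: factor-generated indicators are mutually
# correlated, while the composite's indicators here are roughly uncorrelated.
print(np.corrcoef(indicators_factor, rowvar=False).round(2))
print(np.corrcoef(indicators_raw, rowvar=False).round(2))
print(round(float(composite.var()), 2))          # variance of the formed composite
```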

Journal ArticleDOI
TL;DR: This research provides a systematic theoretical framework that accounts for the prevalence of gain-share contracts in the IT industry's joint improvement efforts, and it provides guiding principles for understanding the increased role for customer support centers in product improvement.
Abstract: We study the role of different contract types in coordinating the joint product improvement effort of a client and a customer support center. The customer support center's costly efforts at joint product improvement include transcribing and analyzing customer feedback, analyzing market trends, and investing in product design. Yet this cooperative role must be adequately incentivized by the client, since it could lead to fewer service requests and hence lower revenues for the customer support center. We model this problem as a sequential game with double-sided moral hazard in a principal-agent framework in which the client is the principal. We follow the contracting literature in modeling the effort of the customer support center, which is the first mover, as either unobservable or observable; in either case, the efforts are unverifiable and so cannot be contracted on directly. We show that it is optimal for the client to offer the customer support center a linear gain-share contract when efforts are unobservable, even though it can yield only the second-best solution for the client. We also show that the cost-plus contracts widely used in practice do not obtain the optimal solution. However, we demonstrate that if efforts are observable then a gain-share and cost-plus options-based contract is optimal and will also yield the first-best solution. Our research provides a systematic theoretical framework that accounts for the prevalence of gain-share contracts in the IT industry's joint improvement efforts, and it provides guiding principles for understanding the increased role for customer support centers in product improvement.

Journal ArticleDOI
TL;DR: A continued need to study their relationships, and to separate short-term and long-term effects, is observed, and it is concluded that patience is required to achieve increased understanding in this important domain.
Abstract: The information systems field started with the expectation that information and technology will significantly shape the nature of work. The topic provides ample scope for significant scholarly inquiry. Work content, process, and organization are now different from what they were in the 1960s and 1970s, which provided a foundation for theories and understanding. Although investigations about the changing nature of work have been made for years, this special section recognizes that the time of reckoning has come again. There is a growing need for deeper understanding of information, technology, and work. The specific contributions of this special section are at the heart of new frontiers of research in information, technology, and work. We observe a continued need to study their relationships, and to separate short-term and long-term effects. We expect continued surprises and conclude that patience is required to achieve increased understanding in this important domain.