
Showing papers in "Information Technology & Management in 2007"


Journal ArticleDOI
TL;DR: This paper explores RFID and proposes a research agenda to address a series of broad research questions related to how RFID technology is developed, adopted, and implemented by organizations; is used, supported, and evolved within organizations and alliances; and impacts individuals, business processes, organizations, and markets.
Abstract: Radio frequency identification (RFID) technology dramatically increases the ability of the organization to acquire a vast array of data about the location and properties of any entity that can be physically tagged and wirelessly scanned within certain technical limitations. RFID can be applied to a variety of tasks, structures, work systems and contexts along the value chain, including business-to-business logistics, internal operations, business-to-consumer marketing, and after-sales service applications. As industry adoption of RFID increases there is an emerging interest by academic researchers to engage in scholarly investigation to understand how RFID relates to mobility, organizational and systems technologies (MOST). In this paper, we explore RFID and propose a research agenda to address a series of broad research questions related to how RFID technology: (1) is developed, adopted, and implemented by organizations; (2) is used, supported, and evolved within organizations and alliances; and (3) impacts individuals, business processes, organizations, and markets. As with many technological innovations, as the technical problems associated with implementing and using RFID are addressed and resolved, the managerial and organizational issues will emerge as critical areas for IS research.

382 citations


Journal ArticleDOI
TL;DR: A new learning-oriented model for ontology development and a framework for ontological learning are proposed and important dimensions for classifying ontology learning approaches and techniques are identified.
Abstract: Ontology is one of the fundamental cornerstones of the semantic Web. The pervasive use of ontologies in information sharing and knowledge management calls for efficient and effective approaches to ontology development. Ontology learning, which seeks to discover ontological knowledge from various forms of data automatically or semi-automatically, can overcome the bottleneck of ontology acquisition in ontology development. Despite the significant progress in ontology learning research over the past decade, there remain a number of open problems in this field. This paper provides a comprehensive review and discussion of major issues, challenges, and opportunities in ontology learning. We propose a new learning-oriented model for ontology development and a framework for ontology learning. Moreover, we identify and discuss important dimensions for classifying ontology learning approaches and techniques. In light of the impact of domain on choosing ontology learning approaches, we summarize domain characteristics that can facilitate future ontology learning efforts. The paper offers a road map and a variety of insights about this fast-growing field.

211 citations


Journal ArticleDOI
TL;DR: It is argued that, instead of considering technologies in isolation, technology evolution is best viewed as a dynamic system or ecosystem that includes a variety of interrelated technologies.
Abstract: We propose a new conceptual model for understanding technology evolution that highlights dynamic and highly interdependent relationships among multiple technologies. We argue that, instead of considering technologies in isolation, technology evolution is best viewed as a dynamic system or ecosystem that includes a variety of interrelated technologies. By considering the interdependent nature of technology evolution, we identify three roles that technologies play within a technology ecosystem. These roles are components, products and applications, and support and infrastructure. Technologies within an ecosystem interact through these roles and impact each others' evolution. We also classify types of interactions between technology roles, which we term paths of influence. We demonstrate the use of our proposed model through examples of wireless networking (Wi-Fi) technologies and a business mini-case on the digital music industry.

115 citations


Journal ArticleDOI
TL;DR: A model linking consumer personality type with the decision to purchase from a virtual store indicates that a consumer's personality type has an effect on perceived ease of use and peer influence; those two variables, together with perceived usefulness, in turn affect a consumer's eventual decision to purchase from a virtual store.
Abstract: Despite the proliferation of virtual stores, research into the consumer personality characteristics that influence consumer interactions with virtual stores has been lagging. In this paper we propose and test a model linking consumer personality type with a decision to purchase from a virtual store. The results indicate that a consumer's personality type has an effect on perceived ease of use and peer influence; and those two variables, together with perceived usefulness, have an effect on a consumer's eventual decision to purchase from a virtual store. The practical implications of the findings are that consumer perceptions and attitudes towards virtual stores can be altered by personalizing virtual stores in a manner which will increase their likelihood of making a purchase.

92 citations


Journal ArticleDOI
TL;DR: An analytical model is presented based on real options that shows the process by which IT infrastructure potential is converted into business value, and discusses middleware as an example technology in this context.
Abstract: Decisions to invest in information technology (IT) infrastructure are often made based on an assessment of its immediate value to the organization. However, an important source of value comes from the fact that such technologies have the potential to be leveraged in the development of future applications. From a real options perspective, IT infrastructure investments create growth options that can be exercised if and when an organization decides to develop systems to provide new or enhanced IT capabilities. We present an analytical model based on real options that shows the process by which this potential is converted into business value, and discuss middleware as an example technology in this context. We derive managerial implications for the evaluation of IT infrastructure investments, and the main findings are: (1) the flexibility provided by IT infrastructure investment is more valuable when uncertainty is higher; (2) the cost advantage that IT infrastructure investment brings about is amplified by demand volatility for IT-supported products and services; (3) in duopoly competition, the value of IT infrastructure flexibility increases with the level of product or service substitutability; and (4) when demand volatility is high, inter-firm competition has a lower impact on the value of IT infrastructure.
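Finding (1), that infrastructure flexibility is more valuable under higher uncertainty, can be illustrated with a minimal two-state real-options sketch. The payoff function and numbers below are illustrative assumptions, not the paper's analytical model:

```python
def growth_option_value(v_up, v_down, p_up, exercise_cost):
    """Value of a growth option: the firm develops the follow-on
    application only when realized project value exceeds the
    development (exercise) cost, so downside losses are truncated."""
    payoff_up = max(v_up - exercise_cost, 0.0)
    payoff_down = max(v_down - exercise_cost, 0.0)
    return p_up * payoff_up + (1 - p_up) * payoff_down

# Both scenarios have the same expected project value (100), but the
# second is more volatile -- and the option on it is worth more.
low_uncertainty = growth_option_value(110, 90, 0.5, exercise_cost=100)
high_uncertainty = growth_option_value(140, 60, 0.5, exercise_cost=100)
```

Because the option truncates the downside, a mean-preserving spread in project value raises the option's worth, which is the intuition behind the first managerial implication.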

69 citations


Journal ArticleDOI
TL;DR: This work shows how a long-term vision combined with homogeneity in information management capabilities across workgroups can lead to organizationally desirable levels of information exchange, and how benefit sharing can either help or hurt individual and organizational information exchange outcomes under different circumstances.
Abstract: Organizations which have invested heavily in Enterprise Resource Planning (ERP) systems, intranets and Enterprise Information Portals (EIP) with standardized workflows, data definitions and a common data repository, have provided the technological capability to their workgroups to share information at the enterprise level. However, the responsibility of populating the repository with relevant and high quality data required for customized data analyses is spread across workgroups associated with specific business processes. In an information interdependent setting, factors such as short-term organizational focus and the lack of uniformity in information management skills across workgroups can act as impediments to information sharing. Using an analytical model of information exchange between two workgroups, we study the impact of measures (e.g., creating a perception of continuity and persistence in interactions, benefit sharing, etc.) on the performance of the workgroups and the organization. The model considers a setting we describe as information complementarity, where the payoff to a workgroup depends not only on the quality of its own information, but also on that of the information provided by other workgroups. We show how a long-term vision combined with homogeneity in information management capabilities across workgroups can lead to organizationally desirable levels of information exchange, and how benefit sharing can either help or hurt individual and organizational information exchange outcomes under different circumstances. Our analysis highlights the need for appropriate organizational enablers to realize the benefits of enterprise systems and related applications.

67 citations


Journal ArticleDOI
TL;DR: The results indicate that software size in function points significantly impacts the software development effort.
Abstract: In this paper, we investigate the impact of team size on the software development effort. Using field data of over 200 software projects from various industries, we empirically test the impact of team size and other variables--such as software size in function points, ICASE tool and programming language type--on software development effort. Our results indicate that software size in function points significantly impacts the software development effort. The two-way interactions between function points and use of ICASE tool, and function points and language type are significant as well. Additionally, the interactions between team size and programming language type, and team size and use of ICASE tool were all significant.
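The reported model can be sketched as a linear effort equation with the two-way interaction terms the study finds significant. The coefficients below are invented for illustration and are not the paper's estimates:

```python
def predicted_effort(fp, team_size, uses_icase, coef):
    """Hypothetical linear effort model: main effects for function
    points and team size, plus interactions with ICASE tool use."""
    return (coef["intercept"]
            + coef["fp"] * fp
            + coef["team"] * team_size
            + coef["fp_x_icase"] * fp * uses_icase
            + coef["team_x_icase"] * team_size * uses_icase)

# Illustrative coefficients (NOT the paper's estimates): negative
# interaction terms mean the tool dampens the effect of size.
coef = {"intercept": 100.0, "fp": 2.0, "team": 50.0,
        "fp_x_icase": -0.5, "team_x_icase": -10.0}

with_tool = predicted_effort(400, 6, 1, coef)
without_tool = predicted_effort(400, 6, 0, coef)
```

The point of the interaction terms is that the marginal effort per function point (or per team member) depends on whether an ICASE tool is used, rather than being a single constant slope.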

56 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a vertical differentiation game-theoretic model that addresses the issue of designing free software samples (shareware) for attaining follow-on sales.
Abstract: We develop a vertical differentiation game-theoretic model that addresses the issue of designing free software samples (shareware) for attaining follow-on sales. When shareware can be reinstalled, cannibalization of sales of the commercial product may ensue. We analyze the optimal design of free software according to two characteristics: the evaluation period allotted for sampling (potentially renewable) and the proportion of features included in the sample. We introduce a new software classification scheme based on the characteristics of the sample that aid consumer learning. We find that the optimal combination of features and trial time greatly depends on the category of software within the classification scheme. Under alternative learning scenarios, we show that the monopolist may be better off not suppressing potential shareware reinstallation.

44 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine the relationship between firms' human resources (HR) practices and their information technology (IT) practices, focusing on the dichotomy between autonomy and control, and consider the effects of this alignment on worker performance.
Abstract: We examine the relationship between firms' human resources (HR) practices and their information technology (IT) practices, focusing on the dichotomy between autonomy and control. We define facilitating HR practices as those that exhibit the following characteristics: worker autonomy, connectedness, learning, valuing individuals, trust, and flexibility in business processes. We then characterize facilitating IT practices, which are practices that facilitate employee collaboration, autonomy, and wider access to information. We contrast these categories of practice to traditional HR and monitoring IT, respectively. Drawing from theories of complementarities and configuration, we propose that alignment between HR and IT strategies originates at the level of individual practices. We consider the effects of this alignment on worker performance. We then ground our discussion in exploratory empirical and qualitative results.

38 citations


Journal ArticleDOI
TL;DR: It is found that method variations do influence programming results, and better performance and satisfaction outcomes are achieved when the pair programming is performed in face-to-face versus virtual settings, in combination with the test-driven approach, and with more experienced programmers.
Abstract: The use of agile methods is growing in industrial practice due to the documented benefits of increased software quality, shared programmer expertise, and user satisfaction. These methods include pair programming (two programmers working side-by-side producing the code) and test-driven approaches (test cases written first to prepare for coding). In practice, software development organizations adapt agile methods to their environment. The purpose of this research is to understand better the impacts of adapting these methods. We perform a set of controlled experiments to investigate how adaptations, or variations, to the pair programming method impact programming performance and user satisfaction. We find that method variations do influence programming results. In particular, better performance and satisfaction outcomes are achieved when the pair programming is performed in face-to-face versus virtual settings, in combination with the test-driven approach, and with more experienced programmers. We also find that limiting the extent of collaboration can be effective, especially when programmers are more experienced. These experimental results provide a rigorous foundation for deciding how to adapt pair programming methods into specific project contexts.

34 citations


Journal ArticleDOI
TL;DR: A theoretical framework is developed to better understand the role of CPC in enabling collaboration in a product development environment, and several research propositions are developed which provide a roadmap for conducting future empirical research to measure the impact of CPC on product development.
Abstract: Collaborative product commerce (CPC) solutions span software and services which permit individuals to share product data to improve the design, development, and management of products throughout the product development lifecycle. Drawing upon prior developments in adaptive structuration theory (AST) and media richness theory, I develop a theoretical framework to better understand the role of CPC in enabling collaboration in a product development environment. I study the impact of CPC on product design and development processes using a cross-sectional survey of 36 firms. The study reveals that CPC usage varies across different phases of the product development lifecycle. Preliminary results indicate that CPC has enabled firms to collaborate effectively with external stakeholders, which has resulted in tangible business benefits. I conclude by developing several research propositions which provide a roadmap for conducting future empirical research to measure the impact of CPC on product development, and highlight potential research topics that can be explored.

Journal ArticleDOI
TL;DR: Analysis of outsourcing transactions between 1998 and 2004 indicates that risk-mitigating strategies have significant explanatory power, indicating that the capital market’s reaction to an outsourcing announcement might at least partly be forecast.
Abstract: Outsourcing is a widely accepted option in strategic management, which, like every business venture, bears opportunities and risks. Supplementing the popular area of research on the merits of outsourcing, this paper examines how stockholders rate corporate sourcing decisions with regard to the risk they associate with this transaction. Using event study methodology and multivariate cross-sectional OLS-regression, we analyze a sample of 182 outsourcing transactions in the global financial services industry between 1998 and 2004 in order to investigate the risk-specific drivers of excess returns to shareholders. The analysis studies the impact of risk-specific independent variables, including transaction size, length, outsourced business functionality, and experience with outsourcing. Our findings indicate that risk-mitigating strategies have significant explanatory power, indicating that the capital market's reaction to an outsourcing announcement might at least partly be forecast. Results show a positive correlation between market reaction and business process outsourcing by financial services companies. We also find strong evidence indicating that capital markets react positively to relatively large transactions compared to the market capitalization of the outsourcing firm. For service providers our results show that traditional IT-related sourcing projects or the insourcing of administrative processes have a significant positive correlation with market reaction.

Journal ArticleDOI
TL;DR: It is demonstrated that when entry costs are equal, one of the existing retailers enters the Internet channel first and if the market is covered by existing retailers before entry, then because of the threat of Internet channel entry by the potential new entrant, retailer entry cannibalizes existing retail profits—cannibalizing at a loss.
Abstract: In this research we study how existing market coverage affects the outcome of the Internet channel entry game between an existing retailer and a new entrant. A market is not covered when some consumers with low reservation prices are priced out by existing retailers and do not purchase. In a model with multiple existing retailers and a potential new entrant, we demonstrate that when entry costs are equal, one of the existing retailers enters the Internet channel first. However, if the market is covered by existing retailers before entry, then because of the threat of Internet channel entry by the potential new entrant, retailer entry cannibalizes existing retail profits--cannibalizing at a loss. In addition, if a potential new entrant has a slight advantage in Internet channel entry costs and the market is not covered by existing retailers, then the new entrant enters the Internet channel first. If the market is covered by existing retailers, then the new entrant must have a larger Internet channel entry cost advantage to be first to enter the Internet channel.

Journal ArticleDOI
TL;DR: A mechanism under vendor-managed inventory (VMI) by which a manufacturer provides an incentive contract to a retailer to convert lost sales stockouts into backorders is developed.
Abstract: We develop a mechanism under vendor-managed inventory (VMI) by which a manufacturer provides an incentive contract to a retailer to convert lost sales stockouts into backorders. An incentive contract is required since the retailer's efforts are not directly observable. We first show that when there are no limits on order quantities or inventory levels imposed on the manufacturer, the manufacturer will push inventory onto the retailer. The manufacturer minimizes the possibility for lost sales stockouts by maintaining high inventory levels at the retailer rather than by paying incentives to the retailer. However, modern information systems (IS), such as radio frequency identification (RFID), allow the retailer to monitor inventory at its premises and to enforce limits on order quantities. With strict limits on order quantities, the manufacturer will provide incentives to the retailer to convert lost sales stockouts to backorders. We analyze the conditions under which these incentive payments are likely to be highest.

Journal ArticleDOI
TL;DR: In this article, the authors argue that the single integrated view approach is unnecessarily restrictive, and instead offer the view that ontologies can simultaneously accommodate multiple integrated views provided the accompaniment of contexts, a set of axioms on the interpretation of data allowing local variations in representation and nuances in meaning, and a conversion function network between contexts to reconcile contextual differences.
Abstract: The prospect of combining information from diverse sources for superior decision making is plagued by the challenge of semantic heterogeneity, as data sources often adopt different conventions and interpretations when there is no coordination. An emerging solution in information integration is to develop an ontology as a standard data model for a domain of interest, and then to define the correspondences between the data sources and this common model to eliminate their semantic heterogeneity and produce a single integrated view of the data sources. We first claim that this single integrated view approach is unnecessarily restrictive, and instead offer the view that ontologies can simultaneously accommodate multiple integrated views provided the accompaniment of contexts, a set of axioms on the interpretation of data allowing local variations in representation and nuances in meaning, and a conversion function network between contexts to reconcile contextual differences. Then, we illustrate how to achieve semantic interoperability between multiple ontology-based applications. During this process, application ontologies are aligned through the reconciliation of their context models, and a new application with a virtual merged ontology is created. We illustrate this alternative approach with the alignment of air travel and car rental domains, an actual example from our prototype implementation.
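The idea of a conversion function network between contexts can be sketched as composable mappings that reconcile local representation choices. The contexts, attribute, and exchange rate below are hypothetical, chosen only to echo the paper's air travel and car rental example:

```python
# Each context fixes a local interpretation of "price"; conversion
# functions between contexts reconcile the representational differences.
conversions = {
    ("airline", "common"): lambda v: v / 100.0,   # cents -> dollars
    ("common", "car_rental"): lambda v: v * 0.9,  # USD -> EUR (assumed rate)
}

def convert(value, path):
    """Apply a chain of context conversions along a path of contexts,
    so two contexts need not share a direct conversion function."""
    for src, dst in zip(path, path[1:]):
        value = conversions[(src, dst)](value)
    return value

# An airline fare stored as 25000 (cents), viewed from the
# car-rental application's context:
fare = convert(25000, ["airline", "common", "car_rental"])
```

Composing conversions through an intermediate context is what lets multiple integrated views coexist: each application keeps its own conventions, and only the pairwise conversion network needs to be maintained.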

Journal ArticleDOI
TL;DR: The use of semantic technology in support of business software heterogeneity is investigated as a likely tool to support a diverse and distributed software inventory and user.
Abstract: Web services have emerged as a prominent paradigm for the development of distributed software systems, as they provide the potential for software to be modularized in a way that functionality can be described, discovered and deployed in a platform-independent manner over a network (e.g., intranets, extranets and the Internet). This paper examines an extension of this paradigm to encompass "Grid Services", which enables software capabilities to be recast with an operational focus and support a heterogeneous mix of business software and data, termed a Business Grid--"the grid of semantic services". The current industrial representation of services is predominantly syntactic, however, lacking the fundamental semantic underpinnings required to fulfill the goals of any semantically-oriented Grid. Consequently, the use of semantic technology in support of business software heterogeneity is investigated as a likely tool to support a diverse and distributed software inventory and user base. A service discovery architecture is therefore developed that is (1) distributed in form, (2) supports distributed service knowledge and (3) automatically extends service knowledge (as greater descriptive precision is inferred from the operating application system). This discovery engine is used to execute several real-world scenarios in order to develop and test a framework for engineering such grid service knowledge. The examples presented comprise software components taken from a group of investment banking systems. Resulting from the research is a framework for engineering service knowledge from operational enterprise systems for the purposes of service selection and subsequent reuse.

Journal ArticleDOI
TL;DR: This paper demonstrates how an ontology can be used to extract knowledge from an exemplar XML repository of Shakespeare’s plays and implements an architecture for this ontology using de facto languages of the semantic Web including OWL and RuleML, thus preparing the ontology for use in data sharing.
Abstract: XML plays an important role as the standard language for representing structured data for the traditional Web, and hence many Web-based knowledge management repositories store data and documents in XML. If semantics about the data are formally represented in an ontology, then it is possible to extract knowledge: This is done as ontology definitions and axioms are applied to XML data to automatically infer knowledge that is not explicitly represented in the repository. Ontologies also play a central role in realizing the burgeoning vision of the semantic Web, wherein data will be more sharable because their semantics will be represented in Web-accessible ontologies. In this paper, we demonstrate how an ontology can be used to extract knowledge from an exemplar XML repository of Shakespeare's plays. We then implement an architecture for this ontology using de facto languages of the semantic Web including OWL and RuleML, thus preparing the ontology for use in data sharing. It has been predicted that the early adopters of the semantic Web will develop ontologies that leverage XML, provide intra-organizational value such as knowledge extraction capabilities that are irrespective of the semantic Web, and have the potential for inter-organizational data sharing over the semantic Web. The contribution of our proof-of-concept application, KROX, is that it serves as a blueprint for other ontology developers who believe that the growth of the semantic Web will unfold in this manner.
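A minimal sketch of this kind of ontology-driven inference over XML follows; the element names and axioms are illustrative stand-ins, not the actual corpus schema or KROX rules:

```python
import xml.etree.ElementTree as ET

# A toy fragment in the spirit of an XML play repository
# (element names are illustrative, not the real markup).
doc = """
<play title="Macbeth">
  <scene n="1">
    <speech speaker="MACBETH"/>
    <speech speaker="BANQUO"/>
  </scene>
  <scene n="2">
    <speech speaker="MACBETH"/>
  </scene>
</play>
"""
root = ET.fromstring(doc)

# Axiom 1: anyone who speaks in a scene is a character of the play,
# even though no element states this explicitly.
characters = {s.get("speaker") for s in root.iter("speech")}

# Axiom 2: a character who speaks in more than one scene is
# "recurring" -- again, knowledge only implicit in the markup.
scene_counts = {}
for scene in root.iter("scene"):
    for name in {s.get("speaker") for s in scene.iter("speech")}:
        scene_counts[name] = scene_counts.get(name, 0) + 1
recurring = {name for name, count in scene_counts.items() if count > 1}
```

The repository never says "MACBETH is a recurring character"; applying the axioms to the raw XML derives it, which is the knowledge-extraction pattern the paper formalizes with OWL and RuleML.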

Journal ArticleDOI
TL;DR: A modification of the original formulation of metagraphs that eliminates some of the inconveniences that have hindered the use of this technique and presents a generalized graphical algorithm for structural (i.e., syntactic) verification that runs correctly not only on TPMGs containing directed cycles, but even on those that have overlapping patterns.
Abstract: Business processes can be modeled using a variety of schemes such as Petri Nets, Metagraphs and UML Activity Diagrams. When information analysis is as important an objective as the proper sequencing of tasks, the metagraph formalism is the most appropriate. In practice, however, metagraphs have not achieved wide popularity. Here we propose a modification of the original formulation that eliminates some of the inconveniences that have hindered the use of this technique. We represent a business process as a Task-Precedence Metagraph (TPMG), which is a type of AND/OR graph. A TPMG is similar to a metagraph but is visually clearer and more appealing, and the algorithmic procedures are graphical rather than algebraic. We first describe the proposed representation scheme for TPMGs and present a simple graph-search algorithm for the analysis of information flow. This can be readily extended to perform task analysis, resource analysis, and operational (i.e., semantic) verification. We then present a generalized graphical algorithm for structural (i.e., syntactic) verification that runs correctly not only on TPMGs containing directed cycles, but even on those that have overlapping patterns.
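The information-flow analysis described here amounts to reachability over an AND/OR structure: a task fires only when all of its inputs are available (AND), while an information element is available if any producing task has fired (OR). A minimal forward-chaining sketch, with task names invented for illustration:

```python
def derivable(tasks, sources):
    """Forward-chaining reachability over a task-precedence structure:
    each task is (inputs, outputs); it fires once every input is
    available, and firing makes all of its outputs available."""
    available = set(sources)
    changed = True
    while changed:
        changed = False
        for inputs, outputs in tasks:
            if set(inputs) <= available and not set(outputs) <= available:
                available |= set(outputs)
                changed = True
    return available

# Hypothetical order-fulfilment process: inputs -> outputs per task.
tasks = [({"order", "inventory"}, {"pick_list"}),
         ({"pick_list"}, {"shipment"}),
         ({"credit_report"}, {"approval"})]

reachable = derivable(tasks, {"order", "inventory"})
```

Here "shipment" is derivable from the two source elements, while "approval" is not, because its only producing task lacks an input; the same fixpoint idea underlies task and resource analysis on the graph.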

Journal ArticleDOI
TL;DR: This work addresses the physical SONET network design problem of selecting stackable, unidirectional rings connecting central office nodes (COs) and remote nodes (RNs) by forming a 0–1 programming model for this problem.
Abstract: We address the physical SONET network design problem of selecting stackable, unidirectional rings connecting central office nodes (COs) and remote nodes (RNs). This problem frequently arises in designing feeder transport networks to support centralized traffic between the COs and RNs. We formulate a 0-1 programming model for this problem. A simulated annealing-based Lagrangian relaxation procedure to find optimal or near-optimal solutions is then described. Computational results are reported showing that our procedures produce solutions that are on average within 1.1% of optimality. We show that using simulated annealing to augment the pure Lagrangian approach produces superior solutions to the Lagrangian approach.
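A generic simulated-annealing loop of the kind used to augment the Lagrangian procedure can be sketched as follows; the toy coverage cost below is a stand-in for the actual ring-design objective, not the paper's formulation:

```python
import math
import random

def simulated_annealing(cost, initial, neighbor, steps=2000,
                        t0=1.0, cooling=0.995, seed=7):
    """Minimize cost() by local search that accepts worse moves with a
    temperature-controlled probability, escaping local optima."""
    rng = random.Random(seed)
    current = best = initial
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best

# Toy stand-in for ring selection: choose a 0-1 vector minimizing a
# penalty for uncovered nodes plus a unit cost per selected ring.
demand = [{0, 1}, {1, 2}, {2, 3}]  # nodes covered by each candidate ring

def cost(x):
    covered = set().union(*(demand[i] for i in range(3) if x[i]))
    return 5 * len({0, 1, 2, 3} - covered) + sum(x)

def neighbor(x, rng):
    i = rng.randrange(3)
    return x[:i] + (1 - x[i],) + x[i + 1:]

best = simulated_annealing(cost, (0, 0, 0), neighbor)
```

On this tiny instance the search finds the minimum-cost selection (rings 0 and 2, covering all four nodes); in the paper the same acceptance-and-cooling mechanism perturbs solutions produced by the Lagrangian relaxation.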


Journal ArticleDOI
TL;DR: The purpose of this special issue is to provide a focused outlet for recent advances in realizing the semantic Web vision, including new research results and developments as well as applications of existing research results in this emerging fascinating area.
Abstract: The Web is constantly changing the way we live and the way businesses operate. While the Web is successfully growing into a massively distributed reservoir of information, the information often lacks an explicit, well-defined, machine-understandable meaning attached to it, prohibiting automated manipulation and reasoning about it. The emerging semantic Web paradigm promises to remedy this deficiency and thereby enable the full potential of the Web. The ambitious vision of this paradigm has excited researchers in various areas, including semantic modeling, distributed and heterogeneous information systems, and artificial intelligence. New technologies, such as ontologies and semantic Web services, are being proposed, developed, and standardized. Existing methodologies and techniques are being adapted and applied in this new paradigm. New application opportunities are being discovered and pursued.

The purpose of this special issue is to provide a focused outlet for recent advances in realizing the semantic Web vision, including new research results and developments as well as applications of existing research results in this emerging fascinating area. Our goal is to identify the current state of the art in available and emerging methodologies, tools, technologies, standards, and applications, and to envisage future opportunities and challenges of the research in this area. In this special issue, we present three selected articles that address some of these aspects.

The first article, entitled "Extracting Knowledge from XML Document Repository: A Semantic Web-Based Approach" by Henry M. Kim and Arijit Sengupta, describes a method to automatically extract knowledge from a document corpus of some domain. It presents an ontology-based methodology and tool framework, called KROX (Knowledge Retrieval using Ontologies and XML). The ontology engineering methodology used in KROX comprises several steps, including motivating scenario specification, competency questions formulation, ontology development, and ontology evaluation. The resulting ontology is formally defined by a terminology and an associated set of axioms. The ontology then enables automatic inference of additional knowledge not explicitly represented in the document repository.

The second article, entitled "Enterprise Application Reuse: Semantic Discovery of Business Grid Services" by David Bell, Simone A. Ludwig, and Mark Lycett, investigates an extension of the Web services paradigm by "business grid services". It presents a capability-based service discovery architecture, called SEDI4G (Semantic Discovery for Grid Services), for discovering reusable grid-enabled enterprise software components. SEDI4G extracts the semantics of software components from their syntactic capability descriptions and operational characteristics and uses a semantic matching method to match the capabilities of potentially suitable components against user queries. This work demonstrates the potential benefits of the fusion of the Web services paradigm and the grid paradigm.

S. Ram
Department of Management Information Systems, Eller College of Management, University of Arizona, 1130 E. Helen Street, Tucson, AZ 85721-0108, USA
e-mail: ram@eller.arizona.edu