
Showing papers in "Information Systems Frontiers in 2014"


Journal ArticleDOI
TL;DR: A critical examination of the substrate of crowdsourcing research is presented by surveying the landscape of existing studies, including theoretical foundations, research methods, and research foci, and several important research directions that warrant further study are identified for IS scholars from three perspectives—the participant, organization, and system.
Abstract: Crowdsourcing is an emerging Web 2.0-based phenomenon that has attracted great attention from both practitioners and scholars over the years. It can facilitate the connectivity and collaboration of people, organizations, and societies. We believe that Information Systems scholars are in a unique position to make significant contributions to this emerging research area and consider it a new research frontier. However, so far, few studies have elaborated on what has been achieved and what should be done. This paper seeks to present a critical examination of the substrate of crowdsourcing research by surveying the landscape of existing studies, including theoretical foundations, research methods, and research foci, and identifies several important research directions for IS scholars from three perspectives--the participant, organization, and system--that warrant further study. This research contributes to the IS literature and provides insights for researchers, designers, policy-makers, and managers to better understand various issues in crowdsourcing systems and projects.

535 citations


Journal ArticleDOI
TL;DR: A novel service cloud architecture is presented, and an auto-scaling mechanism is proposed to scale virtual resources at different resource levels in service clouds that can satisfy the user Service Level Agreement (SLA) while keeping scaling costs low.
Abstract: Service clouds are distributed infrastructures which deploy communication services in clouds. Scalability is an important characteristic of service clouds: with it, a service cloud can offer on-demand computing power and storage capacity to different services. To achieve scalability, we need to know when and how to scale the virtual resources assigned to different services. In this paper, a novel service cloud architecture is presented, and a linear regression model is used to predict the workload. Based on this predicted workload, an auto-scaling mechanism is proposed to scale virtual resources at different resource levels in service clouds. The auto-scaling mechanism combines real-time scaling and pre-scaling. Finally, experimental results are provided to demonstrate that our approach can satisfy the user Service Level Agreement (SLA) while keeping scaling costs low.
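The following sketch (not from the paper) illustrates the general idea described above: fit a linear regression over a sliding window of recent workload samples, extrapolate the next value, and pre-scale the number of VMs accordingly. The window size, per-VM capacity, and SLA headroom are assumed values chosen only for illustration.

```python
# Illustrative sketch only: predict the next workload value with a linear fit
# over a sliding window, then decide how many VMs to run. Window size,
# per-VM capacity, and SLA headroom are assumed values, not from the paper.
import numpy as np

def predict_next_workload(history, window=10):
    """Fit workload = a*t + b over the last `window` samples and extrapolate."""
    recent = np.asarray(history[-window:], dtype=float)
    t = np.arange(len(recent))
    a, b = np.polyfit(t, recent, deg=1)          # linear regression
    return max(0.0, a * len(recent) + b)         # forecast for the next step

def vms_needed(predicted_load, capacity_per_vm=100.0, headroom=1.2):
    """Scale out pre-emptively so predicted load * headroom fits in capacity."""
    return int(np.ceil(predicted_load * headroom / capacity_per_vm))

history = [80, 95, 110, 120, 135, 150, 160, 170, 185, 200]
forecast = predict_next_workload(history)
print(f"forecast={forecast:.1f} req/s -> run {vms_needed(forecast)} VMs")
```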

114 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed Phone2Cloud, a computation offloading-based system for energy saving on smartphones in the context of mobile cloud computing, which offloads computation of an application running on smartphones to the cloud.
Abstract: With the prosperity of applications on smartphones, energy saving for smartphones has drawn increasing attention. In this paper we devise Phone2Cloud, a computation offloading-based system for energy saving on smartphones in the context of mobile cloud computing. Phone2Cloud offloads the computation of an application running on a smartphone to the cloud. The objective is to improve the energy efficiency of smartphones and, at the same time, enhance the application's performance by reducing its execution time. In this way, the user's experience can be improved. We implement a prototype of Phone2Cloud in an Android and Hadoop environment. Two sets of experiments, including application experiments and scenario experiments, are conducted to evaluate the system. The experimental results show that Phone2Cloud can effectively save energy for smartphones and reduce the application's execution time.
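As a rough illustration of the offloading decision such a system has to make, the sketch below compares an estimate of the energy for local execution against the energy for transferring the input data and waiting for the cloud. All power, bandwidth, and speed figures are assumptions, not measurements from the paper.

```python
# Minimal sketch of an offloading decision in the spirit of Phone2Cloud:
# offload when the estimated energy of remote execution (transfer + waiting)
# is lower than local execution. All power/bandwidth figures are assumptions.
def should_offload(cycles, data_bytes,
                   local_mips=1e9, local_power_w=0.9,
                   net_bw_bps=2e6, tx_power_w=1.3,
                   cloud_mips=10e9, idle_power_w=0.3):
    local_time = cycles / local_mips
    local_energy = local_time * local_power_w

    tx_time = data_bytes * 8 / net_bw_bps        # time to ship input data
    cloud_time = cycles / cloud_mips             # remote execution time
    offload_energy = tx_time * tx_power_w + cloud_time * idle_power_w

    return offload_energy < local_energy

print(should_offload(cycles=5e9, data_bytes=200_000))
```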

110 citations


Journal ArticleDOI
TL;DR: This paper uses price-based resource allocation strategy and presents both centralized and distributed algorithms to find optimal solutions to these games, showing robust performance for resource allocation and requiring minimal computation time.
Abstract: Distributed resource allocation is a very important and complex problem in emerging horizontal dynamic cloud federation (HDCF) platforms, where different cloud providers (CPs) collaborate dynamically to gain economies of scale and enlargements of their virtual machine (VM) infrastructure capabilities in order to meet consumer requirements. HDCF platforms differ from the existing vertical supply chain federation (VSCF) models in terms of establishing federation and dynamic pricing. There is a need to develop algorithms that can capture this complexity and easily solve the distributed VM resource allocation problem in a HDCF platform. In this paper, we propose a cooperative game-theoretic solution that is mutually beneficial to the CPs. It is shown that in a non-cooperative environment, the optimal aggregated benefit received by the CPs is not guaranteed. We study two utility-maximizing cooperative resource allocation games in a HDCF environment. We use a price-based resource allocation strategy and present both centralized and distributed algorithms to find optimal solutions to these games. Various simulations were carried out to verify the proposed algorithms. The simulation results demonstrate that the algorithms are effective, showing robust performance for resource allocation and requiring minimal computation time.

89 citations


Journal ArticleDOI
TL;DR: It is found that institutional pressures are an important antecedent to absorptive capacity, an important measure of organizational learning capability, which mediates their influence on assimilation.
Abstract: Firms are increasingly employing social media to manage relationships with partner organizations, yet the role of institutional pressures in social media assimilation has not been studied. We investigate social media assimilation in firms using a model that combines the two theoretical streams of IT adoption: organizational innovation and institutional theory. The study uses a composite view of absorptive capacity that includes both previous experience with similar technology and the general ability to learn and exploit new technologies. We find that institutional pressures are an important antecedent to absorptive capacity, an important measure of organizational learning capability. The paper augments theory by delineating the role and limits of institutional pressures. Institutional pressures are found to have no direct effect on social media assimilation but to impact absorptive capacity, which mediates their influence on assimilation.

84 citations


Journal ArticleDOI
TL;DR: The proposed iterative feature selection approach outperforms the non-iterative approach and is designed to find a ranked feature list which is particularly effective on the more balanced dataset resulting from sampling while minimizing the risk of losing data through the sampling step and missing important features.
Abstract: Two important problems which can affect the performance of classification models are high dimensionality (an overabundance of independent features in the dataset) and imbalanced data (a skewed class distribution which creates at least one class with many fewer instances than other classes). To resolve these problems concurrently, we propose an iterative feature selection approach, which repeatedly applies data sampling (to address class imbalance) followed by feature selection (to address high dimensionality), and finally performs an aggregation step which combines the ranked feature lists from the separate iterations of sampling. This approach is designed to find a ranked feature list which is particularly effective on the more balanced dataset resulting from sampling, while minimizing the risk of losing data through the sampling step and missing important features. To demonstrate this technique, we employ 18 different feature selection algorithms and Random Undersampling with two post-sampling class distributions. We also investigate the use of sampling and feature selection without the iterative step (i.e., using the ranked list from a single iteration, rather than combining the lists from multiple iterations), and compare these results with the version which uses iteration. Our study is carried out using three groups of datasets with different levels of class balance, all of which were collected from a real-world software system. All of our experiments use four different learners and one feature subset size. We find that our proposed iterative feature selection approach outperforms the non-iterative approach.
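A minimal sketch of the iterative sampling-plus-ranking loop described above is given below; it uses a single ANOVA F-score ranker and simple random undersampling in place of the paper's 18 rankers and experimental setup, and aggregates the per-iteration ranked lists by summing ranks.

```python
# Sketch of the iterative sampling + feature ranking idea described above:
# repeatedly undersample the majority class, rank features on the balanced
# sample, and aggregate the ranked lists. The ANOVA F-score ranker and the
# number of iterations are illustrative choices, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif

def undersample(X, y, rng):
    """Randomly drop majority-class instances until classes are balanced."""
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([rng.choice(np.where(y == c)[0], n_min, replace=False)
                           for c in classes])
    return X[keep], y[keep]

def iterative_ranking(X, y, iterations=10, seed=0):
    rng = np.random.default_rng(seed)
    rank_sum = np.zeros(X.shape[1])
    for _ in range(iterations):
        Xs, ys = undersample(X, y, rng)
        scores, _ = f_classif(Xs, ys)
        rank_sum += np.argsort(np.argsort(-scores))   # rank 0 = best feature
    return np.argsort(rank_sum)                       # aggregated ranked list

X, y = make_classification(n_samples=1000, n_features=50, weights=[0.95, 0.05],
                           random_state=0)
print("top 10 features:", iterative_ranking(X, y)[:10])
```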

76 citations


Journal ArticleDOI
TL;DR: Group decision making for emergency situations is formulated as a multiple attribute group decision making (MAGDM) problem, the consensus among experts is modeled, and a new methodology is proposed to reach a shared understanding of emergency response plans with maximized consensus in the course of decision making.
Abstract: An emergency response system (ERS) can assist a municipality or government in improving its capabilities to respond to urgent and severe events. The responsiveness and effectiveness of an ERS rely greatly on its data acquisition and processing system, which has evolved with information technology (IT). With the rapid development of sensor networks and cloud computing, the emerging Internet of Things (IoT) tends to play an increasing role in ERSs; networks of sensors, public services, and experts are able to interact with each other and make scientific decisions about emergencies based on real-time data. When group decision making is required in an ERS, one critical challenge is to obtain a good understanding of massive and diversified data and make consensus group decisions under high stress and strict time constraints. Due to the nature of unorganized data and system complexity, an ERS depends on the perceptions and judgments of experts from different domains; it is challenging to assess the consensus of understanding on the collected data and response plans before appropriate decisions can be reached for emergencies. In this paper, group decision making for emergency situations is formulated as a multiple attribute group decision making (MAGDM) problem, the consensus among experts is modeled, and a new methodology is proposed to reach a shared understanding of emergency response plans with maximized consensus in the course of decision making. In the implementation, the proposed methodology is integrated with computer programs and encapsulated as a service on the server. The objectives of the new methodology are (i) to enhance comprehensive group cognizance of emergent scenarios and response plans, (ii) to accelerate the consensus for decision making with an intelligent clustering algorithm, and (iii) to adjust the experts' opinions without affecting the reliability of the decision when consensus cannot be reached in the preliminary decision-making steps. Partitioning Around Medoids (PAM) is applied as the clustering algorithm, and Particle Swarm Optimization (PSO) is deployed to adjust evaluation values automatically. The methodology is applied in a case study to illustrate its effectiveness in converging group opinions and promoting the consensus of understanding on emergencies.

66 citations


Journal ArticleDOI
TL;DR: These results are of interest for policy makers and healthcare managers that must deal with continuous increases in healthcare costs and major demographic changes, and thus need to improve the efficiency and quality of healthcare services provided to patients through IT-based innovations.
Abstract: To face their changing environment, a growing number of healthcare institutions are investing in ERP systems as their basic technological infrastructure, highlighting a phenomenon that recalls the earlier popularity of the ERP movement in the manufacturing and financial sectors. Based on the analysis of 180 stories published on ERP vendors' websites, the primary aim of this study is to identify, characterize and contextualize the motivations that lead to the adoption of these systems in healthcare organizations. Our findings first indicate that these motivations can be classified into six broad categories, namely technological, managerial-operational, managerial-strategic, clinical-operational, clinical-strategic, and financial. Moreover, three clusters of healthcare organizations were identified with regard to these motivations, and labelled as taking a "business", "clinical" or "institutional" view of ERP adoption decisions. Given the specificities of IT adoption in the healthcare sector, the importance of these results from a theoretical standpoint lies in filling a knowledge gap in both the ERP and health IT research domains. From a practical standpoint, these results are of interest for policy makers and healthcare managers that must deal with continuous increases in healthcare costs and major demographic changes, and thus need to improve the efficiency and quality of healthcare services provided to patients through IT-based innovations.

57 citations


Journal ArticleDOI
TL;DR: Several promising future-oriented technology analysis techniques were found and are discussed and much work remains to be done to customize them, integrate them, and codify them for use in education and high-quality IS research on very complex sociotechnical contexts like the global financial network.
Abstract: The ACM Code of Ethics asserts that computing professionals have an ethical responsibility to minimize the negative consequences of information and communication technologies (ICT). Negative consequences are rarely intended, but they can often be foreseen with careful sociotechnical analysis in advance of system building. Motivated by an interest in extremely complex sociotechnical contexts (e.g., mortgage lending and automated trading) where ICT appears to be having negative consequences in addition to many benefits, this paper identifies and evaluates future-oriented problem analysis and solution design tools in three potentially relevant literatures: 1) ICT ethics, 2) environmental sustainability, and 3) technology hazards. Several promising future-oriented technology analysis techniques (e.g., anticipatory technology ethics, technology roadmapping, morphological analysis, and control structure analysis) were found and are discussed in this paper, but much work remains to be done to customize them, integrate them, and codify them for use in education and high-quality IS research on very complex sociotechnical contexts like the global financial network.

54 citations


Journal ArticleDOI
TL;DR: This framework first predicts the missing multi-QoS values according to users' historical QoS experience, and then selects the globally optimal solution for multiple users with a fast-match approach.
Abstract: In order to find the best services to meet multiple users' QoS requirements, several multi-user Web service selection schemes have been proposed. However, the unavoidable challenges in these schemes are efficiency and effectiveness. Most existing schemes are designed for the single-request condition without considering the overload of Web services, and thus cannot be directly used for this problem. Furthermore, existing methods assume that the QoS information for users is fully known and accurate, whereas in real cases there are often many missing QoS values in history records, which increases the difficulty of the selection. In this paper, we propose a new framework for the multi-user Web service selection problem. This framework first predicts the missing multi-QoS values according to the historical QoS experience from users, and then selects the globally optimal solution for multiple users with our fast-match approach. Comprehensive empirical studies demonstrate the utility of the proposed method.
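The sketch below illustrates the first stage in a generic way: missing QoS values are filled in with a similarity-weighted average over other users' observations (a simple collaborative-filtering stand-in, not the paper's actual prediction algorithm).

```python
# Illustrative sketch: fill missing QoS values with a user-based
# collaborative-filtering average weighted by user similarity. This is a
# generic stand-in, not the paper's prediction or matching algorithm.
import numpy as np

def predict_missing_qos(Q):
    """Q: users x services matrix, np.nan where a QoS value is unknown."""
    filled = Q.copy()
    col_mean = np.nanmean(Q, axis=0)
    for u in range(Q.shape[0]):
        for s in range(Q.shape[1]):
            if np.isnan(Q[u, s]):
                sims, vals = [], []
                for v in range(Q.shape[0]):
                    if v != u and not np.isnan(Q[v, s]):
                        common = ~np.isnan(Q[u]) & ~np.isnan(Q[v])
                        if common.any():
                            # similarity = inverse mean distance on shared services
                            d = np.mean(np.abs(Q[u, common] - Q[v, common]))
                            sims.append(1.0 / (1.0 + d))
                            vals.append(Q[v, s])
                filled[u, s] = (np.average(vals, weights=sims)
                                if vals else col_mean[s])
    return filled

Q = np.array([[0.9, np.nan, 0.7],
              [0.8, 0.6,    np.nan],
              [np.nan, 0.5, 0.6]])
print(predict_missing_qos(Q))
```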

51 citations


Journal ArticleDOI
TL;DR: A tool called Best Friend Forever is presented that automatically classifies the friends of a user in communities and assigns a value to the strength of the relationship ties to each one, and an experimental evaluation showed that BFF can significantly alleviate the burden of eliciting communities and relationship strength.
Abstract: The use of social networking services (SNSs) such as Facebook has grown explosively in the last few years. Users see these SNSs as useful tools to find friends and interact with them. Moreover, SNSs allow their users to share photos and videos and express their thoughts and feelings. However, users are usually concerned about their privacy when using SNSs, because the public image of a subject can be affected by photos or comments posted on a social network. Indeed, recent studies demonstrate that users are demanding better mechanisms to protect their privacy. An appropriate approach could be a privacy assistant software agent that automatically suggests a privacy policy for any item to be shared on an SNS. The first step in developing such an agent is to be able to elicit meaningful information that can lead to accurate privacy policy predictions. In particular, the information needed is user communities and the strength of users' relationships, which, as suggested by recent empirical evidence, are the most important factors that drive disclosure in SNSs. Given the number of friends that users can have and the number of communities they may be involved in, it is infeasible for users to provide this information without the whole elicitation process becoming confusing and time-consuming. In this work, we present a tool called Best Friend Forever (BFF) that automatically classifies the friends of a user into communities and assigns a value to the strength of the relationship tie with each one. We also present an experimental evaluation involving 38 subjects, which showed that BFF can significantly alleviate the burden of eliciting communities and relationship strength.
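The snippet below is an illustrative stand-in for the two elicitation steps BFF automates: friends are grouped into communities via modularity clustering over the friendship graph, and tie strength is approximated by a normalized interaction count. The graph, interaction counts, and choice of clustering algorithm are assumptions for demonstration, not BFF's own methods.

```python
# Sketch of the two elicitation steps BFF automates: group friends into
# communities and score tie strength. Communities come from modularity
# clustering; tie strength is a normalized interaction count (assumptions).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

friendships = [("ana", "bea"), ("bea", "carl"), ("ana", "carl"),
               ("dan", "eva"), ("eva", "fred"), ("dan", "fred"), ("carl", "dan")]
interactions = {("ana", "bea"): 42, ("ana", "carl"): 5, ("bea", "carl"): 17,
                ("dan", "eva"): 30, ("eva", "fred"): 3, ("dan", "fred"): 8,
                ("carl", "dan"): 1}

G = nx.Graph(friendships)
communities = list(greedy_modularity_communities(G))
print("communities:", [sorted(c) for c in communities])

max_count = max(interactions.values())
tie_strength = {pair: count / max_count for pair, count in interactions.items()}
print("tie strength:", tie_strength)
```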

Journal ArticleDOI
TL;DR: This study considers “quality of online discussion” an appropriate metric for assessing group-level outcomes of virtual social interactions, and thus for predicting member willingness to sustain an ongoing relationship with a virtual community (VC).
Abstract: People participate in virtual communities (VCs) for knowledge sharing or social interaction. However, most studies of VCs have focused on elucidating knowledge sharing rather than predicting virtual social interactions. This study considers "quality of online discussion" an appropriate metric for assessing group-level outcomes of virtual social interactions, and thus for predicting member willingness to sustain an ongoing relationship with a virtual community (VC). This study develops a research model, grounded in Web interactivity, social identity and social bond theories, for predicting the quality of online discussion in terms of cognitive and social influences. Empirical results from an online survey of a VC verify distinct direct and indirect social influences (perceived internalization bonds and perceived identification bonds) and cognitive influences (perceived communication and perceived control). Implications for academics and practitioners are also discussed.

Journal ArticleDOI
TL;DR: The final evaluation performed on the system has proved that the main goals initially stated were successfully achieved, and the paper describes how personalization issues influenced the development of the system.
Abstract: Mobile technologies are present in our daily life, and the functionality of these devices should not be restricted to supporting phone calls or organizing people's work. We live in a world of connectivity, and mobile devices allow users to access a great variety of resources using IR, Wi-Fi, RFID, Bluetooth, GPS, GPRS, and so on. In particular, PDAs and smartphones are being successfully applied to cultural heritage sites as an alternative to traditional audio guides. These electronic guides are offered to users in order to make the visit to the exhibition more pleasant and effective. Our research group has been working on the design and development of PDA-based mobile software for art museums for the past four years. During this period two systems were developed, and this paper summarizes the basic features of both applications. The first one is currently operating in a real museum. These applications were designed by applying a user-centered development process, and the main goal of these systems is to improve visitors' satisfaction by using the PDA while visiting the museum. As a result of usability evaluations performed with real users on the first application, a new system was defined, implemented, and evaluated. Moreover, the paper describes how personalization issues influenced the development of the system. Location awareness, internationalization, HCI patterns, language adaptation to visitors, and accessibility were the most important contributions to personalizing the application. The final evaluation performed on the system proved that the main goals initially stated were successfully achieved.

Journal ArticleDOI
TL;DR: This paper introduces a three-stage framework that automatically generates expertise profiles of online community members, and empirically compares two state-of-the-art information retrieval techniques, the vector space model and the language model, with a Latent Dirichlet Allocation (LDA) based model for computing document-topic relevance.
Abstract: Building expertise profiles in global online communities is a critical step in leveraging the range of expertise available in the global knowledge economy. In this paper we introduce a three-stage framework that automatically generates expertise profiles of online community members. In the first two stages, document-topic relevance and user-document association are estimated for calculating users' expertise levels on individual topics. We empirically compare two state-of-the-art information retrieval techniques, the vector space model and the language model, with a Latent Dirichlet Allocation (LDA) based model for computing document-topic relevance as well as the direct and indirect association models for computing user-document association. In the third stage we test whether a filtering strategy can improve the performance of expert profiling. Our experimental results using two real datasets provide useful insights on how to select the best models for profiling users' expertise in online communities that can work across a range of global communities.
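A minimal sketch of the LDA-based variant of the first two stages is shown below: document-topic relevance comes from LDA, user-document association from authorship, and a user's expertise vector is the sum of the topic distributions of the documents they authored. The toy corpus, topic count, and aggregation by summation are illustrative assumptions, not the paper's models.

```python
# Minimal sketch: LDA topics as document-topic relevance, authorship as the
# user-document association, and expertise = sum of authored topic vectors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

docs = ["java memory leak garbage collection heap",
        "sql query index join optimization",
        "jvm heap tuning garbage collector pause",
        "database index btree query planner"]
authors = ["alice", "bob", "alice", "bob"]          # user-document association

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)                    # document-topic relevance

users = sorted(set(authors))
expertise = {u: doc_topic[[i for i, a in enumerate(authors) if a == u]].sum(axis=0)
             for u in users}
for u, vec in expertise.items():
    print(u, np.round(vec, 2))
```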

Journal ArticleDOI
TL;DR: Conceptual graph formalism is used to represent CGPs, in which reasoning is based on graph-theory operations to support sound logical reasoning in a visual manner, allowing users to have maximal understanding and control over each step of the knowledge reasoning process in the exploitation of CGPs.
Abstract: The intrinsic complexity of the medical domain requires the building of some tools to assist the clinician and improve the patient's health care. Clinical practice guidelines and protocols (CGPs) are documents with the aim of guiding decisions and criteria in specific areas of healthcare and they have been represented using several languages, but these are difficult to understand without a formal background. This paper uses conceptual graph formalism to represent CGPs. The originality here is the use of a graph-based approach in which reasoning is based on graph-theory operations to support sound logical reasoning in a visual manner. It allows users to have a maximal understanding and control over each step of the knowledge reasoning process in the CGPs exploitation. The application example concentrates on a protocol for the management of adult patients with hyperosmolar hyperglycemic state in the Intensive Care Unit.

Journal ArticleDOI
TL;DR: An effective approach named IDTCP (Incast Decrease TCP) is proposed to mitigate the TCP incast problem by focusing on the relationships between the TCP throughput and the congestion control window size of TCP.
Abstract: Recently, TCP incast problem in data center networks has attracted a wide range of industrial and academic attention. Lots of attempts have been made to address this problem through experiments and simulations. This paper analyzes the TCP incast problem in data centers by focusing on the relationships between the TCP throughput and the congestion control window size of TCP. The root cause of the TCP incast problem is explored and the essence of the current methods to mitigate the TCP incast is well explained. The rationality of our analysis is verified by simulations. The analysis as well as the simulation results provides significant implications to the TCP incast problem. Based on these implications, an effective approach named IDTCP (Incast Decrease TCP) is proposed to mitigate the TCP incast problem. Analysis and simulation results verify that our approach effectively mitigates the TCP incast problem and noticeably improves the TCP throughput.
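The toy model below only illustrates the kind of relationship the paper analyzes, namely how aggregate throughput depends on the congestion-window size once many synchronized senders overflow a shallow switch buffer; the buffer size, RTT, and timeout values are assumptions and the collapse rule is deliberately simplistic, not the paper's analysis.

```python
# Toy illustration of throughput vs. congestion-window size under incast:
# once the senders' total in-flight data exceeds the shared switch buffer,
# retransmission timeouts dominate and goodput collapses. All constants are
# assumed values chosen only to show the effect.
BUFFER_PKTS, RTT, RTO, LINK_PKTS_PER_S = 64, 0.001, 0.2, 125000

def goodput(n_senders, cwnd_pkts):
    in_flight = n_senders * cwnd_pkts
    if in_flight <= BUFFER_PKTS:
        return min(in_flight / RTT, LINK_PKTS_PER_S)  # window- or link-limited
    return in_flight / (RTT + RTO)                    # overflow -> timeouts

for cwnd in (1, 2, 4, 8, 16):
    print(f"cwnd={cwnd:2d} pkts -> goodput ~ {goodput(40, cwnd):,.0f} pkts/s")
```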

Journal ArticleDOI
TL;DR: I-Competere is a tool developed to forecast competence gaps in key management personnel by predicting planning and scheduling competence levels and allows the forecast and anticipation of competence needs thus articulating personnel development tools and techniques.
Abstract: People in software development teams are crucial in order to gain and retain strategic advantage inside a highly competitive market. As a result, human factors have gained attention in the software industry. Software project managers are decisive in achieving project success. A competent project manager is capable of solving any problem that an organization may encounter, regardless of its complexity. This paper presents I-Competere, a tool developed to forecast competence gaps in key management personnel by predicting planning and scheduling competence levels. Based on applied intelligence techniques, I-Competere allows the forecast and anticipation of competence needs, thus articulating personnel development tools and techniques. The test results, obtained using several artificial neural networks, are more than promising and demonstrate the tool's prediction accuracy.
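For illustration, the sketch below trains a small neural-network regressor to predict a competence score from assessment features, in the spirit of I-Competere; the synthetic data and network architecture are assumptions, not the tool's actual design.

```python
# Illustrative sketch only: a small neural-network regressor predicting a
# planning/scheduling competence score from assessment features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                  # e.g., test scores, experience...
y = X @ np.array([0.5, 0.3, 0.1, 0.0, 0.4, 0.2]) + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```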

Journal ArticleDOI
TL;DR: This research attempts to apply the results of marketing and information management research concerning customer service convenience with e-retailers to construct an EC-SERVCON managerial grid for managers to use in formulating strategy for improving service convenience.
Abstract: Due to the burgeoning growth of electronic commerce (EC or e-commerce), online shopping has become a key competitive strategy for online retailers (e-retailers) to attract more customers, expand market boundaries, and create more benefits. Service convenience (SERVCON), a concept of benefit related to customer satisfaction and retention, has received increasing attention and is now treated as an important factor in shopping behavior. Unfortunately, the literature on convenience has explored only traditional retailers. Thus, this research attempts to apply the results of marketing and information management (IM) research concerning customer service convenience to e-retailers. Based on a survey of 304 online shoppers (e-shoppers) in Taiwan, a 14-item e-commerce service convenience (EC-SERVCON) instrument was constructed. We then constructed an EC-SERVCON managerial grid for managers to use in formulating strategies for improving service convenience. The instrument, findings, and implications of this study will be valuable to researchers and practitioners interested in designing, implementing, and managing e-commerce.

Journal ArticleDOI
TL;DR: This work presents a methodology to determine how user sales are affected as response time increases, and presents the evaluation of high response time on users for popular applications found in the Web.
Abstract: The widespread adoption of high-speed Internet access and its use for everyday tasks are causing profound changes in users' expectations in terms of Web site performance and reliability. At the same time, server management is undergoing a period of change with the emergence of the cloud computing paradigm, which enables scaling server infrastructures within minutes. To help set performance objectives for maximizing user satisfaction and sales, while minimizing the number of servers and their cost, we present a methodology to determine how user sales are affected as response time increases. We begin by characterizing more than six months of Web performance measurements, then study how the fraction of buyers in the workload rises at peak traffic times, and finally build a model of sales through a learning process using a five-year sales dataset. We conclude by presenting our evaluation of the effect of high response times on users for popular applications found on the Web.
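The sketch below shows the general kind of model such a methodology could produce: a logistic regression that estimates the probability of a sale as a function of response time, fitted here on synthetic session records rather than the real performance and sales data used in the paper.

```python
# Sketch: estimate how the probability of a session converting to a sale
# falls as response time grows, using logistic regression on synthetic
# (response_time, bought) records. Ground-truth coefficients are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
response_time = rng.uniform(0.2, 8.0, 5000)              # seconds
p_buy = 1 / (1 + np.exp(-(1.5 - 0.6 * response_time)))   # assumed ground truth
bought = rng.random(5000) < p_buy

clf = LogisticRegression().fit(response_time.reshape(-1, 1), bought)
for rt in (0.5, 2.0, 5.0):
    print(f"{rt:.1f}s -> P(sale) ~ {clf.predict_proba([[rt]])[0, 1]:.2f}")
```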

Journal ArticleDOI
TL;DR: A general framework that integrates the intelligent techniques as a component into the architecture of service oriented GDSS is put forward and how Artificial Intelligence techniques can resolve the conflicts of distributed group decisions is illustrated.
Abstract: In today's ever-changing, consumer-driven market economy, it is imperative for providers to respond expeditiously to the changes demanded by the customer. This phenomenon is no different in the transportation sector, in which a service-oriented Group Decision Support System (GDSS) plays an important role in enabling a transportation enterprise to effectively manage and rapidly respond to the varying needs of the customer. In this paper, we explore the problem of integrating service-oriented systems and intelligence technology through the use of a GDSS. Initially, we analyze a service-oriented architecture and then propose the design architecture of a service-oriented GDSS. Next, we put forward a general framework that integrates intelligent techniques as a component into the architecture of the service-oriented GDSS. In addition, we illustrate how Artificial Intelligence techniques can resolve the conflicts of distributed group decisions. The paper concludes by providing a number of applications in a railway management system that demonstrate the benefits of utilizing a service-oriented intelligent GDSS.

Journal ArticleDOI
TL;DR: The results show that even though several SMEs have adopted broadband, they are not making full use of the technology, while using broadband has not changed significantly the way they operate their businesses.
Abstract: This study examines the challenges involved in the effective diffusion of innovative technologies among Small and Medium Sized Enterprises (SMEs). With broadband Internet as the technology in focus, we report that broadband adoption by SMEs was initially particularly slow, while there has been little research on the reasons behind this phenomenon. This study provides an in-depth view of the broadband diffusion process among SMEs in the south east of the UK by examining the views and activities of the various groups involved in the process. Innovation diffusion and social construction of technology theory are applied in order to construct a framework that addresses some of the issues not covered in previous literature. Our results show that even though several SMEs have adopted broadband, they are not making full use of the technology, and using broadband has not significantly changed the way they operate their businesses. The point we are raising with this study is that SMEs' lack of understanding of the effective application of the technology is the main impediment to effective adoption. We believe that our results are useful to providers looking to diffuse broadband to SMEs as well as other IT innovations.

Journal ArticleDOI
TL;DR: The methodology and the associated tool have been validated in the development of a MAS for fault diagnosis in FTTH (Fiber To The Home) networks, and the results have been measured in a quantifiable way, showing a reduction in test implementation time.
Abstract: This paper presents a testing methodology for applying Behaviour Driven Development (BDD) techniques while developing Multi-Agent Systems (MASs), termed the BEhavioural Agent Simple Testing (BEAST) Methodology. This methodology is supported by an open-source framework (BEAST Tool) which automatically generates test-case skeletons from BDD scenario specifications. The developed framework allows the testing of MASs based on the JADE or JADEX platforms. In addition, the framework offers a set of configurable Mock Agents which make it possible to execute tests while the MAS is under development. The BEAST Methodology provides transparent traceability from user requirements to test cases, so stakeholders can be aware of the project status. The methodology and the associated tool have been validated in the development of a MAS for fault diagnosis in FTTH (Fiber To The Home) networks. The results have been measured in a quantifiable way, showing a reduction in test implementation time.
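As an illustration of the core transformation (not of BEAST Tool itself, which targets JADE/JADEX agents from Gherkin-style specifications), the snippet below turns a Given/When/Then scenario into an empty test-method skeleton.

```python
# Illustrative stand-in for generating a test-case skeleton from a BDD
# scenario: each Given/When/Then step becomes a commented TODO in a stub.
import re

scenario = """Scenario: Diagnose a fibre cut
  Given a monitoring agent subscribed to alarms
  When a loss-of-signal alarm is received
  Then the diagnosis agent reports a fibre cut"""

def skeleton_from_scenario(text):
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    name = re.sub(r"\W+", "_", lines[0].split(":", 1)[1].strip()).lower()
    body = [f"def test_{name}():"]
    for step in lines[1:]:
        body.append(f"    # {step}")
        body.append("    pass  # TODO: implement step")
    return "\n".join(body)

print(skeleton_from_scenario(scenario))
```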

Journal ArticleDOI
TL;DR: This study statistically confirms the important mediating roles of outcome expectations and perceived quality in the indirect effects of perceived customization and perceived sociability on customer satisfaction and verifies that customer satisfaction is a critical mediator of the indirect influence of the other four constructs in the proposed model on the consumers’ repurchase intention.
Abstract: Researchers have not specifically considered the determinants of satisfaction and repurchase intention with regard to virtual products. Using the expectancy disconfirmation model and symbolic consumption theory, this study presents and empirically examines a model of customer satisfaction and repurchase intention in this context. Using structural equation modeling to analyze the data collected from 477 consumers of virtual products, this study validates the influence of perceived customization and perceived sociability on customer satisfaction and repurchase intention with regard to virtual products. Additionally, this study statistically confirms the important mediating roles of outcome expectations and perceived quality in the indirect effects of perceived customization and perceived sociability on customer satisfaction. This study also verifies that customer satisfaction is a critical mediator of the indirect influence of the other four constructs in the proposed model on the consumers' repurchase intention with regard to virtual products.

Journal ArticleDOI
TL;DR: The key findings are that IS professionals are primarily interested in the job at hand and less so in the ethical concerns that the job might bring; ethics is a concern that is best left for others to deal with.
Abstract: This paper explores the question of how foresight and futures research can identify and address ethical issues in the field of Information Systems (IS). Starting from the premise that such IS are part of socio-technical systems, the interaction between technology and human actors raise ethical concerns. Early recognition of these concerns can address ethical issues and improve the use of the technology for a range of social and organisational goals. This paper discusses research conducted in two futures research projects. Both projects investigated emerging information and communication technologies (ICTs) and ethics. The first project established approaches for identifying future technologies and their related ethical concerns. This led to the identification of 11 emerging ICTs and their associated ethical concerns. The second project took these general ethical concerns and focused on their role in IS. Specifically, how IS professionals view future emerging technologies, their associated ethical concerns, and how they think these concerns could be addressed. The key findings are that IS professionals are primarily interested in the job at hand and less so in the ethical concerns that the job might bring; ethics is a concern that is best left for others to deal with. This paper considers the implications of research on ethics in emerging ICTs and draws general conclusions about the relevance of future technologies research in IS.

Journal ArticleDOI
TL;DR: This work derives the steady state equilibrium of the duopolistic differential game, shows how implicit competition induces overspending in IT defense, and demonstrates how such overinvestment can be combated by innovatively managing the otherwise misaligned incentives for coordination.
Abstract: Hackers evaluate potential targets to identify poorly defended firms to attack, creating competition in IT security between firms that possess similar information assets. We utilize a differential game framework to analyze the continuous time IT security investment decisions of firms in such a target group. We derive the steady state equilibrium of the duopolistic differential game, show how implicit competition induces overspending in IT defense, and then demonstrate how such overinvestment can be combated by innovatively managing the otherwise misaligned incentives for coordination. We show that in order to achieve cooperation, the firm with the higher asset value must take the lead and provide appropriate incentives to elicit participation of the other firm. Our analysis indicates that IT security planning should not remain an internal, firm-level decision, but also incorporate the actions of those firms that hackers consider as alternative targets.

Journal ArticleDOI
TL;DR: A service platform for on-demand virtual enterprises that supports flexible integration of networked resources, and facilitates virtual enterprise construction with business process utility, trusted service composition and data service centric business collaborations is proposed.
Abstract: While constructing virtual enterprises, it is crucial to flexibly integrate heterogeneous business resources and processes of different business partners and make them collaborate dynamically. By keeping the IT systems or components involved as autonomous, loosely coupled services, the "Everything as a Service" concept supports flexible integration of heterogeneous applications. We adopt this concept, analyze the challenges in virtual enterprise construction, and then propose a service platform for on-demand virtual enterprises. The platform supports flexible integration of networked resources and facilitates virtual enterprise construction with business process utility, trusted service composition, and data-service-centric business collaborations. At the end of the paper, together with a case study, experimental evaluations in the context of multiple concurrent users are presented, showing the effectiveness and performance of the platform.

Journal ArticleDOI
TL;DR: This paper proposes mobility prediction based on cellular traces as an infrastructure-level service of the telecom cloud and equips it with a hybrid predictor fusing a CBP-based scheme and a Markov-based predictor to provide the telecom cloud with large-scale mobility prediction capacity.
Abstract: Mobile applications and services relying on mobility prediction have recently spurred lots of interest. In this paper, we propose mobility prediction based on cellular traces as an infrastructure-level service of the telecom cloud. Mobility Prediction as a Service (MPaaS) embeds mobility mining and forecasting algorithms into a cloud-based user location tracking framework. By empowering MPaaS, the hosted third-party and value-added services can benefit from online mobility prediction. In particular, we take Mobility-aware Personalization and Predictive Resource Allocation as key features to elaborate how MPaaS drives a new fashion of mobile cloud applications. Due to the randomness of human mobility patterns, mobility prediction remains a very challenging task in MPaaS research. Our preliminary study observed collective behavioral patterns (CBP) in the mobility of crowds and proposed a CBP-based mobility predictor. The MPaaS system is equipped with a hybrid predictor fusing both the CBP-based scheme and a Markov-based predictor to provide the telecom cloud with large-scale mobility prediction capacity.
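The Markov component of such a hybrid predictor can be sketched as below: a first-order transition matrix is learned from a cell-ID trace and the most likely next cell is returned. The trace is a toy example, and the CBP-based part and the cloud framework are out of scope here.

```python
# Minimal sketch of a first-order Markov mobility predictor: count cell-to-cell
# transitions in a trace, then predict the most frequent successor.
from collections import Counter, defaultdict

def train_markov(trace):
    counts = defaultdict(Counter)
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, current_cell):
    nxt = counts.get(current_cell)
    return nxt.most_common(1)[0][0] if nxt else None

trace = ["c1", "c2", "c3", "c1", "c2", "c3", "c1", "c2", "c4"]
model = train_markov(trace)
print("after c2 ->", predict_next(model, "c2"))   # c3 (seen more often than c4)
```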

Journal ArticleDOI
TL;DR: This paper endogenizes the value of an information set which has to be produced and protected and allows the breach probability to be not only convex, but concave, which means that substantial security investment is needed to deter most perpetrators.
Abstract: This paper endogenizes the value of an information set which has to be produced and protected. The profit is inverse U shaped in security investment and production effort. The breach probability is commonly assumed to decrease convexly in security investment, which means that modest security investment is sufficient to deter most perpetrators. We allow the breach probability to be not only convex, but concave, which means that substantial security investment is needed to deter most perpetrators. Convexity versus concavity depends on the security environment, perpetrators, technology, and law enforcement. A firm strikes a balance between producing and protecting an information set dependent on seven model parameters for production, protection, convexity, concavity, vulnerability, and resource strength.
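The snippet below only contrasts the two shapes discussed above using assumed example functions (not the paper's model): a convex breach probability that drops quickly for modest investment, and a concave one that stays high until investment is substantial.

```python
# Assumed example functions contrasting convex vs. concave breach probability
# in security investment z; not the functional forms used in the paper.
import numpy as np

v = 0.8                                   # breach probability with no security
z = np.linspace(0, 10, 6)                 # security investment levels

convex = v / (1 + z)                      # steep early gains, diminishing later
concave = v * (1 - (z / z.max()) ** 2)    # little effect until spending is large

for zi, pc, pk in zip(z, convex, concave):
    print(f"z={zi:4.1f}  convex={pc:.2f}  concave={pk:.2f}")
```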

Journal ArticleDOI
TL;DR: This paper studies the challenges posed by Internet traffic classification using machine learning with multi-class unbalanced data and the ability of some adjusting methods, including resampling (random under-sampling, random over-sampling) and cost-sensitive learning, and empirically compares these methods to determine which produces a better overall classifier and under what circumstances.
Abstract: Most research on class imbalance to date has focused on the two-class problem. Multi-class imbalance is more complicated, and little knowledge and experience exist for it in Internet traffic classification. In this paper we study the challenges posed by Internet traffic classification using machine learning with multi-class unbalanced data and the ability of some adjusting methods, including resampling (random under-sampling, random over-sampling) and cost-sensitive learning, to address them. We then empirically compare the effectiveness of these methods for Internet traffic classification and determine which produces a better overall classifier and under what circumstances. The main contributions are as follows. (1) Cost-sensitive learning is derived with MetaCost, which incorporates misclassification costs into the learning algorithm to improve multi-class imbalance based on flow ratio. (2) A new resampling model is presented, including under-sampling and over-sampling, to make the multi-class training data more balanced. (3) A solution is presented to compare the three methods with each other and with the original case. Experimental results on sixteen datasets show that flow g-mean and byte g-mean are statistically increased by 8.6 % and 3.7 %; 4.4 % and 2.8 %; and 11.1 % and 8.2 % when the three methods are compared with the original case. Cost-sensitive learning is the first choice when the sample size is sufficient, while resampling is more practical otherwise.
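Two of the ingredients mentioned above, random over-sampling for multi-class data and the g-mean metric, are sketched below with synthetic labels; MetaCost-style cost-sensitive learning is not reproduced here, and the class sizes are illustrative assumptions.

```python
# Sketch: random over-sampling to balance a multi-class dataset, and the
# g-mean metric (geometric mean of per-class recalls) used for evaluation.
import numpy as np

def random_oversample(X, y, rng):
    """Duplicate minority-class instances until every class matches the largest."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = np.concatenate([rng.choice(np.where(y == c)[0], target, replace=True)
                          for c in classes])
    return X[idx], y[idx]

def g_mean(y_true, y_pred):
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.prod(recalls) ** (1 / len(recalls)))

rng = np.random.default_rng(0)
y_true = np.array([0] * 90 + [1] * 8 + [2] * 2)   # imbalanced traffic classes
X = rng.normal(size=(100, 4))

y_pred = y_true.copy(); y_pred[95:] = 0           # minority classes often missed
print("g-mean:", round(g_mean(y_true, y_pred), 3))

Xb, yb = random_oversample(X, y_true, rng)
print("class counts after oversampling:", np.bincount(yb))
```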

Journal ArticleDOI
TL;DR: A scalable image retrieval framework which can efficiently support content similarity search and semantic search in the distributed environment is proposed, and it is shown that the approach yields a high recall rate with good load balance while requiring only a small number of hops.
Abstract: The emergence of cloud datacenters enhances the capability of online data storage. Since massive data is stored in datacenters, it is necessary to effectively locate and access data of interest in such a distributed system. However, traditional search techniques only allow users to search images over exact-match keywords through a centralized index. These techniques cannot satisfy the requirements of content-based image retrieval (CBIR). In this paper, we propose a scalable image retrieval framework which can efficiently support content similarity search and semantic search in the distributed environment. Its key idea is to integrate image feature vectors into distributed hash tables (DHTs) by exploiting the property of locality sensitive hashing (LSH). Thus, images with similar content are most likely gathered onto the same node without the knowledge of any global information. For searching semantically close images, relevance feedback is adopted in our system to overcome the gap between low-level and high-level features. We show that our approach yields a high recall rate with good load balance and requires only a small number of hops.
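The snippet below sketches the LSH idea the framework builds on: random-projection hashing assigns similar feature vectors to the same bucket key, which a DHT could then use to place similar images on the same node. The vector dimensionality and number of hyperplanes are illustrative assumptions.

```python
# Sketch of random-projection LSH: the sign pattern of projections onto random
# hyperplanes forms a bucket key, so near-identical feature vectors tend to
# collide. Dimension and hyperplane count are assumed values.
import numpy as np

rng = np.random.default_rng(42)
DIM, N_PLANES = 64, 8
planes = rng.normal(size=(N_PLANES, DIM))        # random hyperplanes

def lsh_key(feature_vec):
    """Sign pattern of projections -> an 8-bit bucket key."""
    bits = (planes @ feature_vec) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

img_a = rng.normal(size=DIM)
img_b = img_a + rng.normal(scale=0.05, size=DIM)  # near-duplicate content
img_c = rng.normal(size=DIM)                      # unrelated image
print(lsh_key(img_a), lsh_key(img_b), lsh_key(img_c))  # a and b likely collide
```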