
Showing papers in "Information Systems and E-business Management in 2015"


Journal ArticleDOI
TL;DR: A method to measure the precision of process models, given their event logs, is proposed: the logs are first aligned to the models, so the measurement is not sensitive to non-fitting executions and more accurate values can be obtained for non-fitting logs.
Abstract: Conformance checking techniques compare observed behavior (i.e., event logs) with modeled behavior for a variety of reasons. For example, discrepancies between a normative process model and recorded behavior may point to fraud or inefficiencies. The resulting diagnostics can be used for auditing and compliance management. Conformance checking can also be used to judge a process model automatically discovered from an event log. Models discovered using different process discovery techniques need to be compared objectively. These examples illustrate just a few of the many use cases for aligning observed and modeled behavior. Thus far, most conformance checking techniques have focused on replay fitness, i.e., the ability to reproduce the event log. However, it is easy to construct models that allow for lots of behavior (including the observed behavior) without being precise. In this paper, we propose a method to measure the precision of process models, given their event logs, by first aligning the logs to the models. This way, the measurement is not sensitive to non-fitting executions, and more accurate values can be obtained for non-fitting logs. Furthermore, we introduce several variants of the technique to deal better with incomplete logs and to reduce possible bias due to behavioral properties of process models. The approach has been implemented in the ProM 6 framework and tested against both artificial and real-life cases. Experiments show that the approach is robust to noise and able to handle logs and models of real-life complexity.

132 citations
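A rough, self-contained illustration of the precision idea (not the paper's alignment-based algorithm, which works on Petri nets in ProM): treat the model as an explicit set of allowed traces and compare, at each log prefix, the activities the model enables with those actually observed. The trace sets and helper names below are invented.

```python
def enabled_after(traces, prefix):
    """Activities that can follow `prefix` in the given trace set."""
    n = len(prefix)
    return {t[n] for t in traces if len(t) > n and t[:n] == prefix}

def precision(model_traces, log_traces):
    """ETC-style precision: at every visited state, how much of the
    behavior the model allows was actually observed in the log."""
    observed, allowed = 0, 0
    prefixes = {t[:i] for t in log_traces for i in range(len(t))}
    for p in prefixes:
        log_next = enabled_after(log_traces, p)
        model_next = enabled_after(model_traces, p)
        observed += len(log_next & model_next)
        allowed += len(model_next)
    return observed / allowed if allowed else 1.0

# A model that allows more behavior than the log exhibits -> precision < 1
model = [("a", "b", "c"), ("a", "c", "b"), ("a", "d", "c")]
log = [("a", "b", "c"), ("a", "b", "c")]
print(precision(model, log))  # 0.6: only part of the allowed behavior is observed
```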


Journal ArticleDOI
TL;DR: This study uncovers the effect of the length, recency, frequency, monetary, and profit (LRFMP) customer value model in a logistics company to predict customer churn and expands the original LRFMP and RFM models with additional insights.
Abstract: This study uncovers the effect of the length, recency, frequency, monetary, and profit (LRFMP) customer value model in a logistics company to predict customer churn. This unique context has useful business implications compared to mainstream customer churn studies, where individual customers (rather than business customers) are the main focus. Our results show the five LRFMP variables had varying effects on customer churn. Specifically, the length, recency, and monetary variables had a significant effect on churn, while the frequency variable only became a top predictor when the variability of the first three variables was limited. The profit variable never became a significant predictor. Certain other behavioral variables (such as time between transactions) also had an effect on churn. The resulting set of churn predictors expands the original LRFMP and RFM models with additional insights. Managerial implications are provided.

55 citations
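A hedged sketch of how the LRFMP variables might be derived from raw transactions and fed to a classifier; the column names, the toy churn label, and the learner are illustrative assumptions, not the paper's setup.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical transaction log: one row per shipment per business customer
tx = pd.DataFrame({
    "customer": ["c1", "c1", "c1", "c2", "c2", "c3"],
    "date": pd.to_datetime(["2014-01-05", "2014-03-10", "2014-06-01",
                            "2014-01-20", "2014-02-02", "2014-05-15"]),
    "revenue": [120.0, 80.0, 95.0, 300.0, 250.0, 40.0],
    "profit": [30.0, 15.0, 20.0, 90.0, 70.0, 5.0],
})
now = tx["date"].max()

lrfmp = tx.groupby("customer").agg(
    length=("date", lambda d: (d.max() - d.min()).days),   # L: tenure
    recency=("date", lambda d: (now - d.max()).days),      # R: days since last order
    frequency=("date", "count"),                           # F: number of orders
    monetary=("revenue", "sum"),                           # M: total revenue
    profit=("profit", "sum"),                              # P: total profit
)

# Toy churn label for illustration only (the study uses observed attrition)
y = (lrfmp["recency"] > 90).astype(int)
clf = DecisionTreeClassifier(max_depth=3).fit(lrfmp, y)
print(dict(zip(lrfmp.columns, clf.feature_importances_)))
```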


Journal ArticleDOI
TL;DR: The results of this study indicate that the use of Internet banking services in Vietnam may be motivated by a set of specific factors, which are expected to help banks understand the critical factors influencing Internet banking usage and to contribute to the creation of competitive promotional campaigns in Vietnam.
Abstract: Internet banking is growing faster than other e-commerce sectors and has emerged as an evolution in applied banking technology. This study investigates the factors influencing customer intention regarding Internet banking services in Vietnam using elements of an extended technology acceptance model and the theory of planned behavior. We use structural equation modeling to evaluate the strength of the hypothesized relationships. The results of this study indicate that the use of Internet banking services in Vietnam may be motivated by a set of specific factors (i.e., perceived usefulness, perceived ease of use, perceived credibility, perceived behavioral control, subjective norms, and attitude toward use). These results are expected to help banks understand the critical factors influencing Internet banking usage and to contribute to the creation of competitive promotional campaigns in Vietnam.

48 citations
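For readers unfamiliar with the method, here is a minimal sketch of estimating such a hypothesized path model with the open-source semopy package; the construct names, synthetic data, and paths are placeholders, and the paper does not state which SEM tool it used.

```python
import numpy as np
import pandas as pd
import semopy

# Placeholder survey data: one column per averaged construct score
rng = np.random.default_rng(0)
n = 300
pu, peou = rng.normal(size=n), rng.normal(size=n)
att = 0.5 * pu + 0.3 * peou + rng.normal(scale=0.5, size=n)
intent = 0.6 * att + 0.2 * pu + rng.normal(scale=0.5, size=n)
data = pd.DataFrame({"PU": pu, "PEOU": peou, "ATT": att, "INT": intent})

# Structural part of an extended-TAM model (regressions only, no latent part)
desc = """
ATT ~ PU + PEOU
INT ~ ATT + PU
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```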


Journal ArticleDOI
TL;DR: The MedicalDo (MEDo) approach is introduced, which enables users to create, monitor and share medical tasks based on a mobile and user-friendly platform and puts task acquisition on a level comparable to that of pen and paper.
Abstract: In a hospital, ward rounds are crucial for task coordination and decision-making. In the course of knowledge-intensive patient treatment processes, it should be possible to quickly define tasks and to assign them to clinicians in a flexible manner. In current practice, however, task management is not properly supported. During a ward round, emerging tasks are jotted down using pen and paper, and their processing is prone to errors. In particular, staff members must manually keep track of the status of their tasks. To relieve them from such manual task management, we introduce the MedicalDo (MEDo) approach. It transforms the pen-and-paper worksheet into a digital user interface on a mobile device. Thereby, MEDo integrates process support, task management, and access to the patient record. Interviews with medical staff members have revealed that they crave mobile process and task support. This has been further confirmed in a case study we conducted in four different wards. Finally, in user experiments, we have demonstrated that MEDo puts task acquisition on a level comparable to that of pen and paper. Overall, MEDo enables users to create, monitor, and share medical tasks on a mobile, user-friendly platform.

45 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a way of visualizing the different steps a modeler undertakes to construct a process model, in a so-called process of process modeling chart, which facilitates the research and development of theory, training and tool support for improving model quality.
Abstract: The construction of business process models has become an important requisite in the analysis and optimization of processes. The success of the analysis and optimization efforts heavily depends on the quality of the models. Therefore, a research domain has emerged that studies the process of process modeling. This paper contributes to this research by presenting a way of visualizing the different steps a modeler undertakes to construct a process model, in a so-called process of process modeling chart. The graphical representation lowers the cognitive effort needed to discover properties of the modeling process, which facilitates research and the development of theory, training, and tool support for improving model quality. The paper contains an extensive overview of applications of the tool that demonstrate its usefulness for research and practice, and discusses the observations from the visualization in relation to other work. The visualization was evaluated through a qualitative study that confirmed its usefulness and added value compared to the Dotted Chart by which it was inspired.

43 citations
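The chart's basic idea (time on one axis, model elements on the other, one dot per modeling operation) can be mimicked in a few lines of matplotlib; the event list below is invented, and the actual tooling described in the paper is far richer.

```python
import matplotlib.pyplot as plt

# Hypothetical modeling events: (seconds since start, element, operation)
events = [(3, "task A", "create"), (8, "task B", "create"),
          (12, "edge A->B", "create"), (20, "task B", "move"),
          (26, "task B", "delete"), (30, "task C", "create")]

elements = sorted({e for _, e, _ in events})
colors = {"create": "green", "move": "orange", "delete": "red"}

for t, elem, op in events:
    plt.scatter(t, elements.index(elem), c=colors[op])

plt.yticks(range(len(elements)), elements)
plt.xlabel("time (s)")
plt.title("Process-of-process-modeling chart (toy)")
plt.show()
```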


Journal ArticleDOI
TL;DR: A text mining approach to automatically classify services into specific domains and identify key concepts inside service textual documentation is proposed and validated on a dataset of 600 web services categorized into 8 fields, yielding accuracy of up to 90 %.
Abstract: Web services have evolved as a versatile and cost-effective solution for exchanging dissimilar data between distributed applications. They have become a fundamental part of service-oriented architecture. However, one of the major challenges in service-oriented architecture is to figure out what a service does and how to use its capabilities without direct negotiation with the service provider. Discovering and exploring web services registered with a Universal Description, Discovery and Integration registry or described by Web Services-Inspection documents requires exact search criteria such as service category, service name, and service URL. A Web Service Description Language (WSDL) document allows web service clients to learn the operations, communication protocols, and correct message formats of a service. Manually analyzing WSDL documents is the best approach but also the most expensive. This paper proposes a text mining approach to (1) automatically classify services into specific domains and (2) identify key concepts inside service textual documentation. The approach is validated on a dataset of 600 web services categorized into 8 fields, yielding accuracy of up to 90 %. Our classification approach can be used to focus user queries on a refined set of web service categories.

39 citations
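A hedged sketch of the kind of text-mining pipeline described, applied to text extracted from WSDL documents; the tiny corpus, the category labels, and the choice of TF-IDF plus a linear SVM are illustrative assumptions, not the paper's exact features or learner.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny stand-in for text extracted from WSDL documents
docs = ["get stock quote symbol price market",
        "convert currency exchange rate usd eur",
        "send sms message phone text gateway",
        "weather forecast temperature city humidity"]
labels = ["finance", "finance", "communication", "weather"]

clf = make_pipeline(TfidfVectorizer(sublinear_tf=True), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["current temperature and humidity for a city"]))  # ['weather']
```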


Journal ArticleDOI
TL;DR: Contrary to previous research, group efficacy has proved to be the strongest predictor, indicating that the capabilities of those involved in the ITIL implementation are more important for realising the potential benefits than is senior management involvement.
Abstract: Senior management involvement, organisational commitment and group efficacy are expected to have a positive impact on Information Technology Infrastructure Library (ITIL) implementation benefits. Specifically, more involvement, commitment and efficacy should produce greater achievement. Analysing data from a survey of 446 Nordic ITIL experts, this paper examines the relationships between these predictor factors and benefits, and investigates which is most critical. This study verifies the importance of all factors, but contrary to previous research, which has especially emphasised the role of senior management, in this research, group efficacy has proved to be the strongest predictor, indicating that the capabilities of those involved in the ITIL implementation are more important for realising the potential benefits than is senior management involvement. This work contributes to theorising in an important area of practice by testing and validating measurements and instruments for an empirical-based model of ITIL implementation.

37 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed online word-of-mouth-based sales forecasting method is especially suitable for products with abundant online reviews and outperforms traditional time series forecasting models for most consumer products examined.
Abstract: Sales forecasting is one of the most critical steps of the business process. Since the forecasting accuracy of traditional techniques is generally unacceptable for products with irregular or non-seasonal sales trends, it is necessary to construct a new forecasting method. Past research shows that there is a strong relationship between online word-of-mouth and product sales, but that the extent of the impact of word-of-mouth varies with product category. This study aims to provide an understanding of how electronic word-of-mouth affects product sales by analyzing online review properties, reviewer characteristics, and review influences. This new electronic word-of-mouth perspective contributes to sales forecasting research in two ways. First, a novel classification model involving polarity mining, intensity mining, and influence analysis is proposed, with a framework to elucidate the differences between review categories. Second, the influence of online reviews (i.e., electronic word-of-mouth) is estimated and then used to construct a sales forecasting model. The proposed online word-of-mouth-based sales forecasting method is evaluated using real data from a well-known cosmetic retail chain in Taiwan. The experimental results demonstrate that the proposed method is especially suitable for products with abundant online reviews and outperforms traditional time series forecasting models for most consumer products examined.

34 citations
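One plausible way to operationalize reviews as forecasting inputs, sketched on invented weekly data: regress next-period sales on lagged sales plus aggregate review polarity and volume, and compare against a lag-only baseline. This is not the paper's classification-plus-influence model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 60
polarity = rng.uniform(-1, 1, weeks)           # mean review sentiment per week
volume = rng.poisson(20, weeks).astype(float)  # number of reviews per week
sales = 100 + 15 * polarity + 0.8 * volume + rng.normal(0, 3, weeks)

# Predict sales[t] from sales[t-1] alone vs. sales[t-1] plus last week's eWOM
X_base = sales[:-1].reshape(-1, 1)
X_ewom = np.column_stack([sales[:-1], polarity[:-1], volume[:-1]])
y = sales[1:]

for name, X in [("lag only", X_base), ("lag + eWOM", X_ewom)]:
    r2 = LinearRegression().fit(X, y).score(X, y)
    print(f"{name}: in-sample R^2 = {r2:.2f}")
```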


Journal ArticleDOI
TL;DR: The results suggest that the type of recommender system significantly moderates many of the relationships between the determinants of behavioral intention and the intention to use recommender systems.
Abstract: This study investigates how consumers assess the quality of two types of recommender systems (collaborative filtering and content-based) in the context of e-commerce, using a modified version of the unified theory of acceptance and use of technology (UTAUT) model. Specifically, the concept of trust in the technological artifact is adapted to examine the intention to use recommender systems. Additionally, this study also considers hedonic and utilitarian product characteristics with the goal of presenting a comprehensive picture of the recommender systems literature. This study utilized a 2 × 2 crossover within-subjects experimental design involving a total of 80 participants, who all evaluated each recommender system. The results suggest that the type of recommender system significantly moderates many of the relationships between the determinants of behavioral intention and the intention to use recommender systems. Surprisingly, the type of product does not moderate any relationship with behavioral intention. This study holds importance in explaining the factors contributing to the use of recommender systems and in understanding the relative influence of the two types of recommender systems on customers' behavioral intention to use them. The findings also shed light for designers on how to provide more effective recommender systems.

30 citations


Journal ArticleDOI
TL;DR: A systematic literature review studying success factors and their impact on IORs as well as an analysis of the results found is presented, based on 177 publications published between 2000 and 2012.
Abstract: Inter-organizational systems form the basis for successful business collaboration in the Internet and B2B e-commerce era. To properly design and manage such systems, one needs to understand the structure and dynamics of the relationships between organizations. The evaluation of such inter-organizational relationships (IORs) is normally conducted using "success factors". These are often referred to as constructs, such as trust and information sharing. In strategic management and performance analysis, different methods are employed for evaluating business performance and strategies, such as the Balanced Scorecard (BSC) method. The BSC utilizes success factors for measuring and monitoring IORs against business strategies. For these reasons, a thorough understanding of success factors, the relationships between them, as well as their relationship to business strategies is required. In other words, understanding success factors allows strategists to derive measurements for success factors as well as to align these success factors with business strategies. This underpins the close relationship that exists today between business strategy, IORs, and their realization by means of inter-organizational systems. In this paper, we present (1) a systematic literature review studying success factors and their impact on IORs as well as (2) an analysis of the results found. The review is based on 177 publications, published between 2000 and 2012, dealing with factors influencing IORs. The work presented provides an overview of success factors, the influencing relationships between success factors, as well as their influence on the success of IORs. The work is, in a sense, "meta-empirical", as it only looks at published studies rather than at cases of its own. Consequently, it is based on the assumption that studies in the scientific literature represent the real world. The constructs and relationships found in the review are grouped based on their scope and summarized in a cause-and-effect model. The grouping of constructs results in five groups: Relationship Orientation, Relational Norm, Relational Capital, Atmosphere, and Others. Since the cause-and-effect model represents a directed graph, different network analysis methods may be applied for analyzing the model. In particular, an in- and out-degree analysis is applied to the cause-and-effect model for detecting the most influencing as well as the most influenced success factors.

29 citations
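The concluding in-/out-degree analysis is easy to reproduce on any directed cause-and-effect graph; the edges below are invented stand-ins for "factor A influences factor B" relations from such a review.

```python
import networkx as nx

g = nx.DiGraph([
    ("Trust", "Commitment"), ("Trust", "Information sharing"),
    ("Communication", "Trust"), ("Information sharing", "IOR success"),
    ("Commitment", "IOR success"), ("Communication", "Commitment"),
])

# Most influencing factors = highest out-degree; most influenced = highest in-degree
print(sorted(g.out_degree, key=lambda kv: -kv[1]))
print(sorted(g.in_degree, key=lambda kv: -kv[1]))
```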


Journal ArticleDOI
TL;DR: A valuation calculus is proposed that brings value-based business process management to the operational process level by showing how the risk-adjusted expected net present value of a process can be determined and helps improve the calculation capabilities of an existing process-modeling tool.
Abstract: For years, improving processes has been a prominent business priority for Chief Information Officers. As expressed by the popular saying, "If you can't measure it, you can't manage it," process measures are an important instrument for managing processes and corresponding change projects. Companies have been using a value-based management approach since the 1990s in a constant endeavor to increase their value. Value-based business process management introduces value-based management principles to business process management and uses a risk-adjusted expected net present value as the process measure. However, existing analyses of this issue operate at a high (i.e., corporate) level, hampering the use of value-based business process management at an operational process level in both research and practice. Therefore, this paper proposes a valuation calculus that brings value-based business process management to the operational process level by showing how the risk-adjusted expected net present value of a process can be determined. We demonstrate that the valuation calculus provides insights into the theoretical foundations of processes and helps improve the calculation capabilities of an existing process-modeling tool.
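The core quantity, a risk-adjusted expected NPV of a process, can be illustrated with a small Monte Carlo sketch; the cash-flow distribution, discount rate, and investment figure are invented, and the paper derives its calculus analytically rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
periods, runs = 4, 100_000
rate = 0.10  # risk-adjusted discount rate (assumption)

# Uncertain periodic cash flows of the process (mean 100, sd 30 per period)
cash = rng.normal(100, 30, size=(runs, periods))
discount = (1 + rate) ** -np.arange(1, periods + 1)

npv = cash @ discount - 250  # minus an upfront process investment
print(f"expected NPV: {npv.mean():.1f}, sd: {npv.std():.1f}")
```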

Journal ArticleDOI
TL;DR: This study developed a research model based on the tricomponent attitude model and explored factors influencing e-shoppers’ real purchase behaviors and found that both perceived risk and perceived value indirectly influence affective trust via cognitive trust.
Abstract: This study developed a research model based on the tricomponent attitude model and explored factors influencing e-shoppers' real purchase behaviors. This study showed that these determinants can be divided into positive and negative approaches. The research sample consisted of 385 valid respondents who are experienced users of Books.com.tw. This study adopted structural equation modeling to test the proposed model and the alternative models. The proposed model showed a good fit. In the proposed model, we found that both perceived risk and perceived value indirectly influence affective trust via cognitive trust. Both cognitive trust and affective trust enhance two commitment outcomes (calculative commitment and affective commitment). In addition, affective trust mediates the relationship between cognitive trust and affective commitment, and affective commitment mediates the relationship between affective trust and behavioral intention. Furthermore, satisfaction is the mediator between trust and commitment. Regarding moderating effects, we find that satisfaction also moderates both the effect of cognitive trust on calculative commitment and the effect of affective trust on affective commitment. This study also provides conclusions and practical implications for marketers.

Journal ArticleDOI
TL;DR: The findings show that the role of government social power should not be ignored, as it produced a substantial improvement in the variance explained in intention to use (from 57.1 to 70.8 %), and offers managerial suggestions for the adoption of agricultural information systems.
Abstract: Though many governments have enthusiastically supported agricultural information systems, little is known about the role of governments in farmers' acceptance of such systems. The present study examines the influence of government social power on farmers' intention to use government-sponsored agricultural information systems. A research model reflecting the relationships among technology acceptance, government social power, and adoption intention was developed and tested using data collected from 1,504 subjects in the Jiangxi province of China. Our findings show that the role of government social power should not be ignored, as it produced a substantial improvement in the variance explained in intention to use (from 57.1 to 70.8 %). This work also analyzed the influence of gender on acceptance intention. Based on the empirical findings, we offer managerial suggestions for the adoption of agricultural information systems.
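The reported jump in variance explained (from 57.1 to 70.8 %) is the standard hierarchical-regression comparison; on synthetic data (all variables and coefficients invented) it looks like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 1504
tam = rng.normal(size=(n, 3))    # TAM predictors (e.g., PU, PEOU, attitude)
power = rng.normal(size=(n, 1))  # government social power
intent = tam @ np.array([0.5, 0.3, 0.2]) + 0.6 * power[:, 0] \
         + rng.normal(0, 0.8, n)

# Step 1: TAM predictors only; Step 2: add government social power
r2_base = LinearRegression().fit(tam, intent).score(tam, intent)
X_full = np.hstack([tam, power])
r2_full = LinearRegression().fit(X_full, intent).score(X_full, intent)
print(f"R^2 without power: {r2_base:.3f}, with power: {r2_full:.3f}")
```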

Journal ArticleDOI
TL;DR: A system framework is proposed in which all association rules are clustered using a new similarity measure and for the first time, the real satisfaction levels are embedded into the association rules to enrich them in an innovative way.
Abstract: To survive in today's market, decision makers, including investors and their managerial teams, should continuously attempt to realize customers' unspoken needs and requirements by discovering their behavioral patterns. Discovering customers' patterns puts these decision makers in a better position from which higher-quality services can be designed and provided. Association rule mining is a well-known approach to discovering these patterns. Although the extracted rules can express customers' behaviors in an easy-to-understand way, the number of rules in real applications can be problematic. Moreover, customers' comments are not usually considered when constructing/evaluating the rules. To tackle these issues, a system framework is proposed in this paper in which all association rules are clustered using a new similarity measure. For each cluster, a new type of graph, referred to in this paper as a sub-graph, is developed. Each sub-graph carries unique messages that can partially contribute to designing new services. Furthermore, for the first time, real satisfaction levels are embedded into the association rules to enrich them in an innovative way. The main interesting point is that the satisfaction levels are only assessed for the overall system, not for current services. We also illustrate how our proposed methodology works through artificial and real datasets, and demonstrate the superiority of our proposed clustering algorithm compared to other popular methods.
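A minimal sketch of clustering association rules by a set-based similarity; the rules and the Jaccard measure are stand-ins, since the paper defines its own similarity measure.

```python
from itertools import combinations
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical rules, each represented by the set of items it involves
rules = [frozenset(r) for r in
         [{"bread", "butter"}, {"bread", "milk"}, {"butter", "milk"},
          {"phone", "case"}, {"phone", "charger"}]]

def jaccard_dist(a, b):
    return 1 - len(a & b) / len(a | b)

# Condensed distance matrix in the order combinations() yields pairs
dists = [jaccard_dist(a, b) for a, b in combinations(rules, 2)]
clusters = fcluster(linkage(dists, method="average"), t=2, criterion="maxclust")
print(clusters)  # grocery rules and phone-accessory rules end up apart
```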

Journal ArticleDOI
TL;DR: This article describes how KPIs are modeled and transferred into event rules by a model-driven approach and shows it tackles challenges from business monitoring as well as from compliance monitoring.
Abstract: Today, event-driven business process management has matured from a scientific vision into a realizable methodology for companies of all sizes and shapes. This vision can be applied to business monitoring as well as to compliance monitoring. However, leveraging the power of complex event processing for business process monitoring is cumbersome, because rules, alerts, and key performance indicators (KPIs) must be modeled in machine-readable form using event languages. A model-driven approach for generating an event-based monitoring infrastructure, such as the aPro architecture, is one way to enable companies with diverse infrastructures to leverage the advantages of business process monitoring. This article describes how KPIs are modeled and transferred into event rules by a model-driven approach. Two use cases form the basis for defining requirements and evaluating the approach, showing that it tackles challenges from business monitoring as well as from compliance monitoring.
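The flavor of "a KPI becomes an event rule" can be shown with a toy stream monitor; the KPI (cycle time under two hours), the event format, and the alert are all invented and unrelated to aPro's actual rule language.

```python
# Declarative KPI: maximum allowed cycle time per process instance
KPI_MAX_SECONDS = 2 * 3600

starts = {}

def on_event(event):
    """Generated-rule equivalent: correlate start/end events per case
    and alert as soon as the KPI is violated."""
    case, kind, ts = event["case"], event["type"], event["ts"]
    if kind == "start":
        starts[case] = ts
    elif kind == "end" and case in starts:
        if ts - starts.pop(case) > KPI_MAX_SECONDS:
            print(f"ALERT: case {case} violated the cycle-time KPI")

for e in [{"case": "c1", "type": "start", "ts": 0},
          {"case": "c1", "type": "end", "ts": 9000}]:  # 2.5 h -> alert
    on_event(e)
```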

Journal ArticleDOI
TL;DR: This research attempted to identify leaders in a popular online forum for cancer survivors and caregivers using classification techniques and developed a hybrid approach based on an ensemble classifier that performs better than many traditional metrics.
Abstract: Online health communities (OHCs) are an important source of social support for cancer survivors and their informal caregivers. This research attempted to identify leaders in a popular online forum for cancer survivors and caregivers using classification techniques. We first extracted user features from many different perspectives, including contributions, network centralities, and linguistic features. Based on these features, we leveraged the structure of the social network among users and generated new neighborhood-based and cluster-based features. Classification results revealed that these features are discriminative for leader identification. Using these features, we developed a hybrid approach based on an ensemble classifier that performs better than many traditional metrics. This research has implications for understanding and managing OHCs.
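A hedged sketch of the ensemble step: given per-user feature vectors (contributions, centralities, linguistic features), combine heterogeneous classifiers with soft voting. The features and labels below are synthetic, and the paper's exact base learners are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
n = 400
# Columns: posts, replies received, degree centrality, avg post length
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(0, 1, n) > 1).astype(int)  # toy "leader"

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("rf", RandomForestClassifier(n_estimators=50)),
                ("nb", GaussianNB())],
    voting="soft",
)
print(cross_val_score(ensemble, X, y, cv=5, scoring="f1").mean())
```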

Journal ArticleDOI
TL;DR: A domain-adapted sentiment-classification (DA-SC) technique is proposed for inducing a domain-independent base classifier and using a co-training mechanism to adapt the base classifier to a specific application domain of interest.
Abstract: With the success and proliferation of Web 2.0 applications, consumers can use the Internet for shopping, comparing products, and publishing product reviews on various social media sites. Such consumer reviews are valuable assets in applications supporting marketing intelligence. However, the rapidly increasing number of consumer reviews makes it difficult for businesses or consumers to obtain a comprehensive view of consumer opinions pertaining to a product of interest when manual analysis techniques are used. Thus, developing data analysis tools that can automatically analyze consumer reviews to summarize consumer sentiments is both desirable and essential. Accordingly, this study focused on the sentiment classification of consumer reviews. To address the domain-dependency problem typically encountered in sentiment classification and other sentiment analysis applications, we propose a domain-adapted sentiment-classification (DA-SC) technique for inducing a domain-independent base classifier and using a co-training mechanism to adapt the base classifier to a specific application domain of interest. Our empirical evaluation results show that the performance of the proposed DA-SC technique is superior or comparable to similar techniques for classifying consumer reviews into appropriate sentiment categories.
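A compact co-training sketch in the spirit of DA-SC, though not the authors' algorithm: two classifiers trained on different feature views of labeled source-domain reviews repeatedly teach each other their most confident predictions on unlabeled target-domain data. The data, views, and parameters are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def make_reviews(n, shift):
    X = rng.normal(shift, 1, size=(n, 10))
    y = (X[:, :5].sum(1) > X[:, 5:].sum(1)).astype(int)
    return X, y

X_lab, y_lab = make_reviews(50, 0.0)    # labeled source-domain reviews
X_unl, y_unl = make_reviews(500, 0.7)   # target domain (y_unl used only to evaluate)

views = [slice(0, 5), slice(5, 10)]     # two "views" of the feature space
clfs = [LogisticRegression(), LogisticRegression()]
Xs = [X_lab.copy(), X_lab.copy()]
ys = [y_lab.copy(), y_lab.copy()]
pool = X_unl.copy()

for _ in range(5):                      # co-training rounds
    for i, j in [(0, 1), (1, 0)]:
        clfs[i].fit(Xs[i][:, views[i]], ys[i])
        proba = clfs[i].predict_proba(pool[:, views[i]])
        top = np.argsort(-proba.max(1))[:10]   # most confident pool examples
        Xs[j] = np.vstack([Xs[j], pool[top]])  # classifier i teaches classifier j
        ys[j] = np.concatenate([ys[j], proba.argmax(1)[top]])
        pool = np.delete(pool, top, axis=0)

acc = (clfs[0].predict(X_unl[:, views[0]]) == y_unl).mean()
print(f"target-domain accuracy after co-training: {acc:.2f}")
```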

Journal ArticleDOI
TL;DR: Experimental results show that the combination of different mining models gives good predictive accuracy and is a feasible way to diagnose diseases.
Abstract: An approach for disease prediction that combines clustering, Markov models, and association analysis techniques is proposed. Patient medical records are first clustered, and then a Markov model is generated for each cluster to predict illnesses by which a patient is likely to be affected in the future. However, when the probability of the most likely state in the Markov model is not sufficiently high, the framework resorts to association analysis. High-confidence rules generated from sequential disease patterns are considered, and the items induced by these rules are predicted. Experimental results show that the combination of different mining models gives good predictive accuracy and is a feasible way to diagnose diseases.
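A toy version of the prediction logic, with the clustering step omitted: a first-order Markov model over diagnoses, falling back to high-confidence sequential rules when the Markov prediction is not confident enough. The histories, the rule, and the threshold are invented.

```python
from collections import Counter, defaultdict

histories = [["flu", "pneumonia"], ["flu", "pneumonia"], ["flu", "asthma"],
             ["diabetes", "retinopathy"], ["diabetes", "neuropathy"]]

# First-order Markov model: counts of (current disease -> next disease)
trans = defaultdict(Counter)
for h in histories:
    for a, b in zip(h, h[1:]):
        trans[a][b] += 1

# High-confidence sequential rule used as fallback (invented)
rules = {("diabetes",): "retinopathy"}

def predict(current, min_prob=0.6):
    nxt = trans[current]
    if nxt:
        best, cnt = nxt.most_common(1)[0]
        if cnt / sum(nxt.values()) >= min_prob:
            return best, "markov"
    return rules.get((current,)), "rules"  # fallback when not confident

print(predict("flu"))       # ('pneumonia', 'markov'): 2/3 >= 0.6
print(predict("diabetes"))  # ('retinopathy', 'rules'): Markov tie, rule decides
```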

Journal ArticleDOI
TL;DR: The capabilities of ROA are shown and details on how to apply it in practice are provided and the kind of data that is needed to carry out the analysis and how ROA can be integrated into the organizational decision process are discussed.
Abstract: We present the application of real options analysis (ROA) to a managerial decision-making problem. A case study was developed to illustrate the mathematical steps required to apply ROA. The results of this model show that a net present value analysis, which is most often used in practice, would have led to a sub-optimal decision, as it does not take into account the value of future options and managerial flexibility. Hospitals rarely use quantitative methods like ROA to address their managerial decision-making problems. Usually, simple cost-benefit analysis and subjective assessment are used instead of sophisticated analysis methods and objective data. This paper aims to show the capabilities of ROA and provides details on how to apply it in practice. We discuss the kind of data that is needed to carry out the analysis and how ROA can be integrated into the organizational decision process. To do this, we propose a data-to-decision (D2D) framework. The D2D framework consists of two components: data-to-information (D2I) and information-to-decisions (I2D). D2I suggests the use of quantitative methods, such as ROA, to extract decision-relevant information from data. Based on the generated information, the second component, I2D, supports executives in selecting the best course of action given multiple organizational objectives.
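For readers unfamiliar with ROA mechanics, here is a generic binomial-lattice valuation of an option to invest; all numbers are invented and unrelated to the paper's hospital case study.

```python
V0, I = 100.0, 110.0                 # PV of project cash flows, investment cost
u, d, r, T = 1.3, 1 / 1.3, 0.05, 3   # up/down factors, risk-free rate, periods
p = ((1 + r) - d) / (u - d)          # risk-neutral probability

# Option to invest at the end: value = max(V - I, 0) at each terminal node
values = [max(V0 * u**k * d**(T - k) - I, 0.0) for k in range(T + 1)]

# Roll back through the lattice to today
for _ in range(T):
    values = [(p * hi + (1 - p) * lo) / (1 + r)
              for lo, hi in zip(values, values[1:])]

print(f"option value: {values[0]:.2f}")  # > 0 even though NPV = V0 - I < 0
```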

Journal ArticleDOI
TL;DR: A graph algorithm based model analysis framework that can be accessed by specialized model analysis techniques is introduced and it is proved that basic graph algorithms are feasible to support such a framework.
Abstract: Analysing conceptual models is a frequent task in business process management (BPM), for instance to support the comparison or integration of business processes, to check business processes for compliance or weaknesses, or to tailor conceptual models for different audiences. As many companies have recently started to maintain large model collections, and analysing such collections manually may be laborious, practitioners have articulated a demand for automatic model analysis support. Hence, BPM scholars have proposed a plethora of different model analysis techniques. As virtually any conceptual model can be interpreted as a mathematical graph, and model analysis techniques often include some kind of graph problem, in this paper we introduce a graph-algorithm-based model analysis framework that can be accessed by specialized model analysis techniques. To prove that basic graph algorithms are feasible to support such a framework, we conduct a performance analysis of selected graph algorithms.

Journal ArticleDOI
TL;DR: The proposed e-Business Technology Acceptance Model (EBTAM) makes reasonable predictions about technology acceptance without requiring expert evaluators or many users experienced with the system under evaluation.
Abstract: e-Business organizations must frequently face changes in their systems to stay competitive. However, there is no guarantee that the new systems will be acceptable to the workers. The e-Business Technology Acceptance Model (EBTAM) is proposed in this paper as a way to study acceptance before the actual deployment of a new system. This model takes into account other models reported in the literature, but it is essentially oriented towards small and medium-sized organizations, which usually have limited human and economic resources. The model was used in three companies, and the evaluation instrument was applied at three stages of a system replacement process: (1) before the new system was deployed, in order to capture the independent variables, (2) after 1.5 months of use, and (3) after 9 months of use. Unlike most models reported in the literature, EBTAM makes reasonable predictions about technology acceptance without requiring expert evaluators or many users experienced with the system under evaluation. This makes EBTAM easier to implement and use than other evaluation methods, which is particularly important in small organizations, given their relatively scarce resources and expertise for this type of evaluation.

Journal ArticleDOI
TL;DR: This paper proposes two algorithms for identifying the most appropriate (sub-)domain of a concept in the context of a document/query and integrates these methods into a semantic indexing and retrieval framework.
Abstract: With the explosive growth of biomedical information volumes, there is obviously an increasing need for developing effective and efficient tools for indexing and retrieval. Automatic indexing and retrieval in the biomedical domain is faced with several challenges such as recognition of terms denoting concepts and term disambiguation. In this paper, we are interested in identifying (sub-)domains of concepts in ontologies. We propose two algorithms for identifying the most appropriate (sub-)domain of a concept in the context of a document/query. We integrate these methods into a semantic indexing and retrieval framework. The experimental evaluation carried out on the OHSUMED collection shows that our approaches of semantic indexing and retrieval outperform the state-of-the-art approach.

Journal ArticleDOI
TL;DR: Empirical study of the service pattern shows that the use of the proposed model significantly outperforms manual composition in terms of composition time and accuracy, and simulation results demonstrate that the proposed automated instantiation method is efficient.
Abstract: A key feature of service-oriented architecture is to allow the flexible composition of services into a business process. Although previous works related to service composition have paved the way for automatic composition, the techniques have limited applicability when it comes to composing complex workflows based on functional requirements, partly due to the large search space of the available services. In this paper, we propose a novel concept, the prospect service. Unlike existing abstract services, which possess fixed service interfaces, a prospect service has a flexible interface to allow functional flexibility. Furthermore, we define a meta-model to specify service patterns with prospect services and adaptable workflow constructs to model flexible and adaptable process templates. An automated instantiation method is introduced to instantiate concrete processes with different functionalities from a service pattern. Since the search space for automatically instantiating a process from a service pattern is greatly reduced compared to that for automatically composing a process from scratch, the proposed approach significantly improves the feasibility of automated composition. An empirical study of the service pattern shows that the use of the proposed model significantly outperforms manual composition in terms of composition time and accuracy, and simulation results demonstrate that the proposed automated instantiation method is efficient.

Journal ArticleDOI
TL;DR: This method performs well, having 79 % predictive accuracy, and an area under the ROC curve of 0.85, and identifies the most aggressive cancers with 82 % accuracy.
Abstract: We propose a method of diagnosing prostate cancer using magnetic resonance imaging data. Logistic regression and nearest neighbor classification are combined to identify the risk of cancer. Our method performs well, having 79 % predictive accuracy, and an area under the ROC curve of 0.85. It identifies the most aggressive cancers with 82 % accuracy.
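The abstract names the two learners but not the combination rule; one common choice, averaging the two models' predicted probabilities, looks like this on synthetic data (the paper's imaging features are not reproduced).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=7).fit(X_tr, y_tr)

# Average the two probability estimates (one plausible combination rule)
p = (lr.predict_proba(X_te)[:, 1] + knn.predict_proba(X_te)[:, 1]) / 2
print(f"combined AUC: {roc_auc_score(y_te, p):.2f}")
```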

Journal ArticleDOI
TL;DR: This chapter illustrates how a well-established public health informatics framework provides an integrated information system infrastructure that assures and enhances the efficacy of public health emergency preparedness (PHEP) actions throughout the phases of the health emergency event life cycle.
Abstract: This chapter illustrates how a well-established public health informatics framework provides an integrated information system infrastructure that assures and enhances the efficacy of public health emergency preparedness (PHEP) actions throughout the phases of the health emergency event life cycle. Key PHEP activities involved in supporting this cycle include planning; surveillance; alerting; resource assessment and management; data-driven decision support; and intervention for prevention and control of disease or injury in populations. Information systems supporting these activities are most effective in assuring optimal response to an emergent health event when they are integrated within an informatics framework that supports routine (day-to-day) information exchange within the health information exchange community. In late April 2009, New York State (NYS) initiated a statewide PHEP response to the emergence of Novel Influenza A (H1N1), culminating in a statewide vaccination campaign during the last quarter of 2009. The established informatics framework of integrated information systems within NYS conveyed significant advantages and flexibility in supporting the range of PHEP activities required for an effective response to this health event. This chapter describes, and provides performance metrics to illustrate, how a public health informatics framework can enhance the efficacy of all phases of a public health emergency response. It also provides informatics lessons learned from the event.

Journal ArticleDOI
TL;DR: The proposed adaptive mechanism for improving the availability efficiency of the green component design (GCD) process incorporates a wide range of GCD strategies to increase the availability of recycled/reused/remanufactured components.
Abstract: This paper proposes an adaptive mechanism for improving the availability efficiency of the green component design (GCD) process. The proposed approach incorporates a wide range of GCD strategies to increase the availability of recycled/reused/remanufactured components. We have also designed a self-adjusting mechanism to enhance the versatility and generality of a genetic algorithm (GA) used to improve GCD availability efficiency. The mechanism allows refinement of the GA parameters for the selection of operators in each generation. Our research contribution includes the development of a novel mechanism for evaluating optimal selections of reproduction strategies, adjusting and optimizing the crossover and mutation rates across evolutions, and designing Taguchi orthogonal arrays with a GA optimizer. The effectiveness of the proposed algorithms has been examined in a GCD chain. From the experimental results, we can conclude that the proposed approach results in better reproduction optimization than traditional ones.
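The self-adjusting idea (raise the mutation rate when progress stalls, lower it when fitness improves) can be sketched generically; the bit-string objective and the adjustment rule below are placeholders, not the paper's GCD model or Taguchi design.

```python
import random

def fitness(bits):  # placeholder objective: maximize the number of ones
    return sum(bits)

def evolve(pop_size=30, length=20, gens=40):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    mut = 0.05
    best_prev = max(map(fitness, pop))
    for _ in range(gens):
        def pick():  # tournament selection
            return max(random.sample(pop, 3), key=fitness)
        children = []
        for _ in range(pop_size):
            a, b = pick(), pick()
            cut = random.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < mut else g
                             for g in child])       # bit-flip mutation
        pop = children
        best = max(map(fitness, pop))
        # Self-adjustment: stagnation -> explore more; progress -> exploit
        mut = min(0.5, mut * 1.5) if best <= best_prev else max(0.01, mut * 0.7)
        best_prev = max(best, best_prev)
    return best_prev

random.seed(0)
print(evolve())  # approaches 20 (all ones)
```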

Journal ArticleDOI
TL;DR: Marketing intelligence provides a road map of current and future trends in customers’ preferences and needs, new market and segmentation opportunities, and major shifts in marketing and distribution in order to improve the firm’s marketing planning, implementation, and control.
Abstract: Marketing intelligence represents a continuous process of understanding, analyzing, and assessing a firm's internal and external environments associated with customers, competitors, and markets, and then using the acquired information and knowledge to support the firm's marketing-related decisions. Marketing intelligence provides a road map of current and future trends in customers' preferences and needs, new market and segmentation opportunities, and major shifts in marketing and distribution, in order to improve the firm's marketing planning, implementation, and control. Marketing intelligence has evolved from a creative process into a highly data-driven process. Data sources for marketing intelligence can be internal or external. With the advances of information technology and the widespread diffusion of database and data warehouse systems in firms, large volumes of internal data useful for marketing intelligence have been generated and maintained by firms. At the same time, the proliferation of the WWW and Web 2.0 innovations (e.g., product review websites, social networking communities) has dramatically expanded the external data available for marketing intelligence, as measured by the sheer volume of data and the number of data sources. On the other hand, increases in competition and in the volatility of markets and customer preferences/needs require firms to frequently update their marketing intelligence or even retarget their marketing intelligence directions.

Journal ArticleDOI
TL;DR: In this article, the authors present a prototype architecture for both a real-time mobile clinical event data capture application and an Artemis-based replay system for retrospective analysis and validation of physiological data analytics, which together provide important information for improving the ability of clinical decision support systems and patient monitoring algorithms to detect and adjust for artifacts caused by clinical events.
Abstract: There is a growing trend of developing advanced clinical decision support systems that analyze physiological data streams for early detection of a variety of clinical diagnoses. This paper presents a prototype architecture for both a real-time mobile clinical event data capture application and an Artemis-based replay system for retrospective analysis and validation of physiological data analytics. These two components provide important information for improving the ability of clinical decision support systems and patient monitoring algorithms to detect and adjust for artifacts caused by clinical events. A description of the prototypes, as well as results from initial prototype testing, is provided. Although the sample size for the initial testing is small, significant information with respect to design principles and infrastructure needs was uncovered. Future research directions are identified to improve the mobile application through increased security, robustness, further integration into data mining analysis, and future clinical decision support algorithms.

Journal ArticleDOI
TL;DR: To effectively reduce the number of PS representing load state transition and aging signals, a feature extraction technique for the PS in the EMIS, based on the Hellinger distance, is proposed in this paper.
Abstract: Coordinating an economic load demand response (ELDR) strategy with the energy efficiency and information technology (IT) of e-business management multiplies the reduction in electricity usage. Steady-state power signatures (PS) contain plenty of the information needed for detecting state transitions and aging of loads. On the other hand, adopting the values of the PS directly has the drawbacks of requiring more time and memory for the datasets of the energy management information system (EMIS). To effectively reduce the number of PS representing load state transition and aging signals, a feature extraction technique for the PS in the EMIS, based on the Hellinger distance, is proposed in this paper. Experiments show that a back-propagation artificial neural network (BP-ANN) achieves high success rates in identifying state transitions and aging of loads, demonstrating the feasibility of the approach for load operations in EMIS applications.
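The Hellinger distance between two discrete distributions is a one-liner; applied to normalized power-signature histograms (invented numbers), it quantifies how far a load's current signature has drifted from a healthy baseline.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

baseline = [120, 300, 80, 20]   # healthy power-signature histogram (invented)
aged =     [100, 250, 110, 60]  # same load after aging (invented)
print(f"H = {hellinger(baseline, aged):.3f}")  # 0 = identical, 1 = disjoint
```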

Journal ArticleDOI
TL;DR: A tentative data analysis assistor, SLinRA2S, which can guide a data analyst through the process of applying simple linear regression analysis on data sets stored as external files or in databases, and which could contribute to the production of high quality business analytics.
Abstract: There is no doubt that acquiring high-quality information is crucial to effective decision making. In general, producing information for a particular decision scenario involves analyzing data collected from various sources using statistical methods. The proper application of the chosen statistical methods in turn largely determines the quality of the information computed for the decision scenario. To ensure a consistent and sound application of statistical methods for data analysis, we followed the idea of active support and designed a tentative data analysis assistor, SLinRA2S, which can guide a data analyst through the process of applying simple linear regression analysis to data sets stored as external files or in databases. SLinRA2S is implemented in Java on an open platform and invokes R for statistical functions. Outputs from SLinRA2S were verified against outputs from SPSS for correctness and validity. The assistor not only relieves the data analyst of computational errands but also contributes, to a degree, to the correct application of statistical methods. In the end, the assistor can contribute to the production of high-quality business analytics.
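The active-support flow the assistor implements (fit, then check assumptions before trusting the output) can be imitated with statsmodels and scipy; the threshold and messages below are invented, and the actual tool is Java invoking R, not Python.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 80)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, 80)

model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.params, f"R^2 = {model.rsquared:.2f}")

# Assistor-style check before the result is reported to the analyst
_, p_norm = stats.shapiro(model.resid)  # residual normality test
if p_norm < 0.05:
    print("warning: residuals look non-normal; reconsider the linear model")
else:
    print("residuals consistent with normality; simple linear regression OK")
```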