
Showing papers on "Data mart published in 2013"


Patent
04 Jun 2013
TL;DR: In this paper, a system and method for detecting anomalous energy usage of building or household entities is presented, which applies a number of successively stringent anomaly detection techniques to isolate households that are highly suspect for having engaged in electricity theft via meter tampering.
Abstract: A system and method for detecting anomalous energy usage of building or household entities. The method applies a number of successively stringent anomaly detection techniques to isolate households that are highly suspect of having engaged in electricity theft via meter tampering. The system utilizes historical time series data of electricity usage, weather, and household characteristics (e.g., size, age, value) and provides a list of households that are worthy of a formal theft investigation. Generally, raw utility usage data, weather history data, and household characteristics are cleansed and loaded into an analytics data mart. The data mart feeds four classes of anomaly detection algorithms, each producing a set of households suspected of having engaged in electricity theft. The system allows a user to select households from each list, or a set based on the intersection of all the individual sets.
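The patent does not disclose its analytics, but the overall flow — several analytics each flagging a suspect set, with the shortlist taken per list or as the intersection — can be sketched as follows. The drop-in-usage rule, threshold, and three-month window are illustrative assumptions, not the patented algorithms.

```python
# Hypothetical sketch of the "successively stringent filters" idea: each
# analytic flags a set of suspect household IDs, and the final shortlist is
# either a per-analytic pick or the intersection of all sets.

def flag_high_usage_drop(usage, threshold=0.5):
    """Flag households whose recent average usage fell below `threshold`
    times their historical average (window of 3 periods is an assumption)."""
    suspects = set()
    for hid, series in usage.items():
        past, recent = series[:-3], series[-3:]
        if past and sum(recent) / 3 < threshold * (sum(past) / len(past)):
            suspects.add(hid)
    return suspects

def shortlist(suspect_sets):
    """Households flagged by every analytic (intersection of all sets)."""
    return set.intersection(*suspect_sets) if suspect_sets else set()
```

In the patented system, each of the four analytic classes would contribute one such suspect set over data read from the analytics mart.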

27 citations


Patent
26 Jun 2013
TL;DR: In this paper, a privacy-removing processing engine is designed to sit inside the extract, transform and load (ETL) module. On one hand, the ETL module's existing system resources are fully used and occupation of system resources in the data warehouse and data mart is avoided; on the other hand, because the ETL module is the unified entry point for all data of a business analysis system, it carries out all client privacy-removing processing, so uniformity of all the operations can be ensured.
Abstract: The invention discloses a privacy-removing processing method and a device using the privacy-removing processing method. The privacy-removing processing method comprises the following steps: extracting an index field of client privacy data obtained from a data source and calling a privacy-removing mapping relation chart; mapping the index field into mapping codes according to the mapping rules of the privacy-removing mapping relation chart, thereby finishing the privacy-removing processing of the client privacy data. According to the method and device, a privacy-removing processing engine is designed to sit inside the extract, transform and load (ETL) module. On one hand, the existing system resources of the ETL module are fully used and occupation of system resources of the data warehouse and data mart is avoided; on the other hand, because the ETL module is the unified entry point for all data of a business analysis system, it carries out all client privacy-removing processing, and therefore uniformity of all the operations can be ensured.
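The mapping step — replace a record's index field with a code from a mapping relation chart so that downstream marts never see the raw identifier — might be sketched as below. The class, field names, and the hash-based code generation are illustrative assumptions; the patent only requires a consistent mapping table.

```python
# Hedged sketch of the privacy-removing mapping step inside an ETL flow:
# the index field of a client record is replaced by a mapping code looked up
# (or created) in a mapping table, so the same identifier always maps to the
# same code while the raw value never reaches the data mart.

import hashlib

class PrivacyMapper:
    def __init__(self):
        self.mapping = {}          # index field -> mapping code

    def map_code(self, index_field):
        if index_field not in self.mapping:
            # deterministic, non-reversible code for the identifier
            self.mapping[index_field] = hashlib.sha256(
                index_field.encode()).hexdigest()[:12]
        return self.mapping[index_field]

    def scrub(self, record, key="customer_id"):
        """Return a copy of the record with its index field pseudonymized."""
        out = dict(record)
        out[key] = self.map_code(out[key])
        return out
```

Keeping the mapper inside the ETL module, as the patent argues, means every record passes through one scrubbing point before loading.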

10 citations


Journal Article
TL;DR: The results show that the dispatching tool reduces the check time spent in the data dictionary on the logical side of the data warehouse when deciding the intended data mart, hence minimizing execution time.
Abstract: Data warehousing hastens the process of retrieving information needed for decision making. The spider-web diversity between end-users and data marts increases traffic, load, and delay in accessing the requested information. In this research, we have developed a query dispatching tool facilitating access to the information within data marts, and eventually the data warehouse, in a fast and organized fashion. The dispatching tool takes the query, analyzes it, and decides which data mart that query should be forwarded to as its destination. This research is based on Ralph Kimball's methodology. The results show that the dispatching tool reduces the check time spent in the data dictionary on the logical side of the data warehouse when deciding the intended data mart, hence minimizing execution time.
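One simple way to realize "analyze the query and decide the destination mart" is to extract the table names the query touches and route it to the mart whose fact and dimension tables cover them. The mart registry, table names, and regex-based parsing below are assumptions for illustration, not the paper's tool.

```python
# Illustrative query dispatcher: route a SQL query to the data mart whose
# tables cover everything the query references; otherwise fall back to the
# full warehouse. Mart names and table names are hypothetical.

import re

MART_TABLES = {
    "sales_mart":   {"fact_sales", "dim_product", "dim_store"},
    "finance_mart": {"fact_ledger", "dim_account"},
}

def referenced_tables(sql):
    """Naive extraction of table names after FROM/JOIN keywords."""
    return set(re.findall(r"(?:from|join)\s+(\w+)", sql, re.IGNORECASE))

def dispatch(sql):
    tables = referenced_tables(sql)
    for mart, owned in MART_TABLES.items():
        if tables <= owned:        # the mart covers every referenced table
            return mart
    return None                    # no single mart fits: use the warehouse
```

A production dispatcher would consult the data dictionary rather than a hard-coded registry, which is exactly the lookup whose cost the paper reports reducing.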

7 citations


Book ChapterDOI
19 Jun 2013
TL;DR: The design methods for modeling big dimensions, which include horizontal partitioning, vertical partitioning and their hybrid are presented, and an effective ontology-based tool to automate the modeling process is presented.
Abstract: During data warehouse schema design, designers often face the question of how to model big dimensions, which typically contain a large number of attributes and records. Investigating effective approaches for modeling big dimensions is necessary in order to achieve better query performance with respect to response time. In most cases, the big-dimension modeling process is complicated, since it usually requires accurate description of business semantics, multiple design revisions, and comprehensive testing. In this paper, we present design methods for modeling big dimensions, which include horizontal partitioning, vertical partitioning, and their hybrid. We formalize the design methods and propose an algorithm that describes the modeling process from an OWL ontology to a data warehouse schema. In addition, this paper presents an effective ontology-based tool to automate the modeling process. The tool can automatically generate the data warehouse schema from the ontology describing the terms and business semantics of the big dimension. In case of any change in the requirements, we only need to modify the ontology and re-generate the schema using the tool. This paper also evaluates the proposed methods based on a sample sales data mart.
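The two partitioning methods the chapter formalizes can be sketched on plain rows: vertical partitioning splits a wide dimension into column groups (keeping the surrogate key in each fragment), while horizontal partitioning splits its records by a predicate. The column grouping and predicate below are assumptions, not the chapter's formal definitions.

```python
# Minimal sketch of vertical and horizontal partitioning of a big dimension,
# using dicts as rows; a hybrid method would apply one after the other.

def vertical_partition(rows, hot_cols):
    """Split a wide dimension into frequently-used vs rarely-used columns,
    repeating the surrogate key "id" in both fragments for re-joining."""
    hot  = [{k: r[k] for k in r if k in hot_cols or k == "id"} for r in rows]
    cold = [{k: r[k] for k in r if k not in hot_cols} for r in rows]
    return hot, cold

def horizontal_partition(rows, predicate):
    """Split the dimension's records into two fragments by a row predicate."""
    return ([r for r in rows if predicate(r)],
            [r for r in rows if not predicate(r)])
```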

5 citations


Journal ArticleDOI
01 Sep 2013
TL;DR: The creation of a patient-centered data system by connecting and integrating disparate data sources within a large health system allows customized analyses of data and improves capacity for clinical decision making and personalized healthcare.
Abstract: Background: Healthcare organizations have invested in electronic patient data systems, yet use of health data to optimize personalized care has been limited. Primary Study Objective: To develop and pilot an integrated source of health system data related to breast healthcare. Methods/Design: This study is a quality improvement project. Patient-level data from multiple internal sources were identified, mapped to a common data model, linked, and validated to create a breast healthcare-specific data mart. Linkages were based on matching algorithms using patient identifiers to group data from the same patient. Data definitions, a data dictionary, and indicators for quality and benchmarking aligned with standardized measures. Clinical pathways were developed to outline the patient populations, data elements, decision points, and outcomes for specific conditions. Setting: Electronic data sources in a community-based health system in the United States. Participants: Women receiving breast cancer screening, preve...

5 citations


Journal ArticleDOI
TL;DR: This paper discusses the current situation and problems faced in forecasting passenger flow, applies data warehouse technology to design a data mart for this subject, and puts forward a feasible model for passenger flow forecasting.
Abstract: This paper, through studying the theory of data warehousing and data mining, applies these technologies to handle the large volume of data in the Ticket Selling and Reserving System of Chinese Railway (TRS), uses effective data mining for passenger flow analysis, and builds a logical forecasting and analysis model. The paper first discusses the current situation and problems faced in forecasting passenger flow, then applies data warehouse technology to design the data mart for this subject. Next, the data collected in the data mart is sampled and analyzed using a neural network method, and a data analysis model is built through research and experiment. Finally, a feasible forecast model for passenger flow forecasting is put forward.

5 citations


Journal Article
TL;DR: A clinical dimensional model design is proposed which can be used for development of a clinical data mart and is designed keeping in consideration temporal storage of patient's data with respect to all possible clinical parameters which can include both textual and image based data.
Abstract: Current research in the field of life and medical sciences is generating large volumes of data on a daily basis. It has thus become a necessity to find solutions for efficient storage of this data and to try to correlate it and extract knowledge from it. Clinical data generated in hospitals, clinics, and diagnostic centers falls under a similar paradigm. Patient records in various hospitals are increasing at an exponential rate, adding to the problem of data management and storage. A major problem with storage is the varied dimensionality of the data, ranging from images to numerical form. There is therefore a need for an efficient data model which can handle this multi-dimensionality issue and store the data with a historical aspect. To address this problem in clinical informatics, we propose a clinical dimensional model design which can be used for the development of a clinical data mart. The model has been designed with consideration for temporal storage of patient data with respect to all possible clinical parameters, which can include both textual and image-based data. The availability of this data for each patient can then be used to apply data mining techniques for finding correlations among all the parameters at the level of the individual and of the population.

4 citations


Proceedings ArticleDOI
18 Mar 2013
TL;DR: This paper presents an adaptation of Function Point Analysis approach for estimating the size of Data Mart (DM) systems and validate the proposed approach using real data from finished DM projects and show that the approach could offer better results compared with the original FPA.
Abstract: In order to manage software projects better, we need to estimate the effort of their development adequately, independently of each project's peculiarities. This paper presents an adaptation of the Function Point Analysis (FPA) approach for estimating the size of Data Mart (DM) systems. We validate the proposed approach using real data from finished DM projects, and we also show that our approach can offer better results compared with the original FPA.
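The shape of such an adaptation — count data-mart elements and weight them into an unadjusted size figure, FPA-style — can be sketched as below. The element types and weights are invented for illustration; the paper's actual counting rules and calibration are not reproduced here.

```python
# Illustrative FPA-style sizing for a data mart: each element type (fact
# table, dimension, ETL interface, report) gets a weight, and size is the
# weighted count. Weights are hypothetical, not the paper's calibration.

WEIGHTS = {"fact": 10, "dimension": 7, "etl_interface": 5, "report": 4}

def dm_function_points(counts):
    """counts: mapping of element type -> number of occurrences."""
    return sum(WEIGHTS[t] * n for t, n in counts.items())
```

An effort estimate would then be derived from this size via a productivity rate fitted on finished projects, which is the validation the paper performs.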

4 citations


01 Jun 2013
TL;DR: In this paper, the authors explore the data in the Department of Licensing to determine the elements of public service and whether appropriate treatment has been provided during this time, and perform data mining using the WEKA application with the J48 algorithm.
Abstract: The main goal of the Indonesian bureaucratic reform that began with the collapse of the Orde Baru in 2008 was the creation of good governance and clean government. The main point of the implementation of good governance is to create public satisfaction with the services provided. Lately it has become government policy that each unit includes the Public Satisfaction Index (IKM) in its performance assessment. In DIY, there are still people who are upset with post-reform public services. This research intends to explore the data in the Department of Licensing to determine the elements of public service and whether appropriate treatment has been provided during this time. The research begins by collecting data, followed by analysis of the data mart, and performs data mining using the WEKA application with the J48 algorithm. The results of this research form 6 data marts. The data mining process on licensing data revealed the percentage of public service elements that affect the level of Correctly Classified Instances: about 89.7476% for the IMB permit, 82.4128% for the HO permits, 89.0122% for the Business License permits, 80.517% for the TDP permits, and 93.1412% for the other permits.

4 citations


Book ChapterDOI
01 Jan 2013
TL;DR: This chapter deals with the following topics: multistructured data in the form of social media and audio/video, and the way data is ingested, preprocessed, validated, and/or cleansed and integrated or co-related with nontextual formats.
Abstract: Traditional business intelligence (BI) and data warehouse (DW) solutions use structured data extensively. Database platforms such as Oracle, Informatica, and others had limited capabilities to handle and manage unstructured data such as text, media, video, and so forth. Although they had data types called CLOB and BLOB, which were used to store large amounts of text, accessing data from these platforms was a problem. With the advent of multistructured (aka unstructured) data in the form of social media and audio/video, there has to be a change in the way data is ingested, preprocessed, validated and/or cleansed, and integrated or correlated with nontextual formats. This chapter deals with the following topics:

3 citations


Patent
21 Aug 2013
TL;DR: In this paper, a data organization method of a data warehouse for controlling the operation cost of a medicine enterprise is presented, in which an operation cost checking system database and other sources such as an accounting database are taken as data sources, and decision support can be provided for operation cost control and warning in the medicine enterprise.
Abstract: The invention discloses a data organization method of a data warehouse for controlling the operation cost of a medicine enterprise. The data organization method provides decision support for cost control and warning in the enterprise. At present, the field of medicine-enterprise operation cost control applications lacks data organization technology for the data warehouse. According to the data organization method, an operation cost checking system database and other sources such as an accounting database are taken as data sources, and the operation cost control and warning service needs of the medicine enterprise are taken as the background; by performing ETL pretreatment for the data warehouse and performing OLAP analysis on a data mart, a multidimensional data model for controlling the operation cost of the medicine enterprise is set up. By using the data organization method, the multidimensional data warehouse can be organized accurately and with high efficiency, and therefore decision support can be provided for operation cost control and warning in the medicine enterprise.

Journal ArticleDOI
TL;DR: This paper focuses on the two data cleaning algorithms: Alliance Rules and HADCLEAN and their approaches towards the data quality and includes a comparison of the various factors and aspects common to both.
Abstract: Data cleansing (or data scrubbing) is an activity involving a process of detecting and correcting the errors and inconsistencies in a data warehouse. Poor quality data, i.e., dirty data present in a data mart, can thus be avoided using various data cleaning strategies, leading to more accurate and hence more reliable decision making. Quality data can only be produced by cleaning the data and pre-processing it prior to loading it into the data warehouse. As not all algorithms address the problems related to every type of dirty data, one has to prioritize the needs of one's organization and choose an algorithm according to those requirements and the occurrence of dirty data. This paper focuses on two data cleaning algorithms, Alliance Rules and HADCLEAN, and their approaches to data quality. It also includes a comparison of the various factors and aspects common to both.
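A generic cleansing pass of the kind both algorithm families target — normalize values, drop exact duplicates, reject rows that violate a rule — might look like this. This is a stand-in sketch; the actual Alliance Rules and HADCLEAN procedures are not reproduced here.

```python
# Illustrative pre-load cleansing pass: normalize string fields, reject rows
# missing required fields, and drop exact duplicates (after normalization).

def cleanse(rows, required=("id",)):
    seen, clean, rejected = set(), [], []
    for row in rows:
        # normalize: trim whitespace and lowercase all string values
        norm = {k: v.strip().lower() if isinstance(v, str) else v
                for k, v in row.items()}
        key = tuple(sorted(norm.items()))
        if any(not norm.get(f) for f in required):
            rejected.append(norm)          # rule violation: missing field
        elif key not in seen:
            seen.add(key)
            clean.append(norm)             # first occurrence kept
    return clean, rejected
```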

16 Nov 2013
TL;DR: This research uses student data from three faculties of Maranatha Christian University and lecturer data from the same university to create data mart schemas, which are integrated into student and lecturer fact constellation schema data warehouses.
Abstract: The growth of a university can be seen in the number of students, which increases from time to time and is called the student body. Growth in the student body results in a large volume of students' academic data. Besides students, the other important component of the university is the lecturer. Growth in the number of students should be accompanied by an increase in lecturers, both in quality and in quantity. Data sets of students and lecturers in such amounts contain information or knowledge that can be analyzed. Based on the knowledge resulting from the analysis of student data and lecturer data, the university can make a strategic plan for future work. A data warehouse is used to analyze this student and lecturer data. As a case study, this research uses student data from three faculties of Maranatha Christian University and lecturer data from the same university. As a result, data mart schemas are created in two parts: a data mart schema for students and one for lecturers. There are three star schemas for students: a new student schema, an active student schema, and a graduated student schema. For lecturers, there are also three schemas: a lecturer education schema, a lecturer research schema, and a lecturer community service schema. Integration of the three student star schemas builds a student fact constellation schema data warehouse; likewise, integration of the three lecturer star schemas builds a lecturer fact constellation data warehouse.

Book ChapterDOI
01 Jan 2013
TL;DR: Business intelligence is linked to an EDA (Event Driven Architecture) for a “Zero Latency Organization” (ZLO), which automates the labor-intensive, and therefore time-consuming and expensive, process of planning and executing an event-driven architecture.
Abstract: Business Intelligence link to an EDA (Event Driven Architecture) for a “Zero Latency Organization”

Journal Article
TL;DR: This work proposes a framework for integrating criminal databases from different state police databases into a data warehouse for easy access and analysis of criminal data, supporting necessary actions in Nigeria.
Abstract: Security is presently a major challenge in Nigeria; Nigerians and non-Nigerians are killed on a daily basis and in large numbers. Since the advent of the present democratic dispensation, new forms of violent crime have become common; these include kidnapping for ransom, pipeline vandalization, Boko Haram bombings, rape, political violence, and more. Though the government claims to be on top of the situation, the problem persists. This work proposes a framework for integrating criminal databases from different state police databases into a data warehouse for easy access and analysis of criminal data, supporting necessary actions. Data integration involves combining data residing in different sources to form a data warehouse, thereby providing users with a unified view of these data. Data integration technologies have experienced explosive growth in the last few years, and data warehousing has played a major role in the integration process. Keywords: National Security, Data Integration, Data Warehousing, Star Schema, Data Mart

Proceedings ArticleDOI
06 May 2013
TL;DR: A new method is proposed that starts by clustering the OLAP requirements, which are represented as schemas; it then generates one data mart schema for each cluster, and finally validates them. A new algorithm, “aK-Mode”, performs the clustering step.
Abstract: Different methods have been proposed to generate data mart schemas. We propose a new one that starts by clustering the OLAP requirements, which are represented as schemas; it then generates one data mart schema for each cluster, and finally validates them. In this work, we focus on the first step and propose a new algorithm, “aK-Mode”, to cluster our schemas. Many clustering techniques exist for numerical and categorical data, but they cannot be used in our case, since we have to take the semantic aspect into consideration while making the comparison. The new algorithm extends k-modes by modifying the dissimilarity measure.
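The core modification — a k-modes dissimilarity that counts attribute mismatches but softens them when the two values are semantically related — can be sketched as below. The synonym table and the 0.5 penalty are illustrative assumptions standing in for a real semantic resource; the paper's actual measure is not reproduced.

```python
# Sketch of a semantics-aware k-modes dissimilarity: exact matches cost 0,
# semantically related terms cost 0.5, unrelated terms cost 1, so schemas
# using synonymous attribute names cluster together.

SYNONYMS = {("client", "customer"), ("revenue", "turnover")}  # stand-in resource

def semantic_dissimilarity(schema_a, schema_b):
    d = 0.0
    for x, y in zip(schema_a, schema_b):
        if x == y:
            continue                                  # identical attribute
        elif (x, y) in SYNONYMS or (y, x) in SYNONYMS:
            d += 0.5                                  # related: partial mismatch
        else:
            d += 1.0                                  # unrelated: full mismatch
    return d
```

Plugging such a measure into standard k-modes (cluster assignment by nearest mode, mode update per attribute) yields the kind of extension the paper describes.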

Proceedings ArticleDOI
24 Mar 2013
TL;DR: This work proposes a method to design data mart schemas from OLAP requirements that are presented as schemas and grouped according to their domain, and applies the data integration technique to merge the different schemas so as to obtain one data mart schema per group.
Abstract: Warehousing projects can fail when they do not take users' needs into consideration, especially if the users are not experienced with data warehouse technologies. The solution consists of building the data warehouse incrementally, by designing and implementing one data mart at a time. In this work we propose a method to design a data mart schema from OLAP requirements that are presented as schemas and grouped according to their domain. We apply the data integration technique to merge the different schemas so that we get one data mart schema per group. The integration is composed of schema matching (detecting semantic correspondences and conflicts) and schema mapping (solving the existing conflicts).
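The two integration phases named above can be illustrated on attribute-to-type maps: matching finds corresponding attributes and conflicts; mapping resolves conflicts while merging. The conflict rule here (prefer the first schema's type) is a deliberately simple assumption; the paper's actual resolution is more involved.

```python
# Hedged sketch of schema matching and schema mapping over two requirement
# schemas, each a mapping of attribute name -> declared type.

def match_schemas(a, b):
    """Return (shared, conflicts, only_a, only_b) between two schemas."""
    shared    = {k for k in a if k in b and a[k] == b[k]}
    conflicts = {k for k in a if k in b and a[k] != b[k]}
    return shared, conflicts, set(a) - set(b), set(b) - set(a)

def merge_schemas(a, b):
    """Merge into one schema; on a type conflict, keep a's definition."""
    merged = dict(b)
    merged.update(a)
    return merged
```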

Journal ArticleDOI
Rudy Rudy1
01 Dec 2013-ComTech
TL;DR: The dimensional models and dashboard applications developed are related to the calculation of job seeker registrations, positions offered, labor required by employers, and employer registrations.
Abstract: A college career unit is equipped with a web-based application that can be used by job seekers and employers. The application produces operational reports in accordance with the existing transactions. Transaction data is stored in an OLTP (online transaction processing) system and can be used for analysis. For its data analysis needs, the career unit of a college can use a dashboard application that displays information in graphical forms which are easily understood. As its data source, the dashboard application uses the dimensional model introduced by Kimball, that is, a data mart. The result achieved is a dashboard application that can be used by the college management to get quick and detailed information that can be viewed from a variety of dimensions. The dimensional models and dashboard applications developed are related to the calculation of job seeker registrations, positions offered, labor required by employers, and employer registrations.

01 Jan 2013
TL;DR: In this article I will review the facilities and the power of using a dedicated integration tool in an environment with multiple data sources and a target data mart.
Abstract: Integrated applications are complex solutions that help build better consolidated and standardized systems from existing (usually transactional) systems. Their complexity is determined by the economic processes they implement, the amount of data employed (millions of records grouped in hundreds of tables and databases, hundreds of GB), and the number of users [11]. Oracle, once mainly known for its database and e-business solutions, has been constantly expanding its product portfolio, providing solutions for SOA, BPA, warehousing, big data, and cloud computing. In this article I review the facilities and the power of using a dedicated integration tool in an environment with multiple data sources and a target data mart.

Patent
13 Aug 2013
TL;DR: In this article, a health prediction-related recognition and investigation methodology is provided to build a data mart about health prediction by checking states, recognition, and understanding of health prediction, to select an investigation target by using the data mart, and to efficiently analyze problems and solutions during health prediction research.
Abstract: PURPOSE: A health prediction-related recognition and investigation methodology is provided to build a data mart about health prediction by checking states, recognition, and understanding of health prediction, to select an investigation target by using the data mart, and to efficiently analyze problems and solutions during health prediction research. CONSTITUTION: Survey questions corresponding to a survey study on health prediction are provided to each individual. A system distinguishes the characteristics of individuals and the recognition, understanding, application, and application factors of health prediction. The system provides primary data through structured questions to collect basic data for analyzing and investigating the degree of recognition and knowledge. The system checks the general characteristics of individuals, recognition of health prediction, knowledge of health prediction, and life habits.

Book ChapterDOI
01 Jan 2013
TL;DR: The aim of this paper is to apply data warehouse technology and online analytical processing (OLAP) technology to the analysis of library readers’ borrowing, to adopt multi-dimensional modeling techniques, and to design and realize a reader analysis data mart.
Abstract: The aim of this paper is to apply data warehouse technology and online analytical processing (OLAP) technology to the analysis of library readers’ borrowing, to adopt multi-dimensional modeling techniques and data warehouse technology, and to design and realize a reader analysis data mart. Through an OLAP online presentation tool, the potential patterns in readers’ information are revealed from multiple angles and at deep levels. It is the hope of the author that this paper will provide a decision basis for library book procurement and book structure optimization.

Journal ArticleDOI
TL;DR: A novel method to extract theme for marine data marts based on semantic web is proposed, obtaining the similarities between field names in the marine data warehouses and utilizing the spectral clustering algorithm to partition field names and obtain field clusters to extract themes.
Abstract: Due to the multi-source, polymorphic, and diverse nature of marine data, establishing a marine data mart with existing technology has become a serious problem. In addition, how to create a data mart automatically and quickly, with few user requirements and little background knowledge about the marine domain, has received much attention. This paper proposes a novel method to extract themes for marine data marts based on the semantic web. Firstly, we obtain the similarities between field names in the marine data warehouse; then, we utilize the spectral clustering algorithm to partition the field names into field clusters from which themes are extracted. The proposed method provides a basis for the automatic extraction of themes, so as to assist users in quickly creating data marts.
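The pipeline shape — pairwise similarity between field names, then clustering, with each cluster suggesting one theme — can be approximated as below. This is a simplified stand-in: `difflib` string similarity replaces the paper's semantic-web similarity, and a greedy threshold grouping replaces spectral clustering; field names are hypothetical.

```python
# Simplified stand-in for theme extraction: score pairwise similarity
# between field names and greedily group fields whose similarity to an
# existing group exceeds a threshold; each group suggests one theme.

from difflib import SequenceMatcher

def similarity(a, b):
    """String similarity in [0, 1] (proxy for semantic similarity)."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_fields(fields, threshold=0.6):
    clusters = []
    for f in fields:
        for c in clusters:
            if any(similarity(f, g) >= threshold for g in c):
                c.append(f)        # join the first sufficiently similar group
                break
        else:
            clusters.append([f])   # start a new group (candidate theme)
    return clusters
```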

01 Jan 2013
TL;DR: This paper proposes a framework for developing a predictive data mining system which will work efficiently in a forecasting scenario, showing twelve steps and their sequence of working in the form of an algorithm.
Abstract: A lot of research has been carried out to study the energy crisis in Pakistan using predictive data mining techniques. Many researchers have tried to analyse the situation using different frameworks, but the authors of this paper did not find any complete, cost-effective, and efficient framework in the literature. We therefore propose a framework for developing a predictive data mining system which will work efficiently in a forecasting scenario, with an efficient flow of work. In this conceptual framework the authors show twelve steps and their sequence of working in the form of an algorithm. We applied the proposed framework to a case study for prediction of natural gas energy production in Pakistan, to offer a solution to the current energy crisis. In this case study, efforts were made to collect historical data covering different geographical areas of Pakistan and different circumstances. The authors designed an “Energy Analytical Data Mart” to store the historical and continuously growing data. In this case study we present two approaches to forecasting the level of natural gas energy production in Pakistan using artificial neural networks.

01 Jan 2013
TL;DR: This paper illustrates item-wise and place-wise analysis of sales promotion in a sales data mart using Hyper ETL (Extract, Transform and Load), and uses the Laplacian method for ranking, which enables efficient decision-making in order to increase sales promotion.
Abstract: The multiplication in the number of corporations looking for data mart solutions, with the aim of adding major business gains, has created the need for a decision-aid approach in selecting the right data mart system. Due to the indistinct concepts often represented in the decision-making procedure, decision matrix analysis is used, with consideration given to both technical and managerial criteria. This paper illustrates the item-wise and place-wise analysis of sales promotion in a sales data mart using a Hyper ETL (Extract, Transform and Load) tool. We have used the Laplacian method for ranking, which enables efficient decision-making in order to increase sales promotion. The main objective of this paper is to determine the alternative courses of action (the movement of sales quantity, and also the movement of a particular item across a number of places) from which the ultimate choice is to be made. This approach supports the business goals and requirements of an organization and identifies the appropriate attributes or criteria for evaluation. This improvement communicates information quickly, enlarges the aggregation process, and helps to take efficient decisions.

Proceedings ArticleDOI
19 Mar 2013
TL;DR: A novel data mart based system for the fishery rescue field was designed and implemented, with a wizard-style interface designed to guide users in customizing inquiry processes, making it possible for nontechnical staff to access customized reports.
Abstract: A novel data mart based system for the fishery rescue field was designed and implemented. The system runs an ETL process to deal with original data from various databases and data warehouses, and then reorganizes the data into the fishery rescue data mart. Next, online analytical processing (OLAP) is carried out and statistical reports are generated automatically. In particular, quick configuration schemes are designed to configure query dimensions and OLAP data sets. The configuration file is transformed into statistics interfaces automatically through a wizard-style process. The system provides various forms of reporting files, including Crystal Reports, Flash graphical reports, and two-dimensional data grids. In addition, a wizard-style interface was designed to guide users in customizing inquiry processes, making it possible for nontechnical staff to access customized reports. Characterized by quick configuration, safety, and flexibility, the system has been successfully applied in a city fishery rescue department.

Journal ArticleDOI
06 Jun 2013
TL;DR: This project aims to build a data warehouse in which data from different sources is integrated and processed into a form that can easily be used for reporting and analysis.
Abstract: Universities use many different data systems that store data in different databases. To get maximum benefit from this precious data, it is necessary to build a data warehouse in which data from different sources is integrated and processed into a form that can easily be used for reporting and analysis. There are two widespread data warehouse architectures: Inmon’s and Kimball’s. Metropolia first started with Inmon’s architecture; there were many problems, and after many years we still could not get much useful data into reporting. Later we changed to Kimball’s architecture and got good results with it.
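The Kimball architecture the abstract settles on is built around dimensional (star) schemas. A toy version, sketched with Python's built-in sqlite3: one fact table (enrollments) referencing a dimension table (students). The table and column names are invented for illustration; a real university warehouse would have many more dimensions (time, course, department, and so on).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_student (
        student_key INTEGER PRIMARY KEY,
        name TEXT,
        major TEXT
    );
    CREATE TABLE fact_enrollment (
        student_key INTEGER REFERENCES dim_student(student_key),
        course TEXT,
        credits INTEGER
    );
""")
conn.executemany("INSERT INTO dim_student VALUES (?, ?, ?)",
                 [(1, "Aino", "ICT"), (2, "Mikko", "Media")])
conn.executemany("INSERT INTO fact_enrollment VALUES (?, ?, ?)",
                 [(1, "Databases", 5), (1, "Networks", 5), (2, "Design", 3)])

# A typical report query: total credits per major, joining the fact
# table to its dimension and grouping on a dimension attribute.
rows = conn.execute("""
    SELECT d.major, SUM(f.credits)
    FROM fact_enrollment f JOIN dim_student d USING (student_key)
    GROUP BY d.major ORDER BY d.major
""").fetchall()
```

This is exactly the shape that makes Kimball-style warehouses friendly to reporting tools: every report is a join from one fact table out to its dimensions, grouped by dimension attributes.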

Journal ArticleDOI
TL;DR: A novel data-mart-based system for the fishery rescue field was designed and implemented; characterized by quick configuration, safety, and flexibility, it has been successfully applied in a city fishery rescue department.
Abstract: A novel data-mart-based system for the fishery rescue field was designed and implemented. The system runs an ETL process on original data from various databases and data warehouses, and then reorganizes the data into the fishery rescue data mart. Next, online analytical processing (OLAP) is carried out and statistical reports are generated automatically. In particular, quick configuration schemes are designed to configure query dimensions and OLAP data sets. The configuration file is transformed into statistical interfaces automatically through a wizard-style process. The system provides various forms of reporting files, including Crystal Reports, Flash graphical reports, and two-dimensional data grids. In addition, a wizard-style interface was designed to guide users in customizing inquiry processes, making it possible for nontechnical staff to access customized reports. Characterized by quick configuration, safety, and flexibility, the system has been successfully applied in a city fishery rescue department.

Journal ArticleDOI
TL;DR: The aim is to demonstrate that the use of business intelligence combined with e-payment systems may lead to a better understanding of customers and improve the monitoring and analysis of their behavior.
Abstract: This paper describes the concepts and methodologies of a business intelligence system that enables users to improve e-payment service quality and drive business improvement. The aim is to demonstrate that the use of business intelligence combined with e-payment systems may lead to a better understanding of customers and improve the monitoring and analysis of their behavior. These analyses can be used to improve customer relations and to gain a competitive advantage over other systems (Saracevic M., Model implementacije sistema za E-placanje, FBIM Transactions Vol. 1 No. 2, pp. 136–144, July 2013, MESTE). In this paper, we propose integration with advanced tools and technologies (OLAP, data marts, and CRM analysis) that would help shape the processes that collect data and transform it into useful information and knowledge. The practical part of the paper is an implementation of the e-payment model with elements of business intelligence using advanced Java technologies. The advantages of this type of implementation are given in the last segment of the paper.
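The kind of CRM analysis hinted at above amounts to grouping e-payment transactions per customer and deriving simple behavior metrics. A minimal sketch, with invented field names and amounts (the paper's own implementation uses Java technologies; the idea is language-independent):

```python
from collections import defaultdict

# Hypothetical cleansed e-payment transactions from the data mart.
payments = [
    {"customer": "c1", "amount": 20.0},
    {"customer": "c1", "amount": 30.0},
    {"customer": "c2", "amount": 5.0},
]

def customer_profiles(transactions):
    """Per-customer behavior metrics: transaction count and average amount."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for t in transactions:
        sums[t["customer"]] += t["amount"]
        counts[t["customer"]] += 1
    return {c: {"count": counts[c], "avg": sums[c] / counts[c]} for c in sums}

profiles = customer_profiles(payments)
```

Profiles like these are the raw material for the monitoring and segmentation the abstract describes: a customer whose count or average shifts sharply is a candidate for attention from the CRM side.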

Patent
02 Sep 2013
TL;DR: A nursing care insurance task analysis system stores the execution results of nursing care insurance tasks in a database; by using the choices of certification investigation items as evaluation items when executing dementia measures for insured persons, the numbers of pertinent insured persons and group home users are tabulated for each executor as an evaluation axis for each choice, so that a general-purpose data mart can be created from the database.
Abstract: PROBLEM TO BE SOLVED: To provide a nursing care insurance task analysis system that meets needs-based requests by analyzing data on past execution results for each task of a nursing care insurance system. SOLUTION: A nursing care insurance task analysis system includes a database in which the data of the execution results of tasks for achieving a nursing care insurance system are stored. By using the choices of certification investigation items as evaluation items when executing dementia measures for insured persons, the number of all insured persons who are pertinent to the choices and the number of group home users are tabulated for each executor as an evaluation axis for each choice, so that a general-purpose data mart can be created from the database. By using the general-purpose data mart, the choices that reach a certain level or more are tabulated for each qualification investigation item as the evaluation item of dementia measure tasks in a predetermined period, and the group home use factors of the insured persons pertinent to the tabulated choices are tabulated for each executor as an evaluation axis, so that a purpose-categorized data mart can be created.

Patent
30 Jan 2013
TL;DR: A handheld operation analysis system comprising a data storage layer, an enterprise data application portal, and a mobile phone client is presented; it adopts a client/server architecture and makes full use of the existing data application portal function modules and EDW/ODS (Enterprise Data Warehouse/Operational Data Store) data.
Abstract: The invention discloses a handheld operation analysis system comprising a data storage layer, an enterprise data application portal, and a mobile phone client. The system adopts a client/server architecture and makes full use of the existing data application portal function modules and EDW/ODS (Enterprise Data Warehouse/Operational Data Store) data; the data are based on an EDW/ODS system, and a special mobile-version data mart is established. The invention thus provides a handheld operation analysis system that extends the enterprise data application portal onto mobile devices such as mobile phones and PDAs (Personal Digital Assistants), offers a more convenient means of data application access, and further brings out the value of IT (Information Technology) data on various mobile device terminals.