Showing papers on "Data mart" published in 2012


Journal Article
TL;DR: The Blink project is working on the next generation of Blink, which will expand the “sweet spot” of the Blink technology to much larger, disk-based warehouses and allow Blink to “own” the data, rather than copies of it.
Abstract: The Blink project’s ambitious goal is to answer all Business Intelligence (BI) queries in mere seconds, regardless of the database size, with an extremely low total cost of ownership. Blink is a new DBMS aimed primarily at read-mostly BI query processing that exploits scale-out of commodity multi-core processors and cheap DRAM to retain a (copy of a) data mart completely in main memory. Additionally, it exploits proprietary compression technology and cache-conscious algorithms that reduce memory bandwidth consumption and allow most SQL query processing to be performed on the compressed data. Blink always scans (portions of) the data mart in parallel on all nodes, without using any indexes or materialized views, and without any query optimizer to choose among them. The Blink technology has thus far been incorporated into two IBM accelerator products generally available since March 2011. We are now working on the next generation of Blink, which will significantly expand the “sweet spot” of the Blink technology to much larger, disk-based warehouses and allow Blink to “own” the data, rather than copies of it.
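
The abstract above sketches query processing directly on compressed, dictionary-encoded data. Below is a minimal illustration of that general idea in Python (not IBM's proprietary design; the dictionary, codes, and figures are invented): a predicate is translated into code space once, and the scan then compares small integer codes instead of decompressing values row by row.

```python
# Illustrative sketch only: predicate evaluation on a dictionary-encoded
# column, in the spirit of Blink-style scans over compressed data.

REGION_DICT = ["EMEA", "APAC", "AMER"]   # hypothetical encoding dictionary
region_codes = [0, 1, 2, 1, 0, 1]        # compressed column: one code per row
revenue      = [10.0, 25.0, 7.5, 40.0, 3.0, 12.0]

# Translate the predicate "region = 'APAC'" into code space once...
target = REGION_DICT.index("APAC")

# ...then scan the compressed column; no per-row decompression is needed.
total = sum(r for c, r in zip(region_codes, revenue) if c == target)
print(total)  # 77.0
```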

41 citations


Journal ArticleDOI
01 Mar 2012
TL;DR: This work argues for considering spatiality as a personalization feature within a formal design process, so that each decision maker will be able to access their own personalized SMD schema with the required spatial structures and instances, suitable for proper analysis at a glance.
Abstract: Spatial data warehouses (SDW) rely on extended multidimensional (MD) models in order to provide decision makers with appropriate structures to intuitively explore spatial data by using different analysis techniques such as OLAP (On-Line Analytical Processing) or data mining. Current development approaches focus on defining a unique and static spatial multidimensional (SMD) schema at the conceptual level over which all decision makers fulfill their current spatial information needs. However, considering the required spatiality for each decision maker is likely to result in a potentially misleading SMD schema (even if a departmental DW or data mart is being defined). Furthermore, the spatial needs of each decision maker may change over time or depending on the context, requiring the SMD schema to be continuously updated with changes that can hamper decision making. Therefore, if a unique and static SMD schema is designed, acquiring the required spatial information is more costly than expected for decision makers, and they may get frustrated during the analysis. To overcome these drawbacks, we argue for considering spatiality as a personalization feature within a formal design process. In this way, each decision maker will be able to access their own personalized SMD schema with the required spatial structures and instances, suitable for proper analysis at a glance. Our approach considers several novel artifacts: (i) a UML profile for spatial multidimensional modeling at the conceptual level, (ii) a spatial-aware user model to define decision-maker profiles, and (iii) a spatial personalization language to define the spatial needs of decision makers as personalization rules. The derivation of personalized SMD schemas by using these artifacts is formally defined using the Software Process Engineering Metamodel Specification (SPEM) standard. Finally, the applicability of our approach is shown through a running example based on our Eclipse-based tool for SDW development.

35 citations


Patent
Akbar Pirani1, Michael Branam1
10 Aug 2012
TL;DR: In this article, the authors provide methods and computer program products for reporting application and device data retrieved from within an Internet Protocol Television (IPTV) network environment, where IPTV system usage data is retrieved from at least one IPTV device and normalized into a predetermined data format.
Abstract: Methods and computer program products for reporting application and device data retrieved from within an Internet Protocol Television (IPTV) network environment are provided. IPTV system usage data is retrieved from at least one IPTV device and normalized into a predetermined data format. The IPTV system usage data is parsed according to predetermined criteria. The parsed IPTV system usage data is delivered to a dedicated data mart for storage, the IPTV system usage data that is stored within at least one dedicated data mart is accessed, and the IPTV system usage data that is stored at the at least one dedicated data mart is used to generate a report.

34 citations


Book
12 Oct 2012
TL;DR: In this article, the authors present a step-by-step implementation guide for agile data warehousing project management, which can yield as much as a 3-to-1 speed advantage while cutting project costs in half.
Abstract: You have to make sense of enormous amounts of data, and while the notion of "agile data warehousing" might sound tricky, it can yield as much as a 3-to-1 speed advantage while cutting project costs in half. Bring this highly effective technique to your organization with the wisdom of agile data warehousing expert Ralph Hughes. Agile Data Warehousing Project Management will give you a thorough introduction to the method as you would practice it in the project room to build a serious "data mart." Regardless of where you are today, this step-by-step implementation guide will prepare you to join or even lead a team in visualizing, building, and validating a single component of an enterprise data warehouse.
* Provides a thorough grounding in the mechanics of Scrum as well as practical advice on keeping your team on track
* Includes strategies for getting accurate and actionable requirements from a team's business partner
* Presents revolutionary estimating techniques that make forecasting labor far more understandable and accurate
* Demonstrates a blend of Agile methods to simplify team management and synchronize inputs across IT specialties
* Enables you and your teams to start simple and progress steadily to world-class performance levels
Table of Contents
Part I: A Generic Agile Method
Chapter 1. Why Agile?
Chapter 2. Agile Development in a Nutshell
Chapter 3. Project Management Lite
Chapter 4. User Stories for Business Intelligence Applications
Part II: Adapting Agile to Data Warehousing
Chapter 5. Developer Stories for Data Integration Projects
Chapter 6. Agile Estimation for DW/BI
Chapter 7. Further Adaptations for Agile Data Warehousing
Chapter 8. Starting and Scaling Agile Warehousing Teams
Part III: Retrospective
Chapter 9. Faster, Better, Cheaper

19 citations


Journal ArticleDOI
TL;DR: Collaboration between clinical researchers and biomedical informatics experts enabled the development and validation of a tool (Regextractor) to parse, abstract and assemble structured data from text data contained in the electronic health record.
Abstract: Background Translational research typically requires data abstracted from medical records as well as data collected specifically for research. Unfortunately, many data within electronic health records are represented as text that is not amenable to aggregation for analyses. We present a scalable open source SQL Server Integration Services package, called Regextractor, for including regular expression parsers into a classic extract, transform, and load workflow. We have used Regextractor to abstract discrete data from textual reports from a number of ‘machine generated’ sources. To validate this package, we created a pulmonary function test data mart and analyzed the quality of the data mart versus manual chart review.
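
As a rough illustration of what such regular-expression abstraction looks like (Regextractor itself is a SQL Server Integration Services package; this Python sketch, with an invented report format and field names, only mirrors the idea):

```python
# Minimal sketch of the Regextractor idea: pull discrete values out of a
# machine-generated pulmonary function test report with regular expressions.
import re

report = "FVC: 3.42 L (82% pred)  FEV1: 2.71 L (79% pred)  FEV1/FVC: 79%"

patterns = {
    "fvc_l":  r"FVC:\s*([\d.]+)\s*L",
    "fev1_l": r"FEV1:\s*([\d.]+)\s*L",
    "ratio":  r"FEV1/FVC:\s*(\d+)%",
}

# Each regex yields one structured field, ready to load into a data mart row.
row = {name: (m.group(1) if (m := re.search(p, report)) else None)
       for name, p in patterns.items()}
print(row)  # {'fvc_l': '3.42', 'fev1_l': '2.71', 'ratio': '79'}
```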

13 citations


Book ChapterDOI
03 Oct 2012
TL;DR: This paper studies the evolution of DM schemas in the case where a new table is added to the DW.
Abstract: Modern Decision Support Systems (DSS) are composed of a Data Warehouse (DW), which stores all data necessary for decisional purposes, and several Data Marts (DMs). A DM is a subject-oriented extract of the DW data; it facilitates evaluating the performance of a business process. However, in practice, business processes may evolve and new ones may be created; consequently, the DW model evolves in order to capture the changes. The maintenance of DMs due to the evolution of their DW is a time-consuming, expensive and error-prone process. This paper studies the evolution of DM schemas in the case where a new table is added to the DW.

9 citations


Proceedings ArticleDOI
27 May 2012
TL;DR: The architecture of a Business Intelligence system in an academic organization and the design of a data mart devoted to the evaluation of the Research activities are presented.
Abstract: Data warehousing is an activity that is getting more and more attention in several contexts. Universities, too, are adopting data warehousing solutions for business intelligence purposes. In these contexts, there are specific aspects to be considered, such as didactics and research evaluation; indeed, these are the main factors affecting the importance and quality level of every university. In this paper, we present the architecture of a Business Intelligence system in an academic organization and we illustrate the design of a data mart devoted to the evaluation of research activities.

6 citations


Proceedings ArticleDOI
14 Sep 2012
TL;DR: This paper emphasizes integrating heterogeneous data sources to create virtual data warehouses that could be deployed in a cloud environment to handle multiple OLAP data sources.
Abstract: Cloud computing is an emerging technology that empowers the present-day business scenario by providing services on demand instead of an integrated product. Many applications on the cloud deal with huge amounts of data, which are often used for analytical processing to exploit business intelligence. However, working with data at very large scale is time-consuming and requires considerable processing time. Materialized views are built and maintained to pre-fetch an effective subset of the entire database for current and immediate future usage. The materialized views are constructed on data warehouses, data marts and virtual data warehouses. In a cloud computing scenario, the materialized views for distributed data centers quite often reside in different data servers. One of the major challenges is to handle multiple OLAP data sources. The data needs to be integrated and analyzed continually in an efficient manner before the views are built. This paper emphasizes integrating heterogeneous data sources to create virtual data warehouses that could be deployed in a cloud environment.
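
As a small illustration of the pre-fetching idea behind materialized views (a sketch only; SQLite has no MATERIALIZED VIEW statement, so an eagerly computed summary table stands in, and all table names and figures are invented):

```python
# Materialize an aggregate subset of a base table into a summary table so
# repeated OLAP queries avoid rescanning the base data.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (region TEXT, product TEXT, amount REAL);
INSERT INTO sales VALUES ('EMEA','A',10), ('EMEA','B',20), ('APAC','A',5);

-- The "materialized view": an eagerly computed aggregate.
CREATE TABLE mv_sales_by_region AS
  SELECT region, SUM(amount) AS total FROM sales GROUP BY region;
""")

# Cheap query against the materialized aggregate instead of the base table.
print(con.execute("SELECT * FROM mv_sales_by_region ORDER BY region").fetchall())
# [('APAC', 5.0), ('EMEA', 30.0)]
```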

6 citations


Patent
23 May 2012
TL;DR: A central data warehouse includes embedded data marts, referred to as workspaces, which allow departments in an enterprise to perform certain actions on their own (like adding new data and building new models) without having to instantiate copies of the centrally managed data in a locally managed data mart as discussed by the authors.
Abstract: A central data warehouse includes embedded data marts. These embedded data marts, referred to as workspaces, are assigned centrally managed data by reference only and rely directly on the centrally managed data and the underlying infrastructure. Workspaces still allow departments in an enterprise to perform certain actions on their own (like adding new data and building new models) without having to instantiate copies of the centrally managed data in a locally managed data mart.

6 citations


Proceedings ArticleDOI
02 Nov 2012
TL;DR: The FedDW Global Schema Architect (GSA) is presented, a visual design tool for federations of autonomous ROLAP data marts that is extensible, intuitive for its users, and supports DW platforms of multiple vendors.
Abstract: Extending analytical decision making beyond the boundaries of a single organization is a key challenge of modern Business Intelligence systems. Federated Data Warehouses (FDWs) are an important cornerstone to this end, offering new opportunities for business collaboration and similar scenarios. The FedDW approach provides such a federated architecture with a mediated multidimensional schema over autonomous data marts, with advantages for both OLAP users and data warehouse administrators. Users comfortably access the global mediated schema with traditional OLAP applications while administrators retain full schema and data management autonomy. Although the underlying concepts are mature, comprehensive design tools for FDWs remain an open issue. To tackle the challenge, this paper presents the FedDW Global Schema Architect (GSA), a visual design tool for federations of autonomous ROLAP data marts. FedDW integrates data marts at the schema level, which avoids laborious and error-prone physical DW integration. GSA manages all metadata (the global mediated schema, the import schemas, and the semantic mappings that repair multidimensional heterogeneity among the data marts) within one and the same tool. Its implementation employs the extension mechanisms of Eclipse and is based on the UML and CWM (Common Warehouse Metamodel) standards. Thus, the tool is extensible, intuitive for its users, and supports DW platforms of multiple vendors.

5 citations


Journal ArticleDOI
TL;DR: The 7-tier architecture model is designed with seven layers using biometric tools; customer data are stored in a data warehouse through a data mart, which helps the decision-making process by using OLAP and OLTP tools.
Abstract: Today, core banking has become a major part of the financial system. Presently, to fulfill basic financial needs, customers open different accounts in different banks, financial institutions, provident funds, mutual funds, insurances, etc. for various transactions. The major drawback of this system is that there is no unique identification code for a customer to maintain the various transactions and account details across different branches of different banks and financial institutions. In this paper, we try to remove this drawback by introducing a 7-tier architecture to maintain a bank unique identification code (BUID). The 7-tier architecture model is designed with seven layers using biometric tools, and customer data are stored in a data warehouse through a data mart. This model will help the decision-making process by using OLAP and OLTP tools. The main advantage of this model is that the 7-tier architecture with BUID can easily blend with the current financial system. Thus, the 7-tier architectural model can play a robust role in enhancing the present account-opening process in the financial system.

Journal ArticleDOI
TL;DR: This study shows how an automated human talent data mart is developed to extract the most important attributes of academic talent from 15 different tables covering demographic data, publications, supervision, conferences, research, and others.
Abstract: In higher education institutions such as universities, academics are becoming the major asset. The performance of academics has become a yardstick of university performance. It is therefore important to know the talent of the academicians in a university so that management can plan to enhance academic talent using human resource data. This research aims to develop an academic talent model using data mining based on several related human resource systems. In the case study, we used 7 human resource systems in one of the government universities in Malaysia. The study shows how an automated human talent data mart is developed to extract the most important attributes of academic talent from 15 different tables covering demographic data, publications, supervision, conferences, research, and others. Apart from the collected talent attributes, the academic talent forecasting model is developed using classification techniques, with 14 classification algorithms in the experiment, for example J48, Random Forest, BayesNet, Multilayer Perceptron, JRip and others. Several experiments are conducted to obtain the highest accuracy by applying a discretization process, dividing the data set into different year intervals (1, 2, 3, 4, no interval) and also changing the number of classes from 24 to 6 and 4. The best model obtained 87.47% accuracy using the 4-year-interval data set and 4 classes with the J48 algorithm.
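
For a sense of the modeling step, here is a minimal sketch. J48 is Weka's C4.5 implementation; scikit-learn's DecisionTreeClassifier (a CART variant) serves as the closest common analogue here, and the features, labels, and values below are fabricated for illustration:

```python
# Sketch of a talent-classification experiment with a decision tree.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical talent attributes: [publications, supervisions, conferences]
X = [[12, 3, 5], [2, 0, 1], [8, 2, 4], [1, 1, 0], [15, 4, 6], [3, 0, 2]]
y = ["high", "low", "high", "low", "high", "low"]  # toy talent classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, model.predict(X_te)))
```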

Book ChapterDOI
01 Jan 2012
TL;DR: The problem of Data Mart integration is reviewed, introducing the major types of conflicts considered in the literature and proposing strategies for their reconciliation based on formula manipulation.
Abstract: In the present literature, Data Mart integration is typically considered from a dimension point of view. Approaches elaborate upon the hierarchical structure of dimensions to find the minimum common hierarchy to which the original dimensions can be mapped. Although the problem of the conformance of measures has been recognized in the literature (see e.g. Kimball R, Ross M (2002) The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling (2nd Ed.), John Wiley & Sons, p. 87) as a condition for effective Data Mart integration, it is treated as a pre-requisite, so that integration strategies borrowed from the database domain can be used. Considering the functional structure of a measure, that is, the formula used to compute it, methods and tools can be developed to support conformance checking and reconciliation. In this paper we review the problem of Data Mart integration, introducing the major types of conflicts considered in the literature. Next, we define novel types of conflicts hindering the conformance of measures and propose strategies for their reconciliation based on formula manipulation.

Book ChapterDOI
01 Jan 2012
TL;DR: This chapter describes the concepts and building blocks of classic business intelligence systems, such as central data warehouse, data mart, operational store, ETL, and replication, and lists the limitations of those classic systems in regard to the support of modern-day user requirements.
Abstract: To clearly explain the difference between a business intelligence system that deploys data virtualization and one that doesn't, this chapter describes the concepts and building blocks of classic business intelligence systems, such as the central data warehouse, data mart, operational store, ETL, and replication. It also lists the limitations of those classic systems with regard to the support of modern-day user requirements. The reasons why some of these data stores were introduced are also reinvestigated. This is necessary for explaining why data virtualization can be beneficial.

01 Jan 2012
TL;DR: A model data mart developed upon a warehousing system focusing on oncology data is reported to explore an optimized system architecture to support enhanced data integration and application capacity.
Abstract: Here we report a model data mart developed upon a warehousing system focusing on oncology data to explore an optimized system architecture to support enhanced data integration and application capacity.

Journal ArticleDOI
TL;DR: The proposed model, called the entity mapping diagram (EMD), is built upon enhancements of the models in previous work to support some missing mapping features.
Abstract: During the last few years, researchers and developers have proposed various attempts to establish a standard conceptual design of ETL processes in the data warehouse. These attempts try to represent the main mapping activities at the conceptual level. Due to the limitations of the previous attempts, in this paper 1) we propose a model for the conceptual design of ETL processes, which we call the entity mapping diagram (EMD); the proposed model is built upon enhancements of the models in previous work to support some missing mapping features; and 2) we implement the proposed conceptual model in a prototype called EMD Builder and use it in an illustrative scenario.

Book ChapterDOI
01 Jan 2012
TL;DR: The study presents tools to extract and save the data in SQL Server 2008, which facilitate data synthesis and not only support doctors, medical and insurance researchers in obtaining final analyses for nationwide epidemiology but also help them make good decisions in building health care strategies.
Abstract: This study aims to construct a decision support tool for handling Taiwan's National Health Insurance data from the researchers' perspective. The National Health Research Institute (NHRI) provides data for researchers to conduct health-related research upon application. Since the provided insurance data is unstructured text, it is necessary to load those data into a data warehouse or data mart such as SQL Server. However, data retrieval and synthesis are still very complicated, requiring a number of complex SQL queries to export the final figures and tables. This causes a major problem for doctors and medical and insurance researchers who are not familiar with SQL Server. Therefore, this study proposes tools to extract and save the data in SQL Server 2008, which facilitate the data synthesis. The tools not only support doctors and medical and insurance researchers in obtaining final analyses for nationwide epidemiology but also help them make good decisions in building health care strategies.

DOI
08 Oct 2012
TL;DR: A data warehouse is a single, complete and consistent data store with subject-oriented, integrated, non-volatile and time-variant characteristics, while a data mart is a subset of the data warehouse that supports the information needs of a particular department or business function.
Abstract: A data warehouse is a single, complete and consistent data store with subject-oriented, integrated, non-volatile and time-variant characteristics that can be used to support decision making, whereas a data mart is a subset of the data warehouse that supports the information needs of a particular department or business function. Hypermarket XYZ has a history of sales transactions that has not been exploited optimally, so it would be very useful if those data could be turned into a data mart. The development of this data mart uses the From Enterprise Models to Dimensional Models method for its design and the Bottom Up Approach as the development approach. The data mart is built on an Oracle database, with the user interface written in PHP. The aim of this research is to develop a sales data mart that makes optimal use of historical transaction data. The result is a sales data mart that holds historical transaction data and produces information useful to top management in supporting the decision-making process. Keywords: data warehouse, data mart, Bottom Up Approach, Oracle, PHP

Patent
17 Apr 2012
TL;DR: In this article, the authors present a system for associating multiple data sources into a web-accessible framework, where health data is received from multiple sources and is used to populate a framework comprising at least one topic focused data mart.
Abstract: Systems, methods, and computer-readable media for associating multiple data sources into a web-accessible framework. Health data is received from multiple data sources and is used to populate a framework comprising at least one topic focused data mart. Each topic focused data mart has a common structure and is associated with a web service providing standard features supported by each topic focused data mart and custom features specific to a topic associated with each topic focused data mart. In various embodiments, demographic information is received from a clinician and is utilized to present context-specific data derived from the topic focused data mart.

Posted Content
TL;DR: The theoretical framework for deploying the data mart grounds a multidimensional analysis of how the different respondents answered the questions included in the questionnaire.
Abstract: Beyond the traditional data analysis approaches based on SPSS (or similar statistical software tools), an alternative approach will be the subject of our debate. Performant data analysis can be carried out based on a multidimensional view of the collected data. This implies an additional data mart populated with information obtained from the collected data through an ETL process. Measures and dimensions will facilitate a subject-oriented, time-based analysis. The theoretical framework for deploying the data mart grounds a multidimensional analysis of the question "how did the different respondents answer the questions included in the questionnaire?". In addition, a case study is proposed, a questionnaire built, and different analyses presented.
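
A minimal sketch of such a multidimensional view using pandas (the column names and answers are hypothetical, not taken from the posting): respondent attributes act as dimensions and the answer count as the measure.

```python
# Survey answers loaded into a small fact table, then cross-tabulated.
import pandas as pd

answers = pd.DataFrame({
    "question": ["Q1", "Q1", "Q1", "Q2", "Q2", "Q2"],
    "age_group": ["18-25", "26-40", "18-25", "26-40", "18-25", "26-40"],
    "answer": ["yes", "no", "yes", "no", "yes", "yes"],
})

# "How did the different respondents answer?" as a dimensional cross-tab.
cube = answers.pivot_table(index="question", columns="age_group",
                           values="answer", aggfunc="count")
print(cube)
```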

Patent
25 Oct 2012
TL;DR: In this article, a data warehouse that holds a copy of information held in a basic system is used to generate a data mart by extracting predetermined information from the information stored in the data warehouse.
Abstract: PROBLEM TO BE SOLVED: To effectively extract predetermined data from information held by a basic system. SOLUTION: An information processing system 1 includes a data warehouse that holds a copy of information held in a basic system 200, a generation part for generating a data mart by extracting predetermined information from the information held in the data warehouse, an extraction part for extracting information that satisfies a predetermined condition from the information stored in the data mart, and a notification part for making notification of the information extracted by the extraction part.

01 Jan 2012
TL;DR: An eGovMon data warehouse architecture is proposed to achieve a better understanding of the data by using data warehouse techniques and business intelligence (BI) tools such as ETL, DBMS, OLAP, DM and OLAM, improving the performance of the e-government service system.
Abstract: E-Government Monitor (eGovMon) is used to assess new online e-government (eGov) services so that they effectively address the real needs of citizens, businesses and governmental agencies. A system to monitor this development can give a better understanding of how to build good online services for citizens and enterprises. Moreover, eGovMon uses many tools to help enhance the quality of public web sites. It thus needs a better understanding of e-government data and information, with quick responses to queries. However, issues remain, such as distributed data, data structure and data design. Metadata in a data warehouse (DW) is used to describe the data itself, which is why it is called data about data. This paper proposes an eGovMon data warehouse architecture to achieve a better understanding of the data by using data warehouse techniques and business intelligence (BI) tools such as ETL, DBMS, OLAP, DM and OLAM. eGovMon with BI tools makes the information more understandable, with good data quality, improving the performance of the e-government service system. Metadata and small data mart storage techniques also structure and distribute e-government information properly.

Journal Article
TL;DR: The paper tries to remove the drawbacks of the current financial system by introducing a financial transaction model that works through a customer's BUID (Bank Unique Identification) coded card and helps the decision-making process by using OLAP and OLTP tools.
Abstract: The paper tries to remove the drawbacks of the current financial system by introducing a financial transaction model that works through a customer's BUID (Bank Unique Identification) coded card. The BUID can be easily integrated into the current financial system. ATMs, kiosks, funds, call centers, the Internet, portals and mobile channels with BUID will be used to make transactions using the financial transaction model across the overall financial system. The BUID is designed to augment the present core financial system for banks, insurance, shares, bonds, post office, income tax, import/export and loans through several channels. The financial transaction model connects all financial institutes in a network, and customer data are stored in a data warehouse through data marts. This model will help the decision-making process by using OLAP and OLTP tools. The main advantage is that the financial transaction model with BUID can dynamically play a role in boosting the present financial system.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: The use of CDR files is described, with emphasis on using the information from these files to dimension an effective database that makes it possible to analyze customers' habits in using telecommunication services in detail.
Abstract: CDR files contain data recorded for each call made. In this paper the use of CDR files is described, with emphasis on using the information from these files to dimension an effective database that makes it possible to analyze customers' habits in using telecommunication services in detail. A database is necessary in these systems because a great quantity of data is involved. One appropriate model for such a system is the dimensional database model called the star schema.
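
A minimal sketch of a star schema for call records (the table and column names are invented, not taken from the paper): a fact table of calls joined to dimension tables for habit analysis.

```python
# One fact table of calls surrounded by dimension tables (star schema).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date     (date_id INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE fact_call (
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    date_id     INTEGER REFERENCES dim_date(date_id),
    duration_s  INTEGER          -- measure taken from the CDR record
);
INSERT INTO dim_customer VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO dim_date VALUES (1, '2012-11-01');
INSERT INTO fact_call VALUES (1, 1, 120), (1, 1, 60), (2, 1, 300);
""")

# Typical habit analysis: total talk time per customer.
print(con.execute("""
    SELECT c.name, SUM(f.duration_s)
    FROM fact_call f JOIN dim_customer c USING (customer_id)
    GROUP BY c.name
""").fetchall())   # [('Alice', 180), ('Bob', 300)]
```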

Journal ArticleDOI
TL;DR: This paper presents a Fuzzy Data Mart model that provides a flexible interface to users and extends DWs to store and manage fuzzy data along with crisp data records.
Abstract: The data mart is emerging as one of the hottest areas of growth in global business and a critical component and tool for business intelligence. A data mart is a local data warehouse used for storing business-oriented information for future analysis and decision making. In business scenarios where some of the data are fuzzy, it may be effective to construct a warehouse that can strengthen the analysis of fuzzy data. This paper presents a Fuzzy Data Mart model that provides a flexible interface to users and also extends DWs to store and manage fuzzy data along with crisp data records. We propose an algorithm to design the data mart, which improves decision-making processes. To do so, we use Extraction, Transformation and Load (ETL) tools for better performance. In addition, fuzzy membership functions are used for summarization.
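
As a small illustration of the summarization idea (a sketch with invented set boundaries; the paper's actual membership functions are not specified here), a piecewise-linear membership function grades how strongly a crisp value belongs to a fuzzy set such as "high revenue":

```python
# Membership degrees let the mart store fuzzy summaries alongside crisp rows.

def mu_high(x: float, lo: float = 50.0, hi: float = 100.0) -> float:
    """Membership in 'high revenue': 0 below lo, 1 above hi, linear between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

for revenue in (30, 75, 120):
    print(revenue, round(mu_high(revenue), 2))  # 30 0.0 / 75 0.5 / 120 1.0
```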

Dissertation
01 Jan 2012
TL;DR: The proposed conceptual design model was found to satisfy nine quality model dimensions, which are easy to understand, covers clear steps, is relevant and timeless, demonstrates flexibility, scalability, accuracy, completeness and consistency, and can be referred as guidelines by BI developers.
Abstract: The development of business intelligence (BI) applications, involving data sources, a Data Warehouse (DW), Data Marts (DM) and an Operational Data Store (ODS), imposes a major challenge on BI developers. This is mainly due to the lack of established models, guidelines and techniques in the development process as compared to system development in the discipline of software engineering. Furthermore, present BI applications emphasize the development of strategic information in contrast to operational and tactical information. Therefore, the main aim of this study is to propose a conceptual design model for BI applications using ODS (CoDMODS). Through expert validation, the proposed conceptual design model, which was developed by means of a design science research approach, was found to satisfy nine quality model dimensions: easy to understand, covers clear steps, relevant and timeless, and demonstrating flexibility, scalability, accuracy, completeness and consistency. Additionally, the two prototypes that were developed based on CoDMODS, for water supply service (iUBIS) and telecommunication maintenance (iPMS), recorded a high usability average minimum value of 5.912 on the Computer System Usability Questionnaire (CSUQ) instrument. The outcomes of this study, particularly the proposed model, contribute to the analysis and design methods for the development of operational and tactical information in BI applications. The model can serve as guidelines for BI developers. Furthermore, the prototypes that were developed in the case studies can assist organizations in using quality information for business operations.

Patent
17 Dec 2012
TL;DR: In this paper, a method for building a database by using a data warehouse and a system thereof are provided to reduce the possibility of errors by excluding an overlap when making a data mart.
Abstract: PURPOSE: A method for building a database by using a data warehouse and a system thereof are provided to reduce the possibility of errors by excluding overlap when making a data mart. CONSTITUTION: An ODS (Operational Data Store) (110) processes source data. A DW (Data Warehouse) (120) integrates the data of the ODS and generates reference relationships between data having correlations. A business logic (130) manages business rules for analysis. A data mart (140) forms the data of the ODS or the DW into a multi-dimensional model based on one of the business rules. The DW includes a summary table formed based on the data mart according to each analysis subject. [Reference numerals] (100) Database system; (120) DW management unit; (140) Data mart; (160) Meta data; (AA) Management system; (BB) First DB; (CC) Second DB; (DD) File; (EE) First mart; (FF) Second mart; (GG) OLAP server; (HH) Web server; (II) User system; (JJ) OLAP client; (KK) User application; (LL) Web browser

Patent
07 Jun 2012
TL;DR: In this article, a method for building a database using a data warehouse generates the data warehouse from items of source data provided by at least one source system.
Abstract: A method for building a database using a data warehouse generates the data warehouse based on items of source data which are provided by at least one source system. The method for building the database comprises the steps of: configuring an operational data store (ODS) by refining at least a portion of the items of source data; constructing the data warehouse (DW) by integrating the ODS data and generating reference relationships between the items of data which have an associative relationship; constructing a business logic by enabling at least one business rule for analysis; and constructing a data mart by generating a multi-dimensional model per analysis subject with regard to the ODS or DW data, based on the at least one business rule.

01 Jan 2012
TL;DR: In this work, the functional behavior of the corporate system is analyzed based on its operational goal to build layers of data storage repositories with relevant data attributes using functional behavior patterns (FBP), and an experimental evaluation is conducted.
Abstract: The growing need for huge volumes of data in enterprise and corporate environments fuels the demand for data warehousing. Data warehousing collects data at different levels (i.e., departmental, operational, functional) and stores it as a collective data repository with better storage efficiency. Various data warehousing models concentrate on storing the data more efficiently and quickly. In addition, accessing data from the warehouse requires a better understanding of the structure in which the data layers are stored in the repository. However, the functional requirements of users are not easily captured by the data warehouse model; an efficient decision support system is needed to extract the user-demanded data from the data warehouse. To handle the issue of functional decision support for extracting user-relevant data, data marts are introduced. Data marts build separate functional data repository layers based on the departmental decision support requirements of enterprise and corporate data applications. In our research work, we plan to build a Functional Layer Interfaced Data Mart Architecture (FLIDMA) to provide a better decision support system for larger corporate and enterprise data applications. In this work, the functional behavior of the corporate system is analyzed based on its operational goal, to build layers of data storage repositories with relevant data attributes using functional behavior patterns (FBP). An experimental evaluation is conducted with benchmark data sets from the UCI repository and compared with an existing multi-functional data warehousing model in terms of the number of functional data attributes, attribute relativity, and analysis of functional behavior.