
Showing papers on "Data mart published in 2007"


Book ChapterDOI
Guotong Xie, Yang Yang, Shengping Liu, Zhaoming Qiu, Yue Pan, Xiongzhi Zhou
11 Nov 2007
TL;DR: A tool is presented that makes data warehouses more business-friendly by using Semantic Web technologies; OWL provides an excellent basis for representing business semantics in a data warehouse, but many extensions are also needed in real applications.
Abstract: Data warehouses are now widely used in business analysis and decision-making processes. To adapt to the rapidly changing business environment, we develop a tool to make data warehouses more business-friendly by using Semantic Web technologies. The main idea is to make business semantics explicit by uniformly representing the business metadata (i.e. the conceptual enterprise data model and the multidimensional model) with an extended OWL language. Then a mapping from the business metadata to the schema of the data warehouse is built. When an analysis request is raised, a customized data mart with data populated from the data warehouse can be automatically generated with the help of this built-in knowledge. This tool, called Enterprise Information Asset Workbench (EIAW), is deployed at the Taikang Life Insurance Company, one of the top five insurance companies in China. User feedback shows that OWL provides an excellent basis for the representation of business semantics in a data warehouse, but that many extensions are also needed in real applications. Users also deemed the tool very helpful because of its flexibility and its ability to speed up data mart deployment in the face of business changes.

51 citations


Book
05 Feb 2007
TL;DR: Research and Trends in Data Mining Technologies and Applications as mentioned in this paper focuses on the integration between the fields of data warehousing and data mining, with emphasis on the applicability to real-world problems.
Abstract: Activities in data warehousing and mining are constantly emerging. Data mining methods, algorithms, online analytical processing, data marts, and practical issues consistently evolve, providing a challenge for professionals in the field. "Research and Trends in Data Mining Technologies and Applications" focuses on the integration between the fields of data warehousing and data mining, with emphasis on the applicability to real-world problems. This book provides an international perspective, highlighting solutions to some of researchers' toughest challenges. Developments in the knowledge discovery process, data models, structures, and design serve as answers and solutions to these emerging challenges.

51 citations


Journal ArticleDOI
TL;DR: This work presents a semantic cube model, which extends object-oriented technology to data warehouses and which enables users to design generalization relationships between different cubes, to improve the performance of query integrity, and to reduce data duplication in the data warehouse.

30 citations


01 Jan 2007
TL;DR: In this article, the authors evaluate the feasibility of applying an option-based risk management (OBRiM) framework, and its accompanying theoretical perspective and methodology, to real-world sequential information technology (IT) investment problems.
Abstract: This field study research evaluates the viability of applying an option-based risk management (OBRiM) framework, and its accompanying theoretical perspective and methodology, to real-world sequential information technology (IT) investment problems. These problems involve alternative investment structures that bear different risk profiles for the firm, and also may improve the payoffs of the associated projects and the organization’s performance. We sought to surface the costs, benefits, and risks associated with a complex sequential investment setting that has the key features that OBRiM treats. We combine traditional, purchased real options that subsequently create strategic flexibility for the decision maker, with implicit or embedded real options that are available with no specific investment required provided the decision maker recognizes them. This combination helps the decision maker to both (1) explicitly surface all of his or her strategic choices and (2) accurately value those choices, including ones that require prior enabling investments. The latter permits senior managers to adjust a project’s investment trajectory in the face of revealed risk. This normally is important when there are uncertain organizational, technological, competitive, and market conditions. The context of our research is a data mart consolidation project, which was conducted by a major airline firm in association with a data warehousing systems vendor. Field study inquiry and data collection were essential elements in the retrospective analysis of the efficacy of OBRiM as a means to control risk in a large-scale project. We learned that OBRiM’s main benefits are (1) the ability to generate meaningful option-bearing investment structures, (2) simplification of the complexities of real options for the business context, (3) accuracy in analyzing the risks of IT investments, and (4) support for more proactive planning. These issues, which we show are more effectively addressed by OBRiM than the other methods, have become crucial as more corporate finance-style approaches are applied to IT investment and IT services problems. Our evaluative study shows that OBRiM has the potential to add value for managers looking to structure risky IT investments, although some aspects still require refinements.

26 citations


Proceedings ArticleDOI
14 May 2007
TL;DR: The UFSC and the FGV-RJ jointly propose the use of a data mining tool to support the analysis of trends and student profiles, as well as to estimate or foresee the usability level of courses being offered, via Moodle, in the Education area.
Abstract: In this work the UFSC (Federal University of Santa Catarina) and the FGV-RJ (Fundacao Getulio Vargas do Rio de Janeiro) jointly propose the use of a data mining tool to support the analysis of trends and student profiles, as well as to estimate or foresee the usability level of courses being offered, via Moodle, in the Education area. The study carried out by UFSC on the Moodle database allowed a deep understanding of that database, thus making it easier for the Moodle community to execute important tasks, such as the maintenance of the Moodle database, its adaptation following an institutional customization, and, also, a data mart project by the FGV-Online Program to make the necessary analyses possible. At the end of this paper, an example of its applicability is presented, using the association rules technique. Once a data mart oriented to the analysis of the system's usability is developed, various analyses with different objectives can be executed using the database. Some may use the method proposed here or others, including different data mining approaches, such as clustering, neural networks, etc. As such, a new contribution is given to the Moodle community.

24 citations


24 Aug 2007
TL;DR: The architecture of an academic data warehouse is described, providing a centralized source of information accessible across different academic units to quickly analyze problems and get satisfactory solutions.
Abstract: There are several benefits that can be gained by developing an academic data warehouse, such as providing a centralized source of information accessible across different academic units to quickly analyze problems and get satisfactory solutions, supplying the data necessary for developing the Institution's strategic plan, and enabling administrators to make better business decisions based on historical data available in legacy databases. The paper describes the architecture of an academic data warehouse. Examples of analytic reporting are also reported.

20 citations


Journal ArticleDOI
TL;DR: This chapter has used a novel approach to instantiate and solve four versions of the Materialized View Selection (MVS) problem using three sampling techniques and two databases and compared these solutions with the optimal solutions corresponding to the actual problems.
Abstract: In any online decision support system, the backbone is a data warehouse. In order to facilitate rapid response to complex business decision support queries, it is a common practice to materialize an appropriate set of views at the data warehouse. However, it typically requires the solution of the Materialized View Selection (MVS) problem to select the right set of views to materialize in order to achieve a certain level of service given a limited amount of resources such as materialization time, storage space, or view maintenance time. Dynamic changes in the source data and the end users' requirements necessitate rapid and repetitive instantiation and solution of the MVS problem. In an online decision support context, time is of the essence in finding acceptable solutions to this problem. In this chapter, we have used a novel approach to instantiate and solve four versions of the MVS problem using three sampling techniques and two databases. We compared these solutions with the optimal solutions corresponding to the actual problems. In our experimentation, we found that the sampling approach resulted in substantial savings in time while producing good solutions.
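To make the underlying problem concrete, the sketch below shows a classic greedy formulation of materialized view selection: choose views under a storage budget by benefit per unit of space. This is only an illustration of the MVS problem, not the chapter's sampling-based approach; the candidate views, sizes, and benefit figures are hypothetical.

```python
# Minimal sketch of the Materialized View Selection (MVS) problem: pick views to
# materialize under a storage budget, greedily maximizing benefit per unit space.
# NOT the chapter's sampling-based method; sizes and benefits are hypothetical.

def greedy_mvs(candidates, space_budget):
    """candidates: dict view_name -> (size, benefit). Returns selected view names."""
    selected, used = [], 0
    remaining = dict(candidates)
    while remaining:
        # Pick the view with the best benefit-to-size ratio that still fits.
        feasible = {v: (s, b) for v, (s, b) in remaining.items()
                    if used + s <= space_budget}
        if not feasible:
            break
        best = max(feasible, key=lambda v: feasible[v][1] / feasible[v][0])
        size, _ = remaining.pop(best)
        selected.append(best)
        used += size
    return selected

views = {  # hypothetical candidate views: (storage size in MB, estimated query benefit)
    "sales_by_region_month": (120, 900),
    "sales_by_product_day": (400, 1500),
    "returns_by_store": (60, 200),
}
print(greedy_mvs(views, space_budget=300))
```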

18 citations


Patent
31 Jul 2007
TL;DR: In this paper, a data management architecture for a public institution such as a university is able to extract, transform, and load fact data from disparate data sources and store the information in appropriate data marts.
Abstract: A data management architecture for a public institution such as a university is able to extract, transform, and load fact data from disparate data sources and store the information in appropriate data marts. The fact data is correlated with corresponding dimension data, and is correlated with various metrics of a data model, the metrics defined to be useful to the institution. A reporting tool provides user-selectable objects for each metric whereby a user simply selects an object and adds the object to a report in order to automatically generate reports using data from multiple data systems without any manual calculations or reporting by the user. The metrics and data marts can include information for such needs as grants management, financial aid management, admissions, and recruiting. The data marts also can store data over time to provide historical information that may not otherwise be available from the data sources.

12 citations


Book ChapterDOI
27 Jun 2007
TL;DR: B-Fabric is presented, a system developed and running at the Functional Genomics Center Zurich (FGCZ), which provides a core framework for integrating different analytical technologies and data analysis tools, providing the ground for integrative querying and exploitation of systems biology data.
Abstract: Life sciences research in general and systems biology in particular have evolved from the simple combination of theoretical frameworks and experimental hypothesis validation to combined sciences of biology/medicine, analytical technology/chemistry, and informatics/statistics/modeling. Integrating these multiple threads of a research project at the technical and data level requires tight control and systematic workflows for data generation, data management, and data evaluation. Systems biology research emphasizes the use of multiple approaches at various molecular and functional levels, making the use of complementing technologies and the collaboration of many researchers a prerequisite. This paper presents B-Fabric, a system developed and running at the Functional Genomics Center Zurich (FGCZ), which provides a core framework for integrating different analytical technologies and data analysis tools. In addition to data capturing and management, B-Fabric emphasizes the need for quality-controlled scientific annotation of analytical data, providing the ground for integrative querying and exploitation of systems biology data. Users interact with B-Fabric through a simple Web portal making the framework flexible in terms of local infrastructure.

12 citations


Journal ArticleDOI
TL;DR: In this article, a water and power utility taps non-operational data with a power system data mart project; utilities have boosted system efficiency, reduced outages, and managed assets more effectively by investing in substation automation projects.
Abstract: This paper presents how a water and power utility taps non-operational data with a power system data mart project. Power utilities have enjoyed predictable success at boosting system efficiency, reducing outages, and managing assets more effectively by investing in substation automation projects. But few utilities seem to realize they are short-changing their investment returns by failing to fully tap into the wealth of information collected by these automated components and delivering it to decision makers throughout the organization. The ubiquity of this information gap is as unfortunate as it is unnecessary. Every utility that has implemented microprocessor-based devices and supervisory control and data acquisition (SCADA) technology has the ability to amass an incredibly detailed historical record of operational and non-operational data relating to the performance of its generation

11 citations


Posted Content
TL;DR: The ways in which a data warehouse may be developed and the stages of building it are presented, allowing complex analyses which cannot be properly achieved from operational systems.
Abstract: Data warehouses have been developed to answer the increasing demands for quality information required by the top managers and economic analysts of organizations. Their importance in today's business environment is unanimously recognized, as they are the foundation for developing business intelligence systems. Data warehouses offer support for the decision-making process, allowing complex analyses which cannot be properly achieved from operational systems. This paper presents the ways in which a data warehouse may be developed and the stages of building it.

Journal ArticleDOI
TL;DR: In this article, the authors propose an approach supported by a tool to assist the decisional designer in building data mart star schemas from a relational data source; the approach is independent of the semantics of any source information system, classifies relations into R-entities and R-associations, and is based on the structural semantics of the relations, which is disseminated through the primary keys and referential constraints.
Abstract: To assist the decisional designer in building data mart star schemas from a relational data source, we propose an approach supported by a tool. Our approach is independent of the semantics of any source information system; it classifies relations into R-entities and R-associations and is based on the structural semantics of the relations, which is disseminated through the primary keys and referential constraints. This approach extracts facts, dimensions, and hierarchies using a set of appropriate heuristics. A software tool named CAME was developed to support our proposed method; it automatically constructs star-schema data marts.
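The key-based classification the abstract describes can be pictured roughly as follows. This is a simplified sketch, not the CAME tool: the classification rule (a relation whose primary key consists of foreign keys is treated as an R-association, i.e. a fact candidate) and the sample metadata are assumptions.

```python
# Simplified illustration (not the CAME tool): classify relations using only
# primary-key / foreign-key metadata. A relation whose primary key is made up of
# foreign keys is treated as an R-association (fact candidate); the rest are
# R-entities (dimension candidates). All metadata below is hypothetical.

def classify(relations):
    facts, dims = [], []
    for name, meta in relations.items():
        pk, fks = set(meta["pk"]), set(meta["fks"])
        if pk and pk <= fks:   # primary key consists entirely of foreign keys
            facts.append(name)
        else:
            dims.append(name)
    return facts, dims

relations = {
    "Customer":  {"pk": ["cust_id"], "fks": []},
    "Product":   {"pk": ["prod_id"], "fks": []},
    "OrderLine": {"pk": ["cust_id", "prod_id"], "fks": ["cust_id", "prod_id"]},
}
facts, dims = classify(relations)
print("fact candidates (R-associations):", facts)
print("dimension candidates (R-entities):", dims)
```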

Patent
30 Oct 2007
TL;DR: In this paper, a system and method of warranty insight solution are disclosed, which includes populating a data mart with data from a number of sources, text analyzing and mining the unstructured data of the data mart according to a uniform structure.
Abstract: A system and method of warranty insight solution are disclosed. In one embodiment, a method includes populating a data mart with data from a number of sources, text analyzing and mining the unstructured data of the data mart according to a uniform structure, performing root cause analysis assistance on staged data mart data, generating root cause analysis output from the root cause analysis, merging the root cause analysis output with the data of the data mart, and generating final output based on a portion of the merged data of the data mart. The data may include data selected from a group including warranty claim data, traceability data, supplier data, manufacturer data, retailer data, customer data, component data, service data, failure data, field data, vehicle failure fault codes through telematics, and collection center data.

Posted Content
01 Jan 2007
TL;DR: In this paper, the authors present the ways in which a data warehouse may be developed and the stages of building it, as well as the requirements of data warehouse for decision-making process.
Abstract: Data warehouses have been developed to answer the increasing demands for quality information required by the top managers and economic analysts of organizations. Their importance in today's business environment is unanimously recognized, as they are the foundation for developing business intelligence systems. Data warehouses offer support for the decision-making process, allowing complex analyses which cannot be properly achieved from operational systems. This paper presents the ways in which a data warehouse may be developed and the stages of building it.

Journal Article
TL;DR: Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator; complex queries that could not be posed on a text-based database are now easily implemented.
Abstract: An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier, text (flat file)-based database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, the data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.

Book
19 Jun 2007
TL;DR: The book addresses economists and managers who want a guided overview of the field of Business Intelligence, as well as students of business administration and of business information systems.
Abstract: Revision with unchanged content. During the past few years, fast changes in the economic environment and massive technological developments in the field of information technology have constituted challenges for many companies. The growing capability to collect and store loads of internal and external data makes it more and more difficult for a business's management to find the really important pieces of information. This situation of information overload is tackled by a variety of analytic concepts and tools which can be subsumed under the opalescent term “Business Intelligence”. This book introduces these different concepts and then presents the most important tools, such as data warehouses and data marts, online analytical processing, data mining, and reporting. The book aims at a presentation that is not focused on information technology and evaluates the relevance of the tools introduced for companies' daily operations. It therefore addresses economists and managers (even of small and medium enterprises) who want a guided overview of the field of Business Intelligence, as well as students of business administration and of business information systems.

01 Jan 2007
TL;DR: This work uses an adapted concept of k-anonymity for distributed data sources and includes various customisation parameters in the anonymisation process to guarantee that the transformed data is still applicable for further processing.
Abstract: Gene expression profiling is a sophisticated method to discover differences in activation patterns of genes between different patient collectives. By reasonably defining patient groups from a medical point of view, subsequent gene expression analysis may reveal disease-related gene expression patterns that are applicable for tumor markers and pharmacological target identification. When releasing patient-specific data for medical studies, privacy protection has to be guaranteed for ethical and legal reasons. k-anonymisation may be used to generate a sufficient number of k data twins in order to ensure that sensitive data used in analyses is protected from being linked to individuals. We use an adapted concept of k-anonymity for distributed data sources and include various customisation parameters in the anonymisation process to guarantee that the transformed data is still applicable for further processing. We present a real-world medical-relevant use case and show how the related data is materialised, anonymised, and released in a data mart for testing the related hypotheses.
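For readers unfamiliar with k-anonymity, the following is a minimal single-source sketch of the idea: quasi-identifier combinations are generalized until each occurs at least k times. It is not the paper's distributed, parameterised anonymisation process; the quasi-identifiers, generalization step, and records are invented.

```python
# Minimal single-source k-anonymity sketch (NOT the paper's distributed variant):
# generalize quasi-identifiers until every combination occurs at least k times.
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(c >= k for c in counts.values())

def generalize_age(record, width=10):
    lo = (record["age"] // width) * width
    record = dict(record)
    record["age"] = f"{lo}-{lo + width - 1}"      # e.g. 43 -> "40-49"
    return record

patients = [  # hypothetical, already de-identified records
    {"age": 43, "zip": "8001", "diagnosis": "D1"},
    {"age": 47, "zip": "8001", "diagnosis": "D2"},
    {"age": 45, "zip": "8001", "diagnosis": "D1"},
]
quasi = ["age", "zip"]
if not is_k_anonymous(patients, quasi, k=3):
    patients = [generalize_age(p) for p in patients]
print(is_k_anonymous(patients, quasi, k=3), patients)
```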

Patent
04 Sep 2007
TL;DR: In this article, the authors present a data mart optimized for the auto insurance industry, which contains data pertaining to auto insurance policies, vehicles, operators, coverage, and incidents; corresponding perspectives may be constructed to allow the user to access the data in the data cubes.
Abstract: Methods and systems are disclosed for allowing a user to quickly and easily generate market performance analysis reports. The methods and systems use data mart and on-line analytical processing (OLAP) technology to provide users with summary and detailed information without requiring the user to have specific programming skills. In one implementation, the methods and systems may provide a data mart optimized for the auto insurance industry. Such a data mart may contain data pertaining to auto insurance policies, vehicles, operators, coverage, and incidents. Data cubes may be used to organize the data in the data mart according to one or more dimensions. Corresponding perspectives may be constructed to allow the user to access the data in the data cubes. Report templates provide a starting point that the user may modify for dynamic data exploration or to dive deeper into the data “on the fly.”
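For illustration, a small "data cube" over policy facts can be approximated with a pandas pivot table, summing measures across two dimensions. The column names and figures below are invented and are not the patent's data mart schema or OLAP implementation.

```python
# Illustration only: a small "data cube" over hypothetical auto-policy facts,
# approximated with a pandas pivot table (state x coverage dimensions, summed
# measures). Column names and values are invented, not the patent's schema.
import pandas as pd

facts = pd.DataFrame({
    "state":    ["TX", "TX", "CA", "CA", "CA"],
    "coverage": ["liability", "collision", "liability", "collision", "liability"],
    "premium":  [420.0, 610.0, 530.0, 700.0, 480.0],
    "claims":   [0, 1, 0, 2, 1],
})

cube = facts.pivot_table(index="state", columns="coverage",
                         values=["premium", "claims"], aggfunc="sum")
print(cube)
```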

Journal Article
TL;DR: DTS is demonstrated to be an attractive database solution for automatic data transfer and update in solving business problems.
Abstract: Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS (1) as a database solution for automatic data transfer and update in solving business problems. This DTS package was developed for the sales of a variety of plants and eventually expanded into a commercial supply and landscaping business. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle, consolidating it into a data mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter of the year to support efficient sales analysis. DTS is therefore an attractive solution for automatic data transfer and update that meets today's business needs.
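The general extract-transform-load pattern described above can be sketched in plain Python; here sqlite3 stands in for both the heterogeneous sources (MySQL, Access, Oracle) and the SQL Server data mart, since DTS itself is a SQL Server-specific tool that is not reproduced here. Table and column names are hypothetical.

```python
# Generic ETL sketch of the consolidation pattern described above. sqlite3 stands in
# for the heterogeneous sources and the target data mart; all names are invented.
import sqlite3

def extract(source_conn):
    return source_conn.execute(
        "SELECT sale_date, plant_name, qty, unit_price FROM sales").fetchall()

def transform(rows):
    # Derive the revenue measure and normalize the plant-name dimension value.
    return [(d, name.strip().title(), qty, qty * price) for d, name, qty, price in rows]

def load(mart_conn, rows):
    mart_conn.execute("""CREATE TABLE IF NOT EXISTS sales_fact
                         (sale_date TEXT, plant TEXT, qty INTEGER, revenue REAL)""")
    mart_conn.executemany("INSERT INTO sales_fact VALUES (?, ?, ?, ?)", rows)
    mart_conn.commit()

source = sqlite3.connect(":memory:")        # hypothetical source database
source.execute("CREATE TABLE sales (sale_date TEXT, plant_name TEXT, qty INT, unit_price REAL)")
source.execute("INSERT INTO sales VALUES ('2007-03-01', ' fern ', 12, 4.5)")
mart = sqlite3.connect(":memory:")          # stands in for the SQL Server data mart
load(mart, transform(extract(source)))
print(mart.execute("SELECT * FROM sales_fact").fetchall())
```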

Patent
05 Jul 2007
TL;DR: In this paper, a direct mail distribution system is described that processes personal information in FSP data held by a retailer or the like into information that does not allow identification of an individual, provides it to a third party as purchase analysis data, and distributes direct mail produced by the third party on the basis of the purchase analysis data to customers.
Abstract: PROBLEM TO BE SOLVED: To provide a direct mail distribution system that processes personal information in FSP data held by a retailer or the like into information not allowing identification of an individual, provides it to a third party as purchase analysis data, and distributes direct mail produced by the third party on the basis of the purchase analysis data to customers. SOLUTION: The FSP data held by the retailer are processed into information not allowing identification of the individual by a personal information filter 8 using an encryption key 9, and are combined with POS data to provide a data mart 11 for analyzing purchase results to the third party; the direct mail produced by the third party is transmitted as part of sales promotion activity based on the purchase results. COPYRIGHT: (C)2007,JPO&INPIT

Proceedings ArticleDOI
26 Apr 2007
TL;DR: A model of u-CRM centered on a constitution diagram is proposed, comparing u-CRM utilizing conventional CRM and RFID techniques, to solve the problem that information collected using RFID might become useless.
Abstract: As a wireless, contactless recognition technique, RFID technology has attracted much attention as a technology that can bring innovative change to distribution activities and information-delivery systems, overcoming the limitations of conventional bar codes: slow recognition, low recognition rates, and low storage capacity. At the same time, it is the core technique providing the sensor function in a ubiquitous network. Since u-CRM utilizing RFID manages customers by collecting and storing customer data in the ubiquitous environment, and can acquire customer information in real time and analyze it at once, it can provide ready service. However, when a data warehouse or data mart built using RFID techniques is applied to a CRM system, time is required to refine the data. Unless the data are refined and processed in real time, there is a possibility that the information collected using RFID might become useless. To solve this problem, we propose a model of a u-CRM system with powerful efficiency. First, we propose a model of u-CRM centered on a constitution diagram, comparing u-CRM utilizing conventional CRM and RFID techniques.

Journal ArticleDOI
TL;DR: This study aims to develop a comprehensive GIS-based traffic accident database system through the integration of hospital-based data, police data, and road inventory data.
Abstract: This study aims to develop a comprehensive GIS-based traffic accident database system through the integration of hospital-based data, police data, and road inventory data. To determine how to integrate the data from the three data sources, a data taxonomy is utilized. Available data are hierarchically classified based on their shared characteristics. Grouping data in this way is useful for understanding, designing, and building an integrated data system. A data warehouse, a common data storage approach to integration, is utilized for the data integration, and GIS is an enabling technology for the integration as well. The scope of data integration is established by identifying the model of the target data (or integrated data) and identifying the disparate data that would be mapped to the target data. During the physical data integration process, data from the three data sources are extracted, transformed, cleaned, and finally loaded into an integrated data source, a data mart or data warehouse.

01 Jan 2007
TL;DR: This paper presents an optimized approach to create SAS data sets directly from a traditional RDBMS data warehouse, using multiple data loading strategies such as loading from a flat file, loading from denormalized and normalized data sources, and loading from materialized and non-materialized views.
Abstract: Characteristically, the data warehouse is a central repository of vast amounts of disparate data. The process of cleansing, extracting, transforming and organizing this data into an enterprise intelligence platform (data mart) presents unique challenges. An optimized approach towards performance-driven data integration should eliminate redundancy and incorporate accuracy, timeliness, automation and quality. Such a process/system brings greater agility, better decision-making and reduced cost and risk to performing insightful statistical analysis and reporting. This paper presents an optimized approach to create SAS data sets directly from a traditional RDBMS data warehouse. The authors compare multiple data loading strategies such as loading from a flat file, loading from denormalized and normalized data sources, and loading from materialized and non-materialized views. Benchmark numbers and significant performance gains of up to 50% are highlighted in conjunction with these strategies. Further, the paper discusses the merits of performing QC on non-traditional data sources such as views. Lastly, several optimization techniques enabling rapid data transfers are also suggested.

Journal ArticleDOI
TL;DR: This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
Abstract: Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration reflect mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
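The "Rosetta stone" translation step can be pictured with a few lines of Python: a provider's local record is mapped into a shared XML vocabulary that a web service could then expose. The element and field names below are placeholders, not the actual WEAR schema.

```python
# Sketch of the "Rosetta stone" step: translate one provider's local record into a
# shared XML vocabulary with xml.etree.ElementTree. Element and field names are
# placeholders, not the actual WEAR schema.
import xml.etree.ElementTree as ET

local_record = {"statureCm": 172.4, "subjectId": "S-0042", "sex": "F"}
name_map = {"statureCm": "stature", "subjectId": "subject_id", "sex": "sex"}

measurement = ET.Element("measurement", attrib={"units": "cm"})
for local_name, common_name in name_map.items():
    ET.SubElement(measurement, common_name).text = str(local_record[local_name])

print(ET.tostring(measurement, encoding="unicode"))
# A web service would return documents like this, so each database stays local
# while remaining searchable through the common schema.
```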

Journal Article
Wang Qing
TL;DR: Practice on the Dainan Formation in the Jinhu Depression showed that this platform can provide an efficient software tool for synthetic geology research in petroleum exploration.
Abstract: A synthetic research platform for petroleum geology must realize data-processing functions such as integration, synthesized visualization, and quantitative analysis. It adopts a three-layer framework consisting of a data mart, a data platform, and an application system. The data mart is used to integrate multi-subject datasets, and specialist requirements can be met by using technologies such as computer-aided formation correlation, overlay display of map layers, thematic maps, 3-D visualization, and overlay analysis methods based on spatial data. Practice on the Dainan Formation in the Jinhu Depression showed that this platform can provide an efficient software tool for synthetic geology research in petroleum exploration.

Proceedings ArticleDOI
18 Jun 2007
TL;DR: All of the algorithms developed in this paper were tested using a dataset from the 10-ton dump truck of the family of medium tactical vehicles (FMTV), with sensors measuring everything from GPS position to engine speed to strain rate.
Abstract: VISION (versatile information system - integrated, on-line) represents a comprehensive, holistic, top down approach to information collection, management, and ultimate transformation into knowledge. VISION employs a Web based approach that combines a modular instrumentation suite and a digital library (VDLS) with modern communications technology. VDLS is a Web based collaborative knowledge management system that incorporates associated data marts and additional test information sources such as reports, analyses and documents. One of these data marts contains engineering performance data. A Web based online analytical processing (OLAP) toolbox has been developed to allow querying of this data mart via a Java graphical user interface (GUI). Currently, the OLAP toolbox contains functions to perform metadata searches, view a Global Positioning System (GPS) map of the location of the item under test, view time series traces of all the parameters, generate custom plots of any parameter or download raw data files. The work described in this paper adds additional functions to this toolbox that allows test engineers, data analysts or mechanical engineers to quickly find the data of interest. Tools were developed that use wavelet transforms for both data validation purposes and for data de-noising. Other tools were created that use Google Earth to plot GPS coordinates as markers where each marker contains a balloon of information (e.g., time, date, latitude, longitude, speed, direction). In addition, Google Earth was used in a spatial data mining application where the GPS coordinates of extreme values (i.e., values exceeding a given threshold) are plotted along the track. All of the algorithms developed in this paper were tested using a dataset from the 10 ton dump truck of the family of medium tactical vehicles (FMTV). Up to 35 sensors were used in these tests measuring everything from GPS position to engine speed to strain rate. The tests were conducted at the Aberdeen Test Center on a variety of courses covering everything from smooth paved surfaces to block gravel to dirt.
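Two of the toolbox ideas, wavelet de-noising of a sensor trace and plotting threshold exceedances as Google Earth placemarks, can be sketched as follows using PyWavelets and hand-written KML. This is not the VISION/VDLS toolbox itself; the wavelet choice, thresholds, coordinates, and signal are assumptions.

```python
# Minimal sketch: wavelet de-noising of a sensor trace plus KML placemarks for
# threshold-exceeding samples. Not the VISION/VDLS toolbox; data are synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 6 * np.pi, 256)) + 0.3 * rng.standard_normal(256)

# Soft-threshold the detail coefficients, then reconstruct the de-noised trace.
coeffs = pywt.wavedec(signal, "db4", level=4)
coeffs = [coeffs[0]] + [pywt.threshold(c, value=0.25, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: len(signal)]

# Emit a KML placemark for every sample exceeding a (hypothetical) threshold.
gps = [(39.47 + i * 1e-4, -76.16 + i * 1e-4) for i in range(len(denoised))]
placemarks = [
    f"<Placemark><name>exceedance {i}</name>"
    f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>"
    for i, (lat, lon) in enumerate(gps) if abs(denoised[i]) > 0.95
]
kml = "<kml><Document>" + "".join(placemarks) + "</Document></kml>"
print(len(placemarks), "exceedance markers written to KML")
```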

Book ChapterDOI
01 Jan 2007
TL;DR: This chapter reviews the physical design issues that arise relative to these decision support technologies, illustrates some of the solutions with examples, and discusses the use of materialized views for faster query response in the data warehouse environment.
Abstract: This chapter focuses on two decision support technologies: data warehousing and Online Analytical Processing (OLAP). It reviews the physical design issues that arise relative to these decision support technologies, and some of the solutions are illustrated with examples. The chapter discusses the use of materialized views for faster query response in the data warehouse environment. The different general categories of OLAP storage are described, including relational (ROLAP), multidimensional (MOLAP), and hybrid (HOLAP), based on data density. The dimensional design approach is covered briefly, with examples illustrating star and snowflake schemas. The usefulness of the data warehouse bus is demonstrated with an example, showing the relationship of conformed dimensions across multiple business processes. The data warehouse bus leads to a data warehouse constellation schema with the possibility of developing a data mart for each business process. Approaches toward efficient processing are discussed, including some hardware approaches, the appropriate use of bitmap indexes, various materialized view update strategies, and the partitioning of data.
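As a concrete illustration of the dimensional-design and materialized-view ideas summarized above, the snippet below builds a tiny star schema and a precomputed aggregate with sqlite3. A plain summary table stands in for a materialized view (which SQLite does not support natively), and all table and column names are invented.

```python
# Tiny star-schema illustration of the chapter's dimensional-design ideas, run
# through sqlite3. A summary table stands in for a materialized view; names invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_date   (date_key INTEGER PRIMARY KEY, year INT, month INT);
CREATE TABLE dim_store  (store_key INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE fact_sales (date_key INT REFERENCES dim_date,
                         store_key INT REFERENCES dim_store,
                         amount REAL);
INSERT INTO dim_date  VALUES (1, 2007, 1), (2, 2007, 2);
INSERT INTO dim_store VALUES (10, 'East'), (11, 'West');
INSERT INTO fact_sales VALUES (1, 10, 100.0), (1, 11, 80.0), (2, 10, 120.0);

-- "Materialized view": a precomputed aggregate for faster query response.
CREATE TABLE mv_sales_by_region_month AS
SELECT d.year, d.month, s.region, SUM(f.amount) AS total
FROM fact_sales f JOIN dim_date d USING (date_key) JOIN dim_store s USING (store_key)
GROUP BY d.year, d.month, s.region;
""")
print(db.execute("SELECT * FROM mv_sales_by_region_month").fetchall())
```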