Showing papers on "Data warehouse published in 2006"


Book
01 Jan 2006
TL;DR: Decision Support and Business Intelligence Systems 9e provides the only comprehensive, up-to-date guide to today's revolutionary management support system technologies, and showcases how they can be used for better decision-making.
Abstract: Decision Support and Business Intelligence Systems 9e provides the only comprehensive, up-to-date guide to today's revolutionary management support system technologies, and showcases how they can be used for better decision-making. KEY TOPICS: Decision Support Systems and Business Intelligence. Decision Making, Systems, Modeling, and Support. Decision Support Systems Concepts, Methodologies, and Technologies: An Overview. Modeling and Analysis. Data Mining for Business Intelligence. Artificial Neural Networks for Data Mining. Text and Web Mining. Data Warehousing. Collaborative Computer-Supported Technologies and Group Support Systems. Knowledge Management. Artificial Intelligence and Expert Systems. Advanced Intelligent Systems. Management Support Systems: Emerging Trends and Impacts. Ideal for practicing managers interested in the foundations and applications of BI, group support systems (GSS), knowledge management, ES, data mining, intelligent agents, and other intelligent systems.

749 citations


Patent
28 Nov 2006
TL;DR: In this article, the authors describe systems and methods for data classification to facilitate and improve data management within an enterprise and present methods for generating a data structure of metadata that describes system data and storage operations.
Abstract: Systems and methods for data classification to facilitate and improve data management within an enterprise are described. The disclosed systems and methods evaluate and define data management operations based on data characteristics rather than data location, among other things. Also provided are methods for generating a data structure of metadata that describes system data and storage operations. This data structure may be consulted to determine changes in system data rather than scanning the data files themselves.
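
The patent's central idea, consulting a metadata catalog instead of rescanning the data files themselves, can be sketched in a few lines. The class and method names below (MetadataIndex, record_operation, changed_since) are hypothetical illustrations, not the patent's actual design:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FileMetadata:
    path: str
    size: int
    modified: datetime
    tags: set = field(default_factory=set)  # classification labels, e.g. {"finance", "pii"}

class MetadataIndex:
    """Catalog of per-file metadata, kept current as storage operations occur."""
    def __init__(self):
        self._entries = {}

    def record_operation(self, meta: FileMetadata):
        # Called by the storage layer on create/modify, so that change
        # detection never has to scan the data files themselves.
        self._entries[meta.path] = meta

    def changed_since(self, cutoff: datetime):
        # Answer "what changed?" from the catalog alone.
        return [m for m in self._entries.values() if m.modified >= cutoff]

    def select(self, tag: str):
        # Drive data management by data characteristics (tags), not location.
        return [m for m in self._entries.values() if tag in m.tags]
```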

633 citations



Proceedings Article
01 Jan 2006
TL;DR: The benefits of a research system that gathers medical records from various hospital systems and stores them centrally in one data warehouse are calculated to help justify its establishment in other healthcare entities.
Abstract: The Research Patient Data Repository (RPDR) is a clinical data registry that gathers medical records from various hospital systems and stores them centrally in one data warehouse. Research investigators can obtain aggregate totals of patients that meet specific query criteria and can obtain patient identifiers and complete electronic medical records through the RPDR with IRB approval. The RPDR is a critical resource to the Partners HealthCare System research community and supports many millions of dollars in clinical research. We have calculated the benefits of such a research system to help justify its establishment in other healthcare entities.

236 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe informatics frameworks for ecology, from subject-specific data warehouses, to generic data collections that use detailed metadata descriptions and formal ontologies to catalog and cross-reference information.
Abstract: Bioinformatics, the application of computational tools to the management and analysis of biological data, has stimulated rapid research advances in genomics through the development of data archives such as GenBank, and similar progress is just beginning within ecology. One reason for the belated adoption of informatics approaches in ecology is the breadth of ecologically pertinent data (from genes to the biosphere) and its highly heterogeneous nature. The variety of formats, logical structures, and sampling methods in ecology create significant challenges. Cultural barriers further impede progress, especially for the creation and adoption of data standards. Here we describe informatics frameworks for ecology, from subject-specific data warehouses, to generic data collections that use detailed metadata descriptions and formal ontologies to catalog and cross-reference information. Combining these approaches with automated data integration techniques and scientific workflow systems will maximize the value of data and open new frontiers for research in ecology.

209 citations


Book
01 Jan 2006
TL;DR: A book surveying data mining in business, banking, commercial applications, and insurance, as well as privacy and other major issues in data mining and knowledge discovery, and active data mining.
Abstract: Chapters include: Introduction to Data Mining Principles; Data Warehousing, Data Mining, and OLAP; Data Marts and Data Warehouse; Evolution and Scaling of Data Mining Algorithms; Emerging Trends and Applications of Data Mining; Data Mining Trends and Knowledge Discovery; Data Mining Tasks, Techniques, and Applications; Data Mining: an Introduction - Case Study; Data Mining & KDD; Statistical Themes and Lessons for Data Mining; Theoretical Frameworks for Data Mining; Major and Privacy Issues in Data Mining and Knowledge Discovery; Active Data Mining; Decomposition in Data Mining - A Case Study; Data Mining System Products and Research Prototypes; Data Mining in Customer Value and Customer Relationship Management; Data Mining in Business; Data Mining in Sales, Marketing and Finance; Banking and Commercial Applications; Data Mining for Insurance; Data Mining in Biomedicine and Science; Text and Web Mining; Data Mining in Information Analysis and Delivery; Data Mining in Telecommunications and Control; Data Mining in Security.

206 citations


Patent
30 Jun 2006
TL;DR: In this paper, a system and method of making unstructured data available to structured data analysis tools is presented, which includes middleware software that can be used in combination with structured data tools to perform analysis on both structured and unstructured data.
Abstract: A system and method of making unstructured data available to structured data analysis tools. The system includes middleware software that can be used in combination with structured data tools to perform analysis on both structured and unstructured data. Data can be read from a wide variety of unstructured sources. The data may then be transformed with commercial data transformation products that may, for example, extract individual pieces of data and determine relationships between the extracted data. The transformed data and relationships may then be passed through an extraction/transform/load (ETL) layer and placed in a structured schema. The structured schema may then be made available to commercial or proprietary structured data analysis tools.
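
A minimal sketch of the described pipeline, assuming a toy regex extractor in place of the commercial transformation products and SQLite standing in for the structured schema:

```python
import re
import sqlite3

def extract(text):
    """Transformation step: pull (person, organization) pairs from free text."""
    # A real system would use commercial extraction products; a regex stands in here.
    return re.findall(r"(\w+) works at (\w+)", text)

def load(pairs, conn):
    """ETL load step: place extracted data and relationships into a structured schema."""
    conn.execute("CREATE TABLE IF NOT EXISTS employment (person TEXT, org TEXT)")
    conn.executemany("INSERT INTO employment VALUES (?, ?)", pairs)

conn = sqlite3.connect(":memory:")
load(extract("Alice works at Initech. Bob works at Globex."), conn)
# The structured schema is now queryable by ordinary analysis tools:
print(conn.execute("SELECT org, COUNT(*) FROM employment GROUP BY org").fetchall())
```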

201 citations


Journal ArticleDOI
TL;DR: BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining.
Abstract: This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) and facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
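
The enzyme-gap query described above can be approximated with a single SQL join once the sources share one schema. The table layout below is an illustrative stand-in, not BioWarehouse's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Illustrative stand-ins for warehouse tables; BioWarehouse's real schema differs.
conn.executescript("""
CREATE TABLE enzyme (ec_number TEXT PRIMARY KEY, name TEXT);
CREATE TABLE protein (id INTEGER PRIMARY KEY, ec_number TEXT, sequence TEXT);
INSERT INTO enzyme VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                          ('4.2.1.17', 'enoyl-CoA hydratase');
INSERT INTO protein VALUES (1, '1.1.1.1', 'MSTAG...');
""")
# Multi-database query made possible by the common schema: which EC-numbered
# activities have no sequence in any loaded source database?
rows = conn.execute("""
    SELECT e.ec_number, e.name FROM enzyme e
    LEFT JOIN protein p ON p.ec_number = e.ec_number
    WHERE p.id IS NULL
""").fetchall()
print(rows)  # [('4.2.1.17', 'enoyl-CoA hydratase')]
```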

195 citations


Proceedings ArticleDOI
10 Nov 2006
TL;DR: Issues regarding conceptual models, logical models, methods for design, interoperability, and design for new architectures and applications are considered.
Abstract: Multidimensional modeling requires specialized design techniques. Though a lot has been written about how a data warehouse should be designed, there is no consensus on a design method yet. This paper follows from a wide discussion that took place in Dagstuhl, during the Perspectives Workshop "Data Warehousing at the Crossroads", and is aimed at outlining some open issues in modeling and design of data warehouses. More precisely, issues regarding conceptual models, logical models, methods for design, interoperability, and design for new architectures and applications are considered.

189 citations


Book
28 Nov 2006
TL;DR: Making Sense of Data educates readers on the steps and issues that need to be considered in order to successfully complete a data analysis or data mining project and appropriately treats technical topics to accomplish effective decision making from data.
Abstract: A practical, step-by-step approach to making sense out of data. Making Sense of Data educates readers on the steps and issues that need to be considered in order to successfully complete a data analysis or data mining project. The author provides clear explanations that guide the reader to make timely and accurate decisions from data in almost every field of study. A step-by-step approach aids professionals in carefully analyzing data and implementing results, leading to the development of smarter business decisions. With a comprehensive collection of methods from both data analysis and data mining disciplines, this book describes the issues that need to be considered, the steps that need to be taken, and appropriately treats technical topics to accomplish effective decision making from data. Readers are given a solid foundation in the procedures associated with complex data analysis or data mining projects and are provided with concrete discussions of the most universal tasks and technical solutions related to the analysis of data, including: problem definitions; data preparation; data visualization; data mining; statistics; grouping methods; predictive modeling; deployment issues and applications. Throughout the book, the author examines why these multiple approaches are needed and how these methods will solve different problems. Processes, along with methods, are carefully and meticulously outlined for use in any data analysis or data mining project. From summarizing and interpreting data, to identifying non-trivial facts, patterns, and relationships in the data, to making predictions from the data, Making Sense of Data addresses the many issues that need to be considered as well as the steps that need to be taken to master data analysis and mining.

168 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe how Continental Airlines is a leader in real-time business intelligence, and much can be learned from how they have implemented it and how they use it.
Abstract: Data management for decision support has moved through three generations, with the latest being real-time data warehousing. This latest generation is significant because of its potential for affecting tactical decision making and business processes. Continental Airlines is a leader in real-time business intelligence, and much can be learned from how they have implemented it.

Book
30 Oct 2006
TL;DR: This text provides theoretical frameworks, presents challenges and their possible solutions, and examines the latest empirical research findings in the area of data warehousing.
Abstract: Covering a wide range of technical, technological, and research issues, this text provides theoretical frameworks, presents challenges and their possible solutions, and examines the latest empirical research findings in the area of data warehousing.

Journal ArticleDOI
01 Dec 2006
TL;DR: This paper presents a UML-based data warehouse design method that spans the three design phases (conceptual, logical and physical), and represents all the metamodels using UML, and illustrates the formal specification of the transformations based on OMG's Object Constraint Language (OCL).
Abstract: Data warehouses are a major component of data-driven decision support systems (DSS). They rely on multidimensional models. The latter provide decision makers with a business-oriented view to data, thereby easing data navigation and analysis via On-Line Analytical Processing (OLAP) tools. They also determine how the data are stored in the data warehouse for subsequent use, not only by OLAP tools, but also by other decision support tools. Data warehouse design is a complex task, which requires a systematic method. Few such methods have been proposed to date. This paper presents a UML-based data warehouse design method that spans the three design phases (conceptual, logical and physical). Our method comprises a set of metamodels used at each phase, as well as a set of transformations that can be semi-automated. Following our object orientation, we represent all the metamodels using UML, and illustrate the formal specification of the transformations based on OMG's Object Constraint Language (OCL). Throughout the paper, we illustrate the application of our method to a case study.
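
The paper specifies its transformations in OCL over UML metamodels; as a rough analogue only, a conceptual-to-logical step that derives a relational star schema from a cube description might look like this (all names hypothetical):

```python
# Hypothetical conceptual model: one fact with measures and dimension attributes.
cube = {
    "fact": "sales",
    "measures": ["amount", "quantity"],
    "dimensions": {"date": ["day", "month", "year"],
                   "store": ["store_id", "city"]},
}

def to_star_schema(cube):
    """Conceptual-to-logical transformation: emit DDL for a star schema."""
    ddl = []
    for dim, attrs in cube["dimensions"].items():
        cols = ", ".join(f"{a} TEXT" for a in attrs)
        ddl.append(f"CREATE TABLE dim_{dim} ({dim}_key INTEGER PRIMARY KEY, {cols});")
    fk = ", ".join(f"{d}_key INTEGER REFERENCES dim_{d}" for d in cube["dimensions"])
    ms = ", ".join(f"{m} REAL" for m in cube["measures"])
    ddl.append(f"CREATE TABLE fact_{cube['fact']} ({fk}, {ms});")
    return "\n".join(ddl)

print(to_star_schema(cube))
```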

Book
13 Jan 2006
TL;DR: This volume collects chapters on information modeling, ranging from generic relationships and schema-based matching to integrating the two main inference modes of NKRL, Transformations and Hypotheses.
Abstract: Chapters include: Generic Relationships in Information Modeling; EMMA - A Formal Basis for Querying Enhanced Multimedia Meta Objects; Comparing and Transforming Between Data Models Via an Intermediate Hypergraph Data Model; iASA: Learning to Annotate the Semantic Web; A Survey of Schema-Based Matching Approaches; An Overview and Classification of Adaptive Approaches to Information Extraction; View Integration and Cooperation in Databases, Data Warehouses and Web Information Systems; Semantic Integration of Tree-Structured Data Using Dimension Graphs; KDD Support Services Based on Data Semantics; Integrating the Two Main Inference Modes of NKRL, Transformations and Hypotheses.

Journal ArticleDOI
01 Nov 2006
TL;DR: The basic concept of document warehousing is discussed and its formal definitions are presented; a general system framework is proposed, and some useful applications are elaborated to illustrate the importance of document warehousing.
Abstract: During the past decade, data warehousing has been widely adopted in the business community. It provides multi-dimensional analyses on cumulated historical business data to help contemporary administrative decision-making. Nevertheless, it is believed that only about 20% of information can be extracted from data warehouses, which concern numeric data only; the other 80% is hidden in non-numeric data or even in documents. Therefore, many researchers now advocate that it is time to conduct research on document warehousing to capture complete business intelligence. Document warehouses, unlike traditional document management systems, include extensive semantic information about documents, cross-document feature relations, and document grouping or clustering to provide more accurate and more efficient access to text-oriented business intelligence. In this paper, we discuss the basic concept of document warehousing and present its formal definitions. Then, we propose a general system framework and elaborate some useful applications to illustrate the importance of document warehousing. The work is essential for establishing an infrastructure that combines text processing with numeric OLAP processing technologies. The combination of data warehousing and document warehousing will be one of the most important kernels of knowledge management and customer relationship management applications.
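
As a loose illustration of the kind of per-document semantic features and cross-document relations a document warehouse maintains, here is a toy sketch; a real system would use NLP-derived features, not word counts:

```python
from collections import Counter

docs = {
    "d1": "quarterly revenue grew in the retail segment",
    "d2": "retail revenue declined amid supply issues",
    "d3": "new hiring policy announced for engineering",
}

def features(text):
    # Semantic information per document; word frequencies stand in for real features.
    return Counter(w for w in text.split() if len(w) > 4)

def group_by_shared_feature(docs, term):
    # Cross-document feature relation: all documents mentioning a given term.
    return [d for d, text in docs.items() if term in features(text)]

print(group_by_shared_feature(docs, "retail"))  # ['d1', 'd2']
```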

Journal Article
TL;DR: This study surveys the literature for the methodologies proposed or developed for entity resolution and record linkage and provides a foundation for solving many problems in data warehousing.
Abstract: A great deal of research is focused on the formation of a data warehouse. This is an important area of research, as it could save many computation cycles and thus allow accurate information to be provided to the right people at the right time. Two considerations when forming a data warehouse are data cleansing (including entity resolution) and schema integration (including record linkage). Uncleansed and fragmented data requires time to decipher and may lead to increased costs for an organization, so data cleansing and schema integration can save a great many (human) computation cycles and can lead to higher organizational efficiency. In this study we survey the literature for the methodologies proposed or developed for entity resolution and record linkage. This survey provides a foundation for solving many problems in data warehousing. For instance, little or no research has been directed at the problem of maintenance of cleansed and linked relations.
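
The survey covers many linkage methodologies; one common family is similarity-based field matching. A minimal sketch, assuming Python's difflib as the similarity measure and an illustrative 0.85 threshold:

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.85):
    """Field-level similarity for record linkage; the threshold is illustrative."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def link(records_a, records_b):
    # Pair records from two source schemas that likely denote the same entity.
    return [(ra, rb) for ra in records_a for rb in records_b
            if similar(ra["name"], rb["name"])]

crm = [{"name": "Jon Q. Smith"}, {"name": "Ada Lovelace"}]
billing = [{"name": "John Q Smith"}, {"name": "A. Lovelace"}]
print(link(crm, billing))  # the Smith pair links; 'A. Lovelace' may fall below threshold
```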

Proceedings ArticleDOI
10 Nov 2006
TL;DR: It is argued that ontologies constitute a very suitable model for this purpose and how the usage of ontologies can enable a high degree of automation regarding the construction of an ETL design is shown.
Abstract: One of the most important tasks performed in the early stages of a data warehouse project is the analysis of the structure and content of the existing data sources and their intentional mapping to a common data model. Establishing the appropriate mappings between the attributes of the data sources and the attributes of the data warehouse tables is critical in specifying the required transformations in an ETL workflow. The selected data model should be suitable for facilitating the redefinition and revision efforts typically occurring during the early phases of a data warehouse project, and serve as the means of communication between the involved parties. In this paper, we argue that ontologies constitute a very suitable model for this purpose and show how the usage of ontologies can enable a high degree of automation regarding the construction of an ETL design.
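
The gist, deriving source-to-warehouse attribute mappings from shared ontology concepts, can be sketched with dictionaries standing in for the ontology (in the paper's setting this would be a formal OWL ontology):

```python
# Toy ontology: each attribute is annotated with the concept it denotes.
source_attrs = {"cust_nm": "CustomerName", "tot_amt": "SaleAmount", "dt": "SaleDate"}
warehouse_attrs = {"customer_name": "CustomerName", "amount": "SaleAmount",
                   "sale_date": "SaleDate"}

def derive_mappings(source, warehouse):
    """Automated ETL mapping: pair attributes that denote the same concept."""
    by_concept = {concept: col for col, concept in warehouse.items()}
    return {src: by_concept[concept]
            for src, concept in source.items() if concept in by_concept}

print(derive_mappings(source_attrs, warehouse_attrs))
# {'cust_nm': 'customer_name', 'tot_amt': 'amount', 'dt': 'sale_date'}
```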

Journal ArticleDOI
09 Oct 2006
TL;DR: A new outlier detection algorithm is introduced to find small groups of data objects that are exceptional when compared with the rest of the data, as part of a three-step approach to detecting spatio-temporal outliers in large databases.
Abstract: Outlier detection is one of the major data mining methods. This paper proposes a three-step approach to detect spatio-temporal outliers in large databases. These steps are clustering, checking spatial neighbors, and checking temporal neighbors. In this paper, we introduce a new outlier detection algorithm to find small groups of data objects that are exceptional when compared with the rest of the large amount of data. In contrast to the existing outlier detection algorithms, the new algorithm has the ability to discover outliers according to the non-spatial, spatial and temporal values of the objects. In order to demonstrate the new algorithm, this paper also presents an example application using a data warehouse.
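
A hedged sketch of the three steps, with a simple deviation test standing in for the paper's clustering step and all radii and thresholds illustrative:

```python
def mean(xs):
    return sum(xs) / len(xs)

def deviates(value, others, k):
    # "Exceptional" means further than k standard deviations from the others' mean.
    if len(others) < 2:
        return False
    m = mean(others)
    sd = mean([(v - m) ** 2 for v in others]) ** 0.5
    return sd > 0 and abs(value - m) > k * sd

def spatio_temporal_outliers(objects, k=2.0, s_radius=1.0, t_radius=1.0):
    """Three-step check: global deviation (clustering stand-in), then spatial
    and temporal neighborhoods. Each object is an (x, y, t, value) tuple."""
    outliers = []
    for x, y, t, v in objects:
        rest = [p[3] for p in objects if p != (x, y, t, v)]
        if not deviates(v, rest, k):                      # step 1
            continue
        spatial = [p[3] for p in objects
                   if (p[0], p[1]) != (x, y)
                   and ((p[0] - x) ** 2 + (p[1] - y) ** 2) ** 0.5 <= s_radius]
        temporal = [p[3] for p in objects if p[2] != t and abs(p[2] - t) <= t_radius]
        if deviates(v, spatial, k) and deviates(v, temporal, k):  # steps 2 and 3
            outliers.append((x, y, t, v))
    return outliers
```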

Patent
07 Mar 2006
TL;DR: The eXtensible on-line analytical processing (XOLAP) as discussed by the authors is a scalable client/server platform that allows the multi-dimensional analysis of modern data types, as well as traditional relational data, by bringing them all into an internal common XML-based model, without the time and expense of creating a data warehouse.
Abstract: A system and method for analyzing and reporting data from multiple sources is provided. The system is a foundation for an analytical platform that covers not only traditional relational data, but also a new generation of extensible data formats designed for the web, such as those based on XML (FIXML, FpML, ebXML, XBRL, ACORD, etc.), as well as HTML, E-mail, Excel, PDF, and others. In a preferred embodiment, the eXtensible on-line analytical processing (XOLAP) system is a scalable client/server platform that allows the multi-dimensional analysis of modern data types, as well as traditional relational data, by bringing them all into an internal common XML-based model, without the time and expense of creating a data warehouse.

Book ChapterDOI
26 Mar 2006
TL;DR: This work is motivated by the need of keeping track of both confidence and lineage of the information stored in a semi-structured warehouse, based on the use of probabilistic event variables, and presents a new model, namely the fuzzy tree model, which supports both querying and updating over Probabilistic tree data.
Abstract: We present in this paper a new model for representing probabilistic information in a semi-structured (XML) database, based on the use of probabilistic event variables. This work is motivated by the need of keeping track of both confidence and lineage of the information stored in a semi-structured warehouse. For instance, the modules of a (Hidden Web) content warehouse may derive information concerning the semantics of discovered Web services that is by nature not certain. Our model, namely the fuzzy tree model, supports both querying (tree pattern queries with join) and updating (transactions containing an arbitrary set of insertions and deletions) over probabilistic tree data. We highlight its expressive power and discuss implementation issues.
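
A minimal sketch of the event-variable idea: each node's existence depends on probabilistic events, and a node's confidence is the product of its events' probabilities. This assumes independent events; the fuzzy tree model itself is richer (it supports shared events, tree pattern queries, and updates):

```python
# Probabilistic event variables: each event holds with an independent probability.
events = {"e1": 0.9, "e2": 0.6}

# Each tree node lists the events its existence depends on.
tree = {"tag": "service", "events": ["e1"], "children": [
    {"tag": "category", "events": ["e1", "e2"], "children": []},
]}

def node_probability(node):
    """Confidence that this node exists: the product of its events' probabilities
    (independence is assumed here; the fuzzy tree model also tracks shared events)."""
    p = 1.0
    for e in node["events"]:
        p *= events[e]
    return p

print(node_probability(tree["children"][0]))  # 0.54
```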

Patent
21 Feb 2006
TL;DR: In this paper, the authors present a method and system for managing remote applications running on devices that acquire, process and store data locally in order to integrate said data with heterogeneous enterprise information systems and business processes.
Abstract: The present invention provides a method and system for managing remote applications running on devices that acquire, process and store data locally in order to integrate said data with heterogeneous enterprise information systems and business processes. The system allows for remotely deploying, running, monitoring and updating applications embedded within devices. The applications acquire, store and process data about assets that is eventually sent to a centralized data processing infrastructure. The system comprises an information integration framework that integrates the processed data with data that is extracted from heterogeneous data sources, in real-time, in order to create synthesized information.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This article reviews the concept of Business Intelligence and provides a survey, from a comprehensive point of view, on the BI technical framework, process, and enterprise solutions.
Abstract: Business intelligence (BI) has been viewed as sets of powerful tools and approaches to improving business executive decision-making, business operations, and increasing the value of the enterprise. The technology categories of BI mainly encompass data warehousing, OLAP, and data mining. This article reviews the concept of Business Intelligence and provides a survey, from a comprehensive point of view, on the BI technical framework, process, and enterprise solutions. In addition, the conclusions point out possible reasons for the difficulty of broad deployment of enterprise BI and offer proposals for constructing a better BI system.

Journal Article
TL;DR: In this article, a framework for materialized view selection is proposed that exploits a data mining technique (clustering) to determine clusters of similar queries; the selection is based on cost models that evaluate the cost of accessing data using views and the cost of storing these views.
Abstract: Materialized view selection is a non-trivial task. Hence, its complexity must be reduced. A judicious choice of views must be cost-driven and influenced by the workload experienced by the system. In this paper, we propose a framework for materialized view selection that exploits a data mining technique (clustering), in order to determine clusters of similar queries. We also propose a view merging algorithm that builds a set of candidate views, as well as a greedy process for selecting a set of views to materialize. This selection is based on cost models that evaluate the cost of accessing data using views and the cost of storing these views. To validate our strategy, we executed a workload of decision-support queries on a test data warehouse, with and without using our strategy. Our experimental results demonstrate its efficiency, even when storage space is limited.
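
The greedy selection step can be sketched under a deliberately simple cost model (benefit per unit of storage); the paper's actual cost models for view access and storage are more detailed:

```python
# Candidate views with illustrative costs: (name, query_cost_saved, storage_cost).
candidates = [("v_sales_by_month", 120.0, 40.0),
              ("v_sales_by_store", 90.0, 70.0),
              ("v_top_customers", 60.0, 10.0)]

def greedy_select(candidates, storage_budget):
    """Repeatedly materialize the view with the best benefit per unit of storage."""
    selected, used = [], 0.0
    pool = list(candidates)
    while pool:
        best = max(pool, key=lambda v: (v[1] - v[2]) / v[2])
        pool.remove(best)
        if best[1] > best[2] and used + best[2] <= storage_budget:
            selected.append(best[0])
            used += best[2]
    return selected

print(greedy_select(candidates, storage_budget=60.0))
# ['v_top_customers', 'v_sales_by_month']
```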

Book
01 Jan 2006
TL;DR: The purpose of this book is to alert IT-MIS-Business professionals to the pervasiveness and criticality of data quality problems and to arm the students with approaches and the commitment to overcome these problems.
Abstract: This is a sound textbook for Information Technology and MIS undergraduate students, MBA graduate students, and all professionals looking to grasp a fundamental understanding of information quality. The authors performed an extensive literature search to determine the fundamental topics of data quality in information systems. They reviewed these topics via a survey of data quality experts at the International Conference on Information Quality held at MIT. The concept of data quality is assuming increased importance. Poor data quality affects operational, tactical and strategic decision-making, and yet error rates of up to 70%, with 30% typical, are found in practice (Redman). Data that is deficient leads to misinformed people, who in turn make bad decisions. Poor quality data impedes activities such as re-engineering business processes and implementing business strategies. Poor data quality has contributed to major disasters in the federal government, NASA, information systems, the Federal Bureau of Investigation, and many businesses. The diverse uses of data and the increased sharing of data that has arisen as a result of the widespread introduction of data warehouses have exacerbated deficiencies in the quality of data (Ballou). In addition, up to half the cost of creating a data warehouse is attributable to poor data quality. The management of data quality so as to ensure the quality of information products is examined in Wang. The purpose of this book is to alert IT-MIS-Business professionals to the pervasiveness and criticality of data quality problems. The secondary agenda is to begin to arm students with approaches and the commitment to overcome these problems. The current authors have a combined list of over 200 published papers on data and information quality.

Patent
30 Jun 2006
TL;DR: In this paper, a data warehouse solution system comprises a metadata model, a user interface and an engine, where the metadata model has an information needs model including metadata regarding information needs for building reports by users, and a data information model describing data describing data that is available for generating reports.
Abstract: A data warehouse solution system comprises a metadata model, a user interface and an engine. The metadata model has an information needs model including metadata regarding information needs for building reports by users, and a data information model including metadata describing data that is available for building reports. The user interface has a customer user interface for presenting the information needs model to the users for report generation, and a modeling user interface for presenting the data information model to the users for manipulating data warehouse objects. The engine has a report management service unit for providing report management service using the information needs model, and a data management service unit for providing data management service including generation of a data warehouse using the data information model.

Journal ArticleDOI
TL;DR: It has been shown that the BI concept may contribute towards improving quality of decision-making in any organisation, better customer service and some increase in customers’ loyalty.
Abstract: The paper aims at analysing Business Intelligence Systems (BI) in the context of opportunities for improving decision-making in a contemporary organisation. The authors – taking the specifics of a decision-making process together with the heterogeneity and dispersion of information sources into consideration – present Business Intelligence Systems as a holistic infrastructure for decision-making. It has been shown that the BI concept may contribute towards improving the quality of decision-making in any organisation, better customer service and some increase in customers' loyalty. The paper is focused on three fundamental components of BI systems, i.e. key information technologies (including ETL tools and data warehouses), the potential of key information technologies (OLAP techniques and data mining) and BI applications that support making different decisions in an organisation. A major part of the paper is devoted to discussing basic business analyses that are not only offered by BI systems but also applied frequently in business practice.

Journal ArticleDOI
TL;DR: The result is not only a roadmap for understanding the integration of data mining in digital library services, but also a template for other cross-discipline data mining researchers to follow for systematic exploration in their own subject domains.
Abstract: Over the past few years, data mining has moved from corporations to other organizations. This paper looks at the integration of data mining in digital library services. First, bibliomining, or the combination of bibliometrics and data mining techniques to understand library services, is defined and the concept explored. Second, the conceptual frameworks for bibliomining from the viewpoint of the library decision-maker and the library researcher are presented and compared. Finally, a research agenda to resolve many of the common bibliomining issues and to move the field forward in a mindful manner is developed. The result is not only a roadmap for understanding the integration of data mining in digital library services, but also a template for other cross-discipline data mining researchers to follow for systematic exploration in their own subject domains.

Journal ArticleDOI
TL;DR: The success of data warehouses depends on the interaction of technology and social context; new insights into the implementation process and interventions can lead to success.
Abstract: The success of data warehouses depends on the interaction of technology and social context. We present new insights into the implementation process and interventions that can lead to success.

Journal ArticleDOI
01 Dec 2006
TL;DR: This paper proposes an Access Control and Audit model for DWs by specifying security rules in the conceptual MD modeling, and extends the Unified Modeling Language (UML) with the authors' ACA model, thereby allowing us to design secure MD models.
Abstract: Due to the sensitive data contained in Data Warehouses (DW), it is essential to specify security measures from the early stages of the DW design and enforce them. Traditional access control models for transactional (relational) databases, based on tables, columns and rows, are not appropriate for DWs. Instead, security and audit rules defined for DWs must be specified based on the multidimensional (MD) modeling used to design data warehouses. Current approaches for the conceptual modeling of DWs do not allow us to specify security and confidentiality constraints in the conceptual modeling phase. In this paper, we propose an Access Control and Audit (ACA) model for DWs by specifying security rules in the conceptual MD modeling. Thus, we define authorization rules for users and objects and we assign sensitive information rules and authorization roles to the main elements of a MD model (e.g., facts or dimensions). Moreover, we also specify certain audit rules allowing us to analyze user behaviors. To be able to include and use our ACA model in the conceptual MD modeling, we extend the Unified Modeling Language (UML) with our ACA model, thereby allowing us to design secure MD models. Finally, to show the benefit of our approach, we apply our approach to a health care case study.
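
As an illustration only (the paper expresses these rules in extended UML, not code), authorization and audit rules attached to multidimensional elements might be checked like this, with all role and element names hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("dw.audit")

# Authorization rules attached to multidimensional elements (facts/dimensions).
rules = {"fact:diagnosis": {"physician"},           # sensitive clinical measures
         "dimension:patient": {"physician", "admin"},
         "dimension:time": {"physician", "admin", "analyst"}}

def can_access(user, roles, element):
    """Check a user's roles against the element's rule, auditing every attempt."""
    allowed = bool(roles & rules.get(element, set()))
    audit.info("user=%s element=%s allowed=%s", user, element, allowed)  # audit rule
    return allowed

print(can_access("bob", {"analyst"}, "fact:diagnosis"))   # False (and audited)
print(can_access("eve", {"physician"}, "fact:diagnosis")) # True
```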