Author

Antoni Olivé

Bio: Antoni Olivé is an academic researcher from the Polytechnic University of Catalonia. The author has contributed to research in topics: Conceptual schema & Conceptual model. The author has an h-index of 21 and has co-authored 98 publications receiving 1,762 citations.


Papers
Book
15 Aug 2007
TL;DR: This brilliant textbook explains in detail the principles of conceptual modeling, independently of particular methods and languages, and shows how to apply them in real-world projects.
Abstract: This brilliant textbook explains in detail the principles of conceptual modeling, independently of particular methods and languages, and shows how to apply them in real-world projects. The author covers all aspects of the engineering process, from structural modeling through behavioral modeling to meta-modeling, and completes the presentation with an extensive case study based on the osCommerce system. Written for computer science students in classes on information systems modeling, as well as for professionals who feel the need to formalize their experience or update their knowledge, the book delivers a comprehensive treatment of all aspects of the modeling process. It is complemented by numerous exercises and additional online teaching material.

271 citations

Book
01 Jan 2009
TL;DR: This work proposes a conceptual tool for better understanding this arena of players; its objective is to provide researchers with an ontology for analyzing and assessing the business models these players adopt.

126 citations

BookDOI
01 Jan 2006

104 citations

Book
01 Jan 2006
TL;DR: This proceedings volume collects contributions ranging from "From Conceptual Modeling to Requirements Engineering" to "Towards a Holistic Conceptual Modelling-Based Software Development Process".
Abstract (table of contents):

Keynote Papers:
- Suggested Research Directions for a New Frontier - Active Conceptual Modeling
- From Conceptual Modeling to Requirements Engineering

Web Services:
- A Context Model for Semantic Mediation in Web Services Composition
- Modeling Service Compatibility with Pi-calculus for Choreography
- The DeltaGrid Abstract Execution Model: Service Composition and Process Interference Handling

Quality in Conceptual Modeling:
- Evaluating Quality of Conceptual Models Based on User Perceptions
- Representation Theory Versus Workflow Patterns - The Case of BPMN
- Use Case Modeling and Refinement: A Quality-Based Approach

Aspects of Conceptual Modeling:
- Ontology with Likeliness and Typicality of Objects in Concepts
- In Defense of a Trope-Based Ontology for Conceptual Modeling: An Example with the Foundations of Attributes, Weak Entities and Datatypes
- Explicitly Representing Superimposed Information in a Conceptual Model

Modeling Advanced Applications:
- Preference Functional Dependencies for Managing Choices
- Modeling Visibility in Hierarchical Systems
- A Model for Anticipatory Event Detection

XML:
- A Framework for Integrating XML Transformations
- Oxone: A Scalable Solution for Detecting Superior Quality Deltas on Ordered Large XML Documents
- Schema-Mediated Exchange of Temporal XML Data
- A Quantitative Summary of XML Structures

Semantic Web:
- Database to Semantic Web Mapping Using RDF Query Languages
- Representing Transitive Propagation in OWL
- On Generating Content and Structural Annotated Websites Using Conceptual Modeling

Requirements Modeling:
- A More Expressive Softgoal Conceptualization for Quality Requirements Analysis
- Conceptualizing the Co-evolution of Organizations and Information Systems: An Agent-Oriented Perspective
- Towards a Theory of Genericity Based on Government and Binding

Aspects of Interoperability:
- Concept Modeling by the Masses: Folksonomy Structure and Interoperability
- Method Chunks for Interoperability
- Domain Analysis for Supporting Commercial Off-the-Shelf Components Selection

Metadata Management:
- A Formal Framework for Reasoning on Metadata Based on CWM
- A Set of QVT Relations to Assure the Correctness of Data Warehouses by Using Multidimensional Normal Forms
- Design and Use of ER Repositories: Methodologies and Experiences in eGovernment Initiatives

Human-Computer Interaction:
- Notes for the Conceptual Design of Interfaces
- The User Interface Is the Conceptual Model
- Towards a Holistic Conceptual Modelling-Based Software Development Process

Business Modeling:
- A Multi-perspective Framework for Organizational Patterns
- Deriving Concepts for Modeling Business Actions
- Towards a Reference Ontology for Business Models

Reasoning:
- Reasoning on UML Class Diagrams with OCL Constraints
- On the Use of Association Redefinition in UML Class Diagrams
- Optimising Abstract Object-Oriented Database Schemas

Panels:
- Experimental Research on Conceptual Modeling: What Should We Be Doing and Why?
- Eliciting Data Semantics Via Top-Down and Bottom-Up Approaches: Challenges and Opportunities

Industrial Track:
- The ADO.NET Entity Framework: Making the Conceptual Level Real
- XMeta Repository and Services
- IBM Industry Models: Experience, Management and Challenges
- Community Semantics for Ultra-Scale Information Management
- Managing Data in High Throughput Laboratories: An Experience Report from Proteomics
- Policy Models for Data Sharing

Demos and Posters:
- Protocol Analysis for Exploring the Role of Application Domain in Conceptual Schema Understanding
- Auto-completion of Underspecified SQL Queries
- iQL: A Query Language for the Instance-Based Data Model
- Designing Under the Influence of Speech Acts: A Strategy for Composing Enterprise Integration Solutions
- Geometry of Concepts

90 citations

Journal ArticleDOI
01 Apr 1995
TL;DR: This work proposes a new method, based on events and transition rules, for updating knowledge bases while maintaining their consistency; an extension of the SLDNF procedure yields all possible minimal ways of updating a knowledge base without violating any integrity constraint.
Abstract: When updating a knowledge base, several problems may arise. One of the most important is integrity constraint satisfaction. The classic approach to this problem has been to develop methods for checking whether a given update violates an integrity constraint. An alternative approach consists of trying to repair integrity constraint violations by performing additional updates that maintain knowledge base consistency. Another major problem in knowledge base updating is view updating, which determines how an update request should be translated into an update of the underlying base facts. We propose a new method for updating knowledge bases while maintaining their consistency. Our method can be used for both integrity constraint maintenance and view updating. It can also be combined with any integrity checking method for view updating and integrity checking. The kinds of updates handled by our method are: updates of base facts, view updates, updates of deductive rules, and updates of integrity constraints. Our method is based on events and transition rules, which explicitly define the insertions and deletions induced by a knowledge base update. Using these rules, an extension of the SLDNF procedure allows us to obtain all possible minimal ways of updating a knowledge base without violating any integrity constraint.
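To make the transition-rule idea concrete, here is a toy sketch in Python (an illustrative approximation, not the paper's logic-programming formalism): for a single derived predicate, the insertion and deletion events induced by a base update are obtained by comparing the view before and after the update. The predicates p, q, v and all function names are invented for this example.

```python
# Illustrative sketch only: transition rules make explicit which derived
# facts are inserted or deleted by an update of the base facts.
# Hypothetical derived view: v(x) holds iff p(x) holds and q(x) does not.

def derive(base):
    return {x for x in base["p"] if x not in base["q"]}

def transition_events(base, update):
    """Insertion/deletion events on v induced by a list of base updates."""
    old_v = derive(base)
    new_base = {rel: set(facts) for rel, facts in base.items()}
    for rel, fact, kind in update:
        (new_base[rel].add if kind == "ins" else new_base[rel].discard)(fact)
    new_v = derive(new_base)
    return {"insertions": new_v - old_v, "deletions": old_v - new_v}

base = {"p": {1, 2}, "q": {2}}
print(transition_events(base, [("q", 1, "ins")]))
# {'insertions': set(), 'deletions': {1}} -- inserting q(1) deletes v(1)
```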

79 citations


Cited by
Book ChapterDOI
04 Jul 2009
TL;DR: This chapter reviews the state of the art on the treatment of non-functional requirements (hereafter, NFRs), while providing some prospects for future directions.
Abstract: Essentially, a software system's utility is determined by both its functionality and its non-functional characteristics, such as usability, flexibility, performance, interoperability, and security. Nonetheless, there has been a lop-sided emphasis on the functionality of software, even though that functionality is not useful or usable without the necessary non-functional characteristics. In this chapter, we review the state of the art on the treatment of non-functional requirements (hereafter, NFRs), while providing some prospects for future directions.

2,443 citations

Book
01 Nov 2002
TL;DR: This book advocates driving development with automated tests, a style of development called "Test-Driven Development" (TDD for short), which aims to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved.
Abstract: From the Book: "Clean code that works" is Ron Jeffries' pithy phrase. The goal is clean code that works, and for a whole bunch of reasons:

- Clean code that works is a predictable way to develop. You know when you are finished, without having to worry about a long bug trail.
- Clean code that works gives you a chance to learn all the lessons that the code has to teach you. If you only ever slap together the first thing you think of, you never have time to think of a second, better, thing.
- Clean code that works improves the lives of the users of our software.
- Clean code that works lets your teammates count on you, and you on them.
- Writing clean code that works feels good.

But how do you get to clean code that works? Many forces drive you away from clean code, and even from code that works. Without taking too much counsel of our fears, here's what we do: drive development with automated tests, a style of development called "Test-Driven Development" (TDD for short). In Test-Driven Development, you:

- Write new code only if you first have a failing automated test.
- Eliminate duplication.

Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are:

- You must design organically, with running code providing feedback between decisions.
- You must write your own tests, since you can't wait twenty times a day for someone else to write a test.
- Your development environment must provide rapid response to small changes.
- Your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy.

The two rules imply an order to the tasks of programming:

1. Red - write a little test that doesn't work, perhaps doesn't even compile at first.
2. Green - make the test work quickly, committing whatever sins necessary in the process.
3. Refactor - eliminate all the duplication created in just getting the test to work.

Red/green/refactor: the TDD mantra. Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications:

- If the defect density can be reduced enough, QA can shift from reactive to proactive work.
- If the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development.
- If the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration.
- Again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers.

So, the concept is simple, but what's my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage.

Courage. Test-driven development is a way of managing fear during programming. I don't mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can't-see-the-end-from-the-beginning sense. If pain is nature's way of saying "Stop!", fear is nature's way of saying "Be careful." Being careful is good, but fear has a host of other effects:

- Makes you tentative.
- Makes you want to communicate less.
- Makes you shy from feedback.
- Makes you grumpy.

None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and:

- Instead of being tentative, begin learning concretely as quickly as possible.
- Instead of clamming up, communicate more clearly.
- Instead of avoiding feedback, search out helpful, concrete feedback.
- (You'll have to work on grumpiness on your own.)

Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you're going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test.

Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn't an absolute like Extreme Programming. XP says, "Here are things you must be able to do to be prepared to evolve further." TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. "What if I do a paper design for a week, then test-drive the code? Is that TDD?" Sure, it's TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately.

That said, most people who learn TDD find their programming practice changed for good. "Test Infected" is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn't making progress.

There are certainly programming tasks that can't be driven solely by tests (or at least, not yet). Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can't be reliably duplicated by running the code.

Once you are finished reading this book, you should be ready to:

- Start simply.
- Write automated tests.
- Refactor to add design decisions one at a time.

This book is organized into three sections:

1. An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically.
2. An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing. This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists.
3. Patterns for TDD. Included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest-hits selection of the design patterns and refactorings used in the examples.

I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you've been, try reading the examples through, referring to the patterns when you want more detail about a technique, and then using the patterns as a reference. Several reviewers have commented that they got the most out of the examples when they started up a programming environment, entered the code, and ran the tests as they read.

A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of "reality." However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
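As a concrete illustration of one red/green/refactor cycle, here is a minimal Python sketch loosely modeled on the book's multi-currency example (the Money class and test below are illustrative stand-ins, not the book's actual code):

```python
import unittest

# Step 2 ("green"): the simplest Money that makes the failing test pass.
class Money:
    def __init__(self, amount, currency):
        self.amount, self.currency = amount, currency

    def times(self, multiplier):
        return Money(self.amount * multiplier, self.currency)

    def __eq__(self, other):
        return (self.amount, self.currency) == (other.amount, other.currency)

# Step 1 ("red"): this test is written first and fails until Money exists.
class MoneyTest(unittest.TestCase):
    def test_multiplication(self):
        five = Money(5, "USD")
        self.assertEqual(Money(10, "USD"), five.times(2))

if __name__ == "__main__":
    unittest.main()  # step 3 would refactor while keeping this test green
```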

1,864 citations

Book
01 Sep 2012
TL;DR: This book aims to provide an agile and flexible introduction to the MDSE world, allowing readers to quickly understand its basic principles and techniques and to choose the right set of MDSE instruments for their needs so that they can start to benefit from MDSE right away.
Abstract: This book discusses how model-based approaches can improve the daily practice of software professionals. This is known as Model-Driven Software Engineering (MDSE) or, simply, Model-Driven Engineering (MDE). MDSE practices have been shown to increase efficiency and effectiveness in software development, as demonstrated by various quantitative and qualitative studies. MDSE adoption in the software industry is foreseen to grow exponentially in the near future, e.g., due to the convergence of software development and business analysis. The aim of this book is to provide you with an agile and flexible tool to introduce you to the MDSE world, allowing you to quickly understand its basic principles and techniques and to choose the right set of MDSE instruments for your needs so that you can start to benefit from MDSE right away.

The book is organized into two main parts. The first part discusses the foundations of MDSE in terms of basic concepts (i.e., models and transformations), driving principles, application scenarios, and current standards, like the well-known MDA initiative proposed by OMG (Object Management Group), as well as practices for integrating MDSE into existing development processes. The second part deals with the technical aspects of MDSE, spanning from the basics of when and how to build a domain-specific modeling language, to the description of Model-to-Text and Model-to-Model transformations, and the tools that support the management of MDSE projects.

The book is targeted at a diverse set of readers, spanning: professionals, CTOs, CIOs, and team managers who need a bird's-eye view of the matter, so as to make the appropriate decisions when it comes to choosing the best development techniques for their company or team; software analysts, developers, or designers who expect to use MDSE to improve everyday work productivity, either by applying the basic modeling techniques and notations or by defining new domain-specific modeling languages and applying end-to-end MDSE practices in the software factory; and academic teachers and students in undergrad and postgrad courses on MDSE. In addition to the contents of the book, more resources are provided on the book's website http://www.mdse-book.com/, including the examples presented in the book.

Table of Contents: Introduction / MDSE Principles / MDSE Use Cases / Model-Driven Architecture (MDA) / Integration of MDSE in your Development Process / Modeling Languages at a Glance / Developing your Own Modeling Language / Model-to-Model Transformations / Model-to-Text Transformations / Managing Models / Summary
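To give a flavor of what a model-to-text transformation does, here is a minimal Python sketch (invented for illustration; real MDSE toolchains use dedicated template and transformation languages such as those the book describes): a tiny entity model is rendered as Java-like class source text.

```python
# Hypothetical "model": one entity with typed attributes.
model = {"name": "Customer", "attributes": [("id", "int"), ("name", "String")]}

def to_text(entity):
    """Render the entity model as Java-like source text (model-to-text)."""
    fields = "\n".join(f"    private {t} {n};" for n, t in entity["attributes"])
    return f"public class {entity['name']} {{\n{fields}\n}}"

print(to_text(model))
# public class Customer {
#     private int id;
#     private String name;
# }
```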

829 citations

Proceedings ArticleDOI
01 Jun 1993
TL;DR: This work presents a counting algorithm that tracks the number of alternative derivations (counts) for each derived tuple in a view, and shows that the count for a tuple can be computed at little or no cost above the cost of deriving the tuple.
Abstract: We present incremental evaluation algorithms to compute changes to materialized views in relational and deductive database systems, in response to changes (insertions, deletions, and updates) to the relations. The view definitions can be in SQL or Datalog, and may use UNION, negation, aggregation (e.g. SUM, MIN), linear recursion, and general recursion.

We first present a counting algorithm that tracks the number of alternative derivations (counts) for each derived tuple in a view. The algorithm works with both set and duplicate semantics. We present the algorithm for nonrecursive views (with negation and aggregation), and show that the count for a tuple can be computed at little or no cost above the cost of deriving the tuple. The algorithm is optimal in that it computes exactly those view tuples that are inserted or deleted. Note that we store only the number of derivations, not the derivations themselves.

We then present the Delete and Rederive algorithm, DRed, for incremental maintenance of recursive views (negation and aggregation are permitted). The algorithm works by first deleting a superset of the tuples that need to be deleted, and then rederiving some of them. The algorithm can also be used when the view definition is itself altered.
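The counting idea can be illustrated on a single-join view with a simplified Python sketch (the paper's algorithm handles full SQL/Datalog with negation and aggregation; the relation names and helper below are invented for illustration):

```python
from collections import Counter

def join_counts(r, s):
    """Derivation counts for the view v(x, z) :- r(x, y), s(y, z)."""
    counts = Counter()
    for x, y in r:
        for y2, z in s:
            if y == y2:
                counts[(x, z)] += 1  # one more alternative derivation
    return counts

r = {(1, "a"), (2, "a")}
s = {("a", 9)}
view = join_counts(r, s)                 # {(1, 9): 1, (2, 9): 1}

# Incremental maintenance: deleting ("a", 9) from s subtracts exactly the
# derivations that used it; tuples whose count drops to 0 leave the view.
delta = join_counts(r, {("a", 9)})
for t, c in delta.items():
    view[t] -= c
print({t for t, c in view.items() if c > 0})  # set() -- the view is now empty
```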

787 citations