
Showing papers in "The Journal of Object Technology in 2006"


Journal ArticleDOI
TL;DR: This paper investigates 22 metrics proposed by various researchers, defines and illustrates them with practical applications, and applies them to standard projects, on the basis of which descriptive statistics, principal component analysis and correlation analysis are presented.
Abstract: The increasing importance of software measurement has led to the development of new software measures. Many metrics have been proposed related to various constructs like class, coupling, cohesion, inheritance, information hiding and polymorphism. But there is little understanding of the empirical hypotheses behind, and the application of, many of these measures. It is often difficult to determine which metric is more useful in which area. As a consequence, it is very difficult for project managers and practitioners to select measures for object-oriented systems. In this paper we investigate 22 metrics proposed by various researchers. The metrics are first defined and then explained using practical applications. They are applied to standard projects, on the basis of which descriptive statistics, principal component analysis and correlation analysis are presented. Finally, a review of the empirical studies concerning the chosen metrics is given, a subset of these measures that provides sufficient information is identified, and metrics providing overlapping information are excluded from the set.
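
As a rough illustration of the kind of correlation analysis reported (the metric names and values below are hypothetical, not taken from the study), a pairwise Pearson correlation between two per-class metric vectors can be computed as follows:

    // Sketch only: Pearson correlation between two hypothetical metric vectors
    // (e.g., WMC and LOC per class). Values are made up for illustration.
    public class MetricCorrelation {

        static double pearson(double[] x, double[] y) {
            int n = x.length;
            double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                sx += x[i]; sy += y[i];
                sxx += x[i] * x[i]; syy += y[i] * y[i];
                sxy += x[i] * y[i];
            }
            double cov = sxy - sx * sy / n;   // scaled covariance
            double vx = sxx - sx * sx / n;    // scaled variance of x
            double vy = syy - sy * sy / n;    // scaled variance of y
            return cov / Math.sqrt(vx * vy);
        }

        public static void main(String[] args) {
            double[] wmc = { 5, 12, 7, 20, 3 };       // hypothetical Weighted Methods per Class
            double[] loc = { 80, 300, 150, 600, 40 }; // hypothetical lines of code
            System.out.printf("r(WMC, LOC) = %.3f%n", pearson(wmc, loc));
        }
    }

A strong correlation between two metrics is the kind of evidence the paper uses to flag overlapping measures that can be excluded from the final set.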

135 citations


Journal ArticleDOI
TL;DR: To eliminate this source of unsoundness, axioms that are weaker than the naive axiomatization are used, which require one to prove, by giving a witness, that the specification of a pure method m is satisfiable in order to assume the properties of m and mS.
Abstract: Consider the class of Figure 6, in which the specification of the pure method wrong is not satisfiable:

    abstract class Inconsistent {
        /*@ normal_behavior
          @   ensures \result == 0 &&
          @           \result == 1;
          @*/
        /*@ pure @*/ abstract int wrong();

        /*@ normal_behavior
          @   assignable \nothing;
          @   ensures \result == 6 + wrong() &&
          @           \result == 5 + wrong();
          @*/
        int bar() { return 6; }
    }

    Figure 6: The specification of wrong is not satisfiable.

The naive axiom for a pure method m is part of the background theory used to verify methods that use m in their specification. If this background theory is inconsistent, the reasoning is potentially unsound. For instance, the above axiom is part of the background theory used to verify method bar and allows one to verify bar, although its specification is obviously not satisfiable. Note that this unsoundness occurs even though wrong is not called from bar's implementation. In practice, unsatisfiable specifications are far less obvious than in the example of method wrong, because they typically involve several normal behavior specification cases, including inherited specifications. A verification technique has to ensure that unsatisfiable specifications do not lead to unsound reasoning. To eliminate this source of unsoundness, we use axioms that are weaker than the naive axiomatization above. These axioms require one to prove, by giving a witness, that the specification of a pure method m is satisfiable in order to assume the properties of m and mS. That is, the axioms for m and mS are guarded by the following antecedent:

    (∃ r, OS′ • spec_m(t, p, OS, r, OS′))

The existence of a witness has to be proven in order to employ the corresponding axiom. For method wrong, one cannot give a witness r that satisfies r == 0 && r == 1. Therefore, the antecedent of the corresponding axiom is false, and the axiom is void.
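
For contrast, here is a minimal sketch (mine, not taken from the paper; the class and method names are invented) of a pure method whose specification does admit a witness, so the guarded axiom becomes available to the prover:

    // Sketch only: any r with r >= 0 (e.g. r = 0) is a witness that this
    // specification is satisfiable, so the guarded axiom for nonNegative
    // may be assumed when verifying its callers.
    abstract class Consistent {
        /*@ normal_behavior
          @   ensures \result >= 0;
          @*/
        /*@ pure @*/ abstract int nonNegative();
    }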

58 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to review the most important UML profiles for real-time from academia, industry and standards organizations, and the research activity that revolves around these profiles.
Abstract: Real-time systems (RTS) have strict timing constraints and limited resources. The satisfaction of RTS timing constraints is required for their correctness. In order to reduce the cost due to late discovery of design flaws and/or violations of timing constraints of RTS, as well as to speed up their development to cope with time-to-market requirements, it is important to validate, at early stages of the development process, the functional and non-functional properties of RTS. In addition, RTS complexity is continuously increasing, which makes their design very challenging. UML, a graphical object-oriented modeling language, is suitable for dealing with this complexity. UML also supports predictive, quantitative analysis through its real-time profiles. The objective of this paper is to review the most important UML profiles for real-time from academia, industry and standards organizations, and the research activity that revolves around these profiles.

51 citations


Journal ArticleDOI
TL;DR: The paper introduces modeling spaces in order to help software practitioners understand modeling, and illustrates the benefits of that framework for explaining present dilemmas practitioners have regarding models, metamodels, and model transformations.
Abstract: The paper introduces modeling spaces in order to help software practitioners understand modeling. Software engineers usually think of a specific kind of model – UML models – but there are many open questions, such as: Should we assume that the code we write is a model or not? What are models and metamodels, and why do we need them? What does it mean to transform a model into a programming language? Unlike current research efforts that answer those questions in rather partial ways, we define an encompassing formal framework (i.e., modeling spaces) for studying many modeling problems in a more comprehensive way. We illustrate the benefits of that framework for explaining present dilemmas practitioners have regarding models, metamodels, and model transformations.

47 citations


Journal ArticleDOI
TL;DR: The paper examines the reasons that “readonly” style qualifiers have been proposed for Java and the principles behind the rules for these new qualifiers, and finds a mismatch between some of the motivating problems and the proposed solutions.
Abstract: In this paper, I examine some of the reasons that “readonly” style qualifiers have been proposed for Java, and also the principles behind the rules for these new qualifiers. I find that there is a mismatch between some of the motivating problems and the proposed solutions. Thus I urge Java designers to proceed with caution when adopting a solution to these sets of problems.
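
A hedged illustration (not from the column itself; the class names are invented) of the kind of problem readonly-style qualifiers target — representation exposure through a getter — and the standard-library workaround available in plain Java today:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Sketch of representation exposure, one of the motivating problems for
    // readonly-style qualifiers.
    class Roster {
        private final List<String> members = new ArrayList<>();

        // Leaky: callers can mutate the internal list through the returned reference.
        List<String> getMembersLeaky() { return members; }

        // Today's workaround: an unmodifiable view; mutation fails only at run time,
        // whereas a readonly qualifier would reject it at compile time.
        List<String> getMembers() { return Collections.unmodifiableList(members); }
    }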

46 citations


Journal ArticleDOI
TL;DR: This work presents a tool that can recover five different design patterns from C++ code with high precision and at a speed of 3×10⁶ LOC/hr, which makes it suitable for analysis of large (multi-million LOC) systems.
Abstract: Design Patterns are informal descriptions of tested solutions to recurring problems. Most design tools have little or no support for documenting the presence and usage of patterns in code. Reverse engineering is therefore often required to recover Design Patterns from code in existing projects. Knowledge of what Design Patterns have been used can aid in code comprehension, as well as support research. Since pattern descriptions are abstract and informal, they offer no algorithmic translation into concrete code. Some patterns prescribe class structures that are easy to recognize, while others lead to structures that are difficult or impossible to recognize. This work presents a tool that can recover five different design patterns from C++ code with high precision and at a speed of 3×10⁶ LOC/hr. This makes it suitable for analysis of large (multi-million LOC) systems.
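
To make the idea of structural pattern recovery concrete, here is a deliberately crude sketch (not the paper's tool, which works on far richer structural information; all names are invented) that flags Singleton-like C++ class bodies with a purely textual heuristic:

    import java.util.regex.Pattern;

    // Toy heuristic: a class body looks Singleton-like if it declares a static
    // member of its own type and a static accessor returning that type.
    // Real recovery tools analyse parsed class structure, not raw text.
    class SingletonSniffer {
        static boolean looksLikeSingleton(String className, String classBody) {
            Pattern staticInstance =
                Pattern.compile("static\\s+" + className + "\\s*\\*?\\s*\\w+\\s*;");
            Pattern staticAccessor =
                Pattern.compile("static\\s+" + className + "\\s*[*&]?\\s*\\w+\\s*\\(");
            return staticInstance.matcher(classBody).find()
                && staticAccessor.matcher(classBody).find();
        }

        public static void main(String[] args) {
            String body = "private: static Logger* instance; public: static Logger* get();";
            System.out.println(looksLikeSingleton("Logger", body)); // true
        }
    }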

45 citations


Journal ArticleDOI
TL;DR: A new approach for aspect cohesion measurement based on dependency analysis is proposed, and several cohesion criteria are introduced that take into account aspects' features and capture various dependencies between their members.
Abstract: Aspect-Oriented Software Development is a promising new software engineering paradigm. It promotes, in particular, improved separation of crosscutting concerns into single units called aspects. AspectJ, the most widely used aspect-oriented programming language, is an extension of Java. In fact, existing object-oriented programming languages suffer from a serious limitation in adequately modularizing crosscutting concerns. Many concerns crosscut several classes in an object-oriented system. Moreover, several metrics have been proposed in order to assess object-oriented software quality attributes. However, these metrics do not cover the new abstractions and complexity dimensions introduced by the aspect paradigm. As a consequence, new metrics must be developed to assess aspect-oriented systems' quality attributes. Cohesion is considered one of the most important software quality attributes. Cohesion refers to the degree of relatedness between members of a software component. We propose, in this paper, a new approach for aspect cohesion measurement based on dependency analysis. We introduce several cohesion criteria taking into account aspects' features and capturing various dependencies between their members. We also propose a new aspect cohesion metric and compare it, using several case studies, to a few existing aspect cohesion metrics.
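
A generic sketch of what a dependency-based cohesion measure can look like (this is not the metric proposed in the paper; member indices and edges below are invented): cohesion is taken as the fraction of member pairs that are connected in the member dependency graph.

    import java.util.Arrays;

    // Generic dependency-based cohesion: connected member pairs / all member pairs,
    // where an edge means one member (method, advice, field, pointcut, ...) uses another.
    class DependencyCohesion {
        static double cohesion(int members, int[][] edges) {
            int[] parent = new int[members];
            for (int i = 0; i < members; i++) parent[i] = i;
            for (int[] e : edges) union(parent, e[0], e[1]);
            int connectedPairs = 0, totalPairs = members * (members - 1) / 2;
            for (int i = 0; i < members; i++)
                for (int j = i + 1; j < members; j++)
                    if (find(parent, i) == find(parent, j)) connectedPairs++;
            return totalPairs == 0 ? 1.0 : (double) connectedPairs / totalPairs;
        }
        static int find(int[] p, int x) { return p[x] == x ? x : (p[x] = find(p, p[x])); }
        static void union(int[] p, int a, int b) { p[find(p, a)] = find(p, b); }

        public static void main(String[] args) {
            // 4 members; member 3 is isolated, so 3 of 6 pairs are connected.
            System.out.println(cohesion(4, new int[][] { {0, 1}, {1, 2} })); // 0.5
        }
    }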

45 citations


Journal ArticleDOI
TL;DR: A general procedure for synthesizing non-linear formulas which conservatively estimate the quantity of memory explicitly allocated by a method as a function of its parameters is proposed.
Abstract: We present a static analysis for computing a parametric upper bound on the amount of memory dynamically allocated by (Java-like) imperative object-oriented programs. We propose a general procedure for synthesizing non-linear formulas which conservatively estimate the quantity of memory explicitly allocated by a method as a function of its parameters. We have implemented the procedure and evaluated it on several benchmarks. Experimental results produced exact estimations for most test cases, and quite precise approximations for many of the others. We also apply our technique to compute memory usage in the context of scoped memory and discuss some open issues.
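
A minimal illustration (mine, not one of the paper's benchmarks; Grid and Cell are invented) of the kind of parametric, non-linear bound such an analysis would synthesize:

    // A conservative estimate of the explicit allocation of build(n, m) is
    // n*m Cell objects plus n row arrays plus one outer array, i.e. a bound
    // that is quadratic in the parameters: alloc(n, m) = n*m + n + 1 objects.
    class Grid {
        static Cell[][] build(int n, int m) {
            Cell[][] g = new Cell[n][];          // 1 outer array
            for (int i = 0; i < n; i++) {
                g[i] = new Cell[m];              // n row arrays
                for (int j = 0; j < m; j++)
                    g[i][j] = new Cell();        // n*m cells
            }
            return g;
        }
    }
    class Cell {}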

43 citations


Journal ArticleDOI
TL;DR: This article adopts a relevance theory approach and experiments with building an explanatory framework for translating the implicit information in literary texts, based on a new notion: translation is clues-based interpretive use of language across language boundaries.
Abstract: As a type of cross-cultural communication, literary translation is more difficult for the translator, as he has to deal with a large amount of implicit information. The implicit information has characteristics such as graded communicability, context-dependence, and correlation among the implicit information, the text and the context. These characteristics restrict the communicability of literary texts in another context, so the translator of literary texts often faces more difficulties in translating. Encouraged by Gutt’s theory and his recent findings, this article adopts a relevance theory approach and attempts to present a cognitive study of the implicit information in literary texts. It experiments with building an explanatory framework for translating the implicit information in literary texts. The framework is based on a new notion: translation is clues-based interpretive use of language across language boundaries.

32 citations


Journal ArticleDOI
TL;DR: This work identifies security weaknesses in J2ME CLDC that may represent sources of security exploits and is valuable for any attempt to test or harden the security of this platform.
Abstract: Java 2 Micro-Edition Connected Limited Device Configuration (J2ME CLDC) is the platform of choice when it comes to running mobile applications on resource-constrained devices (cell phones, set-top boxes, etc.). The large deployment of this platform makes it a target for security attacks. The intent of this paper is twofold: First, we study and evaluate the security model of J2ME CLDC. Second, we provide a vulnerability analysis of this Java platform. The evaluated components are: Virtual machine, CLDC API and MIDP (Mobile Information Device Profile) API. The analysis covers the specifications, the reference implementation (RI) as well as several other widely-deployed implementations of this platform. The aspects targeted by this security analysis encompass: Networking, record management system, virtual machine, multi-threading and digital rights management. This work identifies security weaknesses in J2ME CLDC that may represent sources of security exploits. Moreover, the results reported in this paper are valuable for any attempt to test or harden the security of this platform.

27 citations


Journal ArticleDOI
TL;DR: The influence that the understanding of metaphor has had on the praxis of translation and more recent insights in human conceptual processes, in particular those of image-schemas, conceptual metaphors and conceptual blends are introduced.
Abstract: I have five goals for this paper. First, I will demonstrate the influence that the understanding of metaphor has had on the praxis of translation. Second, I will introduce and apply more recent insights in human conceptual processes, in particular those of image-schemas, conceptual metaphors and conceptual blends. Third, I will introduce optimality principles and relate them to the suggested conceptual blends. Fourth, I will present some translations of conceptual blends and then suggest optimality principles for translating conceptual blends and evaluate the translations by them. Finally, I will suggest areas that require further research. This study is exploratory and suggestive. Hopefully, readers will wish to broaden their understanding of cognitive linguistics and refine what is presented here.

Journal ArticleDOI
TL;DR: In this paper, the authors propose formal component and method contracts for stack inspection-based sandboxing, and show that formal verification of these contracts is feasible with state-of-the-art program verification tools.
Abstract: Stack inspection-based sandboxing originated as a security mechanism for safely executing partially trusted code. Today, it is widely used for the more general purpose of supporting the principle of least privilege in component-based software development. In this more general setting, the permissions required by a component to run properly, or the permissions needed by other components to successfully call methods in a given component, are conceptually part of the interface specification of the component. Hence, correct documentation of this part of the interface is essential. In this paper, we propose formal component and method contracts for stack inspection-based sandboxing, and we show that formal verification of these contracts is feasible with state-of-the-art program verification tools. Our contracts are significantly more expressive than existing type systems for stack inspection-based sandboxing. We describe our solution in the context of the sandboxing mechanism in the .NET Framework, called Code Access Security. Our system relies on the Spec# programming language and its accompanying static verification tool.
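
By analogy (the paper targets .NET Code Access Security and Spec#, not Java; the class and file path below are invented), Java's legacy stack-inspection API makes the idea of permissions-as-interface concrete:

    import java.io.FilePermission;
    import java.security.AccessController;
    import java.security.PrivilegedAction;

    // In Java's (legacy) stack-inspection model, checkPermission walks the call
    // stack and succeeds only if every caller's protection domain holds the
    // permission; doPrivileged truncates that walk at the current frame. A method
    // contract in the paper's sense documents exactly these obligations.
    class AuditLog {
        String read() {
            AccessController.checkPermission(
                new FilePermission("/var/log/audit.log", "read")); // demanded of all callers
            return "...";
        }

        String readPrivileged() {
            // Callers need no file permission; only this component's domain must have it.
            return AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> "...");
        }
    }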

Journal ArticleDOI
TL;DR: This work proposes to extend object-oriented languages, Java in the current implementation, to address the issue of reuse of hierarchies of classes by introducing a new concept called adapter, which makes it possible to specify the composition protocol of a hierarchy of classes independently of the context of use.
Abstract: Object-oriented languages provide insufficient answers regarding reuse of hierarchies of classes, especially because the mechanisms provided for separating application concerns are not sufficient. We propose to extend object-oriented languages, Java in the current implementation, to address this particular issue. The model, inspired by approaches dedicated to the separation of concerns, introduces a new concept called adapter. It makes it possible to specify the composition protocol of a hierarchy of classes independently of the context of use. This composition protocol gives the programmer the necessary guidance and controls when the adapter is customized for integration into applications.

Journal ArticleDOI
TL;DR: This work proposes an approach to support pattern matching in mainstream object-oriented languages without language extension, where a pattern is a first-class entity, which can be created, be passed as argument, and receive method invocations, just like any other object.
Abstract: Pattern matching is a powerful programming concept which has proven its merits in declarative programming. The absence of pattern-matching in object-oriented programming languages is felt especially when tackling source code processing problems. But existing proposals for pattern matching in such languages rely on language extension, which makes their adoption overly intrusive. We propose an approach to support pattern matching in mainstream object-oriented languages without language extension. In this approach, a pattern is a first-class entity, which can be created, be passed as argument, and receive method invocations, just like any other object. We demonstrate how our approach can be used in conjunction with existing parser generators to perform pattern matching on various kinds of abstract syntax representation. We elaborate our approach to include concrete syntax patterns, and mixing of patterns and visitors for the construction of sophisticated syntax tree traversals.
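
A minimal sketch of patterns as first-class objects matching a tiny AST (illustrative only; Node, Num, Add, NodePattern and the combinators are invented here, not the library described in the paper):

    // Patterns are plain objects: they can be stored, passed as arguments, and
    // composed, and matching is just a method call.
    interface Node {}
    record Num(int value) implements Node {}
    record Add(Node left, Node right) implements Node {}

    interface NodePattern {
        boolean matches(Node n);

        static NodePattern anyNum() {
            return n -> n instanceof Num;
        }
        static NodePattern add(NodePattern l, NodePattern r) {
            return n -> n instanceof Add a && l.matches(a.left()) && r.matches(a.right());
        }
    }

    class Demo {
        public static void main(String[] args) {
            NodePattern constantFold = NodePattern.add(NodePattern.anyNum(), NodePattern.anyNum());
            Node expr = new Add(new Num(1), new Num(2));
            System.out.println(constantFold.matches(expr)); // true: both operands are literals
        }
    }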

Journal ArticleDOI
TL;DR: This column describes the pitfalls of floating-point numbers and floating-point arithmetic and how a language might eliminate them.
Abstract: Floating-point numbers and floating-point arithmetic contain some surprising pitfalls. In particular, the widely-adopted IEEE 754 standard contains a number that is “not a number,” and thus has some surprising properties. One has to be extremely careful in writing assertions about floating point numbers, to avoid these pitfalls. This column describes the problems and how a language might eliminate them.
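
A few concrete Java lines (my illustration, not taken from the column) showing the assertion pitfalls around the "not a number" value:

    // NaN compares unequal to everything, including itself, so naive assertions
    // about floating-point results can be vacuously true or silently wrong.
    class NaNPitfalls {
        public static void main(String[] args) {
            double x = 0.0 / 0.0;                    // NaN
            System.out.println(x == x);              // false
            System.out.println(x != x);              // true
            System.out.println(x < 1.0 || x >= 1.0); // false: NaN fails every ordering test
            System.out.println(Double.isNaN(x));     // true: the reliable check
            // An assertion like `assert result >= 0 || result < 0;` therefore does
            // not guarantee that result is an ordinary number.
        }
    }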

Journal ArticleDOI
TL;DR: This work reuses the method developed for similar work on Java and the Java Virtual Machine (JVM) in [25], with the aim of establishing some important properties of C# and the CLR by mathematical proofs.
Abstract: This work is part of a larger project [17] which aims at establishing some important properties of C# and the CLR by mathematical proofs. Examples are the correctness of the bytecode verifier of the CLR [11], the type safety of C# (along the lines of the first author's correctness proof [14, 15] for the definite assignment rules), and the correctness of a general compilation scheme. We reuse the method developed for similar work on Java and the Java Virtual Machine (JVM) in [25]. As part of this effort, an abstract interpreter has been developed for C# in [5, 13, 20], including a thread and memory model [24, 23]; see also [8] for a comparative view of the abstract interpreters for Java and for C#.

Journal ArticleDOI
TL;DR: This paper introduces several new design patterns that were discovered through the experience of teaching object-oriented analysis and design in both industry and academia that can be used in almost every application.
Abstract: This paper discusses the use of design patterns during the transition phase from analysis to design of object-oriented systems. Pattern mining, which is the process of finding and documenting design patterns, usually takes place during or after the development of a software system. This paper introduces several new design patterns that were discovered through the experience of teaching object-oriented analysis and design in both industry and academia. These are low level design patterns that can be used in almost every application. Finally, a tool will be proposed to automate the application of these patterns when creating a detailed design model.

Journal ArticleDOI
TL;DR: In this issue of Strategic Software Engineering, multiple views of complexity, sources of complexity and actions that manage complexity are considered.
Abstract: Complexity is a much analyzed, much debated, much measured property of software-intensive products. From a strategic point of view, complexity has implications for the development and evolution of software-intensive products. In this issue of Strategic Software Engineering, I will consider multiple views of complexity, sources of complexity and actions that manage complexity.

Journal Article
TL;DR: Over the last decade the authors have witnessed the maturing of both best practices and worst practices, visible in the patterns that can lead to successful implementations as well as in the anti-patterns that show the common mistakes that we make.
Abstract: Over the last decade we have witnessed the maturing of both best practices (as manifested in the patterns movement, http://en.wikipedia.org/wiki/Design_pattern_(computer_science)) and worst practices (as manifested in the anti-patterns movement, http://en.wikipedia.org/wiki/Anti-pattern). As software engineering practice evolves (moving from functional to objects to components to services), we can see trends emerging both in the patterns that can lead to successful implementations and in the anti-patterns that show the common mistakes that we make.

Journal ArticleDOI
TL;DR: The article discusses the ability of UML and its Profile for Schedulability, Performance and Time to determine the schedulability of a planned piece of software.
Abstract: UML is the standard visual object modeling language and can be very useful as a system design communication language. However, UML as a real-time modeling language has limitations: it basically provides a lot of syntax, but not enough semantics. The UML Profile for Schedulability, Performance and Time extends the notation and semantics for the real-time domain. It is supposed to overcome these limitations and make UML suitable for modeling real-time systems. Since we are mainly concerned with the problem of real-time operating system scheduling, the article discusses the ability of UML and its profile to determine the schedulability of a planned piece of software.
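
As a concrete instance of the kind of schedulability test that timing annotations can feed (this is the classic Liu and Layland rate-monotonic utilization bound, not something specific to the article; the task values are invented):

    // Rate-monotonic schedulability: n periodic tasks with worst-case execution
    // times C[i] and periods T[i] are schedulable if
    //   sum(C[i]/T[i]) <= n * (2^(1/n) - 1).
    class RmaCheck {
        static boolean schedulable(double[] c, double[] t) {
            int n = c.length;
            double u = 0;
            for (int i = 0; i < n; i++) u += c[i] / t[i];
            double bound = n * (Math.pow(2, 1.0 / n) - 1);
            return u <= bound;   // sufficient (not necessary) condition
        }

        public static void main(String[] args) {
            // Two hypothetical tasks: C = {1, 2} ms, T = {4, 8} ms -> U = 0.5 <= 0.828
            System.out.println(schedulable(new double[] {1, 2}, new double[] {4, 8})); // true
        }
    }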

Journal ArticleDOI
TL;DR: The Goal-Driven Development Process is presented using a case study and according to BMM, showing in practice how to develop IT applications on the basis of a company's goals and then align these applications with changing business rules and policies.
Abstract: In the previous articles that I wrote for JOT in 2005 and 2003, I presented the 'Goal-Driven Development Process' and its patterns, which aim at increasing the business reactivity of companies in the face of change. In this article, this methodology is presented using a case study and according to BMM (the Business Motivation Model, voted by the OMG in September 2005). As you know, use-case-driven and object-oriented development processes are widely used in organisations for building their IT systems. This practice allows stakeholders to concentrate their requirements management efforts, as well as their analysis and design efforts, on the usage choices of the systems. However, IT systems developed only with use-case-driven and object-oriented development methodologies do not provide their organisations with good levels of reactivity in the face of change. This is because these systems are not structured on the basis of business goals and the underlying rules and policies that support the achievement of these goals, so they are unable to capture changes in business needs and propagate them coherently toward IT applications [Align-IT]. In order to show in practice how to develop IT applications on the basis of a company's goals and then align these applications with the changing business rules and policies, we present below the steps of the Goal-Driven Development Process on a case study using Enterprise Architect (EA), a UML 2-compliant CASE tool.

Journal ArticleDOI
TL;DR: These variable declarations describe the abstract data model; for the CreditCard example they would include var limit : nat, balance : int, as well as other declarations needed for the credit card example.
Abstract: • variable declarations: these describe the abstract data model and for the CreditCard example would include var limit : nat, balance : int;


Journal ArticleDOI
TL;DR: I focused on SOA reference architectures which define the key enabling technologies, the realization of key SOA principles, and the details of key architectural layers and components through hardware, software and services.
Abstract: In a previous article (http://www.jot.fm/issues/issue_2005_07/column6) I presented the effort on addressing depth in the practice of SOA through the details of reference architectures, roadmaps, and governance. I focused on SOA reference architectures which define the key enabling technologies (services, web services, open standards, etc.), the realization of key SOA principles (composability, loose coupling, reusability, etc.), the details of key architectural layers and components (service provider/consumer architectures, enterprise service bus, component architecture, etc.), and the realization of these components through hardware, software and services.

Journal ArticleDOI
TL;DR: This paper presents a modification of the types of supportive information that Breeze (1992) identified for hortatory discourses as a basis for bringing out the mismatches that are most likely to occur when translating from a verb-object (VO) language to an object-verb (OV) language.
Abstract: This paper presents a modification of the types of supportive information that Breeze (1992) identified for hortatory discourses as a basis for bringing out the mismatches that are most likely to occur when translating from a verb-object (VO) language to an object-verb (OV) language. Earlier sections review the factors that underlie Longacre’s (1996) classification of texts into four broad categories and outline what characterizes mainline information for each genre. They are followed by illustrations of deductive and inductive reasoning from Koine Greek and Ancient Hebrew, since deductive reasoning tends to correlate with instructional exhortations and inductive reasoning with attempts to persuade.

Journal ArticleDOI
TL;DR: The database transaction concept serves as an interesting test case for this objective, since it is a general concept which can be applied to many different applications.
Abstract: An important aim in the design of the Timor programming language is to provide programmers with features which enable them to build complex systems from components which can be developed in isolation from each other (i.e. without knowledge of each other's existence). The database transaction concept serves as an interesting test case for this objective, since it is a general concept which can be applied to many different applications. The paper discusses those features of Timor which allow this objective to be achieved.

Journal ArticleDOI
TL;DR: Service Oriented Architecture (SOA) continues to entice enterprises by promising flexibility, agility and alignment of IT with business objectives; along with the ever elusive advantages of increased reuse, better security, control over integration expenditures, and reduced IT maintenance costs.
Abstract: Service Oriented Architecture (SOA) continues to entice enterprises by promising flexibility, agility and alignment of IT with business objectives, along with the ever elusive advantages of increased reuse, better security, control over integration expenditures, and reduced IT maintenance costs. However, as we have learned over the last two decades, achieving these benefits has more to do with behaviors, policies and procedures than with the quality of the strategy, architecture or code.

Journal ArticleDOI
TL;DR: In this paper, the authors show how insights from the work of Simon Dik, Jan Firbas and Knud Lambrecht have contributed to our understanding of the significance of variations in constituent order.
Abstract: At least three discourse-related areas of exegesis tend not to be handled satisfactorily in many commentaries: the order of constituents in the clause and sentence, the presence versus absence of the article with nouns, and the significance of the conjunctions used. This paper first shows how insights from the work of Simon Dik, Jan Firbas and Knud Lambrecht have contributed to our understanding of the significance of variations in constituent order. Other insights that bear on constituent order are the Principle of Natural Information Flow and the distinction between default versus marked ordering. The paper then outlines how recent insights about the presence versus absence of the article may help us to choose between alternative exegeses of the same passage. The final section shows how insights from the work of Diane Blakemore and Reboul and Moeschler have revolutionized our understanding of the most common conjunctions used in the New Testament.

Journal ArticleDOI
TL;DR: This paper extends the concept of qualifying types by describing how their implementations can include not only bracket methods which are applied when a method of a target object is invoked, but also further "call-out" bracket Methods which can be applied to invocations by the target object of the methods of other objects.
Abstract: This paper extends the concept of qualifying types by describing how their implementations can include not only bracket methods which are applied when a method of a target object is invoked, but also further "call-out" bracket methods which can be applied to invocations by the target object of the methods of other objects. This additional technique can be used for example to provide enhanced synchronisation in qualifying types, as an aid to confining the activities of an object, and as a means of providing parallel activities associated with the sending and receiving of information, e.g. encryption and decryption, data compression.
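
In mainstream Java terms (Timor's qualifying types are a language feature, so this is only an analogy; Channel, Target and the wrapper classes are invented), incoming bracket methods resemble interception of calls to the target object, while "call-out" bracket methods resemble interception of the target's calls to its collaborators:

    // Analogy only: a wrapper brackets incoming calls to the target (e.g. for
    // synchronisation), while the collaborator handed to the target is itself
    // wrapped so that the target's outgoing calls are bracketed too (e.g. encryption).
    interface Channel { void send(String data); }

    class EncryptingChannel implements Channel {          // "call-out" bracket
        private final Channel inner;
        EncryptingChannel(Channel inner) { this.inner = inner; }
        public void send(String data) { inner.send("enc(" + data + ")"); }
    }

    class Target {
        private final Channel out;
        Target(Channel out) { this.out = out; }
        void publish(String msg) { out.send(msg); }        // outgoing call is bracketed
    }

    class SynchronisedTarget {                             // incoming bracket
        private final Target inner;
        SynchronisedTarget(Target inner) { this.inner = inner; }
        synchronized void publish(String msg) { inner.publish(msg); }
    }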

Journal ArticleDOI
TL;DR: While these four RE tasks (not sequential phases!) are commonly performed with varying degrees of completeness, rigor, and success on most projects, a list of tasks containing only these four is far from complete.
Abstract: Many managers and others who are not professional requirements engineers tend to greatly over-simplify requirements engineering (RE). Based on their observations that requirements specifications primarily contain narrative English textual statements of individual requirements and that all members of the engineering team are reasonably literate, there is a common myth that practically anyone with little or no specialized training or expertise can be a requirements engineer. After all, what is there to do but ask a few stakeholders what they want (requirements elicitation), study the resulting requirements to make sure they are understood (requirements analysis), write the requirements down in a document (requirements specification), and then ask the customer if they’re right (requirements validation). Just give the team a short class in use case modeling, and they are ready to go. Unfortunately, the preceding is a misleading, if much too prevalent, myth. While these four RE tasks (not sequential phases!) are commonly performed with varying degrees of completeness, rigor, and success on most projects, a list of tasks containing only these four is far from complete. The purpose of this paper is to provide a brief