
Showing papers presented at "International Conference on Software and Data Technologies in 2011"


Proceedings Article
01 Jan 2011
TL;DR: This paper introduces the concepts of hybrid wikis, namely attributes, type tags, attribute suggestions, and attribute definitions with integrity constraints, and presents this as a novel approach to mitigating the entry barriers of semantic wikis.
Abstract: Wikis are increasingly used for collaborative enterprise information management since they are flexibly applicable and encourage the contribution of knowledge. The fact that ordinary wiki pages contain pure text only limits how the information can be processed or made accessible to users. Semantic wikis promise to solve this problem by capturing knowledge in structured form and offering advanced querying capabilities. However, it is not obvious for business users how they can benefit from providing semantic annotations, which are not familiar to them and often difficult to enter. In this paper, we first introduce the concepts of hybrid wikis, namely attributes, type tags, attribute suggestions, and attribute definitions with integrity constraints. Business users interact with these concepts using a familiar user interface based on forms, spreadsheet-like tables, and auto-completion for links and values. We then illustrate these concepts using an example scenario with projects and persons and highlight key implementation aspects of a Java-based hybrid wiki system (Tricia). The paper ends with a description of practical experiences gained in two usage scenarios, a comparison with related work and an outlook on future work. 1 Motivation and Problem Statement To keep pace with the growing amount of digital information that has to be managed, enterprises have to adopt new tools and methods (Edmunds and Morris, 2000). In the recent past, wikis have increasingly been used as lightweight shared knowledge repositories that allow teams to collaboratively gather and consolidate information that was previously scattered across emails, files on personal computers and paper documents (Stocker and Tochtermann, 2009). Having this information integrated in a central place, being able to search it and to connect related pieces of information with hyperlinks is in fact a major advance.
However, with a growing knowledge base, the demand soon arises to access information in more structured ways that classical wikis do not support. For example, it is not possible to query a wiki for a company’s research projects that started in the year 2010, or to export data about these projects to a spreadsheet. So even if only rudimentary structured querying functionality is required, enterprises have to resort to separate applications, usually specialized to manage information of particular domains (like employees, projects or customers), or they have to develop customized solutions. In both cases the advantages of storing information in a central repository are lost. Technically, semantic wikis are promising tools to tackle this problem. They allow combining textual content with structured data. Typically, users have to provide this data in the form of semantic annotations to wiki pages or parts thereof. The structured part of the information in the wiki can then be queried similarly to the contents of a database. However, in practice they are rarely used as a general-purpose tool that dynamically adapts to new needs. In contrast, semantic wikis are often pre-configured by experts to solve rather specific problems. Although, from a theoretical point of view, they can be used to structure arbitrary information, there are several barriers users face when editing content:
• usually, a special syntax has to be used to add semantic annotations, which makes editing structured content difficult and cumbersome
• the modelling concepts are not familiar to the users
• it is not obvious to users how they can benefit from providing semantic annotations
This paper describes a novel approach to mitigate these problems. In Section 2, our approach of so-called hybrid wikis is presented and illustrated using an example scenario. The term ‘hybrid’ expresses that a subset of the features of semantic wikis is integrated into classic wiki software.
The main modelling concepts are described and the limitations of the approach are discussed. Important and interesting technical aspects of the implementation are covered in Section 3. Section 4 contains two case studies demonstrating the practicability of the approach. In Section 5, we give an overview of related work and highlight some examples of semantic wikis that use different approaches to facilitate data entry. The paper concludes with a short summary and an outlook on further research and planned improvements of the prototype.

39 citations


Proceedings Article
01 Jul 2011
TL;DR: A synergistic approach is described that extends a process-aware information system with contextual awareness and integrates this in a SEE to show support for improved team coordination, greater situational awareness for developers, and process guidance as well as process navigability for collaborating software engineers.
Abstract: The dynamic nature and high degree of collaboration and communication inherent in software development projects raises various challenges for the automated coordination of tasks in software engineering environments (SEEs). To address these challenges and to enable automated coordination, adaptive process-aware SEEs are required that enhance process quality while not encumbering software development. This paper describes a synergistic approach that extends a process-aware information system with contextual awareness and integrates this in a SEE. Abstract processes and the actually executed workflows are automatically and contextually associated. In particular, intrinsic and extrinsic process activities are considered and a context-based reasoning process is used to automatically derive possible (artifact) activity relations and consequences. Thus, necessary follow-up activities can be automatically governed. Our results show support for improved team coordination, greater situational awareness for developers, and process guidance as well as process navigability for collaborating software engineers.

28 citations


Book ChapterDOI
18 Jul 2011
TL;DR: This paper explains how SADT/IDEF0 domain modeling can bring correct and complete context to today’s commonplace disciplines of the Unified Modeling Language (UML), Agile System Development, and Usability Engineering methods.
Abstract: Many experts state that a) specifying "all the small parts of a system" and b) correct expected system usage can make Agile Software Development more effective. The Unified Modeling Language (UML) addresses the former; Usability Engineering addresses the latter. Taken together, they create a systems development framework capable of: a) specifying functions, data, behavior and usage, b) rapid prototyping, and c) verifying system usability and correctness. All three of these methods focus first on the system, while secondarily trying to ascertain system context. Correct and complete context requires domain modeling. Structured Analysis and Design Technique (SADT/IDEF0) is a proven way to model any kind of domain. Its power and rigor come from: a) a synthesis of graphics, natural language, hierarchical decomposition, and relative context coding, b) distinguishing controls from transformations, c) function activation rules, and d) heuristics for managing model complexity. This paper explains how SADT/IDEF0 domain modeling can bring correct and complete context to today’s commonplace disciplines of the Unified Modeling Language (UML), Agile System Development, and Usability Engineering methods.

19 citations


Proceedings Article
01 Jan 2011
TL;DR: The size of a large collection of software is measured, and the statistical distribution of its source code file sizes is found to follow a double Pareto distribution: large files occur more often than a lognormal model predicts, so previously proposed models underestimate the cost of software.
Abstract: Source code size is an estimator of software effort. Size is also often used to calibrate models and equations to estimate the cost of software. The distribution of source code file sizes has been shown in the literature to be a lognormal distribution. In this paper, we measure the size of a large collection of software (the Debian GNU/Linux distribution version 5.0.2), and we find that the statistical distribution of its source code file sizes follows a double Pareto distribution. This means that large files are to be found more often than predicted by the lognormal distribution, therefore the previously proposed models underestimate the cost of software.
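The tail argument can be sketched numerically. The snippet below uses synthetic sizes, not the Debian measurements; the mixture parameters, the threshold, and the `sample_size` helper are all illustrative assumptions. It fits a plain lognormal to a sample whose upper tail is Pareto and compares the fitted tail probability with the empirical one:

```python
import math
import random

random.seed(42)

# Hypothetical stand-in for measured file sizes: a lognormal body with
# a Pareto upper tail spliced on, mimicking the "double Pareto" shape
# the paper reports for Debian 5.0.2. All parameters are illustrative.
def sample_size():
    if random.random() < 0.9:
        return math.exp(random.gauss(4.0, 1.0))   # lognormal body
    return 200.0 * random.paretovariate(1.5)      # heavy Pareto tail

sizes = [sample_size() for _ in range(50_000)]

# Fit a plain lognormal by the moments of log(size).
logs = [math.log(s) for s in sizes]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))

def lognormal_tail(x):
    """P(X > x) under the fitted lognormal."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

threshold = 5_000.0
empirical = sum(s > threshold for s in sizes) / len(sizes)
predicted = lognormal_tail(threshold)

# The lognormal fit assigns much less mass to very large files than
# the sample contains -- the paper's argument for cost underestimation.
print(f"empirical P(size > {threshold:.0f}) = {empirical:.5f}")
print(f"lognormal P(size > {threshold:.0f}) = {predicted:.5f}")
```

With a genuinely heavy tail, the empirical exceedance probability comes out well above the lognormal prediction, which is the underestimation effect the paper quantifies.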

19 citations


Book ChapterDOI
18 Jul 2011
TL;DR: A synergistic approach is described that extends a process-aware information system with contextual awareness and integrates this in a SEE that enables the automatic initiation and governance of follow-up activities caused by changes implied by other activities.
Abstract: Software Engineering (SE) remains an immature discipline and SE projects continue to be challenging due to their dynamic nature. One problematic aspect is the coordination of and collaboration among the many individuals working in such projects. Numerous efforts have been made to establish software engineering environments (SEEs) to address this aspect. However, since SE projects depend on individuals and their intentions, their collaboration is still performed manually to a large degree. Manual tasks are subject to human errors of omission or commission that can result in communication breakdowns, which are compounded within multi-project environments. This paper describes a synergistic approach that extends a process-aware information system with contextual awareness and integrates this in a SEE. This enables the system to support users with active and passive information and to facilitate collaboration. Context information is presented to the users, providing them with process navigability information relating to their current activities. Additionally, automated information distribution improves awareness about the actions of others. Finally, this approach enables the automatic initiation and governance of follow-up activities caused by changes implied by other activities.

18 citations


Proceedings Article
18 Jul 2011
TL;DR: A telecommunications profile for ArchiMate that offers conformity to standards through the reuse of a recognized Enterprise Architecture modeling language and provides easier adoption by Service Providers due to inclusion of domain specific concepts.
Abstract: Since the 1990s, the telecommunications service creation industry has undergone radical change. Services have shifted from being based on a switching environment to being mainly based on software. To remain competitive in the new dynamic conditions of an open market, telecommunications organizations need to produce high-quality services at low prices within short periods of time. Service Providers, in particular, need an overall representation of service creation taking in all business, management, and technical activities. To reduce their concept-to-market time for new services, they also need tools specialized for their tasks and domain. In this position paper, we argue that a telecommunications profile for an Enterprise Architecture modeling language answers these needs. We also design a telecommunications profile for ArchiMate that offers conformity to standards through the reuse of a recognized Enterprise Architecture modeling language. Moreover, this profile provides easier adoption by Service Providers due to the inclusion of domain-specific concepts. The profiling mechanism we propose may be used for defining language extensions specific to other industries as well.

17 citations


Book ChapterDOI
18 Jul 2011
TL;DR: The i* framework is proposed for iterative software planning; each goal from the i* strategic dependency model is evaluated on the basis of the (high-level) threats it faces and the expected quality factors, to determine a priority among the model goals and “feed” an iterative template to plan the whole project realization.
Abstract: Organizational modeling with the i* framework has widely been used for model-driven software development adopting a transformational approach, notably within the Tropos process. Its high-level representation elements allow partitioning the software problem into adequate and manageable elements (actors, goals, tasks, resources and dependencies), leading to an agent-oriented design and eventually an implementation with agent technologies (JACK, Jadex, Chimera Agent, ...). This paper proposes to use the i* framework for iterative software planning; each goal from the i* strategic dependency model is evaluated on the basis of the (high-level) threats it faces and the expected quality factors. This makes it possible to determine a priority among the model goals and “feed” an iterative template to plan the whole project realization. This framework is thus meant to be applied during the first iteration of the project for model-driven software project management. The development of a production management system in the steel industry is used as an example.

15 citations


Proceedings Article
01 Jan 2011
TL;DR: Lamb, a lexical analyzer that captures overlapping tokens caused by lexical ambiguities, is presented; it enables a context-sensitive lexical analysis that supports lexically ambiguous language specifications.
Abstract: Lexical ambiguities may naturally arise in language specifications. We present Lamb, a lexical analyzer that captures overlapping tokens caused by lexical ambiguities. This novel technique scans through the input string and produces a lexical analysis graph that describes all the possible sequences of tokens that can be found within the string. The lexical graph can then be fed as input to a parser, which will discard any sequence of tokens that does not produce a valid syntactic sentence. In summary, our approach allows a context-sensitive lexical analysis that supports lexically-ambiguous language specifications.
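The idea of keeping every overlapping match instead of committing to one can be sketched as follows. The token rules and the recursive enumeration are hypothetical stand-ins, not Lamb's actual implementation, which builds an explicit lexical analysis graph rather than enumerating sequences:

```python
import re

# Hypothetical token definitions; Lamb's actual rule format differs.
TOKEN_RULES = [
    ("FLOAT", re.compile(r"\d+\.\d+")),
    ("INT",   re.compile(r"\d+")),
    ("DOT",   re.compile(r"\.")),
]

def all_tokenizations(text, pos=0):
    """Enumerate every token sequence covering text[pos:].

    Instead of committing to a single longest match, every matching
    rule spawns an alternative; a parser can later discard sequences
    that do not form a valid syntactic sentence.
    """
    if pos == len(text):
        yield []
        return
    for name, pattern in TOKEN_RULES:
        m = pattern.match(text, pos)
        if m:
            for rest in all_tokenizations(text, m.end()):
                yield [(name, m.group())] + rest

# "1.2" is lexically ambiguous: one FLOAT, or INT DOT INT.
for seq in all_tokenizations("1.2"):
    print(seq)
```

A graph representation, as in the paper, shares common suffixes between alternatives instead of materializing each sequence, which keeps the output compact when ambiguities multiply.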

14 citations


Proceedings Article
01 Jan 2011
TL;DR: It is claimed that the development of energy-efficient and -aware software systems requires a careful re-examination of the many paradigms in software development.
Abstract: Energy efficiency and energy awareness are buzzwords in various areas of information and communication technology and are at the core of GreenIT. Computing centers aim to reduce energy consumption in order to save money and carbon dioxide emissions. Furthermore, GreenIT labels are perfect selling points for computer equipment. Battery-powered and mobile devices especially must consider software’s energy consumption in order to prolong their uptime while keeping the desired or agreed quality of service (QoS). Even if energy awareness regarding hardware has been researched intensively for a couple of years, the analysis of the impact of software on energy consumption is rather novel. We claim that the development of energy-efficient and -aware software systems requires a careful re-examination of the many paradigms in software development.

13 citations


Book ChapterDOI
18 Jul 2011
TL;DR: A full MDD approach that covers everything from requirements engineering to automatic software code generation, including a systematic technique for deriving conceptual models from business process and requirements models.
Abstract: Models play a paramount role in model-driven development (MDD): several modelling layers allow defining views of the system under construction at different abstraction levels, and model transformations facilitate the transition from one layer to the other. However, how to effectively integrate requirements engineering within model-driven development is still an open research challenge. This paper shows a full MDD approach that covers everything from requirements engineering to automatic software code generation. This has been achieved by the integration of two methods: Communication Analysis (a communication-oriented requirements engineering method [1]) and the OO Method (a model-driven object-oriented software development method [2]). For this purpose, we have proposed a systematic technique for deriving conceptual models from business process and requirements models; it allows deriving class diagrams, state-transition diagrams and specifications of class service behaviour. The approach has been evaluated by means of an ontological evaluation, lab demos and controlled experiments; we are currently planning to apply it under conditions of practice in an action research endeavour.

12 citations


Book ChapterDOI
18 Jul 2011
TL;DR: This work presents two different debugging approaches for Java: declarative debugging, which has its origins in the area of functional and logic programming, and omniscient debugging, which is basically an extension of trace debugging; both are integrated into a single hybrid debugger called JHyde.
Abstract: Until today the most common technique to debug Java programs is trace debugging. In this work we present two different debugging approaches for Java: declarative debugging, which has its origins in the area of functional and logic programming, and omniscient debugging, which is basically an extension of trace debugging. To benefit from the advantages of both techniques we have integrated them into a single hybrid debugger called JHyde. We use JHyde to debug an erroneous merge sort algorithm and mention important aspects of its implementation. Furthermore, we show that the efficiency of the declarative debugging method can be significantly improved by a new debugging strategy.

Proceedings Article
01 Jan 2011
TL;DR: A novel approach and a tool to automatically derive test cases from bounded algebraic specifications of ADTs, assuring coverage of every axiom and of all minterms in its full disjunctive normal form (FDNF).
Abstract: Algebraic specification languages have been successfully used for the formal specification of abstract data types (ADTs) and software components, and there are several approaches to automatically derive test cases that check the conformity between the implementation and the algebraic specification of a software component. However, existing approaches do not assure the coverage of conditional axioms and conditions embedded in complex axioms. In this paper, we present a novel approach and a tool to automatically derive test cases from bounded algebraic specifications of ADTs, assuring coverage of each axiom and of all minterms in its full disjunctive normal form (FDNF). The algebraic specification is first translated into the Alloy modelling language, and the Alloy Analyzer tool is used to find model instances for each test goal (axiom and minterm to cover), from which test cases in JUnit are extracted.
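Minterm coverage of a guard condition can be illustrated with a small sketch. The guard and its variable names are invented for illustration, and the paper's approach derives such goals from Alloy model instances rather than by brute-force enumeration:

```python
from itertools import product

def minterms(variables, predicate):
    """Enumerate the satisfying minterms (FDNF terms) of a predicate.

    Each satisfying assignment corresponds to one term of the full
    disjunctive normal form, i.e. one test goal to cover.
    """
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if predicate(env):
            yield env

# Guard of a hypothetical conditional axiom for a bounded stack:
# not full(s) and (empty(s) or pushed)
guard = lambda e: not e["full"] and (e["empty"] or e["pushed"])
goals = list(minterms(["full", "empty", "pushed"], guard))
for g in goals:
    print(g)
print(len(goals), "test goals")
```

Covering each minterm separately, instead of the guard as a whole, is what distinguishes FDNF coverage from plain axiom coverage.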

Proceedings Article
01 Jan 2011
TL;DR: The transformation engine ModGraph intends to fill this gap in model-driven software engineering by complementing the Eclipse Modeling Framework with graphical transformation rules from which executable code is generated.
Abstract: Model-driven software engineering aims at increasing productivity by replacing conventional programming with the development of high-level executable models. However, current technology focuses on structural models, while behavioral modeling is still neglected. The transformation engine ModGraph intends to fill this gap. ModGraph complements the Eclipse Modeling Framework with graphical transformation rules from which executable code is generated. An operation defined in an Ecore model is specified by a model transformation rule which is compiled into a Java method calling EMF operations. In this way, ModGraph complements the capabilities of EMF which would compile operations into empty Java methods. The net result is an environment which provides comprehensive support for executable models.

Proceedings Article
01 Jan 2011
TL;DR: This paper proposes a novel way of building a modular ontology that may be suitable for this task, composed of interrelated modules focused on specific topics, and uses well-known Web-based semantic relatedness measures to improve the content and structure of the modular ontology.
Abstract: When a Web query is submitted and relevant documents are not found, users are faced with the difficult task of reformulating the query. It may be argued that ontologies can be useful to automate the query reformulation process, taking advantage of the domain knowledge. This paper proposes a novel way of building a modular ontology that may be suitable for this task, composed of interrelated modules focused on specific topics. We propose to use well-known Web-based semantic relatedness measures to improve the content and structure of the modular ontology. Some experiments on query reformulation based on the obtained ontology modules show satisfactory results.
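One widely known Web-based relatedness measure of the kind the paper refers to is the Normalized Google Distance, computed from search-engine hit counts; the counts and index size below are made up for illustration:

```python
import math

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance between two terms.

    fx, fy  -- hit counts for each term alone
    fxy     -- hit count for both terms together
    n       -- (assumed) total number of indexed pages
    Lower values indicate more closely related terms.
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

N = 10 ** 10                 # assumed index size, illustrative only
d = ngd(10 ** 7, 5 * 10 ** 6, 10 ** 6, N)
print(f"NGD = {d:.3f}")      # smaller distance = stronger relatedness
```

A measure like this can score how related a candidate concept is to a module's topic, which is how relatedness measures can steer both the content and the structure of the ontology modules.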

Proceedings Article
01 Jan 2011
TL;DR: By using a dynamic language as an augmentation to MDE’s traditional UML notation, it is possible to create models that are executable, exhibit flexible type checking, and provide a smaller cognitive gap between business users, modelers and developers.
Abstract: There has been a gradual but steady convergence of dynamic programming languages with modeling languages. Modern dynamic languages such as Groovy and Ruby provide for the creation of domain-specific languages that can provide a level of abstraction comparable to that of modeling languages such as UML. This convergence makes dynamic languages suitable as modeling languages but with benefits that traditional modeling languages do not provide. One area that can benefit from this convergence is model driven engineering. By using a dynamic language as an augmentation to MDE’s traditional UML notation, it is possible to create models that are executable, exhibit flexible type checking, and provide a smaller cognitive gap between business users, modelers and developers.

Book ChapterDOI
18 Jul 2011
TL;DR: The paper illustrates the adaptive approach under development, integrating synthesis of Connectors, stochastic model-based analysis performed at design time, and run-time monitoring, together with a framework to analyse and assess dependability and performance properties.
Abstract: The development of next generation Future Internet systems must be capable of addressing the complexity, heterogeneity, interdependency and, especially, evolution of loosely connected networked systems. The European project Connect addresses the challenging and ambitious topic of ensuring eternally functioning distributed and heterogeneous systems through on-the-fly synthesis of the Connectors through which they communicate. In this paper we focus on the Connect enablers that dynamically derive such connectors, ensuring the required non-functional requirements via a framework to analyse and assess dependability and performance properties. We illustrate the adaptive approach under development, which integrates synthesis of Connectors, stochastic model-based analysis performed at design time, and run-time monitoring. The proposed framework is illustrated on a case study.

Proceedings Article
01 Jan 2011
TL;DR: An approach based on formalizing the implicit instructional design domain language embedded in Learning Management Systems: this specific language is identified and formalized so that it can be used as a means of communication with external design tools without losing the semantics of the designed scenarios.
Abstract: Despite the increasing number of Technology Enhanced Learning platforms (e.g., MOODLE) and their widespread adoption, the operationalization of learning scenarios is still a problem for teachers. We aim to facilitate their implementation on existing platforms. We propose an approach based on the formalization of the implicit instructional design domain language embedded in these Learning Management Systems. It consists in identifying and formalizing this specific language in order to use it as a means of communication with external design tools without losing the semantics of the designed scenarios. The originality of our approach lies in performing the scenario operationalization through the development of a communication API based on the formalized language of the platform. Our proposal is also based on the application of theories and practices from the Domain-Specific Modeling domain in order to formalize the domain language, to specify some graphical languages on top of it, and to help in the development of dedicated graphical editors. This paper details the implementation of our proposal (API and first editor) on the MOODLE platform.

Proceedings Article
01 Jan 2011
TL;DR: A new framework aimed at situation-dependent scenario generation for a project management skill-up simulator, providing high fidelity of project status and a well-configured learning situation towards pedagogical achievement.
Abstract: This paper addresses a new framework aimed at situation-dependent scenario generation for a project management skill-up simulator. Project management is an inherently human-centric activity, and research on its education has been done using simulation. Project management covers several aspects of software development such as planning, scheduling, progress management and negotiation. We especially focus on the progress management phase to provide high fidelity of project status and a well-configured learning situation towards pedagogical achievement. First, three design principles are argued from these viewpoints. Second, a simple but fully functional project model is proposed for simulating the essential aspects of the Q(uality), C(ost) and D(elivery) criteria. Third, situation-dependent scenario generation is described with “Events” and “Trigger control of trouble events”. The proposed framework has been implemented and shows effective scenario generation in response to a trainee’s interactive operations.

Proceedings Article
01 Jan 2011
TL;DR: A template for agent-oriented iterative development is defined, together with a software project management framework to plan the project iterations.
Abstract: Iterative development has gained popularity in the software industry, notably in the development of enterprise applications where requirements and needs are difficult for users to express and business processes difficult for analysts to understand. Such a software development life cycle is nevertheless often used in an ad-hoc manner. Even when templates such as the Unified Process are furnished, poor documentation is provided on how to break down the project into manageable units and plan their development. This paper defines a template for agent-oriented iterative development as well as a software project management framework to plan the project iterations. The agent paradigm is used at the analysis level with actors’ goals considered as piecing elements, and an iterative template is proposed for planning purposes. High-level risk and quality issues focus on prioritizing the project goals so that each element’s “criticality” can be evaluated and a model-driven schedule of the overall software project can be set up. The software process is illustrated with the development of a production planning system for the steel industry.

Proceedings Article
19 Jul 2011
TL;DR: An innovative process framework for emergency management is defined and described from the methodological and architectural views, together with procedures for deploying emergency management processes on the specific architecture based on emergency management requirements.
Abstract: The paper deals with the effective resolution of emergency situations using Process Management. Based on current approaches, an innovative process framework for emergency management is defined and described from the methodological and architectural views. The methodology describes how to deploy emergency management processes on the specific architecture based on emergency management requirements. Overall quality assurance is ensured by continuous verification, validation and optimization. The application of the process framework for emergency management is shown in a case study, which describes an accident of a vehicle transporting dangerous goods. The case study illustrates the deployment of emergency processes up to terrain case studies.

Proceedings Article
01 Jan 2011
TL;DR: Three formal modules allowing reconfigurations of the system’s Petri nets are defined: changer places to dynamically change places of the model, changer transitions to dynamically reconfigure transitions, and changer marking to modify the initial markings of places.
Abstract: The paper deals with dynamic automatic reconfigurations of Control Systems classically modelled by Petri nets. Three different forms of reconfiguration can be applied at run-time: addition/removal of places, addition/removal/update of transitions, or simply changing the initial marking. We define three formal modules allowing reconfigurations of the system’s Petri nets: changer places to dynamically change places of the model, changer transitions to dynamically reconfigure transitions, and changer marking to modify the initial markings of places. To guarantee a correct behavior of this architecture according to user requirements, we apply model checking using the SESA tool to verify CTL-based properties of the proposed modules and of the system. The approach is applied to a Real Benchmark Production System.
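The three reconfiguration operations can be sketched with a toy Petri-net class. The class and method names below are our own, and the sketch omits the CTL verification the paper performs with SESA:

```python
# A minimal Petri net with run-time reconfiguration, loosely following
# the paper's "changer" modules (names are illustrative, not theirs).
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        """'changer transitions': add or update a transition."""
        self.transitions[name] = (inputs, outputs)

    def remove_transition(self, name):
        """'changer transitions': remove a transition at run-time."""
        self.transitions.pop(name, None)

    def set_marking(self, marking):
        """'changer marking': replace the current marking."""
        self.marking = dict(marking)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

net = PetriNet({"idle": 1, "done": 0})
net.add_transition("work", {"idle": 1}, {"done": 1})
net.fire("work")
print(net.marking)
```

In the paper, each such reconfiguration step would additionally be checked against CTL properties before being accepted, which this sketch does not attempt.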

Proceedings Article
01 Jan 2011
TL;DR: This paper describes an approach to construct Semantic Business Process Patterns (SB2P) from a set of process models belonging to the same business domain, composed of process fragments that are semantically close but may have structural and/or behavioral differences.
Abstract: Both the academic and industrial communities are increasingly interested in developing methods and tools for automating the design of business process models. In this context, several approaches have been proposed to make modeling easier and to enhance the quality of the resulting artifacts. To achieve these objectives, these approaches are based on pattern reuse. Despite the agreed-upon advantages of patterns in accelerating the design process and improving the quality of the produced models, few researchers have shown how to construct business process patterns. In this paper, we describe an approach to construct Semantic Business Process Patterns (SB2P) from a set of process models. A SB2P is a pattern synthesized from a set of process models belonging to the same business domain. It is composed of process fragments that are semantically close but may have structural and/or behavioral differences.

Proceedings Article
01 Jan 2011
TL;DR: A validation stage is integrated into the suggested process in order to maintain the coherence of the resulting formalized ontology core during this process, and this methodology has been implemented in a rule-based system.
Abstract: The present paper proposes a methodology for generating a core domain ontology from an LMF standardized dictionary (ISO 24613). It consists in deriving the ontological entities systematically from the explicit information, taking advantage of the LMF dictionary structure. Indeed, such a finely-structured source incorporates multi-domain lexical knowledge at the morphological, syntactic and semantic levels, lending itself to ontological interpretations. The basic feature of the proposed methodology lies in the proper building of ontologies. To this end, we have integrated a validation stage into the suggested process in order to maintain the coherence of the resulting formalized ontology core during this process. Furthermore, this methodology has been implemented in a rule-based system, whose high performance is shown through an experiment carried out on the Arabic language. This choice is explained not only by the great deficiency of work on Arabic ontology building, but also by the availability within our research team of an LMF standardized


Proceedings Article
01 Jan 2011
TL;DR: The core of the concept is simulation workflows that enable distributed execution of formerly monolithic programs, together with a resource manager that steers server workload and handles data.
Abstract: Computer simulations play an increasingly important role in explaining or predicting phenomena of the real world. We recognized during our work with scientific institutes that many simulation programs can be considered legacy applications with low software ergonomics, usability, and hardware support. Often there is no GUI, and tedious manual tasks have to be conducted. We are convinced that information technology and software engineering concepts can help to improve this situation to a great extent. In this poster presentation we therefore propose a concept of a simulation environment for legacy scientific applications. The core of the concept is simulation workflows that enable distributed execution of formerly monolithic programs, together with a resource manager that steers server workload and handles data. As a proof of concept we implemented a Monte-Carlo simulation of precipitations in copper-alloyed iron and tested it with real data.

Proceedings Article
14 Sep 2011
TL;DR: This work shows how the MARTE profile can be used for this purpose, defines algorithms for computing the required throughput and time limit for each action, and studies their theoretical and empirical performance.
Abstract: High-quality software needs to meet both functional and non-functional requirements. In some cases, software must accomplish specific performance requirements, but most of the time, only high-level performance requirements are available: it is up to the developer to decide what performance should be expected from each part of the system. In this context, the MARTE profile was proposed by the OMG to extend UML for model-driven development of real-time and embedded systems, focusing on assisting early performance analysis and scheduling. We propose using the MARTE profile to derive the performance requirements of each action in a UML activity diagram from the requirements of the containing activity and some local annotations. In this work, we show how the MARTE profile can be used for this purpose, define algorithms for computing the required throughput and time limit for each action, and study their theoretical and empirical performance. The algorithms have been integrated into the Papyrus UML diagram editor and feed their results back into the original model. Running both algorithms on activities with 225 paths requires 10 seconds on average.
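To make the idea of deriving per-action requirements concrete, here is a minimal sketch (not the paper's actual algorithm) of splitting an activity-level time budget across its actions proportionally to local weight annotations; the function name, action names, and weights are illustrative assumptions.

```python
def split_time_limit(total_limit_ms: float, weights: dict[str, float]) -> dict[str, float]:
    """Distribute an activity-level time budget across its actions,
    proportionally to a per-action weight annotation (a stand-in for
    the local annotations mentioned in the abstract)."""
    total_weight = sum(weights.values())
    return {action: total_limit_ms * w / total_weight
            for action, w in weights.items()}

# Hypothetical activity with three sequential actions and a 1000 ms deadline:
limits = split_time_limit(1000.0, {"validate": 1, "process": 3, "report": 1})
# 'process' carries 3/5 of the weight, so it receives 3/5 of the budget.
```

A real derivation, as the abstract indicates, would also account for branching, loops, and required throughput, which is why dedicated algorithms and tool support (Papyrus integration) are needed.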

Proceedings Article
01 Jan 2011
TL;DR: The personality-based methodology has revealed appropriate personality patterns for assigning the most suitable roles in software development, considering not only people's capabilities and role demands but also their personality traits, showing that knowing a software engineer's personality can improve the software development process.
Abstract: Nowadays organizations work to improve their software development process, with the purpose of reducing costs, improving quality and increasing planning reliability. That is why decision making pertaining to role assignment in software engineering projects is one of the most important factors affecting the software development process in organizations. We should not only consider individuals' abilities and capabilities for better team performance but also take their personality traits into account when assigning the most suitable role in an effective working team. Through a compilation of studies with the RAMSET (Role Assignment Methodology for Software Engineering Teams) methodology, certain personalities and typologies have been identified as fitting certain types of roles, thus helping us build a better, more cohesive and less conflict-prone team. Our personality-based methodology has revealed appropriate personality patterns for assigning the most suitable roles in software development, according not only to people's capabilities and role demands but also taking personality traits into consideration, thus showing that knowing a software engineer's personality can improve the software development process.

Proceedings Article
01 Jan 2011
Abstract: Visibly Pushdown Languages (VPL) have been proposed as a formalism useful for specifying and verifying complex, recursive systems such as application software. However, VPL turn out to be unsuitable for the compositional specification of concurrent software, as they are not closed under shuffle. Multi-stack Visibly Pushdown Languages (MVPL) naturally express concurrent constructions. We find, however, that concurrency cannot be expressed compositionally (for indeed MVPL are not closed under shuffle either). Furthermore, MVPL operations must be expressed under rigid restrictions on the input alphabet, which hinder, among other things, the specification of dynamic creation of threads of execution. If we remove the restrictions, then MVPL lose almost all their closure properties; we find, however, a natural renaming process that yields the notion of disjoint MVPL operations. These operations eliminate the restrictions and also create closure under shuffle. This effort opens the area of MVPL-based compositional specification and verification of complex systems.
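For readers unfamiliar with the shuffle operation the abstract refers to, here is a minimal sketch: the shuffle of two words is the set of all interleavings that preserve the letter order within each word (modeling arbitrary interleaving of two concurrent threads). The function name is our own; this illustrates the operation only, not the VPL/MVPL closure results.

```python
def shuffle(u: str, v: str) -> set[str]:
    """All interleavings of u and v preserving the internal order
    of each word -- the shuffle operation on words, extended
    pointwise to languages in formal language theory."""
    if not u:
        return {v}
    if not v:
        return {u}
    # Either the next letter comes from u, or it comes from v.
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

# Interleaving two two-letter "threads" yields C(4, 2) = 6 schedules:
print(sorted(shuffle("ab", "cd")))
# → ['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
```

Non-closure under shuffle means that even when two languages are each recognizable by a (multi-stack) visibly pushdown automaton, the language of all their interleavings may not be, which is what blocks compositional specification of concurrency.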

Book ChapterDOI
18 Jul 2011
TL;DR: An automated approach for the system testing of modern, industrial strength dynamic web applications, where a combination of dynamic crawling-based model generation and back-end model checking is used to comprehensively validate the navigation behavior of the web application.
Abstract: Web applications pervade all aspects of human activity today. Rapid growth in the scope, penetration and user-base of web applications, over the past decade, has meant that web applications are substantially bigger, more complex and sophisticated than ever before. This places even more demands on the validation process for web applications. This paper presents an automated approach for the system testing of modern, industrial strength dynamic web applications, where a combination of dynamic crawling-based model generation and back-end model checking is used to comprehensively validate the navigation behavior of the web application. We present several case studies to validate the proposed approach on real-world web applications. Our evaluation demonstrates that the proposed approach is not only practical in the context of applications of such size and complexity but can provide greater automation and better coverage than current industrial validation practices based on manual testing.

Proceedings Article
01 Jan 2011
TL;DR: This work proposes a new approach called COMODE (Context Aware Model Driven Development) which advocates Model Driven Development to promote reuse, adaptability and interoperability for context-aware application development on service platforms.
Abstract: Context-aware development has been an emergent subject of many research works in ubiquitous computing. Few of them propose Model Driven Development (MDD) as an approach for context-aware application development. Many focus on context capture and adaptation through the use of legacy architectures and other artefacts to bind context with application logic. This work proposes a new approach called COMODE (Context Aware Model Driven Development) which advocates Model Driven Development to promote reuse, adaptability and interoperability for context-aware application development on service platforms. In this paper we focus on the transformation issue and propose a parameterized transformation as a new approach for model-driven development of context-aware services.