Author

Dennis B. Smith

Other affiliations: Carnegie Mellon University
Bio: Dennis B. Smith is an academic researcher from the Software Engineering Institute. The author has contributed to research in topics including service-oriented architecture and software systems, has an h-index of 27, and has co-authored 103 publications receiving 2,962 citations. Previous affiliations of Dennis B. Smith include Carnegie Mellon University.


Papers
Book Chapter DOI
01 Jan 2013
TL;DR: In this paper, the authors present the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems, focusing on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes, from centralized to decentralized control, and practical run-time verification & validation.
Abstract: The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.

783 citations

Proceedings Article DOI
01 May 2000
TL;DR: This paper presents a roadmap for reverse engineering research for the first decade of the new millennium, building on the program comprehension theories of the 1980s and the reverse engineering technology of the 1990s.
Abstract: By the early 1990s the need for reengineering legacy systems was already acute, but recently the demand has increased significantly with the shift toward web-based user interfaces. The demand by all business sectors to adapt their information systems to the Web has created a tremendous need for methods, tools, and infrastructures to evolve and exploit existing applications efficiently and cost-effectively. Reverse engineering has been heralded as one of the most promising technologies to combat this legacy systems problem. This paper presents a roadmap for reverse engineering research for the first decade of the new millennium, building on the program comprehension theories of the 1980s and the reverse engineering technology of the 1990s.

274 citations

Proceedings Article DOI
20 May 2007
TL;DR: This position paper attempts to investigate an initial classification of challenge areas related to service orientation and service-oriented systems, and proposes the notion of Service Strategy as a binding model for these three categories.
Abstract: Service orientation has been touted as one of the most important technologies for designing, implementing and deploying large scale service provision software systems. In this position paper we attempt to investigate an initial classification of challenge areas related to service orientation and service-oriented systems. We start by organizing the research issues related to service orientation in three general categories: business, engineering and operations, plus a set of cross-cutting concerns across domains. We further propose the notion of Service Strategy as a binding model for these three categories. Finally, concluding this position paper, we outline a set of emerging opportunities to be used for further discussion.

114 citations

Proceedings Article DOI
24 Sep 2005
TL;DR: An early version of SMART was applied with good success to assist a DoD organization in evaluating the potential for converting components of an existing system into services that would run in a new and tightly constrained SOA environment.
Abstract: This report describes the service-oriented migration and reuse technique (SMART). SMART is a technique that helps organizations analyze legacy systems to determine whether their functionality, or subsets of it, can be reasonably exposed as services in a service-oriented architecture (SOA), and thus to achieve greater interoperability. Converting legacy components to services allows systems to remain largely unchanged while exposing functionality to a large number of clients through well-defined service interfaces. A number of organizations are adopting this approach by defining SOAs that include a set of infrastructure common services on which organizations can build additional domain services or applications. SMART considers the specific interactions that will be required by the target SOA and any changes that must be made to the legacy components. An early version of SMART was applied with good success to assist a DoD organization in evaluating the potential for converting components of an existing system into services that would run in a new and tightly constrained SOA environment.
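The migration pattern the SMART abstract describes, leaving legacy components largely unchanged while a thin adapter exposes them through a well-defined service interface, can be sketched as follows. This is an illustrative example, not part of SMART itself; the legacy function, the service class, and the message format are all invented for the sketch.

```python
# Hypothetical sketch of the legacy-to-service migration pattern:
# the legacy code stays untouched, and an adapter exposes it
# through a well-defined service-level interface.

def legacy_compute_tax(amount_cents, region_code):
    # Existing legacy logic, left unchanged by the migration.
    rate = {"US": 0.07, "EU": 0.20}.get(region_code, 0.0)
    return int(amount_cents * rate)

class TaxService:
    """Service interface wrapping the legacy component."""

    def quote(self, request: dict) -> dict:
        # Translate the service-level message into the legacy call...
        tax = legacy_compute_tax(request["amount_cents"], request["region"])
        # ...and the legacy result back into a service-level response.
        return {"amount_cents": request["amount_cents"], "tax_cents": tax}

service = TaxService()
print(service.quote({"amount_cents": 1000, "region": "EU"}))
# {'amount_cents': 1000, 'tax_cents': 200}
```

The point of the pattern is that many clients depend only on the `quote` message contract, so the legacy internals can later be replaced without breaking them.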

104 citations

Proceedings ArticleDOI
29 Mar 1996
TL;DR: An initial conceptual framework for the classification of reverse engineering tools and techniques that aid program understanding is described and a descriptive model is presented that categorizes important support mechanism features based on a hierarchy of attributes.
Abstract: The paper describes an initial conceptual framework for the classification of reverse engineering tools and techniques that aid program understanding. It is based on a description of the canonical activities that are characteristic of the reverse engineering process. A descriptive model is presented that categorizes important support mechanism features based on a hierarchy of attributes.

102 citations


Cited by
Book
28 Feb 2002
TL;DR: The authors present an ontology learning framework that extends typical ontology engineering environments by using semiautomatic ontology construction tools and encompasses ontology import, extraction, pruning, refinement and evaluation.
Abstract: The Semantic Web relies heavily on formal ontologies to structure data for comprehensive and transportable machine understanding. Thus, the proliferation of ontologies factors largely in the Semantic Web's success. The authors present an ontology learning framework that extends typical ontology engineering environments by using semiautomatic ontology construction tools. The framework encompasses ontology import, extraction, pruning, refinement and evaluation.

2,061 citations

Journal Article DOI
TL;DR: A taxonomy of research in self-adaptive software is presented, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area.
Abstract: Software systems dealing with distributed applications in changing environments normally require human supervision to continue operation in all conditions. These (re-)configuring, troubleshooting, and in general maintenance tasks lead to costly and time-consuming procedures during the operating phase. These problems are primarily due to the open-loop structure often followed in software development. Therefore, there is a high demand for management complexity reduction, management automation, robustness, and achieving all of the desired quality requirements within a reasonable cost and time range during operation. Self-adaptive software is a response to these demands; it is a closed-loop system with a feedback loop aiming to adjust itself to changes during its operation. These changes may stem from the software system's self (internal causes, e.g., failure) or context (external events, e.g., increasing requests from users). Such a system is required to monitor itself and its context, detect significant changes, decide how to react, and act to execute such decisions. These processes depend on adaptation properties (called self-* properties), domain characteristics (context information or models), and preferences of stakeholders. Noting these requirements, it is widely believed that new models and frameworks are needed to design self-adaptive software. This survey article presents a taxonomy, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area. Moreover, as adaptive systems are encountered in many disciplines, it is imperative to learn from the theories and models developed in these other areas. This survey article presents a landscape of research in self-adaptive software by highlighting relevant disciplines and some prominent research projects. This landscape helps to identify the underlying research gaps and elaborates on the corresponding challenges.
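The monitor / detect / decide / act cycle the survey describes can be illustrated with a minimal closed-loop sketch. This is not from the paper; the scenario (scaling replicas to keep per-replica load under a bound) and every name in it are hypothetical.

```python
import math

# Illustrative sketch of a closed-loop self-adaptive cycle in the
# monitor -> detect -> decide -> act style described in the survey.

class SelfAdaptiveSystem:
    def __init__(self, max_load):
        self.max_load = max_load   # adaptation goal: per-replica load bound
        self.replicas = 1          # managed resource

    def monitor(self, context):
        # Observe the system's self and its context (request rate).
        return context["requests_per_sec"] / self.replicas

    def detect(self, load):
        # A significant change: the load violates the goal.
        return load > self.max_load

    def decide(self, load):
        # Choose a reaction: enough replicas to restore the goal.
        return math.ceil(load * self.replicas / self.max_load)

    def act(self, new_replicas):
        self.replicas = new_replicas

    def step(self, context):
        load = self.monitor(context)
        if self.detect(load):
            self.act(self.decide(load))
        return self.replicas

system = SelfAdaptiveSystem(max_load=100)
print(system.step({"requests_per_sec": 350}))  # scales up to 4 replicas
```

The closed-loop structure is the key contrast with the open-loop development the abstract criticizes: the system itself, not a human operator, closes the feedback loop.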

1,349 citations

01 Jan 1998
TL;DR: A topic outline covering abstract data types, sorting and searching, parallel and distributed algorithms, and computer architecture.
Abstract: Abstract data types; sorting and searching; parallel and distributed algorithms; [AR] Computer Architecture

833 citations

Journal Article DOI
TL;DR: DECOR, a method that embodies and defines all the steps necessary for the specification and detection of code and design smells, is proposed, together with DETEX, a detection technique that instantiates this method, and an empirical validation of DETEX in terms of precision and recall.
Abstract: Code and design smells are poor solutions to recurring implementation and design problems. They may hinder the evolution of a system by making it hard for software engineers to carry out changes. We propose three contributions to the research field related to code and design smells: (1) DECOR, a method that embodies and defines all the steps necessary for the specification and detection of code and design smells, (2) DETEX, a detection technique that instantiates this method, and (3) an empirical validation in terms of precision and recall of DETEX. The originality of DETEX stems from the ability for software engineers to specify smells at a high level of abstraction using a consistent vocabulary and domain-specific language for automatically generating detection algorithms. Using DETEX, we specify four well-known design smells: the antipatterns Blob, Functional Decomposition, Spaghetti Code, and Swiss Army Knife, and their 15 underlying code smells, and we automatically generate their detection algorithms. We apply and validate the detection algorithms in terms of precision and recall on XERCES v2.7.0, and discuss the precision of these algorithms on 11 open-source systems.
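The core idea the abstract describes, specifying a smell declaratively at a high level of abstraction and deriving a detection check from that specification, can be sketched as below. This is a simplified illustration of the general rule-based approach, not the actual DECOR vocabulary, DSL, or the real DETEX thresholds; the metric names and cutoffs are invented.

```python
# Hypothetical sketch of rule-based smell detection: a smell is
# specified as a set of metric predicates, and one generic engine
# evaluates any such specification against class-level metrics.

BLOB_RULE = {                        # simplified "Blob" specification
    "methods":  lambda n: n > 50,    # very large class
    "coupling": lambda c: c > 20,    # talks to many other classes
    "cohesion": lambda h: h < 0.2,   # low internal cohesion
}

def detect(rule, cls_metrics):
    # A class is flagged when every metric predicate in the rule holds.
    return all(pred(cls_metrics[m]) for m, pred in rule.items())

god_class = {"methods": 120, "coupling": 35, "cohesion": 0.05}
helper    = {"methods": 8,   "coupling": 2,  "cohesion": 0.7}

print(detect(BLOB_RULE, god_class))  # True
print(detect(BLOB_RULE, helper))     # False
```

Keeping the specification separate from the engine is what lets engineers add new smells (Functional Decomposition, Spaghetti Code, and so on) without rewriting the detector, which is the property the paper attributes to DETEX's generated algorithms.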

710 citations

Journal Article DOI
TL;DR: In this paper, the authors examine the role of institutional, social, and political factors in influencing the extent to which complex information technologies are actually assimilated into organizational practice; the empirical evidence sheds light on the institutional forces that influence the rate of assimilation of the technology.
Abstract: The ability to integrate dispersed pockets of expertise and institute an organizational repository of knowledge is considered to be vital for sustained effectiveness in contemporary business environments. Information technologies provide cost-effective functionalities for building knowledge platforms through systematic acquisition, storage, and dissemination of organizational knowledge. However, in order to gain the value-adding potential of organizational knowledge, it is not sufficient to simply adopt and deploy IT-enabled knowledge platforms. These platforms must be assimilated into the ongoing work processes in organizations. Yet, theories of technology innovation and use suggest that a variety of institutional, social, and political factors blend together in influencing the extent to which complex information technologies are actually assimilated into organizational practice. Therefore, this research addresses a significant question: What forces influence the assimilation of knowledge platforms in organizations? Given the significant gap between the adoption and actual assimilation of complex technologies into organizations, this is an important question. Empirical evidence is generated by examining the forces influencing the assimilation of CASE technologies in systems development projects in organizations. CASE is considered to be one of the most mature knowledge platforms in contemporary organizations. The empirical evidence sheds light on the role of institutional forces that influence the rate of assimilation of the technology. The findings have significant implications for further research and practice.

647 citations