Showing papers on "Domain (software engineering) published in 1995"


Journal ArticleDOI
Gerald Tesauro1
TL;DR: TD-GAMMON is a neural network that trains itself to be an evaluation function for the game of backgammon by playing against itself and learning from the outcome.
Abstract: Ever since the days of Shannon's proposal for a chess-playing algorithm [12] and Samuel's checkers-learning program [10], the domain of complex board games such as Go, chess, checkers, Othello, and backgammon has been widely regarded as an ideal testing ground for exploring a variety of concepts and approaches in artificial intelligence and machine learning. Such board games offer the challenge of tremendous complexity and sophistication required to play at expert level. At the same time, the problem inputs and performance measures are clear-cut and well defined, and the game environment is readily automated in that it is easy to simulate the board, the rules of legal play, and the rules regarding when the game is over and determining the outcome.
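
TD-Gammon's training scheme is temporal-difference learning over self-play games. The sketch below shows the general shape of that idea for a board evaluation function; it is a minimal illustration only, assuming a linear evaluator and a TD(0)-style update rather than the multilayer network and TD(lambda) of the actual program, and every callback name (initial_state, features, legal_moves, apply_move, game_over, outcome) is an invented placeholder.

```python
# Minimal sketch of temporal-difference self-play training for a board
# evaluation function, in the spirit of TD-Gammon. The real system used a
# multilayer neural network and TD(lambda); here a linear evaluator and a
# TD(0)-style update keep the idea visible. All callbacks are placeholders
# supplied by the caller, not the original implementation.

def td_selfplay_episode(weights, initial_state, features, legal_moves,
                        apply_move, game_over, outcome, alpha=0.1):
    """Play one self-play game, nudging the evaluation weights toward the
    value of the successor position (or the final outcome)."""
    def value(s):
        return sum(w * f for w, f in zip(weights, features(s)))

    state = initial_state()
    while not game_over(state):
        # Greedy move selection using the current evaluation function.
        move = max(legal_moves(state), key=lambda m: value(apply_move(state, m)))
        nxt = apply_move(state, move)
        target = outcome(nxt) if game_over(nxt) else value(nxt)
        error = target - value(state)
        feats = features(state)
        for i, f in enumerate(feats):
            weights[i] += alpha * error * f   # gradient step for a linear evaluator
        state = nxt
    return weights
```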

751 citations


Book
Michael Jackson1
01 Sep 1995
TL;DR: In this book, Jackson surveys the concerns of software development in dictionary form, ranging over the application domain, descriptions, problem frames, requirements, specifications, and software engineering method.
Abstract: Foreword by Jon Bentley. Preface. Acknowledgements. Introduction. A. Ambiguity. The Application Domain. Arboricide. Aspects of Software Development. B. The Bridges of Konigsberg. Brilliance. C. The Calendar. Classification. Composition. Connection Domains. Connection Frames. Context Diagrams. Critical Reading. D. Dataflow Diagrams--1. Dataflow Diagrams--2. Definitions. Dekker. Descriptions. Designations. Deskilling. Domain Characteristics. Domain Interactions. Domains. E. Entity-Relation Span. Events and Intervals. Existence. F. Frame Diagrams. The Fudge Factor. G. Graphic Notations. H. Hierarchical Structure. I. Identity. Implementation Bias. Individuals. Informal Domains. Is. J. JSP Frame. L. Logical Positivism. M. Machines. Mathematics. Method. Misfits. Models. Mood. A Multi-Frame Problem--1. A Multi-Frame Problem--2. N. The Narrow Bridge. O. Object-Oriented Analysis. P. The Package Router Problem. Partial Descriptions. Phenomena. Poetry. Polya. Predicate Logic. Problem Complexity. The Problem Context. Problem Frames. Problem Sensitivity. Procrustes. R. Raw Materials. Refutable Descriptions. Reification. Requirements. Restaurants. Rough Sketches. S. Scope of Description. Shared Phenomena. Simple Control Frame. Simple IS Frame. Software. Software Engineering. Span of Description. Specifications. T. Top-Down. Tree Diagrams. Trip-lets. W. What and How. Bibliography. Index.

583 citations


Journal ArticleDOI
TL;DR: The Simple Good–Turing estimator is defined, which is straightforward to use and performs well, absolutely and relative both to the approaches just discussed and to other, more sophisticated techniques.
Abstract: Linguists and speech researchers who use statistical methods often need to estimate the frequency of some type of item in a population containing items of various types. A common approach is to divide the number of cases observed in a sample by the size of the sample; sometimes small positive quantities are added to divisor and dividend in order to avoid zero estimates for types missing from the sample. These approaches are obvious and simple, but they lack principled justification, and yield estimates that can be wildly inaccurate. I.J. Good and Alan Turing developed a family of theoretically well-founded techniques appropriate to this domain. Some versions of the Good–Turing approach are very demanding computationally, but we define a version, the Simple Good–Turing estimator, which is straightforward to use. Tested on a variety of natural-language-related data sets, the Simple Good–Turing estimator performs well, absolutely and relative both to the approaches just discussed and to other, more sophisticated techniques.
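
The estimators discussed here are easy to state in code. Below is a small sketch of the naive relative-frequency estimate, an additive-smoothing variant ("small positive quantities added to divisor and dividend"), and the basic Good–Turing adjusted count r* = (r+1) N_{r+1} / N_r; the extra step that defines the Simple Good–Turing estimator, smoothing the frequencies of frequencies N_r with a log-log linear fit before applying the formula, is only noted in a comment and not implemented.

```python
# Sketch of the estimators discussed above: the naive relative-frequency
# estimate, an additive-smoothing variant, and the basic Good-Turing
# adjusted count r* = (r + 1) * N_{r+1} / N_r, where N_r is the number of
# types observed exactly r times. The Simple Good-Turing estimator
# additionally smooths N_r (via a log-log linear fit) before applying this
# formula; that refinement is omitted here.
from collections import Counter

def relative_frequency(count, sample_size):
    return count / sample_size

def additive_smoothing(count, sample_size, vocab_size, k=0.5):
    # "small positive quantities added to divisor and dividend"
    return (count + k) / (sample_size + k * vocab_size)

def good_turing_adjusted_counts(sample):
    counts = Counter(sample)                     # type -> r
    freq_of_freq = Counter(counts.values())      # r -> N_r
    adjusted = {}
    for item, r in counts.items():
        if freq_of_freq.get(r + 1):
            adjusted[item] = (r + 1) * freq_of_freq[r + 1] / freq_of_freq[r]
        else:
            adjusted[item] = r                   # no N_{r+1}: fall back (SGT smooths instead)
    return adjusted

sample = "a a b b c c d e f g".split()
print(good_turing_adjusted_counts(sample))
# singletons get adjusted count 2 * N_2 / N_1 = 1.5; counts with no N_{r+1} fall back to r
# (probability mass reserved for unseen types is N_1 divided by the total token count)
```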

317 citations


Patent
12 Jun 1995
TL;DR: In this paper, a method for establishing and maintaining virtual network domains in a segmented computer network having a first domain and a second domain is presented, where a packet having the first endstation as a source is received by the first port of the first switching fabric circuit, and a destination for the packet is determined.
Abstract: A method for establishing and maintaining virtual network domains in a segmented computer network having a first domain and a second domain. A first table entry for a first endstation in a first forwarding table of a first switching fabric circuit is created. The first table entry includes domain information specifying that the first endstation is in the first domain and port information specifying that the first endstation is coupled to a first port. A packet having the first endstation as a source is received by the first port of the first switching fabric circuit, and a destination for the packet is determined. If the packet specifies a second endstation of the first domain as the destination, the packet is forwarded to the second endstation. If the destination for the packet specifies more than one endstation, the domain of the source of the packet is determined, and the packet is forwarded to the specified endstations of the first domain. For a second embodiment, source and destination information are compared to determine forwarding information for a packet, and the packet is forwarded as specified by the forwarding information. For a third embodiment, intelligent selection between multiple paths to the same endstation is provided by the comparison of source and destination forwarding information.
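
The first embodiment's forwarding behaviour can be paraphrased as a small table-lookup procedure: unicast packets are delivered within the source's domain, and multi-destination packets are delivered only to endstations of that domain. The sketch below is an illustrative paraphrase, not the patented switching-fabric implementation; all structure and field names are hypothetical.

```python
# Illustrative paraphrase of the first embodiment's forwarding logic:
# a forwarding table maps an endstation to (domain, port); unicast packets
# are delivered to a destination in the source's domain, and packets with
# more than one destination are delivered only within that domain.
# All names are hypothetical, not taken from the patent.
from dataclasses import dataclass

@dataclass
class TableEntry:
    domain: str
    port: int

def forward(packet, table):
    """packet: dict with 'src' and 'dst' ('dst' is an endstation id or 'ALL')."""
    src_entry = table.get(packet["src"])
    if src_entry is None:
        return []                       # unknown source: nothing to forward (illustrative choice)
    if packet["dst"] != "ALL":
        dst_entry = table.get(packet["dst"])
        if dst_entry and dst_entry.domain == src_entry.domain:
            return [dst_entry.port]     # forward within the virtual domain
        return []                       # cross-domain handling is not modeled here
    # Multi-destination: deliver only to endstations of the source's domain.
    return [e.port for station, e in table.items()
            if station != packet["src"] and e.domain == src_entry.domain]

table = {"A": TableEntry("eng", 1), "B": TableEntry("eng", 2), "C": TableEntry("sales", 3)}
assert forward({"src": "A", "dst": "ALL"}, table) == [2]
```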

279 citations


Proceedings Article
24 May 1995
TL;DR: RESOLVE, a system that uses decision trees to learn how to classify coreferent phrases in the domain of business joint ventures, is presented; the results show that the decision trees achieve higher performance than a manually engineered set of rules in two of three evaluation metrics developed for the coreference task.
Abstract: This paper describes RESOLVE, a system that uses decision trees to learn how to classify coreferent phrases in the domain of business joint ventures. An experiment is presented in which the performance of RESOLVE is compared to the performance of a manually engineered set of rules for the same task. The results show that decision trees achieve higher performance than the rules in two of three evaluation metrics developed for the coreference task. In addition to achieving better performance than the rules, RESOLVE provides a framework that facilitates the exploration of the types of knowledge that are useful for solving the coreference problem.
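
The classification step RESOLVE performs can be pictured as: turn a candidate pair of phrases into a feature vector, then ask a learned decision tree whether the pair corefers. The sketch below uses scikit-learn's DecisionTreeClassifier with invented features and training pairs; RESOLVE's actual feature set and its learner are not reproduced here.

```python
# Sketch of decision-tree coreference classification in the spirit of
# RESOLVE: each candidate pair of phrases becomes a feature vector and the
# tree predicts coreferent (1) or not (0). The features and training pairs
# below are invented for illustration; RESOLVE's business joint-venture
# feature set is not shown.
from sklearn.tree import DecisionTreeClassifier

def pair_features(phrase_a, phrase_b):
    return [
        int(phrase_a["head"] == phrase_b["head"]),              # same head noun?
        int(phrase_a["is_company"] and phrase_b["is_company"]), # both company names?
        int(phrase_a["sentence"] == phrase_b["sentence"]),      # same sentence?
    ]

# Tiny invented training set: feature vectors with 1 = coreferent.
X = [[1, 1, 0], [0, 1, 1], [0, 0, 0], [1, 1, 1]]
y = [1, 0, 0, 1]

clf = DecisionTreeClassifier().fit(X, y)

a = {"head": "venture", "is_company": True, "sentence": 3}
b = {"head": "venture", "is_company": True, "sentence": 5}
print(clf.predict([pair_features(a, b)]))   # e.g. [1] -> predicted coreferent
```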

208 citations



Journal ArticleDOI
TL;DR: This work compares configuration of the board-game method to that of a chronological-backtracking problem-solving method for the same application tasks (for example, towers of Hanoi and the Sisyphus room-assignment problem), and examines how method designers can specialize problem-solving methods by making ontological commitments to certain classes of tasks.

179 citations


Proceedings Article
01 Dec 1995
TL;DR: In this article, the authors used Perspective Based Reading (PBR) to provide operation scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user) in order to provide better coverage of the document than the same number of readers using their usual technique.
Abstract: We consider reading techniques a fundamental means of achieving high-quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operation scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique. To test the efficacy of PBR, we conducted two runs of a controlled experiment in the environment of the NASA GSFC Software Engineering Laboratory (SEL), using developers from that environment. The subjects read two types of documents, one generic in nature and the other from the NASA domain, using two reading techniques: PBR and their usual technique. The results from these experiments, as well as the experimental design, are presented and analyzed. When there is a statistically significant distinction, PBR performs better than the subjects' usual technique. However, PBR appears to be more effective on the generic documents than on the NASA documents.

177 citations


Journal ArticleDOI
TL;DR: Agentsheets, as discussed by the authors, is a tool for creating domain-oriented visual programming languages; its support for collaborative design is illustrated through experiences from a real language design project.
Abstract: Customized visual representations enable end users to achieve their programming goals. Here, designers work with users to tailor visual programming languages to specific problem domains. We describe a design methodology and a tool for creating domain oriented, end user programming languages that effectively use visualization. We first describe a collaborative design methodology involving end users and designers. We then present Agentsheets, a tool for creating domain oriented visual programming languages, and illustrate how it supports collaborative design by examining experiences from a real language design project. Finally, we summarize the contributions of our approach and discuss its viability in industrial design projects.

172 citations


Journal ArticleDOI
TL;DR: This paper shows how PROTEGE-II can be applied to the task of providing protocol-based decision support in the domain of treating HIV-infected patients, and shows that the goals of reusability and easy maintenance can be achieved.

170 citations


Book ChapterDOI
01 Jan 1995
TL;DR: Empirical results show that the system learns operators in this domain well enough to solve problems as effectively as human-expert coded operators.
Abstract: This paper describes an approach to automatically learn planning operators by observing expert solution traces and to further refine the operators through practice in a learning-by-doing paradigm. This approach uses the knowledge naturally observable when experts solve problems, without need of explicit instruction or interrogation. The inputs to our learning system are: the description language for the domain, experts' problem solving traces, and practice problems to allow learning-by-doing operator refinement. Given these inputs, our system automatically acquires the preconditions and effects (including conditional effects and preconditions) of the operators. We present empirical results to demonstrate the validity of our approach in the process planning domain. These results show that the system learns operators in this domain well enough to solve problems as effectively as human-expert coded operators. Our approach differs from knowledge acquisition tools in that it does not require a considerable amount of direct interactions with domain experts. It differs from other work on automatically learning operators in that it does not require initial approximate planning operators or strong background knowledge.
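
A minimal way to picture the acquisition step: from observed (state before, action, state after) triples in the expert traces, conjecture each operator's preconditions as the facts present in every pre-state and its effects as the facts that changed. The sketch below implements only that naive propositional induction; the paper's description language, conditional effects, and learning-by-doing refinement are not modeled, and the example facts are invented.

```python
# Naive sketch of operator learning from observed solution traces:
# preconditions ~ facts present in every pre-state of an action,
# add/delete effects ~ facts gained/lost in every application.
# States are sets of ground facts; variables, conditional effects, and
# refinement through practice are ignored. Example facts are invented.
from collections import defaultdict

def learn_operators(traces):
    """traces: iterable of (pre_state, action_name, post_state) triples."""
    pre, add, dele = defaultdict(list), defaultdict(list), defaultdict(list)
    for before, action, after in traces:
        pre[action].append(set(before))
        add[action].append(set(after) - set(before))
        dele[action].append(set(before) - set(after))
    ops = {}
    for action in pre:
        ops[action] = {
            "preconditions": set.intersection(*pre[action]),
            "add": set.intersection(*add[action]),
            "delete": set.intersection(*dele[action]),
        }
    return ops

traces = [
    ({"clamped", "rough"}, "mill", {"clamped", "smooth"}),
    ({"clamped", "rough", "marked"}, "mill", {"clamped", "smooth", "marked"}),
]
print(learn_operators(traces)["mill"])
# preconditions {'clamped', 'rough'}, add {'smooth'}, delete {'rough'}
```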

Journal ArticleDOI
TL;DR: A Domain-Specific Software Architecture has been defined as an assemblage of software components specialized for a particular type of task, generalized for effective use across that domain, composed in a standardized structure effective for building successful applications.
Abstract: A Domain-Specific Software Architecture (DSSA) has been defined as "an assemblage of software components, specialized for a particular type of task (domain), generalized for effective use across that domain, composed in a standardized structure (topology) effective for building successful applications" [Hay94] or, alternately, as "a context for patterns of problem elements, solution elements, and situations that define mappings between them" [Hid90]. The following small example illustrates these definitions as well as provides the reader with some insight into the types of processes and tools needed to support the creation and use of a DSSA.



01 Jan 1995
TL;DR: PROTEGE-II provides an architecture that offers a divide-and-conquer approach, separating system-building tasks that require skill in domain analysis and modeling from those that require simple entry of content knowledge; it has been used to construct a number of knowledge-based systems.
Abstract: PROTEGE-II is a suite of tools that facilitates the development of intelligent systems. A tool called MAiTRE allows system builders to create and refine abstract models (ontologies) of application domains. A tool called DASH takes as input a modified domain ontology and automatically generates a knowledge-acquisition tool that application specialists can use to enter the detailed content knowledge required to define particular applications. The domain-dependent knowledge entered into the knowledge-acquisition tool is used by assemblies of domain-independent problem-solving methods that provide the computational strategies required to solve particular application tasks. The result is an architecture that offers a divide-and-conquer approach that separates system-building tasks that require skill in domain analysis and modeling from those that require simple entry of content knowledge. At the same time, applications can be constructed from libraries of components--of both domain ontologies and domain-independent problem-solving methods--allowing the reuse of knowledge and facilitating ongoing system maintenance. We have used PROTEGE-II to construct a number of knowledge-based systems, including the reasoning components of T-Helper, which assists physicians in the protocol-based care of patients who have HIV infection.
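
The divide-and-conquer split (an ontology authored by a knowledge engineer, a knowledge-acquisition form generated from it, content filled in by an application specialist) can be illustrated with a toy generator. The sketch below is entirely invented and bears no relation to the real MAiTRE or DASH implementations; class, slot, and value names are placeholders.

```python
# Toy illustration of the PROTEGE-II style division of labour: a domain
# ontology (classes and slots) is defined once, a knowledge-acquisition
# "form" is generated from it automatically, and a specialist only fills
# in content values. Entirely invented; not the DASH generator.
ontology = {
    "DrugProtocol": {
        "drug": "string",
        "dose_mg": "number",
        "duration_days": "number",
    }
}

def generate_ka_form(ontology, class_name):
    """Turn an ontology class into a fill-in form (slot -> expected type)."""
    return {slot: {"type": typ, "value": None}
            for slot, typ in ontology[class_name].items()}

def fill(form, **content):
    """The application specialist's step: enter content knowledge only."""
    for slot, value in content.items():
        form[slot]["value"] = value
    return form

form = generate_ka_form(ontology, "DrugProtocol")
instance = fill(form, drug="AZT", dose_mg=300, duration_days=30)
print(instance)
```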


Proceedings ArticleDOI
20 Apr 1995
TL;DR: The TOVE Quality Ontology is the formal representation (using first-order logic) of terms, relationships, and axioms about quality which are generic beyond any specific quality domain.
Abstract: The TOVE Quality Ontology is the formal representation (using first-order logic) of terms, relationships, and axioms about quality which are generic beyond any specific quality domain. The assumption that quality is "conformance to requirements" is used to decompose the domain of quality into sub-domains of measurement, quality analysis, identification, and traceability. An ontological engineering methodology of posing ontological scope, stating competency questions, constructing data models and axioms, and visualization of the answering of competency questions is demonstrated with an example from the engineering of the traceability ontology.
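
To make "competency question plus axiom" concrete, here is a purely hypothetical fragment in the style the abstract describes, written in first-order logic. The predicates are invented for illustration and are not taken from the TOVE formalization.

```latex
% Hypothetical competency question: "to which raw-material batches can a
% given product unit x be traced?"  Illustrative axioms only (invented
% predicates, not the TOVE traceability ontology):
\forall x\,\forall y\;\bigl(\mathit{derived\_from}(x,y)\rightarrow\mathit{traceable\_to}(x,y)\bigr)
\qquad
\forall x\,\forall y\,\forall z\;\bigl(\mathit{traceable\_to}(x,y)\wedge\mathit{traceable\_to}(y,z)\rightarrow\mathit{traceable\_to}(x,z)\bigr)
```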

Patent
01 Dec 1995
TL;DR: In this article, a data processing system that includes a data store, means for archiving files within the data store and a graphical user interface is disclosed that uses a novel query system.
Abstract: A data processing system that includes a data store, means for archiving files within the data store, and a graphical user interface is disclosed that uses a novel query system. The query system includes a domain scope control field, a narrowing search control funnel, a specific item search field, and a broadening search control funnel. The domain scope control field allows a user to perform a hierarchical search within a plurality of topics available in the domain control field. The search query generates a search cell. The narrowing search control allows a user to narrow the scope of the search cell. The specific item search field allows a user to identify specific key words to be searched within the search cell. The broadening search control allows a user to broaden the scope of the search cell.

Proceedings Article
20 Aug 1995
TL;DR: This work describes a goal recognition module that is provably sound and polynomial-time and that performs well in a real domain, and reports on experiments on human subjects in the Unix domain that demonstrate the algorithm to be fast in practice.
Abstract: The bulk of previous work on goal and plan recognition may be crudely stereotyped in one of two ways: "Neat" theories -- rigorous, justified, but not yet practical; "Scruffy" systems -- heuristic, domain specific, but practical. In contrast, we describe a goal recognition module that is provably sound and polynomial-time and that performs well in a real domain. Our goal recognizer observes actions executed by a human, and repeatedly prunes inconsistent actions and goals from a graph representation of the domain. We report on experiments on human subjects in the Unix domain that demonstrate our algorithm to be fast in practice. The average time to process an observed action with an initial set of 249 goal schemas and 22 action schemas was 14 cpu seconds on a SPARC-10.
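
The observe-and-prune loop described above can be sketched directly: keep a candidate set of goal schemas and, after each observed action, drop every goal that is no longer consistent with the observation history. In the sketch below the consistency test is a caller-supplied placeholder standing in for the paper's graph-based test, and the Unix-flavoured example is invented.

```python
# Sketch of observe-and-prune goal recognition: keep a candidate set of
# goal schemas and drop every goal that the observed action sequence is
# inconsistent with. 'consistent' is a caller-supplied predicate standing
# in for the paper's graph-based consistency test.

def recognize(goal_schemas, observations, consistent):
    """Return the goals still consistent after the observed actions."""
    candidates = set(goal_schemas)
    history = []
    for action in observations:
        history.append(action)
        candidates = {g for g in candidates if consistent(g, history)}
        if len(candidates) <= 1:
            break                       # goal identified (or nothing fits)
    return candidates

# Toy invented example: goals are target files, actions are Unix-like
# commands, and a goal stays consistent unless the history contradicts it.
goals = {"find paper.tex", "find budget.xls"}
obs = ["cd ~/docs", "grep -l tex *", "ls *.tex"]
consistent = lambda g, hist: not (g.endswith(".xls") and any(".tex" in a for a in hist))
print(recognize(goals, obs, consistent))   # {'find paper.tex'}
```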

Patent
30 May 1995
TL;DR: A hypertext editor captures general textual descriptions of various aspects of applications in a domain, and a scenario editor lists high-level steps and a description of those steps to be performed by an application program in the domain.
Abstract: A hypertext editor captures general textual descriptions of various aspects of applications in a domain. A reference requirements editor lists (optional and required) input and output parameters for program functions in the domain. A "thing" editor lists services provided and required by "things" in the domain. A scenario editor lists high-level steps and a description of those steps to be performed by an application program in the domain. A domain dictionary records definitions of key terms and phrases used in the lists and descriptions of the reference requirements, "thing", and scenario editors.

Journal ArticleDOI
TL;DR: This paper maintains that there are benefits to extending the scope of student models to include additional information as part of the explicit student model, and describes the student model of an intelligent computer assisted language learning system which was based on research findings on the above five topics.
Abstract: In this paper we maintain that there are benefits to extending the scope of student models to include additional information as part of the explicit student model. We illustrate our argument by describing a student model which focuses on 1. performance in the domain; 2. acquisition order of the target knowledge; 3. analogy; 4. learning strategies; 5. awareness and reflection. The first four of these issues are explicitly represented in the student model. Awareness and reflection should occur as the student model is transparent; it is used to promote learner reflection by encouraging the learner to view, and even negotiate changes to the model. Although the architecture is transferable across domains, each instantiation of the student model will necessarily be domain specific due to the importance of factors such as the relevant background knowledge for analogy, and typical progress through the target material. As an example of this approach we describe the student model of an intelligent computer assisted language learning system which was based on research findings on the above five topics in the field of second language acquisition. Throughout we address the issue of the generality of this model, with particular reference to the possibility of a similar architecture reflecting comparable issues in the domain of learning about electrical circuits.

Proceedings ArticleDOI
02 Dec 1995
TL;DR: The advantages of using domain knowledge within the discovery process are highlighted by providing results from the application of the STRIP algorithm in the actuarial domain.
Abstract: The ideal situation for a Data Mining or Knowledge Discovery system would be for the user to be able to pose a query of the form “Give me something interesting that could be useful” and for the system to discover some useful knowledge for the user. But such a system would be unrealistic as databases in the real world are very large and so it would be too inefficient to be workable. So the role of the human within the discovery process is essential. Moreover, the measure of what is meant by “interesting to the user” is dependent on the user as well as the domain within which the Data Mining system is being used. In this paper we discuss the use of domain knowledge within Data Mining. We define three classes of domain knowledge: Hierarchical Generalization Trees (HG-Trees), Attribute Relationship Rules (AR-rules) and Environment-Based Constraints (EBC). We discuss how each one of these types of domain knowledge is incorporated into the discovery process within the EDM (Evidential Data Mining) framework for Data Mining proposed earlier by the authors [ANAN94], and in particular within the STRIP (Strong Rule Induction in Parallel) algorithm [ANAN95] implemented within the EDM framework. We highlight the advantages of using domain knowledge within the discovery process by providing results from the application of the STRIP algorithm in the actuarial domain.
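
Of the three classes of domain knowledge, Hierarchical Generalization Trees are the easiest to show concretely: raw attribute values are generalized to ancestors in a concept hierarchy before rules are induced. The sketch below is a generic illustration with an invented, insurance-flavoured hierarchy; it is not the EDM or STRIP implementation.

```python
# Generic illustration of a Hierarchical Generalization Tree (HG-Tree):
# raw attribute values are replaced by an ancestor in a concept hierarchy
# before rule induction. The hierarchy and records are invented; this is
# not the EDM/STRIP implementation.
hg_tree = {                                    # child -> parent
    "whiplash": "bodily injury",
    "fracture": "bodily injury",
    "windscreen": "vehicle damage",
    "bodily injury": "claim cause",
    "vehicle damage": "claim cause",
}

def generalize(value, tree, levels=1):
    """Walk 'levels' steps up the HG-Tree from a raw attribute value."""
    for _ in range(levels):
        value = tree.get(value, value)
    return value

records = [{"cause": "whiplash"}, {"cause": "fracture"}, {"cause": "windscreen"}]
print([{**r, "cause": generalize(r["cause"], hg_tree)} for r in records])
# [{'cause': 'bodily injury'}, {'cause': 'bodily injury'}, {'cause': 'vehicle damage'}]
```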

Journal ArticleDOI
TL;DR: Pareto optimality is a domain-independent property that can be used to coordinate distributed engineering agents within a model of design called Redux, which allows existing software tools to communicate with each other and a Redux agent over the Internet.
Abstract: Pareto optimality is a domain-independent property that can be used to coordinate distributed engineering agents. Within a model of design called Redux, some aspects of dependency-directed backtracking can be interpreted as tracking Pareto optimality. These concepts are implemented in a framework, called Next-Link, that coordinates legacy engineering systems. This framework allows existing software tools to communicate with each other and a Redux agent over the Internet. The functionality is illustrated with examples from the domain of electrical cable harness design.
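
Pareto optimality itself is domain independent and compact to state in code: an alternative is Pareto-optimal if no other alternative is at least as good on every criterion and strictly better on at least one. The helper below is a generic statement of that definition (assuming higher scores are better); it is not part of Redux or Next-Link, and the example design names are invented.

```python
# Generic Pareto-optimality check (higher scores are better on every
# criterion); a domain-independent property, as the abstract notes.
# Not Redux or Next-Link code; example names are invented.

def dominates(a, b):
    """a dominates b: at least as good everywhere, strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(alternatives):
    """alternatives: dict of name -> tuple of criterion scores."""
    return {name for name, scores in alternatives.items()
            if not any(dominates(other, scores)
                       for other_name, other in alternatives.items()
                       if other_name != name)}

designs = {"harness_A": (3, 7), "harness_B": (5, 5), "harness_C": (2, 4)}
print(pareto_front(designs))   # {'harness_A', 'harness_B'} (C is dominated)
```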

Patent
John Collins1, Elizabeth M Sisley1
06 Jun 1995
TL;DR: In this paper, a system that integrates active and simulated decision-making processes generates decisions in response to events representing changes in a domain model, and updates the domain model according to the decisions.
Abstract: A system that integrates active and simulated decisionmaking processes generates decisions in response to events representing changes in a domain model, and updates the domain model according to the decisions. The system includes a real-time mode for generating recommendations in response to real-time events, and a simulation mode for generating recommendations in response to simulated events. The simulation mode is capable of running on either randomly generated domain events or real-time domain events captured during the real-time mode. In addition, the simulation mode does not require development of a separate domain model for simulation. Rather, the simulation mode may use the contents of a domain model established during the real-time mode. Integration of an active decisionmaking tool with a simulation tool thereby eliminates the cost of constructing a separate simulation model, and avoids invalidation of the contents of the simulation model over time.

Book ChapterDOI
01 Jan 1995
TL;DR: The notion of Relative Importance of Criteria (RIC), central in the domain of Multiple Criteria Decision Aid (MCDA), aims at differentiating the role of each criterion in the construction of comprehensive preferences, thus allowing discrimination among Pareto-optimal alternatives.
Abstract: The notion of Relative Importance of Criteria (RIC) is central in the domain of Multiple Criteria Decision Aid (MCDA). It aims at differentiating the role of each criterion in the construction of comprehensive preferences, thus allowing discrimination among Pareto-optimal alternatives. In most aggregation procedures, this notion takes the form of importance parameters.
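
In the simplest aggregation procedure, a weighted sum, the importance parameters appear directly as weights, and different weight vectors single out different Pareto-optimal alternatives. The snippet below is a generic MCDA illustration with invented alternatives and weights, not a construction taken from the chapter.

```python
# Generic illustration of importance parameters in a weighted-sum
# aggregation: the same Pareto-optimal alternatives rank differently under
# different weight vectors. Alternatives and weights are invented.
def weighted_value(scores, weights):
    return sum(w * g for w, g in zip(weights, scores))

alternatives = {"a1": (9, 2), "a2": (5, 5), "a3": (2, 9)}   # none dominates another

for weights in [(0.8, 0.2), (0.2, 0.8)]:
    best = max(alternatives, key=lambda a: weighted_value(alternatives[a], weights))
    print(weights, "->", best)
# (0.8, 0.2) -> a1  (criterion 1 judged more important)
# (0.2, 0.8) -> a3
```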

Journal ArticleDOI
TL;DR: An application domain perspective is applied to software reuse by selecting those features required in a target system and then tailoring the domain model, subject to the appropriate feature/object dependency constraints, to generate the target system specification.

Journal ArticleDOI
TL;DR: A prototype system for querying distributed medical multimedia databases by both image content and alphanumeric content is validated; using rules derived from application and domain knowledge, approximate and conceptual queries may be answered.

Journal ArticleDOI
TL;DR: A new abstraction methodology and a related sound and complete learning algorithm are developed that allow the complete change of representation language of planning cases from concrete to abstract.
Abstract: Abstraction is one of the most promising approaches to improve the performance of problem solvers. In several domains abstraction by dropping sentences of a domain description - as used in most hierarchical planners - has proven useful. In this paper we present examples which illustrate significant drawbacks of abstraction by dropping sentences. To overcome these drawbacks, we propose a more general view of abstraction involving the change of representation language. We have developed a new abstraction methodology and a related sound and complete learning algorithm that allows the complete change of representation language of planning cases from concrete to abstract. However, to achieve a powerful change of the representation language, the abstract language itself as well as rules which describe admissible ways of abstracting states must be provided in the domain model. This new abstraction approach is the core of PARIS (Plan Abstraction and Refinement in an Integrated System), a system in which abstract planning cases are automatically learned from given concrete cases. An empirical study in the domain of process planning in mechanical engineering shows significant advantages of the proposed reasoning from abstract cases over classical hierarchical planning.
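
The change of representation language can be pictured as rewriting a concrete state into an abstract vocabulary under explicitly provided abstraction rules, the kind of domain knowledge the abstract says must be supplied. The sketch below is a toy paraphrase with invented, process-planning-flavoured facts; it is not the PARIS learning algorithm.

```python
# Toy paraphrase of state abstraction by changing the representation
# language: abstraction rules map conjunctions of concrete facts to facts
# in a separate abstract vocabulary. Not the PARIS learning algorithm;
# rules and facts are invented.

def abstract_state(concrete_state, rules):
    """rules: list of (required concrete facts, produced abstract fact)."""
    return {abstract_fact
            for required, abstract_fact in rules
            if required <= concrete_state}

rules = [
    (frozenset({"clamped(part)", "aligned(part)"}), "fixed(part)"),
    (frozenset({"rough(part)"}),                    "unfinished(part)"),
    (frozenset({"smooth(part)", "polished(part)"}), "finished(part)"),
]

concrete = {"clamped(part)", "aligned(part)", "rough(part)"}
print(abstract_state(concrete, rules))   # {'fixed(part)', 'unfinished(part)'}
```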

Book ChapterDOI
01 Mar 1995
TL;DR: This paper describes an agent that is directed, not by a conjunction of top-level goals, but by a set of motives; the agent is motivated to create and prioritise different goals at different times as part of an on-going activity under changing circumstances.
Abstract: Goal creation is an important consideration for an agent that is required to behave autonomously in a real-world domain. This paper describes an agent that is directed, not by a conjunction of top level goals, but by a set of motives. The agent is motivated to create and prioritise different goals at different times as a part of an on-going activity under changing circumstances.
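
The contrast with a fixed conjunction of top-level goals can be sketched as a control loop in which motives inspect the current circumstances, generate goals, and re-prioritise them over time. The sketch below only illustrates that control structure; the motives, goals, and world model are invented and do not reflect the paper's architecture.

```python
# Illustrative control loop for a motive-directed agent: each motive looks
# at the current circumstances and may generate goals with a priority; the
# agent pursues the most urgent goal and then reassesses. Motives, goals,
# and the world model are invented; this is not the paper's architecture.

def hunger_motive(world):
    if world["energy"] < 3:
        return [("find food", 10 - world["energy"])]
    return []

def curiosity_motive(world):
    if world["unexplored"]:
        return [("explore " + world["unexplored"][0], 2)]
    return []

def step(world, motives):
    goals = [g for m in motives for g in m(world)]      # goal generation
    if not goals:
        return None
    goal, _ = max(goals, key=lambda g: g[1])            # prioritisation
    return goal

world = {"energy": 2, "unexplored": ["cave"]}
print(step(world, [hunger_motive, curiosity_motive]))   # 'find food' (priority 8 beats 2)
```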