
Showing papers presented at "Conference on Scientific Computing in 1986"


Proceedings ArticleDOI
01 Feb 1986
TL;DR: The Terregator, short for Terrestrial Navigator, is a mobile robot capable of operating in the real world outdoors, equipped with a sonar ring, a color camera, and the ERIM laser range finder.
Abstract: 1. Introduction. This paper provides an overview of the Autonomous Land Vehicle (ALV) Project at CMU. The goal of the CMU ALV Project is to build vision and intelligence for a mobile robot capable of operating in the real world outdoors. We are attacking this on a number of fronts: building appropriate research vehicles, exploiting high-speed experimental computers, and building software for reasoning about the perceived world. Research topics include: • Construction of research vehicles • Perception systems to perceive natural outdoor scenes by means of multiple sensors, including cameras (color, stereo, and motion), sonar sensors, and a 3D range finder • Path planning for obstacle avoidance • Use of topological and terrain maps • System architecture to facilitate system integration • Utilization of parallel computer architectures. Our current research vehicle is the Terregator, built at CMU, which is equipped with a sonar ring, a color camera, and the ERIM laser range finder. Its initial task is to follow roads and sidewalks in the park and on the campus, and to avoid obstacles such as trees, humans, and traffic cones. 2. Vehicle, Sensors, and Host Computers. The primary vehicle of the CMU ALV Project has been the Terregator, designed and built at CMU. The Terregator, short for Terrestrial Navigator, is designed to provide a clean separation between the vehicle itself and its sensor payload. As shown in …

105 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: An architecture for a new task-level system, which is called ATLAS (Automatic Task Level Assembly Synthesizer), to provide a new framework for further research in task planning and present a more unified treatment of some individual pieces of task planning research whose relationships have not previously been described.
Abstract: In this paper we propose an architecture for a new task-level system, which we call ATLAS (Automatic Task Level Assembly Synthesizer). Task-level programming attempts to simplify the robot programming process by requiring that the user specify only goals for the physical relationships among objects, rather than the motions of the robot needed to achieve those goals. A task-level specification is meant to be completely robot independent; no positions or paths that depend on the robot geometry or kinematics are specified by the user. We have two goals for this paper. The first is to present a more unified treatment of some individual pieces of task planning research whose relationships have not previously been described. The second is to provide a new framework for further research in task planning. We stress, however, that ATLAS as a whole has not been implemented and therefore, the description here indicates primarily a direction for future research.

67 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: In [2], an extended relational calculus is defined as the theoretical basis for a ¬1NF database query language, and two new operators are defined, nest (ν) and unnest (μ).
Abstract: There has been a flurry of activity in recent years in the development of databases and database systems to support "high-level" data structures and complex objects. Office forms, computer-aided design, and text retrieval systems are examples of non-traditional applications that require specialized database support. One of the stumbling blocks in using traditional relational databases and relational theory is the assumption that all relations are required to be in first normal form (1NF); that is, all values in the database are non-decomposable. For this reason, non-first-normal-form (¬1NF) relations were proposed, in which the attributes of a relation can take on values which are sets or even relations themselves. This created a need to reexamine the fundamentals of relational database theory in light of this new assumption and opened the door for the introduction of operators which take advantage of the nested structure of ¬1NF relations. To illustrate this, consider an example employee relation which is in 1NF (Figure 1a), and a possible ¬1NF structuring of it (Figure 1b). The ¬1NF relation has two tuples, (Smith, {Sam, Sue}, {typing, filing}) and (Jones, {Joe, Mike}, {typing, dictation, data entry}). The ¬1NF relation makes clearer the independent associations of employee and skill, and employee and child, and reduces the data redundancy when compared with an equivalent 1NF relation. In [2], we define an extended relational calculus as the theoretical basis for a ¬1NF database query language. The calculus has expressions of the form {t | φ(t)}, where t is a tuple variable of fixed length and φ is a formula built from atoms and the operators (¬, ∧, ∨, ∀, ∃). The atoms of formulas φ are of 4 types. 1. s ∈ r, where s is a tuple variable and r is a relation name. 2. s ∈ t[i], where t and s are tuple variables. This says s is a tuple in the relation specified by the i-th component of t, whose value must be a set of tuples. 3. a θ s[i], s[i] θ a, s[i] θ t[j], where s and t are tuple variables, a is a constant, and θ is an arithmetic comparison operator (=, >). 4. s[i] = {u | φ'(u, t1, t2, ..., tn)}, where φ' is a formula with free tuple variables u, t1, t2, ..., tn, and s is some ti. This says the i-th attribute of s is the set of u tuples such that φ' holds. Note, if no tuples u satisfy φ' then this atom evaluates to false. An equivalent extended relational algebra consists of the usual operators (projection, selection, cartesian product, union, difference) and two new operators, nest (ν) and unnest (μ). Nest …
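To make the nest and unnest operators concrete, here is a small Python sketch (not from the paper; the list-of-dicts relation layout and attribute names are assumptions made for illustration) applied to the employee example from the abstract.

```python
# Illustrative sketch of nest/unnest on the employee example. Relations are
# modeled as lists of dicts; set-valued attributes become frozensets.

def nest(relation, group_attr, new_attr):
    """Group tuples that agree on all other attributes; collect group_attr values into a set."""
    groups = {}
    for row in relation:
        key = tuple(sorted((k, v) for k, v in row.items() if k != group_attr))
        groups.setdefault(key, set()).add(row[group_attr])
    result = []
    for key, vals in groups.items():
        row = dict(key)
        row[new_attr] = frozenset(vals)
        result.append(row)
    return result

def unnest(relation, nested_attr):
    """Flatten a set-valued attribute back into one tuple per element."""
    flat = []
    for row in relation:
        for v in row[nested_attr]:
            new_row = {k: val for k, val in row.items() if k != nested_attr}
            new_row[nested_attr] = v
            flat.append(new_row)
    return flat

employees_1nf = [
    {"emp": "Smith", "child": "Sam", "skill": "typing"},
    {"emp": "Smith", "child": "Sam", "skill": "filing"},
    {"emp": "Smith", "child": "Sue", "skill": "typing"},
    {"emp": "Smith", "child": "Sue", "skill": "filing"},
]

nested = nest(employees_1nf, "skill", "skills")   # skills become a set per (emp, child)
print(nest(nested, "child", "children"))          # -> (Smith, {Sam, Sue}, {typing, filing})
# unnest(nested, "skills") recovers one tuple per (emp, child, skill) again.
```

Nesting twice reproduces the ¬1NF tuple (Smith, {Sam, Sue}, {typing, filing}) shown in the abstract, while unnest returns to the flat 1NF form.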

26 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: An object oriented data model called ODM is described which integrates functional data models and the actor model of computation to increase the productivity of the design systems by providing modeling facilities which closely mirror the designers' logical view of the data.
Abstract: We describe an object oriented data model called ODM which integrates functional data models and the actor model of computation. ODM attempts to increase the productivity of the design systems by providing modeling facilities which closely mirror the designers' logical view of the data. Design objects and operations can be easily described by using the structural components and operations in ODM.

14 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: The programming for observability concept for performance/correctness debugging in a parallel programming environment is introduced and an initial implementation of the presented design is outlined and evaluated.
Abstract: The programming for observability concept for performance/correctness debugging in a parallel programming environment is introduced. The design, first implementation, and evaluation of the required language and system support are presented. A two-dimensional, multilevel, integrated monitoring system is described. A distinction is made between the monitoring mechanism, monitoring policies, and their implementation tradeoffs. PIEMON-1, an initial implementation of the presented design, is outlined and evaluated.

14 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: The development methodology for which PLEASE was created is described, an example of development using the language is given, and the methods used to prototype PLEASE specifications are described.
Abstract: PLEASE is an executable specification language which supports program development by incremental refinement. Software components are first specified using a combination of conventional programming languages and mathematics. These abstract components are then incrementally refined into components in an implementation language. Each refinement is verified before another is applied; therefore, the final components produced by the development satisfy the original specifications. PLEASE allows a procedure or function to be specified using pre- and post-conditions written in predicate logic, and an abstract data type to have a type invariant. PLEASE specifications may be used in proofs of correctness, and may also be transformed into prototypes which use Prolog to "execute" pre- and post-conditions. The early production of executable prototypes for experimentation and evaluation may enhance the development process. 1. Introduction. It is widely acknowledged that producing correct software is both difficult and expensive. To help remedy this situation, methods of specifying[13,19,20,26,29,31] and verifying[14,16,19,27,38] software have been developed. The SAGA (Software Automation, Generation and Administration) project is investigating both the formal and practical aspects of providing automated support for the full range of software engineering activities[2,6,8,15,23,35]. PLEASE is a language being developed by the SAGA group to support the specification, prototyping, and rigorous development of software components. In this paper we describe the development methodology for which PLEASE was created, give an example of development using the language, and describe the methods used to prototype PLEASE specifications. (This research is supported by NASA grant NAG 1-138.) A life-cycle model describes the sequence of distinct stages through which a software product passes during its lifetime[10]. There is no single, universally accepted model of the software life-cycle[3,40]. The stages of the life-cycle generate software components, such as code written in programming languages, test data or results, and many types of documentation. In many models, a specification of the system to be built is created early in the life-cycle; as components are produced they are verified[10] for correctness with respect to this specification. The specification is validated[10] when it is shown to satisfy the customer's requirements. Producing a valid specification is a difficult task. The users of the system may not really know what they want, and they may be unable to communicate their desires to the development team. If the specification is in a formal notation it may be an ineffective medium for communication with the customers, but natural language specifications are notoriously ambiguous and incomplete. Prototyping[12,24] and the use of executable specification languages[21,22,29,41] have been suggested as partial solutions to these problems.
Providing the customers with prototypes for experimentation and evaluation early in the development process may increase customer/developer communication and enhance the validation and design processes. To help manage the complexity of software design and development, methodologies which combine standard representations, intellectual disciplines, and well-defined techniques have been proposed[17,19,37,39]. For example, it has been suggested that top-down development can help control the complexity of program construction. By using stepwise refinement to create a concrete implementation from an abstract specification we divide the decisions necessary into smaller, more comprehensible groups. Methods to support the top-down development of programs have been devised[19,32] and put into use[34]. It has also been proposed that software development may be viewed as a sequence of transformations between specifications written at different linguistic levels[25]; systems to support similar development methodologies have been constructed[30]. The Vienna Development Method[19,34] supports the top-down development of programs specified in a nota…
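The general idea of executable pre- and post-conditions can be sketched outside PLEASE as well. The following Python decorator is a rough, hypothetical analogue (PLEASE itself translates specifications into Prolog; the decorator, function names, and example are not from the paper).

```python
# Rough illustration of executing pre- and post-conditions attached to a
# procedure, here as a Python decorator. All names are hypothetical.

import functools

def specified(pre, post):
    """Check pre(args) before the call and post(args, result) after it."""
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args):
            assert pre(*args), f"precondition of {fn.__name__} violated"
            result = fn(*args)
            assert post(*args, result), f"postcondition of {fn.__name__} violated"
            return result
        return checked
    return wrap

@specified(pre=lambda xs: all(isinstance(x, int) for x in xs),
           post=lambda xs, out: sorted(out) == out and sorted(xs) == out)
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

print(insertion_sort([3, 1, 2]))   # [1, 2, 3] -- both conditions hold
```

A prototype of this kind checks the specification at run time on concrete inputs, which is the role the abstract assigns to early executable prototypes.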

13 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: Some of the ways in which academic institutions have responded to the challenges of software engineering education are described and alternatives to academic training are examined, such as in-plant conversion courses and on-the-job training.
Abstract: This paper describes some of the ways in which academic institutions have responded to the challenges of software engineering education. Some of the opportunities and problem areas in software engineering education are explored at the undergraduate, masters, and doctoral levels. Alternatives to academic training are examined, such as in-plant conversion courses and on-the-job training. The paper concludes with some observations on current trends and possible future roles of academe in software engineering education. INTRODUCTION. Software engineering is primarily concerned with the methods, tools, and techniques, both technical and managerial, used to develop and modify software products. For the purposes of this paper, a software product is defined to be production-quality software developed by professional software engineers for use by other people. Ideally, a software product is developed on time and within budget, and satisfies the users' needs in a cost-effective manner. A software product might be an application package, a utility program, an operating system, or the software component of a hardware-software-people system. Software engineering is not exclusively concerned with the problems of large software systems; however, issues unique to development and modification of large products are topics of interest to software engineers, as are the issues of scaling down to small products. During the past decade, software engineering has emerged as a technological discipline of considerable social, economic, and intellectual importance. The pervasive influence of computers and computing technology throughout modern society has accelerated the need for competent, professional software engineers, and the practice of software engineering is increasingly regarded as a legitimate professional activity. Software is now treated (by most people) as a legitimate technological artifact, rather than a necessary but unpleasant appendage to the computing hardware. While some may doubt the legitimacy of computer science as a scientific discipline (DENN85), there can be no doubt that development and modification of software products are legitimate technological endeavors.

11 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: This paper investigates N-Modular Redundancy (NMR) in the form of replicated computations in a concurrent programming model consisting of communicating processes to permit redundant systems to be robust with respect to failures in redundant processors, and to allow software fault tolerance techniques such as N-version programming.
Abstract: This paper investigates N-Modular Redundancy (NMR) in the form of replicated computations in a concurrent programming model consisting of communicating processes. A formal specification of NMR is given to express the correct behaviour of the system in the presence of nondeterminism. The COSY path expressions formalism is used as a formal model. Then some implementations are proposed which satisfy the given specification. This approach permits redundant systems to be robust with respect to failures in redundant processors, and also permits the use of software fault tolerance techniques such as N-version programming. The need to ensure correct input-output behaviour, and a high level of fault-masking in the case of real-time systems, has led designers to consider the application of N-Modular Redundancy (NMR) in the construction of software. The SIFT [9] aircraft control computer system, for example, has demonstrated the possibility of obtaining reliable computations through the replication of programs on multiple computers and the use of majority voting. This solution has many advantages, which include: the continuity of correct input/output behaviour in the case of real-time control systems is ensured; reliability is independent of the particular strategy of resource management; transient faults are masked by voting and do not cause reconfiguration. This form of redundancy involves implementation problems if nondeterminism is allowed in the computational model, since it requires that in the absence of faults all the instances of a process have the same behaviour. However, adherence to this requirement is problematic in distributed computing systems. The solution adopted by SIFT, though simple, is very restrictive [8]. Consistency is maintained through adopting constraints on scheduling and communication. In particular, a planned schedule ensures that task results are available when required by other tasks (tasks are never required to wait for input values). In contrast, it has been shown in [7] that one can extend the concept of SIFT-like NMR: to provide fault-tolerance support for a wide class of programs; to allow greater asynchrony between executions of program replications; to allow software fault tolerance techniques, such as N-version programming …
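The voting step at the core of NMR can be sketched in a few lines. The sketch below is a generic illustration only (the paper's contribution is a formal COSY specification and implementations for nondeterministic communicating processes, none of which appears here).

```python
# Minimal sketch of N-Modular Redundancy voting: run N replicas of a
# computation on the same inputs and take the majority result.

from collections import Counter

def nmr(replicas, *inputs):
    """Run each replica on the same inputs and return the majority output."""
    outputs = [replica(*inputs) for replica in replicas]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes <= len(replicas) // 2:
        raise RuntimeError("no majority: too many divergent replicas")
    return value

# Three "versions" of the same function; one is faulty and is outvoted.
ok1 = lambda x, y: x + y
ok2 = lambda x, y: y + x
bad = lambda x, y: x + y + 1          # simulated fault

print(nmr([ok1, ok2, bad], 2, 3))     # 5 -- the faulty replica is masked
```

Using different independently written versions as the replicas is exactly the N-version programming case the abstract mentions.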

10 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: This study shows that PRG efficiently captures major clusters and is more general than former inductive inference algorithms in providing probabilistic features.
Abstract: The Probabilistic Rule Generator (PRG) generates rules by inductive inference from training examples. It employs information-theoretic entropy as a criterion in building a search tree, but also uses multiple-valued logic to expand a captured complex. This study shows that PRG efficiently captures major clusters and is more general than former inductive inference algorithms in providing probabilistic features. A brief discussion of how PRG can systematically modify initial hypotheses is also presented.
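The entropy criterion mentioned in the abstract can be illustrated with a generic ID3-style scoring of attribute splits; the following sketch is not a reconstruction of PRG, and the example data are invented.

```python
# Hedged sketch of an information-theoretic entropy criterion: score a
# candidate attribute by the weighted entropy of the class distribution after
# splitting the training examples on it.

import math
from collections import Counter, defaultdict

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def split_entropy(examples, attr):
    """Weighted entropy of the class label after partitioning on attr."""
    partitions = defaultdict(list)
    for features, label in examples:
        partitions[features[attr]].append(label)
    n = len(examples)
    return sum(len(part) / n * entropy(part) for part in partitions.values())

examples = [({"color": "red",  "size": "big"},   "+"),
            ({"color": "red",  "size": "small"}, "+"),
            ({"color": "blue", "size": "big"},   "-"),
            ({"color": "blue", "size": "small"}, "-")]

# "color" separates the classes perfectly, so its split entropy is 0.
print(split_entropy(examples, "color"), split_entropy(examples, "size"))
```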

9 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: The design of a high-level language feature that provides the ability to change the linkage between a name and various implementations during the execution of a program is described and incorporated into a Pascal implementation.
Abstract: No currently available language provides a means of binding names to separately compiled implementations under the control of an executing program and statically checking the correctness of such binding. This work describes the design of a high-level language feature that provides this control. The motivating problem is the programming of data base managers in a high-level language without resorting to untyped external storage of the data. This design follows a methodology that prescribes the complete design and modeling of the semantics of the language feature before the final syntactic decisions are made. The result is a language feature that provides the ability to change the linkage between a name and various implementations during the execution of a program. This "dynamic binding" takes place in the context of a strongly typed, block-structured language. This feature has been incorporated into a Pascal implementation. The problems encountered are discussed, and future work is suggested. Appendices specify the syntax of the Pascal extension and present an example data base manager written using this language.
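A loose analogue of the feature can be shown in Python, with the caveat that Python is dynamically typed, so the static checking that is the point of the paper's Pascal extension is absent; class, module, and function names below are illustrative only.

```python
# Loose analogue of "dynamic binding": rebinding a name to one of several
# separately defined implementations while the program runs.

from typing import Callable, Dict

class Binding:
    """A name whose implementation can be switched during execution."""
    def __init__(self, implementations: Dict[str, Callable]):
        self._impls = implementations
        self._current = None

    def bind(self, key: str):
        # In the paper's design this link step would be statically checked.
        self._current = self._impls[key]

    def __call__(self, *args):
        if self._current is None:
            raise RuntimeError("name is not bound to any implementation")
        return self._current(*args)

store = Binding({
    "in_memory": lambda k, v, db={}: db.update({k: v}) or db,
    "logged":    lambda k, v: print(f"would write {k}={v} to external storage"),
})

store.bind("in_memory")
store("answer", 42)
store.bind("logged")          # the same name now refers to a different implementation
store("answer", 42)
```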

7 citations


Proceedings ArticleDOI
01 Feb 1986
TL;DR: ENCOMPASS supports the specification, design and construction of validated and verified programs in a modular programming language; it supports program development by incremental refinement, the use of executable specifications, a rigorous model of software configurations, and a hierarchical library structure.
Abstract: ENCOMPASS[3] is an example integrated software development environment being constructed by the SAGA project[1,2]. ENCOMPASS supports the specification, design and construction of validated and verified programs in a modular programming language. ENCOMPASS supports program development by incremental refinement, the use of executable specifications, a rigorous model of software configurations, and a hierarchical library structure. In ENCOMPASS, a development passes through the phases planning, requirements definition, validation, refinement, and system integration. This structure is the traditional life-cycle model extended to support the Vienna Development Method (VDM) and the use of an executable specification language. In VDM, a program is first specified using elements from both conventional programming languages and mathematics. This specification is then incrementally transformed into a program in an implementation language. Each transformation is verified before another is applied; therefore the final program produced satisfies the original specification. In ENCOMPASS, VDM specifications may be written in the executable specification language PLEASE[4]. Prototypes produced from PLEASE specifications may be used in the validation phase to enhance customer/developer communication. The use of PLEASE specifications also allows the verification of system components using either testing or proof techniques. In ENCOMPASS, the objects in a software system are modeled as entities which have relationships between them. Entities are grouped into modules and modules are grouped into projects. An entity, module or project may have different versions. A view of a project may be used to hide or expose entities, provide a different modularization, or emphasize different relationships. The entities considered by ENCOMPASS include: module specifications, bodies, object code, and load modules; test cases, drivers and harnesses; proofs of correctness; and all forms of documentation. (This research is supported by NASA Grant NAG 1-138.) ENCOMPASS supports multiple programmers and projects using a hierarchical library system containing a workspace for each programmer, a project library for each project, and a global library common to all projects. A workspace contains objects which are private to a single programmer, and possibly references to objects which are actually in libraries. The project leader controls access to the project library, which contains objects that must be available to all the personnel on a particular project. The global library contains objects available for reuse on all projects and is read-only to all but the librarian. The librarian reviews completed projects, decides which objects are suitable for reuse, and enters these objects into the library indexed by key words. A programmer may search the library for objects which are indexed on a number of key words and then create a copy of, or reference to, these objects in his workspace.
A prototype implementation of ENCOMPASS is being constructed on a Vax running 4.2BSD Unix using an existing revision control system. The system will incorporate many tools developed by the SAGA project, including a language-oriented editor and a proof management system. The initial prototype of ENCOMPASS will be used by the SAGA group in the development of future versions of ENCOMPASS and other software tools.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: I believe that logic is the least restrictive and most appropriate formalism for knowledge-based systems, and this combination of knowledge and formalism accounts for some of the difficulty practitioners have had in explaining what knowledge-based systems are.
Abstract: Feigenbaum [4], commenting on the Fifth Generation Project, has said that logic is not important, but knowledge is. I agree that knowledge is more important than logic. But logic is important too. Knowledge-based systems need both knowledge and formalism. Although knowledge is more important than formalism, formalism is important because the use of a poor formalism can interfere with the representation of knowledge and can restrict the uses to which that knowledge can be put. I believe that logic is the least restrictive and most appropriate formalism for knowledge-based systems. Knowledge-based systems combine both complex knowledge and sophisticated formalisms. I believe that this combination of knowledge and formalism accounts for some of the difficulty practitioners have had in explaining what knowledge-based systems are. Problems arise because we confuse knowledge with formalism. Many characterizations of expert systems, for example, concentrate simply on formalism, on rule-based languages for example, and say very little about what makes such formalisms particularly appropriate for expressing and reasoning with knowledge. Logic is strong on formalism but weak on concepts. It contains no knowledge, and is all form and no content. Indeed, the significance of the model-theoretic semantics of logic is precisely that: model theory defines as valid precisely those sentences which are true in any interpretation. As a consequence, logic tells us nothing about the actual world itself. To use logic to represent knowledge we have to identify a useful vocabulary of symbols to represent concepts. We have to formulate appropriate sentences, with the aid of that vocabulary, to represent the knowledge itself. Logic can help us to test an initial choice of vocabulary and sentences, by helping us to derive logical consequences and identify the assumptions which participate in the derivation of those consequences. It provides us with no help, however, in identifying the right concepts and knowledge in the first place. A typical AI knowledge representation scheme, such as semantic networks or frames, combines concepts and formalism at the same time. It provides a built-in framework of ready-made concepts to help with the initial representation of knowledge. But it also provides a formalism to go along with the concepts. In the same way that a computer salesman might try to convince us that to run a particular piece of software we need to buy the appropriate hardware, a LISP machine for example, the developer of an AI system typically tries to convince us …

Proceedings ArticleDOI
01 Feb 1986
TL;DR: This talk will survey representative examples of a number of vigorous research streams exploring methods of extending the capabilities and capacities of Prolog, including treatment of concurrency, inclusion of functional programming capabilities, and incorporation of metalevel reasoning techniques.
Abstract: Prolog has established itself as a highly successful example of the logic programming paradigm. Today there are a number of vigorous research streams exploring methods of extending the capabilities and capacities achieved by Prolog. These include treatment of concurrency, inclusion of functional programming capabilities, and incorporation of metalevel reasoning techniques. The paper will survey the goals and current state of these explorations. 1. Introduction. The logic programming enterprise seeks to utilize formal logical systems as languages for controlling computers. The basic methodology lies in the adaptation of theorem-provers for logical systems to create abstract machines which "execute" formulas of the logical system. Prolog is a highly successful example of this approach. It has proven to be very useful in a variety of areas ranging from rapid prototyping of conventional systems to highly speculative AI research. These successes have led a number of researchers to explore methods of extending the capabilities of Prolog. These investigations include incorporation of concurrency constructs and distributed execution, functional programming constructs, and metalevel reasoning constructs, including control. This talk will survey representative examples of these investigations. The survey is in no way intended to be complete, nor does the choice of projects described imply a value judgment vis-à-vis those projects not described. The discussion is simply intended to present a flavor of some of the new directions in Prolog research. 2. Concurrency. Major work in this area began with the development of systems leading to PARLOG by Clark and Gregory ([1981], [1984a], [1984b], [1984?]) at Imperial College in London, and (concurrently) the development of Concurrent Prolog and its descendants by Shapiro [1983] and his students at the Weizmann Institute in Rehovot, Israel. Similar work has been undertaken by L. Pereira, L. Monteiro, and M.J. Wise. All of the efforts are motivated by a desire to exploit the inherent AND-parallelism and OR-parallelism of logic programming languages. Much of the necessary machinery for concurrent programming is already present in ordinary sequential logic programming languages, or is readily obtainable from the present approaches. In this talk, we'll consider Concurrent Prolog and its descendants; the concerns and approaches of the other projects are closely related. Concurrent Prolog introduces two new syntactic constructs to ordinary Prolog: the commit operator (|) and the read-only variable annotation (?). The commit operator appears in the bodies of clauses, either explicitly or implicitly immediately following the clause arrow (:-). Such clauses have the form A :- G1, ..., Gk | B1, ..., Bn. and are called guarded clauses. The portion (A) to the left of the arrow (:-) is called the head, the portion (G1, ..., Gk) between the arrow and the commit operator (|) is called the guard, and the portion (B1, ..., Bn) to the right of the commit …
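The committed-choice control idea behind guarded clauses can be imitated, very loosely, outside Prolog. The toy Python sketch below is an assumption-laden illustration only: it ignores unification, read-only variables, and real parallelism, and the rule base is invented.

```python
# Toy illustration of committed choice: try the guards of the candidate
# clauses, commit to the first one that succeeds, and only then run its body.

def solve(goal, clauses):
    for guard, body in clauses.get(goal, []):
        if guard():            # evaluate the guard G1,...,Gk
            return body()      # commit: the remaining alternatives are discarded
    raise LookupError(f"no clause for {goal} committed")

clauses = {
    "classify(0)": [
        (lambda: 0 < 0,  lambda: "negative"),
        (lambda: 0 > 0,  lambda: "positive"),
        (lambda: 0 == 0, lambda: "zero"),
    ],
}

print(solve("classify(0)", clauses))   # -> "zero"
```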

Proceedings ArticleDOI
01 Feb 1986
TL;DR: An optimization technique for interpreters of high level interpreter-based applicative languages, taking an implementation of LISP as an example, and it is shown that the combination of these three techniques keeps the extra expense for static scoping small.
Abstract: In recent years, the interest in efficient implementations of high-level interpreter-based applicative languages has increased considerably, both in the areas of software engineering and artificial intelligence. Further, a preference for static-scope languages can be observed. This paper presents an optimization technique for interpreters of such languages, taking an implementation of LISP as an example. First, the technique of shallow binding, which is used as a fast method of accessing variable bindings in many (dynamic scope!) LISP systems, is adapted to static scoping. Then a dynamic optimization of simple and formal tail-recursive functions is introduced. This technique is extended such that "covered tail recursive" functions are also interpreted very efficiently. Functions of this kind are typical for recursive operations on list structures. Finally, it is shown that the combination of these three techniques keeps the extra expense for static scoping small.
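The tail-recursion optimization the abstract alludes to rests on a standard observation: when the last action of a function is another call, an interpreter can loop instead of growing its own stack. The sketch below shows that idea generically with a trampoline in Python; it is not the paper's LISP mechanism, and the function names are invented.

```python
# Generic sketch of turning tail calls into iteration with a trampoline.

def trampoline(thunk):
    """Keep running returned thunks until a non-callable value appears."""
    while callable(thunk):
        thunk = thunk()        # each tail call returns the next step instead of recursing
    return thunk

def length(xs, i=0):
    if i == len(xs):
        return i
    return lambda: length(xs, i + 1)   # tail call, expressed as a thunk

print(trampoline(length(list(range(100000)))))   # 100000, without deep recursion
```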

Proceedings ArticleDOI
01 Feb 1986
TL;DR: A new algorithm for minimum-time motion planning is proposed for robots operating in an obstacle-strewn environment, enabling a merger between two kindred algorithms: the A* search algorithm and dynamic programming.
Abstract: A new algorithm for minimum-time motion planning is proposed for robots operating in an obstacle-strewn environment. The structure of the topological passageways is analyzed and represented using a model of slalom situations, for which a number of rules is determined. The dynamical system of the robot is described in the form of a sequential machine. This enables a merger between two kindred algorithms: the A* search algorithm and dynamic programming. An experimental analysis of the simulated mobile robot has confirmed the applicability of the results.
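For readers unfamiliar with one of the two ingredients, here is a plain A* sketch on a small occupancy grid. It covers only the generic A* half (the paper's slalom model, sequential-machine dynamics, and the merger with dynamic programming are not reproduced), and the grid and heuristic are assumptions.

```python
# Sketch of A* on a 4-connected grid with obstacles; Manhattan distance heuristic.

import heapq

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path so far)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None   # no path around the obstacles

grid = [[0, 0, 0],
        [1, 1, 0],     # 1 = obstacle
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```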

Proceedings ArticleDOI
01 Feb 1986
TL;DR: This paper derives cost expressions for query processing under several index placement strategies and studies the performance of composite B-trees in a distributed database system.
Abstract: Composite B-tree indexes facilitate enforcing constraints such as referential integrity rules, and enhance the performance of joins in a relational database environment. In this paper we study the performance of composite B-trees in a distributed database system. The index itself could be centralized, distributed (as disjoint segments), or fragmented (as non-disjoint segments). We derive cost expressions for query processing for these index placement strategies.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: The merits of various knowledge representations (KRs) have been debated since the earliest days of AI.
Abstract: The merits of various knowledge representations (KRs) have been debated since the earliest days of AI: whether knowledge should be represented procedurally or declaratively; which choice is best among frames, production rules, semantic nets, and analogical representations. A KR Occam's razor demands: if one representation can do the job, then only one should be used.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: Two compact schemes, namely the half and three partition representations, are introduced for coding the topology of cube connected networks, and the time complexities of several of the resulting algorithms can be improved using a parallel computer with O(n) processors.
Abstract: Two compact schemes, namely the half and three partition representations, are introduced for coding the topology of cube connected networks. Based on these representations, recursive algorithms are given to realize paths, transpositions, cycles, and permutations through such networks. It is also shown that a cube connected network can realize an arbitrary transposition, i.e. the exchange of data between any two of its input terminals, in at most two passes. The algorithm for realizing paths runs in O(log₂ n) time and O(n) space, those for the next two tasks require O(n) time and space, and the algorithm for the last task runs in O(n log₂ n) time and O(n) space on a sequential computer. The time complexities of the algorithms for the last three tasks can be improved to O(log₂ n) using a parallel computer with O(n) processors.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: This paper focuses on the stepwise refinement of a pseudocode program as assessed in terms of a set of partial metrics, which are extensions of Halstead's Software Science, McCabe's Cyclomatic Complexity, and others.
Abstract: This paper describes a software tool, the Partial Metrics System, that supports the metrics-driven design of Ada program modules. In particular, it focuses on the stepwise refinement of a pseudocode program as assessed in terms of a set of partial metrics. These metrics are extensions of Halstead's Software Science, McCabe's Cyclomatic Complexity, and others. It is then demonstrated how these metrics can drive the design process for an individual module. Heuristics are suggested that allow the programmer to use these metrics to produce improved designs.
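One of the base metrics the paper extends, McCabe's cyclomatic complexity V(G) = E − N + 2P, is easy to compute from a control-flow graph. The sketch below is a generic illustration (the example graph is invented, and the paper's "partial" variants for incomplete pseudocode are not reproduced).

```python
# McCabe's cyclomatic complexity V(G) = E - N + 2P for a control-flow graph
# with E edges, N nodes, and P connected components.

def cyclomatic_complexity(edges, nodes):
    # Count connected components P with union-find over the undirected graph.
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    p = len({find(n) for n in nodes})
    return len(edges) - len(nodes) + 2 * p

# Control-flow graph of a single routine with one if/else: V(G) = 2.
nodes = {"entry", "test", "then", "else", "exit"}
edges = [("entry", "test"), ("test", "then"), ("test", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))   # 2
```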

Proceedings ArticleDOI
01 Feb 1986
TL;DR: A temporal element is a finite union of intervals of time [1], and the notion allows the result of a query to take a structurally simple form.
Abstract: A temporal element is a finite union of intervals of time [1]. In temporal databases, one is led to the notion of a temporal element in a very natural way. For example, the set of instants when it was below freezing in Chicago during 1984 is a temporal element and not an interval. Moreover, even though the sets {t: P(t)} and {t: Q(t)} may be intervals, {t: not P(t)}, {t: P(t) or Q(t)} and {t: P(t) and Q(t)} would in general be temporal elements. Thus, the notion of a temporal element allows the result of a query to take a structurally simple form.
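The closure property the abstract points out (negation, disjunction, and conjunction all yield temporal elements) can be made concrete with a small sketch. The representation below (closed integer intervals with a global horizon) is an assumption made for illustration, not a construct from the paper.

```python
# A temporal element as a normalized list of disjoint closed integer intervals,
# with union, complement, and intersection remaining temporal elements.

def normalize(intervals):
    """Merge overlapping or adjacent intervals into a canonical form."""
    out = []
    for lo, hi in sorted(intervals):
        if out and lo <= out[-1][1] + 1:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

def union(a, b):
    return normalize(a + b)

def complement(a, horizon):
    """Instants in [0, horizon] not covered by temporal element a."""
    out, cursor = [], 0
    for lo, hi in normalize(a):
        if cursor < lo:
            out.append((cursor, lo - 1))
        cursor = hi + 1
    if cursor <= horizon:
        out.append((cursor, horizon))
    return out

def intersection(a, b, horizon):
    return complement(union(complement(a, horizon), complement(b, horizon)), horizon)

below_freezing = [(5, 40), (300, 330)]      # {t: P(t)} -- two cold spells, not an interval
snow_cover     = [(20, 60)]                 # {t: Q(t)}
print(union(below_freezing, snow_cover))              # still a temporal element
print(intersection(below_freezing, snow_cover, 365))  # [(20, 40)]
```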

Proceedings ArticleDOI
01 Feb 1986
TL;DR: SHAPE, a system that relies on a smart compiler to effectively control a reconfigurable architecture that is able to exploit the concurrency that occurs in programs written in high level languages is described.
Abstract: In this paper we describe SHAPE, a system that relies on a smart compiler to effectively control a reconfigurable architecture. The reconfigurable architecture is able to exploit the concurrency that occurs in programs written in high level languages. The architecture is designed to reduce the data transfer between memory modules and processing elements and the need for storing temporary results in memory, thus avoiding processor-memory bottlenecks. The system relies on compiler technology to effectively utilize the hardware resources. This has the advantage that the hardware need not be overly complex to achieve high performance and also leads to a reduction in the run-time overhead. The compiler in the system generates code based on knowledge acquired from the source as well as from an intermediate representation of the source. An overview of the architecture and a brief discussion of compiler techniques to achieve the appropriate configuration and concurrent execution are presented.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: This paper describes the first "proof of principle" that a well-defined meaning can be assigned to such "graphical programs", and that they can be "compiled" into executable code.
Abstract: The STILE system (STructure Interconnection Language and Environment) provides facilities for graphically describing, and executing programs consisting of, communicating concurrent activities such as those found in real-time control software. A prototype graphical editor has been developed that allows STILE programs to be created interactively. The editor also can be used to generate a description of the attributes of the components of the graph and its interconnection structure. Two run-time systems for executing identical STILE programs have been implemented and are also described, one for a multiprocessor and one for a network. In each, processes are distributed over several processors. 1. Introduction. A long-standing problem in real-time systems is that of constructing control software. This software is inherently difficult because of the need for concurrency as well as the real-time constraints, and most existing programming languages and environments do not deal with these issues. The STILE project [Weide 83, Weide 84, Brown 84] involves our effort to take an entirely different approach to real-time software: graphical programming. We substitute drawing pictures of software systems for the traditional text-oriented languages in order to express concurrency. This paper describes our first "proof of principle" that a well-defined meaning can be assigned to such "graphical programs", and that they can be "compiled" into executable code. Furthermore, it explains how the STILE virtual machine (model of computation) can be implemented equally well on a tightly-coupled multiprocessor and on a loosely-coupled network system, while hiding non-essential details of the target system from the real-time programmer. The graphical editor runs on a Sun Workstation whose graphics capabilities include fast manipulation of image data, a 1152 x 900 bit-mapped display that allows graphics and text to be mixed freely, and a mouse pointing device to select items from a menu or point at text. Its software support for menus and selections made it attractive for our purpose. The STILE run-time system has been implemented on a multiprocessor, consisting of Intel's 86/30 single board computers connected via a Multibus. This system has been used with robotics application software and with a variety of real-time applications simulated by varying the parameters of a synthetic workload generator. The STILE run-time system has also been implemented on a network of Sun Workstations connected via Ethernet. This was done more as an exercise to evaluate the ease or difficulty of implementing STILE on a general-purpose operating system, and not as a potential testbed for real-time applications. Figure 1 illustrates the overall architecture of the prototype system described in this report. The paper is organized as follows. In Section 2, we introduce the STILE "basic model" with the help of a sample robotics application program. Creating and compiling STILE graphs using the graphical editor are discussed in Section 3.
Sections 4 and 5 describe the implementation of the STILE run-time system on a multiprocessor and in a network environment, respectively. Concluding remarks are given in Section 6. 2. The STILE Basic Model. The STILE "basic model" [Weide 83] is an abstract model of concurrency useful for describing parallel programs and for building domain-dependent abstract concurrency models. The main components of this model are: • Blocks or boxes, which are agents of computations, or processes.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: A new technique for comparing runtimes of different search algorithms is presented and it is proposed that regions in a certain parameter space be reported instead of raw timing data, so that time comparisons now depend only on the heuristic function used to measure the distances between two nodes and the structure of the search space graph.
Abstract: Reports in the literature comparing run-times of different search algorithms are highly dependent on parameter values of the domain where measurements are taken. Examples of such parameters are the time required to expand a node and the time required to calculate the heuristic distance between two problem states. Altering these parameter values can totally reverse the conclusion of a time comparison test. To eliminate this problem, a new technique for comparing run-times of different search algorithms is presented. It is proposed that regions in a certain parameter space be reported instead of raw timing data. The effect is that time comparisons now depend only on the heuristic function used to measure the distances between two nodes and the structure of the search space graph. This, in turn, allows us to study how the speed of different search algorithms varies with the heuristic-function/search-graph environment.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: From this perspective, the university student believes that by following a plan of studies in the computer science curriculum at such an institution, she will be assured of a good job at a reasonable salary performing interesting and challenging tasks.
Abstract: Most universities and colleges prepare students for continued growth and development during a lifetime of employment [Gorsline 82]. Traditionally, a university education has implied better job opportunities for its graduates. From this perspective, the university student believes that by following a plan of studies in the computer science curriculum at such an institution, she will be assured of a good job at a reasonable salary performing interesting and challenging tasks.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: A theory is proposed by which Instructional Expert Systems may acquire a deep understanding of a student by using the indiscernibility relation, the expert's knowledge and the student's knowledge of a particular domain can be expressed in terms of equivalence classes which partition the domain.
Abstract: In order to provide effective instruction (i.e. present the material at the correct level, use the most appropriate teaching strategy, select appropriate remedial action, etc.), an Instructional Expert System must maintain an accurate "student model". We propose a theory (based on the concept of "rough classification") by which Instructional Expert Systems may acquire a deep understanding of a student. By using the indiscernibility relation, the expert's knowledge and the student's knowledge of a particular domain can be expressed in terms of equivalence classes which partition the domain. By comparing both partitions, the computer tutor may ascertain what the student knows, the student's misconceptions, and what the student can and cannot learn.
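The indiscernibility relation at the heart of this idea is just "agrees on a chosen set of attributes". The sketch below illustrates how it induces equivalence classes and how an expert's and a student's partitions can differ; the attribute names and items are invented and the rough-set comparison itself is not implemented.

```python
# Equivalence classes of the indiscernibility relation induced by a set of
# attributes; a finer attribute set yields a finer partition of the domain.

from collections import defaultdict

def partition(items, attrs):
    """Group items that are indiscernible with respect to attrs."""
    classes = defaultdict(set)
    for name, features in items.items():
        key = tuple(features[a] for a in attrs)
        classes[key].add(name)
    return sorted(classes.values(), key=sorted)

problems = {
    "p1": {"topic": "fractions", "difficulty": "easy"},
    "p2": {"topic": "fractions", "difficulty": "hard"},
    "p3": {"topic": "decimals",  "difficulty": "easy"},
}

expert  = partition(problems, ["topic", "difficulty"])   # finer partition
student = partition(problems, ["topic"])                 # student ignores difficulty
print(expert)    # [{'p1'}, {'p2'}, {'p3'}]
print(student)   # [{'p1', 'p2'}, {'p3'}]
```

Comparing the two partitions class by class is what lets the tutor locate where the student's distinctions are coarser than the expert's.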

Proceedings ArticleDOI
01 Feb 1986
TL;DR: Here an attempt is made to develop general methods of analysis, using the notion of the discriminant of a search graph as the starting point; it is hoped that this general approach will encourage similar studies on search graphs other than trees.
Abstract: A search graph has the form of an m-ary tree with bidirectional arcs of unit cost. In general there are a number of solution paths of different lengths. The shortest solution path has length N. It is assumed that the heuristic estimates of all nongoal nodes, after being appropriately normalized, are independent and identically distributed random variables. Under what conditions is the expected number of node expansions polynomial in N? Earlier efforts at answering this question have considered only special cases. Here an attempt is made to develop general methods of analysis, using the notion of the discriminant of a search graph as the starting point. It is hoped that this general approach will encourage similar studies on search graphs other than trees. Section 1: Introduction. A worst-case analysis of a heuristic search algorithm gives at best only an imperfect picture of its performance characteristics. Unfortunately, an average-case analysis is far more difficult to achieve. The major problem lies in deciding how exactly the averaging is to be done. No completely satisfactory method is currently known. One way out is to take a graph of simple structure which is representative in some sense, and then see how a search algorithm like A* fares when run on it. This has been done by Huyn, Dechter and Pearl [4], and by Pearl [6]. Their graph is an m-ary tree, with bidirectional arcs of unit cost and one goal node at a distance N from the root. It is assumed that the heuristic estimates of nongoal nodes, after being appropriately normalized, are independent, identically distributed random variables. The expected number of node expansions made by A* is then computed. In this idealized model, no node is expanded more than once by A*, and a minimal cost solution is always obtained. Detailed proofs can be found in Pearl [7]. Bagchi and Srimani have looked at the same problem, but because of their interest in inadmissible heuristics, they have given the discriminant a major role in …

Proceedings ArticleDOI
01 Feb 1986
TL;DR: This abstract advocates an approach for expressing and using meta-knowledge based on the use of meta-interpreters, summed up in the slogan Expert System = Knowledge + Meta-Interpreters.
Abstract: Research and development in artificial intelligence over the past several years have shown that expert systems can be built in domains where knowledge can be suitably encapsulated. The research confirms the cliche: "in the knowledge lies the power". Knowledge is the key to making computers more intelligent, and hence more useful. There is a need for tools, techniques and methodology to facilitate both the incorporation of knowledge into computer systems and the utilisation of that knowledge. The first-generation expert systems that are currently being built are limited, failing to capture the whole range of expert skills. Experts are distinguished from novices by the amount and form of knowledge they possess, and also by how they use it. Experts are able to explain their reasoning, train others in how to use knowledge, and cope with conflicting information, among other skills. For any expert system to model aspects of human expertise such as explanation, meta-knowledge, that is knowledge about the knowledge, needs to be explicitly present. Meta-knowledge is needed by the inference engine to guide, and augment, the reasoning process. For example, explanation facilities built where such knowledge is lacking are intrinsically limited, as argued in [1] and [3]. This abstract advocates an approach for expressing and using meta-knowledge based on the use of meta-interpreters. A meta-interpreter for a language is an interpreter for the language written in the language itself. The ability to write a meta-interpreter easily is a very powerful feature a programming language can have. It enables the building of an integrated programming environment, for example, and gives access to the computational process. Both these features are required for expert systems. An expert can be described as a collection of meta-interpreters working on a knowledge base. This view seems intuitively to describe expert skills. Separate tasks require separate procedures. Handling uncertain or conflicting evidence is different from explaining the line of reasoning used to reach a conclusion. Despite the difference, we need to be able to uniformly handle the various forms of reasoning necessary for an expert system. The uniform integration of various meta-interpreters, knowledge and meta-knowledge constitutes a methodology for building expert systems. The advantages of so doing are modularity and clarity. Both knowledge and meta-knowledge are encoded explicitly. The methodology is summed up in the slogan Expert System = Knowledge + Meta-Interpreters …
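The layering of interpreters the abstract describes can be illustrated with a toy analogue in Python rather than Prolog: one interpreter merely solves goals against a rule base, a second solves them and records the proof so it can explain its reasoning. The rule base and goals are invented, and there is no unification; goals are plain atoms, so this is only a caricature of the idea.

```python
# Toy analogue of layered (meta-)interpreters over one knowledge base.

rules = {                       # head: list of alternative bodies (conjunctions)
    "mortal(socrates)": [["man(socrates)"]],
    "man(socrates)":    [[]],   # a fact: empty body
}

def solve(goal):
    """Plain interpreter: succeed if some body of the goal is provable."""
    return any(all(solve(g) for g in body) for body in rules.get(goal, []))

def solve_with_proof(goal):
    """Explanatory interpreter: same search, but return a proof tree."""
    for body in rules.get(goal, []):
        sub = [solve_with_proof(g) for g in body]
        if all(s is not None for s in sub):
            return {goal: sub}          # records how the goal was derived
    return None

print(solve("mortal(socrates)"))             # True
print(solve_with_proof("mortal(socrates)"))  # {'mortal(socrates)': [{'man(socrates)': []}]}
```

The point of the slogan is that both interpreters run over the same knowledge; only the meta-level behaviour (explanation, handling of uncertainty, and so on) changes.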

Proceedings ArticleDOI
01 Feb 1986
TL;DR: The goal is to create an intelligent assistant for the graph theory researcher, with emphasis on aiding the conjecturing process, and several software tools have been constructed which support a portion of graph theory investigation.
Abstract: Grapple is a system which automates the construction of graphs and the computation of graphical properties. It is part of a larger project whose end product will be a graph theoretic expert system. Four tasks will be supported: graph construction, graphical property definition and computation, conjecture analysis, and conjecture formulation. We describe the current status of Grapple, present a typical session, and discuss requirements for automating the more creative aspects of graph theory investigation. 1. Introduction. In this paper we report on some initial developments in an expert system for graph theory investigation. Our goal is to create an intelligent assistant for the graph theory researcher, with emphasis on aiding the conjecturing process. Our initial efforts have concentrated on building a system, called Grapple, which addresses graph construction and graphical computation. The mathematical conjecturing process can be defined as everything that a mathematician does up to the instant he is ready to write "Theorem: ... Proof: ..." Consider the process specialized to graph theory research. This includes library work, considering results known for a class of graphs on different classes of graphs, specializing general results in order to obtain strengthened conclusions, violating the hypothesis of a result and searching for counterexamples, and other tasks that involve "tampering" with current knowledge. In all of this work, the graph theorist must create and examine useful examples of graphs. Examples serve two functions: exploratory and evidentiary. To help formulate a conjecture it is necessary to explore the research area under consideration. Once a conjecture has been formulated, examples from the full spectrum of pertinent graphs should be checked as evidence of the truth of the conjecture. In Grapple we want to capture this experimental part of the mathematical conjecturing process. Several software tools have been constructed which support a portion of graph theory investigation. "Graph" [Cvet84] supports the graph theorist by providing a data management system for the storage and retrieval of the graph theory literature, a theorem prover, and a library of algorithms which can perform a wide variety of computations on graphs. INGRID [Dutt83] is oriented toward extremal graph theory and manipulates known theorems in order to produce bounds on specified graph theoretic invariants given restrictions on other chosen invariants. In addition to the tasks which these systems support, the graph theorist must construct and manipulate particular graphs as well as families of graphs, compute graphical properties and the relationships among them, discover patterns in the computed data, and formulate conjectures based on these patterns. It is this set of tasks on which we focus our efforts. As such we are not interested in supporting literature search or automatic theorem proving.
2. Overview of Grapple. Grapple is an interactive, menu-driven software tool which supports the creation, manipulation and examination of graphs under the direction of the user. Grapple is implemented in Pascal on an IBM-PC with a color graphics board and 512K of memory. Hard-copy output is provided by an HP 7475A color plotter. Figure 1 contains a block diagram of the functional capabilities of Grapple. The system consists of an editor, a librarian, a graphical properties subsystem, and a screen manager.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: This paper considers the problem of query optimization in logic-oriented object bases, where a logic-oriented object base is a database constructed on an object data model and augmented by mathematical logic.
Abstract: It is generally accepted that object-based systems provide "a simple and elegant paradigm for general-purpose programming that meshed well with data models". In this paper we consider the problem of query optimization in logic-oriented object bases. A logic-oriented object base is a database constructed on an object data model and augmented by mathematical logic. Problems with query processing in such systems are identified and an expert system approach is proposed. Advantages of such an approach are illustrated by examples of combinatorial problems for general graph objects.

Proceedings ArticleDOI
01 Feb 1986
TL;DR: A hierarchical model is developed which utilizes both generalized stochastic Petri nets (GSPNs) and product form queueing networks (PFQNs) to model parallel and distributed processing systems.
Abstract: Analytical models of parallel and distributed processing systems are needed to study the key factors affecting the performance of proposed systems. Because of the nature of such systems, a model has to handle such phenomena as concurrent synchronous-asynchronous operations, contention for multiple resources, and queueing. The models previously developed, all of which are based on product form queueing networks (PFQNs), are restricted to one type of concurrent operations or another. We considered generalized stochastic Petri nets (GSPNs) as an alternative modeling tool for such systems. Yet a GSPN model rapidly becomes intractable due to state space explosion. Therefore, a hierarchical model is developed which utilizes both GSPNs and PFQNs.