
Showing papers presented at "Conference on Scientific Computing in 1993"


Proceedings ArticleDOI
01 Mar 1993
TL;DR: The operation of finding an abstract superclass is decomposed into a set of refactoring steps, and techniques that can automate or automatically support these steps are discussed.
Abstract: This paper focuses on object-oriented programming and one kind of structure-improving transformation (refactoring) that is unique to object-oriented programming: finding abstract superclasses. We decompose the operation of finding an abstract superclass into a set of refactoring steps, and provide examples. We discuss techniques that can automate or automatically support these steps. We also consider some of the conditions that must be satisfied to perform a refactoring safely; sometimes to satisfy these conditions other refactorings must first be applied.

147 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: A prototype of an intelligent business forecasting system, called IBFS, has been developed; its main feature is an integrated rule-based expert system that assists the user in selecting appropriate models based on the user's requirements and data patterns.
Abstract: The main objective of this paper is to explore the possibility of integrating expert systems technology with a forecasting decision support system. A prototype of an intelligent business forecasting system, called IBFS, has been developed by the authors. Like a general DSS, IBFS has a relational database, a model base that contains most of the forecasting models provided in the many commercial forecasting packages, and a user interface. The main feature of IBFS is that a rule-based expert system is integrated to assist the user in selecting appropriate models based on the user's requirements and data patterns. The system also assists in performing, graphing, evaluating, combining, and monitoring the models in a systematic fashion.

46 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: The artificial neural network's forecasts are generally superior to those of the time-series model, but the network occasionally produces some very wild forecast values.
Abstract: We have developed a stock-market forecasting system based on artificial neural networks. The system has been trained with the Standard & Poor's 500 composite indexes of the past twenty years. Meanwhile, the system produces the forecasts and adjusts itself by comparing its forecasts with the actual indexes. Since most stock-market forecasting systems are based on some kind of statistical model, we have also implemented a statistical system based on the Box-Jenkins ARIMA(p,d,q) model of time series. We compare the performance of these systems. The comparison shows that the artificial neural network's forecasts are generally superior to the time-series model's, but the network occasionally produces some very wild forecast values. We then developed a transfer function model to forecast based on the indexes and the forecasts produced by the artificial neural network.

33 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: The purpose is to introduce computational science as a legitimate interest of computer scientists; the paper presents a possible foundation for computational science and indicates areas of mutual interest between computational science and computer science.
Abstract: We describe computational science as an interdisciplinary approach to doing science on computers. Our purpose is to introduce computational science as a legitimate interest of computer scientists. We present a possible foundation for computational science; this foundation shows that there is a need to consider computational aspects of science at the scientific level. We next present some obstacles to computer scientists' participation in computational science. We see a cultural bias in computer science that inhibits participation. Finally, we indicate areas of mutual interest between computational science and computer science.

24 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: Analysis techniques are used to quantitatively assess the software maintenance process of a large military contractor, and the results obtained are presented.
Abstract: This paper describes analysis techniques used to quantitatively assess the software maintenance process of a large military contractor, and the results obtained. The analysis techniques make use of basic data collected throughout the maintenance process. The data collected are extensive and allow a set of functional enhancements to be traced to process activities and product impact. Simple nonparametric statistical techniques are then applied to test relationships between data items. The results provide valuable information for predicting process and product characteristics, and assessing the maintenance process.

24 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: A generalized framework based on a Genetic Algorithm is developed which is applicable to a wide range of network design problems, and several topology design problems are solved to demonstrate the generality of this solution approach.
Abstract: One of the important features of computer networks is the potential for high reliability. The reliability of a network depends on many parameters such as connectivity, the degree of each node, and the average distance between any pair of nodes. The main focus of the problem considered in this paper is to design reliable computer network topologies. A generalized framework based on a Genetic Algorithm is developed which is applicable to a wide range of network design problems. Several topology design problems are solved to demonstrate the generality of this solution approach. The results obtained from the genetic algorithm based solution approach are compared with the optimal solutions to illustrate the effectiveness of the proposed approach.
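The abstract does not give encoding details, but a common GA formulation for topology design (a hypothetical sketch under assumed unit edge costs, not the paper's implementation) represents a topology as a bitstring over candidate edges and penalizes disconnected designs in the fitness function:

```python
import random
from itertools import combinations

def connected(n, edges):
    """Breadth-first check that the undirected graph on n nodes is connected."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def fitness(n, candidate_edges, genome):
    """Lower is better: total edge cost (unit costs, a simplifying
    assumption) plus a heavy penalty for disconnected topologies."""
    edges = [e for e, bit in zip(candidate_edges, genome) if bit]
    cost = len(edges)
    if not connected(n, edges):
        cost += 1000  # steers the search toward reliable (connected) designs
    return cost

def mutate(genome, rate=0.1):
    """Bit-flip mutation over the edge bitstring."""
    return [1 - b if random.random() < rate else b for b in genome]

n = 5
candidate_edges = list(combinations(range(n), 2))
ring = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
genome_ring = [1 if e in ring else 0 for e in candidate_edges]
print(fitness(n, candidate_edges, genome_ring))                 # 5 (connected ring)
print(fitness(n, candidate_edges, [0] * len(candidate_edges)))  # 1000 (no edges)
```

Reliability parameters such as minimum node degree could be folded into the same fitness function as additional penalty terms.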

22 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: This paper presents the HCV algorithm in detail and provides a performance comparison of HCV with other inductive algorithms such as ID3 and AQ11.
Abstract: HCV is a heuristic attribute-based induction algorithm based on the newly-developed extension matrix approach. By dividing the positive examples (PE) of a specific class in a given example set into intersecting groups and adopting a set of strategies to find a heuristic conjunctive formula in each group which covers all the group's positive examples and none of the negative examples (NE), it can find a covering formula in the form of variable-valued logic for PE against NE in low-order polynomial time. This paper presents the HCV algorithm in detail and provides a performance comparison of HCV with other inductive algorithms such as ID3 and AQ11.

21 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: This paper demonstrates the effectiveness of a trigram based index for morphologically based retrievals from a full text document retrieval system and proposes a method for increasing the average precision to 100%.
Abstract: A trigram is a three-element sequence of characters. In this paper we demonstrate the effectiveness of a trigram-based index for morphologically based retrievals from a full text document retrieval system. Retrieved documents are considered relevant if they contain exact matches for each of the query terms. Using this definition of relevance we consistently achieve a recall rate of 100%. In the experiments described here, we used sets of 100 AND-ed three-term queries, and the average precision per set varied from 47% to 87%. We propose a method for increasing the average precision to 100%. Using overlapping trigrams extracted from the Brown Corpus [KUCE67] and a character set of 45 elements, we found a horizontal asymptote near 11,000 for the number of entries in a trigram-based index. Finally we show that a trigram-based system provides a reasonable alternative to a word-based one and is superior to it in retrievals of word fragments.
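The mechanics described above can be sketched as a small inverted index keyed on trigrams; the final exact-match scan is one plausible reading of how precision could be raised to 100% (names and structure here are illustrative, not the paper's system):

```python
from collections import defaultdict

def trigrams(term):
    """Overlapping three-character sequences of a term."""
    return {term[i:i + 3] for i in range(len(term) - 2)}

class TrigramIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # trigram -> ids of docs containing it
        self.docs = {}

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        for word in text.split():
            for g in trigrams(word):
                self.postings[g].add(doc_id)

    def search(self, term):
        """AND the posting sets of the term's trigrams (recall stays at 100%
        since a matching document contains every trigram of the term), then
        scan the candidates for an exact match to remove false drops."""
        gs = trigrams(term)
        cands = set.intersection(*(self.postings[g] for g in gs)) if gs else set()
        return {d for d in cands if term in self.docs[d]}

idx = TrigramIndex()
idx.add(1, "full text document retrieval")
idx.add(2, "trigram index")
print(idx.search("retriev"))  # {1} -- a word-fragment query also succeeds
```

Note how the fragment query works with no stemming machinery, which is the advantage over a word-based index that the abstract highlights.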

18 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: A 4-level Documentation Process Maturity model based on the Software Engineering Institute (SEI) Software Development Mat maturity model is proposed, which is believed to be a useful tool in assessing an organization's documentation level and identifying improvement areas to move the organization to the next level.
Abstract: Low quality or missing documentation is a major cause of errors in software development and maintenance. We address this problem by proposing a 4-level Documentation Process Maturity model based on the Software Engineering Institute (SEI) Software Development Maturity model. We believe our model will be a useful tool in assessing an organization's documentation level and identifying improvement areas to move the organization to the next level.

18 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: A data flow visual programming language in which the first-class citizenship of programs has been taken a step further: programs can be manipulated as data with the same kind of flexibility that LISP offers in manipulating programs as data.
Abstract: Data flow based visual programming languages are an active area of research in visual programming languages. Some recent data flow visual programming languages have implemented higher order functions, allowing functions to be passed to/from functions. This paper describes a data flow visual programming language in which the first-class citizenship of programs has been taken a step further, and programs can be manipulated as data with the same kind of flexibility that LISP offers in manipulating programs as data.

16 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: A new type of crossover operator, called edge-type crossover, with a heuristically selected initial population, is used in the genetic search, and experiments indicate that the heuristic initialization speeds the genetic search process.
Abstract: We describe the application of Genetic Algorithms to the traveling salesman problem with time windows. A new type of crossover operator, called edge-type crossover, with a heuristically selected initial population, is used in the genetic search. When compared with alternative methods from the literature, experiments indicate that the heuristic initialization speeds the genetic search process.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: Project Envision is an ongoing effort to build a user-centered database from the computer science literature; this paper describes the project's first-year progress, its underlying motivation, and its overall design.
Abstract: What if there were an electronic computer science library? Consider the possibilities of having your favorite publications available within a finger's reach. Consider Project Envision, an ongoing effort to build a user-centered database from the computer science literature. This paper describes the project's first year progress, stressing the underlying motivation, user-centered development, and the overall design.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: This paper describes the concept of interactions in a virtual environment and two experimental tools for a shared space in that environment which support the formation and maintenance of personal relationships.
Abstract: In this paper, we describe a new concept for a virtual environment on networked computers to support distributed collaborative work. We focus explicitly on tools that enable informal (casual) communication, in contrast to most existing groupware applications. The establishment and maintenance of personal relationships is as indispensable in a virtual environment as in a physical environment, because personal relationships lead to the acquisition of new communication channels and raise the probability of future joint work. We describe the concept of interactions in a virtual environment and two experimental tools for the shared space in this virtual environment. These tools provide opportunities for spontaneous, informal encounters and interactions with other people who have logged into the same networked computers, in order to support the formation and maintenance of personal relationships.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: The object-oriented design model offers a wide variety of design constructs and analysis techniques so that designers can develop more accurate representations of their applications and critique and understand their applications by first identifying potential problems and inconsistencies, and then initiating corrections.
Abstract: As we approach the mid-1990s, we have continued to see an increase in the power and capacities of workstations, allowing us to design and develop advanced applications that transcend the limitations of even five years ago. If we are to develop these applications, we must find design techniques that can support their intricate and complex requirements. Additionally, if we can improve the design process, we can more precisely characterize these applications with respect to their requirements. To answer these ifs, this paper presents our ongoing research efforts on an object-oriented design model. The object-oriented design model offers a wide variety of design constructs and analysis techniques so that designers can: develop more accurate representations of their applications; and, critique and understand their applications by first identifying potential problems and inconsistencies, and then initiating corrections.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: A Software Landscape is used as a mechanism that allows the developer to navigate around the entities created during the software development process, much the way a flight simulator allows one to “fly” and optionally to dive down to entities of interest.
Abstract: Large scale software development is an intrinsically difficult task. Developers use a set of specialized tools to alleviate some of this difficulty. The problem is that most of these tools are not integrated and do little to help developers and managers maintain an overall view of the development by organizing the software entities, created by tools, in a consistent fashion.

Our solution, called the Software Landscape, provides developers with a conceptual framework of integrated tools while providing a metaphor for managing the complexities of large-scale software development.

The Software Landscape is a metaphor of a countryside viewed from above, in which each major entity, such as a software project, appears as a large plot of land, and each minor entity, such as a C source module, is contained within a plot. Plots can be libraries of reusable software as well as ongoing developments.

A Software Landscape can be used as a mechanism that allows the developer to navigate around the entities created during the software development process, much the way a flight simulator allows one to “fly” and optionally to dive down to entities of interest. During this flight the developer controls the level of visible detail. This model is constructive, allowing the developer to manipulate, as well as view, the entities of the Landscape.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: A definition of computer science is suggested that distinguishes it from all other sciences and from mathematics, and it is argued that the notion of the algorithm is and has been an inappropriate and ineffective paradigm for computer science.
Abstract: We first review the development of the notion of the algorithm as a fundamental paradigm of mathematics. We then suggest a definition of computer science that distinguishes it from all other sciences and from mathematics. Finally we argue that the conceptual concerns of computer science are quite different from the conceptual concerns of mathematics and that the notion of the algorithm is and has been an inappropriate and ineffective paradigm for computer science.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: Several abstractions introduced at Northeastern demonstrate that an abstract approach to thinking and designing is vital; the paper explains what principles each abstraction helps to elucidate.
Abstract: We describe uses of abstraction which demonstrate that an abstract approach to thinking and designing is vital. In this section, we will describe several abstractions we introduce at Northeastern and explain what principles each helps to elucidate.

Loops, Decisions, and the Swimming Fish Lab

The Swimming Fish laboratory exercise is designed to require students to program a loop with decisions in which the progress of the loop cannot be predicted prior to runtime. The situation of the exercise is a large underwater maze-like cave in which a large fish searches for food consisting of a school of small fish (see Figure 2).

Figure 2: Typical Initial State of the Swimming Fish Laboratory

The large fish is initially positioned at the left side of the cave and the school of small fish at the right. The cave is randomly generated but is designed so that the large fish can find the food using only moves up, down, or to the right. The large fish never needs to backtrack to the left.

The Swimming Fish laboratory is introduced to the students about seven weeks into the first course, before array data structures have been discussed. The students are able to solve the exercise because the critical tools are presented as abstractions. The solution is based on a shell program which the students must complete, on four of the basic tools modules, and on a file which contains the picture resources for the large fish and the school of fish. The four key abstractions which the students use to program the search of the large fish for the food are:

type directions = (up, down, right);
function freetomove (d: directions):

Proceedings ArticleDOI
01 Mar 1993
TL;DR: The four possible genres of input devices that can be attached to personal workstations are compared as different types of man-machine communication channels to investigate the potentials and limitations of pen-based computers.
Abstract: There are four possible genres of input devices that can be attached to personal workstations: keyboard, mouse, pen, and voice. For investigating the potentials and limitations of pen-based computers, we propose to compare these four categories as different types of man-machine communication channels.

Even though arrow keys allow a limited scope of 2D capability in keyboard usage, the primary use of a keyboard is typing. Typing generates a linear sequence of discrete characters. The maximum speed of typing is roughly 10 characters per second, hence a bandwidth of 100 (10×10) bps.

A mouse is used primarily for pointing (menu item/object selection) and then for dragging (moving and re-sizing objects). Mouse pointing generates discrete information about a point location on a 2D (planar) surface at a maximum speed of 2 clicks per second. Each pointing may generate 20 bits of information, giving a bandwidth of 40 (2×20) bps. Mouse dragging generates a continuous geometric pattern in 2D. Assuming a maximum rate of 40 selections per second among the eight (3 bits) possible directions of dragging, the bandwidth of mouse dragging peaks at 120 (40×3) bps.

The usage of a pen on the planar flat surface of an LCD unit can be divided into scribing and tapping. By scribing we mean the generation of continuous pen strokes forming a character, a gesture, or a picture. Scribing includes drawing and gesturing. Tapping corresponds to a mouse click. Tapping may be considered a special kind of gesture, as in the PenPoint operating system. The bandwidth of scribing can be calculated in the same manner as for mouse dragging. With a maximum rate of 100 selections of direction per second for the pen, scribing may produce strokes at a speed of 300 (100×3) bps.
The bandwidth of pen tapping is almost the same as that of mouse clicking, except that the selection of a point is easier with a pen (3 taps per second) than with a mouse (2 clicks per second).

Talking through a microphone generates a linear sequence of continuous speech with a high degree of redundancy. Using the CELP speech compression algorithm, the bandwidth of normal speech can be reduced to 4800 bps. By vocal signaling we mean the generation of a sequence of discrete messages, each of which consists of different pitches and loudness levels. The maximum rate of signaling could be 5 messages per second with 10 differentiable pitches and 10 levels of loudness, producing a bandwidth of 35 (5×7) bps.

Note that mouse dragging, pen scribing, and voice talking each produce continuous data objects. Only after quantization by sampling can these data objects be represented by discrete data structures.

By precision we mean the degree of ease in duplicating identical information using the same input technology. The keyboard is a high-precision device because there is no difficulty in generating the same character over and over again. Drawing with a mouse is more difficult than drawing with a pen, because a pen is easier to control than a mouse. The precision of voice is low because it is difficult to duplicate a sound of the same pitch and the same volume.

By latency we mean the set-up time necessary to start generating a stream of information. The latency of using a keyboard or mouse is larger than the latency of using pen or voice.

By translation we mean the process of converting the information generated by an input device into a sequence of discrete symbols, i.e., a transduction of a continuous data type to a discrete data type. Translation for a keyboard is not necessary. Translation of a mouse click on a menu item requires a finite table look-up, which is a rather simple operation.
Translation of pen scribing and mouse dragging involves a handwriting recognition algorithm, which is still a difficult problem at the present time. Voice recognition is a very difficult problem.

With the assumption that real-time translation is feasible for handwriting recognition and speech recognition, the efficiency of an input device for text entry can be measured by how many characters can be entered in a second (cps). A simulated keyboard on a CRT is used for entering text with a mouse. Longhand writing on a pen computer is used for the pen.

When personal workstations become down-sized, the physical dimension of an input/output device becomes a dominant factor in the mobility of workstations. Keyboard and mouse are portable but intrusive. A wireless pen is mobile and less intrusive. Voice can be ubiquitous but intrusive.

One conclusion we can draw from the above analysis is that the pen is mightier than the mouse. A pen can replace a mouse any time, any place. However, keyboard, pen, and voice have different strong points and weak points; they complement each other. Therefore, we predict that future workstations will carry a multi-modal user interface with any combination of keyboard, pen, and voice.
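The bandwidth figures quoted above all follow the same arithmetic, peak event rate times bits per event; a few lines of Python reproduce them from the stated assumptions:

```python
import math

# Peak event rate (per second) × bits per event, as assumed in the text
channels = {
    "keyboard typing": (10, 10),   # 10 chars/s, ~10 bits per character
    "mouse pointing":  (2, 20),    # 2 clicks/s, ~20 bits to locate a point
    "mouse dragging":  (40, 3),    # 40 direction choices/s, 8 directions = 3 bits
    "pen scribing":    (100, 3),   # 100 direction choices/s, 3 bits each
    "vocal signaling": (5, 7),     # 5 messages/s, 10 pitches × 10 levels ≈ 7 bits
}
bps = {name: rate * bits for name, (rate, bits) in channels.items()}
for name, b in bps.items():
    print(f"{name}: {b} bps")

# 10 pitches × 10 loudness levels = 100 distinguishable messages -> 7 bits
assert math.ceil(math.log2(10 * 10)) == 7
```

The 7-bit figure for vocal signaling comes from rounding log2(100) up to the next whole bit; the other entries multiply directly.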

Proceedings ArticleDOI
01 Mar 1993
TL;DR: This paper presents a polynomial time algorithm, running in O(n log n) time, for finding a minimal path cover for a set of n arcs in a circular-arc model.
Abstract: Whether there exists a polynomial algorithm for the minimal path cover problem in circular-arc graphs remains open. In this paper, we present a polynomial time algorithm for finding a minimal path cover for a set of n arcs in a circular-arc model. Our algorithm takes O(nlogn) time.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: A modified version of the PR-Tree is also developed to minimize space usage, and the structure can be easily extended to multi-dimensional domains.
Abstract: In this paper, we propose a data structure, the Point-Range Tree (PR-Tree), specifically designed for indexing intervals. With the PR-Tree, a point can be queried against a set of intervals to determine which of those intervals overlap the point. The PR-Tree allows dynamic insertions and deletions while keeping itself balanced. A balanced PR-Tree takes O(log n) time for search. Insertion, deletion, and storage space have worst case requirements of O(n log n + m), O(n log²n + m), and O(n log n), respectively, where n is the total number of intervals in the tree, and m is the number of nodes visited during insertion and deletion. A modified version of the PR-Tree is also developed to minimize space usage. An additional advantage of the PR-Tree is that it can be easily extended to multi-dimensional domains.
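The PR-Tree's internal structure is not detailed in the abstract, but the query it answers, a stabbing query over intervals, can be shown behaviorally; this linear scan is an illustrative sketch of the same query, not the O(log n) tree itself:

```python
def stabbing_query(intervals, point):
    """Return the intervals [lo, hi] that overlap a query point.
    A balanced PR-Tree answers this in O(log n) search time; this
    O(n) scan gives the same answers and serves only as a reference."""
    return [(lo, hi) for lo, hi in intervals if lo <= point <= hi]

iv = [(1, 4), (3, 9), (7, 12)]
print(stabbing_query(iv, 3))   # [(1, 4), (3, 9)]
print(stabbing_query(iv, 10))  # [(7, 12)]
print(stabbing_query(iv, 99))  # []
```

Any replacement index structure for intervals can be validated against such a reference scan on random inputs.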

Proceedings ArticleDOI
01 Mar 1993
TL;DR: This paper presents a discussion of why languages that support multiple paradigms have the potential to be good pedagogical tools for teaching programming skills and suggests that the potential role of multiparadigm languages as teaching tools is promising.
Abstract: This paper presents a discussion of why languages that support multiple paradigms (i.e. multiparadigm languages) have the potential to be good pedagogical tools for teaching programming skills. Several examples are given that demonstrate how different programming paradigms are expressed in a working multiparadigm language. The examples, though brief, provide a glimpse of how much expressiveness a simple multiparadigm design can embody and they suggest that the potential role of multiparadigm languages as teaching tools is promising.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: Results are presented of using genetic algorithms, which do not use exhaustive search, to generate Steiner systems, and a specialized mutation operator was effective in generating Steiner triple systems.
Abstract: Steiner systems, particularly triple systems, are usually generated by mathematicians using techniques from the theory of groups and quasi-groups. When pencil-and-paper enumeration becomes infeasible, mathematicians have used computers to carry out exhaustive searches. This paper presents some results of using genetic algorithms, which do not use exhaustive search, to generate Steiner systems. A specialized mutation operator was effective in generating Steiner triple systems. Future research will focus on improving the genetic algorithm to generate higher order Steiner systems whose existence is not currently known.
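The defining property that any fitness function for this search must score, that every pair of points occurs in exactly one triple, is easy to check; the sketch below (illustrative, not the paper's GA) verifies it for the known order-7 system:

```python
from itertools import combinations

def is_steiner_triple_system(n, triples):
    """Check the property a GA fitness function would score: every pair
    from {0..n-1} lies in exactly one triple."""
    count = {p: 0 for p in combinations(range(n), 2)}
    for t in triples:
        for p in combinations(sorted(t), 2):
            count[p] += 1
    return all(c == 1 for c in count.values())

# The Fano plane: the unique Steiner triple system of order 7
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]
print(is_steiner_triple_system(7, fano))  # True
```

A GA fitness function could score a candidate by the fraction of pairs covered exactly once, with the specialized mutation operator repairing over- or under-covered pairs.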

Proceedings ArticleDOI
01 Mar 1993
TL;DR: This work solves the programming problem of implementing a help command for children to telephone their instructor or parents using voice communication hardware (modem, microphone, speaker, and clock) in Hyperflow, a dataflow-based graphical language.
Abstract: We propose a visual language, Hyperflow, for system programming as well as for end user shell programming. Hyperflow is designed for a multimedia pen computer system for children. It is a dataflow-based graphical language. In order to demonstrate the capability of Hyperflow, we solve the programming problem of implementing a help command for children to telephone their instructor or parents using voice communication hardware (modem, microphone, speaker, and clock). The resulting program includes visual programs to implement device drivers for the modem and clock hardware.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: This paper applies parallel processing technology to database security technology and vice versa and describes security constraint processing in trusted database management systems and shows how parallel processing could enhance the performance of this function.
Abstract: This paper applies parallel processing technology to database security technology and vice versa. We first describe the issues involved in incorporating multilevel security into parallel database management systems. In particular, we describe how multilevel security could be incorporated into the GAMMA architecture. Then we describe the use of parallel architectures to perform trusted database management system functions. In particular, we describe security constraint processing in trusted database management systems and show how parallel processing could enhance the performance of this function.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: A data modeling framework that extends the object-oriented paradigm with modeling constructs necessary to represent semantics present in performance models, and two examples of semantic constructs required by the computation structure model are presented.
Abstract: Software Performance Engineering (SPE) is a modeling methodology that incorporates both functional and performance requirements into the development of high-performance, parallel, distributed, or real-time software. To aid SPE in achieving a framework suitable for modeling performance data, we present a data modeling framework that extends the object-oriented paradigm with modeling constructs necessary to represent the semantics present in performance models. Two examples of semantic constructs required by the computation structure model, presented in this paper, are the temporal and alternate relationships. Temporal relationships are those that relate events by time, that is, sequence or concurrency. Alternate relationships are those that relate events by the result of a condition. The combination of these modeling constructs makes the object-oriented paradigm a more robust, complete, and comprehensive data model for supporting advanced applications like performance modeling.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: This method is applied here to the double-step algorithm presented in [15] and later used in [14], resulting in up to a thirty-three percent reduction in the number of iterations and a sixteen percent increase in speed.
Abstract: A method of increasing the efficiency of line drawing algorithms by setting additional pixels during loop iterations is presented in this paper. This method adds no additional costs to the loop. It is applied here to the double-step algorithm presented in [15] and later used in [14], resulting in up to a thirty-three percent reduction in the number of iterations and a sixteen percent increase in speed. In addition, the code complexity and initialization costs of the resulting algorithm remain the same.
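The paper's exact double-step algorithm ([15]) is not reproduced in the abstract, but the underlying idea, fewer loop iterations by emitting more than one pixel per pass, can be sketched by unrolling the classic midpoint loop (an illustration, not the published algorithm):

```python
def bresenham(x0, y0, x1, y1):
    """Classic midpoint/Bresenham line, one pixel per iteration
    (slope in [0, 1] and x0 < x1 assumed)."""
    dx, dy = x1 - x0, y1 - y0
    d = 2 * dy - dx
    y, pts = y0, []
    for x in range(x0, x1 + 1):
        pts.append((x, y))
        if d > 0:
            y += 1
            d -= 2 * dx
        d += 2 * dy
    return pts

def two_pixels_per_iteration(x0, y0, x1, y1):
    """Same decision variable, but each loop pass emits two pixels,
    halving the iteration count exactly as loop unrolling does; the
    published double-step algorithm goes further, choosing among pixel
    *pairs* with a single test."""
    dx, dy = x1 - x0, y1 - y0
    d = 2 * dy - dx
    x, y, pts = x0, y0, []
    while x + 1 <= x1:          # two pixels still fit
        for _ in range(2):
            pts.append((x, y))
            if d > 0:
                y += 1
                d -= 2 * dx
            d += 2 * dy
            x += 1
    if x <= x1:                 # odd pixel count: finish the last one
        pts.append((x, y))
    return pts

assert two_pixels_per_iteration(0, 0, 10, 4) == bresenham(0, 0, 10, 4)
```

Because the unrolled loop performs the identical decision updates in the same order, it plots exactly the same pixels while roughly halving loop overhead.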

Proceedings ArticleDOI
01 Mar 1993
TL;DR: This study found that person to group attraction, person to person attraction, person to leader attraction, years of college education, training, and programmer's experience at the organization do not correlate with productivity measures in a statistically significant way.
Abstract: This paper reports on an empirical study that explores the impact of individual and group factors on programmer productivity. Programmer productivity is modeled as a function of Individual Characteristics, Group Cohesiveness and Leader Behavior. Individual Characteristics are measured in terms of years of college education, training, and months of experience in a language at the site. Group Cohesiveness is measured in terms of person to group attraction, person to person attraction, and person to leader attraction. Leader Behavior is measured in terms of production emphasis, which is the application of pressure for productive output. Programmer productivity is measured in terms of lines of code (LOC), executable lines of code (ELOC), and Halstead's effort. This study found that person to group attraction, person to person attraction, person to leader attraction, years of college education, training, and programmer's experience at the organization do not correlate with productivity measures in a statistically significant way. The implications of these findings are explored.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: A model of the document-hypermedia world which shows the bi-directional exchanges between the two worlds which are facilitated by international standards and a widely accepted hypertext reference model is presented.
Abstract: This paper presents a model of the document-hypermedia world which shows the bi-directional exchanges between the two worlds which are facilitated by international standards and a widely accepted hypertext reference model. In particular we show the role of the Dexter Model and the SGML/HyTime standards in supporting document-hypermedia transformations and hypermedia interchange. A first prototype — the SGML-MUCH (SM) system — has been developed which follows the principles of the model. The SM system can accept different text and hypertext markup documents by going through a standard SGML representation, which can represent both text and hypermedia structure, and the SM system can then import these documents into a collaborative hypermedia system — the MUCH system. On the other hand, a part of the MUCH database can be exported into a markup document which retains all the hypermedia information. This exported document then can be processed by a conventional document editor or used to interchange information with other hypermedia systems.

Proceedings ArticleDOI
01 Mar 1993
TL;DR: The question is, “What impact can machine learning technologies have on knowledge acquisition in the large?” The true test will be on prospective industrial applications in areas such as biology, education, geology, medicine, and scientific discovery.
Abstract: Expert systems are a well-known and well-received technology. It was thought that the performance of a domain expert could not be duplicated by a machine. Expert systems technologies have shown this to be a false belief, and indeed have demonstrated how experts themselves can come to depend on expert systems. Expert systems enjoy widespread use in industrial domains and further uses are planned. The growth in acceptance has been explosive since about 1986. Continued rampant growth appears to depend on cracking the so-called knowledge acquisition bottleneck.

The knowledge acquisition bottleneck limits the scalability of expert systems. While it is relatively straightforward to populate a small-scale knowledge base, it becomes more difficult to maintain consistency and validity as the knowledge base grows. Thus, it is important to automate the knowledge acquisition process. A by-product of this process is that any failure of the expert system will be “soft.”

The question is, “What impact can machine learning technologies have on knowledge acquisition in the large?” The true test will be on prospective industrial applications in areas such as biology, education, geology, medicine, and scientific discovery. Machine learning technologies include expert systems, genetic algorithms, neural networks, random seeded crystal learning, or any effective combinations.

Relevant subtopics include:
- Second generation expert systems: progress and prognosis
- Repertory Grids
- The importance of symbolic and qualitative reasoning
- The acquisition of fuzzy rules
- The best learning paradigm or combination of paradigms
- Impact of machine learning on explanation systems
- The role of toy domains such as chess
- Automatic programming revisited
- Applications to computer vision, decision support systems, diagnosis, helpdesks, optimization, planning, scheduling, et al.
- Implementation issues using SIMD and MIMD platforms
- Sources for joint sponsorship
- Forming industrial partnerships
- Forming alliances abroad

Proceedings ArticleDOI
01 Mar 1993
TL;DR: A general approach is presented for collecting data flow information for shared memory parallel languages; building on traditional serial data flow information, new techniques and equations are derived for computing reaching definition, available expression, and live variable sets for parallel programs.
Abstract: In this paper, we present a general approach for collecting data flow information for shared memory parallel languages. This work can be used for any language that supports concurrent execution of threads, and consumer-producer synchronization or barrier synchronization between the threads. We assume that the traditional serial data flow information for each thread is available. We build on top of that to find new techniques and equations for collecting reaching definition, available expression, and live variables sets for parallel programs.