Author

H. N. Mahabala

Bio: H. N. Mahabala is an academic researcher from Indian Institute of Technology Madras. The author has contributed to research in topics: Applications of artificial intelligence & Fault tree analysis. The author has an h-index of 2 and has co-authored 4 publications receiving 13 citations.

Papers
Proceedings Article
24 Aug 1991
TL;DR: This article focuses on OPS5-based AI applications and presents a verification methodology based on compile-time analysis: the antecedent and action parts of productions are converted into a linear system of inequalities and equalities, which is then tested for a feasible solution.
Abstract: One of the critical problems in putting AI applications into use in the real world is the lack of sufficient formal theories and practical tools that aid the process of reliability assessment. Ad hoc testing, which is widely used as a means of verification, serves a limited purpose. A need for systematic verification by compile-time analysis exists. In this article, we focus our attention on OPS5-based AI applications and present a methodology for verification which is based on compile-time analysis. The methodology is based on the principle of converting the antecedent and action parts of productions into a linear system of inequalities and equalities and testing them for a feasible solution. The implemented system, called SVEPOA, supports interactive and incremental analysis.
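
The core idea can be illustrated compactly. The sketch below is a minimal reconstruction of the feasibility test, not SVEPOA itself, and the production and its attributes are hypothetical: numeric rule conditions are encoded as linear inequalities and handed to SciPy's linprog with a zero objective, which turns the solver into a pure feasibility check. An infeasible system flags a rule whose antecedent can never be satisfied.

```python
# Minimal sketch of the feasibility test described above, not SVEPOA itself;
# the rule and its attributes are hypothetical. Strict conditions (t > 100)
# are relaxed to non-strict ones (t >= 100) for the LP encoding.
import numpy as np
from scipy.optimize import linprog

def antecedent_feasible(A_ub, b_ub):
    """True if the system A_ub @ x <= b_ub has at least one solution.

    A zero objective turns linprog into a pure feasibility check:
    status 0 means a feasible point was found, status 2 means the
    constraints contradict each other.
    """
    n = A_ub.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n)
    return res.status == 0

# Hypothetical production over x = (temperature, pressure):
#   IF temperature > 100 AND temperature < 50 THEN ...
A = np.array([[-1.0, 0.0],    # -t <= -100   (t >= 100)
              [ 1.0, 0.0]])   #  t <=  50
b = np.array([-100.0, 50.0])
print(antecedent_feasible(A, b))   # False: this antecedent can never fire
```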

7 citations

Journal ArticleDOI
TL;DR: A tool which allows one to browse the knowledge corresponding to a knowledge-base; a tool which extracts rules from natural language text; a tool to convert decision tables into rules; a tool which checks for certain types of inconsistencies within a knowledge-base; and a tool which tutors/trains novices by presenting the knowledge and setting up quizzes.
Abstract: Many successful expert systems, like MYCIN and R1, took considerable man-years for their development, mainly because there was a paucity of tools to help the expert and the knowledge engineer. The only tools available then normally consisted of the knowledge-base editor and the inference engine. This paper describes a few more tools: a tool which allows one to browse the knowledge corresponding to a knowledge-base; a tool which extracts rules from natural language text; a tool to convert decision tables into rules; a tool which checks for certain types of inconsistencies within a knowledge-base; and finally a tool which tutors/trains novices by presenting the knowledge and setting up quizzes.
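
Of these, the decision-table converter is the easiest to picture. The following sketch is illustrative only, not the paper's actual tool, and the table, conditions, and actions are invented: each row of a decision table pairs condition values with an action and becomes one if-then rule, with don't-care entries dropped.

```python
# Illustrative sketch of decision-table-to-rules conversion, not the
# paper's tool; the table, conditions, and actions are invented.
# '-' marks a don't-care entry that is simply dropped from the rule.
def table_to_rules(condition_names, rows):
    rules = []
    for values, action in rows:
        tests = [f"{name} = {value}"
                 for name, value in zip(condition_names, values)
                 if value != "-"]                  # skip don't-cares
        rules.append(f"IF {' AND '.join(tests)} THEN {action}")
    return rules

conditions = ["power_on", "fan_running", "temp_high"]
rows = [
    (("yes", "no",  "-"),   "replace_fan"),
    (("yes", "yes", "yes"), "check_coolant"),
    (("no",  "-",   "-"),   "check_power_supply"),
]
for rule in table_to_rules(conditions, rows):
    print(rule)
# IF power_on = yes AND fan_running = no THEN replace_fan
# IF power_on = yes AND fan_running = yes AND temp_high = yes THEN check_coolant
# IF power_on = no THEN check_power_supply
```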
Journal ArticleDOI
TL;DR: Four major approaches for diagnosing machine faults are presented, and for each, how the knowledge is represented, what diagnosis technique is adopted, and the relative advantages and disadvantages are discussed.
Abstract: This paper presents four major approaches for diagnosing machine faults. Given the description of a system to be diagnosed and observations of the system as it works, the need for diagnosis arises when the observations differ from those expected. The objective of diagnosis is to identify the malfunctioning components in a systematic and efficient way. The four approaches discussed are based on fault trees, rules, models, and qualitative models. Early diagnosis systems used fault-tree and rule-based approaches. These are efficient in situations where an expert is able to provide the knowledge in the form of associations between symptoms and faults. Model-based and qualitative model-based approaches overcome many of the deficiencies of the earlier approaches. Model-based approaches can take care of situations (faults) not envisaged a priori. Also, one can cater to minor variations in design using the same set of components and their interconnections. For each approach, the paper discusses how the knowledge is represented, what diagnosis technique is adopted, and the relative advantages and disadvantages. Implementation of each method is also discussed.
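
To make the model-based idea concrete, here is a toy single-fault diagnoser over the classic "polybox" circuit of three multipliers feeding two adders (a standard textbook example, not taken from this paper). A component is a suspect if it lies upstream of every output that deviates from its model-predicted value and, under the usual exoneration simplification, upstream of no output that still matches.

```python
# Illustrative single-fault, model-based diagnosis over the polybox circuit.
# Exoneration (a component feeding a correct output is assumed healthy) is
# a simplification: a faulty part could coincidentally produce good values.

# Structure: component -> the signals it reads (components or primary inputs)
STRUCTURE = {
    "m1": ["a", "c"], "m2": ["b", "d"], "m3": ["c", "e"],
    "a1": ["m1", "m2"], "a2": ["m2", "m3"],
}

def ancestors(comp):
    """All components whose behaviour can influence comp's output."""
    seen, stack = set(), [comp]
    while stack:
        c = stack.pop()
        if c in STRUCTURE and c not in seen:
            seen.add(c)
            stack.extend(STRUCTURE[c])
    return seen

def predict(v):
    """Model of correct behaviour: predicted value of every component."""
    v = dict(v)
    v["m1"] = v["a"] * v["c"]; v["m2"] = v["b"] * v["d"]; v["m3"] = v["c"] * v["e"]
    v["a1"] = v["m1"] + v["m2"]; v["a2"] = v["m2"] + v["m3"]
    return v

def single_fault_suspects(inputs, observed):
    pred = predict(inputs)
    bad = {o for o in observed if observed[o] != pred[o]}
    good = set(observed) - bad
    if not bad:
        return set()
    cands = set.intersection(*(ancestors(o) for o in bad))
    for o in good:                      # exonerate components feeding an
        cands -= ancestors(o)           # output that matches its prediction
    return cands

# Inputs a=3,b=2,c=2,d=3,e=3 predict a1=12, a2=12; we observe a1=10, a2=12.
print(single_fault_suspects({"a": 3, "b": 2, "c": 2, "d": 3, "e": 3},
                            {"a1": 10, "a2": 12}))   # {'m1', 'a1'}
```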

Cited by
Book ChapterDOI
TL;DR: As this article shows, there are enough readily available alternative methods to enable the V&V of AI software.
Abstract: Artificial Intelligence (AI) is useful. AI can deliver more functionality for reduced cost. AI should be used more widely, but won't be unless developers can trust adaptive, nondeterministic, or complex AI systems. Verification and validation is one method used by software analysts to gain that trust. AI systems have features that make them hard to check using conventional V&V methods. Nevertheless, as we show in this article, there are enough alternative readily available methods that enable the V&V of AI software.

76 citations

Journal ArticleDOI
TL;DR: This article is a comprehensive survey of the developments and trends in the validation and verification of expert systems or knowledge-based systems.
Abstract: Validation and verification of expert systems or knowledge-based systems is a critical issue in the development and deployment of robust systems. This article is a comprehensive survey of the developments and trends in this field. More than 300 references are included in the References and Additional Readings at the end of the article.

33 citations

Journal ArticleDOI
TL;DR: The paper describes Commander, a prototype computer program designed to help verify the completeness of a computer-based clinical practice guideline built using if-then rules, and the application of Commander to a guideline for childhood immunization.
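
A completeness check of this kind can be sketched in a few lines. In the fragment below, the attributes, domains, and rules are hypothetical and the code is not Commander's: a rule set over finite-valued attributes is complete when every combination of attribute values is matched by at least one rule, so the function enumerates all combinations and reports the uncovered ones.

```python
# Illustrative completeness check for an if-then rule set; the guideline
# fragment is invented. A missing key in a rule's condition is a don't-care.
from itertools import product

DOMAINS = {"age_group": ["infant", "toddler"],
           "dose1_given": ["yes", "no"],
           "contraindicated": ["yes", "no"]}

RULES = [  # (condition dict, action)
    ({"contraindicated": "yes"}, "defer"),
    ({"age_group": "infant", "dose1_given": "no",
      "contraindicated": "no"}, "give_dose1"),
    ({"dose1_given": "yes", "contraindicated": "no"}, "schedule_dose2"),
]

def uncovered_cases():
    names = list(DOMAINS)
    gaps = []
    for combo in product(*(DOMAINS[n] for n in names)):
        case = dict(zip(names, combo))
        if not any(all(case[k] == v for k, v in cond.items())
                   for cond, _ in RULES):
            gaps.append(case)            # no rule handles this situation
    return gaps

# Reveals the gap: a toddler with no first dose and no contraindication.
print(uncovered_cases())
```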

21 citations

Journal ArticleDOI
01 Jun 2001
TL;DR: The PESKI project, which is concerned with assisting a human expert in building knowledge-based systems under uncertainty, is examined, with a focus on how verification and validation are currently achieved in PESKI.
Abstract: Knowledge-base V&V primarily addresses the question: "Does my knowledge-base contain the right answer and can I arrive at it?" One of the main goals of our work is to properly encapsulate the knowledge representation and allow the expert to work with manageable-sized chunks of the knowledge-base. This work develops a new methodology for the verification and validation of Bayesian knowledge-bases that assists in constructing and testing such knowledge-bases. Assistance takes the form of ensuring that the knowledge is syntactically correct, correcting "imperfect" knowledge, and also identifying when the current knowledge-base is insufficient as well as suggesting ways to resolve this insufficiency. The basis of our approach is the use of probabilistic network models of knowledge. This provides a framework for formally defining and working on the problems of uncertainty in the knowledge-base. In this paper, we examine the PESKI project, which is concerned with assisting a human expert to build knowledge-based systems under uncertainty. We focus on how verification and validation are currently achieved in PESKI.
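
One of the simplest checks in this vein, ensuring the knowledge is syntactically correct, can be illustrated directly. The network and code below are a hypothetical sketch, not PESKI's machinery: every conditional probability table row must contain probabilities in [0, 1] that sum to one, and violations point at inconsistent knowledge.

```python
# Hypothetical two-node network P(fever | flu); the second row is
# deliberately broken to show what the syntactic check reports.
CPT = {
    ("flu=yes",): {"fever=yes": 0.9, "fever=no": 0.1},
    ("flu=no",):  {"fever=yes": 0.2, "fever=no": 0.7},  # sums to 0.9
}

def cpt_errors(cpt, tol=1e-9):
    """Report CPT rows with out-of-range entries or totals != 1."""
    errors = []
    for parents, row in cpt.items():
        if any(not 0.0 <= p <= 1.0 for p in row.values()):
            errors.append(f"{parents}: probability outside [0, 1]")
        if abs(sum(row.values()) - 1.0) > tol:
            errors.append(f"{parents}: row sums to {sum(row.values())}, not 1")
    return errors

print(cpt_errors(CPT))   # flags the flu=no row as inconsistent knowledge
```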

15 citations

Journal ArticleDOI
01 Sep 1998
TL;DR: This work takes the view that a rule represents a concept of the domain and, in the setting of Formal Concept Analysis, operates on an object and attribute-value space; it presents a mechanism to measure the level of accuracy of rules using Rough Set Theory.
Abstract: Verification of rule-based systems has largely concentrated on checking the consistency, conciseness and completeness of the rulebase. However, the accuracy of rules vis-a-vis the knowledge that they represent is not addressed, with the result that a large amount of testing has to be done to validate the system. For any reasonably-sized rulebase it becomes difficult to know the adequacy and completeness of the test-cases. In case a particular test-case is omitted, the chances of an inaccurate rule remaining undetected increase. We discuss this issue and define a notion of accuracy of rules. We take the view that a rule represents a concept of the domain and, in the scenario of Formal Concept Analysis, works on an object and attribute-value space. We then present a mechanism to measure the level of accuracy using Rough Set Theory. In this framework, accuracy can be computed as a ratio of the objects definitely selected by the rule (the lower approximation) to the objects possibly selected by the rule (the upper approximation) with respect to the concept that it encodes. Our algorithm and its implementation for PROLOG clauses is discussed.
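
The accuracy measure itself is easy to compute. The sketch below uses invented data rather than the paper's PROLOG implementation: objects are grouped into indiscernibility classes by the attributes a rule tests, and the function returns the ratio of the lower approximation to the upper approximation of the concept, the ratio the abstract defines.

```python
# Rough-set accuracy of a rule's concept on hypothetical diagnosis data.
from collections import defaultdict

def rough_accuracy(objects, attrs, concept):
    """|lower approximation| / |upper approximation| of `concept` under
    the indiscernibility relation induced by `attrs`."""
    classes = defaultdict(set)
    for name, values in objects.items():
        classes[tuple(values[a] for a in attrs)].add(name)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= concept:       # class certainly inside the concept
            lower |= cls
        if cls & concept:        # class possibly inside the concept
            upper |= cls
    return len(lower) / len(upper)

# A rule that tests only 'temp' cannot separate o4 from o5, so the
# concept {o1..o4} is only roughly definable by that attribute.
objects = {
    "o1": {"temp": "high"}, "o2": {"temp": "high"}, "o3": {"temp": "high"},
    "o4": {"temp": "low"},  "o5": {"temp": "low"},
}
faulty = {"o1", "o2", "o3", "o4"}               # the concept the rule encodes
print(rough_accuracy(objects, ("temp",), faulty))   # 0.6  (= 3/5)
```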

10 citations