
Showing papers on "Rough set published in 1989"




Journal ArticleDOI
01 Oct 1989
TL;DR: This paper investigates the case of incomplete information systems, and presents a generalization of the rough sets approach which deals with missing and imprecise descriptors.
Abstract: The paper is concerned with the problems of rough sets theory and the rough classification of objects. This is a new approach to problems in decision-making, data analysis, knowledge representation, expert systems, etc. Several applications (particularly in medical diagnosis and engineering control) confirm the usefulness of the rough sets idea. Rough classification concerns objects described by multiple attributes in a so-called information system. Traditionally, the information system is assumed to be complete, i.e. no descriptors are missing and all are assumed to be precise. In this paper we investigate the case of incomplete information systems and present a generalization of the rough sets approach that deals with missing and imprecise descriptors.

134 citations
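
The generalization described above can be made concrete with a minimal Python sketch (my own, not the paper's algorithm): a missing descriptor (None) is treated as compatible with any value, which replaces the indiscernibility relation with a tolerance relation, and lower and upper approximations are computed over the resulting classes. The data and the `compatible` predicate are illustrative assumptions.

```python
def compatible(x, y):
    """Two objects tolerate each other if every known attribute value agrees."""
    return all(a is None or b is None or a == b for a, b in zip(x, y))

def approximations(objects, target):
    """Lower/upper approximation of `target` (a set of object ids)."""
    lower, upper = set(), set()
    for i, x in enumerate(objects):
        klass = {j for j, y in enumerate(objects) if compatible(x, y)}
        if klass <= target:        # class entirely inside the concept
            lower.add(i)
        if klass & target:         # class touches the concept
            upper.add(i)
    return lower, upper

# Three attributes per object; None marks a missing descriptor.
objects = [("a", 1, None), ("a", 1, "x"), ("b", 2, "y"), ("b", None, "y")]
target = {0, 1}                    # objects known to belong to the concept
print(approximations(objects, target))   # lower <= target <= upper
```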


Journal ArticleDOI
TL;DR: Any rough set can be expressed by means of a membership function U → {0, 0.5, 1}.

89 citations
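
The TL;DR can be illustrated with a small sketch: under an equivalence relation, the rough membership of an object takes only the values 0, 0.5, and 1, according to whether its class lies outside, on the boundary of, or inside the target set. The encoding below is illustrative, not the paper's formalism.

```python
def rough_membership(x, X, universe, eq):
    """Three-valued membership of x in X; eq labels equivalence classes."""
    klass = {y for y in universe if eq(y) == eq(x)}
    if klass <= X:
        return 1.0          # x certainly belongs (lower approximation)
    if klass & X:
        return 0.5          # x possibly belongs (boundary region)
    return 0.0              # x certainly does not belong

U = set(range(6))
X = {0, 1, 2}
eq = lambda y: y // 2       # equivalence classes {0,1}, {2,3}, {4,5}
print([rough_membership(x, X, U, eq) for x in sorted(U)])
# -> [1.0, 1.0, 0.5, 0.5, 0.0, 0.0]
```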


Journal ArticleDOI
TL;DR: It is proposed, in the case when no proper mathematical model is obtainable, to use human experts' inference models in computer control algorithms using PROLOG as the language of the model implementation.
Abstract: It is proposed that, when no adequate mathematical model is obtainable, human experts' inference models be used in computer control algorithms. The notion of an inference model is introduced, and it is demonstrated that the formal apparatus of rough sets theory can be used to identify, analyse, and evaluate such a model. A method for the computer implementation of such inference models is presented, based on the analysis of dependencies among decision, measurable, and observable attributes. PROLOG is proposed as the implementation language. The formal considerations, the proposed approach, and the notions introduced are illustrated with a real-life example: a computer implementation of the inference model of a rotary clinker kiln stoker. The model was used to control the process, and an analysis of the control results is presented.

59 citations
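
The dependency analysis the abstract mentions can be sketched briefly: a decision attribute depends on a set of condition attributes to the degree that the condition-classes determine the decision (Pawlak's degree of dependency). The toy kiln-like records and attribute names below are illustrative assumptions, not data from the paper.

```python
from collections import defaultdict

def dependency_degree(rows, conditions, decision):
    """Fraction of objects whose condition-class determines the decision."""
    classes = defaultdict(list)
    for row in rows:
        classes[tuple(row[a] for a in conditions)].append(row[decision])
    positive = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return positive / len(rows)

rows = [
    {"temp": "high", "color": "bright", "action": "reduce_fuel"},
    {"temp": "high", "color": "bright", "action": "reduce_fuel"},
    {"temp": "low",  "color": "dark",   "action": "add_fuel"},
    {"temp": "low",  "color": "bright", "action": "add_fuel"},
    {"temp": "low",  "color": "bright", "action": "hold"},      # conflict
]
print(dependency_degree(rows, ("temp", "color"), "action"))  # 0.6
```

A degree of 1 would mean the observable attributes fully determine the operator's action; values below 1 flag the inconsistencies an inference model must account for.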


Journal ArticleDOI
TL;DR: A new concept of Floating Approximation is introduced, based on Pawlak's theory of Rough Sets and the existence of ‘hidden attributes’ in knowledge representation systems, and a simplified algorithm developed and implemented by the author is described.

54 citations


Journal ArticleDOI
TL;DR: The theory of rough sets, which allows us to classify objects into sets of equivalent members based on their attributes, is applied to the design of information retrieval systems accessing collections of documents.
Abstract: The theory of rough sets was introduced in 1982. It allows us to classify objects into sets of equivalent members based on their attributes. We may then examine any combination of the same objects (or even their attributes) using the resulting classification. The theory has direct applications in the design and evaluation of classification schemes and in the selection of discriminating attributes. Introductory papers discuss its application in the domain of medical diagnostic systems. Here we apply it to the design of information retrieval systems accessing collections of documents. Advantages offered by the theory are the implicit inclusion of Boolean logic, term weighting, and the ability to rank retrieved documents. In the first section, we describe the theory; this account is derived from work by others in the field and includes only its most relevant aspects. In the second section, we apply it to information retrieval: specifically, we design the approximation space and search strategies, and illustrate the application of relevance feedback to improve document indexing. In the third section, we compare the rough set formalism to the Boolean, Vector, and Fuzzy models of information retrieval. Finally, we present a small-scale evaluation of rough sets that indicates their potential in information retrieval.

19 citations
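
The retrieval idea can be sketched as follows: documents indexed by identical term sets are indiscernible; classes satisfying the whole query form a "certainly relevant" set, and classes sharing any query term form a looser "possibly relevant" set, which yields a two-level ranking. The toy index and the overlap criterion below are illustrative assumptions, not the paper's design.

```python
from collections import defaultdict

def retrieve(docs, query):
    """docs: name -> frozenset of index terms; query: set of required terms."""
    classes = defaultdict(set)
    for name, terms in docs.items():
        classes[terms].add(name)      # equivalence class: identical index
    certain, possible = set(), set()
    for terms, names in classes.items():
        if query <= terms:            # class satisfies the whole query
            certain |= names
        if query & terms:             # class shares at least one term
            possible |= names
    return certain, possible

docs = {
    "d1": frozenset({"rough", "sets"}),
    "d2": frozenset({"rough", "sets"}),
    "d3": frozenset({"fuzzy", "sets"}),
    "d4": frozenset({"databases"}),
}
print(retrieve(docs, {"rough", "sets"}))
# certain: {'d1', 'd2'}; possible additionally includes 'd3'
```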


Journal ArticleDOI
TL;DR: This work deals with a model which is the basis of rough set investigations and points out problems which can be solved using automatic syntactic methods within this model.
Abstract: There are a number of algebraic models of information systems. They have been proposed by Codd (1972), Salton (1968), Scott (1970) and others. We deal here with a model which is the basis of rough set investigations (Orlowska, 1984; Pawlak, 1982; Pawlak, 1984). This model was proved in (Marek, 1985) to be equivalent to Codd's model of a relational database with one schema. We focus here on purely syntactical problems within this model. In particular, we point out problems which can be solved using automatic syntactic methods. We do so by first constructing, for a given system S, its description language ℒS. Then we define a set of Gentzen-like (Gentzen, 1934) transformation rules for its terms and describe an easily programmable procedure which generates the answers to queries submitted to the system. We show how to extend this procedure to one generating the equivalent normal form of a given term. This leads to a method of constructing not only the definable sets within a given system, but also all its elementary components.

11 citations
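
As a rough illustration of what such a description language lets one do, the sketch below (an assumed term syntax, not the paper's ℒS or its Gentzen-style rules) evaluates descriptor terms, closed under union, intersection, and complement, against a small information system; the sets such terms denote are exactly the definable sets.

```python
def eval_term(term, system):
    """system: object id -> {attribute: value}; term: nested tuples."""
    universe = set(system)
    op = term[0]
    if op == "desc":                      # ("desc", attribute, value)
        _, a, v = term
        return {x for x in universe if system[x].get(a) == v}
    if op == "not":
        return universe - eval_term(term[1], system)
    sets = [eval_term(t, system) for t in term[1:]]
    return set.union(*sets) if op == "or" else set.intersection(*sets)

S = {1: {"a": 0, "b": 1}, 2: {"a": 0, "b": 0}, 3: {"a": 1, "b": 1}}
query = ("and", ("desc", "a", 0), ("not", ("desc", "b", 0)))
print(eval_term(query, S))                # {1}
```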




Journal Article
In: Bulletin of the European Association for Theoretical Computer Science (EATCS), 38:199-210, 1989.


01 Jan 1989
TL;DR: A simple strategy, LERS/C, for learning minimal production rules from both consistent and not necessarily consistent examples is proposed, which can avoid the problem of attribute redundancies in ID3 and attribute-value pair redundancies in PRISM by incorporating a proper search strategy.
Abstract: This research is about learning rules from examples. One contribution of this work is a simple strategy, LERS/C, for learning minimal production rules from both consistent and not necessarily consistent examples. In the proposed strategy, the difference between learning rules from consistent and inconsistent examples lies only in the inputs to the strategy. By incorporating a proper search strategy, LERS/C can avoid the problem of attribute redundancies in ID3 and attribute-value pair redundancies in PRISM. In this study, the problem of learning rules from examples is formulated using the concepts of rough sets and information systems introduced by Pawlak. The notion of non-redundant attributes is formulated via the concept of covers of attributes, and the notion of non-redundant attribute-value pairs via the concept of minimal conjuncts. The induction of minimal production rules from examples is treated as the problem of finding covers of attribute-value pairs, i.e. minimal sets of minimal conjuncts. The basis of finding covers of attributes and minimal conjuncts is the checking of dependencies of attributes and of attribute-value pairs, respectively. We introduce the concept of rough-set boundaries to facilitate the checking of attribute dependency; the checking of attribute-value pair dependency is done on the basis of set differences. In general, the time to find all covers of attributes (attribute-value pairs) is exponential, while finding one cover of attributes (attribute-value pairs) takes only polynomial time. Heuristics, based on rough-set boundaries and an entropy function, for finding one cover of attributes and one cover of attribute-value pairs are discussed.
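
The polynomial-time claim for finding one cover can be illustrated with a greedy sketch: repeatedly add the attribute that most increases the dependency of the decision on the chosen set, then prune redundant attributes. The greedy criterion here is an assumption for illustration; LERS/C's actual heuristics, per the abstract, rely on rough-set boundaries and an entropy function.

```python
from collections import defaultdict

def gamma(rows, conds, decision):
    """Degree of dependency of `decision` on the attributes in `conds`."""
    classes = defaultdict(set)
    for r in rows:
        classes[tuple(r[a] for a in conds)].add(r[decision])
    return sum(1 for r in rows
               if len(classes[tuple(r[a] for a in conds)]) == 1) / len(rows)

def one_cover(rows, attributes, decision):
    chosen = []
    full = gamma(rows, attributes, decision)
    while gamma(rows, chosen, decision) < full:          # grow greedily
        chosen.append(max((a for a in attributes if a not in chosen),
                          key=lambda a: gamma(rows, chosen + [a], decision)))
    for a in list(chosen):                               # prune redundancy
        if gamma(rows, [b for b in chosen if b != a], decision) >= full:
            chosen.remove(a)
    return chosen

rows = [{"size": "s", "color": "r", "ok": 1},
        {"size": "s", "color": "g", "ok": 1},
        {"size": "l", "color": "r", "ok": 0},
        {"size": "l", "color": "g", "ok": 0}]
print(one_cover(rows, ["size", "color"], "ok"))          # ['size']
```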


01 Jan 1989
TL;DR: In this article, the effects of quality factors on prices paid to producers for long and medium grain rough rice in Louisiana were analyzed in a hedonic price framework; of the quality factors studied, head rice, red rice, and heat damage were the most important monetarily.
Abstract: This dissertation analyzes the effects of quality factors on prices paid to producers for long and medium grain rough rice in Louisiana. Rough rice prices, and other information on quality, were collected for the 1986/87 and 1987/88 marketing years from the Louisiana Farm Bureau Marketing Association in Crowley, Louisiana. The relationship between the price of rough rice and its quality attributes or characteristics was analyzed in a hedonic price framework. A conceptual model of the Louisiana rough rice market was constructed, and estimated premiums and discounts are reported for a set of quality factors believed to influence producer prices. Premiums and discounts were calculated for the long and medium grain markets for the 1986/87 and 1987/88 marketing years and for marketing seasons within marketing years. The hedonic model was tested for structural differences across marketing years, marketing seasons, and classes of rough rice. Structural differences were found in all cases. A linear specification was chosen for the base model; however, the Box-Cox transformation indicated a semi-logarithmic specification for the 1987/88 marketing year. Of the quality factors studied, head rice, red rice, and heat damage were the most important monetarily. The monetary value of the quality factors were calculated for
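
For readers unfamiliar with the hedonic framework, the sketch below shows its mechanics: regress price on quality characteristics and read each coefficient as an implicit premium or discount. The figures and variable names are invented for illustration and bear no relation to the study's estimates.

```python
import numpy as np

# columns: head-rice yield (%), red rice (%), heat damage (%)
X = np.array([[62.0, 0.5, 0.1],
              [58.0, 1.2, 0.3],
              [65.0, 0.2, 0.0],
              [55.0, 2.0, 0.5]])
price = np.array([11.8, 10.6, 12.4, 9.7])       # $/cwt, made-up values

A = np.column_stack([np.ones(len(X)), X])       # add intercept column
coef, *_ = np.linalg.lstsq(A, price, rcond=None)
labels = ["intercept", "head rice", "red rice", "heat damage"]
for name, c in zip(labels, coef):
    print(f"{name:12s} {c:+.3f}")               # implicit premium/discount
```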

Journal ArticleDOI
TL;DR: A revised method for obtaining two mutually consistent interval regression models is proposed, combining the Min problem with the Max problem, and the concept of interval regression is explained in the context of rough sets and fuzzy sets.
Abstract: The main purpose of interval regression is to obtain two interval regression models by which the given interval data are explained in two ways. Formerly, the two models were obtained by solving two LP problems, called the Min and Max problems respectively. Since the two LP problems were independent of each other, the consistency of the two models was not assured. In this paper, we propose a revised method for obtaining two interval regression models that are consistent with each other. This is achieved by combining the Min problem with the Max problem. As an application of the proposed method, two interval regression models representing the relation between feed speed and surface roughness in grinding experiments are calculated. Lastly, the concept of interval regression is explained in the context of rough sets and fuzzy sets.
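
A rough sketch of the combined formulation follows: a single LP fits an outer interval model containing every data interval (the Min problem's role) and an inner model contained in every data interval (the Max problem's role). Making the two models share one center line is my simplifying assumption to keep them consistent by construction; the paper's exact formulation may differ, and the data are invented.

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([1.0, 2.0, 3.0, 4.0])              # inputs (kept >= 0)
lo = np.array([1.0, 1.5, 2.5, 3.0])             # interval data: lower ends
hi = np.array([2.0, 3.0, 3.5, 4.5])             # interval data: upper ends

# variables z = [c0, c1, s0, s1, d0, d1]: center line c0 + c1*x,
# outer half-width s0 + s1*x, inner half-width d0 + d1*x
A, b = [], []
for xi, li, ui in zip(x, lo, hi):
    A.append([-1, -xi, -1, -xi, 0, 0]); b.append(-ui)   # outer covers top
    A.append([ 1,  xi, -1, -xi, 0, 0]); b.append(li)    # outer covers bottom
    A.append([ 1,  xi,  0,  0,  1, xi]); b.append(ui)   # inner inside top
    A.append([-1, -xi,  0,  0,  1, xi]); b.append(-li)  # inner inside bottom
A.append([0, 0, -1, 0, 1, 0]); b.append(0)              # d0 <= s0
A.append([0, 0, 0, -1, 0, 1]); b.append(0)              # d1 <= s1

n, sx = len(x), x.sum()
cost = [0, 0, n, sx, -n, -sx]       # minimize outer spread, maximize inner
bounds = [(None, None), (None, None),
          (0, None), (0, None), (0, None), (0, None)]
res = linprog(cost, A_ub=A, b_ub=b, bounds=bounds)
assert res.success
c0, c1, s0, s1, d0, d1 = res.x
print(f"outer: {c0:.2f}+{c1:.2f}x +/- ({s0:.2f}+{s1:.2f}x)")
print(f"inner: {c0:.2f}+{c1:.2f}x +/- ({d0:.2f}+{d1:.2f}x)")
```

Because both models are produced by one optimization, the inner model can never protrude outside the outer one, which is exactly the consistency the former two-LP approach could not guarantee.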