
Showing papers on "Interpretability" published in 1998


Journal ArticleDOI
TL;DR: A new learning algorithm is proposed that integrates global learning and local learning in a single algorithmic framework; it uses the idea of locally weighted regression and local approximation from nonparametric statistics, but retains the global fitting component of existing learning algorithms.
Abstract: The fuzzy inference system proposed by Takagi, Sugeno, and Kang, known as the TSK model in the fuzzy systems literature, provides a powerful tool for modeling complex nonlinear systems. Unlike conventional modeling, where a single model is used to describe the global behavior of a system, TSK modeling is essentially a multimodel approach in which simple submodels (typically linear models) are combined to describe the global behavior of the system. Most existing learning algorithms for identifying the TSK model are based on minimizing the square of the residual between the overall outputs of the real system and the identified model. Although these algorithms can generate a TSK model with good global performance (i.e., the model is capable of approximating the given system with arbitrary accuracy, provided that sufficient rules are used and sufficient training data are available), they cannot guarantee that the resulting model has good local performance. Often, the submodels in the TSK model may exhibit erratic local behavior, which is difficult to interpret. Since one of the important motivations for using the TSK model (and other fuzzy models) is to gain insights into the model, it is important to investigate the interpretability issue of the TSK model. We propose a new learning algorithm that integrates global learning and local learning in a single algorithmic framework. This algorithm uses the idea of locally weighted regression and local approximation from nonparametric statistics, but retains the global fitting component of existing learning algorithms. The algorithm is capable of adjusting its parameters based on the user's preference, generating models with a good tradeoff between global fitting and local interpretation. We illustrate the performance of the proposed algorithm using a motorcycle crash modeling example.
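To make the global/local tradeoff concrete, here is a minimal sketch (not the authors' code) of fitting the linear consequents of a one-dimensional TSK model by blending a global least-squares term with per-rule locally weighted regression terms; the Gaussian membership functions, the synthetic data, and the blending parameter `alpha` are illustrative assumptions.

```python
# Hedged sketch of the global/local tradeoff for a 1-D TSK model: consequent
# parameters are fit by least squares on a weighted combination of a global
# residual term and per-rule locally weighted residual terms.
import numpy as np

def gaussian_mf(x, center, width):
    """Gaussian membership function for one fuzzy rule antecedent."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def fit_tsk(x, y, centers, widths, alpha):
    """Fit linear consequents (a_i, b_i) of a 1-D TSK model.

    alpha = 1.0 reproduces pure global least squares; alpha = 0.0 gives pure
    locally weighted regression per rule; values in between trade the two off.
    """
    n, r = len(x), len(centers)
    mu = np.stack([gaussian_mf(x, c, w) for c, w in zip(centers, widths)], axis=1)
    phi = mu / mu.sum(axis=1, keepdims=True)          # normalized firing strengths

    # Global design matrix: yhat = sum_i phi_i(x) * (a_i * x + b_i)
    A_global = np.hstack([phi * x[:, None], phi])     # shape (n, 2r)

    # Local design: each rule fitted to the data it covers, weighted by phi_i
    A_local = np.zeros((n * r, 2 * r))
    y_local = np.zeros(n * r)
    for i in range(r):
        w = np.sqrt(phi[:, i])
        rows = slice(i * n, (i + 1) * n)
        A_local[rows, i] = w * x                      # a_i column
        A_local[rows, r + i] = w                      # b_i column
        y_local[rows] = w * y

    # Blend the two quadratic objectives and solve one least-squares problem
    A = np.vstack([np.sqrt(alpha) * A_global, np.sqrt(1 - alpha) * A_local])
    b = np.concatenate([np.sqrt(alpha) * y, np.sqrt(1 - alpha) * y_local])
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta[:r], theta[r:], phi                  # slopes a_i, intercepts b_i

# Toy usage on synthetic data (stand-in for the motorcycle crash data set)
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * np.random.randn(200)
a, b, phi = fit_tsk(x, y, centers=[1.5, 5.0, 8.5], widths=[1.5, 1.5, 1.5], alpha=0.5)
```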

305 citations


Journal ArticleDOI
TL;DR: Increased standardization of both the expectations for public release on measures of quality and the criteria by which such measures will be evaluated should contribute to improvements in the larger field of quality assessment.
Abstract: Background: The importance and utility of routine externally reported assessments of the quality of health care delivered in managed care organizations and hospitals have become widely accepted. Because externally reported measures of quality are intended to inform or lead to action, proposers of such measures have a responsibility to ensure that the results of the measures are meaningful, scientifically sound, and interpretable. Criteria for selecting meaningful assessment areas: In choosing clinical performance measures to distinguish among health plans, the condition should have a significant impact on morbidity and/or mortality; the link between the measured processes and outcomes of care should have been established empirically; quality in this area should be variable or substandard currently; and health plans and/or providers should be able to take clinically sensible actions to enhance performance on the measure. Criteria for assessing scientific soundness: Scientific soundness (the likelihood that a clinical performance measure will produce consistent and credible results when implemented) involves precision of specifications, adaptability, and adequacy of risk adjustment. Interpretability of results: Interpretability is affected by the content of the measure and the audience. Measures that are clinically detailed and specific may be presented more generally to a consumer audience and in full detail to a clinical audience, but measures that are general by nature cannot be made more clinically detailed. Interpretability entails statistical analysis, calibration of measures, modeling, and presentation of information. Conclusions: Increased standardization of both the expectations for public release on measures of quality and the criteria by which such measures will be evaluated should contribute to improvements in the larger field of quality assessment.

117 citations


Journal ArticleDOI
01 May 1998
TL;DR: An unsupervised discovery method with biases geared toward partitioning objects into clusters that improve interpretability is described, and it is demonstrated that interpretability, from a problem-solving viewpoint, is addressed by the intraclass and interclass measures.
Abstract: The data exploration task can be divided into three interrelated subtasks: 1) feature selection, 2) discovery, and 3) interpretation. This paper describes an unsupervised discovery method with biases geared toward partitioning objects into clusters that improve interpretability. The algorithm ITERATE employs: 1) a data ordering scheme and 2) an iterative redistribution operator to produce maximally cohesive and distinct clusters. Cohesion or intraclass similarity is measured in terms of the match between individual objects and their assigned cluster prototype. Distinctness or interclass dissimilarity is measured by an average of the variance of the distribution match between clusters. The authors demonstrate that interpretability, from a problem-solving viewpoint, is addressed by the intraclass and interclass measures. Empirical results demonstrate the properties of the discovery algorithm and its applications to problem solving.
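The following is a hedged sketch of the intraclass/interclass idea; it does not reproduce ITERATE's exact formulas. Cohesion is taken as the average match between an object and its cluster's probabilistic prototype, and distinctness as the average pairwise distance between prototypes; the nominal data encoding and the distance choice are assumptions.

```python
# Hedged sketch of intraclass (cohesion) and interclass (distinctness) measures
# for partitions of symbolic objects, in the spirit of the description above.
import numpy as np
from itertools import combinations

def prototypes(data, labels):
    """Per-cluster feature-value probability tables (nominal features)."""
    protos = {}
    for k in np.unique(labels):
        rows = data[labels == k]
        protos[k] = [
            {v: np.mean(rows[:, j] == v) for v in np.unique(data[:, j])}
            for j in range(data.shape[1])
        ]
    return protos

def cohesion(data, labels, protos):
    """Intraclass similarity: how well objects match their own prototype."""
    scores = [
        np.mean([protos[k][j][x[j]] for j in range(len(x))])
        for x, k in zip(data, labels)
    ]
    return float(np.mean(scores))

def distinctness(protos):
    """Interclass dissimilarity: average L1 distance between prototype tables."""
    def dist(p, q):
        return np.mean([
            sum(abs(pj.get(v, 0.0) - qj.get(v, 0.0)) for v in set(pj) | set(qj))
            for pj, qj in zip(p, q)
        ])
    pairs = list(combinations(protos.values(), 2))
    return float(np.mean([dist(p, q) for p, q in pairs])) if pairs else 0.0

# Toy usage with nominal data: two clear clusters of symbolic objects
data = np.array([["a", "x"], ["a", "x"], ["a", "y"], ["b", "z"], ["b", "z"]])
labels = np.array([0, 0, 0, 1, 1])
protos = prototypes(data, labels)
print(cohesion(data, labels, protos), distinctness(protos))
```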

93 citations


Journal Article
TL;DR: In this paper, a neuro-fuzzy approach to classification problems is described; a readable fuzzy classifier is obtained by a learning process, and interactive strategies for pruning rules and variables from a trained classifier can enhance its interpretability.
Abstract: Neuro-fuzzy classification systems offer a means of obtaining fuzzy classification rules by a learning algorithm. Although it is usually no problem to find a suitable fuzzy classifier by learning from data, it can be hard to obtain a classifier that can be interpreted conveniently. There is usually a trade-off between accuracy and readability. This paper discusses NEFCLASS - a neuro-fuzzy approach for classification problems - and its implementation NEFCLASS-X. It is shown how a readable fuzzy classifier can be obtained by a learning process and how interactive strategies for pruning rules and variables from a trained classifier can enhance its interpretability.
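As an illustration of the pruning idea (not the NEFCLASS-X procedure itself), the sketch below greedily removes fuzzy rules from a trained classifier as long as training accuracy does not drop; the triangular membership functions, the rule format, and the toy data are assumptions.

```python
# Hedged sketch: greedy rule pruning of a simple fuzzy classifier to improve
# readability without hurting training accuracy.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def classify(X, rules):
    """Each rule: (list of (a, b, c) per feature, class label).
    Firing strength is the minimum membership over features (t-norm);
    the winning rule's class is returned per sample."""
    strengths = np.array([
        np.min([tri(X[:, j], *mf) for j, mf in enumerate(antecedent)], axis=0)
        for antecedent, _ in rules
    ])                                             # shape (n_rules, n_samples)
    labels = np.array([c for _, c in rules])
    return labels[np.argmax(strengths, axis=0)]

def prune_rules(X, y, rules):
    """Greedily remove rules as long as training accuracy does not drop."""
    rules = list(rules)
    improved = True
    while improved and len(rules) > 1:
        improved = False
        baseline = np.mean(classify(X, rules) == y)
        for i in range(len(rules)):
            trial = rules[:i] + rules[i + 1:]
            if np.mean(classify(X, trial) == y) >= baseline:
                rules = trial
                improved = True
                break
    return rules

# Toy usage: one input feature, three rules of which one is redundant
X = np.array([[0.1], [0.2], [0.8], [0.9]])
y = np.array([0, 0, 1, 1])
rules = [([(0.0, 0.0, 0.5)], 0),      # "low"  -> class 0
         ([(0.5, 1.0, 1.0)], 1),      # "high" -> class 1
         ([(0.3, 0.5, 0.7)], 0)]      # redundant "medium" rule
print(len(prune_rules(X, y, rules)))  # typically prunes down to two rules
```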

45 citations


Journal ArticleDOI
TL;DR: Methods based on similarity analysis that, without performing additional knowledge or data acquisition, allow for the generation of fuzzy models of varying complexity are discussed.
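A hedged sketch of what such similarity-driven simplification can look like: a fuzzy Jaccard similarity between membership functions, evaluated on a discretized domain, decides whether two fuzzy sets are redundant and can be merged. The threshold, the Gaussian shapes, and the merge rule (parameter averaging) are assumptions, not necessarily the paper's method.

```python
# Hedged sketch: reduce the complexity of a fuzzy model by merging fuzzy sets
# that a similarity measure flags as nearly identical.
import numpy as np

def gaussian_mf(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def similarity(mf_a, mf_b, grid):
    """Fuzzy Jaccard similarity |A intersect B| / |A union B| on a sampled domain."""
    a, b = mf_a(grid), mf_b(grid)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def merge_similar(params, grid, threshold=0.6):
    """Merge Gaussian fuzzy sets whose pairwise similarity exceeds a threshold."""
    params = list(params)
    merged = True
    while merged:
        merged = False
        for i in range(len(params)):
            for j in range(i + 1, len(params)):
                s = similarity(lambda x, p=params[i]: gaussian_mf(x, *p),
                               lambda x, p=params[j]: gaussian_mf(x, *p), grid)
                if s > threshold:
                    # replace the pair by one set with averaged parameters
                    new = tuple(np.mean([params[i], params[j]], axis=0))
                    params = [p for k, p in enumerate(params) if k not in (i, j)]
                    params.append(new)
                    merged = True
                    break
            if merged:
                break
    return params

# Toy usage: three fuzzy sets, two of which are nearly identical
grid = np.linspace(0, 10, 500)
sets = [(2.0, 1.0), (2.3, 1.0), (7.0, 1.0)]
print(len(merge_similar(sets, grid)))   # typically reduces to two fuzzy sets
```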

26 citations


Proceedings ArticleDOI
11 Oct 1998
TL;DR: An interpolation technique is proposed that is based on interpolating the semantics and interrelation of the rules; this method guarantees the direct interpretability of the conclusion.
Abstract: Sometimes it is not possible to have a full dense rule base as there are gaps in the information. Furthermore, it is often necessary to deal with a sparse rule base to reduce the size and the inference/control time. In such sparse rule bases classic algorithms such as the CRI of Zadeh (1973) and the Mamdani method do not function for observations hitting gaps between rules. A linear fuzzy rule interpolation technique (KH-interpolation) has been introduced that is suitable for dealing with sparse bases. However, this method often results in conclusions which are not directly interpretable. In this paper an interpolation technique is proposed that is based on the interpolation of the semantics and interrelation of rules. This method guarantees the direct interpretability of the conclusion.
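For context, here is a minimal sketch of the KH linear interpolation rule that the paper improves on, applied to triangular fuzzy sets: the conclusion's alpha-cut endpoints are interpolated from the two flanking rules, and, as the comment notes, the resulting endpoints can cross, which is exactly the interpretability problem addressed above. The rule and observation values below are made up.

```python
# Hedged sketch of KH (Koczy-Hirota) linear rule interpolation for a sparse base
# of triangular fuzzy sets described by (left, peak, right).
import numpy as np

def alpha_cut(tri, alpha):
    """[lower, upper] endpoints of the alpha-cut of a triangular set (l, p, r)."""
    l, p, r = tri
    return l + alpha * (p - l), r - alpha * (r - p)

def kh_interpolate(a1, b1, a2, b2, a_obs, alphas=np.linspace(0, 1, 11)):
    """KH interpolation of the conclusion's alpha-cut endpoints."""
    cuts = []
    for alpha in alphas:
        (a1l, a1u), (a2l, a2u) = alpha_cut(a1, alpha), alpha_cut(a2, alpha)
        (b1l, b1u), (b2l, b2u) = alpha_cut(b1, alpha), alpha_cut(b2, alpha)
        ol, ou = alpha_cut(a_obs, alpha)
        dl1, dl2 = ol - a1l, a2l - ol          # lower-endpoint distances
        du1, du2 = ou - a1u, a2u - ou          # upper-endpoint distances
        lo = (b1l / dl1 + b2l / dl2) / (1 / dl1 + 1 / dl2)
        up = (b1u / du1 + b2u / du2) / (1 / du1 + 1 / du2)
        cuts.append((alpha, lo, up))           # note: lo > up is possible in general,
                                               # i.e. the conclusion may not be a valid set
    return cuts

# Toy usage: sparse base with two rules "if x is A1 then y is B1", "if A2 then B2"
A1, B1 = (0.0, 1.0, 2.0), (0.0, 1.0, 2.0)
A2, B2 = (6.0, 7.0, 8.0), (3.0, 5.0, 7.0)
observation = (3.0, 3.5, 4.0)                  # falls in the gap between A1 and A2
for alpha, lo, up in kh_interpolate(A1, B1, A2, B2, observation):
    print(f"alpha={alpha:.1f}  [{lo:.2f}, {up:.2f}]")
```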

13 citations


Proceedings Article
01 Jan 1998
Abstract: In this article we study interpolation properties for the minimal system of interpretability logic IL. We prove that arrow interpolation holds for IL and that turnstile interpolation and interpolation for the -modality easily follow from this result. Furthermore, these properties are extended to the system ILP. Failure of arrow interpolation for ILW is established by providing an explicit counterexample. The related issues of Beth definability and fixed points are also addressed. It will be shown that for a general class of logics the Beth property and the fixed point property are interderivable. This in particular yields alternative proofs for the fixed point theorem for IL (cf. de Jongh and Visser 1991) and the Beth theorem for all provability logics (cf. Maksimova 1989). Moreover, it entails that all extensions of IL have the Beth property.

9 citations


01 Jan 1998
TL;DR: The concept of interpretability is introduced as a quality of computer applications and Wittgenstein's concept of language games is used as a 'figure of thought' to relate practice, language, and the use of symbolic machines.
Abstract: In the context of CSCW - especially through ethnomethodological workplace studies - the stability of particular work practices, and therefore the ability to design software that fits with continually evolving work practices, is questioned. This challenge for software development has been called 'design for unanticipated use'. Using the concept of interpretability, I attempt to answer this challenge. A semiotic perspective on computer applications as formal symbol manipulation systems is introduced. A case study involving three alternative ways of using a computer application shows how users make sense of such symbolic machines. Wittgenstein's concept of language games is used as a 'figure of thought' to relate practice, language, and the use of symbolic machines. The development of an interpretation, fitting the implemented symbol manipulation and supporting the specific understanding of the task, remains crucial for competent use. Interpretability is introduced as a quality of computer applications. In order to support the user in developing her own interpretation, a concept for help systems is described.

8 citations


Book ChapterDOI
23 Sep 1998
TL;DR: This work introduces a derivation of the Naive-Bayes classifier based on the idea of Absolute Order of Magnitude, which the authors argue can be useful for the Data Mining step of the Knowledge Discovery process.
Abstract: We review some approaches to qualitative uncertainty and propose a new one based on the idea of Absolute Order of Magnitude. We show that our ideas can be useful for Knowledge Discovery by introducing a derivation of the Naive-Bayes classifier based on them: the Qualitative Bayes Classifier. This classification method keeps Naive-Bayes accuracy while gaining interpretability, so we think it can be useful for the Data Mining step of the Knowledge Discovery process.
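A hedged sketch of the general idea: a Naive-Bayes-style classifier in which conditional probabilities are replaced by coarse order-of-magnitude labels (here, the integer floor of log2 of each probability), so that each class score becomes an interpretable sum of small integers. The label scale, the smoothing, and the toy data are assumptions, not the authors' qualitative calculus.

```python
# Hedged sketch of a qualitative Naive-Bayes-style classifier using coarse
# order-of-magnitude labels instead of precise probabilities.
import math
from collections import defaultdict

def magnitude(p, floor=-10):
    """Qualitative order of magnitude of a probability: floor(log2 p), clipped."""
    return max(math.floor(math.log2(p)), floor) if p > 0 else floor

def train(rows, labels):
    """Estimate qualitative per-class priors and feature-value magnitudes."""
    classes = sorted(set(labels))
    prior = {c: magnitude(labels.count(c) / len(labels)) for c in classes}
    cond = defaultdict(dict)
    for c in classes:
        subset = [r for r, l in zip(rows, labels) if l == c]
        for j in range(len(rows[0])):
            values = set(r[j] for r in rows)
            for v in values:
                count = sum(1 for r in subset if r[j] == v)
                # Laplace smoothing keeps zero counts representable
                cond[c][(j, v)] = magnitude((count + 1) / (len(subset) + len(values)))
    return prior, cond

def classify(x, prior, cond):
    """Pick the class with the largest sum of qualitative magnitudes."""
    scores = {c: prior[c] + sum(cond[c].get((j, v), -10)   # unseen values get the floor
                                for j, v in enumerate(x))
              for c in prior}
    return max(scores, key=scores.get), scores

# Toy usage: two symbolic features, two classes
rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
labels = ["no", "no", "yes", "yes"]
prior, cond = train(rows, labels)
print(classify(("rain", "mild"), prior, cond))
```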

6 citations


Journal ArticleDOI
TL;DR: Model selection criteria explicitly incorporate both model misfit in the population and sampling error to evaluate the set of models, so that interpretability of model parameters and goodness of fit are enhanced simultaneously.
Abstract: Covariance structures analysis is often used in nursing research to appraise statistical models reflecting complex human health processes. The model selection approach in covariance structures analysis is designed to select the "best" model from a specified set of theoretically defensible, competing alternatives, all of which are viewed as approximations. Model selection criteria explicitly incorporate both model misfit in the population and sampling error to evaluate the set of models. The result is that interpretability of model parameters and goodness-of-fit are enhanced simultaneously. Relative merits of the model selection approach are identified in light of technical concerns, parsimony, and use of scientific theory in nursing.
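To illustrate the selection logic, the sketch below uses AIC and BIC as stand-in criteria (the article's own criteria are not reproduced): each candidate model contributes a maximized log-likelihood and a parameter count, both criteria penalize complexity, and the model with the smallest criterion value is retained. The model names and numbers are hypothetical.

```python
# Hedged illustration of model selection among competing structural models,
# using AIC/BIC as stand-in criteria that trade fit against parsimony.
import math

def aic(loglik, n_params):
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    return -2.0 * loglik + n_params * math.log(n_obs)

def select(models, n_obs, criterion="bic"):
    """models: dict name -> (loglik, n_params); returns the best-scoring name."""
    score = {
        name: (bic(ll, k, n_obs) if criterion == "bic" else aic(ll, k))
        for name, (ll, k) in models.items()
    }
    return min(score, key=score.get), score

# Hypothetical competing structural models fitted to the same data (n = 300)
models = {"one-factor": (-1450.2, 12),
          "two-factor": (-1421.7, 17),
          "bifactor":   (-1418.9, 24)}
print(select(models, n_obs=300))
```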

6 citations


Journal Article
TL;DR: This work investigates the modal logic of interpretability over Peano arithmetic and obtains a uniform arithmetical completeness theorem for the interpretability logic ILM and a theorem answering a question of Orey from 1961.
Abstract: We investigate the modal logic of interpretability over Peano arithmetic (PA). Our main result is an extension of the arithmetical completeness theorem for the interpretability logic ILMω. This extension concerns recursively enumerable sets of formulas of interpretability logic (rather than single formulas). As corollaries we obtain a uniform arithmetical completeness theorem for the interpretability logic ILM and a theorem answering a question of Orey from 1961. All these results also hold for Zermelo-Fraenkel set theory (ZF).

01 Jan 1998
TL;DR: In this article, a modification of FHM tree height, basal area measurement and estimation procedures is proposed to increase interpretability of forest health monitoring data, and a multiple regression model for assessment of potential productivity of trees based on primary and composite crown indicators is presented.
Abstract: Growth and productivity of forests are important indicators for understanding the general condition and health of forests. It is very important that indicators detected during monitoring procedures afford an opportunity for direct or indirect evaluation of forest productivity and its natural and anthropogenic changes. Analysis of the U.S. Forest Health Monitoring (FHM) crown indicators showed that primary crown indicators are not sufficient for biological interpretation of collected data and estimation of potential tree growth. The length of the crown is necessary for calculating meaningful crown indicators; consequently, measurements of tree height should be included in the FHM sampling design. Statistically reliable tree height measurements, as well as basal area estimation, are necessary for evaluation of tree volume and biomass. Results of a pilot study on relationships between tree growth and crown indicators, and a multiple regression model for assessment of potential productivity of trees based on primary and composite crown indicators, are presented in this paper. New composite crown indicators are developed to increase the interpretability of FHM data. A modification of FHM tree height and basal area measurement and estimation procedures is proposed.
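As a rough illustration of the kind of multiple regression mentioned above, the sketch below regresses a growth measure on two primary crown indicators and a composite indicator derived from them; the variable names, the composite definition, and the data are hypothetical, not the paper's.

```python
# Hedged sketch: multiple regression of tree growth on primary and composite
# crown indicators, with hypothetical indicator names and simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 50
crown_ratio = rng.uniform(0.2, 0.8, n)        # crown length / tree height
crown_density = rng.uniform(0.3, 0.9, n)      # foliage density rating
composite = crown_ratio * crown_density       # hypothetical composite indicator
growth = 2.0 + 3.0 * composite + 0.5 * crown_ratio + rng.normal(0, 0.2, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), crown_ratio, crown_density, composite])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
predicted = X @ coef
r_squared = 1 - np.sum((growth - predicted) ** 2) / np.sum((growth - growth.mean()) ** 2)
print(coef, r_squared)
```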

Journal ArticleDOI
TL;DR: One of the frustrating issues facing the regulatory toxicologist is characterized by the candidate agent found to be devoid of either structural or genotoxic liabilities but capable of eliciting an increased incidence in frequently observed spontaneous tumors in a rodent species, strain, or sex.
Abstract: A half century of experience notwithstanding, the chemical, pharmaceutical, and regulatory communities continue to experience difficulty in quickly, economically, and most importantly unequivocally differentiating carcinogens from noncarcinogens. Consider the seemingly endless debates over whether or not agents like dichlorvos, trichloroethylene, and pentachlorophenol pose carcinogenic risks to humans. More often than not, the sticking point is an inability to reach scientific consensus on the overall interpretation of a family of supposedly appropriate and meaningful experimental data. The database is usually composed of information about the chemical's structural relationships to known carcinogens; abilities to cause in vivo and/or in vitro genotoxicity; and of course its ability to increase tumor incidence in rodents that have swallowed, inhaled, or had it applied to their skin for the better part of their lives. The bioassay data are usually regarded as key, whereas the chemical and genotoxicity information serve adjunctive roles in the overall interpretative process. Some research and development investigators refer to the 2-yr bioassay as the "gold standard." The bioassay as a stand-alone may be no more than iron pyrite. At first blush, interpretation of the data would seem simple enough: chemicals that increase tumors in more than 1 species, usually regardless of the nature of the remaining information, are considered cancer risks for humans. Chemicals causing no tumors and possessing neither structural nor genotoxic liabilities may be regarded as negative, at least for a while. But in the everyday world of research and development scientists, few data sets are that easily interpreted. The findings in roughly half of all rodent bioassays fall somewhere between the 2 extremes of the result continuum, and all too frequently that leads to equivocation, indecision, programmatic delays, and even worse, project termination. And that should surprise no one. Regulators charged with ensuring public safety will always choose to err on the protective side, and when push comes to shove, as a member of the public, I have no problem with that. And the industrial scientist who must factor the bottom line into all decisions may find it too risky to continue pursuit of the development of a promising project that has been flawed by so much as a marginally positive response in a rodent bioassay. Even in cases where regulatory compromise is a possibility, the litigious nature of our society must be weighed. It would be naive to suggest that there might be a quick and easy cure-all that could be satisfactory to all parties. But progress in molecular biology has offered us a chance to address at least 1 of the issues both the regulators and the regulatees must deal with. One of the frustrating issues facing the regulatory toxicologist is characterized by the candidate agent found to be devoid of either structural or genotoxic liabilities but capable of eliciting an increased incidence in frequently observed spontaneous tumors in a rodent species, strain, or sex. To most of us, the predictive utility of a species- or sex-specific tumor is at best questionable, and we advocate weighing the sum total of experimental evidence in the development of regulatory positions. But even then, few of us are completely comfortable dismissing a large and statistically significant tumor excess that is clearly treatment related, even if it is a strain-specific neoplasm.
The qualitatively constant but quantitatively variable appearance of spontaneous cancers in inbred rodent strains is almost assuredly a product of our early attempts to stamp out biological variability, long a scourge to the interpretation of pharmacology and toxicology experiments. An enormous number of genes control the way the individual organism interacts with its chemical surroundings, and the extent of responsive variability in a population is directly proportional to the extent of genetic diversity. Reduced genetic diversity, clearly associated with repeated generations of brother-to-sister matings, has reduced biological variability and offers great value in many areas of investigation. Inbred animals are more uniform than are their outbred cousins in their pharmacokinetic and pharmacodynamic responses to xenobiotics. But a liability that inbred animals must carry is their familial susceptibilities to certain spontaneous cancers, cancers that seem to grow in response to challenge by nongenotoxic, proliferative agents such as hormones or enzyme-inducing agents. In a word, the downside to the use of inbred animals is that they, and even their hybrid cousins, are representative of only their close relatives. An increased incidence of a strain-specific tumor may be highly predictive, for that strain, but at the end of the day who really cares? We must use the data we generated in a predictive, extrapolative manner. Data that are highly predictive of the responsiveness of a particular species, strain, or even sex may bear little relevance for the ultimate target species, humans. We need to refocus on our ultimate objective, perfecting the way we reach logical and realistic assessments of the health risk attending human exposure to a chemical. Anything less begs the question. As we exit this millennium, let us leave behind all baggage of the simplistic

Journal ArticleDOI
TL;DR: It is equally important to consider the application of simpler "regression-type" models whose parameters seem to me to be easier to interpret.
Abstract: Goodman and Hout have introduced an important group of models and related graphical methods for the analysis of three-way contingency tables. Their method is innovative and the group of models proposed is widely applicable. Nevertheless, the results from the proposed models seem to me to be difficult to interpret. This occurs because the models they applied to empirical data retain the most general form in parameterizing the interactions between the row and the column variables. I believe that it is equally important to consider the application of simpler "regression-type" models whose parameters seem to me to be easier to interpret. Goodman and Hout consider the following model:

Book ChapterDOI
01 Jan 1998
TL;DR: Distinctions between single chain and parallel chain control methods have already been discussed, but as Brooks and Roberts (1998) point out, other characteristics must be taken into account for evaluating control methods.
Abstract: Distinctions between single chain and parallel chain control methods have already been discussed in Chapter 2. However, as Brooks and Roberts (1998) point out, other characteristics must be taken into account for evaluating control methods. An important criterion is the programming investment: diagnostics requiring problem-specific computer codes for their implementation (e.g., requiring knowledge of the transition kernel of the Markov chain) are far less usable for the end user than diagnostics solely based upon the outputs from the sampler, which can use available generic codes. Another criterion is interpretability, in the sense that a diagnostic should preferably require no interpretation or experience from the user.
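As an example of a diagnostic that meets both criteria, the sketch below computes the Gelman-Rubin potential scale reduction factor, a widely used parallel-chain diagnostic based solely on sampler output (no knowledge of the transition kernel is required); the target distribution and chain setup are illustrative.

```python
# Hedged sketch: an output-based, parallel-chain convergence diagnostic
# (Gelman-Rubin potential scale reduction factor) computed from raw draws only.
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (m_chains, n_draws) for one scalar quantity."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    W = chain_vars.mean()                                  # within-chain variance
    B = n * chain_means.var(ddof=1)                        # between-chain variance
    var_hat = (n - 1) / n * W + B / n                      # pooled variance estimate
    return np.sqrt(var_hat / W)                            # R-hat, ~1 at convergence

# Toy usage: four chains sampling the same normal target, one started far away
rng = np.random.default_rng(1)
chains = rng.normal(0.0, 1.0, size=(4, 1000))
chains[0] += np.linspace(5.0, 0.0, 1000)                   # slowly decaying transient
print(gelman_rubin(chains))                                # noticeably above 1
```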

Journal ArticleDOI
TL;DR: Two methods for black-box modeling of a rapid sand filter are compared: global nonlinear models of a polynomial type and local linear Takagi-Sugeno fuzzy models.

Journal ArticleDOI
TL;DR: The notion of m-linearity for m ∈ ω (m > 1) is defined, and interpretability (and noninterpretability) of m-linear orders in structures and theories is discussed.
Abstract: In this paper we define the notion of m-linearity for m ∈ ω (m > 1) and discuss interpretability (and noninterpretability) of m-linear orders in structures and theories.