
Showing papers on "Class (philosophy)" published in 1976


Journal ArticleDOI
TL;DR: In this paper, the authors define basic objects as those categories which carry the most information, possess the highest category cue validity, and are thus the most differentiated from one another.
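To make the measure concrete, here is a minimal Python sketch (mine, not the paper's) of category cue validity: the validity of a cue for a category is taken as P(category | cue), estimated from toy tallies, and a category's cue validity is the sum over its cues. The data and names are invented for illustration.

from collections import Counter

# Toy (category, cue) observations, as might be tallied from attribute listings.
observations = [
    ("chair", "legs"), ("chair", "seat"), ("chair", "legs"),
    ("table", "legs"), ("table", "flat_top"),
    ("bird", "wings"), ("bird", "feathers"), ("bird", "wings"),
]

pair_counts = Counter(observations)
cue_counts = Counter(cue for _, cue in observations)

def cue_validity(category, cue):
    # P(category | cue): how strongly the cue predicts the category.
    return pair_counts[(category, cue)] / cue_counts[cue]

def category_cue_validity(category):
    # Sum of cue validities over all cues observed for the category.
    cues = {c for (k, c) in pair_counts if k == category}
    return sum(cue_validity(category, c) for c in cues)

for k in ("chair", "table", "bird"):
    print(k, round(category_cue_validity(k), 2))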

5,074 citations


Book
01 Jan 1976
TL;DR: It is argued that "non-attentive" vision is in practice implemented by these grouping operations and first order discriminations acting on the primal sketch, and that higher-level knowledge should influence the control of, rather than interfere with, the actual data-processing taking place lower down.
Abstract: An introduction is given to a theory of early visual information processing. The theory has been implemented, and examples are given of images at various stages of analysis. It is argued that the first step of consequence is to compute a primitive but rich description of the grey-level changes present in an image. The description is expressed in a vocabulary of kinds of intensity change (EDGE, SHADING-EDGE, EXTENDED-EDGE, LINE, BLOB etc.). Modifying parameters are bound to the elements in the description, specifying their POSITION, ORIENTATION, TERMINATION points, CONTRAST, SIZE and FUZZINESS. This description is obtained from the intensity array by fixed techniques, and it is called the primal sketch. For most images, the primal sketch is large and unwieldy. The second important step in visual information processing is to group its contents in a way that is appropriate for later recognition. From our ability to interpret drawings with little semantic content, one may infer the presence in our perceptual equipment of symbolic processes that can define "place-tokens" in an image in various ways, and can group them according to certain rules. Homomorphic techniques fail to account for many of these grouping phenomena, whose explanations require mechanisms of construction rather than mechanisms of detection. The necessary grouping of elements in the primal sketch may be achieved by a mechanism that has available the processes inferred from above, together with the ability to select items by first order discriminations acting on the elements' parameters. Only occasionally do these mechanisms use downward-flowing information about the contents of the particular image being processed. It is argued that "non-attentive" vision is in practice implemented by these grouping operations and first order discriminations acting on the primal sketch. The class of computations so obtained differs slightly from the class of second order operations on the intensity array. The extraction of a form from the primal sketch using these techniques amounts to the separation of figure from ground. It is concluded that most of the separation can be carried out by using techniques that do not depend upon the particular image in question. Therefore, figure-ground separation can normally precede the description of the shape of the extracted form. Up to this point, higher-level knowledge and purpose are brought to bear on only a few of the decisions taken during the processing. This relegates the widespread use of downward-flowing information to a later stage than is found in current machine-vision programs, and implies that such knowledge should influence the control of, rather than interfere with, the actual data-processing that is taking place lower down.
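As a rough illustration of the kind of processing described, here is a small Python sketch. It is not Marr's implementation: it merely renders primal-sketch elements as symbolic assertions with bound parameters, applies a first-order selection (thresholds on single parameters), and greedily groups the resulting place-tokens by proximity. All names, values, and thresholds are assumptions made for the example.

from dataclasses import dataclass
from math import hypot

@dataclass
class Element:
    kind: str           # e.g. "EDGE", "LINE", "BLOB"
    x: float
    y: float
    orientation: float  # degrees
    contrast: float
    size: float

def first_order_select(elements, kind=None, min_contrast=0.0):
    # Select items by first-order discriminations on their parameters.
    return [e for e in elements
            if (kind is None or e.kind == kind) and e.contrast >= min_contrast]

def group_place_tokens(tokens, radius):
    # Greedy proximity grouping: a token joins the first group within radius.
    groups = []
    for t in tokens:
        for g in groups:
            if any(hypot(t.x - u.x, t.y - u.y) <= radius for u in g):
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

sketch = [
    Element("EDGE", 0, 0, 90, 0.8, 2.0),
    Element("EDGE", 1, 0, 88, 0.7, 2.1),
    Element("BLOB", 9, 9, 0, 0.9, 5.0),
]
tokens = first_order_select(sketch, kind="EDGE", min_contrast=0.5)
print([len(g) for g in group_place_tokens(tokens, radius=2.0)])  # -> [2]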

984 citations


01 Jan 1976
TL;DR: This thesis presents the development and formative evaluation of instructional materials and procedures designed to teach the concept of Christian witness.
Abstract: The Development and Formative Evaluation of Instructional Materials and Procedures Designed to Teach the Concept of Christian Witness

20 citations


Journal ArticleDOI
Meyer
TL;DR: A reliability analysis method for computing systems is considered in which the underlying criteria for "success" are based on the computations the system must perform in the use environment.
Abstract: A reliability analysis method for computing systems is considered in which the underlying criteria for "success" are based on the computations the system must perform in the use environment. Beginning with a general model of a "computer with faults," intermediate concepts of a "tolerance relation" and an "environment space" are introduced which account for the computational needs of the user and the probabilistic nature of the use environment. These concepts are then incorporated to obtain a precisely defined class of computation-based reliability measures. The formulation of a particular measure is illustrated, and results of applying this measure are compared with those of a typical structure-based analysis.
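The ingredients can be sketched in a few lines of Python. The names and numbers below are mine, not the paper's: an environment space assigning probabilities to tasks, a fault-state distribution for the "computer with faults," and a tolerance relation deciding whether the computation performed under a given fault is still acceptable to the user. The reliability measure is then the probability, over both spaces, that the tolerance relation holds.

# P(task) over the use environment.
environment_space = {"navigation": 0.6, "telemetry": 0.3, "diagnostics": 0.1}

# P(fault_state) for the "computer with faults".
fault_states = {"fault_free": 0.95, "adder_stuck": 0.04, "memory_bit": 0.01}

# tolerance[(task, fault_state)] = True if the computed result is acceptable.
tolerance = {
    ("navigation", "fault_free"): True,
    ("navigation", "adder_stuck"): False,
    ("navigation", "memory_bit"): True,
    ("telemetry", "fault_free"): True,
    ("telemetry", "adder_stuck"): True,
    ("telemetry", "memory_bit"): True,
    ("diagnostics", "fault_free"): True,
    ("diagnostics", "adder_stuck"): False,
    ("diagnostics", "memory_bit"): False,
}

# Computation-based reliability: P(tolerance relation satisfied).
reliability = sum(
    p_task * p_fault
    for task, p_task in environment_space.items()
    for fault, p_fault in fault_states.items()
    if tolerance[(task, fault)]
)
print(f"computation-based reliability: {reliability:.4f}")  # 0.9710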

15 citations


Journal ArticleDOI
TL;DR: Foskett argues that modern classification theory departs from the "agreed orientation" viewpoint in order to provide a flexible approach which caters for many points of view.
Abstract: As a social scientist, I approach with diffidence the task of reviewing a book by the eminent librarian of the Institute of Education at the University of London for the Library Quarterly. Nevertheless, I accepted the invitation because of the intrinsic importance of the theme and the emphasis laid by D. J. Foskett on "the most important task at the moment," namely, to "bring about a closer relationship between librarians and research workers in social science" (p. 15). It is difficult to induce scholars to understand that librarians, documentalists, and "information scientists" can really help them; conversely, many librarians resist designing user-oriented information facilities in favor of stereotyped modes of materials processing. It is therefore most encouraging to note Foskett's persistent emphasis on the need to design classification and indexing structures that will more effectively bring to scholar-users the information they require. In this connection, he draws a useful distinction between the librarian's function of retrieving information "relevant" to a user's needs and the researcher's function of selecting from this information items that are "pertinent" to his work (p. 6). A major thrust of the work is Foskett's effort to establish the relative advantages of classification as a format for information retrieval in contrast to alphabetically based indexes. While recognizing the inadequacy of established classification schemes (Dewey and Library of Congress in the United States, Universal Decimal Classification and others elsewhere), Foskett explains how a superior modern framework for classification can be designed. He characterizes the traditional schemes as based on a "universal context assumption" or "generic relation" which hampers the work of users who need to place terms in different contexts. In contrast, Foskett argues that modern classification theory departs from the "agreed orientation" viewpoint in order "to provide a flexible approach which caters for many points of view" (p. 30). In elaborating his thesis, the author stresses the use of "facets," which serve in classification to identify relationships and diverse aspects of any topic. Examples are S. R. Ranganathan's basic categories of "personality, matter, energy, space, and time" applicable to any class in his Colon Classification and the simpler dichotomy of "activities and personalities" used by
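The facet idea admits a tiny illustration. The Python sketch below is mine rather than Foskett's or Ranganathan's: it composes Personality, Matter, Energy, Space, and Time facets into a Colon-Classification-style class string using that scheme's customary separators; the base class and facet digits are invented for the example.

# PMEST facet order and the Colon Classification's customary separators.
FACET_ORDER = ["personality", "matter", "energy", "space", "time"]
SEPARATOR = {"personality": ",", "matter": ";", "energy": ":",
             "space": ".", "time": "'"}

def compose_class(base, facets):
    # Append each supplied facet to the base class number in PMEST order.
    notation = base
    for facet in FACET_ORDER:
        if facet in facets:
            notation += SEPARATOR[facet] + facets[facet]
    return notation

# Hypothetical class number: a topic with personality, energy, space,
# and time facets but no matter facet.
print(compose_class("T", {"personality": "15", "energy": "3",
                          "space": "44", "time": "N7"}))  # T,15:3.44'N7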

5 citations



Journal ArticleDOI
TL;DR: It is argued that the question of why the laws of human thought have the structure they do is a biological rather than a logical question, and Chomsky's empirical theory of syntactic universals is extended to natural-language semantic universals.
Abstract: Chomsky has constructed an empirical theory about syntactic universals of natural language by defining a class of ‘possible languages’ which includes all natural languages (inter alia) as members, and claiming that all natural languages fall within a specified proper subset of that class. I extend Chomsky's work to produce an empirical theory about natural-language semantic universals by showing that the semantic description of a language will incorporate a logical calculus, by defining a relatively wide class of ‘possible calculi’, and by specifying a proper subset of that class which, I hypothesize, includes the calculi needed for the semantic description of any natural language. I argue that the special status, with respect to natural languages, of this particular type of logical calculus is an empirical finding which does not follow from any independently known principles, and I conclude that the question why the laws of human thought have the structure they do is a biological rather than a logical question.

4 citations


Journal ArticleDOI
TL;DR: A fully tested Table for the elementary sub-domain King and Rook versus King (elementary to play, not to program) showed an order-of-magnitude advantage over a conventional programming approach.
Abstract: A POP-2 package known as AL1 ("Advice Language 1") has been developed during the Spring semester by the CS397DM graduate class while the author was visiting the University of Illinois. The object was to facilitate the transfer of specialist knowledge about chess end-games into machine memory. The package comprises two main modules:

An Advice module (input: a board-state; output: an advice-list). This module is partitioned into "Advice Tables" corresponding to a subdivision of the task domain into sub-domains.

A Search module (inputs: a board-state and an advice-list; output: a "forcing-tree strategy" for securing specified goals). This module includes move-generation, the only part of the package which is specific to chess.

For executing strategies, a tree-lookup routine generates play against an opponent. A Table-editor allows the user to display, modify, or extend the Tables. These are of two kinds: a single Master Table decides, on the basis of properties of the input board-state, to which of a "committee of experts" it should be referred; the "experts" themselves are the individual Advice Tables, each written for a specific sub-domain. A fully tested Table for the elementary sub-domain King and Rook versus King (elementary to play, not to program) showed an order-of-magnitude advantage over a conventional programming approach.
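The control structure the abstract describes can be sketched as follows. The predicate, table, and advice names here are hypothetical, and the real Search module (move-generation and forcing-tree construction) is only indicated by a comment; this is a sketch of the dispatch idea, not of AL1 itself.

def is_kr_vs_k(board):
    # Hypothetical sub-domain predicate: White king and rook vs lone black king.
    return sorted(board["pieces"]) == ["K", "R", "k"]

def kr_vs_k_advice(board):
    # Hypothetical Advice Table for the King-and-Rook-versus-King sub-domain.
    return ["confine-black-king", "approach-with-king", "deliver-mate"]

# Master Table: maps a board-state property to the expert Advice Table for it.
MASTER_TABLE = [
    (is_kr_vs_k, kr_vs_k_advice),
]

def advise(board):
    # Refer the board-state to the appropriate member of the committee of experts.
    for predicate, advice_table in MASTER_TABLE:
        if predicate(board):
            return advice_table(board)
    raise ValueError("no Advice Table covers this sub-domain")

board = {"pieces": ["K", "R", "k"], "to_move": "white"}
print(advise(board))  # the Search module would turn this advice-list
                      # into a forcing-tree strategy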

1 citation


Book ChapterDOI
01 Jan 1976
TL;DR: A function is considered to be computable if there exists an "effective description" for it, that is, a finite sentence unambiguously indicating what its "behaviour" is for each natural number.
Abstract: We will talk about a class of functions defined on subsets of the set of nonnegative integers, and assuming nonnegative integer values: the so-called “computable functions”. We define a function to be computable if there exists an “effective description” for it, that is, a finite sentence unambiguously indicating (possibly not in an explicit way) what its “behaviour” is for each natural number. In other words, an effective description must allow us to deduce, for any given argument, whether the described function is defined on it and, if so, the value it assumes there. Perhaps the careful reader will observe that this definition is very different from the one usually found in the literature: in fact, it may seem strange to call “computable” those functions for which we did not explicitly prescribe the precise means for computing them, for example, some particular procedure or model of an abstract “computing device”. Moreover, even the general concepts of “effective procedure”, “algorithm”, or “abstract computing device” are not needed for the definition we gave of a computable function.
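A toy Python rendering of the definition may help; the function and example are mine, not the chapter's. An "effective description" is read as a finite rule that, for any argument, tells whether the function is defined there and, if so, what value it takes. The sentence rendered here is: "f is defined exactly on the even numbers, where f(n) = n/2".

def described_f(n):
    # Return (defined?, value) exactly as the description dictates for n.
    if n % 2 == 0:
        return True, n // 2   # defined: the description fixes the value
    return False, None        # undefined: the description says so explicitly

for n in range(5):
    print(n, described_f(n))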

1 citation