
Showing papers on "Class (philosophy) published in 2000"


Journal ArticleDOI
TL;DR: A critical examination of incidental observations in Hutt's article on clinical psychologists in the military services, particularly the claim that "structured" personality tests assume each question has the same meaning to all subjects — an assumption the author argues is not part of the essential nature of question-answer tests.
Abstract: installations. This article was part of a series describing the work of clinical psychologists in the military services, with which the present writer is familiar only indirectly: The utility of any instrument in the military situation can, of course, be most competently assessed by those in contact with clinical material in that situation, and the present paper is in no sense to be construed as an "answer" to or an attempted refutation of Hutt's remarks. Nevertheless, there are some incidental observations contained in his article which warrant further critical consideration, particularly those having to do with the theory and dynamics of "structured" personality tests. It is with these latter observations rather than the main burden of Hutt's article that this paper is concerned. Hutt defines "structured personality tests" as those in which the test material consists of conventional, culturally crystallized questions to which the subject must respond in one of a very few fixed ways. With this definition we have no quarrel, and it has the advantage of not applying the unfortunate phrase "self-rating questionnaire" to the whole class of question-answer devices. But immediately following this definition, Hutt goes on to say that "it is assumed that each of the test questions will have the same meaning to all subjects who take the examination. The subject has no opportunity of organizing in his own unique manner his response to the questions." These statements will bear further examination. The statement that personality tests assume that each question has the same meaning to all subjects is continuously appearing in most sources of late, and such an impression is conveyed by many discussions even when they do not explicitly make this assertion. It should be emphasized very strongly, therefore, that while this perhaps has been the case with the majority of question-answer personality tests, it is not by any means part of their essential nature.
The traditional approach to verbal question-answer personality tests has been, to be sure, to view them as

408 citations


Proceedings ArticleDOI
15 Jun 2000
TL;DR: A method to learn heterogeneous models of object classes for visual recognition that automatically identifies distinctive features in the training set and learns the set of model parameters using expectation maximization.
Abstract: We propose a method to learn heterogeneous models of object classes for visual recognition. The training images contain a preponderance of clutter and learning is unsupervised. Our models represent objects as probabilistic constellations of rigid parts (features). The variability within a class is represented by a joint probability density function on the shape of the constellation and the appearance of the parts. Our method automatically identifies distinctive features in the training set. The set of model parameters is then learned using expectation maximization. When trained on different, unlabeled and unsegmented views of a class of objects, each component of the mixture model can adapt to represent a subset of the views. Similarly, different component models can also "specialize" on sub-classes of an object class. Experiments on images of human heads, leaves from different species of trees, and motor-cars demonstrate that the method works well over a wide variety of objects.
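The expectation-maximization loop the abstract refers to can be sketched in miniature. The snippet below fits a two-component 1D Gaussian mixture with EM on synthetic data; it is a toy stand-in only (the paper's constellation model jointly models part shape and appearance, which is far richer):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1D data: two clusters standing in for two "views" of a class.
data = np.concatenate([rng.normal(-3.0, 0.5, 200), rng.normal(3.0, 0.5, 200)])

# Deliberately poor initial parameters; EM refines them.
mu = np.array([-1.0, 1.0])      # component means
sigma = np.array([1.0, 1.0])    # component std devs
pi = np.array([0.5, 0.5])       # mixing weights

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: responsibility of each component for each point.
    r = pi * np.stack([gauss(data, mu[k], sigma[k]) for k in range(2)], axis=1)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data.
    n = r.sum(axis=0)
    mu = (r * data[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / n)
    pi = n / len(data)

print(np.sort(mu).round(2))   # means recovered near -3 and 3
```

Each component "specializes" on one cluster, mirroring how the paper's mixture components adapt to subsets of views.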

274 citations


01 Jan 2000
TL;DR: This application shows some of the weaknesses of the SVDD, particularly the dependence on the scaling of the features; by rescaling features and combining several descriptions on well-scaled feature sets, performance can be significantly improved.
Abstract: In previous research the Support Vector Data Description is proposed to solve the problem of One-Class classification. In One-Class classification one set of data, called the target set, has to be distinguished from the rest of the feature space. This description should be constructed such that objects not originating from the target set, by definition the outlier class, are not accepted by the data description. In this paper the Support Vector Data Description is applied to the problem of image database retrieval. The user selects an example image region as target class and resembling images from a database should be retrieved. This application shows some of the weaknesses of the SVDD, particularly the dependence on the scaling of the features. By rescaling features and combining several descriptions on well-scaled feature sets, performance can be significantly improved.
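The scaling sensitivity described above is easy to reproduce with a toy ball-shaped description. The sketch below is a crude stand-in for the SVDD (a minimum enclosing ball around the target set, not the actual support vector formulation) and shows how one badly scaled feature lets an obvious outlier be accepted:

```python
import numpy as np

# Target set: feature 2 spans 0..100, badly scaled relative to feature 1.
target = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 100.0], [1.0, 100.0]])

def fit_ball(X):
    # Smallest ball around the data: centre = mean, radius = max distance.
    centre = X.mean(axis=0)
    radius = np.linalg.norm(X - centre, axis=1).max()
    return centre, radius

def accepts(centre, radius, x):
    return np.linalg.norm(x - centre) <= radius

outlier = np.array([10.0, 50.0])   # far from the target set in feature 1

c, r = fit_ball(target)
unscaled = accepts(c, r, outlier)          # True: the big radius swallows it

scale = target.std(axis=0)                 # per-feature rescaling
c2, r2 = fit_ball(target / scale)
scaled = accepts(c2, r2, outlier / scale)  # False: rescaling tightens the ball

print(unscaled, scaled)
```

The same effect drives the paper's remedy: rescale features, then combine several descriptions built on well-scaled feature sets.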

131 citations


01 Jul 2000
TL;DR: In this article, a large number of uncommon perspectives for the analysis of contemporary societies are introduced under the heading of a so-called "epigenetic research program". The main emphasis of the epigenetic program lies in a conscious attempt to shed fresh or innovative light on the co-evolution between knowledge and society.
Abstract: Within this paper, a large number of uncommon perspectives for the analysis of contemporary societies will be introduced which run under the heading of a so-called "epigenetic research program". The main emphasis of the epigenetic program lies in a conscious attempt to shed fresh or innovative light on the co-evolution between "knowledge and society". More conventionally, the main focus of this perspective lies in basic innovation processes as well as in the "core dynamics" within socio-economic domains both nationally and globally. Among these innovative conceptual features, one will find unfamiliar notions like "Turing societies", "epigenetic regimes" or "four different layers of societal knowledge bases", or "societal substitution power". Moreover, one will be confronted with a wave of unusual assessments of risk potentials, risk-incidences as well as of the substitution and repair processes inherent in today's societal "fabric". Finally, one will find a comprehensive analysis of the so-called "year 2000-problem" which will be treated as a significant instance in a much wider class of "knowledge-based risks" and, above all, of "knowledge-based failures".

50 citations


Patent
20 Jan 2000
TL;DR: In this paper, a method, apparatus, and article of manufacture for providing for persistence of Java™ objects is described, where a Java object is instantiated from its corresponding Java™ class definition and then loaded into a Java™ virtual machine.
Abstract: A method, apparatus, and article of manufacture for providing for persistence of Java™ objects. A Java™ object is instantiated from its corresponding Java™ class definition and then loaded into a Java™ virtual machine. The class definition corresponding to the Java™ object can be derived using the Java™ Reflection API. Once the class definition is derived, it can be used to inspect the contents of the Java™ object. A structured type instance is then generated from the inspected contents of the Java™ object, wherein the structured type instance is stored in a column of a table of a relational database managed by a relational database management system. As a result of these steps, the Java™ object is persistently stored in the database, yet the persistence semantics for storing the object are not specified as part of the class definition of the object, which means that the persistence semantics are orthogonal to the class definition.
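As a rough illustration of the idea (reflection-driven persistence kept orthogonal to the class definition), here is an analogous sketch in Python rather than Java™. The `Point` class and `persist` helper are invented for illustration, with `vars()` standing in for the Reflection API and SQLite for the relational database:

```python
import sqlite3

class Point:                      # plain class; knows nothing about storage
    def __init__(self, x, y):
        self.x, self.y = x, y

def persist(obj, conn):
    # Reflection: inspect the object's fields at runtime (name -> value),
    # so persistence semantics never appear in the class definition itself.
    fields = vars(obj)
    cols = ", ".join(fields)
    marks = ", ".join("?" * len(fields))
    table = type(obj).__name__
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} ({cols})")
    conn.execute(f"INSERT INTO {table} VALUES ({marks})",
                 tuple(fields.values()))

conn = sqlite3.connect(":memory:")
persist(Point(3, 4), conn)
row = conn.execute("SELECT x, y FROM Point").fetchone()
print(row)
```

The point mirrors the patent's claim: `Point` can be persisted and reloaded without its author ever specifying how storage works.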

44 citations


Journal ArticleDOI
TL;DR: The author maintains a forceful connection between what makes class excessive and specific problems in theorizing working-class representation, arguing that the elusive and unstable nature of class itself poses difficulties for literary criticism as much as for social science.
Abstract: "Worker of the World(s)." Most literary critics visibly wince at the mention of working-class representation as a significant component of cultural analysis ("too sociological," "too political," some may say, while others might offer more interesting but no less dismissive assessments: "too realist," "too easy," "too coarse," or simply "too late"). One could respond that such judgments confirm the constitutive excess of working-class culture as inherently valuable within literary criticism. They certainly support Raymond Williams's position that "the simplest descriptive novel about working-class life is already, by being written, a significant and positive cultural intervention. For it is not, even yet, what a novel is supposed to be, even as one kind among others. And changing this takes time" ("Working-Class" 111). Nevertheless, the idea that changing a cultural form, or even modes of literary criticism, takes time may strike one as "too precious" in the time-space compression of postmodernity, where the shifting sands of literary taste scarcely allow for projects with a notion of longue durée. The difficulty is not intrinsically about processes of adjudication but more about the elusive and unstable nature of class itself, the consciousness of which provides its own forms of historical intervention (also a political lesson about identity in feminist and race studies). Rather than read this difficulty outside the confines of art, as an excess that is merely not art, I want to maintain a forceful connection between what makes class excessive and specific problems in theorizing working-class representation. Why?
While class is constantly being rethought vis-à-vis the social, it is generally undertheorized in terms of the literary, as if what is problematic for the social scientist is transparent or inconsequential for the literary critic. In what follows I want to expand a particular vocabulary in literary criticism, a lexicon of labor, by taking on some of the methodological impurities of class head on. Far from confining working-class representation as some historical curiosity, the rethinking, reworking, and reformation of class provide a prescient

38 citations


Journal ArticleDOI
TL;DR: This work provides sufficient conditions under which an optimal matching can be found between two mappings from a fixed interval to some "feature space", the optimal matching being a homeomorphism of the interval I.
Abstract: We study a class of functionals which can be used for matching objects that can be represented as mappings from a fixed interval, I, to some "feature space." This class of functionals corresponds to "elastic matching" in which a symmetry condition and a "focus invariance" are imposed. We provide sufficient conditions under which an optimal matching can be found between two such mappings, the optimal matching being a homeomorphism of the interval I. The differentiability of this matching is also studied, and an application to plane curve comparison is provided.

28 citations


Patent
26 Jul 2000
TL;DR: In this article, a data structure suitable for use in collecting, distributing or storing product data for use in a catalog is disclosed, which is based on a data model having one or more classes.
Abstract: An invention is described herein that provides methods and apparatus for collecting, distributing and storing product data. A data structure suitable for use in collecting, distributing or storing product data for use in a catalog is disclosed. More particularly, the data structure is based on a data model having one or more classes, where each of the classes has one or more associated categories. The data structure includes at least one class definition, each class definition being arranged to identify one or more associated categories. In addition, the data structure includes a plurality of category definitions, each category definition being arranged to identify an associated attribute group. The data structure further includes a plurality of attribute group definitions, where each attribute group definition is arranged to identify one or more attributes that are associated with the attribute group. In order to assist in the capture of data, each attribute has an associated possible value list that identifies values that are selectable as values for the associated attribute.
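The class → category → attribute-group → attribute hierarchy described above can be rendered as a small data model. The Python dataclasses below are a hypothetical sketch of that structure (all names are invented), ending with the possible-value list the patent attaches to each attribute to assist data capture:

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    possible_values: list     # selectable values, per the data structure

@dataclass
class AttributeGroup:
    name: str
    attributes: list          # one or more Attribute objects

@dataclass
class Category:
    name: str
    group: AttributeGroup     # each category identifies an attribute group

@dataclass
class ProductClass:
    name: str
    categories: list          # each class identifies associated categories

# Hypothetical catalog fragment wiring the four levels together:
volts = Attribute("voltage", ["110V", "220V"])
group = AttributeGroup("electrical", [volts])
cat = Category("power-tools", group)
tools = ProductClass("hardware", [cat])

print(tools.categories[0].group.attributes[0].possible_values)
```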

20 citations


Dissertation
01 Jan 2000
TL;DR: A theoretical framework is proposed to specify the relationship between distance measurement and class score, to suggest a novel combining method that reduces the effect of code word selection in non-optimum codes, and to suggest novel reconstruction frameworks for combining the component outputs.
Abstract: In a "decomposition/reconstruction" strategy, we can solve a complex problem by 1) decomposing the problem into simpler sub-problems, 2) solving sub-problems with simpler systems (sub-systems) and 3) combining the results of sub-systems to solve the original problem. In a classification task we may have "label complexity" which is due to a high number of possible classes, "function complexity" which means the existence of a complex input-output relationship, and "input complexity" which is due to the requirement of a huge feature set to represent patterns. Error Correcting Output Code (ECOC) is a technique to reduce the label complexity in which a multi-class problem will be decomposed into a set of binary sub-problems, based on the sequence of "0"s and "1"s of the columns of a decomposition (code) matrix. Then a given pattern can be assigned to the class having minimum distance to the results of sub-problems. The lack of knowledge about the relationship between distance measurement and class score (like posterior probabilities) has caused some essential shortcomings in answering questions about "source of effectiveness", "error analysis", "code selection", and "alternative reconstruction methods" in previous works. Proposing a theoretical framework in this thesis to specify this relationship, our main contributions in this subject are to: 1) explain the theoretical reasons for code selection conditions 2) suggest new conditions for code generation (equidistance code) which minimise reconstruction error and address a search technique for code selection 3) provide an analysis to show the effect of different kinds of error on final performance 4) suggest a novel combining method to reduce the effect of code word selection in non-optimum codes 5) suggest novel reconstruction frameworks to combine the component outputs.
Some experiments on artificial and real benchmarks demonstrate the significant improvement achieved in multi-class problems when simple feed-forward neural networks are arranged based on the suggested framework. To solve the problem of function complexity we considered AdaBoost as a technique that can be fused with ECOC to overcome its shortcoming for binary problems. To handle the problem of huge feature sets, we have suggested a multi-net structure with local back propagation. To demonstrate these improvements on realistic problems a face recognition application is considered. Key words: decomposition/reconstruction, reconstruction error, error correcting output codes, bias-variance decomposition.
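The reconstruction step described above — assign the pattern to the class whose codeword is nearest to the binary sub-problem outputs — can be sketched briefly. The code matrix below is illustrative only (not one of the thesis's equidistant codes), and the sub-classifier outputs are given directly rather than produced by trained networks:

```python
import numpy as np

# Each row is one class's codeword; each column defines one binary
# sub-problem (which classes are labelled "1" vs "0").
code = np.array([[0, 0, 1, 1, 0],    # class 0
                 [1, 0, 0, 1, 1],    # class 1
                 [0, 1, 1, 0, 1]])   # class 2

def decode(bits):
    # Hamming distance from the sub-problem outputs to each codeword.
    distances = (code != bits).sum(axis=1)
    return int(distances.argmin())

# Sub-classifier outputs with one bit flipped relative to class 1's
# codeword -- an error the code's redundancy absorbs:
print(decode(np.array([1, 0, 0, 0, 1])))   # still decodes to class 1
```

This minimum-distance decoding is exactly the point where the thesis asks how distance relates to class score (e.g. posterior probabilities), motivating its alternative reconstruction frameworks.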

20 citations


Patent
07 Aug 2000
TL;DR: In this paper, the measured data is plotted in discriminant space and decision boundaries or thresholds determined, preferably such that at least one object from one class is isolated from the remaining objects, removed from the training set, and the process repeated until an acceptable number of unclassified objects remain.
Abstract: Stand-alone or assistive pattern recognition system and process enabling error free classification of all objects in a training set and application to unclassified objects. Parameters and/or features of the data objects in a training set are selected and measured, from which discriminants are computed. The measured data is plotted in discriminant space and decision boundaries or thresholds determined, preferably such that at least one object from one class is isolated from the remaining objects, removed from the training set, and the process repeated until an acceptable number of unclassified objects remain. The system can be applied sequentially to classify all the members of the training set belonging to one class and then applied to objects in other classes. Fuzzy quantifiable determinations of an object's likelihood of class membership can be made. Objects' positions and classifications are obtainable in an optical system using Fourier techniques without limitation to linearly discriminable problems.

16 citations


Journal ArticleDOI
TL;DR: A novel approach for studying the relationship between the properties of isolated cells and the emergent behavior that occurs in cellular systems formed by coupling such cells, and introduces a nonhomogeneity measure, called cellular disorder measure, which was inspired by the local activity theory from [Chua, 1998].
Abstract: This paper presents a novel approach for studying the relationship between the properties of isolated cells and the emergent behavior that occurs in cellular systems formed by coupling such cells. The novelty of our approach consists of a method for precisely partitioning the cell parameter space into subdomains via the failure boundaries of the piecewise-linear CNN (cellular neural network) cells [Dogaru & Chua, 1999a] of a generalized cellular automata [Chua, 1998]. Instead of exploring the rule space via statistically defined parameters (such as λ in [Langton, 1990]), or by conducting an exhaustive search over the entire set of all possible local Boolean functions, our approach consists of exploring a deterministically structured parameter space built around parameter points corresponding to "interesting" local Boolean logic functions. The well-known "Game of Life" [Berlekamp et al., 1982] cellular automaton is reconsidered here to exemplify our approach and its advantages. Starting from a piecewise-linear representation of the classic Conway logic function called the "Game of Life", and by introducing two new cell parameters that are allowed to vary continuously over a specified domain, we are able to draw a "map-like" picture consisting of planar regions which cover the cell parameter space. A total of 148 subdomains and their failure boundaries are precisely identified and represented by colored paving stones in this mosaic picture (see Fig. 1), where each stone corresponds to a specific local Boolean function in cellular automata parlance. Except for the central "paving stone" representing the "Game of Life" Boolean function, all others are mutations uncovered by exploring the entire set of 148 subdomains and determining their dynamic behaviors. Some of these mutations lead to interesting, "artificial life"-like behavior where colonies of identical miniaturized patterns emerge and evolve from random initial conditions.
To classify these emergent behaviors, we have introduced a nonhomogeneity measure, called cellular disorder measure, which was inspired by the local activity theory from [Chua, 1998]. Based on its temporal evolution, we are able to partition the cell parameter space into a class U "unstable-like" region, a class E "edge of chaos"-like region, and a class P "passive-like" region. The similarity with the "unstable", "edge of chaos" and "passive" domains defined precisely and applied to various reaction–diffusion CNN systems [Dogaru & Chua, 1998b, 1998c] opens interesting perspectives for extending the theory of local activity [Chua, 1998] to discrete-time cellular systems with nonlinear couplings. To demonstrate the potential of emergent computation in generalized cellular automata with cells designed from mutations of the "Game of Life", we present a nontrivial application of pattern detection and reconstruction from very noisy environments. In particular, our example demonstrates that patterns can be identified and reconstructed with very good accuracy even from images where the noise level is ten times stronger than the uncorrupted image.
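For readers unfamiliar with the baseline rule being perturbed, here is a minimal sketch of one update step of Conway's "Game of Life" (the classic local Boolean function; this is not the paper's piecewise-linear CNN representation), using toroidal wrap-around:

```python
import numpy as np

def life_step(grid):
    # Count the 8 neighbors of every cell via shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Conway's rule: a cell is alive next step iff it has exactly 3 live
    # neighbors, or it is alive now and has exactly 2.
    return ((n == 3) | (grid & (n == 2))).astype(int)

# A "blinker" oscillates with period 2: horizontal bar -> vertical -> back.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1                      # horizontal bar
after_one = life_step(blinker)           # vertical bar
after_two = life_step(after_one)
print(np.array_equal(after_two, blinker))
```

The paper's mutations of this rule arise from continuously varying two cell parameters in its piecewise-linear representation, so each of the 148 subdomains corresponds to a different local Boolean function like the one hard-coded above.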

Journal Article
TL;DR: In the context of service-learning courses, the author proposes a framework drawn from interdisciplinary theory that can help faculty conceptualize four types of service texts: broad, narrow, full, and partial.
Abstract: From a faculty perspective one of the most constructive ways to conceptualize service-learning is to refine the pedagogically purposeful metaphor "service as text" (Morton, 1996; Varlotta, 1996). Unfortunately, service-learning's own theory is insufficiently developed to explicate this metaphor. Therefore, a related theoretical framework--interdisciplinary theory--is, for two reasons, an appropriate choice: 1. Interdisciplinary theory introduces an assortment of terms--"partial," "full," "narrow," and "broad"--that can help faculty contemplate and, ideally, answer the question: What type of service text should be utilized in this course? Faculty may assign, for example, a one-time or short-term project, dubbed a "partial" text; or, they may expect students to uphold an ongoing service commitment, labeled a "full" text. Additionally, faculty may require a "narrow" service text in which all students work on related projects at the same agency, or "broad" texts in which each student works on a unique service project. 2. Interdisciplinarians utilize terms like "multidisciplinary," "crossdisciplinary," and "interdisciplinary" to describe and differentiate various types of disciplinary integration. Because service itself is not a discipline, interdisciplinarity's terminology--one that reflects the integration of disciplinary perspectives--is not completely transferable to service-learning. When service is configured as a text, however, the prefixes of interdisciplinarity's terminology ("multi," "cross," and "inter") can be affixed to the root word "text" to answer the question, How will the service text be meaningfully integrated with other course texts (e.g., films, books, journal articles)? A cross-textual course, for example, will integrate the service text more fully than a multitextual course but less fully than an intertextual one. This paper does more than simply raise the pedagogical questions that too few have posed.
It uses interdisciplinarity to offer viable answers. What Types of Service Texts are Feasible? Interdisciplinary theory can help faculty conceptualize at least four types of service texts. Two types of service texts may be described by invoking the "broad" and "narrow" rhetoric of interdisciplinarians Van Dusseldorp and Wigboldus (1994), the other two by employing the "full" and "partial" terminology of William Newell (1998). Broad or Narrow Service Texts For Van Dusseldorp and Wigboldus (1994), a "broad" interdisciplinary course pulls together a wide range of disciplines. An example of such a course is one that draws from a liberal arts discipline like philosophy, a natural science like chemistry, and a social science like anthropology. Such a diversity of disciplines entertain a broad range of inquiries, coin and utilize a broad variety of terms, and construct a broad assortment of arguments. A "narrow" interdisciplinary course, on the other hand, pulls together a more related set of disciplines. An example of this type of course is one that draws from three natural sciences, e.g., biology, chemistry, and physics. Though service itself is not a discipline, interdisciplinary terminology can provide service-learning instructors with two important options in course design. First, faculty may choose to design and teach a "broad" service-learning class in which individual students or student groups are engaged in very different types of projects. In a broad class, faculty may allow each student to choose a unique service-learning project, or cluster students in groups and assign a different project to each group (e.g., one group of students may be working with homeless men at a local shelter, a second may be volunteering at a YWCA's outreach program that assists survivors of domestic violence, and a third group may be supervising after school programs at a junior high school). 
To determine whether or not to use a broad approach to service-learning, faculty might consider some of the pros and cons associated with this approach. …


Patent
30 Aug 2000
TL;DR: In this article, a data structure suitable for use in collecting, distributing or storing product data for use in a catalog is disclosed, which is based on a data model having one or more classes.
Abstract: An invention is described herein that provides methods and apparatus for collecting, distributing and storing product data. A data structure suitable for use in collecting, distributing or storing product data for use in a catalog is disclosed. More particularly, the data structure is based on a data model having one or more classes, where each of the classes has one or more associated categories. The data structure includes at least one class definition, each class definition being arranged to identify one or more associated categories. In addition, the data structure includes a plurality of category definitions, each category definition being arranged to identify an associated attribute group. The data structure further includes a plurality of attribute group definitions, where each attribute group definition is arranged to identify one or more attributes that are associated with the attribute group. In order to assist in the capture of data, each attribute has an associated possible value list that identifies values that are selectable as values for the associated attribute.

01 Jan 2000
TL;DR: This article presents seven key properties of member functions that the authors use in their daily design and programming work and catalogs them for use as part of a shared vocabulary.
Abstract: As C++ developers, we talk a lot about member functions (methods) of a class. We talk about member function types like getters and setters, command methods, and factory methods. Next to classifying member functions by purpose, we also talk about properties of member functions like being a primitive or composed method, a hook or template method, a class or instance method, or a convenience method. Obviously, we have a large vocabulary for talking about member function types and properties. We use this vocabulary to communicate and document different aspects of a member function, for example, what it is good for, who may use it, and how it is implemented. Understanding this vocabulary is a key to fast and effective communication among developers. In a previous article, we discussed several key types of member functions [1]. This article presents seven key properties of member functions that we use in our daily design and programming work. It illustrates them using a running example and catalogs them for use as part of a shared vocabulary. Some of the method properties have their own naming convention. Mastering this vocabulary helps us better implement our member functions, better document our classes, and communicate more effectively.
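Two of the properties in that vocabulary — template methods and hook methods — are easy to illustrate. The sketch below uses Python rather than C++, and the `Report` classes are invented for illustration:

```python
class Report:
    def render(self):                 # template method: fixes the skeleton
        return f"{self.header()}\n{self.body()}"

    def header(self):                 # hook method with a default
        return "REPORT"

    def body(self):                   # hook method subclasses must supply
        raise NotImplementedError

class SalesReport(Report):
    def body(self):                   # subclass fills in the hook
        return "sales are up"

print(SalesReport().render())
```

Naming `render` a template method and `header`/`body` hooks tells another developer at a glance which member functions are safe to override and which fix the algorithm's structure — exactly the communicative role the article claims for this vocabulary.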

01 Jan 2000
TL;DR: In this article, the authors explored the breakdown that may occur between teachers' adoption of reform objectives and their successful incorporation of reform ideals by comparing and contrasting two reform-oriented classrooms.
Abstract: Mathematics education reform in the United States has marshaled large-scale support for instructional innovation, and enlisted the participation and allegiance of large numbers of mathematics teachers. However, there is concern that many teachers have not grasped the full implications of the reform ideals. This study explored the breakdown that may occur between teachers’ adoption of reform objectives and their successful incorporation of reform ideals by comparing and contrasting two reform-oriented classrooms. This study was an exploratory, qualitative, comparative case study using constant comparative analysis. Seven mathematics lessons were video-taped from each class, and intensive interviews were conducted with the two teachers. The study provided a detailed description to explore how the participants in each class established a reform-oriented mathematics microculture. Then the two classes were compared and contrasted in terms of their general social norms and sociomathematical norms (Cobb & Yackel, 1996). The two classes established similar social participation patterns but very different mathematical microcultures. In both classes open-ended questioning, collaborative group work, and students’ own problem solving constituted the primary modes of classroom participation. However, in one class mathematical significance was constituted as using the standard algorithm with accuracy, whereas the other class established a focus on providing reasonable and convincing arguments. Given these different mathematical foci, students' learning opportunities were seen as unequal. The

Book ChapterDOI
TL;DR: Using this model of intensional communities, two examples are developed: a Web-based class in which a teacher leads a student through a multidimensional page, and the programming of chatrooms.
Abstract: Intensional communities are composed of intensional programs sharing a common context, implemented using a networked context server, called the AEPD aether process daemon. The programs are written in the ISE intensional imperative scripting language, which is illustrated through versioned assignments in undergraduate classes. Using this model of intensional communities, two examples are developed: a Web-based class in which a teacher leads a student through a multidimensional page, and the programming of chatrooms.

Journal ArticleDOI
TL;DR: In this paper, the authors assume a minimal account based on the type/token model, which will be fleshed out a little as the argument progresses, and take as their subject music that admits the general characterization of compositions that can have multiple performances that are substantially similar.
Abstract: The central problem of musical ontology-and a problem whose special qualities can be overlooked when it is so easily pigeonholed as an "aesthetic" one-is the problem of how performances are related to compositions. There have been a great many different solutions. The performance of a work has been called an instantiation of a universal, a token of a type, a species of a genus, and even a member of a class. These differences will not concern us directly here. Whichever sort of "generic entity" a composition is, there will have to be conditions that must be satisfied in order for a particular performance to be identified as a performance of that work. It is how these conditions might be characterized that interests us here, and not the ontological structure to which they relate. I will, however, assume a minimal account based on the type/ token model, which will be fleshed out a little as the argument progresses. It will be tempting from the start to take as our subject music that admits the general characterization of compositions that can have multiple performances that are substantially similar. Too many writers have assumed that European classicism can be taken as a paradigm case for all musical traditions, and this despite the fact that European classicism offers the easiest case for a model that separates composition and performance. As a result, ontological characterizations of "music" often apply in reality to only a narrow range of musical activities. Musical practices such as improvisation and recording-two extremes in terms of the regularity of instances compared with the generic entity itself-tend to pose serious problems for such models. This said, it is still possible to begin with the canonical situation of a composition that is created by one person, encoded, and then performed by other persons. Put naively, composers have a series of musical ideas that they write down in a score. 
We are not concerned here with anything prior to the production of the score, since prior to this point there is nothing to be ontologically concerned about. Idealists, who might object that this begs the question since the work of music may precede the score's production, would be missing the point that it is not the composition's ontology that interests us so much as it is its ontologically determinate relation to performances. These performances are carried out by persons who have learned how to interpret the code of the score as a set of instructions for action, and who have executed this set of instructions successfully. On the face of it, there seem to be many similarities between this situation and the situation in respect of games. The game of chess, for example, consists of two things: a set of physical objects (the board and pieces) and a set of rules for what to do with them. The game of chess is analogous to a composition; and it may result in many games of chess, played by people with the proper equipment and knowledge of the rules. As John Searle points out, there is a difference in chess between rules and conventions:

Journal ArticleDOI
TL;DR: In this paper, a syntactic analysis of the French "double conjunctions" class is presented; the class contains ninety-one items, some of them subordinating and the majority coordinating.
Abstract: We present a syntactic analysis of the French "double conjunctions" class, which contains ninety-one items: some of them are subordinating items, but the majority are coordinating. We analyse their specific syntactic properties by means of the "parallelism constraints" (see Z. S. Harris (1968)), related to the specific nature and order of these items and their sentences. A study of the restrictions on their positions and combinations reveals complete independence between "double conjunctions" and certain homonymous "simple" ones, ruling out any analysis of the latter in terms of the former.

Journal ArticleDOI
TL;DR: The notion of "world literature" was coined by Goethe in 1827 to describe a process of reading across national boundaries that was cultural as much as it was literary, as discussed by the authors.
Abstract: "World Literature" has occupied an ambiguous, contradictory, or perhaps merely flexible position in the decades-long controversies over canon and curriculum. Certainly there is no single canon of world literature, nor is there likely to be one. As Brandt Corstius has shown in studying canons around the world, different countries always give special weight to their own or familiar traditions (6). In the American curriculum, world literature is both mainstream and marginal: on the one hand, it is part of a firmly situated dominant tradition, its "window on the world"; on the other, it is a sign of definitive difference from that tradition. College surveys of Western "world" literature and Western civilization have flourished since the time of World War I, when they were offered as an introduction to other cultures that also strengthened the common anchor in Western tradition; they were not introductions to global literature (Lawall 8). Modern courses, responding to the discrepancy between the image of "world" and a syllabus limited to Western tradition, have either broadened their horizons toward global coverage or, retaining their original scope, have reserved the term "world literature" for a second semester that conveys the rest of the world to students emerging from the Western survey. In either framework, competing possibilities remain; the same canonical books may be interpreted as "world-class," illustrating universal values and common humanity, or as proving, through increased attention to context, an "otherness" that challenges universalist assumptions. Still another academic format—one that focuses on individual and cultural differences, and has an immediate pragmatic aim—presents a range of diverse (usually modern) selections that illustrate ethnic, social, gender, and class variations.
Such courses, also titled "world literature" and situated as part of the new "multicultural" curriculum, are especially popular in modern high schools aware of the need to facilitate communication inside a diverse audience (Massachusetts Department of Education 76-78). The emphasis on communication through literature, rather than on the commemorative study of canonical works, recalls J. W. Goethe's exchange-oriented definition of world literature. In 1827, the German author coined the term "world literature" to describe a process of reading across national boundaries that was cultural as much as it was literary. Reading a French review of his own work, he was struck by the possibility of seeing one's own image mirrored in another cultural vision and subtly transformed by different habits of mind. Goethe subsequently proposed that contemporary writers—"the living, striving men of letters"—write with a transnational audience in mind, read each other's work

18 May 2000
TL;DR: The study aims at developing optimal representation and recognition of objects from range data, for applications of machine vision in engineering tasks.
Abstract: The subject of this thesis is object recognition from range data. Being developed for applications of machine vision in engineering tasks, the discussion of this subject is restricted to the class of man-made objects where object surfaces can be modelled with quadric primitives. The study aims at developing optimal representation and recognition of the objects from range data.
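The abstract above turns on modelling man-made surfaces with quadric primitives. As a rough sketch of how such a primitive might be recovered from range data (this is not the thesis's own algorithm; the function names and the SVD-based fitting choice are illustrative assumptions), a general quadric can be fitted to 3-D points algebraically as the null direction of a design matrix:

```python
import numpy as np

def fit_quadric(points):
    """Algebraic least-squares fit of a quadric surface to Nx3 points.

    Returns the 10 coefficients (a..j), normalized to unit length, of
    a*x^2 + b*y^2 + c*z^2 + d*xy + e*xz + f*yz + g*x + h*y + i*z + j = 0.
    The fit is the right singular vector of the design matrix that
    corresponds to the smallest singular value.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
    _, _, vt = np.linalg.svd(M)
    return vt[-1]

def quadric_value(coeffs, p):
    """Evaluate the fitted quadric at a point; ~0 means the point lies on it."""
    x, y, z = p
    feats = np.array([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, 1.0])
    return float(coeffs @ feats)
```

For example, points sampled from a unit sphere recover the sphere's implicit equation up to scale, and the fitted quadric evaluates to approximately zero on any held-out sphere point.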

01 Jan 2000
TL;DR: A study of the out-of-class experiences of Saint Mary's College students studying in Maynooth, Ireland, as mentioned in this paper.
Abstract: A STUDY OF THE OUT-OF-CLASS EXPERIENCES OF SAINT MARY’S COLLEGE STUDENTS STUDYING IN MAYNOOTH, IRELAND

Proceedings Article
01 Jan 2000
TL;DR: In this paper, the authors develop a definition of novelty based on the background knowledge contained in a prior model and test it on pneumonia outcome data, using a published pneumonia severity index (PSI) as the prior model.
Abstract: Introduction: One of the challenges of knowledge discovery is trying to identify patterns that are interesting from an abundance of uninteresting patterns. Valdes-Perez provides a working definition of what it means for a pattern to be interesting: "... results are interesting in science X if practitioners of X (or users of the programs) believe them to be interesting". In order to discover patterns that would be thought of as interesting according to this definition, most knowledge discovery problems require a subjective definition of interestingness. In this way the user of the program, or a domain expert, provides the standard by which interestingness is measured. Lenat, in his program AM, and Livingston, in his discovery program HAMB, measure interestingness using a collection of heuristics. Their heuristics are based on domain-independent intuitions about when a conjecture in mathematics or science meets operational definitions of novelty, empirical support, simplicity, and utility. This maps neatly onto other definitions of interestingness provided in the literature, such as "novel, interesting, plausible, and intelligible" and "validity, novelty, usefulness, and simplicity". In all of these definitions, novelty (sometimes called "surprise") plays an important role. In some domains, prior models have been developed that can provide an abundant source of background knowledge. Whether prior beliefs are codified in a well-supported, published model or exist as informal, common knowledge, a discovery program that has access to these beliefs can better judge when new conjectures are novel. The goal of this paper is to develop a domain-dependent definition of novelty based on the background knowledge contained in a prior model, and to test this definition on pneumonia outcome data using a published pneumonia severity index (PSI) as the prior model. To achieve this, we will embed this definition of novelty within a knowledge discovery program, search for classifiers using pneumonia data, and analyze the results for novelty.
Operational Definition of Novelty: Several heuristics were used to determine if a rule was novel with respect to the prior model. Novel rules either contain only features not included in the prior model or use features from the prior model to make a prediction that would not be expected based on the variable's use in the prior model. Novel classification rules could also contain some elements used by the prior model and some elements not used by the prior model. In this case, the new rule must correctly classify test items with greater certainty than the same rule with the unused variables removed. A final heuristic was the complex combination of individual features of the prior model.
Methods: We randomly selected 600 cases from the PORT cohort study data used in validating the PSI, with 70% of these devoted to training and 30% to testing. A rule-learning program (RL) was modified to use this definition of novelty, combined with a prior model, to search for novel rules.
Results: RL produced a set of 16 rules, seven of which were classified as novel by the program. Of these novel rules, four contained features known to be predictive of pneumonia severity but not included in the prior model. Three rules contained a single feature that was not included in the prior model and was not known to be predictive of pneumonia outcome.
Discussion: These results demonstrate the feasibility of using a prior model to select novel rules from a set of rules. Our method selected novel rules using variables obviously equivalent to those in the PSI model, variables known to be predictive of mortality but not included in the PSI model, and one variable that appears to be part of a truly novel relationship with the target class (i.e., not previously thought to be associated with mortality). This is exactly what we wanted: to find interesting patterns that cannot be summarily validated or invalidated, but instead are starting points for future investigation.
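The feature-overlap heuristics in the abstract above can be rendered as a short sketch. This is a hypothetical paraphrase, not the authors' actual RL modification; the function name, the certainty arguments, and the omission of the "complex combination of prior features" heuristic are all assumptions:

```python
def is_novel(rule_features, prior_features,
             certainty_full=None, certainty_reduced=None):
    """Decide whether a rule is novel relative to a prior model's features.

    Paraphrasing the abstract's heuristics:
    1. A rule whose features all lie outside the prior model is novel.
    2. A mixed rule (some prior, some non-prior features) counts as novel
       only if it classifies test items with greater certainty than the
       same rule with the non-prior features removed.
    (The "complex combination of prior features" heuristic is omitted here.)
    """
    rule, prior = set(rule_features), set(prior_features)
    if not rule & prior:
        return True                      # no overlap with the prior model
    if rule - prior:                     # mixed rule: apply the certainty test
        if certainty_full is not None and certainty_reduced is not None:
            return certainty_full > certainty_reduced
        return False                     # no certainty data: cannot call it novel
    return False                         # built entirely from prior-model features
```

For example, a rule using only features absent from the prior model is flagged novel outright, while a rule mixing prior and non-prior features must beat its reduced form on held-out certainty.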

Patent
17 May 2000
TL;DR: In this paper, the authors present a method for handling objects in a hierarchical containment tree, based on the CMIP-type management protocol, where each object is distinguished from other objects by at least one distinctive attribute of its class.
Abstract: The invention concerns a method for handling, in a computer system, at least one object included in at least one hierarchic containment tree held on at least one machine based on the CMIP-type management protocol, the object being distinguished from other objects by at least one distinctive attribute (DN) of its respective class. The invention is characterised in that it consists in: querying the set of objects by means of a request, the query consisting in assembling the instances of a common class distributed at different levels of the class tree; storing the respective distinctive attribute of each instance; and applying said attributes to an operator capable of performing a union, a difference, or an intersection between classes distributed in the class tree.
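Read as pseudocode, the claimed method amounts to walking a containment tree, collecting the distinctive attribute (DN) of every instance of a requested class at any level, and combining the resulting DN sets with set operators. A minimal sketch under those assumptions (the Node structure and function names are hypothetical illustrations, not the patent's interface):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    dn: str                  # distinctive attribute (DN) of this instance
    cls: str                 # managed-object class name
    children: list = field(default_factory=list)

def instances_of(root, cls):
    """Assemble the DNs of all instances of a common class, at any tree level."""
    found, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node.cls == cls:
            found.add(node.dn)
        stack.extend(node.children)
    return found
```

The stored DN sets can then be fed to the operator the abstract describes: union (`|`), difference (`-`), or intersection (`&`) between two classes of the tree.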