
Showing papers on "Class (philosophy)" published in 2003


Proceedings ArticleDOI
12 Jan 2003
TL;DR: In this paper, a new class of metrics appropriate for measuring effective similarity relations between sequences, say one type of similarity per metric, is studied, and a new "normalized information distance", based on the noncomputable notion of Kolmogorov complexity, is proposed.
Abstract: A new class of metrics appropriate for measuring effective similarity relations between sequences, say one type of similarity per metric, is studied. We propose a new "normalized information distance", based on the noncomputable notion of Kolmogorov complexity, and show that it minorizes every metric in the class (that is, it is universal in that it discovers all effective similarities). We demonstrate that it too is a metric and takes values in [0, 1]; hence it may be called the similarity metric. This provides a theoretical foundation for a new, general, practical tool. We give two distinctive applications in widely divergent areas (the experiments by necessity use just computable approximations to the target notions). First, we computationally compare whole mitochondrial genomes and infer their evolutionary history. This results in the first completely automatically computed whole mitochondrial phylogeny tree. Second, we give a fully automatically computed language tree of 52 different languages based on translated versions of the "Universal Declaration of Human Rights".
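
Since Kolmogorov complexity K is noncomputable, experiments of this kind substitute a real compressor for K. A minimal Python sketch of that standard compression-based approximation (the normalized compression distance), assuming zlib as the stand-in compressor, which is not necessarily the compressor used in the paper's experiments:

    import zlib

    def c(b: bytes) -> int:
        # compressed length as a crude stand-in for Kolmogorov complexity K(b)
        return len(zlib.compress(b, 9))

    def ncd(x: bytes, y: bytes) -> float:
        # normalized compression distance: a computable approximation of the
        # normalized information distance max(K(x|y), K(y|x)) / max(K(x), K(y))
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # values near 0 suggest strong similarity, values near 1 dissimilarity
    print(ncd(b"ACGTACGTACGT", b"ACGTACGTACGA"))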

145 citations


Journal ArticleDOI
TL;DR: This paper shows that no RCC model can be interpreted extensionally, hence giving a negative answer to a conjecture raised by Bennett, and attaches to each cell entry in the RCC8 composition table (CT) a superscript to indicate in what circumstances an extensional interpretation is possible.

112 citations


Journal ArticleDOI
TL;DR: In this article, a didactic account of a theoretical-methodological nature is given of how the category of "race" is used in the author's research, in connection with other categories such as "color", "ethnicity", "class", "nation", "people", and "state".
Abstract: In a didactic account of a theoretical-methodological nature the author explains how the category of "race" is used in his research, in connection with other categories such as "color", "ethnicity", "class", "nation", "people", "state", etc. Assuming that concepts, theoretical or otherwise, can only be applied and understood within their discursive contexts, the author establishes the distinction between "analytical" and "native" concepts, that is, between categories extracted from a theoretical corpus and those that comprise the discursive universe of the subjects being analyzed but that must nevertheless be employed by the sociologist. In the central part of the text, the author sketches a history of the meanings of the category "race" in Brazil and of the various explanations of the nature of the relations between white and black people put forward by Sociology: starting with the 1940s pioneering work of Donald Pierson, going through the UNESCO studies of the 1950s and the work of the so-called "Sao Paulo School" in the 1960s, up to the more recent revival of the theory of "racial democracy" in close dialogue with Black movements. The author concludes the article with a brief discussion of the various questions or stimuli given in surveys for the definition and measurement of the color or race variable.

109 citations


Book ChapterDOI
08 Sep 2003
TL;DR: It is argued that widening the notion of software development to include specifying the behaviour of the relevant parts of the physical world gives a way to derive the specification of a control system and also to record precisely the assumptions being made about the world outside the computer.
Abstract: Well understood methods exist for developing programs from given specifications. A formal method identifies proof obligations at each development step: if all such proof obligations are discharged, a precisely defined class of errors can be excluded from the final program. For a class of "closed" systems, such methods offer a gold standard against which less formal approaches can be measured. For "open" systems, those which interact with the physical world, the task of obtaining the program specification can be as challenging as the task of deriving the program. And, when a system of this class must tolerate certain kinds of unreliability in the physical world, it is still more challenging to reach confidence that the specification obtained is adequate. We argue that widening the notion of software development to include specifying the behaviour of the relevant parts of the physical world gives a way to derive the specification of a control system and also to record precisely the assumptions being made about the world outside the computer.

69 citations


Patent
30 Apr 2003
TL;DR: In this article, a mechanism for allowing new functionality for an object to be expressed as a property that is not built into the class from which the object derives is described, and the mechanism associates properties in one class with another class.
Abstract: Described is a mechanism for allowing new functionality for an object to be expressed as a property that is not built into the class from which the object derives. More specifically, the mechanism associates properties in one class with another class. A computer-readable medium that includes an object having a property in a first set of properties further includes a data structure. The data structure includes definitions for each of a second set of properties and includes at least one static method. The static method is associated with one property out of the second set of properties and includes a first parameter. The first parameter uniquely identifies the one property. The static method is operative to associate the one property with the object without specifying an explicit reference to the one property in the object. The property is registered at run-time in order to receive the unique identifier.
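
A hedged Python sketch of the general idea (all names hypothetical; the patent describes the mechanism abstractly, not this code): a property defined and registered by one class is associated with instances of unrelated classes through static-method-style accessors, so the target class never declares the property.

    class AttachedProperty:
        _values = {}  # (object id, property key) -> value

        def __init__(self, owner_name, prop_name, default=None):
            # the unique identifier obtained at registration time
            self.key = f"{owner_name}.{prop_name}"
            self.default = default

        # accessors in the style of static methods: the target object's class
        # never declares or references the property
        def set_on(self, obj, value):
            AttachedProperty._values[(id(obj), self.key)] = value

        def get_on(self, obj):
            return AttachedProperty._values.get((id(obj), self.key), self.default)

    # usage: a layout-like class defines a property and attaches it to a plain object
    grid_column = AttachedProperty("Grid", "Column", default=0)

    class Button:  # hypothetical class; knows nothing about Grid.Column
        pass

    b = Button()
    grid_column.set_on(b, 2)
    print(grid_column.get_on(b))  # 2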

64 citations


Patent
16 May 2003
TL;DR: In this article, a method and data structure that enables an object to be specified declaratively within a markup document is described, where the object is written based on a mapping.
Abstract: Described is a method and data structure that enables an object to be specified declaratively within a markup document. The markup document may be XML-based. In accordance with the invention, the object is written based on a mapping. The mapping includes a URL attribute for defining the location of a definition file having assemblies and namespaces where classes are specified. The class name is mapped to the markup document as a tag name. Properties and events of a class are mapped to attributes of the tag associated with the class. The method further includes parsing the markup language to create a hierarchy of objects. Attributes that do not map directly to a class are defined with a definition tag. The definition tag is also used to add programming code to a page.
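
A minimal Python sketch of the tag-to-class and attribute-to-property mapping (Button and TAG_MAP are hypothetical stand-ins; the patent obtains the mapping from a URL-referenced definition file, which is elided here):

    import xml.etree.ElementTree as ET

    class Button:  # hypothetical class exposed to the markup
        def __init__(self):
            self.Text = ""
            self.Width = 0

    TAG_MAP = {"Button": Button}  # tag name -> class, standing in for the definition file

    def parse(markup: str):
        # build a hierarchy of objects: tags name classes, attributes set properties
        def build(elem):
            obj = TAG_MAP[elem.tag]()
            for name, value in elem.attrib.items():
                # coerce the attribute string to the property's existing type
                setattr(obj, name, type(getattr(obj, name))(value))
            obj.children = [build(child) for child in elem]
            return obj
        return build(ET.fromstring(markup))

    root = parse('<Button Text="OK" Width="100"/>')
    print(root.Text, root.Width)  # OK 100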

48 citations


Book ChapterDOI
13 Oct 2003
TL;DR: Two new methods for the definition of integrity constraints in object-oriented conceptual modeling languages are proposed: one applies to static constraints and the other to creation-time constraints.
Abstract: We propose two new methods for the definition of integrity constraints in object-oriented conceptual modeling languages. The first method applies to static constraints and consists in representing them by special operations that we call constraint operations. The specification of these operations is then the definition of the corresponding constraints. The second method, which is a slight variant of the first, applies to creation-time constraints, a particular class of temporal constraints. Both methods allow the specialization of constraints and the definition of exceptions. We also include an adaptation of the two methods to UML.
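
The first method translates naturally into code even though the paper works at the level of conceptual modeling languages. A hedged Python sketch with invented names: each static constraint is represented by a constraint operation whose specification is the constraint itself, and specialization or exceptions are expressed by overriding.

    class Person:
        def __init__(self, name, birth_year, death_year=None):
            self.name, self.birth_year, self.death_year = name, birth_year, death_year

        # constraint operation: its specification *is* the static constraint
        def check_lifespan(self):
            assert self.death_year is None or self.death_year >= self.birth_year

    class LegendaryFigure(Person):
        # specialization defining an exception: the constraint is relaxed
        def check_lifespan(self):
            pass

    def check_all(obj):
        # run every constraint operation (here: methods named check_*)
        for attr in dir(obj):
            if attr.startswith("check_"):
                getattr(obj, attr)()

    check_all(Person("Ada", 1815, 1852))  # passes silently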

25 citations


Journal ArticleDOI
TL;DR: This article defines and classifies FS according to Raupach's contextual list (1984) and lexical criteria, differentiates it from creative speech, and presents FS as a pragmatic concept challenging the usual conceptions of language acquisition as an analytical process.
Abstract: This study looks into one context of Formulaic Speech (FS) usage: the partial L2 immersion class. It tries to define and classify FS according to Raupach's contextual list (1984) and lexical criteria, as well as to differentiate it from creative speech. FS is presented mostly as a pragmatic concept challenging the usual conceptions of language acquisition as an analytical process. Also challenged is the idea that language production is based on analysis of the input followed by production out of parsed output. In a Second Language Acquisition perspective, FS is shown to be a temporary stage of acquisition which, among other aspects, enables the speaker to reach idiomaticity in his or her L2 and thereby communicate efficiently with native speakers. Keywords: Formulaic speech, Second language acquisition, Second language production, Partial immersion, Communicative competence.

20 citations


Journal ArticleDOI
TL;DR: It is shown that the set of elementary components can be grouped into pairwise disjoint families determined by the "two-way accessible" relationship among them.

19 citations


Book ChapterDOI
TL;DR: In this article, three defects of UML's use-case class are identified: the use-case class and its instance are unusually defined; a conjecture that contradicts the definition of the class is introduced without any reasons; and the execution procedure of a use-case instance does not actually work because of flaws concerning the execution control.
Abstract: UML is revealed to contain three different defects concerning the use-case class that were buried in OOSE and handed over to it. These defects are: 1) the use-case class and its instance are unusually defined, 2) a conjecture that contradicts the definition of the class is introduced without any reasons, and 3) the execution procedure of a use-case instance does not actually work because of some flaws concerning the execution control. These defects have been causing unnecessary confusion in UML's specification of the use-case class. An object-oriented real-world model is built that represents a typical situation of using a use case in the analysis and design stages, and another definition of the use-case class is constructed that successfully solves the problems.

15 citations


Patent
13 Aug 2003
TL;DR: An on-line analysis data model for a geographical information system, together with an object-oriented method for associating attribute data, in which each geographical entity (object) is treated as a member of a set and each object class belongs to a higher-level class and inherits its characteristics.
Abstract: An on-line analysis data model of a geographical information system, and the object-oriented method of associating attribute data, are characterized in that each geographical entity (object) is considered a member of a set; each object class belongs to a higher-level class and inherits its characteristics; the association between spatial data and attribute data is realized by on-line analytical processing (OLAP); and the auxiliary information appended to the spatial data and the "view set" in the attribute data are linked and stored in the form of a 2D relational table for expressing complex spatial relations and network relations.

Journal ArticleDOI
TL;DR: It does not trouble this reviewer that the resulting MAXCOV device requires judgment and does not supply standard errors or tests of "significance"; the basic insight is perfectly obvious as soon as it is stated, yet it is not a solution that springs to mind easily in response to the stated problem.
Abstract: It is an easy but unfair tactic for a reviewer to complain that the title of a text is misleading. The stated objectives of Waller and Meehl's brief monograph are (a) to describe Meehl's MAXCOV device for detecting "taxonicity", that is to say, for detecting a latent profile model with just two latent classes, (b) to describe two further procedures for this purpose, MAXEIG and L-mode, and (c) to discuss some of the philosophical misconceptions pertaining to "taxonic" notions, notions concerning types versus traits. Paul Meehl's inventive spirit informed much of the social science philosophy I learned from my undergraduate days. I consider it my own fault that I had not previously come into contact with his contributions to this class of psychometric problem. The basic insight behind the MAXCOV device has the admirable quality of all truly inventive devices: it is perfectly obvious as soon as it is stated, but it is not a solution that springs to mind easily in response to the stated problem.

Essentially, Meehl's conception is this: if each of a set of individuals belongs to one of just two latent classes, and within each, p quantitative indicator variables are uncorrelated (at least to a good approximation), membership is probabilistically indicated by any one of the p variables. Accordingly, when conditioned on one ("input") indicator, a number of statistics of other ("output") indicators will vary as a function of the conditioning variable. We could call this wide-sense heteroskedasticity. If, in particular, we compute the covariance of two output variables within successive fractiles defined by an input variable, a plot of the sequence will show covariances lowest at the extremes, where the input indicator selects nearly pure types, and a maximum at the cut-point where members of the two classes are selected about equally into the fractile. Given more than three indicators, we may condition on each in turn and average results to get a best cut-point. It does not trouble this reviewer that the resulting MAXCOV device requires judgment, and does not supply standard errors or tests of "significance." Chapter 3 treats this device, but for a more general account the reader must pursue the references. Chapter 4 describes MAXEIG, an extension of the principle to conditioning the largest eigenvalue of the covariance matrix of remaining indicators on overlapping fractiles of one quantitative indicator. An adjustment intended to eliminate the …
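
The conditional-covariance computation described above is easy to state in code. A minimal numpy sketch (function and parameter names are invented here, not Waller and Meehl's):

    import numpy as np

    def maxcov_curve(inp, out1, out2, n_fractiles=10):
        # covariance of two output indicators within successive fractiles of
        # the input indicator; under a two-class (taxonic) latent structure the
        # curve is lowest at the extremes and peaks near the cut-point
        order = np.argsort(inp)
        return np.array([np.cov(out1[idx], out2[idx])[0, 1]
                         for idx in np.array_split(order, n_fractiles)])

    # condition on each indicator in turn and average the curves to locate a
    # best cut-point, as the review describes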

Book
01 Jan 2003
TL;DR: In this paper, the authors examined the influence of teaching approaches on thoughts and practice behaviors of students, and how those thoughts and behaviors affect transfer of learning, and found that lower-skilled students were responsible for improvements over time.
Abstract: The purpose of the study was to examine the influence of teaching approaches on thoughts and practice behaviors of students, and how those thoughts and behaviors affect transfer of learning. First, a self-report instrument for assessment of the cognitive processes that mediate motor skill outcomes was validated. The cognitive processes included prior knowledge usage, self-efficacy, critical thinking and attention-concentration. University students who had taken a physical activity class (N=409) completed the questionnaires. Three of the initial four subscales were confirmed as fitting the data. Attention-concentration was dropped, probably because it was an element of critical thinking. In a university golf activity class, students were assigned to three groups for instruction to learn a golf-pitching task: guided discovery (scaffolded movement challenges using task cards to learn movement concepts), a model group (students were presented concepts and shown a correct model) and a control group (received no information except the initial basic instruction the other two groups also received). Instruction lasted six days. Skill performance scores, form scores and self-report cognitive measures (cognitive processes questionnaire and strategies students used to be successful) were recorded. Results indicated that it was the lower-skilled students who were responsible for improvements over time. Students used different strategies depending upon the instruction they received: students in the trial-and-error (control) group used attentional strategies, those in the correct-model group reported that technique related to posture and grip helped, and the guided discovery group clearly concentrated on applying concepts to be successful. However, no differences in transfer were evident. It is possible that guided discovery students did not have enough time to translate their understanding into outcomes.

03 Aug 2003
TL;DR: Two types of techniques for interpolation are explored: class-based methods generally lead to overly smooth signals, while contextual-based ones, on the contrary, tend to generate spurious details.
Abstract: A ubiquitous problem in signal processing is to obtain data sampled with the best possible resolution. At the acquisition step, the resolution is limited by various factors such as the physical properties of the captors or the cost. It is therefore desirable to seek methods which allow one to increase the resolution after acquisition. This is useful for instance in medical imaging or target recognition. In some applications, one disposes of several low resolution overlapping signals [1]. In more general situations, a single signal is available for superresolution. Interpolation then requires that the available data be supplemented by some a priori information. Two types of methods have been explored: In the first, "class-based" one, the signal is assumed to belong to some class, with conditions expressed mainly in the time or frequency domain [3, 5, 10]. This puts constraints on the interpolation, which is usually obtained as the minimum of a cost function. The second type of approach hypothesizes that the information needed to improve the resolution is local and is present in a class of similar signals [2, 4]. This type of approach could be called "contextual". Both "class-based" and "contextual" approaches use a "model" for interpolation: The "class-based model" is that the signal belongs to an abstract class characterized by a certain mathematical property. The "contextual model" is that the signal will behave locally under a change of resolution in a way "similar" to other signals in a given set, for which a high resolution version is known. Both types of techniques have some drawbacks. Roughly speaking, class-based methods generally lead to overly smooth signals, while contextual-based ones, on the contrary, tend to generate spurious details.
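
For concreteness, one classic instance of the "class-based" family (an illustration of the family, not any specific method from the cited references): if the signal is assumed to belong to the class of bandlimited signals, interpolation reduces to zero-padding the spectrum, and the characteristic over-smoothness follows directly. A Python sketch:

    import numpy as np

    def upsample_bandlimited(x, factor):
        # assume x is bandlimited: its high-resolution version has the same
        # spectrum, so interpolation = zero-padding in the frequency domain
        X = np.fft.rfft(x)
        n = len(x) * factor
        X_pad = np.zeros(n // 2 + 1, dtype=complex)
        X_pad[:len(X)] = X
        return np.fft.irfft(X_pad, n) * factor  # rescale for the longer length

    x_hi = upsample_bandlimited(np.sin(np.linspace(0.0, 6.28, 32)), 4)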

Patent
14 Oct 2003
TL;DR: Methods and apparatus to analyze the escape analysis of an application are described: one or more methods associated with a violating condition of the application are identified, parsed into equivalence classes, and propagated based on a pair of escape indicators.
Abstract: Methods and apparatus to analyze escape analysis of an application are described herein. In an example method, one or more methods associated with a violating condition of the application are identified. The one or more methods are parsed into at least one equivalence class. A first escape indicator and a second escape indicator associated with each of the at least one class are identified. Based on the first and second escape indicators, the one or more methods are propagated.

Journal Article
TL;DR: This paper uses an abstract form of the object calculus to check the absence of race conditions, and shows that abstract interpretation is more flexible than type analyses and makes it possible to certify as "race free" a larger class of programs.
Abstract: In this paper we investigate the use of abstract interpretation techniques for statically preventing race conditions. To this purpose we enrich the concurrent object calculus concς by annotating terms with the set of "locks" owned at any time. We use an abstract form of the object calculus to check the absence of race conditions. We show that abstract interpretation is more flexible than type analyses, and that it makes it possible to certify as "race free" a larger class of programs.
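
The paper's development is an abstract interpretation of the lock-annotated object calculus; as a rough executable illustration of the same lock-set idea (a different, simpler technique, with all names invented), one can track the locks held at each access to a field and flag fields whose accesses share no common lock:

    def lockset_analysis(accesses):
        # accesses: iterable of (field, frozenset of locks held at the access)
        candidate, races = {}, set()
        for field, held in accesses:
            # intersect the lock sets seen so far for this field
            candidate[field] = candidate.get(field, held) & held
            if not candidate[field]:
                races.add(field)  # no lock consistently guards this field
        return races

    trace = [("balance", frozenset({"m"})), ("balance", frozenset())]
    print(lockset_analysis(trace))  # {'balance'}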

Journal ArticleDOI
TL;DR: A review essay on Ortner's New Jersey Dreaming, which sketches the history of class in the second half of the twentieth century in the United States, examining how that project is achieved through ethnographic writing.
Abstract: In this review essay we are particularly intrigued by New Jersey Dreaming's theoretical and historical arguments in relation to its mode of ethnographic representation and writing. We admire the book's enormous ambition (to sketch the history of class in the second half of the twentieth century in the United States) and we are interested in examining how this project is achieved through ethnographic writing. The strengths and failings of New Jersey Dreaming are ones that should be of great interest to a very wide readership: to all of us who struggle with the articulation of political economy and persons, and more specifically to those engaged with the complexities of race, ethnicity, class, gender, and place in the contemporary United States. In this essay we aim to sketch the work's contributions, and to think about its rougher edges and spottier patches as well, edges and patches that speak eloquently to the costs of particular modes of ethnographic representation and ambition.

Class and History. Ortner's is a capacious project indeed, both theoretically and historically. Ortner's "class" project for the class of 1958 (rendered as "capital" in the book's title) is huge. Against the invisibility of class in American popular culture and discourse (see also Ortner 1991), Ortner is determined to render class visible, to read class. Ortner's ambition begins with her numerical quest: to account for the whole class of 1958. Her efforts to locate her 303 classmates of Newark, New Jersey's Weequahic High School are simply indefatigable; we marvel at her tenacity and creativity in this pursuit. Ortner's own review of these efforts, and the account offered in Appendix 1, "Finding People," by her classmate Judy Epstein Rothbard whom she hired to assist her, is a wonderful lesson in how class articulates with people's ability to be reached. Revealingly, the relatively dispossessed are significantly over-represented among those who have somehow fallen off the social map. In Appendix 4, Ortner lists the vitals (or their lack) of every single classmate. One senses that hers is an offering to perpetuity: a local data set fine-tuned to matters of class and social mobility that an array of future scholars might find very handy, and perhaps quite remarkable. Ortner's numerical ambition, however, is larger than a mere accounting for the "whole" class. The book offers nearly two dozen fourfold tables (i.e., 4 squares) designed to consider the interaction between "capital" (i.e., class) on the left axis and the top axis, which gauges "some kind of personal agency" (205). One of Ortner's guiding premises is that the agentive top axis is the culturally hegemonic one in the United States: namely, the idea that it is grit and determination that determines the course of people's lives. Against this cultural grain, Ortner joins those determined to "bring class back in" (263). Much of the book then aims to explore the workings of class, not only in terms of outcomes, but in practice: in the fabric, feeling, and narration of daily life. Ortner's is a tough job as she aims to combine "objectivist moments" (a term she takes from Bourdieu, 264), in which she is compelled to make an axis or grid of class, with class in all its intersections and complexity, so as to be true to the cultural or discursive turn in the human sciences (263).
In bringing class back in, Ortner means to tell two (and in fact many more) historical stories: that of the upward mobility of Weequahic High's Class of '58, and more broadly that of the changing class configuration of the United States in the second half of the twentieth century, a reconfiguration that Ortner considers a signpost of "late capitalism." The story closer to the ethnography's home, Weequahic High's Class of '58, is about the remarkable entry of Jewish men (and secondarily of African American men) into the PMC, the professional managerial class, which Ortner describes as the "fat top" of today's class hourglass (273). …

Patent
18 Apr 2003
TL;DR: An automated symbolic recognition system and method builds its feature set by identifying (a) a set of logical symbols comprising a finite class of arcpolys (lines, arcs, and a point) in which, with the exception of the point, each member class has a unique (distinct) orientation, and (b) a set of subclass symbols per logical symbol class in which each subclass member has a unique (distinct) extreme-points size and/or depth size.
Abstract: An automated symbolic recognition system and method includes the basis for the system's feature set by identifying (a) a set of logical symbols comprising a finite class of arcpolys (lines, arcs, and a point) such that, with the exception of the point, each member class has a unique (distinct) orientation, and (b) a set of subclass symbols per logical class of symbol representing a finite subclass of arcpolys (lines, arcs, and a point) such that, with the exception of the point, each subclass member has a unique (distinct) extreme-points size and/or depth size.

01 Jan 2003
TL;DR: The PMC equation model (an equation description of the PMC model) is first defined, and a concrete method, the equation diagnosis algorithm, is derived that seeks all consistent fault patterns for this class of models without supposing "t-diagnosable" or "believe the most".
Abstract: The definition of the PMC equation model (that is, an equation description of the PMC model) is first set up. The PMC model is one of the most common system-level fault models. Traditional graph-theoretic diagnosis algorithms cannot do without the assumptions of "t-diagnosable" and "believe the most" (that is, supposing that the number of faulty processors is less than half the number of all processors in the system). For this reason, concepts such as the "absolute fault base" are defined and the tool of "body grouping" is fully exploited; finally, a concrete method, the equation diagnosis algorithm, is obtained that seeks all consistent fault patterns for this class of models without supposing "t-diagnosable" or "believe the most".
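
A brute-force Python sketch of the consistency condition the algorithm works with (the paper's equation diagnosis algorithm is a more structured method; this only enumerates): under the PMC model a fault-free tester reports the tested unit's status correctly, while a faulty tester's report is arbitrary, so a fault set F is consistent with a syndrome iff every test by a unit outside F matches membership of the tested unit in F.

    from itertools import combinations

    def consistent_fault_sets(n, syndrome):
        # syndrome: dict mapping (tester, tested) -> outcome (0 pass, 1 fail)
        out = []
        for k in range(n + 1):
            for F in map(set, combinations(range(n), k)):
                # tests by faulty units (u in F) are ignored: arbitrary outcome
                if all(u in F or r == (v in F)
                       for (u, v), r in syndrome.items()):
                    out.append(F)
        return out  # all fault patterns consistent with the syndrome

    print(consistent_fault_sets(3, {(0, 1): 0, (1, 2): 1, (2, 0): 0}))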

Bjarne Stroustrup
01 Jan 2003
TL;DR: This note proposes a notion of user-defined literals based on literal constructors without requiring new syntax; combined with the separate proposal for generalized initializer lists, it becomes a generalization of the C99 notion of compound literals.
Abstract: This note proposes a notion of user-defined literals based on literal constructors without requiring new syntax. If combined with the separate proposal for generalized initializer lists, it becomes a generalization of the C99 notion of compound literals. Basically, a constructor defines a user-defined literal if it is inline, specifies a simple mapping of its arguments to object representation values, and is invoked with constant expressions or objects that can be trivially copied (such as pointers).

The Problem. C++ does not provide a way of defining literals for user-defined types. Instead, constructors are used. For example:

    15          // int literal
    "15"        // string literal (zero-terminated array of characters)
    complex(15) // "sort of complex literal"

When a constructor is simple and inlining is done well, such constructor calls provide a reasonable substitute for literals. However, a constructor is a very general construct, and there have been many requests for a way to express literals for user-defined types in such a way that a programmer can be confident that a value will be constructed at compile time and potentially stored in ROM. For example:

    complex z(1,2);        // the variable z can be constructed at compile time
    const complex cz(1,2); // the const cz can potentially be put in ROM

A Solution. The most direct and obvious solution would be to introduce syntax to distinguish a literal constructor and to distinguish literals of user-defined types. For example:

    class X {
        int x,y,z;
    public:
        literal X(int a, int b) :x(a+1),y(0),z(b) {} // literal constructor
        // ...
    };

    X"1,2" // a literal of type X

This syntax is just for the illustration of the idea; a better syntax is suggested below. This "literal constructor" illustrates the requirements for any specification of a literal for a user-defined type. It specifies a (simple) mapping from a set of arguments to the representation of the type. Often, that will simply specify a value for each member of the type's representation, but slight generalizations are possible and sometimes useful. Here, I have indicated that the member y's value need not be specified by the user and that a slight transformation takes place on the argument used to specify x (x becomes a+1). The body is empty. Since the construction of the representation of a value takes place at compile time, very few constructs could reasonably be allowed in a literal constructor body. The simplest rule would be to require that body to be empty. That is, the mapping of arguments to representation (member values) must be specified as member initializers. In addition, a literal constructor must be inline.

What can be accepted as an argument type? An argument must be of a type that can be copied without the use of a nontrivial copy constructor (e.g. ints, pointers, and references). What can be accepted as an initializer? An initializer can be another argument, a value of a type that can be copied without the use of a non-trivial copy constructor, or a constant expression. Note that this definition is recursive in that it allows literals of user-defined types to be used as arguments. For example:

    class Y {
        complex x, y;
        literal Y(complex a, int b) : x(a), y(complex"a,0") {}
        // ...
    };

    const int c = 3;
    Y"complex"1,2",c";

This simple definition could be elaborated. For example, should we accept floating point expressions, such as d+1.7 where d is an argument of floating point type?
I think not. Even if d is a literal, so that the expression to be evaluated is something like 2.3+1.7, I suspect that the complication of requiring floating point arithmetic at compile time is not worth the bother, especially for cross compilers.

Syntax for user-defined literals. The syntax used to illustrate the idea of a "literal constructor" above has some obvious problems. Consider that last use:

    const int c = 3;
    Y"complex"1,2",c";

That use of quotes (chosen to emphasize the literal nature of the construct) would clearly confuse any traditional lexer (and many human readers). Also, it doesn't exactly extend to string literal arguments. Furthermore, it would not be easy to get used to the idea that elements of the string-like part are separate values and that variables can occur there. I think that a much more natural (i.e. familiar) and readable notation would use parentheses:

    const int c = 3;
    Y(complex(1,2),c);

After all, parentheses are the way we usually express arguments. However, by doing so, we lost the syntactic distinction of the literal. That is, …

Book ChapterDOI
01 Jan 2003
TL;DR: So-called "definitions" of equality and difference with respect to number attempt to define concepts that, because of their elemental character, are neither capable of definition nor in need of it; these definitions merit refutation because they have led to a class of definitions of the number concepts themselves.
Abstract: Ever since Euclid’s “Elements” attained the status of the model of scientific exposition, mathematicians have followed the principle of not regarding mathematical concepts as fully legitimized until they are well-distinguished by means of rigorous definitions. But this principle, undoubtedly quite useful, has not infrequently and without justification been carried too far. In overzealousness for a presumed rigor, attempts were also made to define concepts that, because of their elemental character, are neither capable of definition nor in need of it. Of this sort are the so-called “definitions” of equality and difference with respect to number whose refutation will now engage us. And they have indeed a special claim on our interest precisely because they have led to a class of definitions of the number concepts themselves. These definitions, baseless and scientifically useless, have nevertheless, in virtue of a certain formal character, found favor among mathematicians and among the philosophers influenced by them.

Dissertation
01 Jan 2003
TL;DR: The preverb can be defined as the prefix of a verb and is best apprehended as an "element of relation" bringing into contact an "entity-located" and an "entity-landmark" within the framework of a process.
Abstract: The preverb can be defined as "the prefix of a verb". It is best apprehended as an "element of relation" bringing into contact an "entity-located" and an "entity-landmark" within the framework of a process. A contrastive study of the preverbs ad-, in-, ob- and per- in the class of verbs of agentive movement makes it possible to characterize the specific value of each one. We note in particular that ob- has a value other than that of "face-to-face": the "covering", which seems closer to the original meaning of the preverb. In addition, the difference between "approach" and "entry" is not enough to account for what opposes ad- and in-: the former often underlines the initial distance between entity-located and entity-landmark, while the "double limit" can mark an easy access or give a touch of hostility. Owing to the semantic study of the preverbs, we can establish some characteristics that concern the preverb more generally: the constitution of paradigmatic series, a semantic principle of affinity, a continuum between "internal range" and "external range", and the importance of the geometrical representation lent to the referent of the landmark. A final question relates to the links between the preverb and the verbal base. It seems necessary to give up attributing an aspectual value to the preverb of an inchoative verb: what the preverb characterizes is rather the extent to which the subject is affected by the predicated transformation. In parasynthesis, the prefixal element seems to be a functional preverb. Syntactically, duplication does not seem to be a free alternative, but always finds a justification, whether it concerns expressivity or the constraints related to the semantics of the roles. Being neither a preposition nor a "co-verb", the preverb largely deserves a specific study. Preverbation, a process of lexical creation, thus belongs to the more general question of the modeling of reality through language.

Journal ArticleDOI
01 Jan 2003-Scripta
TL;DR: In this paper, the author distinguishes between general approaches to the study of ritual: ritual theory in the strict sense, which focuses on RITUAL as such (what IT is, what IT does, how IT works, and why IT is as it is), and softer varieties, which focus on RITUALS in a semi-empirical and semi-theoretical fashion.
Abstract: The topic presented in this paper lies at the crossroads between ritual studies and ritual theory. In order to get an idea of the field of study, it may be useful to distinguish between the following general approaches to the study of ritual. To begin with, ritual theory in the strict sense, i.e. with explanatory ambitions etc., tends to focus on RITUAL as such: what IT is, what IT does, how IT works ("functions"), and why IT is as it is. Softer varieties of ritual theory, e.g. approaches that wish to foster a better "understanding" of what goes on when rituals are being performed, may focus on RITUALS in a semi-empirical and semi-theoretical fashion. As a matter of fact, to a large extent ritual "theory" seems to be the result of theoretical reflections on matters of empirical research. Apart from that, we find studies of this and that phenomenon (e.g. time, space, violence, aesthetics, media, etc.) in relation to rituals ("ritual and time", "ritual and space", etc.). Then, of course, we have a good dose of studies on different "types", "classes", or "groups" of rituals. Most popular (in the absence of any statistical evidence) are studies of "sacrifice", "rites of passage", and "initiations", with "healing rituals" and "pilgrimages" as ever more successful runners-up. Correspondingly, there are a number of studies about any variety of any class of rituals among the so-and-so people ("initiation among the NN"). Moreover, there are plenty of books about the rituals of this and that religion or people, in colonial times often published under such titles as "The customs and ceremonies of the NN". Last but not least, there is an overwhelming amount of studies devoted to the presentation or analysis of single rituals.

Journal Article
TL;DR: Category-trained preschoolers, mostly from poverty-level backgrounds, became proficient at giving ordered, complete, precise, and diverse descriptive statements about objects, whereas comparison preschoolers attending the same DISTAR Language classes, but not taught the statements, were much less productive in category and descriptor usage.
Abstract: Six preschool children, mostly from poverty-level backgrounds, were taught to make descriptive statements about objects. Each statement began with one of 16 abstract category terms (e.g., parts, color, texture, location) and was followed by a more descriptive term, as in, "The parts of this hammer are a handle and a claw" and "The colors of this hammer are black and silver." The category-descriptor statements were organized and sequenced into four clusters and dealt with these object categories: (a) name, class membership, parts; (b) color, shape, size, and luster; (c) weight, texture, density, and material made of; and (d) object use, how used, who uses, where located, and when used. As sets of new statements were successively taught and evaluated by a multiple-probe design, the number and diversity of probed category and descriptor terms steadily and substantially increased across the four clusters, and these verbal behaviors were maintained one to two months after training. Whereas all category-trained (CT) preschoolers were proficient at giving an ordered account of complete, precise, and diverse statements both for practiced objects and non-practiced objects, comparison preschoolers attending the same DISTAR Language classes, but not taught statements, were much less productive in category and descriptor usage. The CT children also produced a much larger number of same/not-same object comparisons, especially when provided with prompts that highlighted the general features of certain clusters. Key Terms: Language production and generalization; describing objects; same-difference comparisons; verbal fluency; multiple-probe design; preschool children; DISTAR LANGUAGE

**********

Being able to provide extensive, relevant, and varied descriptions of objects is fundamental to many early communication and cognitive activities (Hart & Risley, 1995; Krauss & Glucksberg, 1977; Solso, 1995; Weisberg, 1992, 2002). Indeed, the ability to generate and expand upon object descriptions is emphasized in many language assessment and instructional tasks. Some of these descriptions appear in tasks that deal with definitions, opposites, analogies, and object comparisons (Alexander et al., 1989; Richard & Hanner, 1995; Wechsler, 1989; Zimmerman, Steiner, & Pond, 1992). Others include the ability to identify common objects that belong to certain classes, for instance, naming objects in the class of vehicles, tools and clothing (Engelmann & Osborn, 1976, 1977, 1987). Moreover, tasks containing these skills have formed the basis for modifying and evaluating language expression in early childhood intervention programs (Karnes, 1973; Mosley & Plue, 1983). Listing, modifying, and recombining the attributes or descriptors of objects has also been suggested as a way to create unique object descriptions and uses (Halpern, 1989). Many children, when called upon to give descriptive accounts of objects, unfortunately end up with incomplete, imprecise, and idiosyncratic descriptions that fail to communicate. Hart and Risley (1974) found that the free-play material requests of many preschool children from poverty-level backgrounds frequently did not identify the object by its name. Instead, they often said, "I want that." Once an appropriate request was taught, as in, "I want a car," the children often neglected to mention a descriptor that would distinguish one car from another either by its color, size, shape, and, in some cases, by its number.
Hart and Risley (1968, 1974) taught these preschoolers to request play objects by referring to the object's name and a color attribute, as in, "I want the red truck". Such statements were prompted initially by the teachers, who asked, "What color truck do you want?" In these procedures the child was not required to use the dimensional term "color" in his/her answer. In addition, only adjectives or descriptors relating to color (e.g., red, orange, etc.) were explicitly taught, though adjective-noun requests from non-taught dimensions relating to shape, size and number were also recorded. …

Book ChapterDOI
01 Jan 2003
TL;DR: This paper addresses the question of whether it is possible to build a supervaluational semantics for the language of first order predicate logic with an added binary majority quantifier in such a way that a counterpart of the monotonicity condition is satisfied, and proposes a definition of a class of semipartial models for which it can be done.
Abstract: This paper addresses the question of whether it is possible to build a supervaluational semantics for the language of first order predicate logic with an added binary majority quantifier in such a way that a counterpart of the monotonicity condition is satisfied. I will show that this is not possible in the general case (for all partial models) and will propose a definition of a class of semipartial models for which it can be done. These remarks can be generalized at least to all binary quantifiers that are not left-monotone.
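
For orientation, the truth condition commonly used for the binary majority quantifier in total models ("most x that are φ are ψ"); the supervaluational question is how to extend this to partial models (the notation below is an assumption, not necessarily the paper's):

    \[
    \mathfrak{A} \models Mx\,(\varphi, \psi)
    \quad\Longleftrightarrow\quad
    \bigl|\{\, a \in A : \mathfrak{A} \models \varphi[a] \wedge \psi[a] \,\}\bigr|
    \;>\;
    \bigl|\{\, a \in A : \mathfrak{A} \models \varphi[a] \wedge \neg\psi[a] \,\}\bigr|
    \]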

Book ChapterDOI
Myrna Estep
01 Jan 2003
TL;DR: The extent to which neural network models can characterize the kinds of complex self-organizing dynamics found in Boundary Set S is evaluated.
Abstract: In this chapter, I will evaluate the extent to which neural network models can characterize the kinds of complex self-organizing dynamics found in Boundary Set S. In the last chapter, we saw that it is the attractors that become the sources of the order we observe exhibited by one who knows how to do tasks or performances found in Boundary Set S. But it is clear that attractors in neural networks will be chaotic unless there is some other ordering principle. Kauffman has suggested that principle may be learning, for example, Hebbian. However, there is a sense in which that cannot be so for kinds of knowing found in Boundary Set S. Among other things, those kinds of knowing are primitive relations between a subject and object where the latter is not a class object. Hebbian learning essentially proceeds by modifying synaptic weight so as to amplify changes throughout a network. Data is presented to a system input layer where the system algorithm operates upon it in terms of local rules, computing an input-output mapping with certain desirable properties. But all this is actually carried out on class objects in terms of some class rule, such as some rule of similarity based on Euclidean distance, defining a class. It is not carried out on unique, sui generis objects.