
Showing papers on "Class (philosophy)" published in 1996


Journal Article
TL;DR: In this article, the authors explore a class of stochastic processes, called "adaptive dynamics", which supposedly capture some of the essentials of long-term biological evolution, and provide a classification of their qualitative features which in many respects is similar to classifications from the theory of deterministic dynamical systems.
Abstract: We set out to explore a class of stochastic processes, called "adaptive dynamics", which supposedly capture some of the essentials of long-term biological evolution. These processes have a strong deterministic component. This allows a classification of their qualitative features which in many respects is similar to classifications from the theory of deterministic dynamical systems. But they also display a good number of clear-cut novel dynamical phenomena. The sample functions of an adaptive dynamics are piecewise constant functions from R_+ to the finite subsets of some "trait" space X in R^k. Those subsets we call "adaptive conditions". Both the range and the jumps of a sample function are governed by a function s, called "fitness", mapping the present adaptive condition and the trait value of a potential "mutant" to R. The sign of s tells us which subsets of X qualify as adaptive conditions, which mutants can potentially "invade", leading to a jump in the sample function, and which adaptive condition(s) can result from such an invasion. Fitness functions supposedly satisfy certain constraints derived from their population/community dynamical origin, such as the fact that all mutants which are equal to some "resident", i.e., an element of the present adaptive condition, have zero fitness. Apart from that, we suppose that s is as smooth as can possibly be condoned by its community dynamical origin. Moreover, we assume that a mutant can differ but little from its resident "progenitor". In sections 1 and 2 we describe the biological background of our mathematical framework. In section 1 we deal with the position of our framework relative to present and past evolutionary research. In section 2 we discuss the community dynamical origin of s, and the reasons for making a number of specific simplifications relative to the full complexity seen in nature. In sections 3 and 4 we consider some general, mathematical as well as biological, conclusions that can be drawn from our framework in its simplest guise, that is, when we assume that X is 1-dimensional, and that the cardinality of the adaptive conditions stays low. The main result is a classification of the adaptively singular points. These points comprise both the adaptive point attractors, as well as the points where the adaptive trajectory can branch, thus attaining its characteristic tree-like shape. In section 5 we discuss how adaptive dynamics relate through a limiting argument to stochastic models in which individual organisms are represented as separate entities. It is only through such a limiting procedure that any class of population or evolutionary models can eventually be justified. Our basic assumptions are (i) clonal reproduction, i.e., the resident individuals reproduce faithfully without any of the complications of sex or Mendelian genetics, except for the occasional occurrence of a mutant, (ii) a large system size and an even rarer occurrence of mutations per birth event, (iii) uniqueness and global attractiveness of any interior attractor of the community dynamics in the limit of infinite system size. In section 6 we try to delineate, by a tentative listing of "axioms", the largest possible class of processes that can result from the kind of limiting considerations spelled out in section 5. And in section 7 we heuristically derive some very general predictions about macro-evolutionary patterns, based on those weak assumptions only.
In the final section 8 we discuss (i) how the results from the preceding sections may fit into a more encompassing view of biological evolution, and (ii) some directions for further research.
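A minimal sketch (not the authors' code) of the kind of process described: a one-dimensional trait space, a hypothetical invasion fitness s(resident, mutant) with s(x, x) = 0, and a jump of the piecewise-constant sample path whenever a nearby mutant has positive fitness. The specific fitness function here is invented purely for illustration.

import random

def s(resident: float, mutant: float) -> float:
    """Hypothetical invasion fitness of a rare mutant against a resident.
    By construction s(x, x) == 0, as required of fitness functions in this framework."""
    return (mutant - resident) * (1.0 - resident - 0.5 * mutant)

def adaptive_walk(x0: float, steps: int = 10_000, eps: float = 0.01) -> float:
    """Monomorphic adaptive dynamics: the resident trait jumps to a nearby mutant
    whenever that mutant's fitness is positive (the sign of s decides invasion)."""
    x = x0
    for _ in range(steps):
        mutant = x + random.uniform(-eps, eps)   # mutants differ but little from the resident
        if s(x, mutant) > 0:
            x = mutant                           # invasion causes a jump of the sample path
    return x

if __name__ == "__main__":
    print(adaptive_walk(0.2))   # the walk settles near an adaptively singular point of s

With this toy fitness the walk climbs toward the singular trait value where the selection gradient vanishes, which is the kind of adaptive point attractor classified in sections 3 and 4.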

739 citations


Patent
16 Oct 1996
TL;DR: In this paper, the authors define objects as collections of properties, each having a unique property name, and define a "shape" as a collection of property names; methods are defined independently of objects and are applicable to a specified shape, rather than to objects that are derived from a class in which the method is defined.
Abstract: The present invention provides a new system for implementing software objects using an object-prototype model. Objects are defined as collections of properties, each having a unique property name. A collection of property names defines a 'shape'. The use of shapes frees the representation of an object in memory from the order in which the properties of the object are declared. Methods are defined independently of objects and are applicable to a specified shape, rather than to objects that are derived from a class in which the method is defined. Methods can be applied to any object that has the specified shape or that has a superset of the properties defining the specified shape, regardless of the place of the object in any inheritance hierarchy. The definition of a shape can also include additional selection criteria, such as restrictions on the values of properties, so that the application of a method can be restricted to objects satisfying the specified criteria. The properties of objects can be divided into subgroups representing different aspects of the object and different subgroups of an object can be inherited from different parent objects, based upon either a has-a or an is-a hierarchy. The shape of an object is determined by all of its properties and is not confined by subgroup boundaries.
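A rough, hypothetical sketch of the shape idea in Python (the patent's system is not Python, and names such as Shape and apply_method are invented for illustration): a method is attached to a set of property names rather than to a class, and applies to any object whose properties include that set.

class Shape:
    """A 'shape' is just a collection of property names, independent of any class."""
    def __init__(self, *property_names: str):
        self.properties = frozenset(property_names)

    def matches(self, obj: dict) -> bool:
        # An object qualifies if it has the shape's properties, or a superset of them,
        # regardless of its place in any inheritance hierarchy.
        return self.properties.issubset(obj)

def apply_method(shape: Shape, method, obj: dict):
    """Apply a method to any object having the specified shape."""
    if not shape.matches(obj):
        raise TypeError("object does not have the required shape")
    return method(obj)

# Objects here are plain collections of named properties.
rect_shape = Shape("width", "height")
area = lambda o: o["width"] * o["height"]

window = {"x": 3, "y": 4, "width": 10, "height": 5, "title": "demo"}  # superset of the shape
print(apply_method(rect_shape, area, window))   # -> 50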

97 citations


Journal ArticleDOI
TL;DR: This work revisits Tarski's 1936 model-theoretic definition of logical consequence, the intuitions of truth preservation and formality it was meant to capture, and its central role in the model-theoretic semantics of modern logic.
Abstract: In his 1936 paper, On the Concept of Logical Consequence, Tarski introduced the celebrated definition of logical consequence: “The sentence σ follows logically from the sentences of the class Γ if and only if every model of the class Γ is also a model of the sentence σ.” [55, p. 417] This definition, Tarski said, is based on two very basic intuitions, “essential for the proper concept of consequence” [55, p. 415] and reflecting common linguistic usage: “Consider any class Γ of sentences and a sentence σ which follows from the sentences of this class. From an intuitive standpoint it can never happen that both the class Γ consists only of true sentences and the sentence σ is false. Moreover, … we are concerned here with the concept of logical, i.e., formal, consequence.” [55, p. 414] Tarski believed his definition of logical consequence captured the intuitive notion: “It seems to me that everyone who understands the content of the above definition must admit that it agrees quite well with common usage. … In particular, it can be proved, on the basis of this definition, that every consequence of true sentences must be true.” [55, p. 417] The formality of Tarskian consequences can also be proven. Tarski's definition of logical consequence had a key role in the development of the model-theoretic semantics of modern logic and has stayed at its center ever since.
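Rendered in modern notation, the displayed definition amounts to the familiar model-theoretic condition (a standard formulation, not a quotation from Tarski):

\[
\Gamma \models \sigma
\quad\Longleftrightarrow\quad
\text{for every model } \mathfrak{M}:\;
\bigl(\mathfrak{M}\models\gamma \text{ for all } \gamma\in\Gamma\bigr)
\;\Rightarrow\; \mathfrak{M}\models\sigma .
\]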

81 citations


Proceedings ArticleDOI
23 Sep 1996
TL;DR: This paper views design patterns as transformations from a "before" system consisting of a set of classes (often a single unstructured class) into an "after" system organised by the pattern, proves such transformations are formal refinements, and also gives "annealing" transformations, which include the introduction of concurrent execution into an initially sequential system.
Abstract: This paper views design patterns [5] as a transformation from a "before" system consisting of a set of classes (often a single unstructured class) into an "after" system consisting of a collection of classes organised by the pattern. To prove that these transformations are formal refinements, we adopt a version of the Object Calculus [4] as a semantic framework. We make explicit the conditions under which these transformations are formally correct. We give some additional design pattern transformations which have been termed "annealing" in the VDM++ world, which include the introduction of concurrent execution into an initially sequential system. We show that these design patterns can be classified on the basis of a small set of fundamental transformations which are reflected in the techniques used in the proof of their correctness.
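As a loose illustration of the "before"/"after" shape of such a transformation (not drawn from the paper, which works in the Object Calculus and VDM++), here is a Strategy-style split of a single unstructured class into a small collection of classes organised by the pattern, in Python:

import json

# "Before": one unstructured class mixing policy and mechanism.
class ReportBefore:
    def render(self, data, fmt):
        if fmt == "csv":
            return ",".join(map(str, data))
        elif fmt == "json":
            return json.dumps(data)
        raise ValueError(fmt)

# "After": the same behaviour organised by a Strategy pattern.
class CsvFormat:
    def render(self, data):
        return ",".join(map(str, data))

class JsonFormat:
    def render(self, data):
        return json.dumps(data)

class ReportAfter:
    def __init__(self, strategy):
        self.strategy = strategy          # the variability is now an explicit collaborator
    def render(self, data):
        return self.strategy.render(data)

# Refinement claim, informally: for every input the "after" system reproduces
# the observable behaviour of the "before" system.
assert ReportBefore().render([1, 2], "csv") == ReportAfter(CsvFormat()).render([1, 2])

Proving that this kind of rewrite is a refinement, under an explicit semantics and explicit side conditions, is what the paper does formally.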

52 citations


01 Oct 1996
TL;DR: This paper encourages teachers to consider teaching as a performing art by using the theater as a metaphor for effective, innovative instructional methods, and parallels various aspects of performing arts with various aspects of teaching.
Abstract: This paper encourages teachers to consider teaching as a performing art. By using the theater as a metaphor for effective, innovative instructional methods, the author parallels various aspects of performing arts with various aspects of teaching. The "stage" represents the classroom and "sets" the teaching environment; the "set" characterizes the classroom and determines whether it's conducive to learning. A class must have appropriate "props," or teaching materials, in order to function properly. The "curtain," which covers a stage, represents negative attitudes or conflict in the classroom and must be avoided to encourage an open environment of ethics. Every "play" has "leading actors," or star students, who set the "stage" for the rest of the class, as well as a "supporting cast," those students who do not openly demonstrate such apt learning. The "playwright" represents the curriculum used. The curriculum must be up-to-date, understandable, and applicable to the objectives of the class. The teacher plays the role of the "director," working to nurture and develop the inner abilities of the students, and the "script" provides the game plan and is the guideline for how information will be presented. The class must be varied and entertaining, including such techniques as games, art, music, humor, dramatics, and technology.

52 citations


Journal ArticleDOI
TL;DR: The linguistic therapy of evaluation (LTE) as mentioned in this paper is a clinical procedure based on the theory of general semantics, a theory which connects emotional problems such as anxiety and depression with language use.
Abstract: The clinical procedure presented here could be considered a type of cognitive psychotherapy which has the main aim of producing an evaluational change through linguistic means. The linguistic therapy of evaluation is based upon the theory of general semantics, a theory which connects emotional problems, such as anxiety and depression, with language use. These elements are described along with some other important issues, such as how language is understood from the general semantics perspective, the four main therapeutic elements of the therapy, the structure of the clinical procedure and two of its main techniques: the orders of abstraction and the extensional devices. The paper will conclude by introducing some of the main differences and similarities between the linguistic therapy of evaluation and two of the main cognitive perspectives, those termed rationalist and those termed constructivist. The linguistic therapy of evaluation (LTE), formerly referred to as the "cognitive therapy of evaluation" (Caro, 1990), has been developed from the theory of general semantics (Korzybski, 1933) as a clinical procedure for the treatment of emotional problems. The results found thus far have allowed us to ascertain that the clinical procedure based on a general semantics approach offers good results in the treatment of emotional problems such as anxiety and depression (Caro, 1992a), although it has been used with other types of problems (Caro, 1986; Caro & Ibañez, 1993). In this paper the LTE will be discussed from several vantage points. First, the psychotherapeutic importance of general semantics theory will be discussed, followed by the main theoretical ideas and definition of the LTE. Third, a general description of the clinical procedure will be given. Then, some of its main therapeutic techniques will be described. And finally, the differences and similarities between the LTE and other cognitive perspectives will be described. "THOSE WHO RULE THE SYMBOLS, RULE US" (KORZYBSKI, 1933, p. 76) General semantics theory was developed by Alfred Korzybski (1933) as an explanatory theory about human beings which could be applied as a general orientation in all human fields. According to Korzybski (1921, 1933) the two fundamental characteristics of human beings are: a time-binding class of life and the functioning of the organism as a whole. The first characteristic means that human beings "make the past live in the present and the present in the future" (Korzybski, 1921, p. 186). Human beings store, inherit and transmit, construe and reconstrue knowledge because of one of their main distinctive characteristics: the use of symbols. Symbolizing appears to be a unique human element, basic for survival, as symbols represent our tools for thinking and communicating. Since we are surrounded by symbols and create them in an ongoing way, we wish to point to several precautions to be taken when dealing with symbols. General semantics is based on three main theoretical ideas, which are called the three non-Aristotelian premises. The first premise states that the map is not the territory (or the word is not the object). The second is that the map does not cover all the territory characteristics (words are incomplete or human knowledge is an abstraction). The third states that language is self-reflexive (we can always make a map of a map of a map, etc.).
Another main concept is the idea of human beings as organisms-as-a-whole, living in a neurolinguistic and neurosemantic environment (Korzybski, 1933). It means that we are living in a world of relations and that everything is interconnected. To "be" means to be related. These two issues, "as a whole" and "linguistic-neurosemantic," are the two concepts at the core of the clinical application of general semantics: evaluation and how human beings use language. The fact that general semantics deals with one of the main elements of human beings' functioning, language use, could explain why it has been widely applied. …

26 citations


Journal ArticleDOI
TL;DR: This article argued that not all four propositions are embraced by every class approach with equal vigor, and that one would expect to find differences among class theorists on the precise meaning of "the fundamental structuring principle," "real features," or "transformative capacity."
Abstract: Within these three moderate and fair-minded critiques of our essay, a central criticism concerns the accuracy of our portrayal of the class paradigm. This is partly an empirical question, and partly a matter of semantics. The summary propositions are derived from the class literature with the intention of capturing typicality rather than constructing an easy target. Naturally, our critics have every right to distance themselves from some of the elements. Indeed, we insist that not all four propositions are embraced by every class approach with equal vigor, and that one would expect to find differences among class theorists on the precise meaning of "the fundamental structuring principle," "real features," or "transformative capacity." We endorse Wright's objection to the excessive determinism that characterizes some versions and his stress on the relative autonomy of politics. However, we disagree with our critics on two points. First, Manza and Brooks's depiction of our argument as "one-sided" and "misleading" relies on a misinterpretation. Of course, class analysts can and do address racial, ethnic, and gender divisions, but it is simply a fact that they do so within a context of an either explicitly stated or an assumed primacy of economic-class divisions. This means privileging class divisions and relations by disproportionate attention or by misattributing causal directionality. This should be seen as a fair representation of not only the more orthodox Marxism of, say, Poulantzas or Miliband but also of more sophisticated neo-Marxist analyses of Wright and the "neo-Weberian" class studies of Marshall et al. Such analyses also seldom theorize these non-class divisions.

24 citations


Journal ArticleDOI
TL;DR: A new and elegant definition of the class of recursive functions is proposed, analogous to Kleene's definition but differing in the primitives taken, thus demonstrating the computational power of the concurrent programming language introduced in Walters (1991), Walters (1992) and Khalil and Walters (1993).
Abstract: In this paper, we propose a new and elegant definition of the class of recursive functions, which is analogous to Kleene's definition but differs in the primitives taken, thus demonstrating the computational power of the concurrent programming language introduced in Walters (1991), Walters (1992) and Khalil and Walters (1993). The definition can be immediately rephrased for any distributive graph in a countably extensive category with products, thus allowing a wide, natural generalization of computable functions.
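For orientation only, here is a compact Python sketch of the classical Kleene-style primitives (composition, primitive recursion, minimization) against which the paper's alternative primitives are set; this is the textbook definition, not the one proposed in the paper.

def zero(*args): return 0
def succ(x): return x + 1
def proj(i): return lambda *args: args[i]

def compose(f, *gs):
    """h(x...) = f(g1(x...), ..., gk(x...))"""
    return lambda *args: f(*(g(*args) for g in gs))

def prim_rec(f, g):
    """h(0, x...) = f(x...);  h(n+1, x...) = g(n, h(n, x...), x...)"""
    def h(n, *args):
        acc = f(*args)
        for i in range(n):
            acc = g(i, acc, *args)
        return acc
    return h

def mu(p):
    """Unbounded search: least n with p(n, x...) == 0 (may diverge, as expected)."""
    def h(*args):
        n = 0
        while p(n, *args) != 0:
            n += 1
        return n
    return h

add = prim_rec(proj(0), compose(succ, proj(1)))   # add(n, x) = n + x
print(add(3, 4))  # -> 7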

22 citations


Journal ArticleDOI
TL;DR: An approach to intensional answering in databases utilizing soft computing methodologies is described, and a genetic algorithm technique is used to obtain near-optimal intensional answers that fit a given set of tuples.
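The entry gives no further details, so the following is purely a hypothetical sketch of the general idea: using a genetic algorithm to evolve a simple intensional description (here, per-attribute intervals) so that it covers a given set of answer tuples as tightly as possible. The attributes, fitness function, and operators are invented for illustration.

import random

TUPLES = [(23, 50_000), (31, 62_000), (29, 58_000), (35, 71_000)]   # (age, salary) answer tuples

def fitness(rule):
    """rule = ((age_lo, age_hi), (sal_lo, sal_hi)); reward coverage and narrow intervals."""
    (alo, ahi), (slo, shi) = rule
    covered = sum(alo <= a <= ahi and slo <= s <= shi for a, s in TUPLES)
    width = max(ahi - alo, 0) + max(shi - slo, 0) / 1000.0
    return covered * 100.0 - width

def mutate(rule):
    (alo, ahi), (slo, shi) = rule
    return ((alo + random.randint(-2, 2), ahi + random.randint(-2, 2)),
            (slo + random.randint(-2000, 2000), shi + random.randint(-2000, 2000)))

def evolve(generations=200, pop_size=30):
    pop = [((random.randint(0, 40), random.randint(40, 90)),
            (random.randint(0, 60_000), random.randint(60_000, 120_000)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(pop_size - 10)]
    return max(pop, key=fitness)

print(evolve())   # e.g. a near-optimal intensional answer like "age in [23, 35] and salary in [50k, 71k]"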

12 citations


Book ChapterDOI
12 Aug 1996
TL;DR: A class of agents, "typed-message agents", is defined in terms of communities of agents with three properties; this allows one to characterize agenthood as a property relative to a system rather than of an isolated candidate software module.
Abstract: The latter is important because not only is "intelligence" an overloaded term without a well-defined meaning and not a design goal, but it also does not distinguish "agents" from other kinds of AI software. However, "autonomous" also suffers from a lack of definition and is even used by some to define "intelligence". The definition used by Franklin and Graesser is not as precise as claimed. It will always be a subjective evaluation as to whether a system "senses in the future". There is no objective, operational test, and thus no clear method of distinguishing "agents" from other software by this definition. In the panel discussion, Franklin agreed that more work needed to be done to provide an operational definition. Further, the definition has little practical benefit. It does not suggest technical issues or help to design agents or their infrastructure. Another way to distinguish agents from other kinds of software is to look at the communications technology employed. One kind of agent of special note exchanges typed messages on a peer-to-peer basis. KQML-like agents are an example. Emphasizing the nature of the technology allows one to distinguish such agents from expert systems, object-oriented programming, and client-server systems with no reference to "sensing". Further, this method allows one to characterize agenthood as a property relative to a system rather than of an isolated candidate software module. Finally, the approach points out some fundamental technical issues in combining such agents with the World-Wide Web. We do not conclude that only these are "agents" but that focus on communications technology is one useful approach to defining agenthood. In particular, we define a class of agents: "typed-message agents". They are defined in terms of communities of agents with three properties:
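A small hypothetical sketch (names and message fields invented; not taken from the chapter) of what peer-to-peer exchange of typed, KQML-like messages can look like, with agenthood residing in the community rather than in any single module:

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Message:
    performative: str      # the message's type, e.g. "ask-one" or "tell"
    sender: str
    receiver: str
    content: str

@dataclass
class Agent:
    name: str
    community: "Dict[str, Agent]"
    handlers: Dict[str, Callable] = field(default_factory=dict)

    def send(self, receiver: str, performative: str, content: str):
        msg = Message(performative, self.name, receiver, content)
        self.community[receiver].receive(msg)        # peer-to-peer, no central server

    def receive(self, msg: Message):
        handler = self.handlers.get(msg.performative)
        if handler:
            handler(self, msg)

community: Dict[str, Agent] = {}
a = Agent("a", community)
b = Agent("b", community)
community.update({"a": a, "b": b})
b.handlers["ask-one"] = lambda self, m: self.send(m.sender, "tell", "42")
a.handlers["tell"] = lambda self, m: print(f"{self.name} learned: {m.content}")
a.send("b", "ask-one", "answer?")    # -> "a learned: 42"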

11 citations


Proceedings ArticleDOI
Tanveer Syeda-Mahmood1
25 Aug 1996
TL;DR: This paper presents an approach to recognizing the class or category of an object in the case where the similarity between member objects is specified by a constrained non-rigid transform.
Abstract: The recognition of the class or category of an object based on shape similarity is an important problem in image databases. Categorizing objects not only helps in efficient image database organization for faster indexing but also allows shape similarity-based querying. The recognition of category is, however, a difficult problem since member objects of a class can show considerable variation in the size and position of individual features even when the overall shape similarity is maintained. In this paper we present an approach to recognizing the class or category of an object in the case where the similarity between member objects is specified by a constrained non-rigid transform. The class is characterized by a single model or prototype consisting of a set of non-overlapping regions and a set of motion (direction and extent) constraints that capture the relation between members of the class. The recognition of category is done by using region correspondence between model and image and recovering the constrained non-rigid transform corresponding to a member of the class that is nearest in shape to the image.
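A toy sketch of the verification step this describes (prototype regions plus allowed motion extents); the region representation, names, and thresholds below are invented for illustration and are not the paper's.

# Prototype: non-overlapping regions given by their centres, plus motion constraints
# saying how far (and in which direction) each region may move within the class.
PROTOTYPE = {
    "left_wing":  {"centre": (-10.0, 0.0), "max_dx": 3.0, "max_dy": 1.0},
    "right_wing": {"centre": (10.0, 0.0),  "max_dx": 3.0, "max_dy": 1.0},
    "body":       {"centre": (0.0, 0.0),   "max_dx": 0.5, "max_dy": 0.5},
}

def in_category(image_regions: dict) -> bool:
    """image_regions maps region names (from a prior correspondence step) to centres.
    The object is accepted if every region's displacement respects the motion constraints."""
    for name, model in PROTOTYPE.items():
        if name not in image_regions:
            return False
        mx, my = model["centre"]
        ix, iy = image_regions[name]
        if abs(ix - mx) > model["max_dx"] or abs(iy - my) > model["max_dy"]:
            return False
    return True

print(in_category({"left_wing": (-8.5, 0.4), "right_wing": (11.0, -0.3), "body": (0.2, 0.1)}))  # True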

Journal ArticleDOI
TL;DR: This work introduces abstract concepts that, as will be seen below, play a part in building up the "brassy-able" sound type's representation, and develops a hierarchy of concepts that represent the structural part of the programmer's expertise.
Abstract: Concepts. On top of this first layer of representation, we build a hierarchy of concepts that represent the structural part of the programmer's expertise. As will be explained below, the concepts introduced here are used to define transformations. We divide these concepts into three categories: Abstract Concepts Built from Fundamental Concepts. These include partial descriptions of sounds, such as heldTimeFunction, defined as a TimeFunction whose "sustainLevel" value is greater than or equal to 1. In synthesis terms, this allows us to describe, for instance, sounds whose loudness eventually stabilizes to a non-zero value. On the Korg 05R/W, this is obtained by setting the Time Variant Amplifier Envelope Generator's Sustain Level to a strictly positive value: timeFunction and the(sustainLevel, ge(1)) => heldTimeFunction. Similarly, we introduce abstract concepts that, as will be seen below, play a part in building up the "brassy-able" sound type's representation. Figure 3 (example of code used for introducing abstract concepts) shows the manner in which these interdependent, abstract concepts are introduced: FilterEnveloppe and heldTimeFunction and the(attackTime, ge(17) and le(25)) and the(attackLevel, ge(85)) and the(decayTime, ge(60) and le(75)) and the(intermediateLevel, ge(25) and le(35)) and the(heldLevel, ge(25) and le(35)) => brassyAbleFilterEnv. filter and the(hasEnveloppe, brassyAbleFilterEnv) => brassyAbleFilter. brightTone and the(hasFilter, brassyAbleFilter) and the(hasAmp, brassyAbleAmp) => brassyAbleTone. voice and atleast(1, hasTone, brassyAbleTone) => brassyAbleVoice. Other examples of abstract concepts that reflect structural expert knowledge are those describing "non-transformable" sounds. Contrary to the above abstract concepts, these concepts provide complete descriptions of sounds which do not afford any particular transformation, but which are used for describing transformable sounds: sound and no(hasVoice, voiceWithInharmonicTone) => harmonicSound. Note that since any sound type is subsumed by the transformable sound type soundInGeneral, even instances of non-transformable sound types can undergo some general transformations, such as "make bright" or "make dull." Concepts Describing Transformable Sounds. Figure 4 (examples of transformable sounds) shows a few examples of such concepts: harmonicSound and some(hasVoice, brassyAbleVoice) => brassyAbleSound. sound and all(hasVoice, sustainAbleVoice) => sustainAbleVoice. sound => soundInGeneral. Concepts Representing Transformations Themselves. With each sound type we associate a list of transformations, represented as character strings, as illustrated in Figure 5. This list is materialized by a sub-concept of possibleTransformations, an extensional concept that lists all possible transformations for all existing sound types. The Object-Oriented Representation of Sounds. Representing sounds as objects, in the sense of object-oriented programming, is particularly natural in our context, where the emphasis is put on transformations. Each transformation is represented by a Smalltalk method defined in class CurrentSound.
This method modifies the values of "terminal parameters," using a pre-defined set of modifiers. For instance, Figure 6 shows the Smalltalk method that makes a sound "sustained."
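The paper's transformations are Smalltalk methods on class CurrentSound; as a rough analogue only, here is a Python sketch of a transformation that edits "terminal parameters" through a pre-defined modifier (class and parameter names are illustrative, not the paper's):

class CurrentSoundSketch:
    """Hypothetical stand-in for the paper's CurrentSound class."""
    def __init__(self, parameters: dict):
        self.parameters = dict(parameters)      # terminal synthesis parameters

    def set_at_least(self, name: str, floor: float):
        # A pre-defined modifier: raise a parameter to a floor value if it is below it.
        self.parameters[name] = max(self.parameters.get(name, 0), floor)

    def make_sustained(self):
        # Analogue of the "sustained" transformation: ensure a non-zero sustain level,
        # so the sound's loudness eventually stabilizes to a non-zero value.
        self.set_at_least("sustainLevel", 1)

s = CurrentSoundSketch({"attackTime": 20, "sustainLevel": 0})
s.make_sustained()
print(s.parameters)   # {'attackTime': 20, 'sustainLevel': 1}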

Journal ArticleDOI
TL;DR: Classes of quadratic time-frequency distributions that retain the inner structure of Cohen's class are proposed, each based on a pair of "conjugate" unitary operators producing time-frequency displacements.
Abstract: We propose classes of quadratic time-frequency distributions that retain the inner structure of Cohen's (see IEEE Trans. Signal Processing, vol.41, no.12, p.3275-3292, 1993) class. Each of these classes is based on a pair of "conjugate" unitary operators producing time-frequency displacements. The classes satisfy covariance and marginal properties corresponding to these operators. For each class, we define a "central member" generalizing the Wigner distribution and the Q-distribution, and we specify a transformation by which the class can be derived from Cohen's class.
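For reference, the standard form of Cohen's class whose inner structure is retained (the usual textbook expression with kernel \phi, not reproduced from this paper) is:

\[
C_x(t,f)=\iiint x\!\left(u+\tfrac{\tau}{2}\right)x^{*}\!\left(u-\tfrac{\tau}{2}\right)
\phi(\nu,\tau)\,e^{\,j2\pi\nu(u-t)}\,e^{-j2\pi f\tau}\,du\,d\nu\,d\tau .
\]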

Book ChapterDOI
01 Jan 1996
TL;DR: As discussed by the authors, a concept is a mental entity representing a class of objects or events, which is completely different from defining a concept as a set of objects, symbols or events (referents) that have been grouped together because they share some common characteristics.
Abstract: There is no commonly accepted definition for the term concept in psychology, as with all psychological terms. Some definitions are, simply, unacceptable. “A concept consists of a set of objects, symbols or events (referents) which have been grouped together because they share some common characteristics” (Merril & Wood, 1974, p. 19). A concept is a mental entity, an idea.1 It cannot be a group of objects. One may claim that a concept is an idea representing a class of objects or events, which is completely different. In classical logic textbooks, a concept was said to be determined by two properties: its extension and its intension (comprehension in French). The extension is the totality of elements (objects, events, etc.) to which the concept refers. The intension is the totality of common, essential properties which characterize the concept. By knowing the intension of a concept we possess the criteria by which we are able to identify those objects which are represented by the concept and distinguish them from those which are not.
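Put schematically (an illustrative rendering, not a formula from the chapter), the intension supplies the criteria by which membership in the extension is decided:

\[
\mathrm{Ext}(C)=\{\,x : x \text{ falls under } C\,\},
\qquad
x\in\mathrm{Ext}(C)\;\Longleftrightarrow\; P(x)\ \text{for every property } P\in\mathrm{Int}(C).
\]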

Proceedings ArticleDOI
09 Sep 1996
TL;DR: The authors propose a set of operations that can serve for querying the intensional portion of the database, and for using the relationships between individual objects and the conceptual schema in conventional queries, if integrated into an object algebra.
Abstract: The authors propose a set of operations for inquiring about the properties of individual objects and classes in an object-oriented database. The operations are closely related to the constructs introduced by their formalisation of the object-oriented data model, through which they unify the representation of the intensional and the extensional portions of the object-oriented database. As a consequence, they provide a uniform access to the extensional and the intensional parts of the object-oriented database. The proposed operations are used for inquiring about the associations among individual objects, the relationships between individual objects and class objects, and the relationships among the class objects themselves. They show that if they are integrated into an object algebra, they can serve for querying the intensional portion of the database, and for using the relationships between individual objects and the conceptual schema in conventional queries.
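A hypothetical Python sketch of the flavour of such operations (the actual algebra and its constructs are not reproduced here): class objects and individual objects are stored as objects of the same kind, so one query mechanism reaches both the intensional part (schema) and the extensional part (data).

# Every database object -- class objects and individual objects alike -- is one record.
DB = [
    {"oid": "Person",  "kind": "class",    "superclass": None,     "attrs": ["name"]},
    {"oid": "Student", "kind": "class",    "superclass": "Person", "attrs": ["name", "school"]},
    {"oid": "o1",      "kind": "instance", "class": "Student",     "name": "Ada", "school": "MIT"},
]

def select(pred):
    """One operation serves both intensional and extensional queries."""
    return [obj for obj in DB if pred(obj)]

# Intensional query: which classes declare an attribute 'school'?
print(select(lambda o: o["kind"] == "class" and "school" in o.get("attrs", [])))

# Mixed query: instances whose class is (directly) a subclass of Person.
subclasses = {o["oid"] for o in select(lambda o: o.get("superclass") == "Person")}
print(select(lambda o: o["kind"] == "instance" and o.get("class") in subclasses))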

Journal ArticleDOI
TL;DR: This paper proposes a classification of object-oriented systems based upon the conceptualization underlying an object and how such a conceptualization is described, which serves both as a framework for comparison and as a context within which individual concepts can be described.
Abstract: The object-oriented paradigm is characterized in a general sense by a grouping of information with the concept or entity to which it relates. Corresponding to this rather vague definition, there is a wide range of systems that can be classified as object-oriented. However, such systems may provide significantly different perspectives on the structure and manipulation of objects. This stems principally from the different motivations underlying the distinct fields from which object-oriented systems have emerged, such as Data Bases, Artificial Intelligence and Programming Languages. As a result, a myriad of systems have appeared in which diverse terminology is used. For example, are terms such as class, frame, term, actor and entity synonyms, related notions, or descriptions of distinct concepts? How is an object different from these terms? This paper proposes a classification of object-oriented systems based upon the conceptualization underlying an object and how such a conceptualization is described. Hence, the issue is more on modelling with objects rather than system idiosyncrasies. Four broad families are identified depending on whether systems follows either a class-based model, a frame-based model, a terminological model or an actor-based model. In the paper, for each category, a definition of the conceptualization is first presented, followed by a description of its characteristics together with some examples of its intended use. The classification serves both as a framework for comparison and as a context within which individual concepts can be described.

01 Jan 1996
TL;DR: In this article, an ethnographic study of the use of videoconferencing to teach an eight-week parenting class to parents enrolled in two family literacy programs is presented.
Abstract: This ethnographic study provides a description and analysis of the use of videoconferencing to teach an eight-week parenting class to parents enrolled in two family literacy programs. It describes the sites, details interactions that take place during videoconferencing, and investigates parents' affective response to the technology. Data were collected through participant observation, interviews, video-recordings, and questionnaires. Results indicated cross-site differences in the existing program, in affective responses of users, and in interactions during the technology-mediated classes. Findings showed that the parenting classes were successful. Factors that influence effectiveness of videoconferencing instruction are discussed. Results indicated that technological factors such as quality of transmission, organizational factors such as frequency of exposure, and contextual factors such as receptivity affected videoconferencing. Instructional methods and techniques that work well in videoconferencing and the teachers' suggestions for future uses of videoconferencing are presented. A framework for interaction analysis examining direct and indirect effects of the mediating technology on interaction is presented. Effects of videoconferencing on technology-centered interactions, student-centered interactions (amount, participation, type, responsivity, attention, and affective engagement) and pedagogy are examined. Implications for theory, practice, and future research are presented.

Patent
22 Oct 1996
TL;DR: In this article, the authors propose to effectively reuse existing program assets by simultaneously generating a class definition that decomposes and processes data prepared in the COBOL language for every record definition element, and a class definition that processes the data without decomposing it.
Abstract: PURPOSE: To effectively reuse existing program assets by simultaneously generating a class definition that decomposes and processes data prepared in the COBOL language for every record definition element, and a class definition that processes the data without decomposing it. CONSTITUTION: When a record definition 40 written in the normal COBOL language is inputted, the attribute information setting part 131 decomposes this record definition 40 for every data item and generates the assembly of class attributes in a class definition editor 1. At the same time, a method definition component preparation part 132 generates an object-oriented method and a structured-technique method, making a data item a class attribute, by using a method template. A class definition preparation part 133 generates an object-oriented class definition 6 from this assembly of class attributes and the object-oriented methods, and generates a structured-technique class definition 7 from the structured-technique methods, by using a class definition template 43.
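A very rough Python sketch of the idea of driving two class definitions from one record definition (the record syntax, naming, and generated output below are simplified and hypothetical, not the patent's):

RECORD_DEFINITION = [          # a simplified stand-in for a COBOL record layout
    ("CUST-ID",   "PIC 9(6)"),
    ("CUST-NAME", "PIC X(30)"),
]

def decomposed_class(name: str, record) -> str:
    """Class definition that decomposes the record: one attribute per data item."""
    attrs = "\n".join(f"    {field.lower().replace('-', '_')} = None  # {pic}" for field, pic in record)
    return f"class {name}:\n{attrs}\n"

def undecomposed_class(name: str, record) -> str:
    """Class definition that keeps the record as a single, undecomposed buffer."""
    total = sum(int(pic.split('(')[1].rstrip(')')) for _, pic in record)
    return f"class {name}:\n    raw_record = ' ' * {total}  # whole record, not decomposed\n"

print(decomposed_class("CustomerDecomposed", RECORD_DEFINITION))
print(undecomposed_class("CustomerRaw", RECORD_DEFINITION))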

Posted Content
19 Apr 1996
TL;DR: A perspective on statistical language models which emphasizes their collocational aspect is advocated in this article, where it is suggested that strings be generalized in terms of classes of relationships instead of classes of objects; the most important characteristic of such a model is a mechanism for comparing patterns, and when patterns are fully generalized a natural definition of syntactic class emerges as a subset of relational class.
Abstract: A perspective of statistical language models which emphasizes their collocational aspect is advocated. It is suggested that strings be generalized in terms of classes of relationships instead of classes of objects. The single most important characteristic of such a model is a mechanism for comparing patterns. When patterns are fully generalized, a natural definition of syntactic class emerges as a subset of relational class. These collocational syntactic classes should be an unambiguous partition of traditional syntactic classes.
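A toy sketch (wholly illustrative; the paper's comparison mechanism is not specified here) of grouping words by shared collocational relationships rather than by any prior notion of object class:

from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Represent each word by the set of (offset, neighbour) relationships it enters into.
relations = defaultdict(set)
for i, w in enumerate(corpus):
    for off in (-1, 1):
        if 0 <= i + off < len(corpus):
            relations[w].add((off, corpus[i + off]))

def overlap(a, b):
    return len(relations[a] & relations[b]) / len(relations[a] | relations[b])

# Words with strongly overlapping relational profiles fall into the same relational class;
# here "cat" and "dog" pattern together (both follow "the" and precede "sat").
print(overlap("cat", "dog"), overlap("cat", "on"))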

01 Jan 1996
TL;DR: The resemblances method is probably a "bad good idea" since, for 80% of cases, the method doesn't supply results significantly different from more traditional approaches, whereas for the remaining 20% it provides a non-decidable result.
Abstract: Researchers who use the resemblances method don't all work in the same way. Some hold that this approach doesn't permit the development of a valid hypothesis concerning genealogical relationships between compared languages. Rather they believe that it supplies a "presumptive hypothesis" if only because, in many cases, the evidence shows that genealogically-related languages demonstrate resemblances. Handled this way, the technique provides an initial count allowing one to define a set of languages requiring further analysis. However, this operation is not a method. As a procedure capable of describing reality, it has no scientific validity whatever. It is an exploratory practice. In "reality", there are numerous apparently "obvious" structures, which doesn't mean that the regular features they contain and which can be defined by mere observation enjoy any kind of necessary status. However, it is by showing just such a necessary status (even when recognized a posteriori) that one can validate a method. In addition, as Dalby (1966) has already pointed out, it can be dangerous to classify and group languages too hastily since any form of classification involves a limitation which will preconstrain the organisation of the data and predetermine the results. This is not too serious when there exist strict procedural rules which allow one to check whether various items belong to a class. However, the situation is very different when these procedures are merely "evaluative" and non-falsifiable. Hence the idea that the method allows one to account for 80% of reality and can therefore be used to make a rough outline of a situation is probably a "bad good idea" since, for the eighty per cent, the method doesn't supply significantly different results from more traditional approaches, whereas for the remaining 20%, it provides a non-decidable result.

Book ChapterDOI
01 Jan 1996
TL;DR: In this article, the authors describe how a logical theory is presented: a family of languages is defined, whose members can differ only in the class of non-logical constants, and the relation of consequence (i.e., the relation "A follows from Γ", where A is a sentence and Γ is a set of sentences) is defined, with the notion of logical truth or validity as a special case.
Abstract: In contemporary logic the presentation of a logical theory follows, in general, the line of thought given in (A) and (B) below: (A) The language (grammar) of the system is described. First the primitive expressions of the language—among them logical constants, auxiliary signs (e.g. brackets), variables bindable by operators, and nonlogical (descriptive) constants—are enumerated, sometimes grouped according to their category (logical type), allowing for extra-categorical signs, too. Then rules are specified by which from certain expression(s) compound expressions can be generated. Generally in these rules it is explicitly stated to which category the input(s) and the output can belong. As a result the class of the well-formed expressions of the language subdivided according to the different categories is defined by an inductive syntactical definition. Among the syntactical categories of the language there is one which is analogous to the category of the indicative sentence in natural language, which is called ‘formula’, or ‘well-formed formula’, or simply ‘sentence’. A more general method is to define a family of languages instead of a single language, where the members of the family can differ only in the class of non-logical constants. (B) The relation of consequence (i.e. the relation ‘A follows from Γ’ where A is a sentence and Γ is a set of sentences) and as a special case the notion of logical truth or validity is defined. The definition of consequence can be either syntactical or semantical. In the case of a syntactical definition we speak about deducibility (A is deducible from Γ), and the relation of deducibility is introduced by an inductive syntactical definition. In a semantical definition of consequence first the class of admissible interpretations is given (most often these interpretations are set-theoretical entities), and it is specified what semantic value the well-formed expressions of the language can take. The possible semantic values of sentences are normally called truth values. In the classical case there are two truth values, Truth and Falsity, in other cases it is postulated that among the truth values there is one or more which is a ‘distinguished’ value. Then the definition of ‘A is a semantic consequence of Γ’ (or ‘A follows semantically from Γ’) may be formulated as ‘A is true (A bears a distinguished value) in all interpretations where every member of Γ is true (bears a distinguished value)’. This definition may be strengthened by some additional clauses.
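The semantical definition sketched in (B) can be displayed compactly (a standard rendering, with "true" generalised to "bears a distinguished value"; not a quotation from the chapter):

\[
\Gamma \models A
\;\Longleftrightarrow\;
\text{for every admissible interpretation } I:\;
\bigl(\forall B\in\Gamma:\ \|B\|_{I}\in D\bigr)\ \Rightarrow\ \|A\|_{I}\in D,
\]
where \(D\) is the set of distinguished truth values.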