
Showing papers on "Class (philosophy)" published in 1998


01 Jan 1998
TL;DR: This chapter describes version 1.5 of gcov, a tool you can use, together with GNU CC, to test code coverage in your programs, and the advantages of using signatures instead of abstract virtual classes.
Abstract: Abstract virtual classes provide somewhat similar facilities in standard C++. There are two main advantages to using signatures instead: 1. Subtyping becomes independent from inheritance. A class or signature type T is a subtype of a signature type S independent of any inheritance hierarchy as long as all the member functions declared in S are also found in T. So you can define a subtype hierarchy that is completely independent from any inheritance (implementation) hierarchy, instead of being forced to use types that mirror the class inheritance hierarchy. 2. Signatures allow you to work with existing class hierarchies as implementations of a signature type. If those class hierarchies are only available in compiled form, you're out of luck with abstract virtual classes, since an abstract virtual class cannot be retrofitted on top of existing class hierarchies. So you would be required to write interface classes as subtypes of the abstract virtual class. There is one more detail about signatures. A signature declaration can contain member function definitions as well as member function declarations. A signature member function with a full definition is called a default implementation; classes need not contain that particular interface in order to conform. For example, a class C can conform to the signature signature T { int f (int); int f0 () { return f (0); }; }; whether or not C implements the member function `int f0 ()'. If you define C::f0, that definition takes precedence; otherwise, the default implementation T::f0 applies. gcov is a tool you can use, together with GNU CC, to test code coverage in your programs. gcov is free software, but for the moment it is only available from Cygnus Support (pending discussions with the FSF about how they think Cygnus should really write it). This chapter describes version 1.5 of gcov. Jim Wilson wrote gcov, and the original form of this note. Pat McGregor edited the documentation.
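The signature example above can be made concrete. The sketch below uses the GNU C++ signature extension exactly as the excerpt describes it; note that the extension existed only in old GNU C++ releases and was later removed from GCC, so modern compilers reject this code, and the conforming class C is invented here for illustration.

    // Old GNU C++ only: the signature extension quoted above.
    signature T
    {
      int f (int);
      int f0 () { return f (0); }   // default implementation
    };

    // C never names T, yet conforms to it because it supplies `int f (int)'.
    class C
    {
    public:
      int f (int x) { return x + 1; }
    };

    int
    use ()
    {
      C obj;
      T *p = &obj;       // signature pointer: subtyping without inheritance
      return p->f0 ();   // no C::f0 is defined, so the default T::f0 runs
    }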

141 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a framework for the formulation and testing of ontological theories as they relate to the specific domain of geographic objects and categories, and describe a testing methodology in which more traditional methods of ontology will guide the formulation of questions to be tested and the construction of the framework in which the results of testing shall be expressed.
Abstract: I Introduction Ontology, since Aristotle, has been conceived as a sort of highly general physics, a science of the types of entities in reality, of the objects, properties, categories and relations which make up the world. At the same time ontology has been for some two thousand years a speculative enterprise. It has rested methodologically on introspection and on the construction and analysis of elaborate world-models and of abstract formal-ontological theories. In the work of Quine and others this ontological theorizing in abstract fashion about the world was supplemented by the study, based on the use of logical methods, of the ontological commitments or presuppositions embodied in scientific theories. In recent years both types of ontological study have found application in the world of information systems, for example in the construction of frameworks for knowledge representation and in database design and translation. As ontology is in this way drawn closer to the domain of real-world applications, the question arises as to whether it is possible to use empirical methods in studying ontological theories. More specifically: can we use empirical methods to test the ontological theories embodied in human cognition? In what follows we set forth the outlines of a framework for the formulation and testing of such theories as they relate to the specific domain of geographic objects and categories. Objects, properties, categories and relations are what they are, independently of how people think of them. Some objects, properties, categories and relations, however, are the products of human cognition. This holds not least in the geographic realm, where many of the entities with which we have to deal may be conceived by analogy with shadows cast on the surface of the earth by human practices of specific sorts. In relation to such entities empirical testing makes reasonable sense. We describe a testing methodology in which the more traditional methods of ontology will guide the formulation of questions to be tested and the construction of the framework in which the results of testing shall be expressed. II Theories of Conceptual Organization We begin with the general topic of human cognitive categories such as rabbit, electron, island. Such categories exist in two forms: on the one hand as concepts on the side of human subjects; on the other hand as kinds on the side of reality. On the classical view, dating back to Aristotle, each concept or kind is associated with certain defining attributes or properties which suffice to determine exactly which objects fall within the relevant extension. On more recent views, categorial kinds are to be understood by analogy with a mathematical set. All objects within the extension set are equally representative instances of the category, and for each object or event it is fully determinate whether or not it falls under a given category. Geographers, like other scientists, have typically accepted this model of categories as sets in the mathematical sense, and the model is presupposed for example in work on cartographic data standards (Mark 1993, 1993a). As an account of the categories used by ordinary humans in everyday situations, however, the model has obvious defects. First, and most obviously, not every set in the mathematical sense is a class in the sense of kind or category. Hence we need to go beyond set theory in order to fill this gap. 
But further, as has been shown by Rosch (1973, 1978) and others (Keil 1979, Estes 1994), for most such categories, and for most people, some members are better examples of the class than are others; furthermore, there is a great degree of agreement among human subjects as to what constitutes good and bad examples. Human cognitive categories often possess a radial structure, having prototypes or more central or typical members surrounded by a penumbra of less central or less typical instances. …
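The contrast the authors draw between categories-as-sets and radially structured categories can be made concrete with a small sketch. Everything below is an invented illustration, not taken from the paper: crisp membership answers only yes or no, while a prototype model assigns graded typicality.

    #include <cmath>
    #include <set>
    #include <string>

    // Classical model: the extension is a set, and membership is yes-or-no.
    bool is_member(const std::set<std::string>& extension, const std::string& x)
    {
      return extension.count(x) > 0;
    }

    // Prototype model: typicality is graded by distance from a prototype in
    // some feature space, so central members score near 1 and peripheral
    // ones near 0, with no sharp boundary.
    double typicality(double features, double prototype, double spread)
    {
      double d = features - prototype;
      return std::exp(-(d * d) / (2.0 * spread * spread));
    }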

74 citations


Patent
16 Nov 1998
TL;DR: In this paper, the authors propose a method for interacting with a test subject with respect to knowledge or functionality characterized by a plurality of states in one or more domains, denoted as a fact state, a value state, or a combination state.
Abstract: The invention is a method for interacting with a test subject with respect to knowledge or functionality characterized by a plurality of states in one or more domains. A domain is a set of facts, a set of values, or a combination of a set of facts and a set of values. The set of facts for a knowledge domain is any set of facts. The set of facts for a functionality domain is a set of facts relating to the functionality of a test subject. A state is denoted as a fact state, a value state, or a combination state, a fact state being characterized by a subset of facts, a value state being characterized by a subset of values, and a combination state being characterized by a combination of a subset of facts and a subset of values. The method consists of specifying one or more domains, specifying a domain pool for each domain comprising a plurality of test item blocks consisting of one or more test items, specifying a class conditional density for each test item in each test item block for each state in each domain, selecting one or more test item blocks from the one or more domain pools to be administered to a test subject, and processing the responses of the test subject to the one or more test item blocks administered to the test subject.
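One plausible reading of "class conditional density" here is Bayesian: each response to a test item updates a distribution over the states of a domain. The sketch below is an invented illustration under that reading (binary responses, names made up), not the patent's actual procedure.

    #include <cstddef>
    #include <vector>

    // p[s]: current probability that the test subject occupies state s.
    // density[s][item]: class-conditional probability of a correct response
    // to `item' given state s (a Bernoulli density, for simplicity).
    void update_posterior(std::vector<double>& p,
                          const std::vector<std::vector<double>>& density,
                          std::size_t item, bool correct)
    {
      double norm = 0.0;
      for (std::size_t s = 0; s < p.size(); ++s)
      {
        double like = correct ? density[s][item] : 1.0 - density[s][item];
        p[s] *= like;  // Bayes: posterior proportional to prior times likelihood
        norm += p[s];
      }
      for (double& q : p)
        q /= norm;     // renormalize over the states of the domain
    }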

59 citations


Journal ArticleDOI
TL;DR: In this article, the notion of universal stabilizers was introduced to identify the underlying matroid structure that guarantees that a given matroid will be an F-stabilizer for a given class of matroids whose members are representable over the field F.
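For orientation, one common formulation of the stabilizer concept from the matroid literature (stated from general background on Whittle's work, not taken from this abstract) runs roughly as follows:

    Let $\mathcal{M}$ be a class of matroids representable over a field $\mathbb{F}$.
    A matroid $N \in \mathcal{M}$ is an $\mathbb{F}$-stabilizer for $\mathcal{M}$ if,
    for every 3-connected matroid $M \in \mathcal{M}$ with an $N$-minor, each
    $\mathbb{F}$-representation of $N$ extends to at most one
    $\mathbb{F}$-representation of $M$, up to equivalence.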

26 citations


Journal ArticleDOI
TL;DR: The TIMSS video study, as discussed in this paper, was used to examine whether the proportion of concrete settings was significantly lower in the United States than in Japan or Germany, and the extent to which mathematics instruction is oriented toward "problem solving" or instead emphasizes routine skills.
Abstract: Abstract situations involve only abstract concepts such as numbers and geometric shapes (even specific examples such as a rectangle three inches wide and five inches long), in contrast to concrete situations that involve physical or social representations of abstract concepts, such as a 3-by-5 inch index card. One reason reviewers were interested in this dichotomy was the call from some advocates of mathematics education reform in the United States for far more mathematics instruction to be done in "real world" contexts. Reviewers wanted to determine whether the proportion of concrete settings was significantly lower in the United States than in Japan or Germany. The locus of control of the solution of a problem (whether the task in the context of the class essentially included its own method of solution) is another aspect of the extent to which instruction was consistent with some current calls for reform of mathematics education in the United States. An important component of problem solving is the solver's control of the process of solving a problem [3]. A comparison of the extent to which exercises appear to require solvers to find solution methods should provide one indication of the extent to which mathematics instruction is oriented toward "problem solving" and also, perhaps, of the extent to which routine skills are emphasized. Similarly, the complexity of exercises is another indicator of their sophistication and mathematical richness, especially from a problem solving perspective. Determining whether a problem requires a "single-step" or "multi-step" solution depends upon not only the statement of the problem but also the presumed sophistication of the solver. For example, solving a simple linear equation such as 5x + 4 = 19 would be considered a two-step process for students who are first learning how to solve such equations but would constitute one very small step for students who had developed proficiency at solving systems of linear equations. As a result, reviewers had to consider the problem in the context of the class when judging its complexity. The reviewers classified the subject and described the subject matter of each class. Each of the 90 classes was classified according to its likely position in a traditional (1980s) United States college preparatory mathematics curriculum. The three classifications used were "Before Algebra," "Algebra," and "Geometry." Approximately half the classes were classified as Geometry, 30% as Algebra, and the remainder as Before Algebra. The reviewers also rated each class on the basis of their judgment of its potential for helping students understand mathematics. The five categories used were "Weak," "Almost Good," "Good," "Better," and "Strong." While this is quite subjective, the four reviewers were able to reach complete agreement in all 90 cases. Readers of this paper who would like to make their own judgments may send email to timss@ed.gov to inquire about obtaining copies of the tables and other data from the TIMSS video study. The table for each class consisted of a few pages, typically about three, that included a summary description of each segment of the class. Segments were defined and identified by the members of the Stigler laboratory. A new segment started when there was a significant change in activity or in the content being presented. Reviewers categorized each segment by the nature of its mathematical activity and by its role in the structure of the class. The segments became the nodes of the directed graphs used to represent graphically the structure of the class.
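The directed-graph representation mentioned at the end admits a direct transcription; the sketch below is an invented data layout (all names made up), not the Stigler laboratory's actual format.

    #include <string>
    #include <vector>

    struct Segment
    {
      std::string activity;         // nature of the mathematical activity
      std::string role;             // role in the structure of the class
      std::vector<int> successors;  // edges to the segments that follow
    };

    // A lesson's structure is then a directed graph over its segments.
    using LessonGraph = std::vector<Segment>;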

22 citations


Patent
Colin Gajraj
16 Jul 1998
TL;DR: In this article, a tool for transforming SGML documents using SGML architectures determines to what class an element in a first document belongs by searching at least part of the first document type definition for a qualifier to an element, indicating an association with a definition of the class of element.
Abstract: A tool for transforming SGML documents using SGML architectures determines to what class of element an element in a first document belongs, from the first document type definition, by searching at least part of the first document type definition for a qualifier to an element, indicating an association with a definition of the class of element. It then determines for that class, at least one corresponding element in the second document type definition, and includes in the second document, an instance of the corresponding element or elements. Interchange of documents becomes easier because a single generic tool can be used for transformation between many different types of documents.
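The two lookups the tool chains together can be sketched as follows; the element and class names below are invented for illustration, and real SGML architecture processing is considerably more involved.

    #include <map>
    #include <string>

    // Step 1: qualifiers in the first document type definition associate
    // element names with architectural classes (names invented).
    const std::map<std::string, std::string> element_to_class = {
      {"chap-title", "heading"}, {"para", "text-block"}};

    // Step 2: for each class, the second document type definition supplies
    // a corresponding element to emit into the second document.
    const std::map<std::string, std::string> class_to_element = {
      {"heading", "h1"}, {"text-block", "p"}};

    // The generic transformation never compares elements directly; it goes
    // through the class, so one tool serves many document types.
    std::string transform(const std::string& element)
    {
      return class_to_element.at(element_to_class.at(element));
    }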

22 citations


Proceedings ArticleDOI
13 Sep 1998
TL;DR: This paper proposes a new declarative scheme for the definition of feature classes that provides a unified description of the shape and validity issues of a feature class, as well as a flexible configuration of the feature class interface.
Abstract: Designing mechanical parts using a feature vocabulary is a very effective and rich paradigm. Its expressive power, however, is severely limited if the set of feature types available in a feature library is fixed. It is, therefore, desirable to be able to extend and configure a feature library according to particular requirements, either of an end-user of a CAD system or of an application area. These requirements are not limited to topological and parametric aspects of a generic feature definition, but also include validity conditions to be verified for each feature instance in a model. This paper proposes a new declarative scheme for the definition of feature classes. This scheme provides a unified description of the shape and validity issues of a feature class, as well as a flexible configuration of the feature class interface. In the definition process, the various constraint classes available play a central role, whereas an inheritance mechanism structures the feature library hierarchy. At the end of the process, validation of the class is performed, in order to avoid over- and underconstrained specifications. A graphical user interface supports the whole feature class definition process. Once defined, a feature class is automatically made available for use in a feature library of the modeling system.
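A minimal sketch of such a declarative feature class, with invented names, might bundle parameters, validity constraints and an inheritance link in one declaration:

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    using Params = std::map<std::string, double>;

    struct Constraint                       // validity condition on instances
    {
      std::string name;
      std::function<bool(const Params&)> holds;
    };

    struct FeatureClass
    {
      std::string name;
      const FeatureClass* parent;           // inheritance structures the library
      std::vector<std::string> interface;   // configurable class interface
      std::vector<Constraint> validity;     // verified for each feature instance
    };

    // Example: a generic hole with one validity constraint.
    const FeatureClass hole{
      "hole", nullptr, {"diameter", "depth"},
      {{"positive diameter",
        [](const Params& p) { return p.at("diameter") > 0.0; }}}};

    // Class validation (detecting over- and underconstrained definitions)
    // would run once at definition time, before the class joins the library.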

18 citations


01 Jan 1998
TL;DR: The purpose of this work is to extend the set of notions used to reason about the problem to be solved and the structure of the corresponding software system; along with the notion of a context, an approach to problem decomposition and a programming technique appear.
Abstract: Context-oriented programming (COP) introduces one more notion to reason about the structure of software systems: a context (an environment) is a set of entities bound with a system of relations. This view is applicable where the object-oriented one is inadequate. Implementation of COP requires the same techniques as OOP: COP and OOP are different things assembled from the same components. COP allows things that OOP cannot do, for example, COP enables us to use late binding for elementary data that are not OOP objects. 1. The purpose of the work The key element of software activities (like designing, writing, debugging, reading and modifying programs) is reasoning. We do not give a definition for reasoning, but we assume that the use of a small number of adequate concepts facilitates reasoning: that is, if the concepts in use are adequate and expressive, and if no auxiliary concepts are required, it is easier to reason about programs. The purpose of this work is to extend the set of notions which may be used to reason about the problem to be solved and the structure of the corresponding software system. Along with the notion of a context, an approach to problem decomposition, and a programming technique appear. 2. The method of COP 2.1. Quick introduction An environment (a context) expresses the idea of a system of relations (where "relation" is used in its everyday meaning, a connection between things). The phrase "system of relations" implies that there are some entities upon which relations are established. The key point in COP is that to use the relations, we do not need to know details about the entities. This is good, but how can we use the relations? In principle, different approaches are possible. The simplest way is to implement a set of functions which imply that the data elements to be processed are bound with these relations. (Indeed, in using these functions we use the knowledge that the data elements are related; otherwise it would be incorrect to use the functions.) The next step is to introduce polymorphism. This may be done via tables of pointers to functions, that is, virtual method tables (VMT, where "method" is understood as "member function"). Contexts are implemented as VMT's, and these VMT's are not bound to data. So, from the implementation point of view, there are interchangeable sets of functions, allowing us to work with different (but somehow similar) sets of objects; the objects in a set do not necessarily form a data structure, and may have different lifetimes. It should also be noted that COP introduces two levels at which we can work with contexts. At the level of implementation, we know all details about related entities that constitute the context. At the level of use, we do not have full access to the entities; we can use only a set of functions allowing us to work with the entities, but this should be enough to accomplish our tasks. In exchange, this restricted access allows us to work in different contexts (of the same "class") in a uniform way.
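The implementation idea in the abstract (contexts as virtual method tables not bound to data) can be sketched directly; the names below are invented for illustration.

    // The context: an interchangeable table of functions that assume their
    // arguments are bound by certain relations, without owning the data.
    struct OrderContext
    {
      bool (*less)(int a, int b);  // "method" as a plain function pointer
    };

    bool ascending(int a, int b)  { return a < b; }
    bool descending(int a, int b) { return a > b; }

    // Client code works uniformly in different contexts of the same "class";
    // late binding applies even to elementary data that are not OOP objects.
    int best(const OrderContext& ctx, const int* xs, int n)
    {
      int b = xs[0];
      for (int i = 1; i < n; ++i)
        if (ctx.less(b, xs[i]))
          b = xs[i];
      return b;
    }

    // Usage: OrderContext up = { ascending };  best(up, data, n);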

17 citations


Book ChapterDOI
12 Jul 1998
TL;DR: It is proved that the proposed algorithm correctly identifies any TSDL language in the limit if structural information is presented, and a definition of a characteristic structural set for any target grammar is given.
Abstract: A method to infer a subclass of linear languages from positive structural information (i.e., skeletons) is presented. The characterization of the class and the analysis of the time and space complexity of the algorithm are given as well. The new class, Terminal and Structural Distinguishable Linear Languages (TSDLL), is defined through an algebraic characterization and a pumping lemma. We prove that the proposed algorithm correctly identifies any TSDL language in the limit if structural information is presented. Furthermore, we give a definition of a characteristic structural set for any target grammar. Finally, we present the conclusions of the work and some guidelines for future work.
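A skeleton in this sense is a derivation tree with its internal labels removed, so it carries structural information only; a minimal invented representation:

    #include <memory>
    #include <string>
    #include <vector>

    // Leaves keep their terminal symbols; internal nodes carry no
    // nonterminal labels, only the shape of the derivation tree.
    struct Skeleton
    {
      std::string terminal;  // non-empty at leaves only
      std::vector<std::unique_ptr<Skeleton>> children;
    };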

16 citations


Patent
16 Jun 1998
TL;DR: In this article, the authors propose a solution to provide different service for each member using a customer management system to manage customers registered as members in a game parlor such as pachinko parlor or the like.
Abstract: PROBLEM TO BE SOLVED: To make it possible to provide different service for each member using a customer management system to manage customers registered as members in a game parlor such as pachinko parlor or the like SOLUTION: A member management database 32A has a space to store, in connection with a membership number, member classifying flags F, which distinguish whether a member is an ordinary member, a silver member or a gold member When a member tries to insert the membership card into a device in a hall 1 to use the device, the membership number is transmitted to a management computer 30 through a communication cable 50 Based on the transmitted membership number, the member management database 32A is referred, and the member classifying flag F corresponding to the membership number is read Then, the member classifying flag F is transmitted back through the communication cable 50 Thereby, the membership class represented by the member classifying flag F is displayed to show the class to the member, and provide service corresponding to the member classifying flag F for the member
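The lookup performed by the management computer 30 reduces to a keyed read of the classifying flag; a sketch with invented type and function names:

    #include <map>
    #include <string>

    enum class MemberClass { Ordinary, Silver, Gold };  // the classifying flag F

    // Member management database 32A: membership number -> classifying flag.
    std::map<std::string, MemberClass> database_32A;

    // On card insertion, the hall device sends the membership number over
    // the communication cable; the flag is looked up and sent back so the
    // device can display the class and select the matching service.
    MemberClass classify(const std::string& membership_number)
    {
      auto it = database_32A.find(membership_number);
      return it == database_32A.end() ? MemberClass::Ordinary : it->second;
    }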

14 citations


Patent
Henry W. Burgess
04 Sep 1998
TL;DR: In this article, a method and system for interconnecting software components is presented, in which an event object holds message information describing a message and a dispatching member function that invokes a member function of a target object, passing the message information.
Abstract: A method and system for interconnecting software components. In a preferred embodiment, the present invention instantiates an event object. The event object includes message information describing the message and a dispatching member function for invoking a member function of a target object, passing the message information. A message is passed by invoking the dispatching member function of the event object, passing an identifier of a target object and an identifier of a member function of the target object. The dispatching member function invokes the identified member function of the identified target object, passing the event information as an actual parameter. The event object is preferably of a derived class that inherits from a base class. The base class provides common event behavior, while the derived class provides behavior specific to a type of message.
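In C++ terms the dispatching arrangement maps naturally onto pointer-to-member functions. The sketch below is an invented rendering of the idea, not code from the patent:

    struct Target;  // all names below are invented for illustration

    struct Event  // in the patent's scheme, a base class of common behavior
    {
      int message_info;                   // message information
      Target* target;                     // identifier of the target object
      void (Target::*handler)(Event&);    // identifier of the member function
      void dispatch();                    // the dispatching member function
    };

    struct Target
    {
      int last = 0;
      void on_event(Event& e) { last = e.message_info; }
    };

    // Invokes the identified member function of the identified target object,
    // passing the event information as the actual parameter.
    void Event::dispatch() { (target->*handler)(*this); }

    // Usage:
    //   Target t;
    //   Event e{42, &t, &Target::on_event};
    //   e.dispatch();   // t.last is now 42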

Book ChapterDOI
15 Jun 1998
TL;DR: This paper presents an automated method that deals with termination of constraint logic programs in two steps: from the text of a program, the method infers a set of potentially terminating classes using abstract interpretation and boolean mu-calculus and finds a static order over the literals of the clauses of the program to ensure termination.
Abstract: This paper presents an automated method that deals with termination of constraint logic programs in two steps. First, from the text of a program, the method infers a set of potentially terminating classes using abstract interpretation and boolean mu-calculus. By "potentially", we roughly mean that for each of these classes, one can find a static order over the literals of the clauses of the program to ensure termination. Then, given a terminating class among those computed at the first step, the second step consists of a "compilation" (or transformation) of the original program to another one by reordering literals. For this new program, the universal left-termination of any query of the considered class is guaranteed. The method has been implemented.

Patent
Andy I-Shin Wang
12 Nov 1998
TL;DR: In this paper, a translator-based embedded scripting environment includes multiple translators executed by one or more computers, where the original input source is split into a plurality of intermediate sources.
Abstract: A translator-based embedded scripting environment includes multiple translators executed by one or more computers. An original input source is split into a plurality of intermediate sources for processing by a plurality of translators executed by one or more computers. One or more of the corresponding intermediate sources includes a class definition that contains one or more methods. Another one of the corresponding intermediate sources includes logic to instantiate the class definition as an object and logic to invoke the one or more of the methods of the object in order to maintain a sequence of execution specified in the original input source. Placeholders are used within the class definition during the translations to identify locations of file input/output operations that write data to an output destination.

Journal ArticleDOI
TL;DR: An inheritance flow model is presented, which represents the inheritance relationships among classes as a flow graph and provides several analyses in a class hierarchy, such as implicit inherited members and polymorphic method invocation.

01 Jan 1998
TL;DR: In this article, a refinement of the existential object model of Pierce and Turner is presented in which, in addition to signatures (or interfaces), classes are also provided as types of objects.
Abstract: We present a refinement of the existential object model of Pierce and Turner. In addition to signatures (or interfaces) as the types of objects, we also provide classes as the types of objects. These class types not only specify an interface, but also a particular implementation. We show that class types can be interpreted in the standard PER model. Our main result is that the standard interpretation of subtyping in PER models - i.e. subtypes are subpers - is then exactly the notion of behavioural subtyping as it is defined by Leavens.

Patent
11 Aug 1998
TL;DR: In this article, the authors propose to eliminate the need for managing an inclusion file and reduce the complexity of program description by outputting the extension code of class definition information to an execution program based on the registration contents of a class definition table.
Abstract: PROBLEM TO BE SOLVED: To eliminate the need for managing an inclusion file and to reduce the complexity of program description by outputting the extension code of class definition information to an execution program based on the registration contents of a class definition table. SOLUTION: After a source program 100 is inputted by an input device and the grammar of the source code is checked by a lexical analysis part and a syntax analysis part, a syntax tree is generated corresponding to the analyzed syntax and stored in a syntax tree storage part 26. A semantic analysis part performs semantic analysis based on the information of the generated syntax tree; the rearrangement of the tree and the registration of class definitions into the class definition table (for registration to a database 20), etc., are performed as needed. Then, a code output part outputs the extension code to the execution program 200 based on the information of the tree after the semantic analysis. Thus, the need to manage a source file for each of the classes is eliminated.

Patent
26 Jun 1998
TL;DR: In this article, a schedule is constituted of the hierarchized objects of a class, instance, and work item, and a schedule preparing part 110 prepares a schedule by referring to each definition on a storage device 140 based on the inputted parameter and the schedule is displayed at a display device 130.
Abstract: PROBLEM TO BE SOLVED: To prepare schedule with high precision without requiring any experience of a planer related with a similar project at the time of preparing the schedule of a project. SOLUTION: A schedule is constituted of the hierarchized objects of a class, instance, and work item. The work item is made correspond to a single work including a start date and a work period. Class definition 141 defines the start date and the work period for each work item included in the class, and instance definition 142 defines the start date and the work period for each work item included in each instance. Constraint condition definition 143 defines the sequence of the work items across the classes. Parameter definition 144 defines a parameter for selecting the class and the instance, and selecting the sequence relation of the instances in the class. A schedule preparing part 110 prepares a schedule by referring to each definition on a storage device 140 based on the inputted parameter, and the schedule is displayed at a display device 130.
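The object hierarchy in the abstract can be sketched directly; the type names below are invented, with the patent's reference numerals kept in comments:

    #include <string>
    #include <vector>

    struct WorkItem        // a single unit of work
    {
      std::string name;
      int start_day;       // start date (relative, for simplicity)
      int period_days;     // work period
    };

    struct Instance        // instance definition 142
    {
      std::string name;
      std::vector<WorkItem> items;
    };

    struct Class           // class definition 141
    {
      std::string name;
      std::vector<WorkItem> defaults;
      std::vector<Instance> instances;
    };

    struct Precedence      // constraint condition definition 143:
    {                      // ordering of work items across classes
      std::string before_item;
      std::string after_item;
    };

    // A schedule preparer (part 110) would select classes and instances by
    // parameter (definition 144) and order the items subject to precedences.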

Proceedings ArticleDOI
22 Sep 1998
TL;DR: The purpose is to make policies more distinguishable from the rest of the class definition so that any maintenance effort in accommodating changes in policy definition can be reduced.
Abstract: The definition of classes in an object oriented system is generally specified in terms of attributes and methods, since they are represented in programming languages using these constructs. However, there is other information pertinent to application-domain objects (known as policies) that is embedded in methods. Since policies are complex statements that cannot be easily represented in terms of attributes or directly translated into method definitions, they are usually realized by a combination of attribute and method implementation. Also, policies are highly volatile elements easily affected by changes in the business environment. The approach proposed in the paper is to raise the level of representation of policies in class definitions. The purpose is to make policies more distinguishable from the rest of the class definition so that the maintenance effort in accommodating changes in policy definition can be reduced. The paper discusses how policies can be defined in a class definition, the advantages of the proposed approach, and how the proposed class definition can be implemented. An example from the library domain is used to illustrate the class definition approach discussed.
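One invented rendering of the idea, using the paper's library domain: policies appear as named, replaceable elements of the class definition rather than logic buried in method bodies.

    #include <functional>
    #include <string>
    #include <vector>

    struct Loan
    {
      int days_out;
      bool reference_item;
      bool borrower_is_staff;
    };

    struct Policy
    {
      std::string statement;                    // stated in domain terms
      std::function<bool(const Loan&)> allows;  // replaceable rule body
    };

    class LoanDesk
    {
    public:
      // Policies are declared alongside attributes and methods instead of
      // being buried inside method bodies; a policy change edits this list.
      std::vector<Policy> policies = {
        {"loan period is at most 28 days",
         [](const Loan& l) { return l.days_out <= 28; }},
        {"reference items circulate to staff only",
         [](const Loan& l) { return !l.reference_item || l.borrower_is_staff; }},
      };

      bool conforms(const Loan& l) const
      {
        for (const auto& p : policies)
          if (!p.allows(l)) return false;
        return true;
      }
    };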

01 Jan 1998
TL;DR: This research defines a three-phase methodology that inputs source code and outputs an object-oriented design including hierarchy diagrams and interaction diagrams and the results of applying the methodology in two case studies are presented.
Abstract: A majority of legacy systems in use in the scientific and engineering application domains are coded in imperative languages, specifically, COBOL or FORTRAN-77. These systems have an average age of 15 years or more and have undergone years of extensive maintenance. They suffer from either poor documentation or no documentation, and antiquated coding practices and paradigms [Chik 94] [Osbo 90]. The purpose of this research is to develop a reverse-engineering methodology to extract an object-oriented design from legacy systems written in imperative languages. This research defines a three-phase methodology that inputs source code and outputs an object-oriented design. The three phases of the methodology include: Object Extraction, Class Abstraction, and Formation of the Inheritance Hierarchy. Additionally, there is a pre-processing phase that involves code structuring, alias resolution, and resolution of the COMMON block. Object Extraction is divided into two stages: Attribute Identification and Method Identification. The output of phase one is a set of candidate objects that will serve as input for phase two, Class Abstraction. The Class Abstraction phase uses clustering techniques to form classes and define the concept of identical objects. The output of phase two is a set of classes that will serve as input to the third phase, Formation of the Inheritance Hierarchy. The Formation of the Inheritance Hierarchy phase defines a similarity measure which determines class similarity and further refines the clustering performed in phase two, Class Abstraction. The result of the methodology is an object-oriented design including hierarchy diagrams and interaction diagrams. Additionally, the results of applying the methodology in two case studies are presented.

Patent
04 Dec 1998
TL;DR: In this paper, a level-zero learning process estimates the unknown parameters of M kinds of identification functions that are given in advance, using externally supplied training data 7, and forms the level-zero identification functions.
Abstract: PROBLEM TO BE SOLVED: To automatically generate a class boundary of appropriate complexity for a classification problem in which class boundaries of different complexity coexist, and to obtain a satisfactory class boundary, by using an identification function that linearly combines plural models. SOLUTION: A level-zero learning process 1 estimates the unknown parameters of M kinds of identification functions that are given in advance, using training data 7 that is externally given, and forms the level-zero identification functions. A level-one learning process 3 successively extracts one set of training data from the N sets of training data 7 and, each time one set is extracted, newly learns the M kinds of identification functions using the remaining N-1 sets, forming M kinds of level-one identification functions for each class. After that, N sets of level-one data are formed, each consisting of a pair of a KM-dimensional vector for the extracted set of data and the class label of the extracted data. An identification function integration process 5 forms a new identification function as a linear sum of the outputs of the identification functions learned in process 1, using the level-one data, and performs pattern recognition.
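This scheme is essentially stacked generalization. The sketch below shows only the integration step as a linear sum of the level-zero outputs; all types and names are invented.

    #include <cstddef>
    #include <vector>

    // Output of one level-zero identification function: K per-class scores.
    using Scores = std::vector<double>;

    // Integration process: a linear sum of the level-zero outputs, with one
    // weight per identification function, fitted on the level-one data.
    Scores integrate(const std::vector<Scores>& level0_outputs,
                     const std::vector<double>& weight)
    {
      Scores combined(level0_outputs[0].size(), 0.0);
      for (std::size_t m = 0; m < level0_outputs.size(); ++m)
        for (std::size_t k = 0; k < combined.size(); ++k)
          combined[k] += weight[m] * level0_outputs[m][k];
      return combined;  // predicted class = argmax over the K entries
    }

    // Level-one data: for each of the N training sets held out in turn, the
    // K scores of all M functions trained on the other N-1 sets form a
    // KM-dimensional vector, paired with the held-out set's class label.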

Journal Article
TL;DR: The Structure of Value: Foundations of Scientific Axiology as mentioned in this paper provides a general definition of value together with a description of specific types of values including extrinsic, systemic and intrinsic.
Abstract: Introduction: Its Meaning is Its Value Evaluating a program without an attempt to develop an understanding of underlying processes and functions is like drinking from an empty bottle to satisfy a thirst, a vacuous and frustrating enterprise. Evaluation should not only be an attempt to quantitatively measure outcomes, but also a process of meaning-making. Evaluation has been described as, "a meaning-making technology which is applied to the curriculum, instruction and learning" (Hill, 1997, p. 4) in educational institutions. The term "meaning," however, is an abstract concept that needs to be defined and broken down into more specific components if it is to be applied to the practical evaluation process that requires specificity for success. So then, what is meaning? In Plato's view, "the meaning of the world is its value" (Hartman, 1967, p. 49). Still, if meaning is defined as value, then a more precise idea of value that is so much a part of e-valu-ation is necessary. Hartman (1967) by offering the logic of value in his book, The Structure of Value: Foundations of Scientific Axiology, provided a general definition of value together with a description of specific types of values. The purpose of this article is to, first, define value in general, secondly, describe Hartman's value dimensions including extrinsic, systemic and intrinsic and thirdly, discuss their significance in relation to answering the evaluation question, Is this college or university's general education program valuable? To Have and to Have Not How can an individual decide whether or not someone or something has value? According to Hartman (1967, Presno & Presno, 1980), value is the fulfillment of a thing by its concept. In other words, when a thing matches a person's concept or idea of it, the thing is thought to be worthy, exceptional, valid and good. By contrast, a thing that does not fulfill its concept or definition is considered not valuable, inferior, inadequate, wrong, deleterious, or just plain bad. So, in determining the value of a general education program, it is necessary to compare and contrast whether specific instances such as those under the categories of institutional practices and student outcomes correspond to the chosen concept of general education. In order to find out if general education is valuable at a particular institution, the actual program as it exists must be compared to the concepts of general education held by involved members which include faculty, staff, administrators, students, the community and the state and federal government. The Value Dimensions In order to further dissect the concept of value, Hartman (1967) discussed it in terms of three categories or dimensions: Extrinsic, systemic and intrinsic. It is important to note that each of these three dimensions may be applied to a single person including the self, to a group of people or to things like an institution or a program. The following are descriptions of the three types of valuing. Extrinsic If it's practical, it has extrinsic value. Hartman (1967) defined extrinsic value as a thing fulfilling an abstract concept or class concept. In a class, each thing shares common properties with all the other things so something that is considered good or valuable would be part of the class community and correspond to the mutual characteristics (Presno & Presno, 1980). 
Thus, a "good" faculty member would be one that adheres to the class of faculty members that generally includes the aspects of teaching, research and community service. A faculty member who meets these criteria not only stands a good chance of getting tenure, but also has extrinsic or practical value. Conversely, a thing that does not adequately represent its class lacks extrinsic value. The use of words and phrases such as excellent, four stars, two thumbs up, satisfactory or poor are cues of pragmatic thinking and that one is judging someone or something's practical value. …

Book
01 Jan 1998
TL;DR: A convenient way to achieve this is to embed structure into a framework of duality and to model interpretation by a hermeneutic circle driven by external insight constraining the class of a priori feasible interpretations (gauge conditions, respectively gauge invariance).
Abstract: Vicious circularities pervade the field of image analysis. For instance, features like "edges" only exist by virtue of a fiducial "edge detector". In turn, such a detector is typically constructed with the aim to extract those features one is inclined to classify as "edges". The paradox arises from abuse of terminology. The polysemous term "edge" can be used in two distinct meanings: as an operationally defined concept (output of an edge detector), or as a heuristic feature pertaining to our intuition. In the former case the design of edge detection filters is, strictu sensu, merely a convention for imposing structure on raw data. In the latter case it is our expectation of what an "edge" should be like that begs the question of appropriate detector design. The keyword then becomes interpretation. Clearly all low-level image concepts pertain to structure as well as interpretation. Once defined, structure becomes evidence. Interpretation amounts to a selection among all possible hypotheses consistent with this evidence. Clarity may be served by a manifest segregation of the two. A convenient way to achieve this is to embed "structure" into a framework of duality and to model "interpretation" by a hermeneutic circle driven by external insight constraining the class of a priori feasible interpretations (gauge conditions, respectively gauge invariance). Here the proposed framework is applied to motion analysis. Duality accounts for the role of preprocessing filters. The notorious "aperture problem" arises from an intrinsic local invariance (or gauge invariance), which cannot be resolved on the exclusive basis of image evidence. Gauge conditions reflect external knowledge for disambiguation. In a similar fashion, two stages can be distinguished in an error analysis of the outcome. There are errors of the obvious kind, caused by inadequate modelling ("semantical errors", or "mistakes"), which one would like to remove altogether, and subtle but inevitable errors propagated by any structural representation of data of intrinsically finite tolerance. Indeed, the flexibility to alter the gauge (re-interpret the data) and the possibility to carry out a rigorous error propagation study for the data formatting stage is a major rationale behind the current framework, cf. Fig. 1 and 2.