
Showing papers on "Class (philosophy)" published in 2014


Posted Content
TL;DR: In this paper, a formula for special L-values of Anderson's modules, an analogue in positive characteristic of the class number formula, is proved and applied to two kinds of L-series.
Abstract: We prove a formula for special L-values of Anderson's modules, an analogue in positive characteristic of the class number formula. We apply this result to two kinds of L-series.
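For orientation, the classical class number formula over number fields, which results of this kind transpose to positive characteristic, is the residue formula for the Dedekind zeta function; the statement below is standard background and is not taken from the paper:

$$\lim_{s \to 1}\,(s-1)\,\zeta_K(s) \;=\; \frac{2^{r_1}\,(2\pi)^{r_2}\,h_K\,\mathrm{Reg}_K}{w_K\,\sqrt{|d_K|}},$$

where $h_K$ is the class number of the number field $K$, $\mathrm{Reg}_K$ its regulator, $w_K$ the number of roots of unity in $K$, $d_K$ its discriminant, and $r_1, r_2$ the numbers of real and complex places. The paper's special-value formula plays the analogous role for Anderson modules over function fields.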

23 citations


Journal ArticleDOI
TL;DR: This paper introduces the class of polynomials $Z_{n_1,\ldots,n_j}^{(\alpha)}(x_1,\ldots,x_j;\rho_1,\ldots,\rho_j)$ in the kernel and obtains the solution in terms of a multivariate analogue of the Mittag-Leffler functions.

13 citations


Journal Article
TL;DR: In this article, the concept of non-systems is introduced, and it is shown that a class of non-systems exists whenever any of the conditions of "element", "interrelated" and "set" in the definition is not satisfied.
Abstract: It is generally acknowledged that "system" is defined as "a set of interrelated elements". This concept logically requires the existence of non-systems, cannot completely explain itself in form, and does not apply to objects containing unconnected components in physics. Therefore, it is necessary to introduce the concept of non-system. If any of the conditions of "element", "interrelated" and "set" in the definition is not satisfied, a class of non-systems exists. Introducing the concept of non-system has important methodological value, epistemological significance and ontological grounding, and allows us to go beyond system monism and recognize the dualistic existence of systems as well as non-systems.

11 citations


Posted Content
TL;DR: The existence of infinitely many low-lying fundamental geodesics was proved in this paper, answering a question of Einsiedler-Lindenstrauss-Michel-Venkatesh.
Abstract: A closed geodesic on the modular surface is "low-lying" if it does not travel "high" into the cusp. It is "fundamental" if it corresponds to an element in the class group of a real quadratic field. We prove the existence of infinitely many low-lying fundamental geodesics, answering a question of Einsiedler-Lindenstrauss-Michel-Venkatesh.

10 citations


Book ChapterDOI
27 Oct 2014
TL;DR: The specifics of the semantics for mass nouns can be integrated in a recent type-theoretical framework with rich lexical semantics, similarly to collective plural readings; the significance of a higher-order type system for gradable predicates and other complex predications is explored, as well as the relevance of a multi-sorted approach to such phenomena.
Abstract: We demonstrate how the specifics of the semantics for mass nouns can be integrated in a recent type-theoretical framework with rich lexical semantics, similarly to collective plural readings. We also explore the significance of a higher-order type system for gradable predicates and other complex predications, as well as the relevance of a multi-sorted approach to such phenomena. All the while, we will detail the process of analysis from syntax to semantics and ensure that compositionality and computability are kept. The distinction between massive and countable entities is similar to a classical type/token distinction: as an example of the type/token distinction, "the bike" can refer both to a single physical bicycle (as in the sentence "the bike is in the garage") but also to the class of all bicycles (as in the sentence "the bike is a common mode of transport in Amsterdam"). However, linguists such as Brendan Gillon warn against such a generalisation (long made in the literature) and remark that, as far as the language is concerned, mass nouns are more akin to the collective readings of pluralised count nouns. Among the many similarities is, for instance, the identical behaviour of plurals and mass nouns with cumulative readings: "Both the pens on the desk and the pens in storage use black ink, so I only have black pens" and "There is red wine on display and red wine in the back, so we only have red" are logically similar (see [5] for discussion). Several different approaches have been proposed to account for the specific semantic issues of mass nouns, from Godehard Link's augmented mereological approach in [12] to David Nicolas' revision of plural logic in [25], all remarking upon this similarity. Many different formalisms, using advanced type theories for the purpose of modelling semantics, have been recently proposed, see e.g. [1,3]. Among those, we proposed a semantic framework based on a multi-sorted logic with higher-order types in order to account for notoriously difficult phenomena.

10 citations


Proceedings Article
27 Jul 2014
TL;DR: Halpern and Pearl's modification of their definition of actual causality is shown to have a nontrivial impact on the complexity of computing actual causes, and the complexity of responsibility and blame, notions introduced by Chockler and Halpern, is completely characterized under the updated definition.
Abstract: Halpern and Pearl introduced a definition of actual causality; Eiter and Lukasiewicz showed that computing whether X = x is a cause of Y = y is NP-complete in binary models (where all variables can take on only two values) and $\Sigma_2^P$-complete in general models. In the final version of their paper, Halpern and Pearl slightly modified the definition of actual cause, in order to deal with problems pointed out by Hopkins and Pearl. As we show, this modification has a nontrivial impact on the complexity of computing actual cause. To characterize the complexity, a new family $D_k^P$, $k = 1, 2, 3, \ldots$, of complexity classes is introduced, which generalizes the class $D^P$ introduced by Papadimitriou and Yannakakis ($D^P$ is just $D_1^P$). We show that the complexity of computing causality under the updated definition is $D_2^P$-complete. Chockler and Halpern extended the definition of causality by introducing notions of responsibility and blame. The complexity of determining the degree of responsibility and blame using the original definition of causality was completely characterized. Again, we show that changing the definition of causality affects the complexity, and completely characterize it using the updated definition.
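As background, $D^P$ consists of the intersections of an NP language with a coNP language. A natural reading of the generalization, consistent with "$D^P$ is just $D_1^P$" (the paper's exact definition may differ in detail), is

$$D_k^P \;=\; \{\, L_1 \cap L_2 \;:\; L_1 \in \Sigma_k^P,\ L_2 \in \Pi_k^P \,\}, \qquad k = 1, 2, 3, \ldots,$$

so that $D_1^P = \{\, L_1 \cap L_2 : L_1 \in \mathrm{NP},\ L_2 \in \mathrm{coNP} \,\} = D^P$.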

8 citations


01 Jan 2014
TL;DR: In this paper, the postcopular kind-defining relative clause is shown to belong to a larger class of kind-defining relatives, in which other kinds of verbs incorporate a stage-level predicate.
Abstract: We will show that the postcopular kind-defining relative clause belongs to a larger class of kind-defining relatives, in which other kinds of verbs incorporate a stage-level predicate. We will consider their properties in varieties of Italian and English, and their evolution in the history of Italian. The copular structures whose predicate can be the antecedent of a kind-defining relative belong to the predicational (canonical, extensional) type, while the identificational (inverse, intensional) ones are excluded. The relatives that appear in this context, namely those modifying the nominal predicate of a copular sentence, display a number of special properties that set them apart from ordinary restrictive relative clauses and render them partly similar to non-restrictive relatives. We will try to relate these special properties to the fact that such relatives concur to define a kind, and more specifically to the fact that (in opposition to restrictives) the relative clause content is not presupposed as true, and the DP in predicate position is not truly referential.

8 citations


Posted Content
TL;DR: A technique based on mixture models is proposed for surmounting the problems of defining class boundaries, by determining the number of classes in a population and estimating the probability that an agent belongs to a particular class.
Abstract: Classifying agents into subgroups in order to measure the plight of the "poor", "middle class" or "rich" is commonplace in economics; unfortunately, the definition of class boundaries is contentious and beset with problems. Here a technique based on mixture models is proposed for surmounting these problems by determining the number of classes in a population and estimating the probability that an agent belongs to a particular class. All of the familiar statistics for describing the classes remain available, and the possibility of studying the determinants of class membership is raised. As a substantive illustration we analyze household income in urban China in the last decade of the 20th century. Four income groups are identified, and the progress of the "poor", "lower middle", "upper middle" and "rich" classes is related to household and regional characteristics to study the impact of urbanization and the one-child policy on class membership over the period.
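A minimal sketch of the mixture-model idea, assuming a Gaussian mixture on log income and scikit-learn; the simulated data, the BIC-based choice of the number of classes, and all names below are illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated log household incomes drawn from four latent classes.
rng = np.random.default_rng(0)
log_income = rng.normal(loc=[8.0, 9.0, 10.0, 11.0],
                        scale=0.3, size=(500, 4)).ravel().reshape(-1, 1)

# Determine the number of classes from the data (here via BIC)
# rather than imposing contentious class boundaries a priori.
models = [GaussianMixture(k, random_state=0).fit(log_income)
          for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(log_income))
print("estimated number of classes:", best.n_components)

# Probability that each agent belongs to each class; the familiar
# per-class statistics can be computed from these weights.
membership = best.predict_proba(log_income)
```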

7 citations


Posted Content
TL;DR: The concept of Ambiguity designates those situations where the information available to the decision maker is insufficient to form a probabilistic view of the world, and it has provided the motivation for departing from the Subjective Expected Utility paradigm.
Abstract: The concept of Ambiguity designates those situations where the information available to the decision maker is insufficient to form a probabilistic view of the world. Thus, it has provided the motivation for departing from the Subjective Expected Utility (SEU) paradigm. Yet, the formalization of the concept is missing. This is a grave omission as it leaves non-expected utility models hanging on a shaky ground. In particular, it leaves unanswered basic questions such as: (1) Does Ambiguity exist?, (2) If so, which situations should be labeled as "ambiguous"?, (3) Why should one depart from Subjective Expected Utility (SEU) in the presence of Ambiguity?, and (4) If so, what kind of behavior should emerge in the presence of Ambiguity? The present paper fills these gaps. Specifically, it identifies those information structures that are incompatible with SEU theory, and shows that their mathematical properties are the formal counterpart of the intuitive idea of insufficient information. These are used to give a formal definition of Ambiguity and, consequently, to distinguish between ambiguous and unambiguous situations. Finally, the paper shows that behavior not conforming to SEU theory must emerge in correspondence with insufficient information, and identifies the class of non-EU models that emerge in the face of Ambiguity. The paper also proposes a new comparative definition of Ambiguity, and discusses its relation with some of the existing literature.

7 citations


Proceedings ArticleDOI
11 Jan 2014
TL;DR: This paper develops a class of axiomatically defined categorical models of FRP with processes, called abstract process categories (APCs), and relates APCs to other categorical models of FRP, namely temporal categories and concrete process categories.
Abstract: Linear-time temporal logic and functional reactive programming (FRP) are related via a Curry-Howard correspondence. Thereby proofs of "always," "eventually," and "until" propositions correspond to behaviors, events, and processes, respectively. Processes in the FRP sense combine continuous and discrete aspects and generalize behaviors and events. In this paper, we develop a class of axiomatically defined categorical models of FRP with processes. We call these models abstract process categories (APCs). We relate APCs to other categorical models of FRP, namely temporal categories and concrete process categories.
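A rough intuition for the behavior/event/process trichotomy, sketched with plain Python types rather than the paper's categorical machinery; the encoding below is an illustrative assumption, not the authors' construction:

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
Time = float

# Curry-Howard reading (informally): a proof of "always phi" is a
# behavior (a value at every time); a proof of "eventually phi" is an
# event (a value at some time); a proof of "phi until psi" is a process.
Behavior = Callable[[Time], A]
Event = Tuple[Time, A]

def process(b: Behavior, e: Event) -> Callable[[Time], Tuple[str, object]]:
    """A process combines a continuous part (a behavior) with a discrete
    terminating part (an event), generalizing both."""
    end_time, final_value = e
    def step(t: Time):
        if t >= end_time:
            return ("terminated", final_value)
        return ("running", b(t))
    return step

# Example: a temperature reading until an alarm fires at t = 5.
p = process(lambda t: 20.0 + t, (5.0, "alarm"))
print(p(2.0), p(7.0))  # ('running', 22.0) ('terminated', 'alarm')
```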

6 citations


Proceedings ArticleDOI
01 Jan 2014
TL;DR: It is shown here that every such language class that contains any non-regular language already includes the whole arithmetical hierarchy, and that aside from the regular languages, no full trio generated by one language is closed under complementation.
Abstract: A Boolean closed full trio is a class of languages that is closed under the Boolean operations (union, intersection, and complementation) and rational transductions. It is well-known that the regular languages constitute such a Boolean closed full trio. It is shown here that every such language class that contains any non-regular language already includes the whole arithmetical hierarchy (and even the one relative to this language). A consequence of this result is that aside from the regular languages, no full trio generated by one language is closed under complementation. Our construction also shows that there is a fixed rational Kripke frame such that assigning an arbitrary non-regular language to some variable allows the definition of any language from the arithmetical hierarchy in the corresponding Kripke structure using multimodal logic.
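As background (standard formal-language terminology, not restated in the abstract): a full trio is a class $\mathcal{C}$ closed under arbitrary homomorphisms, inverse homomorphisms, and intersection with regular languages,

$$L \in \mathcal{C} \;\Longrightarrow\; h(L) \in \mathcal{C},\quad h^{-1}(L) \in \mathcal{C},\quad L \cap R \in \mathcal{C}$$

for every homomorphism $h$ and regular language $R$; equivalently, $\mathcal{C}$ is closed under the rational transductions used above.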

01 Jan 2014
TL;DR: A table of contents: the chapters cover an introduction, theory and methods, community as a schema, and theater as an outlet, escape, and arena for praise.
Abstract: Contents: Chapter One: An Introduction; Chapter Two: Theory and Methods; Chapter Three: Community as a Schema; Chapter Four: Theater as an Outlet, Escape, and Arena for Praise; Chapter Five: Conclusion; Bibliography; Appendix A; Appendix B; Vita.


Journal Article
TL;DR: An extension of Simpson's rule to unequally spaced data is proposed: fit a series of parabolic segments to groups of three successive data points and accumulate the areas under the segments.
Abstract: 1. Introduction

A survey of basic techniques of numerical integration is a common element of college-level calculus classes. Virtually all such students can expect to be exposed to the Trapezoidal rule and the somewhat more accurate Simpson's rule, both of which are specific cases of a broader class of techniques known as Newton-Cotes formulas (Press, et al., 1986). More advanced students may encounter more sophisticated techniques such as Gaussian quadrature.

Most textbook examples of these techniques utilize data points that are equally spaced in the abscissa coordinate. This is not a fundamental requirement, but has the advantage that very compact expressions can be developed for the integral in such cases; a paper previously published in this journal describes how to program such routines into a spreadsheet (El-Gebeily & Yushau, 2007). In many experimental circumstances, however, the (x, y) values are not equally spaced in x. How then can you estimate the area under a "graphical" y(x) curve? Surprisingly, textbooks tend to be silent on this very practical issue. For readers familiar with more advanced numerical methods, one tactic might be to apply an interpolation scheme such as a cubic spline fit. Aside from the issue that such techniques are usually directed more at determining values of the dependent variable at specified values of the independent variable, they demand knowledge of the values of the derivatives of y(x) at the end points of the data - information unlikely to be known in an experimental circumstance.

While the simplest approach to determining the desired integral would be a trapezoidal or "picket fence"-type summation, such a procedure would be aesthetically unsatisfying: physical phenomena are not normally discontinuous. Any sensible approach needs to incorporate some "smoothing," presumably based on some sort of interpolation.

The purpose of this article is to offer an easy-to-use scheme for dealing with such circumstances. The essence of the method, which is an extension of Simpson's rule, is to fit a series of parabolic segments to groups of three successive data points and accumulate the areas under the segments.

Before describing the details of the computation, there is a philosophical issue here that deserves some discussion: if an N-th order polynomial can always be fit exactly through N points, why not build the method to fit higher-order polynomial segments to the data? The answer offered here is that "simplest is best." If there is no model equation for the data, then there is no justification for using a polynomial of any specific order, or, for that matter, any particular function at all on which to base computing the integral. Quadratic segments are the lowest-order ones which allow one to build in some "curvature" to the run of y(x). Simpson's rule is based on fitting parabolic segments to the often-presumed equally spaced data points, so the method developed here can be considered an extension of this time-honored technique.

As sketched in Figure 1, consider three successive (x, y) points in your data table; call them (x_1, y_1), (x_2, y_2), and (x_3, y_3). It is assumed that your data are ordered in terms of monotonically increasing or decreasing values of x, and do not include any "degenerate" points, that is, there can be no duplicate values of x. A unique interpolating parabola can always be fit through three non-vertical points in a plane:

$$y(x) = a x^2 + b x + c, \qquad (1)$$

where the coefficients are given by inverting a 3 by 3 matrix:

$$\begin{pmatrix} x_1^2 & x_1 & 1 \\ x_2^2 & x_2 & 1 \\ x_3^2 & x_3 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}. \;\ldots
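A minimal sketch of the method in Python with NumPy; the function name and the handling of a leftover interval are illustrative assumptions, not taken from the article:

```python
import numpy as np

def parabolic_integral(x, y):
    """Integrate tabulated y(x) with unequal spacing by fitting a
    parabola through successive groups of three points (an extension
    of Simpson's rule) and accumulating the areas under the segments."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if len(x) < 3:
        raise ValueError("need at least three data points")
    total, i = 0.0, 0
    # Non-overlapping pairs of intervals: points (i, i+1, i+2).
    while i + 2 < len(x):
        coeffs = np.polyfit(x[i:i + 3], y[i:i + 3], 2)  # exact fit: 3 points, degree 2
        antideriv = np.polyint(coeffs)
        total += np.polyval(antideriv, x[i + 2]) - np.polyval(antideriv, x[i])
        i += 2
    # With an even number of points one interval remains: fit the last
    # three points but integrate over the final interval only.
    if i + 1 < len(x):
        coeffs = np.polyfit(x[-3:], y[-3:], 2)
        antideriv = np.polyint(coeffs)
        total += np.polyval(antideriv, x[-1]) - np.polyval(antideriv, x[-2])
    return total

# Example: integrate sin(x) over unevenly spaced abscissas.
xs = [0.0, 0.4, 1.1, 1.5, 2.3, 3.14159]
print(parabolic_integral(xs, np.sin(xs)))  # close to 1 - cos(pi) = 2
```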

Journal ArticleDOI
27 Feb 2014
TL;DR: In this paper, the authors reconstruct Ockham's semantics of the categories in order to prove first that his semantics is consistent, and then they show that it is not possible to prove that there is a determined number of categories.
Abstract: In this paper, I intend to reconstruct Ockham's semantics of the categories in order to prove, first, that his semantics is consistent and, second, that Ockham is not skeptical about the possibility of deriving the categories from primitives. According to Ockham, one must accept two principles in order to derive the categories. The first is the principle of 'in quid' predication, according to which the name of a category can be predicated 'in quid' of a determined class of terms. The second is the principle of the transitivity of predication, according to which A is predicated of C if A is predicated of B and B is predicated of C. I will show that Ockham's semantics of the categories makes two assumptions. According to the first assumption, there exist only two types of things, substances and qualities. According to the second, the categories are mutually exclusive. Ockham's semantics of the categories implies that the categories are both ontological and conceptual and that it is not possible to prove that there is a determined number of categories.

01 Jan 2014
TL;DR: Bervoets, as discussed in this paper, studies the semantics of a set of intensional verbs that are used to report a subject's standpoint on a given possibility, including permit, promise, offer, guarantee, demand, insist on, recommend, suggest, encourage, and a handful of others.
Abstract: Author(s): Bervoets, Melanie Jane | Advisor(s): Sharvit, Yael; Spector, Benjamin | Abstract: This dissertation is concerned with the semantics of a specific set of intensional verbs, those that are used to report a subject's standpoint on a given possibility. Among these verbs are permit, promise, offer, guarantee, demand, insist on, recommend, suggest, encourage, and a handful of others. When the objects of these verbs are disjunctive, we find the kind of free choice effects previously observed with possibility and necessity modals. Based on whether the verbs pattern like may or like must with respect to these inferences, we separate the verbs into two classes, which we call Class I (may-like), and Class II (must-like). This behavior suggests that at the level of interpretation, these verbs contain quantifiers over possible worlds: an existential one in the case of Class I, and a universal one for the members of Class II. However, motivated by an unexpected range of readings found when sentences built with these verbs are negated, an investigation reveals that the members of Class I and II are more than just modal. They also appear to be accomplishment verbs that describe external events. As a result, we give a semantic analysis of these verbs that casts them as complex creatures, describing external events in which subjects indicate their modal opinions. Taking the verbs to be reporters of external events, we then need to explain why some of the negative sentences built with the Class I/II verbs appear to describe internal cognitive states. The solution to this involves two elements: first, we appeal to a version of the habitual operator that can deliver dispositions that are not necessarily established by repetitive action. Second, after noticing that all habitual sentences have extra, unexpectedly strong readings with negation, we enlarge the scope of the phenomenon previously called Neg-raising, and show how an existing pragmatic account for this (that of Romoli (2013)) can be modified to deal with the broader array of extra strong negative readings. Along the way, we will account for why dispositions described by habitual Class I/II predicates seem to have different establishment requirements than those described by similar accomplishment verbs. We also address how the performativity of these verbs follows from the semantics proposed.

Dissertation
01 Jan 2014
TL;DR: In this dissertation, reference analysis is used to derive objective priors for Bayesian analysis, since the difficulties of subjective elicitation and time restrictions frequently rule out subjective priors.
Abstract: Bayesian analysis has recently been widely used in both the theory and application of statistics. The choice of prior plays a key role in any Bayesian analysis. There are two types of priors: subjective priors and objective priors. In practice, however, the difficulties of subjective elicitation and time restrictions frequently limit us to objective priors constructed by formal rules. In this dissertation, our methodology is to use reference analysis to derive objective priors.

Posted Content
TL;DR: In this paper, the necessity of minimum aggregation for the persistent existence of strong equilibria (actually, just of Pareto optimal Nash equilibria) is established for a class of games that includes games with structured utilities and bottleneck congestion games.
Abstract: A rather general class of strategic games is described where the coalition improvements are acyclic and hence strong equilibria exist: The players derive their utilities from the use of certain "facilities"; all players using a facility extract the same amount of "local utility" therefrom, which amount depends both on the set of users and on their actions, and is decreasing in the set of users; the "ultimate" utility of each player is the minimum of the local utilities at all relevant facilities. Two important subclasses are "games with structured utilities," basic properties of which were discovered in 1970s and 1980s, and "bottleneck congestion games," which attracted researchers' attention quite recently. The former games are representative in the sense that every game from the whole class is isomorphic to one of them. The necessity of the minimum aggregation for the "persistent" existence of strong equilibria, actually, just Pareto optimal Nash equilibria, is established.
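A tiny illustration of the utility structure just described, in Python; the facilities, the capacity/|users| form of the decreasing local utility, and all names are invented for the example:

```python
def local_utility(capacity: float, users: set) -> float:
    """The same local utility for every user of a facility,
    decreasing in the set of users."""
    return capacity / len(users)

def ultimate_utility(player, choices, capacity):
    """Minimum aggregation: a player's utility is the minimum of the
    local utilities at all facilities that player uses."""
    users_of = {f: {p for p, fs in choices.items() if f in fs}
                for f in capacity}
    return min(local_utility(capacity[f], users_of[f])
               for f in choices[player])

choices = {"Ann": {"road", "bridge"}, "Bob": {"bridge"}}
capacity = {"road": 10.0, "bridge": 6.0}
print(ultimate_utility("Ann", choices, capacity))  # min(10/1, 6/2) = 3.0
```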

01 Feb 2014
TL;DR: One class of Japanese adjectives, namely i-adjectives, is entirely extensional in the sense that there are no i-adjectives that are non-subsective as mentioned in this paper.
Abstract: One class of Japanese adjectives, namely i-adjectives, is entirely extensional in the sense that there are no i-adjectives that are non-subsective. Non-subsectivity of prenominal modifiers is expressed by other parts of speech in Japanese. The absence of non-subsectivity has been verified against a list of approximately 1,000 i-adjectives.
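As a background gloss (standard semantics terminology, not taken from the abstract): a prenominal modifier ADJ is subsective when

$$[\![\,\mathrm{ADJ}\ N\,]\!] \;\subseteq\; [\![\,N\,]\!],$$

so every red bike is a bike; a non-subsective modifier such as English "alleged" fails this, since an alleged thief need not be a thief. The claim above is that no i-adjective patterns with "alleged".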

Posted Content
TL;DR: The main result states that the class of complete Elgot monads is closed under such cofree extensions, which thus serve as domains for effectful recursive definitions with free operations, allowing for a non-trivial semantics of non-terminating computations with free effects.
Abstract: A pervasive challenge in programming theory and practice is the combination of features. Here, we propose a semantic framework that combines monad-based computational effects (e.g. store, nondeterminism, random), underdefined or free operations (e.g. actions in process algebra and automata, exceptions), and recursive definitions (e.g. loops, systems of process equations). The joint treatment of these phenomena has previously led to models tending to one of two opposite extremes: extensional as, e.g., in domain theory, and intensional as in classical process algebra and more generally in universal coalgebra. Our metalanguage for effectful recursive definitions, designed in the spirit of Moggi's computational metalanguage, flexibly combines these intensional and extensional aspects of computations in a single framework. We base our development on a notion of complete Elgot monad, whose defining feature is a parametrized uniform iteration operator satisfying natural axioms in the style of Simpson and Plotkin. We provide a mechanism of adjoining free operations to such monads by means of cofree extensions, thus in particular allowing for a non-trivial semantics of non-terminating computations with free effects. Our main result states that the class of complete Elgot monads is closed under such cofree extensions, which thus serve as domains for effectful recursive definitions with free operations. Elgot monads do not require the iterated computation to be guarded, and hence iteration operators are not uniquely determined by just their defining fixpoint equation. Our results however imply that they are uniquely determined as extending the given iteration in the base effect and satisfying the axioms. We discuss a number of examples formalized in our metalanguage, including (co)recursive definitions of process-algebraic operations on side-effecting processes.
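To make the iteration operator concrete, here is a toy Kleisli-style iteration for the simplest effect, partiality encoded with Python's Optional; this sketch only illustrates the fixpoint behaviour of iteration and is not the paper's complete Elgot monad construction:

```python
from typing import Callable, Optional, Tuple, TypeVar, Union

X = TypeVar("X")
Y = TypeVar("Y")

# A step function f : X -> T(Y + X) for the partiality monad T = Optional:
# it may fail (None), finish with ("done", y), or loop with ("continue", x).
Step = Optional[Tuple[str, Union[X, Y]]]

def iterate(f: Callable[[X], Step], x: X) -> Optional[Y]:
    """Unfold f until it signals "done", informally satisfying the
    fixpoint equation iter(f)(x) = f(x) >>= [return, iter(f)]."""
    while True:
        step = f(x)
        if step is None:          # the computation failed
            return None
        tag, value = step
        if tag == "done":
            return value
        x = value                 # feed the intermediate state back in

# Example: count down to zero, failing on negative input.
print(iterate(lambda n: None if n < 0
              else ("done", "zero") if n == 0
              else ("continue", n - 1), 3))  # -> zero
```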

Proceedings Article
01 Jan 2014
TL;DR: One of the main challenges in knowledge representation and reasoning is still to cope with vague and imprecise information in an adequate manner, and rough extensions of Description Logics have been proposed as a formalism for handling these upper and lower approximations.
Abstract: One of the main challenges in knowledge representation and reasoning is still to cope with vague and imprecise information in an adequate manner. This imprecision is found in many knowledge domains, particularly medicine and life sciences. A typical source of imprecision in these domains arises from the level of detail in which the knowledge is described. For example, a disease is usually diagnosed through a series of symptoms that a patient presents, but two individuals, say Ana and Bob, showing the same symptoms might in fact suffer from different maladies. Thus, while these individuals might be equivalent from a symptomatic point of view, they might be classified into different illness classes. One of the many approaches suggested for handling imprecise knowledge is based on rough approximations. Generally speaking, the individuals in a domain are partitioned into equivalence classes, based on their indiscernibility according to the current level of detail. An individual belongs to the upper approximation of a class C (denoted $\overline{C}$) if it is indiscernible from some element of C. For instance, Ana and Bob are in the same symptomatic equivalence class. If Bob is diagnosed with, say, the Cooties, then Ana potentially has the Cooties, too. In rough terminology, Ana is in the upper approximation of Cooties ($\overline{\textit{Cooties}}$). An analogous lower approximation of a class can be defined, too. Intuitively, $\underline{C}$ contains the prototypical elements of the class C: if an element x belongs to $\underline{C}$, then every element indiscernible from x is guaranteed to belong to C. Rough extensions of Description Logics (DLs) [1] have been proposed as a formalism for handling these upper and lower approximations [5]. An important example is the rough DL $\mathcal{EL}_\rho$, which extends $\mathcal{EL}$ with two new rough constructors. Formally, $\mathcal{EL}_\rho$ concepts are built from concept names A and role names r through the grammar rule $C ::= A \mid \top \mid C \sqcap C \mid \exists r.C \mid \overline{C} \mid \underline{C}$. The semantics of this logic is based on interpretations $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}}, \rho^{\mathcal{I}})$ that extend standard interpretations by an equivalence relation $\rho^{\mathcal{I}}$ over the elements of $\Delta^{\mathcal{I}}$. The interpretation function is extended to the classical constructors in the usual way, and to the rough constructors by setting
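The two approximations have a direct set-theoretic reading; a small sketch in Python, encoding the equivalence relation as a partition into blocks (an illustrative encoding, not the paper's DL machinery):

```python
def upper(C, partition):
    """x is in the upper approximation of C iff x is indiscernible
    from some element of C."""
    return set().union(*(block for block in partition if block & C))

def lower(C, partition):
    """x is in the lower approximation of C iff every element
    indiscernible from x belongs to C."""
    return set().union(*(block for block in partition if block <= C))

# Ana and Bob are symptomatically indiscernible; Bob has the Cooties.
partition = [{"ana", "bob"}, {"carl"}]
cooties = {"bob"}
print(upper(cooties, partition))  # {'ana', 'bob'}: Ana potentially has it
print(lower(cooties, partition))  # set(): no prototypical case
```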


Patent
30 Sep 2014
TL;DR: A modification of a class, included in program code, from a first class definition to a second, different class definition is detected; an association between the class and a relationship indicator is stored, and instances of the class are updated using the second class definition when accessed.
Abstract: A device may detect a modification of a class, included in program code, from a first class definition to a second class definition that is different from the first class definition. The device may create a relationship indicator that references the second class definition and that indicates that the class has been modified. The device may store an association between the class and the relationship indicator. The device may access an instance of the class associated with the first class definition. The device may detect the association between the class and the relationship indicator based on accessing the instance of the class. The device may update the instance of the class, using the second class definition, based on detecting the association between the class and the relationship indicator.
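A loose Python sketch of the mechanism the claims describe (detect a redefinition, record a relationship indicator, migrate instances lazily on access); every name here is invented for illustration and the patent itself is language-agnostic:

```python
class RelationshipIndicator:
    """References the second class definition and records that
    the class has been modified."""
    def __init__(self, new_definition):
        self.new_definition = new_definition

_indicators = {}  # stored association: class -> relationship indicator

def detect_modification(first_cls, second_cls):
    if second_cls is not first_cls:   # the definitions differ
        _indicators[first_cls] = RelationshipIndicator(second_cls)

def access(instance):
    """On access, update the instance to the second class definition
    if an indicator is associated with its class."""
    indicator = _indicators.get(type(instance))
    if indicator is not None:
        instance.__class__ = indicator.new_definition
    return instance

class PointV1:
    def __init__(self, x): self.x = x

class PointV2(PointV1):               # the "second class definition"
    def doubled(self): return 2 * self.x

p = PointV1(21)
detect_modification(PointV1, PointV2)
print(access(p).doubled())            # 42: old instance, new definition
```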

Posted Content
TL;DR: In this paper, the authors apply Berberian's technique to asymmetric Putnam-Fuglede theorems for paranormal and related classes of operators, and give a new counterexample to an asymmetric Putnam-Fuglede theorem for paranormal operators.
Abstract: We present how to apply Berberian's technique to asymmetric Putnam-Fuglede theorems. In particular, we prove that if $A, B \in B(H)$ belong to the union of the classes of $*$-paranormal operators, p-hyponormal operators, dominant operators and operators of class Y, and $AX = XB^*$ for some $X \in B(H)$, then $A^*X = XB$. Moreover, we give a new counterexample for an asymmetric Putnam-Fuglede theorem for paranormal operators.


Journal ArticleDOI
TL;DR: In this paper, Brandom's use theory of meaning, which he also calls "semantic pragmatism" and "pragmatic semantics," is examined; its core is the assumption that the meaning (or propositional content) of an assertion depends on its inferential role, that is, on its function as a premise or conclusion in inferences.
Abstract: 1 Inference and Representation

The view that the meaning of a word or sentence is identical to its use in language, and that language is primarily a social practice, is mainly associated with the name of Ludwig Wittgenstein (1958, §§ 30 and 43). It is opposed to semantic theories that construe meanings as abstract entities (for example, propositions) or as somehow determined by mental entities (for example, intentions). As is well known, however, Wittgenstein left it to others to develop a systematic "use theory" of meaning. One of these "others" is Robert Brandom, whose thoughts about meaning, language, and social practices in general have attracted much attention in the past two decades, more precisely, since the publication of his book Making It Explicit in 1994.

Brandom's use theory, which he also calls "semantic pragmatism" and "pragmatic semantics," sticks out among other approaches of this sort because of its sheer size and richness of detail. It is part of a comprehensive system that also deals with normativity, truth, and intentionality, to mention just three important themes. The core of Brandom's semantics is the assumption that the meaning (or propositional content) of an assertion depends on its inferential role, that is, on its function as a premise or conclusion in inferences. One understands an assertion if one is able to draw the relevant inferences. Brandom emphasizes that drawing inferences is a kind of knowing-how, a practical activity or capacity, rather than some kind of theoretical knowledge.

Furthermore, Brandom also leaves no doubt that the main opponent of inferentialism is "representationalism," which is his term for approaches that start with the concept of representation and use this for defining the concept of inference. A representationalist theory would typically describe how words and sentences refer to things and facts in the world, how the truth value of a sentence depends on the reference of the words appearing in the sentence, and finally, how true conclusions can be inferred from true premises.

Brandom turns this explanatory strategy upside down. For him, the basic notion is that of inference. Inferences between sentences determine the meanings of these sentences and of the words contained in them. There are "good" inferences and "bad" ones. Roughly speaking, neglecting a lot of details, we may say that the good inferences are those which are endorsed by the community. "Truth" is introduced into the theory only at a later stage, being defined as that which is preserved in the transition from premises to conclusions in good inferences.

As to semantic reference, this is not a relation between language and the world, such as between the word "dog" and a class of hairy animals. It is rather a relation between the word and another part of discourse. Imagine, for example, a dialogue about someone's pet. At some point in this dialogue, the pet would perhaps be specified as a "dog," which would establish a semantic relation between the word "dog" and previous occurrences of the word "pet" in the same dialogue. Brandom calls this an anaphoric account of "refers." Anaphoric reference is intralinguistic reference and is not to be confused with extralinguistic reference, which does not exist in Brandom's system. The purpose is "to show how an analysis in terms of anaphoric mechanisms can provide the resources for a purely intralinguistic account of the use of the English sentences by means of which philosophers make assertions about extralinguistic referential relations" (Brandom 1994, 306).

In order to better understand this "anti-representationalist" treatment of representational concepts, it may be useful to distinguish three kinds of representationalism: In a first sense, a theory can be said to be representationalist if it uses representational concepts such as "reference" at all, whether these figure as basic or as derived concepts and however they may be defined within the theory. …

Patent
16 Jul 2014
TL;DR: In this article, a system for building an ontology-based enterprise architecture consisting of a class unit where each data for building the enterprise architecture and a viewpoint including a plurality of views is set in a class type; a relation extracting unit for extracting relations between views selected for building enterprise architecture; and a data definition unit for defining data related to the views based on the relation extracted by the relation extraction unit.
Abstract: The present invention provides a system for building an ontology-based enterprise architecture comprising: a class unit where each data for building the enterprise architecture and a viewpoint including a plurality of views is set in a class type; a relation extracting unit for extracting relations between views selected for building the enterprise architecture; and a data definition unit for defining data related to the views based on the relation extracted by the relation extracting unit, wherein the relation extracting unit extracts a view to be built first among the views included in the viewpoint as a prior view and extracts a view having data overlapping the data to be defined in the prior view as a posterior view.

01 Jan 2014
TL;DR: This chapter focuses on actions that demonstrate an awareness of the verbal levels of abstraction: what we describe is not what we sense, and what it means is not what we describe.
Abstract: Premature judgment often prevents us from seeing what is directly in front of us. - S. I. Hayakawa & A. R. Hayakawa (1990, p. 27)

Definition: Inference-Observation Confusion

In previous chapters, we learned how to silently add etcetera to our thinking processes, fully aware that "a map does not cover all of the territory." It is now time to apply another one of Korzybski's (2000) premises: "a map is self-reflexive" (p. 58). This means we take note of how we are using language from different levels of abstraction. To put this awareness into practice, we need to revisit the structural differential in order to distinguish direct observations ("D"-level descriptions) from inferences ("I"-level assumptions). In this section, we focus on actions that demonstrate an awareness of the verbal levels of abstraction: what we describe is not what we sense, and what it means is not what we describe.

Because our language does not readily distinguish between observation and inference levels, we rarely speak with "facts" resulting from observations. Kenneth Johnson (2004) proposed that facts are statements "made after direct observation ... [and are] confined to what one observes," whereas inferences may be constructed "before, during, or after an observation ... [and] go beyond what we observe" (p. 14).

Consider the following situation: Upon reading about the transgressions of a politician or professional athlete, we often relay the "facts" to our friends and family. In doing so, we blindly assume that news outlets have checked their "facts." We do not even consider that we may be acting on the basis of assumptions. However, because we did not personally observe the actions, nor verify the validity of the media claims, our conclusions about the athlete may merely be a misguided inference.

We need to know what contributes to levels confusion, specifically those between inferences and observations. What follows is an explanation of the contributing factors and correctives needed to proactively address inference-observation confusion.

Contributing Factors: Inference-Observation Confusion

Haney (1992) explained that when people make an inference but fail to label it as one, they ignore the risks involved. Perhaps we take these uncalculated risks because we forget how little our inferences represent WIGO. We rarely credit the abstraction process for taking uncalculated risks, even though we might acknowledge physiological factors (hunger and fatigue) and psychological factors (values and needs) for inaccurate inferences (Haney, p. 239).

For example, after no students answer questions posed during class, I might infer that students have not read the assigned material. I may have indeed heard no voices and observed no one raising a hand to answer, but this does not necessarily mean students have not read the material. Because I was not physically present when they prepared for class, I should not trust my abstraction; perhaps students have read the material but are too exhausted to think clearly because of an exam that they took during the preceding class period.

As the previous example illustrates, language often contains no grammatical markers to indicate whether we actually observed the conclusions that we share with others. I can state as a fact ("D"-level) that "no student answered a question in class today," but I cannot accurately infer that it was because "students did not read" ("I"-level).

Without language structure to help us distinguish between fact and inference, we must employ other techniques to keep us from acting on inferences as if they were facts.

Correctives: Inference-Observation Confusion

Haney (1992) proposed that even if we cannot completely avoid making inferences, we can become more alert to the risks that we are taking. First, we need to detect and change inferential statements.

Detect Source, Scope, and Timing

To detect our inferences, we need to ask ourselves three important questions:

1. …