
Showing papers on "Semantic similarity published in 1973"


Journal ArticleDOI
TL;DR: Four experiments dealt with the verification of semantic relations; semantic distance predicted RTs in another categorization task and choices in an analogies task, placing constraints on a theory of semantic memory.

859 citations




Book ChapterDOI
01 Jan 1973
TL;DR: The authors reviewed the structural relationship between syntax and semantics in children's utterances and found that children start their syntactic careers by learning simple order rules for combining words which, in their understanding, perform semantic functions such as agent, action, and object acted upon.
Abstract: Publisher Summary This chapter reviews the structural relationship between syntax and semantics in children's utterances. On one view of language acquisition, the linguistic knowledge that lies behind children's initial attempts at word combining may not and need not include information about the basic grammatical relations or the constituent structure they entail. There is, in any event, no compelling evidence as yet that it does. The characteristics of cross-linguistic data suggest the alternative view that children launch their syntactic careers by learning simple order rules for combining words, which, in their understanding, perform semantic functions such as agent, action, and object acted upon, or perhaps other even less abstract semantic functions. Through additional linguistic experience, children may begin to recognize similarities in the way different semantic concepts are formally dealt with and gradually to reorganize their knowledge according to the more abstract grammatical relationships that are functional in the particular language they are learning.

82 citations


Journal ArticleDOI
TL;DR: It was concluded that deep structure similarity had potent effects but that a more complete description of the data required the postulation of additional factors such as “propositional structure” and “semantic structure.”
Abstract: The present study investigated the effects of deep, lexical, and surface structure relationships between sentences on judgments of these sentences' semantic similarity. Ten sentence conditions, four paraphrases and six nonparaphrases, were derived from a base sentence. The four paraphrase types were transformational (T), a passive form of the base, lexical (L), containing synonyms for base content words, formalexic (F), a combination of T and L types, and parasyntactic (P), one of several alternative interpretations of the base. The six nonparaphrases consisted of three sets of two sentences each: the false permutation sentences retained the base lexicon, the false synonymous sentences contained synonyms, and the unrelated sentences' lexicon was completely unrelated to the base. One sentence in each nonparaphrase set retained the base surface form and the other, a passivization, did not. Using a modified paired comparisons task, the following rank order of conditions, in terms of preference, was obtained: T > L > F > P > false permutation > false synonymous > unrelated. It was concluded that deep structure similarity had potent effects but that a more complete description of the data required the postulation of additional factors such as "propositional structure" and "semantic structure."

35 citations


Journal ArticleDOI
TL;DR: The selection of scales and concepts and the format for administration are discussed in terms of an underlying linear model, and three methods of calculating correlations are evaluated.
Abstract: The semantic differential technique can be used to structure attitude domains. In the present paper, the selection of scales and concepts and the format for administration are discussed in terms of an underlying linear model. Three methods of calculating correlations are evaluated, with the method known as "stringing out" being preferred. Suggestions for factor analysis and factor matching are made. Some applications of the exploratory use of the semantic differential are described.
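The "stringing out" method mentioned above can be sketched in a few lines, assuming the usual semantic-differential layout in which each subject rates every concept on every bipolar scale (the data and dimensions below are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical data: 10 subjects rate 4 concepts on 3 bipolar 7-point scales.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(10, 4, 3)).astype(float)

# "Stringing out": pool every subject-by-concept rating into one long column
# per scale, then correlate the scales over those pooled observations.
strung = ratings.reshape(-1, 3)           # (10 * 4, 3) observation matrix
corr = np.corrcoef(strung, rowvar=False)  # 3 x 3 scale intercorrelation matrix
```

The resulting scale-by-scale correlation matrix is the kind of input the factor analysis discussed in the paper would take.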

29 citations


Journal ArticleDOI
TL;DR: Outlines an approach towards a semantic description of English, in which meaning is treated as part of the description of the language itself.
Abstract: (1973). Towards a semantic description of English. English Studies: Vol. 54, No. 4, pp. 347-357.

27 citations


Proceedings Article
20 Aug 1973
TL;DR: A natural language question answering system that assimilates input with semantic nets and STRIPS-like operators and generates natural language responses to questions by "parsing" syntactic rules retrieved from the lexicon.
Abstract: A natural language question answering system is presented. The system's parser maps semantic paraphrases into a single deep structure characterized by a canonical verb. A modeling scheme using semantic nets and STRIPS-like operators assimilates the sequence of input information. Natural language responses to questions are generated from a data base of semantic nets by "parsing" syntactic rules retrieved from the lexicon.
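The combination of semantic-net facts with STRIPS-like operators can be illustrated with a minimal sketch (the triples, names, and operator below are invented for illustration and are not the paper's):

```python
# Semantic net as a set of (node, relation, node) triples.
net = {("john", "holds", "book"), ("book", "isa", "object")}

class Operator:
    """STRIPS-like operator: preconditions, an add list, and a delete list."""
    def __init__(self, pre, add, delete):
        self.pre, self.add, self.delete = pre, add, delete

    def apply(self, facts):
        if not self.pre <= facts:  # every precondition must hold in the net
            return None
        return (facts - self.delete) | self.add

# Hypothetical operator: John puts the book down on the table.
put_down = Operator(
    pre={("john", "holds", "book")},
    add={("book", "on", "table")},
    delete={("john", "holds", "book")},
)

new_net = put_down.apply(net)
```

Assimilating a sequence of inputs, as the abstract describes, then amounts to applying such operators one after another to the current set of triples.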

22 citations


Proceedings Article
01 Jan 1973

21 citations



Journal ArticleDOI
TL;DR: The authors found that accurate specific recognition memory for sentences occurs when semantic intersentential relations are absent but virtually disappears when they are present, helping to clarify the relationship between specific and integrated recognition memory.
Abstract: Semantic 'relatedness' of sentences was manipulated in an attempt to clarify the relationship between specific recognition memory (Shepard, 1967) and integrated recognition memory (Bransford and Franks, 1970, 1971). Results indicate that accurate specific recognition memory for sentences occurs when semantic intersentential relations are not present but virtually disappears when they are present.

Journal ArticleDOI
TL;DR: In this paper, an 18-word list was constructed so that each word belonged to both a semantically related category and an acoustically related category, and the list was presented, either orally or visually, in a multitrial free-recall task.
Abstract: An 18-word list was constructed so that each word belonged to both a semantically related category and an acoustically related category. The list was presented, either orally or visually, in a multitrial free-recall task. The results from 40 Ss showed that the organization of recall was dominated by the acoustic properties of the words.

Book ChapterDOI
01 Jan 1973
TL;DR: It is said that in conversation the authors refer to nothing more familiarly or knowingly than time, and surely they understand it when they speak of it; they understand it also when they hear another speak of it.
Abstract: For what is time? Who can easily and briefly explain it? Who can even comprehend it in thought or put the answer into words? Yet is it not true that in conversation we refer to nothing more familiarly or knowingly than time? And surely we understand it when we speak of it; we understand it also when we hear another speak of it. What then, is time? If no one asks me, I know what it is. If I wish to explain it to him who asks me, I do not know.1

Journal ArticleDOI
TL;DR: This paper attempts to define the term 'semantic representation' in a non-circular way, so as to make linguists' approaches to semantics, e.g., the generative v. interpretive semantics debate, easier to understand.
Abstract: A generative grammar is a device for enumerating sentences, and a sentence is a pairing of a phonetic shape with a meaning. So a theory of grammar has to provide for representing both sound and meaning. The representation of speech sound involves only scientific problems; but the notion of 'representing meaning' presents philosophical difficulties, which must be faced by linguists before they can meaningfully discuss the empirical questions at issue. This paper attempts to define the term 'semantic representation', so as to render non-circular linguists' approaches to semantics, e.g., the generative v. interpretive semantics debate.

1. THE QUESTION POSED. A generative grammar is a device that enumerates all and only the sentences of a language. The type of grammar described in Syntactic Structures was to achieve this by specifying a set of phoneme sequences (Chomsky, 1957:13). However, since a sentence of a natural language is a pairing of a phonetic shape with a meaning, it has been held for a number of years that a complete generative grammar must specify not only a set of 'phonetic representations' but also, paired with them,[1]

[1] Shorter versions of this paper were read to the Autumn Meeting of the Linguistics Association of Great Britain, University of East Anglia, 1 November 1970, to the Linguistics Circle of Oxford, and to the Oxford Psycholinguistics Seminar. I am particularly grateful to L. Jonathan Cohen and to John Lyons for discussion, although they should not be taken to agree with my conclusions.

Proceedings ArticleDOI
27 Aug 1973
TL;DR: This paper is concerned with semantic structures of the type used in programs which involve natural language processing, such as question-answering systems, and describes two examples of the use of constructibility as a design criterion for semantic structures.
Abstract: This paper is concerned with semantic (or lexical) structures of the type used in programs which involve natural language processing, such as question-answering systems. These structures, which generally take the form of graphs representing semantic relations defined on word senses, must satisfy some rather self-evident requirements relating to their linguistic significance and adequacy and to their suitability for computational implementation. In addition, if the lexical subsets in question are to be nontrivial in size, the structures must be constructible in some systematic, consistent way, preferably with the aid of a computer. The structures which have been used in existing experimental systems, such as those reported in M. (1970), have generally been very restricted, and it has been argued (S. Y. S~D~OW, 1972) that it is their lack of constructibility which has precluded the possibility of extending them. The aim of this paper is to describe two examples of the use of constructibility as a design criterion for semantic structures. The semantic relations of hyponymy and compatibility are introduced in the following section, and suitable representations for them are developed in sections 3 and 4, respectively. Throughout the discussion, the constructibility of a representation is to be interpreted as its amenability to the use of a computational discovery algorithm which would build the structure and have the properties of semi-automatic operation, simple input data, efficiency in the quantity of input, consistency maintenance, and monotonic refinement of the growing structure. The meaning of this terminology will be made more clear in the course of the discussion.
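The hyponymy relation discussed above is naturally represented as a directed graph over word senses, and "monotonic refinement" suggests a discovery procedure that only ever adds to the structure. A minimal sketch under those assumptions (the vocabulary is invented, and this is not the paper's algorithm):

```python
# Hyponymy as a directed acyclic graph: each word maps to its direct hypernyms.
hypernyms = {}

def add_hyponym(word, hypernym):
    """Monotonic refinement: edges are only ever added, never removed."""
    hypernyms.setdefault(word, set()).add(hypernym)
    hypernyms.setdefault(hypernym, set())

def is_hyponym_of(word, candidate):
    """Transitive check: walk upward through the hypernym edges."""
    stack, seen = list(hypernyms.get(word, ())), set()
    while stack:
        w = stack.pop()
        if w == candidate:
            return True
        if w not in seen:
            seen.add(w)
            stack.extend(hypernyms.get(w, ()))
    return False

add_hyponym("spaniel", "dog")
add_hyponym("dog", "animal")
```

A semi-automatic discovery algorithm of the kind the abstract envisages would call something like add_hyponym as each new judgment is elicited, with consistency checks (e.g., cycle rejection) layered on top.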