
Showing papers on "Natural language published in 1969"


Book
01 Jul 1969
TL;DR: Testing English as a Second Language, as discussed by the authors.
Abstract: Testing English as a Second Language.

597 citations


Book
01 Jan 1969

190 citations


Journal ArticleDOI
09 May 1969-Science
TL;DR: The development of language in the child may be elucidated by applying to it the conceptual framework of developmental biology, and the schema of physical maturation is applicable to the study of language development.
Abstract: form of description, not of scattered facts but of the dynamic interrelations, the operating principles, found in nature. Chomsky and his students have done this. Their aim has been to develop algorithms for specific languages, primarily English, that make explicit the series of computations that may account for the structure of sentences. The fact that these attempts have only been partially successful is irrelevant to the argument here. (Since every native speaker of English can tell a well-formed sentence from an ill-formed one, it is evident that some principles must exist; the question is merely whether the Chomskyites have discovered the correct ones.) The development of algorithms is only one province of mathematics, and in the eyes of many mathematicians a relatively limited one. There is a more exciting prospect; once we know something about the basic relational operating principles underlying a few languages, it should be possible to characterize formally the abstract system language as a whole. If our assumption of the existence of basic, structural language universals is correct, one ought to be able to adduce rigorous proof for the existence of homeomorphisms between any natural languages, that is, any of the systems characterized formally. If a category calculus were developed for this sort of thing, there would be one level of generality on which a common denominator could be found; this may be done trivially (for instance by using the product of all systems). However, our present knowledge of the relations, and the relations of relations, found in the languages so far investigated in depth encourages us to expect a significant solution. Environment and Maturation: Everything in life, including behavior and language, is interaction of the individual with its milieu. But the milieu is not constant. The organism itself helps to shape it (this is true of cells and organs as much as of animals and man).
Thus, the organism and its environment is a dynamic system and, phylogenetically, developed as such. The development of language in the child may be elucidated by applying to it the conceptual framework of developmental biology. Maturation may be characterized as a sequence of states. At each state, the growing organism is capable of accepting some specific input; this it breaks down and resynthesizes in such a way that it makes itself develop into a new state. This new state makes the organism sensitive to new and different types of input, whose acceptance transforms it to yet a further state, which opens the way to still different input, and so on. This is called epigenesis. It is the story of embryological development observable in the formation of the body, as well as in certain aspects of behavior. At various epigenetic states, the organism may be susceptible to more than one sort of input (it may be susceptible to two or more distinct kinds, or even to an infinite variety of inputs, as long as they are within determined limits), and the developmental history varies with the nature of the input accepted. In other words, the organism, during development, comes to crossroads; if condition A is present, it goes one way; if condition B is present, it goes another. We speak of states here, but this is, of course, an abstraction. Every stage of maturation is unstable. It is prone to change in specific directions, but requires a trigger from the environment. When language acquisition in the child is studied from the point of view of developmental biology, one makes an effort to describe developmental stages together with their tendencies for change and the conditions that bring about that change. I believe that the schema of physical maturation is applicable to the study of language development because children appear to be sensitive to successively different aspects of the language environment. The child first reacts only to intonation patterns.
With continued exposure to these patterns as they occur in a given language, mechanisms develop that allow him to process the patterns, and in most instances to reproduce them (although the latter is not a necessary condition for further development). This changes him so that he reaches a new state, a new potential for language development. Now he becomes aware of certain articulatory aspects, can process them and possibly also reproduce them, and so on. A similar sequence of acceptance, synthesis, and state of new acceptance can be demonstrated on the level of semantics and syntax. That the embryological concepts of differentiation, as well as of determination and regulation, are applicable to the brain processes associated with language development is best illustrated by the material discussed above under the headings "brain correlates" and "critical age for language acquisition." Furthermore, the correlation between language development and other maturational indices suggests that there are anatomical and physiological processes whose maturation sets the pace for both cognitive and language development; it is to these maturational processes that the concept differentiation refers. We often transfer the meaning of the word to the verbal behavior itself, which is not unreasonable, although, strictly speaking, it is the physical correlates only that differentiate.

166 citations


Journal ArticleDOI
01 Dec 1969-Language
TL;DR: The authors argued that a finite state device is adequate to describe the data of natural language, including English embedded relative clauses, proposed a solution to handle multiple-branching constructions such as are found in coordination, and described the intonation pattern of sentences containing strings of relative clauses.
Abstract: Fallacious reasoning has led transformationalists to conclude that natural language cannot be produced by a finite state device. An alternate argument is proposed, based on a distinction between two types of generative mechanisms: iteration and recursion to a depth of one. A device which employs these mechanisms is a finite state device; and this paper contends that such a device is adequate to describe the data of natural language, including English embedded relative clauses. The proposed solution also handles multiple-branching constructions such as are found in coordination, and describes the intonation pattern of sentences containing strings of relative clauses.
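The iteration-plus-depth-one-recursion claim can be sketched in code. The word lists, state names, and sentence pattern below are my own stand-ins, not taken from the paper: the point is only that a right-branching string of relative clauses ("the dog that chased the cat that saw the rat barked") can be recognized by looping through a fixed set of states, with no unbounded stack.

```python
# Illustrative finite-state recognizer: each added relative clause reuses the
# same few states (iteration) instead of requiring deeper recursion.

NOUNS = {"dog", "cat", "rat"}
TVERBS = {"chased", "saw", "bit"}     # transitive verbs (inside relatives)
IVERBS = {"slept", "barked"}          # intransitive verbs (end the sentence)

def accepts(tokens):
    """Finite-state recognition of: 'the' N ('that' TV 'the' N)* IV"""
    state = "DET"  # expecting "the"
    for tok in tokens:
        if state == "DET" and tok == "the":
            state = "NOUN"
        elif state == "NOUN" and tok in NOUNS:
            state = "NP_DONE"
        elif state == "NP_DONE" and tok == "that":
            state = "REL_VERB"
        elif state == "REL_VERB" and tok in TVERBS:
            state = "DET"              # loop: another relative clause may follow
        elif state == "NP_DONE" and tok in IVERBS:
            state = "ACCEPT"
        else:
            return False
    return state == "ACCEPT"
```

However many relative clauses are stacked, the recognizer cycles through the same four states, which is the kind of iterative mechanism the abstract contrasts with recursion of unbounded depth.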

102 citations


Book
01 Jan 1969
TL;DR: A model of natural language mediation, hypothesized to work sequentially through a stack of transformations until some one T succeeded in turning the CVC into a familiar word, argues for the abandonment of the traditional rote-learning orientation.
Abstract: “Natural language mediation” refers to the covert process by which an English-speaking S encodes an unfamiliar stimulus into English (that is, into a natural language mediator or NLM) and then decodes the NLM back into its original form. To externalize this normally covert process, Ss were required to write NLMs to CVC stimuli, then reconstruct the original stimuli from their written NLMs. From data gathered in this way, a model of natural language mediation was constructed: to produce an NLM, S was hypothesized to work sequentially through a stack of transformations (T stack) until some one T succeeded in turning the CVC into a familiar word. The T-stack model was able to predict response measures such as latency and probability of free associations. In further tests of validity, the model successfully predicted: (a) learning rate as well as the nature of intrusion errors in paired-associate learning, (b) CVC memorability in short-term memory tasks, and (c) conventional as well as set-manipulated pronounceability ratings. The model's success in explaining and predicting diverse behaviors in a variety of tasks argues for the abandonment of the traditional rote-learning orientation.
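A toy rendering of the T-stack idea follows. The transformation inventory and the "familiar word" list are invented for illustration; the paper's actual transformations and lexicon are not reproduced here.

```python
# Work sequentially through a stack of transformations until some T turns the
# CVC trigram into a familiar word, which then serves as the NLM.

FAMILIAR = {"gun", "can", "bad", "gold"}  # stand-in lexicon

def t_identity(cvc):
    # T1: the CVC may already be a word
    yield cvc

def t_swap_vowel(cvc):
    # T2: replace the middle letter with each vowel in turn
    for v in "aeiou":
        yield cvc[0] + v + cvc[2]

def t_append(cvc):
    # T3: append a letter to form a longer word
    for c in "abcdefghijklmnopqrstuvwxyz":
        yield cvc + c

T_STACK = [t_identity, t_swap_vowel, t_append]

def encode(cvc):
    """Work down the T stack; return (transformation, NLM) for the first
    candidate that is a familiar word, or None if every T fails."""
    cvc = cvc.lower()
    for t in T_STACK:
        for candidate in t(cvc):
            if candidate in FAMILIAR:
                return t.__name__, candidate
    return None  # no NLM found within the stack
```

Because the transformations are tried in a fixed order, a model of this shape can make the kinds of predictions the abstract describes, e.g. longer latencies for CVCs that are only reached by transformations deeper in the stack.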

61 citations


Proceedings ArticleDOI
01 Sep 1969
TL;DR: An operable automatic parser for natural language that identifies and disambiguates the concepts derivable from that input and places them into a network that explicates their inter-relations with respect to the unambiguous meaning of the input.
Abstract: This paper describes an operable automatic parser for natural language. The parser is not concerned with producing the syntactic structure of an input sentence. Instead, it is a conceptual parser, concerned with determining the underlying meaning of the input. Given a natural language input, the parser identifies and disambiguates the concepts derivable from that input and places them into a network that explicates their inter-relations with respect to the unambiguous meaning of the input.The parser utilizes a conceptually-oriented dependency grammar that has as its highest level the network which represents the underlying conceptual structure of a linguistic input. The parser also incorporates a language-free semantics that checks all possible conceptual dependencies with its own knowledge of the world.The parser is capable of learning new words and new constructions. It presently has a vocabulary of a few hundred words which enables it to operate in a psychiatric interviewing program without placing any restriction on the linguistic input.The theory behind the conceptual dependency is outlined in this paper and the parsing algorithm is explained in some detail.

53 citations


Journal ArticleDOI
TL;DR: In this paper some other classifications of grammars and languages are investigated, chosen in such a way as to characterize some aspects of the intuitive notion about complexity (of the description) of grammars and languages and their intrinsic structure.
Abstract: The basic definitions and notations of the theory of context-free grammars and languages (briefly grammars and languages) used in this paper are as in Ginsburg (1966). The classification of languages L according to the minimal number of variables in grammars for L was studied in Gruska (1967). In this paper some other classifications of grammars and languages are investigated. They are chosen in such a way as to characterize some aspects of our intuitive notion about complexity (of the description) of grammars and languages and their intrinsic structure. The classifications of languages are indicated by those of grammars. The intrinsic structure of a grammar G is characterized by the number and by the depth of the grammatical levels of G. A grammatical level Go of a grammar G is a maximal set of productions of G the left-side symbols of which are mutually dependent. The basic concepts of grammatical levels and classifications of grammars and languages are given in Sections 2 and 3. Only such classifications K are considered here, wherein for every grammar G (language L) K(G) (K(L)) is an integer. In this paper only nonnegative integers will be considered. A classification K is said to be connected in an alphabet Z if for every integer n there is a language L c Z* such that K(L) = n. Sections 4 to 6 provide the proofs that the classifications according to the number of variables, the number of productions, the number of grammatical levels, the number of non-elementary grammatical levels (that is, the grammatical levels with at least two variables) and the maximal depth of grammatical levels (that is, according to the maximal number of variables in grammatical levels) are connected in any alphabet with
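The definition of a grammatical level can be made concrete. The sketch below is my reading of the definition, not code from the paper: two variables are taken to be mutually dependent when each is reachable from the other through the production dependency relation, and the grammar encoding (a dict from variable to right sides, with uppercase letters as variables) is an assumption for illustration.

```python
# Grammatical levels as maximal sets of mutually dependent variables, computed
# from the transitive closure of the "occurs on a right side of" relation.

def levels(grammar):
    """Return the grammatical levels of `grammar` as a list of variable sets."""
    variables = set(grammar)
    # direct dependence: A depends on B if B occurs on a right side of A
    dep = {a: {s for rhs in rules for s in rhs if s in variables}
           for a, rules in grammar.items()}
    # transitive closure of the dependence relation
    reach = {a: set(dep[a]) for a in variables}
    changed = True
    while changed:
        changed = False
        for a in variables:
            new = set(reach[a])
            for b in reach[a]:
                new |= reach[b]
            if new != reach[a]:
                reach[a] = new
                changed = True
    # a level is a maximal set of mutually dependent variables
    comps, seen = [], set()
    for a in sorted(variables):
        if a in seen:
            continue
        comp = {b for b in variables
                if b == a or (b in reach[a] and a in reach[b])}
        comps.append(comp)
        seen |= comp
    return comps
```

For grammar = {"S": ["aSb", "A"], "A": ["aB"], "B": ["bA", "c"]}, this yields the levels {"A", "B"} and {"S"}; the level {"A", "B"} contains two variables, so it would count as non-elementary in the paper's sense.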

51 citations


Book
01 Jan 1969

46 citations


Proceedings ArticleDOI
05 May 1969
TL;DR: In this paper, a language is shown to be context-free if and only if there is a finite set of context-sensitive rules which parse this language; i.e., if and only if there is a collection of trees whose terminal strings are this language and a finite set of context-sensitive rules which analyze exactly these trees.
Abstract: The ability of context-sensitive grammars to generate non-context-free languages is well-known. However, phrase structure rules are often used in both natural and artificial languages, not to generate sentences, but rather to analyze or parse given putative sentences. Linguistic arguments have been advanced that this is the more fruitful use of context-sensitive rules for natural languages, and that, further, it is the purported phrase-structure tree which is presented and analyzed, rather than merely the terminal string itself. In this paper, a language is shown to be context-free if and only if there is a finite set of context-sensitive rules which parse this language; i.e., if and only if there is a collection of trees whose terminal strings are this language and a finite set of context-sensitive rules which analyze exactly these trees.

41 citations


Proceedings Article
07 May 1969
TL;DR: This paper describes an operable automatic parser for natural language, concerned with determining the underlying meaning of the input utilizing a network of concepts explicating the beliefs inherent in a piece of discourse.
Abstract: This paper describes an operable automatic parser for natural language. It is a conceptual parser, concerned with determining the underlying meaning of the input utilizing a network of concepts explicating the beliefs inherent in a piece of discourse.

41 citations


Journal ArticleDOI
TL;DR: The task confronting the linguist, or grammarian, has been posed in a number of ways, as discussed by the authors: the traditional grammarian attempted to establish correlations between the physical facts of the language and the elements of a metaphysical, logical scheme based on an analysis of the characteristics of human reason.
Abstract: The task confronting the linguist, or grammarian, has been posed in a number of ways. The traditional grammarian attempted to establish correlations between the physical facts of the language — the morphology — and the elements of a metaphysical, logical, scheme based on an analysis of the characteristics of human reason. The descriptive linguist attempts to describe language as a corpus, without recourse to a prioristic speculation or external theories. The generative grammarian has attempted to deal with natural language as though it were a particular instance of a mathematical language and can be generated in the manner of an arithmetic; and he increasingly tends to return to the development of metaphysical schemes that are independent of the physical characteristics of any particular language.

Book ChapterDOI
01 Jan 1969
TL;DR: Some hitherto poorly considered problems of linguistic theory are explored, and some concepts are stipulated that might be useful for further and more detailed investigations and that may shed new light on certain properties of lexical readings and lexical systems of natural languages in general.
Abstract: At first glance the problem of definitions in natural language seems to be a more or less marginal question. This is, however, far from being true. While discussing definitions we shall be forced to touch upon some fairly intricate and central problems of linguistic theory. Though even the formal properties of definitional sentences are far from being clear, the more difficult problems arise with respect to their semantic interpretation, their role in introducing new terms into a given language, and their relation to non-definitional generic sentences. Questions of this type may shed new light on certain properties of lexical readings and lexical systems of natural languages in general and on the relation between analytic and empirical generic sentences based on these properties in particular. We cannot answer these questions within the limits of the present article. We only intend to explore some hitherto poorly considered problems of linguistic theory and to stipulate some concepts that might be useful for further and more detailed investigations.

Journal ArticleDOI
TL;DR: This paper showed that matching 22 pairs of antonyms in five foreign languages would occur correctly beyond a chance level (α ≤ 0.05), independently of the particular language concerned.

Journal ArticleDOI
TL;DR: Twelve recently developed memory systems which appear to have a high potential utility for information storage and retrieval and for natural language processing are described and analyzed with the aid of the diagrams and tables.
Abstract: An attempt is made to define certain basic concepts involved in the representation of data and processes to be performed upon data within a high-speed, random-access computer memory. An important distinction is made between “data structures” and “storage structures,” and it is suggested that the programmer or problem-solver will find his effort amply repaid if he makes a thorough initial study of the data for his problem in terms of various alternative “data structures,” or patterns of mutual accessibility among elements of data, defined independently of any particular computer order code or programming language. An artificial problem is analyzed, and nine different solutions, involving different “data structures,” are described and compared to illustrate the application of the concepts and definitions provided. The formation of operands from particular data structures and the construction of new data structures during the course of a problem solution are then discussed. A method is described for representing data structures in diagrams and tables, and for representing storage structures in diagrams, in accordance with the concepts and definitions previously introduced. Twelve recently developed memory systems which appear to have a high potential utility for information storage and retrieval and for natural language processing are described and analyzed with the aid of the diagrams and tables. Some tentative conclusions are drawn concerning the comparative utility of the ten memory systems and programming languages for problems involving storage, manipulation, and retrieval of information from natural language text.


Journal ArticleDOI
TL;DR: In this paper, the analysis of transfer effects in second language learning should be molecular and have a systematic character that could potentially organize the knowledge language teachers have developed through their experience in the classroom.
Abstract: Extrapolation from laboratory experiments to real life learning situations is not a scientifically rigorous process. Nevertheless, it is possible to be systematic without at the same time being rigid about the manner in which additions to knowledge are made. The analysis of transfer effects in second language learning should be molecular and have a systematic character that could potentially organize the knowledge language teachers have developed through their experience in the classroom. The analysis presented in the paper outlines some specific transfer expectations under four different conditions of second language acquisition: coordinate and compound training for related and unrelated languages. Some unexpected predictions are generated; for example, with related languages a compound setting will yield more positive transfer (hence be more facilitative) than a coordinate setting. Similarities between two languages in terms of their surface features are more relevant to the operation of transfer effects than deep structure relations. A distinction must be made between structural factors based on contrastive analyses and non-structural factors pertaining to the learner's attitudes and the sociolinguistic context of the learning situation. Transfer effects operate at various levels of language functioning (e.g., mechanical skills, semantic sensitivity, communicative competence) and measures to assess these effects are suggested.

Journal ArticleDOI
TL;DR: In this article, a discussion of restricted and unlabeled codewords is presented. But this discussion is limited to two categories: restricted and obfuscated codeword definitions.
Abstract: (1969). A DISCUSSION OF RESTRICTED AND ELABORATED CODES. Educational Review: Vol. 22, No. 1, pp. 38-50.


Proceedings Article
07 May 1969
TL;DR: An overview of the present status and future plans of a research project aimed at communicating in natural language with an intelligent automaton, a computer-controlled mobile robot capable of autonomously acquiring information about its environment and performing tasks normally requiring human supervision.
Abstract: This paper gives an overview of the present status and future plans of a research project aimed at communicating in natural language with an intelligent automaton. The automaton in question is a computer-controlled mobile robot capable of autonomously acquiring information about its environment and performing tasks normally requiring human supervision. By natural language communication is meant the ability of a human to successfully engage the robot in a dialog using simple English declarative, interrogative, and imperative sentences. Communication is accomplished by means of a natural language interpretive question-answering system (ENGROB) consisting of six distinct components: a syntax analyser, a semantic interpreter, a model of the robot's environment, a deductive, automatic theorem proving system, an English output generator, and a repertoire of basic robot capabilities for sensing and manipulating the environment. An example is given that illustrates the type of processing done by each component, and the nature of component interactions.

Proceedings ArticleDOI
01 Sep 1969
TL;DR: An approach based on an operational theory of meaning is proposed as one that could lead to an adequate semantic theory for natural language; it is formulated in terms of a nondeterministic programming system which operates by goal-directed heuristic search and evaluation.
Abstract: The formalization of natural language semantics is a problem central to a number of major academic and practical concerns. A semantic theory requires a formalized representation of messages, arrangements of morphological units, and the processes of encoding and decoding that relate them. Formal logic has provided a paradigm for semantics based on the notions of model, extension, and intension; with certain changes and additions, this paradigm indicates what is needed for a theory of natural language semantics. Possible computational avenues of approach to developing a semantic theory include machine translation, data management and information retrieval, language and picture processing, psychological modeling, natural-language CAI, and natural-language programming. Several linguists have developed semantic descriptions based on transformational grammar; the earliest of these regarded semantic interpretation as being derived from syntactic deep structure, while the more recent have regarded deep structure itself as being semantically meaningful. Computational approaches to date have treated semantics as a problem of translating natural language into predicate-calculus formulas, relational structures, or statements in a formal procedural language; the most significant of these approaches are those of Thompson, Simmons et al., Woods, and Kellogg. Considered individually, none of these approaches has produced an adequate semantic theory for natural language, but all contribute something towards the formulation of an adequate approach. An approach based on an operational theory of meaning is proposed as one that could lead to an adequate semantic theory. The approach is formulated in terms of a nondeterministic programming system which operates by goal-directed heuristic search and evaluation.
Models are formulated in terms of hierarchical situation structures and operations on them, messages are programs in the system, and decoding and encoding are nondeterministic procedures programmed in the system. Possible implications of the approach for philosophy, linguistics, psychology, and computational applications are discussed.



Journal ArticleDOI
TL;DR: This article discusses linguistically oriented materials for teaching writing to ESL students; the linguistic method was originally developed primarily for the purpose of teaching oral English to students interested in learning to converse in the language.
Abstract: Linguistically-oriented materials for teaching writing are of relatively recent interest to ESL teachers. The linguistic method was originally developed primarily for the purpose of teaching oral English to students interested in learning to converse in the language. Those students interested in the written language have for the most part continued to learn it via the translation method, going through Theodore Dreiser or Joseph Conrad, dictionaries in hand. Until very recently, then, non-native students of English have learned to speak the language or to read it, but seldom to write it.

Journal ArticleDOI
17 Mar 1969-JAMA
TL;DR: An automated method for extracting kernels of information from the text of surgical operative reports is developed, which seems to possess five properties which are desirable for any proposed unit of information.
Abstract: Existing units of "information" fail to have any close correspondence with what we normally mean by the word information. We propose a new measure, the "kernel," which seems to possess five properties which are desirable for any proposed unit of information. We have developed an automated method for extracting kernels of information from the text of surgical operative reports. We can retrieve the information by asking questions in ordinary English. The first step is a dictionary look-up of the form class (similar to "parts of speech") and the seme number (identifying synonymous words) of each word in the operative report. A syntactic analysis and a transformational analysis are then performed to extract the kernels of information. To retrieve information, the questions are analyzed in the same way. The kernels thus obtained are matched to the kernels that have previously been catalogued. This retrieves the specific information that answers the question.
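The look-up-then-match pipeline can be sketched in miniature. Everything below is purely illustrative: the dictionary entries, seme numbers, and the reduction of a sentence to a tuple of seme numbers are invented stand-ins, and the real system's syntactic and transformational analysis is omitted.

```python
# Both report sentences and English questions are reduced to "kernels" via a
# dictionary of form classes and seme (synonym-class) numbers; retrieval is
# then a matter of matching question kernels against stored kernels.

# form class ~ part of speech; seme number identifies a synonym class
DICTIONARY = {
    "removed": ("verb", 10), "remove": ("verb", 10), "excised": ("verb", 10),
    "appendix": ("noun", 20), "surgeon": ("noun", 30), "operator": ("noun", 30),
    "kidney": ("noun", 40),
    "the": ("det", 0), "what": ("wh", 0), "did": ("aux", 0),
}

def to_kernel(tokens):
    """Reduce a token list to a sorted tuple of content-word seme numbers."""
    semes = []
    for tok in tokens:
        form_class, seme = DICTIONARY.get(tok, ("unknown", None))
        if form_class in ("noun", "verb"):
            semes.append(seme)
    return tuple(sorted(semes))

def answer(question_tokens, stored_kernels):
    """Retrieve every stored kernel whose semes cover the question's semes."""
    q = set(to_kernel(question_tokens))
    return [k for k in stored_kernels if q <= set(k)]
```

Because "removed" and "excised" share a seme number, a question phrased with "remove" matches a report that says "excised", which is the role the seme dictionary plays in the abstract.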

Proceedings Article
07 May 1969
TL;DR: A system for "remembering" a story or passage in English and for the subsequent retrieval of responses to questions, which functions as an information retrieval system and is intended as a model for long-term human memory.
Abstract: This paper describes a system for "remembering" a story or passage in English and for the subsequent retrieval of responses to questions. Although it functions as an information retrieval system, it is also intended as a model for long-term human memory. Information is stored in the form of predicates and property lists. In translating from natural language, the system carries out a transformational analysis of each sentence and identifies its deep structure interpretation. This grammatical analysis is essential to the identification of the predicates as well as relationships among them. The first two subsystems carry out the grammatical analysis. A third identifies the predicates and property lists, determines a time and priority structure, forms a logical map of relationships among predicates, and eliminates some predicates based on an assignment of priority. The fourth subsystem is used to answer inquiries about the stored information. The method is illustrated with references to one story that has been processed and with examples of questions that might be asked.



Proceedings ArticleDOI
01 Sep 1969
TL;DR: The paper describes parts of correlational syntax and shows how a highly differentiated syntax can be used to establish word classes for which an intensional semantic definition can then be found.
Abstract: Traditional grammars classify words according to generic syntactic functions or morphological characteristics. For teaching humans and for descriptive linguistics this seemed sufficient. The advent of computers has changed the situation. Since machines are devoid of experiential knowledge, they need a more explicit grammar to handle natural language. Correlational Grammar is an attempt in that direction. The paper describes parts of correlational syntax and shows how a highly differentiated syntax can be used to establish word classes for which an intensional semantic definition can then be found. It exemplifies this approach in two areas of grammar: predicative adjectives and transitive verbs. The classification serves to eliminate ambiguity and spurious computer interpretations of natural language sentences.


06 Jan 1969
TL;DR: Problems involved in the description of the algorithmic language and its translation into other languages are described and the several methods for defining syntactic concepts proposed in the literature are reviewed.
Abstract: This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms 'letter, word, alphabet' are defined and described. The concept of the algorithm is defined and the relation between the algorithm and the alphabet is explained. Normal algorithms and the process of normalization are described. The relation between the algorithm, as a computational process, and the algorithmic language, which is a means of coding the meaning of the process, is explained. Problems involved in the description of the algorithmic language and its translation into other languages (machine language, natural language, meta-language) are then described and the several methods for defining syntactic concepts proposed in the literature are reviewed.