Geoffrey K. Pullum
Bio: Geoffrey K. Pullum is an academic researcher at the University of Edinburgh, working primarily on generative grammar and syntax. He has an h-index of 44 and has co-authored 157 publications, which have received 11,551 citations. His previous affiliations include University College London and Hewlett-Packard.
15 Apr 2002
TL;DR: A chapter-by-chapter outline of this comprehensive grammar by Huddleston, Pullum, and collaborators, covering topics including relative clauses and unbounded dependencies, non-finite and verbless clauses, content clauses and reported speech, and adjectives and adverbs.
Abstract (table of contents):
1. Preliminaries (Geoffrey K. Pullum and Rodney Huddleston)
2. Syntactic overview (Rodney Huddleston)
3. The verb (Rodney Huddleston)
4. The clause, I: mainly complements (Rodney Huddleston)
5. Nouns and noun phrases (John Payne and Rodney Huddleston)
6. Adjectives and adverbs (Geoffrey K. Pullum and Rodney Huddleston)
7. Prepositions and preposition phrases (Geoffrey K. Pullum and Rodney Huddleston)
8. The clause, II: mainly adjuncts (Anita Mittwoch, Rodney Huddleston and Peter Collins)
9. Negation (Geoffrey K. Pullum and Rodney Huddleston)
10. Clause type and illocutionary force (Rodney Huddleston)
11. Content clauses and reported speech (Rodney Huddleston)
12. Relative clauses and unbounded dependencies (Rodney Huddleston, Geoffrey K. Pullum and Peter G. Peterson)
13. Comparative constructions (Rodney Huddleston)
14. Non-finite and verbless clauses (Rodney Huddleston)
15. Coordination and supplementation (Rodney Huddleston, John Payne and Peter G. Peterson)
16. Information packaging (Gregory Ward, Betty Birner and Rodney Huddleston)
17. Deixis and anaphora (Lesley Stirling and Rodney Huddleston)
18. Inflectional morphology and related matters (F. R. Palmer, Rodney Huddleston and Geoffrey K. Pullum)
19. Lexical word-formation (Laurie Bauer and Rodney Huddleston)
20. Punctuation (Geoffrey Nunberg, Ted Briscoe and Rodney Huddleston)
Further reading. Index.
01 Jan 1985
TL;DR: "Generalized Phrase Structure Grammar" provides the definitive exposition of the theory of grammar originally proposed by Gerald Gazdar and developed during half a dozen years' work with his colleagues Ewan Klein, Geoffrey Pullum, and Ivan Sag.
Abstract: "Generalized Phrase Structure Grammar" provides the definitive exposition of the theory of grammar originally proposed by Gerald Gazdar and developed during half a dozen years' work with his colleagues Ewan Klein, Geoffrey Pullum, and Ivan Sag. This long-awaited book contains both detailed specifications of the theory and extensive illustrations of its power to describe large parts of English grammar. Experts who wish to evaluate the theory and students learning GPSG for the first time will find this book an invaluable guide. The initial chapters lay out the theoretical machinery of GPSG in a readily intelligible way. Combining informal discussion with precise formalization, the authors describe all major aspects of their grammatical system, including a complete theory of syntactic features, phrase structure rules, metarules, and feature instantiation principles. The book then shows just what a GPSG analysis of English syntax can accomplish. Topics include the internal structure of phrases, unbounded dependency constructions of many varieties, and coordinate conjunction, a construction long considered the sticking point for phrase structure approaches to syntax. The book concludes with a well-developed proposal for a model-theoretic semantic system to go along with GPSG syntax. Throughout, the authors maintain the highest standards of explicitness and rigor in developing and assessing their grammatical system. Their aim is to provide the best possible test of the hypothesis that syntactic description can be accomplished in a single-level system. And more generally, it is their intention to formulate a grammatical framework in which linguistic universals follow directly from the form of the system and therefore require no explicit statement. Their book sets new methodological standards for work in generative grammar while presenting a grammatical system of extraordinary scope.
01 Jan 2006
TL;DR: This book presents a new and comprehensive descriptive grammar of English, written by the principal authors in collaboration with an international research team of a dozen linguists in five countries. It rests on a sounder and more consistent descriptive framework than previous large-scale grammars and includes far more explanation of grammatical terms and concepts.
Abstract: This book presents a new and comprehensive descriptive grammar of English, written by the principal authors in collaboration with an international research team of a dozen linguists in five countries. It represents a major advance over previous grammars by virtue of drawing systematically on the linguistic research carried out on English during the last forty years. It incorporates insights from the theoretical literature but presents them in a way that is accessible to readers without formal training in linguistics. It is based on a sounder and more consistent descriptive framework than previous large-scale grammars, and includes much more explanation of grammatical terms and concepts, together with justification for the ways in which the analysis differs from traditional grammar. The book contains twenty chapters and a guide to further reading. Its usefulness is enhanced by diagrams of sentence structure, cross-references between sections, a comprehensive index, and user-friendly design and typography throughout.
TL;DR: This article examines a type of argument for linguistic nativism of the following form: (i) a fact about some natural language is exhibited that allegedly could not be learned from experience without access to a certain kind of (positive) data; (ii) it is claimed that data of that type are not found in normal linguistic experience; hence (iii) it is concluded that people cannot be learning the language from mere exposure to language use.
Abstract: This article examines a type of argument for linguistic nativism that takes the following form: (i) a fact about some natural language is exhibited that allegedly could not be learned from experience without access to a certain kind of (positive) data; (ii) it is claimed that data of the type in question are not found in normal linguistic experience; hence (iii) it is concluded that people cannot be learning the language from mere exposure to language use. We analyze the components of this sort of argument carefully, and examine four exemplars, none of which hold up. We conclude that linguists have some additional work to do if they wish to sustain their claims about having provided support for linguistic nativism, and we offer some reasons for thinking that the relevant kind of future work on this issue is likely to further undermine the linguistic nativist position.
01 Jan 1999
TL;DR: In Sorting Things Out, Bowker and Star explore the role of categories and standards in shaping the modern world, examining how categories are made and kept invisible, and how people can change this invisibility when necessary.
Abstract: What do a seventeenth-century mortality table (whose causes of death include "fainted in a bath," "frighted," and "itch"); the identification of South Africans during apartheid as European, Asian, colored, or black; and the separation of machine- from hand-washables have in common? All are examples of classification -- the scaffolding of information infrastructures. In Sorting Things Out, Geoffrey C. Bowker and Susan Leigh Star explore the role of categories and standards in shaping the modern world. In a clear and lively style, they investigate a variety of classification systems, including the International Classification of Diseases, the Nursing Interventions Classification, race classification under apartheid in South Africa, and the classification of viruses and of tuberculosis. The authors emphasize the role of invisibility in the process by which classification orders human interaction. They examine how categories are made and kept invisible, and how people can change this invisibility when necessary. They also explore systems of classification as part of the built information environment. Much as an urban historian would review highway permits and zoning decisions to tell a city's story, the authors review archives of classification design to understand how decisions have been made. Sorting Things Out has a moral agenda, for each standard and category valorizes some point of view and silences another. Standards and classifications produce advantage or suffering. Jobs are made and lost; some regions benefit at the expense of others. How these choices are made and how we think about that process are at the moral and political core of this work. The book is an important empirical source for understanding the building of information infrastructures.
01 Jan 1994
TL;DR: This book presents the most complete exposition of the theory of head-driven phrase structure grammar (HPSG), introduced in the authors' "Information-Based Syntax and Semantics," and demonstrates the applicability of the HPSG approach to a wide range of empirical problems.
Abstract: This book presents the most complete exposition of the theory of head-driven phrase structure grammar (HPSG), introduced in the authors' "Information-Based Syntax and Semantics." HPSG provides an integration of key ideas from the various disciplines of cognitive science, drawing on results from diverse approaches to syntactic theory, situation semantics, data type theory, and knowledge representation. The result is a conception of grammar as a set of declarative and order-independent constraints, a conception well suited to modelling human language processing. This self-contained volume demonstrates the applicability of the HPSG approach to a wide range of empirical problems, including a number which have occupied center-stage within syntactic theory for well over twenty years: the control of "understood" subjects, long-distance dependencies conventionally treated in terms of "wh"-movement, and syntactic constraints on the relationship between various kinds of pronouns and their antecedents. The authors make clear how their approach compares with and improves upon approaches undertaken in other frameworks, including in particular the government-binding theory of Noam Chomsky.
12 Jun 2009
TL;DR: This book offers a highly accessible introduction to natural language processing, the field that supports a variety of language technologies, from predictive text and email filtering to automatic summarization and translation.
Abstract: This book offers a highly accessible introduction to natural language processing, the field that supports a variety of language technologies, from predictive text and email filtering to automatic summarization and translation. With it, you'll learn how to write Python programs that work with large collections of unstructured text. You'll access richly annotated datasets using a comprehensive range of linguistic data structures, and you'll understand the main algorithms for analyzing the content and structure of written communication. Packed with examples and exercises, Natural Language Processing with Python will help you:
- Extract information from unstructured text, either to guess the topic or identify "named entities"
- Analyze linguistic structure in text, including parsing and semantic analysis
- Access popular linguistic databases, including WordNet and treebanks
- Integrate techniques drawn from fields as diverse as linguistics and artificial intelligence
This book will help you gain practical skills in natural language processing using the Python programming language and the Natural Language Toolkit (NLTK) open source library. If you're interested in developing web applications, analyzing multilingual news sources, or documenting endangered languages -- or if you're simply curious to have a programmer's perspective on how human language works -- you'll find Natural Language Processing with Python both fascinating and immensely useful.
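The flavor of the text-processing work the book teaches can be sketched in a few lines of plain Python. This is a hypothetical illustration using only the standard library, not code from the book (which works through NLTK's own tokenizers and corpus readers):

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on runs of letters: a crude stand-in for the
    # tokenizers a toolkit like NLTK provides.
    return re.findall(r"[a-z]+", text.lower())

# A toy "corpus" standing in for a large collection of unstructured text.
text = (
    "Language is a system. A system of signs. "
    "Signs stand for meanings, and meanings drive language."
)

tokens = tokenize(text)
freq = Counter(tokens)  # word-frequency distribution over the corpus

# Inspect the most common word forms.
print(freq.most_common(3))
```

From counts like these one can move on to the book's larger topics: tagging, parsing, and querying lexical databases such as WordNet.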
23 Oct 1995
TL;DR: It is argued that lexical decomposition is possible if it is performed generatively. A theory of lexical inheritance is also outlined, which provides the necessary principles of global organization for the lexicon, enabling the natural language lexicon to be integrated into a conceptual whole.
Abstract: In this paper, I will discuss four major topics relating to current research in lexical semantics: methodology, descriptive coverage, adequacy of the representation, and the computational usefulness of representations. In addressing these issues, I will discuss what I think are some of the central problems facing the lexical semantics community, and suggest ways of best approaching these issues. Then, I will provide a method for the decomposition of lexical categories and outline a theory of lexical semantics embodying a notion of cocompositionality and type coercion, as well as several levels of semantic description, where the semantic load is spread more evenly throughout the lexicon. I argue that lexical decomposition is possible if it is performed generatively. Rather than assuming a fixed set of primitives, I will assume a fixed number of generative devices that can be seen as constructing semantic expressions. I develop a theory of Qualia Structure, a representation language for lexical items, which renders much lexical ambiguity in the lexicon unnecessary, while still explaining the systematic polysemy that words carry. Finally, I discuss how individual lexical structures can be integrated into the larger lexical knowledge base through a theory of lexical inheritance. This provides us with the necessary principles of global organization for the lexicon, enabling us to fully integrate our natural language lexicon into a conceptual whole.