scispace - formally typeset

Antecedent (grammar)

About: Antecedent (grammar) is a research topic. Over the lifetime, 1392 publications have been published within this topic receiving 41824 citations.


Papers
Proceedings ArticleDOI
05 Aug 1996
TL;DR: The paper discusses how syntactic constraints derived from Binding Theory can be incorporated adequately into an anaphor resolution algorithm, and, by showing that pragmatic inferences may be necessary, elucidates the limits of syntactic restrictions.
Abstract: An anaphor resolution algorithm is presented which relies on a combination of strategies for narrowing down and selecting from antecedent sets for reflexive pronouns, nonreflexive pronouns, and common nouns. The work focuses on syntactic restrictions which are derived from Chomsky's Binding Theory. It is discussed how these constraints can be incorporated adequately in an anaphor resolution algorithm. Moreover, by showing that pragmatic inferences may be necessary, the limits of syntactic restrictions are elucidated.
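The strategy of narrowing down antecedent sets with syntactic restrictions can be sketched as a filtering step. The function and field names below are illustrative assumptions, not the paper's actual algorithm; it applies simplified versions of Binding Principles A and B plus morphological agreement.

```python
# Hypothetical sketch of constraint-based antecedent filtering in the
# spirit of Binding Theory. The data structures and constraint checks
# are assumptions for illustration, not the published algorithm.

def filter_candidates(anaphor_type, candidates):
    """Narrow an antecedent set using simplified binding constraints.

    anaphor_type: 'reflexive', 'pronoun', or 'common_noun'
    candidates: list of dicts with 'name', 'local' (in the same
    binding domain as the anaphor), and 'agrees' (number/gender
    agreement with the anaphor).
    """
    filtered = []
    for c in candidates:
        if not c["agrees"]:
            continue  # agreement is required for every anaphor type
        if anaphor_type == "reflexive" and not c["local"]:
            continue  # Principle A: reflexives must be bound locally
        if anaphor_type == "pronoun" and c["local"]:
            continue  # Principle B: pronouns must be locally free
        filtered.append(c)
    return filtered

cands = [
    {"name": "John", "local": True, "agrees": True},
    {"name": "Bill", "local": False, "agrees": True},
    {"name": "Mary", "local": False, "agrees": False},
]
print([c["name"] for c in filter_candidates("reflexive", cands)])  # ['John']
print([c["name"] for c in filter_candidates("pronoun", cands)])    # ['Bill']
```

A full resolver would then rank the survivors with further strategies (salience, recency, pragmatic inference), which is where the paper locates the limits of purely syntactic restrictions.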

8 citations

Book ChapterDOI
06 Jul 2010
TL;DR: This paper considers English anaphora in categorial grammar, including reference to the binding principles; it invokes displacement calculus, modal categorial calculus, and categorial calculus with limited contraction, and entertains the addition of negation as failure.
Abstract: In type logical categorial grammar the analysis of an expression is a resource-conscious proof. Anaphora represents a particular challenge to this approach in that the antecedent resource is multiplied in the semantics. This duplication, which corresponds logically to the structural rule of contraction, may be treated lexically or syntactically. Furthermore, anaphora is subject to constraints, which Chomsky (1981)[1] formulated as Binding Principles A, B, and C. In this paper we consider English anaphora in categorial grammar including reference to the binding principles. We invoke displacement calculus, modal categorial calculus, categorial calculus with limited contraction, and entertain the addition of negation as failure.

8 citations

Journal ArticleDOI
03 Sep 2013-PLOS ONE
TL;DR: It is argued that mixed readings were due to manifold, interlocking and conflicting perspectives taken by the participants, and cases of multiple occurrences of ziji taking distinct antecedents are illicit in Chinese syntax, since the speaker can select only one P(erspective)-Center that referentially denotes the psychological perspective in which the sentence is situated.
Abstract: Theoretical linguists claim that the notorious reflexive ziji ‘self’ in Mandarin Chinese, if occurring more than once in a single sentence, can take distinct antecedents. This study tackles possibly the most interesting puzzle in the linguistic literature, investigating how two occurrences of ziji in a single sentence are interpreted and whether or not there are mixed readings, i.e., these zijis are interpretively bound by distinct antecedents. Using 15 Chinese sentences each having two zijis, we conducted two sentence reading experiments based on a modified self-paced reading paradigm. The general interpretation patterns observed showed that the majority of participants associated both zijis with the same local antecedent, which was consistent with Principle A of the Standard Binding Theory and previous experimental findings involving a single ziji. In addition, mixed readings also occurred, but did not pattern as claimed in the theoretical linguistic literature (i.e., one ziji is bound by a long-distance antecedent and the other by a local antecedent). Based on these results, we argue that: (i) mixed readings were due to manifold, interlocking and conflicting perspectives taken by the participants; and (ii) cases of multiple occurrences of ziji taking distinct antecedents are illicit in Chinese syntax, since the speaker, when expressing a sentence, can select only one P(erspective)-Center that referentially denotes the psychological perspective in which the sentence is situated.

8 citations

Journal ArticleDOI
TL;DR: This article provides a unified semantics for the Classical Greek particle ἄν in its uses both in and outside of conditional sentences, arguing that it is a universal quantifier over situations.
Abstract: In this paper, we provide a unified semantics for the Classical Greek particle ἄν in its uses both in and outside of conditional sentences. Specifically, working within the framework provided by formal semantic treatments of conditionals in Stalnaker (1968); Lewis (1973); Kratzer (1981) and subsequent work, we propose that ἄν is a universal quantifier over situations (parts of possible worlds). We also detail the interactions between ἄν and the tense and mood features in a clause, arguing, for example, that the semantics of ἄν in combination with a 'fake' past tense morphology (Iatridou 2000), which reflects the presence of an exclusion feature in C, gives rise to a counterfactual implicature. Additionally, we address the issue of the surface distribution of ἄν in the antecedents of some types of conditionals and the consequents of others and argue that, despite its surface distribution, ἄν is always merged into the consequent of a conditional but sometimes undergoes displacement such that it appears to be located within the antecedent. Our proposal not only illuminates a complex phenomenon in Classical Greek, but also contributes to the understanding of the morpho-semantics of mood, conditionals, and counterfactuality in natural language.
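The core claim, that ἄν universally quantifies over situations, can be glossed schematically as follows. This is an illustrative simplification in the Stalnaker/Lewis/Kratzer tradition the authors build on, not their exact formulation; the accessibility relation Acc stands in for a contextually supplied modal base.

```latex
\[
\llbracket \text{ἄν}\ \varphi \rrbracket^{s} = 1
\;\iff\;
\forall s' \big[\, \mathrm{Acc}(s, s') \rightarrow \llbracket \varphi \rrbracket^{s'} = 1 \,\big]
\]
```

Here $s$ and $s'$ range over situations, i.e., parts of possible worlds, so a conditional's consequent containing ἄν is evaluated across all accessible situations, which is what lets the particle interact with tense and mood features to yield, e.g., counterfactual implicatures.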

8 citations

Posted Content
TL;DR: A mention-ranking model that learns how abstract anaphors relate to their antecedents with an LSTM-Siamese Net is proposed; the model outperforms state-of-the-art results on shell noun resolution and reports first benchmark results on an abstract anaphora subset of the ARRAU corpus.
Abstract: Resolving abstract anaphora is an important, but difficult task for text understanding. Yet, with recent advances in representation learning this task becomes a more tangible aim. A central property of abstract anaphora is that it establishes a relation between the anaphor embedded in the anaphoric sentence and its (typically non-nominal) antecedent. We propose a mention-ranking model that learns how abstract anaphors relate to their antecedents with an LSTM-Siamese Net. We overcome the lack of training data by generating artificial anaphoric sentence--antecedent pairs. Our model outperforms state-of-the-art results on shell noun resolution. We also report first benchmark results on an abstract anaphora subset of the ARRAU corpus. This corpus presents a greater challenge due to a mixture of nominal and pronominal anaphors and a greater range of confounders. We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors. Our model selects syntactically plausible candidates and -- if disregarding syntax -- discriminates candidates using deeper features.
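The mention-ranking idea can be sketched in miniature: a single shared ("Siamese") encoder embeds both the anaphoric sentence and each candidate antecedent, and candidates are ranked by similarity. The paper's model uses a trained LSTM encoder; the letter-frequency encoder and cosine scoring below are toy stand-ins for illustration only.

```python
import math

# Toy mention-ranking sketch. The shared encoder here is a 26-dim
# letter-frequency vector, a deliberate stand-in assumption for the
# paper's trained LSTM-Siamese encoder.

def encode(text):
    """Shared encoder applied to anaphor context and candidates alike."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_antecedents(anaphoric_sentence, candidates):
    """Return candidate antecedents sorted best-first by encoder similarity."""
    a = encode(anaphoric_sentence)
    return sorted(candidates, key=lambda c: cosine(a, encode(c)), reverse=True)

ranked = rank_antecedents("the decision", ["the decision", "xyz"])
print(ranked[0])  # prints: the decision
```

In the real model, the ranking score is learned end-to-end from artificially generated anaphoric sentence/antecedent pairs, which is what lets it cover typically non-nominal antecedents without gold training data.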

8 citations


Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    59
2020    52
2019    57
2018    63
2017    62