Author

Diogo Almeida

Bio: Diogo Almeida is an academic researcher from the Royal Institute of Technology. The author has contributed to research in the topics Electron transfer and Fragmentation (mass spectrometry). The author has an h-index of 22 and has co-authored 69 publications receiving 1,502 citations. Previous affiliations of Diogo Almeida include the University of Maryland, College Park and New York University Abu Dhabi.


Papers
Journal ArticleDOI
01 Sep 2013-Lingua
TL;DR: The authors compared the performance of informal and formal judgment collection methods, reported a convergence rate of 95% (with a margin of error of 5.3-5.8%) between the two methods, and discussed the implications of this convergence rate for future research into syntactic methodology.

159 citations
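The 95% convergence figure reported above is an estimate from a finite sample of phenomena, so it carries a margin of error. As a rough illustration only (not the authors' actual procedure, and with invented counts), here is a minimal Python sketch of a normal-approximation margin of error for such a proportion:

```python
import math

def convergence_margin(n_agree: int, n_total: int, z: float = 1.96):
    """Estimate the convergence rate between two judgment-collection
    methods and a normal-approximation (Wald) margin of error.

    n_agree: number of phenomena on which the two methods agree
    n_total: total number of phenomena compared
    z:       critical value (1.96 corresponds to ~95% confidence)
    """
    p = n_agree / n_total                        # estimated convergence rate
    margin = z * math.sqrt(p * (1 - p) / n_total)
    return p, margin

# Hypothetical counts, chosen only to illustrate the calculation.
rate, moe = convergence_margin(n_agree=285, n_total=300)
print(f"convergence = {rate:.1%} +/- {moe:.1%}")
```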

Journal ArticleDOI
TL;DR: The authors empirically assess long-standing criticisms of the reliability of acceptability judgment data in syntax by formally testing all 469 (unique, US-English) data points at issue.
Abstract: There has been a consistent pattern of criticism of the reliability of acceptability judgment data in syntax for at least 50 years (e.g., Hill 1961), culminating in several high-profile criticisms within the past ten years (Edelman & Christiansen 2003, Ferreira 2005, Wasow & Arnold 2005, Gibson & Fedorenko 2010, in press). The fundamental claim of these critics is that traditional acceptability judgment collection methods, which tend to be relatively informal compared to methods from experimental psychology, lead to an intolerably high number of false positive results. In this paper we empirically assess this claim by formally testing all 469 (unique, US-English)

127 citations
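For a concrete sense of what "formally testing" a single data point can involve, here is a minimal sketch of a paired acceptability comparison analyzed with a Wilcoxon signed-rank test. The Likert ratings are invented, and this particular design is only one option from the experimental-syntax toolkit, not necessarily the one used in the paper.

```python
from scipy.stats import wilcoxon

# Hypothetical 1-7 Likert ratings from 12 participants for a sentence pair:
# one sentence reported as acceptable in the literature, one as unacceptable.
acceptable   = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6, 7, 5]
unacceptable = [3, 2, 4, 2, 3, 3, 2, 1, 3, 4, 2, 3]

# Paired Wilcoxon signed-rank test: does the rating difference go in the
# direction the informal judgment predicted?
stat, p = wilcoxon(acceptable, unacceptable, alternative="greater")
print(f"W = {stat}, one-tailed p = {p:.4f}")
```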

Journal ArticleDOI
TL;DR: The results suggest that the N400 effect does not reflect semantic integration difficulty, and are consistent with an account in which N400 reduction reflects facilitated access of lexical information.

126 citations

Journal ArticleDOI
Abstract: This study has been partially supported by the following research projects and institutions: Ministerio de Educación y Ciencia Plan Nacional de Física, Project No. FIS2006-00702; Consejo de Seguridad Nuclear (CSN); European Science Foundation COST Action CM0601 and EIPAM Project; and Acciones Integradas Hispano-Portuguesas, Project No. HP2006-0042.

74 citations

Journal ArticleDOI
TL;DR: In this paper, an approach is described for incorporating into radiation damage models the effect of low- and intermediate-energy (0-100 eV) electrons and positrons slowing down in biologically relevant materials (water and representative biomolecules).
Abstract: This colloquium describes an approach to incorporate into radiation damage models the effect of low and intermediate energy (0-100 eV) electrons and positrons, slowing down in biologically relevant materials (water and representative biomolecules). The core of the modelling procedure is a C++ computing programme named "Low Energy Particle Track Simulation (LEPTS)", which is compatible with available general purpose Monte Carlo packages. Input parameters are carefully selected from theoretical and experimental cross section data and energy loss distribution functions. Data sources used for this purpose are reviewed, showing examples of electron and positron cross section and energy loss data for interactions with different media of increasing complexity: atoms, molecules, clusters and condensed matter. Finally, we show how such a model can be used to develop an effective dosimetric tool at the molecular level (i.e. nanodosimetry). Recent experimental developments to study the fragmentation induced in biological material by charge transfer from neutrals and negative ions are also included.

73 citations
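To make the track-simulation idea in the abstract above concrete, here is a minimal, self-contained sketch of the kind of sampling loop an event-by-event code performs: the free path is sampled from the total cross section, an interaction channel is chosen in proportion to its partial cross section, and the corresponding energy loss is deducted. The cross-section values and energy losses below are invented placeholders, not data from the paper, and the real LEPTS code models far more physics than this.

```python
import random

# Hypothetical partial cross sections (arbitrary units) and mean energy
# losses (eV) at a single electron energy; real codes tabulate these
# against energy from experimental and theoretical data.
CHANNELS = {
    "elastic":    {"sigma": 5.0, "loss": 0.0},
    "excitation": {"sigma": 1.5, "loss": 8.0},
    "ionisation": {"sigma": 2.0, "loss": 20.0},
}
NUMBER_DENSITY = 3.3e-2  # scatterers per unit volume (arbitrary units)

def simulate_track(energy_ev: float, cutoff_ev: float = 10.0) -> float:
    """Follow one electron until it drops below the tracking cutoff.
    Returns the total path length travelled (arbitrary length units)."""
    path = 0.0
    while energy_ev > cutoff_ev:
        sigma_total = sum(c["sigma"] for c in CHANNELS.values())
        mean_free_path = 1.0 / (NUMBER_DENSITY * sigma_total)
        # Exponentially distributed distance to the next collision.
        path += random.expovariate(1.0 / mean_free_path)
        # Pick the interaction channel with probability sigma_i / sigma_total.
        channel = random.choices(
            list(CHANNELS), weights=[c["sigma"] for c in CHANNELS.values()])[0]
        energy_ev -= CHANNELS[channel]["loss"]
    return path

print(f"track length: {simulate_track(100.0):.2f}")
```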


Cited by
Journal ArticleDOI
TL;DR: The review emphasizes the effectiveness of the N400 as a dependent variable for examining almost every aspect of language processing, and highlights its expanding use to probe semantic memory and to determine how the neurocognitive system dynamically and flexibly uses bottom-up and top-down information to make sense of the world.
Abstract: We review the discovery, characterization, and evolving use of the N400, an event-related brain potential response linked to meaning processing. We describe the elicitation of N400s by an impressive range of stimulus types—including written, spoken, and signed words or pseudowords; drawings, photos, and videos of faces, objects, and actions; sounds; and mathematical symbols—and outline the sensitivity of N400 amplitude (as its latency is remarkably constant) to linguistic and nonlinguistic manipulations. We emphasize the effectiveness of the N400 as a dependent variable for examining almost every aspect of language processing and highlight its expanding use to probe semantic memory and to determine how the neurocognitive system dynamically and flexibly uses bottom-up and top-down information to make sense of the world. We conclude with different theories of the N400’s functional significance and offer an N400-inspired reconceptualization of how meaning processing might unfold.

3,164 citations
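As a rough illustration of how an N400 effect of the kind reviewed above is typically quantified, the sketch below averages single-trial epochs for two conditions and takes the mean amplitude difference in a 300-500 ms window at one channel. The synthetic data, window, and channel choice are assumptions made for the example, not analysis details taken from the review.

```python
import numpy as np

SFREQ = 500                               # sampling rate (Hz)
TIMES = np.arange(-0.2, 0.8, 1 / SFREQ)   # epoch from -200 to 800 ms

def n400_mean_amplitude(epochs: np.ndarray, window=(0.3, 0.5)) -> float:
    """Mean amplitude (microvolts) in the N400 window, averaged over
    trials, for one channel's epoched data (trials x samples)."""
    erp = epochs.mean(axis=0)                       # average over trials
    mask = (TIMES >= window[0]) & (TIMES <= window[1])
    return erp[mask].mean()

rng = np.random.default_rng(0)
n_trials, n_samples = 40, TIMES.size
# Synthetic data: the "unexpected word" condition gets an extra negative
# deflection around 400 ms; Gaussian noise stands in for real EEG.
deflection = -4.0 * np.exp(-((TIMES - 0.4) ** 2) / (2 * 0.05 ** 2))
expected   = rng.normal(0, 2, (n_trials, n_samples))
unexpected = rng.normal(0, 2, (n_trials, n_samples)) + deflection

effect = n400_mean_amplitude(unexpected) - n400_mean_amplitude(expected)
print(f"N400 effect (unexpected - expected): {effect:.2f} microvolts")
```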

Journal ArticleDOI
TL;DR: In about 350 pages, the author guides the reader from descriptive and basic statistical methods, through classification and clustering, to (generalised) linear and mixed models, enabling researchers and students alike to reproduce the analyses and learn by doing.
Abstract: The complete title of this book runs ‘Analyzing Linguistic Data: A Practical Introduction to Statistics using R’ and as such it very well reflects the purpose and spirit of the book. The author guides the reader in about 350 pages from descriptive and basic statistical methods through classification and clustering to (generalised) linear and mixed models. Each of the methods is introduced in the context of concrete linguistic problems and demonstrated on exciting datasets from current research in the language sciences. In line with its practical orientation, the book focuses primarily on using the methods and interpreting the results. This implies that the mathematical treatment of the techniques is kept to a minimum, if not absent from the book. In return, the reader is provided with very detailed explanations on how to conduct the analyses using R [1]. The first chapter sets the tone, being a 20-page introduction to R. For this and all subsequent chapters, the R code is intertwined with the chapter text, and the datasets and functions used are conveniently packaged in the languageR package that is available on the Comprehensive R Archive Network (CRAN). With this approach, the author has done an excellent job in enabling researchers and students alike to reproduce the analyses and learn by doing. Another quality as a textbook is the fact that every chapter ends with Workbook sections where the user is invited to exercise his or her analysis skills on supplemental datasets. Full solutions including code, results and comments are given in Appendix A (30 pages). Instructors are therefore very well served by this text, although they might want to balance the book with some more mathematical treatment depending on the target audience. After the introductory chapter on R, the book opens on graphical data exploration. Chapter 3 treats probability distributions and common sampling distributions. Under basic statistical methods (Chapter 4), distribution tests and tests on means and variances are covered. Chapter 5 deals with clustering and classification. Strangely enough, the clustering section has material on PCA, factor analysis, correspondence analysis and includes only one subsection on clustering, devoted notably to hierarchical partitioning methods. The classification part deals with decision trees, discriminant analysis and support vector machines. The regression chapter (Chapter 6) treats linear models, generalised linear models, piecewise linear models and a substantial section on models for lexical richness. The final chapter on mixed models is particularly interesting as it is one of the few textbook accounts that introduce the reader to using the (innovative) lme4 package of Douglas Bates, which implements linear mixed-effects models. Moreover, the case studies included in this

1,679 citations
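Since the review above centres on fitting (generalised) linear and mixed models to linguistic data, here is a minimal sketch of an analogous mixed-effects fit using Python's statsmodels rather than the book's R/lme4 workflow; the variable names and data are invented for illustration and do not come from the book.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented reaction-time data: 20 subjects x 30 items with a two-level
# word-frequency condition, standing in for the kind of dataset the book uses.
rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(20), 30)
frequency = np.tile(np.array(["low", "high"]).repeat(15), 20)
subj_intercepts = rng.normal(0, 40, 20)[subjects]
rt = (600
      + np.where(frequency == "high", -35.0, 0.0)
      + subj_intercepts
      + rng.normal(0, 60, subjects.size))
df = pd.DataFrame({"subject": subjects, "frequency": frequency, "rt": rt})

# Linear mixed-effects model with a by-subject random intercept,
# roughly analogous to lme4's  rt ~ frequency + (1 | subject).
model = smf.mixedlm("rt ~ frequency", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```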

Book
01 Jan 1968

1,644 citations

Journal ArticleDOI
TL;DR: This article used a corpus of 10,657 English sentences labeled as grammatical or ungrammatical from published linguistics literature to test the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence.
Abstract: This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical from published linguistics literature. As baselines, we train several recurrent neural network models on acceptability classification, and find that our models outperform unsupervised models by Lau et al. (2016) on CoLA. Error-analysis on specific grammatical phenomena reveals that both Lau et al.’s models and ours learn systematic generalizations like subject-verb-object order. However, all models we test perform far below human level on a wide range of grammatical constructions.

903 citations
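For a sense of what acceptability classification on a CoLA-style dataset involves at its simplest, here is a sketch of a bag-of-words baseline. It is far weaker than the recurrent models trained in the paper, and the example sentences and labels are made up rather than drawn from CoLA.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented stand-in for CoLA-style (sentence, acceptability) pairs;
# 1 = acceptable, 0 = unacceptable.
sentences = [
    "The cat sat on the mat.",
    "She seems to be happy.",
    "The sat cat mat on the.",
    "Happy be to seems she.",
]
labels = [1, 1, 0, 0]

# Bag-of-words + logistic regression: a crude baseline compared with the
# RNN acceptability classifiers evaluated in the paper.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)

print(clf.predict(["The dog chased the ball.", "Ball the chased dog the."]))
```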

Journal ArticleDOI
TL;DR: The Reading Systems Framework, a wide-angle view of reading comprehension that places word knowledge at the center of the picture, is reintroduced, taking into account the progress made in comprehension research and theory.
Abstract: We reintroduce a wide-angle view of reading comprehension, the Reading Systems Framework, which places word knowledge in the center of the picture, taking into account the progress made in comprehension research and theory. Within this framework, word-to-text integration processes can serve as a model for the study of local comprehension processes, that is, those that make sense out of short stretches of text. These processes require linkage between the word identification system and the comprehension system, with the lexicon in the linking role. Studies of these processes examine the influence of one sentence on the reading of a single word in a second sentence, which enables the integration of the word meaning into the reader's mental model of the text. Skilled comprehenders, more than less skilled, show immediate use of word meanings in the integration process. Other evidence is also consistent with the assumption that word meaning processes are causal components in comprehension skill.

771 citations