An amorphous model for morphological processing in visual comprehension based on naive discriminative learning.
Frequently Asked Questions (16)
Q2. What are the future works mentioned in the paper "An amorphous model for morphological processing in visual comprehension based on naive discriminative learning" ?
Nevertheless, the authors think it is worth considering that the simpler explanation may be on the right track. One of the central questions for the cognition of language that they put forward is whether the very different language systems of the world can be acquired by the same general learning strategies ( p. 447 ). Finally, even for responses in visual lexical decision, the naive discriminative reader provides a high-level characterization of contextual learning that at the level of cortical learning may be more adequately modeled by hierarchical temporal memory systems ( Hawkins & Blakeslee, 2004 ; Numenta, 2010 ).
Q3. Why is the response latency predicted to be shorter?
Due to greater activation of its lexical meaning (and its grammatical meanings), the response latency to a longer word is predicted to be shorter.
Q4. What do the authors need for modeling morphological effects?
All the authors need for modeling morphological effects is a (symbolic) layer of orthographic nodes (unigrams and bigrams) and a (symbolic) layer of meanings.
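As a rough illustration of how such a two-layer architecture can work, the sketch below applies the Rescorla-Wagner learning rule, on which naive discriminative learning is based, to a hypothetical toy lexicon. The words, meanings, learning rate, and number of training passes are illustrative assumptions, not the paper's actual simulation setup.

```python
from collections import defaultdict

def cues(word):
    """Orthographic cues: letter unigrams plus boundary-marked bigrams."""
    marked = f"#{word}#"
    return list(word) + [marked[i:i + 2] for i in range(len(marked) - 1)]

def rw_update(weights, cue_set, outcomes, all_outcomes, rate=0.01, lam=1.0):
    """One Rescorla-Wagner learning event: present the cues and nudge
    every cue-to-outcome weight toward the outcomes actually observed."""
    for o in all_outcomes:
        predicted = sum(weights[(c, o)] for c in cue_set)
        target = lam if o in outcomes else 0.0
        delta = rate * (target - predicted)
        for c in cue_set:
            weights[(c, o)] += delta

def activation(weights, word, outcome):
    """Summed support for a meaning given a word's orthographic cues."""
    return sum(weights[(c, outcome)] for c in cues(word))

# Hypothetical toy lexicon: words paired with lexical/grammatical meanings.
events = [("hand", {"HAND"}), ("hands", {"HAND", "PLURAL"}), ("sand", {"SAND"})]
all_outcomes = {"HAND", "SAND", "PLURAL"}

weights = defaultdict(float)
for _ in range(1000):  # repeated exposure to the three learning events
    for word, meanings in events:
        rw_update(weights, cues(word), meanings, all_outcomes)

print(round(activation(weights, "hands", "PLURAL"), 2))  # close to 1.0
```

Note that no morpheme node for the plural suffix is ever represented: the association between the cues of "hands" and the PLURAL meaning emerges directly from discriminative learning over unigrams and bigrams, which is the sense in which the model is "amorphous".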
Q5. What was the only predictor for which by-participant random slopes were supported?
Trial was the only predictor for which by-participant random slopes (for the quadratic term of Trial only) were supported by a likelihood ratio test.
Q6. What is the effect of low token frequencies and many types on item-specific learning?
Low token frequencies and many types lead to reduced item-specific learning; the flip side is better generalisation to previously unseen words.
Q7. Why is the naive discriminative reader predicting shorter response latencies?
The naive discriminative reader predicts that bigram troughs should also give rise to shorter response latencies, but not because morphological decomposition would proceed more effectively.
Q8. What is the effect of orthographic familiarity on the reading time measures?
Orthographic familiarity has a significant (albeit small) facilitatory effect on several reading time measures, independently of word-frequency effects.
Q9. What is the probability that a marble drawn from the vase has a color occurring only once?
When a marble is drawn from the vase without replacement, the likelihood that its color occurs once only is equal to the ratio of the number of colors with frequency 1 (V1) to the total number of marbles (N), which for the present example yields a probability of 2/40.
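This ratio can be checked with a short computation. Only V1 = 2 and N = 40 come from the text; the particular color counts below are illustrative assumptions chosen to match those two figures.

```python
from collections import Counter

# Hypothetical vase of 40 marbles in which exactly two colors occur once.
vase = (["red"] * 20 + ["blue"] * 10 + ["green"] * 8
        + ["yellow"] * 1 + ["purple"] * 1)

freqs = Counter(vase)
N = sum(freqs.values())                         # total marbles
V1 = sum(1 for f in freqs.values() if f == 1)   # colors with frequency 1

# Probability that a randomly drawn marble has a color of frequency 1:
p = V1 / N
print(N, V1, p)  # 40 2 0.05
```

The quantity V1/N is the familiar Good-Turing estimate of the probability mass assigned to hitherto unseen types, which is why the vase analogy bears on generalisation to novel words.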
Q10. How many distances are needed to estimate the posterior probabilities of the discriminative learning model?
Yet no fewer than 2,238,324,000,000 distances would have to be evaluated to estimate the posterior probabilities of just these phrases in the Bayesian Reader approach.
Q11. How many weights are needed to compute the discriminative learning model?
Even for the small data set of Serbian nouns, the number of distances the Easy Bayesian Reader has to compute is already 15 times the number of weights that need to be set in the discriminative learning model.
Q12. What is the number of representations required for a naive discriminative reader?
The naive discriminative reader is also sparse in the number of representations required: at the orthographic level, letter unigrams and bigrams, and at the semantic level, meaning representations for simple words, inflectional meanings such as case and number, and the meanings of derivational affixes.
Q13. What is the effect of multiple fixations and saccades on the processing costs for longer words?
The increased processing costs for longer words are, in the present approach, the straightforward consequence of multiple fixations and saccades, a physiological factor unrelated to discriminative learning.
Q14. What is the meaning of the adjective fruitless?
The adjective fruitless is opaque when considered in isolation: the meaning ‘in vain’, ‘unprofitable’ seems unrelated to the meaning of the base, fruit.
Q15. What does the naive discriminative learning framework require to evaluate?
The naive discriminative learning framework, in which relative entropy effects emerge naturally, by contrast, imposes very limited demands on memory, and also does not require a separate process evaluating an exemplar’s distance to the prototype.
Q16. What is the degree of productivity of the derivational process?
Inflectional morphology tends to be quite regular (the irregular past tenses of English being exceptional), but derivational processes are characterized by degrees of productivity.