Author

Dale J. Barr

Bio: Dale J. Barr is an academic researcher from the University of Glasgow. The author has contributed to research in the topics of Conversation and Pragmatics. The author has an h-index of 29 and has co-authored 56 publications receiving 9,872 citations. Previous affiliations of Dale J. Barr include the University of California, Santa Cruz and the University of California, Riverside.


Papers
Journal ArticleDOI
TL;DR: It is argued that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades, and it is shown that LMEMs generalize best when they include the maximal random effects structure justified by the design.
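For readers who want to see what a "maximal random effects structure justified by the design" looks like in practice, the following is a minimal lme4 sketch, not code from the paper; the data frame dat and the variables rt, condition, subject, and item are hypothetical placeholders for a design in which condition varies within subjects and within items.

library(lme4)

# Intercepts-only model: ignores by-subject and by-item variability in the
# condition effect, which can inflate the Type I error rate for that effect
m_intercepts <- lmer(rt ~ condition + (1 | subject) + (1 | item), data = dat)

# Maximal model justified by this design: random intercepts plus random
# slopes for condition, by subjects and by items
m_maximal <- lmer(rt ~ condition + (1 + condition | subject) + (1 + condition | item), data = dat)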

6,878 citations

Journal ArticleDOI
TL;DR: It is argued that people occasionally use an egocentric heuristic, which is successful in reducing ambiguity, though it could lead to a systematic error.
Abstract: When people interpret language, they can reduce the ambiguity of linguistic expressions by using information about perspective: the speaker's, their own, or a shared perspective. In order to investigate the mental processes that underlie such perspective taking, we tracked people's eye movements while they were following instructions to manipulate objects. The eye fixation data in two experiments demonstrate that people do not restrict the search for referents to mutually known objects. Eye movements indicated that addressees considered objects as potential referents even when the speaker could not see those objects, requiring addressees to use mutual knowledge to correct their interpretation. Thus, people occasionally use an egocentric heuristic when they comprehend. We argue that this egocentric heuristic is successful in reducing ambiguity, though it could lead to a systematic error.

663 citations

Journal ArticleDOI
TL;DR: A stark dissociation is shown between an ability to reflectively distinguish one's own beliefs from others' and the routine deployment of this ability in interpreting the actions of others, suggesting important elements of the adult's theory of mind are not fully incorporated into the human comprehension system.

630 citations

Journal ArticleDOI
TL;DR: A new framework is offered that uses multilevel logistic regression (MLR) to analyze data from ‘visual world’ eyetracking experiments used in psycholinguistic research, making it possible to incorporate time as a continuous variable and gaze location as a categorical dependent variable.
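As a rough illustration of the kind of model such a framework involves (this is not code from the paper; the data frame gaze_bins and its columns target_looks, other_looks, time_c, condition, and subject are hypothetical), gaze counts aggregated into time bins can be modeled with a mixed-effects logistic regression in lme4:

library(lme4)

# One row per subject x time bin: counts of fixations on the target vs.
# elsewhere, with time entered as a continuous (centered) predictor
m_gaze <- glmer(cbind(target_looks, other_looks) ~ time_c * condition + (1 + time_c | subject), data = gaze_bins, family = binomial)

The two-column response supplies the binomial numerator and denominator for each bin; richer models might add polynomial time terms or by-item random effects.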

547 citations

Journal ArticleDOI
TL;DR: A new guideline, grounded in the logic of mixed-model ANOVA, is proposed: models testing interactions in designs with replications should include random slopes for the highest-order combination of within-unit factors subsumed by each interaction.
Abstract: In a recent paper on mixed-effects models for confirmatory analysis, Barr et al. (2013) offered the following guideline for testing interactions: “one should have by-unit [subject or item] random slopes for any interactions where all factors comprising the interaction are within-unit; if any one factor involved in the interaction is between-unit, then the random slope associated with that interaction cannot be estimated, and is not needed” (p. 275). Although this guideline is technically correct, it is inadequate for many situations, including mixed factorial designs. The following new guideline is therefore proposed: models testing interactions in designs with replications should include random slopes for the highest-order combination of within-unit factors subsumed by each interaction. Designs with replications are designs where there are multiple observations per sampling unit per cell. Psychological experiments typically involve replicated observations, because multiple stimulus items are usually presented to the same subjects within a single condition. If observations are not replicated (i.e., there is only a single observation per unit per cell), random slope variance cannot be distinguished from random error variance and thus random slopes need not be included.

This new guideline implies that a model testing AB in a 2 × 2 design where A is between and B within should include a random slope for B. Likewise, a model testing all two- and three-way interactions in a 2 × 2 × 2 design where A is between and B, C are within should include random slopes for B, C, and BC. The justification for the guideline comes from the logic of mixed-model ANOVA. In an ANOVA analysis of the 2 × 2 design described above, the appropriate error term for the test of AB is MS(U×B), the mean squares for the unit-by-B interaction (e.g., the subjects-by-B or items-by-B interaction). For the 2 × 2 × 2 design, the appropriate error term for ABC and BC is MS(U×BC), the unit-by-BC interaction; for AB, it is MS(U×B); and for AC, it is MS(U×C).

To what extent is this ANOVA logic applicable to tests of interactions in mixed-effects models? To address this question, Monte Carlo simulations were performed using R (R Core Team, 2013). Models were estimated using the lmer() function of lme4 (Bates et al., 2013), with p-values derived from model comparison (α = 0.05). The performance of mixed-effects models (in terms of Type I error and power) was assessed over two sets of simulations, one for each of two different mixed factorial designs. The first set focused on the test of the AB interaction in a 2 × 2 design with A between and B within; the second focused on the test of the ABC interaction in a 2 × 2 × 2 design with A between and B, C within. For simplicity, all datasets included only a single source of random effect variance (e.g., by-subject but not by-item variance). The number of replications per cell was 4, 8, or 16. Predictors were coded using deviation (−0.5, 0.5) coding; identical results were obtained using treatment coding. In the rare case (~2%) that a model did not converge, it was removed from the analysis. Power was reported with and without adjustment for Type I error rate, using the adjustment method reported in Barr et al. (2013). For each set of simulations at each of the three replication levels, 10,000 datasets were randomly generated, each with 24 sampled units (e.g., subjects). The dependent variable was continuous and normally distributed, with all data-generating parameters drawn from uniform distributions. Fixed effects were either between −2 and −1 or between 1 and 2 (with equal probability). The error variance was fixed at 6, and the random effects variance/covariance matrix had variances ranging from 0 to 3 and covariances corresponding to correlations ranging from −0.9 to 0.9.

For the 2 × 2 design, mixed-effects models with two different random effects structures were fit to the data: (1) by-unit random intercept but no random slope for B (“RI”), and (2) a maximal model including a slope for B in addition to the random intercept (“Max”). For comparison purposes, a test of the interaction using mixed-model ANOVA (“AOV”) was performed using R's aov() function. Results for the test of the AB interaction in the 2 × 2 design are in Tables 1 and 2. As expected, the Type I error rates for the ANOVA and maximal models were very close to the stated α-level of 0.05. In contrast, models lacking the random slope for B (“RI”) showed unacceptably high Type I error rates, increasing with the number of replications. Adjusted power was comparable for all three types of analyses (Table 2), albeit with a slight overall advantage for RI.

Table 1: Type I error rate for the test of AB in the 2 × 2 design.
Table 2: Power for the test of AB in the 2 × 2 design, Adjusted (Raw) p-values.

The test of the ABC interaction in the 2 × 2 × 2 design was evaluated under four different random effects structures, all including a random intercept but varying in which random slopes were included. The models were: (1) random intercept only (“RI”); (2) slopes for B and C but not for BC (“nBC”); (3) slope for BC but not for B nor C (“BC”); and (4) maximal (slopes for B, C, and BC; “Max”). For the test of the ABC interaction, ANOVA and maximal models both yielded acceptable Type I performance (Table 3); the model with the BC slope alone (“BC”) was comparably good. However, the model excluding the BC slope had unacceptably high Type I error rates; surprisingly, omitting this random slope may be even worse than fitting a random-intercept-only model. Adjusted power was comparable across all analyses (Table 4).

Table 3: Type I error rate for the test of ABC in the 2 × 2 × 2 design.
Table 4: Power for the test of ABC in the 2 × 2 × 2 design, Adjusted (Raw) p-values.

To summarize: when testing interactions in mixed designs with replications, it is critical to include the random slope corresponding to the highest-order combination of within-subject factors subsumed by each interaction of interest. It is just as important to attend to this guideline when one seeks to simplify a non-converging model as when one is deciding on what structure to fit in the first place. Failing to include the critical slope in the test of an interaction can yield unacceptably high Type I error rates. Indeed, a model that includes all relevant random slopes except for the single critical slope may perform just as badly as (or possibly even worse than) a random-intercepts-only model, even though such a model is nearly maximal. Finally, note that including only the critical random slope in the model was sufficient to obtain acceptable performance, as illustrated by the “BC” model in the 2 × 2 × 2 design.

Although the current simulations only considered interactions between categorical variables, the guideline applies whenever there are replicated observations, regardless of what types of variables are involved in an interaction (e.g., continuous only, or a mix of categorical and continuous). For example, consider a design with two independent groups of subjects, where there are observations at multiple time points for each subject. When testing the time-by-group interaction, the model should include a random slope for the continuous variable of time; if time is modeled using multiple terms of a polynomial, then there should be a slope for each of the terms in the polynomial that interact with group. For instance, if the effect of time is modeled as Y = β₀ + β₁t + β₂t² and the interest is in whether the β₁ and β₂ parameters vary across groups, then the random effects structure should include random slopes for both t and t².
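To make the guideline concrete, here is a minimal lme4 sketch of the model comparison for the 2 × 2 case (A between-unit, B within-unit). The data frame dat and its column names are assumptions for illustration, not the simulation code used in the paper.

library(lme4)

# Hypothetical data: deviation-coded factors A (between-subject) and
# B (within-subject), outcome y, several replicated observations per cell

# Model with the critical by-subject random slope for B
m_full <- lmer(y ~ A * B + (1 + B | subject), data = dat, REML = FALSE)

# Same random effects structure, minus the A:B fixed effect
m_red <- lmer(y ~ A + B + (1 + B | subject), data = dat, REML = FALSE)

# p-value for the AB interaction via model comparison (likelihood-ratio test)
anova(m_red, m_full)

# For the 2 x 2 x 2 case (A between; B, C within), the test of ABC needs at
# least the by-subject slope for the B:C combination, e.g.
#   y ~ A * B * C + (1 + B:C | subject)
# and the polynomial-time example needs by-subject slopes for t and t^2, e.g.
#   y ~ group * (t + I(t^2)) + (1 + t + I(t^2) | subject)

The point of the simulations is that both models being compared must contain the (1 + B | subject) term; dropping the critical slope inflates the Type I error rate for the interaction.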

459 citations


Cited by

Journal ArticleDOI
TL;DR: It is argued, with supporting evidence, that great apes understand the basics of intentional action but still do not participate in activities involving joint intentions and attention (shared intentionality), and that children's skills of shared intentionality develop gradually during the first 14 months of life.
Abstract: We propose that the crucial difference between human cognition and that of other species is the ability to participate with others in collaborative activities with shared goals and intentions: shared intentionality. Participation in such activities requires not only especially powerful forms of intention reading and cultural learning, but also a unique motivation to share psychological states with others and unique forms of cognitive representation for doing so. The result of participating in these activities is species-unique forms of cultural cognition and evolution, enabling everything from the creation and use of linguistic symbols to the construction of social norms and individual beliefs to the establishment of social institutions. In support of this proposal we argue and present evidence that great apes (and some children with autism) understand the basics of intentional action, but they still do not participate in activities involving joint intentions and attention (shared intentionality). Human children's skills of shared intentionality develop gradually during the first 14 months of life as two ontogenetic pathways intertwine: (1) the general ape line of understanding others as animate, goal-directed, and intentional agents; and (2) a species-unique motivation to share emotions, experience, and activities with other persons. The developmental outcome is children's ability to construct dialogic cognitive representations, which enable them to participate in earnest in the collectivity that is human cognition.

3,660 citations

Book
01 Jul 2002
TL;DR: In this article, a review is presented of the book "Heuristics and Biases: The Psychology of Intuitive Judgment, edited by Thomas Gilovich, Dale Griffin, and Daniel Kahneman".
Abstract: A review is presented of the book “Heuristics and Biases: The Psychology of Intuitive Judgment,” edited by Thomas Gilovich, Dale Griffin, and Daniel Kahneman.

3,642 citations

Journal ArticleDOI
TL;DR: The sixth claim has received the least attention in the literature on embodied cognition, but it may in fact be the best documented and most powerful of the six claims.
Abstract: The emerging viewpoint of embodied cognition holds that cognitive processes are deeply rooted in the body’s interactions with the world. This position actually houses a number of distinct claims, some of which are more controversial than others. This paper distinguishes and evaluates the following six claims: (1) cognition is situated; (2) cognition is time-pressured; (3) we off-load cognitive work onto the environment; (4) the environment is part of the cognitive system; (5) cognition is for action; (6) offline cognition is body based. Of these, the first three and the fifth appear to be at least partially true, and their usefulness is best evaluated in terms of the range of their applicability. The fourth claim, I argue, is deeply problematic. The sixth claim has received the least attention in the literature on embodied cognition, but it may in fact be the best documented and most powerful of the six claims.

3,387 citations

Journal ArticleDOI
TL;DR: In this paper, the authors examined the implications of individual differences in performance for each of the four explanations of the normative/descriptive gap, including performance errors, computational limitations, the wrong norm being applied by the experimenter, and a different construal of the task by the subject.
Abstract: Much research in the last two decades has demonstrated that human responses deviate from the performance deemed normative according to various models of decision making and rational judgment (e.g., the basic axioms of utility theory). This gap between the normative and the descriptive can be interpreted as indicating systematic irrationalities in human cognition. However, four alternative interpretations preserve the assumption that human behavior and cognition is largely rational. These posit that the gap is due to (1) performance errors, (2) computational limitations, (3) the wrong norm being applied by the experimenter, and (4) a different construal of the task by the subject. In the debates about the viability of these alternative explanations, attention has been focused too narrowly on the modal response. In a series of experiments involving most of the classic tasks in the heuristics and biases literature, we have examined the implications of individual differences in performance for each of the four explanations of the normative/descriptive gap. Performance errors are a minor factor in the gap; computational limitations underlie non-normative responding on several tasks, particularly those that involve some type of cognitive decontextualization. Unexpected patterns of covariance can suggest when the wrong norm is being applied to a task or when an alternative construal of the task should be considered appropriate.

3,068 citations