Author

Matthew J. C. Crump

Bio: Matthew J. C. Crump is an academic researcher from City University of New York. The author has contributed to research in the topics of Stroop effect and cognition. The author has an h-index of 22 and has co-authored 47 publications receiving 2,902 citations. Previous affiliations of Matthew J. C. Crump include Brooklyn College and McMaster University.

Papers
Journal ArticleDOI
13 Mar 2013-PLOS ONE
TL;DR: This paper replicates a diverse body of tasks from experimental psychology, including the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks, with participants recruited via AMT.
Abstract: Amazon Mechanical Turk (AMT) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money. The service has attracted attention from experimental psychologists interested in gathering human subject data more efficiently. However, relative to traditional laboratory studies, many aspects of the testing environment are not under the experimenter's control. In this paper, we attempt to empirically evaluate the fidelity of the AMT system for use in cognitive behavioral experiments. These types of experiments differ from simple surveys in that they require multiple trials, sustained attention from participants, comprehension of complex instructions, and millisecond accuracy for response recording and stimulus presentation. We replicate a diverse body of tasks from experimental psychology including the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks using participants recruited using AMT. While most of the replications were qualitatively successful and validated the approach of collecting data anonymously online using a web-browser, others revealed disparity between laboratory results and online results. A number of important lessons were encountered in the process of conducting these replications that should be of value to other researchers.
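As a rough illustration of the kind of trial-level analysis such replications rest on, the sketch below computes a per-participant Stroop-style congruency effect from reaction-time data. The column names (worker_id, congruent, rt_ms, correct) are hypothetical, not the authors' actual data format.

```python
# Minimal sketch: estimating a Stroop-style congruency effect from trial-level
# data collected online. Column names are assumptions for illustration only.
import pandas as pd

def congruency_effect(trials: pd.DataFrame) -> pd.Series:
    """Incongruent-minus-congruent mean RT (ms) per participant."""
    correct = trials[trials["correct"]]              # analyse correct trials only
    cell_means = (
        correct.groupby(["worker_id", "congruent"])["rt_ms"]
        .mean()
        .unstack("congruent")                        # columns: False, True
    )
    return cell_means[False] - cell_means[True]      # positive = interference

if __name__ == "__main__":
    demo = pd.DataFrame({
        "worker_id": ["w1"] * 4 + ["w2"] * 4,
        "congruent": [True, False, True, False] * 2,
        "rt_ms":     [620, 710, 600, 690, 640, 760, 655, 745],
        "correct":   [True] * 8,
    })
    print(congruency_effect(demo))
```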

1,378 citations

Journal ArticleDOI
TL;DR: This review chronologically tracks the expansion of the proportion congruent (PC) manipulation from its initial implementation at the list-wide level to more recent implementations at the item-specific and context-specific levels, and discusses the utility of PC manipulations for exploring the distinction between voluntary control and stimulus-driven control in other relevant paradigms.
Abstract: Cognitive control is by now a large umbrella term referring collectively to multiple processes that plan and coordinate actions to meet task goals. A common feature of paradigms that engage cognitive control is the task requirement to select relevant information despite a habitual tendency (or bias) to select goal-irrelevant information. At least since the 70s, researchers have employed proportion congruent manipulations to experimentally establish selection biases and evaluate the mechanisms used to control attention. Proportion congruent manipulations vary the frequency with which irrelevant information conflicts (i.e., is incongruent) with relevant information. The purpose of this review is to summarize the growing body of literature on proportion congruent effects across selective attention paradigms, beginning first with Stroop, and then describing parallel effects in flanker and task-switching paradigms. The review chronologically tracks the expansion of the proportion congruent manipulation from its initial implementation at the list-wide level, to more recent implementations at the item-specific and context-specific levels. An important theoretical aim is demonstrating that proportion congruent effects at different levels (e.g., list-wide vs. item or context-specific) support a distinction between voluntary forms of cognitive control, which operate based on anticipatory information, and relatively automatic or reflexive forms of cognitive control, which are rapidly triggered by the processing of particular stimuli or stimulus features. A further aim is to highlight those proportion congruent manipulations that allow researchers to dissociate stimulus-driven control from other stimulus-driven processes (e.g., S-R responding; episodic retrieval). We conclude by discussing the utility of proportion congruent manipulations for exploring the distinction between voluntary control and stimulus-driven control in other relevant paradigms.
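To make the manipulation concrete, here is a minimal sketch of how a proportion congruent trial list could be constructed for a Stroop-style task; the colours, proportions, and trial counts are illustrative assumptions rather than parameters from any study in the review.

```python
# Sketch of a proportion congruent (PC) manipulation for a four-colour Stroop task.
import random

COLOURS = ["red", "green", "blue", "yellow"]

def item_trials(word: str, n_trials: int, pc: float) -> list[dict]:
    """Trials for one colour word, a proportion `pc` of them congruent."""
    n_congruent = round(n_trials * pc)
    trials = []
    for i in range(n_trials):
        if i < n_congruent:
            colour = word                                              # congruent
        else:
            colour = random.choice([c for c in COLOURS if c != word])  # incongruent
        trials.append({"word": word, "colour": colour, "congruent": colour == word})
    return trials

def make_block(pc_by_item: dict[str, float], trials_per_item: int = 40) -> list[dict]:
    """Item-specific PC design: each word carries its own proportion congruent."""
    block = []
    for word, pc in pc_by_item.items():
        block += item_trials(word, trials_per_item, pc)
    random.shuffle(block)
    return block

# Half the items mostly congruent, half mostly incongruent: the list-wide PC is
# 50%, while the item-specific PC differs across items.
block = make_block({"red": 0.75, "green": 0.75, "blue": 0.25, "yellow": 0.25})
```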

246 citations

Journal ArticleDOI
TL;DR: A context-specific proportion congruent effect that cannot be explained by learned word-response associations is demonstrated, suggesting that processes other than such associations can produce contextual control over Stroop interference.
Abstract: The Stroop effect has been shown to depend on the relative proportion of congruent and incongruent trials. This effect is commonly attributed to experiment-wide word-reading strategies that change as a function of proportion congruent. Recently, Jacoby, Lindsay, and Hessels (2003) reported an item-specific proportion congruent effect that cannot be due to these strategies and instead may reflect rapid, stimulus-driven control over word-reading processes. However, an item-specific proportion congruent effect may also reflect learned associations between color word identities and responses. In two experiments, we demonstrate a context-specific proportion congruent effect that cannot be explained by such word-response associations. Our results suggest that processes other than learning of word-response associations can produce contextual control over Stroop interference.
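The design logic can be sketched in a few lines: tie proportion congruent to a context cue (here, screen location) while keeping every colour word 50% congruent overall, so that learned word-response associations cannot carry a context effect. The locations, proportions, and trial counts below are illustrative assumptions, not the experiments' actual parameters.

```python
# Sketch of a context-specific PC design: location predicts conflict likelihood,
# while each word is equally often congruent and incongruent overall.
import random

COLOURS = ["red", "green", "blue", "yellow"]

def context_trials(location: str, pc: float, trials_per_word: int = 20) -> list[dict]:
    """Trials shown at one location, a proportion `pc` of them congruent."""
    trials = []
    for word in COLOURS:
        n_congruent = round(trials_per_word * pc)
        for i in range(trials_per_word):
            colour = word if i < n_congruent else random.choice(
                [c for c in COLOURS if c != word])
            trials.append({"location": location, "word": word, "colour": colour,
                           "congruent": colour == word})
    return trials

# 75% congruent above fixation, 25% below: each word is 50% congruent overall,
# but the location carries the proportion congruent manipulation.
block = context_trials("above", 0.75) + context_trials("below", 0.25)
random.shuffle(block)
```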

219 citations

Journal ArticleDOI
TL;DR: The results of four experiments provide evidence for controlled processing in the absence of awareness by showing that the contingency effect results from behavioural control and not from semantic association or stimulus familiarity.

147 citations

Journal ArticleDOI
TL;DR: A consensus view among opposing theorists is presented that specifies how researchers can measure four hallmark indices of adaptive control (the congruency sequence effect, and list-wide, context-specific, and item-specific proportion congruent effects) while minimizing easy-to-overlook confounds.
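Of those four indices, the congruency sequence effect is the one not already illustrated above; a minimal single-participant sketch follows, again assuming hypothetical column names (trial, congruent, rt_ms) for a generic trial-level data frame.

```python
# Sketch of the congruency sequence effect (CSE): the congruency effect on
# trial n conditioned on the congruency of trial n-1. Single-participant data
# with all four condition cells present is assumed.
import pandas as pd

def congruency_sequence_effect(trials: pd.DataFrame) -> float:
    """CSE (ms): congruency effect after congruent trials minus the
    congruency effect after incongruent trials."""
    df = trials.sort_values("trial").copy()
    df["prev_congruent"] = df["congruent"].shift(1)        # congruency of trial n-1
    df = df.dropna(subset=["prev_congruent"])
    cells = df.groupby(["prev_congruent", "congruent"])["rt_ms"].mean()
    effect_after_congruent = cells[(True, False)] - cells[(True, True)]
    effect_after_incongruent = cells[(False, False)] - cells[(False, True)]
    return effect_after_congruent - effect_after_incongruent
```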

135 citations


Cited by
28 Jul 2005
TL;DR: Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1) interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. The var gene family encodes roughly 60 members per haploid genome, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

01 Jan 1964
TL;DR: This work reports experimental studies of perceiving, imaging, and remembering, develops a theory of remembering, and treats remembering as a study in social psychology, including a discussion of the notion of a collective unconscious.
Abstract: Part I. Experimental Studies: 2. Experiment in psychology; 3. Experiments on perceiving; III. Experiments on imaging; 4-8. Experiments on remembering: (a) the method of description, (b) the method of repeated reproduction, (c) the method of picture writing, (d) the method of serial reproduction, (e) the method of serial reproduction with picture material; 9. Perceiving, recognizing, remembering; 10. A theory of remembering; 11. Images and their functions; 12. Meaning. Part II. Remembering as a Study in Social Psychology: 13. Social psychology; 14. Social psychology and the matter of recall; 15. Social psychology and the manner of recall; 16. Conventionalism; 17. The notion of a collective unconscious; 18. The basis of social recall; 19. A summary and some conclusions.

5,690 citations

Posted Content
TL;DR: A new dataset of human perceptual similarity judgments is introduced, and it is found that deep features outperform all previous metrics by large margins on this dataset, suggesting that perceptual similarity is an emergent property shared across deep visual representations.
Abstract: While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.

3,838 citations

Proceedings ArticleDOI
11 Jan 2018
TL;DR: In this paper, the authors introduce a new dataset of human perceptual similarity judgments, and systematically evaluate deep features across different architectures and tasks and compare them with classic metrics, finding that deep features outperform all previous metrics by large margins on their dataset.
Abstract: While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
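The core idea is simple to sketch: compare images by distances between deep network activations rather than pixels. The snippet below is an uncalibrated illustration using torchvision's pretrained VGG16; it is not the authors' learned LPIPS metric (which additionally fits weights to the human judgments), and the layer choice and normalisation are assumptions.

```python
# Uncalibrated sketch of deep-feature image distance. Requires torch/torchvision;
# downloads pretrained VGG16 weights on first use.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

_LAYERS = {3, 8, 15, 22, 29}   # ReLU outputs of the five VGG16 conv blocks (a choice)
_vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()

def _features(x: torch.Tensor) -> list[torch.Tensor]:
    feats = []
    for i, layer in enumerate(_vgg):
        x = layer(x)
        if i in _LAYERS:
            feats.append(F.normalize(x, dim=1))   # unit-normalise channel vectors
    return feats

def deep_feature_distance(img0: torch.Tensor, img1: torch.Tensor) -> torch.Tensor:
    """img0, img1: (N, 3, H, W) tensors, ImageNet-normalised. One distance per pair."""
    with torch.no_grad():
        f0, f1 = _features(img0), _features(img1)
    return sum(((a - b) ** 2).mean(dim=(1, 2, 3)) for a, b in zip(f0, f1))

# Random tensors as stand-ins for real, preprocessed image batches:
print(deep_feature_distance(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)))
```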

3,322 citations

Journal ArticleDOI
11 Dec 2015-Science
TL;DR: A computational model is described that learns new handwritten characters from a single example, as people do, outperforms current deep learning algorithms on one-shot classification, and can generate new letters of the alphabet that look "right" as judged by Turing-like comparisons of the model's output with what real humans produce.
Abstract: People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms: for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.
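For concreteness, the sketch below illustrates only the evaluation setup described above (an N-way one-shot classification episode), not the Bayesian program learning model itself. The `dataset` argument is a hypothetical dict mapping each character class to a list of same-sized 2-D numpy arrays (at least two drawings per class), and the pixel-distance classifier is a deliberately simple placeholder baseline.

```python
# Sketch of an N-way one-shot classification episode with a trivial baseline.
import random
import numpy as np

def one_shot_episode(dataset: dict[str, list[np.ndarray]], n_way: int = 20):
    """Sample one support drawing per class and one query drawing."""
    classes = random.sample(list(dataset), n_way)
    support = {c: random.choice(dataset[c]) for c in classes}
    query_class = random.choice(classes)
    others = [im for im in dataset[query_class] if im is not support[query_class]]
    query = random.choice(others)          # a different drawing of the same character
    return support, query, query_class

def classify_by_pixels(support: dict[str, np.ndarray], query: np.ndarray) -> str:
    """Placeholder baseline: nearest single example in raw pixel space."""
    return min(support, key=lambda c: float(np.sum((support[c] - query) ** 2)))
```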

2,364 citations