Author

Jonathan W. Peirce

Bio: Jonathan W. Peirce is an academic researcher at the University of Nottingham. His research has focused on the topics of contrast (vision) and the visual cortex. He has an h-index of 21 and has co-authored 46 publications receiving 6,253 citations. His previous affiliations include the Babraham Institute and New York University.

Papers
Journal ArticleDOI
TL;DR: Describes a new, free suite of software tools designed to make this task easier, exploiting the latest advances in hardware and software; the suite is written in the interpreted Python language using entirely free libraries.

3,575 citations

Journal ArticleDOI
TL;DR: The most notable addition has been the Builder interface, allowing users to create studies with minimal or no programming, while also allowing the insertion of Python code for maximal flexibility.
Abstract: PsychoPy is an application for the creation of experiments in behavioral science (psychology, neuroscience, linguistics, etc.) with precise spatial control and timing of stimuli. It now provides a choice of interface; users can write scripts in Python if they choose, while those who prefer to construct experiments graphically can use the new Builder interface. Here we describe the features that have been added over the last 10 years of its development. The most notable addition has been the Builder interface, allowing users to create studies with minimal or no programming, while also allowing the insertion of Python code for maximal flexibility. We also present some of the other new features, including further stimulus options, asynchronous time-stamped hardware polling, and better support for open science and reproducibility. Tens of thousands of users now launch PsychoPy every month, and more than 90 people have contributed to the code. We discuss the current state of the project, as well as plans for the future.

1,747 citations
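
To make the scripting side of this concrete, here is a minimal sketch (not taken from the paper) of a reaction-time trial using PsychoPy's time-stamped keyboard polling. Class and argument names follow the public PsychoPy API, but the details are assumptions to be checked against the current documentation.

from psychopy import core, visual
from psychopy.hardware import keyboard

win = visual.Window(fullscr=False, units="pix")
msg = visual.TextStim(win, text="Press SPACE as soon as this appears")
kb = keyboard.Keyboard()               # time-stamped keyboard polling

msg.draw()
win.flip()                             # stimulus appears on the next screen refresh
kb.clock.reset()                       # zero the response clock at stimulus onset
keys = kb.waitKeys(keyList=["space"], waitRelease=False)
for key in keys:
    print(key.name, round(key.rt, 4))  # reaction time in seconds since the reset

win.close()
core.quit()

In Builder, essentially the same trial can be assembled graphically, with inserted Python code reserved for whatever a graphical component cannot express.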

Journal ArticleDOI
TL;DR: Describes the range of tools and stimuli that PsychoPy provides, using OpenGL to generate very precise visual stimuli on standard personal computers, and the environment in which experiments are conducted.
Abstract: PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.

1,321 citations
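
As an illustration of the kind of script the abstract describes, here is a minimal, hypothetical example that draws a drifting sinusoidal grating via PsychoPy's OpenGL-backed stimulus classes; the size, spatial frequency, and duration are arbitrary choices, not values from the paper.

from psychopy import visual, core, event

win = visual.Window(size=(800, 600), units="pix", fullscr=False)
grating = visual.GratingStim(win, tex="sin", mask="gauss", size=256, sf=0.02)

for frame in range(180):      # roughly 3 s at a 60 Hz refresh rate
    grating.phase += 0.05     # advance the phase each frame so the grating drifts
    grating.draw()
    win.flip()                # render the frame, synchronised to the screen refresh

event.waitKeys()              # hold the final frame until any key is pressed
core.quit()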

Journal ArticleDOI
08 Apr 2004 - Neuron
TL;DR: The observations confirm that contrast adaptation occurs at multiple levels in the visual system, and they provide a new way to reveal the function and perceptual significance of the M pathway.

276 citations

Journal ArticleDOI
20 Jul 2020 - PeerJ
TL;DR: A wide-ranging study of the precision and accuracy of visual and auditory stimulus timing and response times, measured with a Black Box Toolkit, highlighting the wide range of timing quality that can occur even in software packages dedicated to this task.
Abstract: Many researchers in the behavioral sciences depend on research software that presents stimuli, and records response times, with sub-millisecond precision. There are a large number of software packages with which to conduct these behavioral experiments and measure the response times and performance of participants. Very little information is available, however, on what timing performance they achieve in practice. Here we report a wide-ranging study looking at the precision and accuracy of visual and auditory stimulus timing and response times, measured with a Black Box Toolkit. We compared a range of popular packages: PsychoPy, E-Prime®, NBS Presentation®, Psychophysics Toolbox, OpenSesame, Expyriment, Gorilla, jsPsych, Lab.js and Testable. Where possible, the packages were tested on Windows, macOS, and Ubuntu, and in a range of browsers for the online studies, to try to identify common patterns in performance. Among the lab-based experiments, Psychtoolbox, PsychoPy, Presentation and E-Prime provided the best timing, all with mean precision under 1 millisecond across the visual, audio and response measures. OpenSesame had slightly less precision across the board, most notably in audio stimuli, and Expyriment had rather poor precision. Across operating systems, the pattern was that precision was generally very slightly better under Ubuntu than Windows, and that macOS was the worst, at least for visual stimuli, for all packages. Online studies did not deliver the same level of precision as lab-based systems, with slightly more variability in all measurements. That said, PsychoPy and Gorilla, broadly the best performers, achieved very close to millisecond precision on several browser/operating system combinations. For response times (measured using a high-performance button box), most of the packages achieved a precision of under 10 ms in all browsers, with PsychoPy achieving a precision under 3.5 ms in all. There was considerable variability between OS/browser combinations, especially in audio-visual synchrony, which is the least precise aspect of the browser-based experiments. Nonetheless, the data indicate that online methods can be suitable for a wide range of studies, provided due thought is given to the resulting sources of variability. The results, from over 110,000 trials, highlight the wide range of timing quality that can occur even in software packages dedicated to this task. We stress the importance of scientists making their own timing validation measurements for their own stimuli and computer configuration.

193 citations
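
The abstract's distinction between accuracy (the mean lag from the intended onset) and precision (the trial-to-trial variability of that lag) reduces to two summary statistics. The sketch below uses made-up lag measurements purely to illustrate the two quantities; it is not data from the study.

import statistics

# Hypothetical lags (ms) between requested and actual stimulus onsets
lags_ms = [8.2, 8.5, 7.9, 8.4, 8.1, 8.6, 8.0, 8.3]

accuracy = statistics.mean(lags_ms)     # constant offset from the intended onset
precision = statistics.stdev(lags_ms)   # trial-to-trial variability of that offset
print(f"accuracy (mean lag): {accuracy:.2f} ms")
print(f"precision (SD of lag): {precision:.2f} ms")

A constant lag can often be corrected after the fact, which is why the comparisons in the abstract emphasise precision rather than raw lag.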


Cited by
Journal ArticleDOI
TL;DR: Presents the freely available Radboud Faces Database, a face database in which displayed expressions, gaze direction, and head orientation are parametrically varied in a complete factorial design, containing images of both Caucasian adults and children.
Abstract: Many research fields concerned with the processing of information contained in human faces would benefit from face stimulus sets in which specific facial characteristics are systematically varied while other important picture characteristics are kept constant. Specifically, a face database in which displayed expressions, gaze direction, and head orientation are parametrically varied in a complete factorial design would be highly useful in many research domains. Furthermore, these stimuli should be standardised in several important technical aspects. The present article presents the freely available Radboud Faces Database offering such a stimulus set, containing images of both Caucasian adults and children. This face database is described both procedurally and in terms of content, and a validation study concerning its most important characteristics is presented. In the validation study, all frontal images were rated with respect to the shown facial expression, intensity of expression, clarity of expression, genuineness of expression, attractiveness, and valence. The results show very high recognition of the intended facial expressions.

2,041 citations

Journal ArticleDOI
TL;DR: The most notable addition has been the Builder interface, allowing users to create studies with minimal or no programming, while also allowing the insertion of Python code for maximal flexibility.
Abstract: PsychoPy is an application for the creation of experiments in behavioral science (psychology, neuroscience, linguistics, etc.) with precise spatial control and timing of stimuli. It now provides a choice of interface; users can write scripts in Python if they choose, while those who prefer to construct experiments graphically can use the new Builder interface. Here we describe the features that have been added over the last 10 years of its development. The most notable addition has been the Builder interface, allowing users to create studies with minimal or no programming, while also allowing the insertion of Python code for maximal flexibility. We also present some of the other new features, including further stimulus options, asynchronous time-stamped hardware polling, and better support for open science and reproducibility. Tens of thousands of users now launch PsychoPy every month, and more than 90 people have contributed to the code. We discuss the current state of the project, as well as plans for the future.

1,747 citations

Journal ArticleDOI
TL;DR: OpenSesame is a graphical experiment builder for the social sciences that features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks.
Abstract: In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.

1,666 citations

Journal ArticleDOI
TL;DR: A formal Bayesian definition of surprise is proposed to capture subjective aspects of sensory information, and it is shown that Bayesian surprise is a strong attractor of human attention, with 72% of all gaze shifts directed towards locations more surprising than the average.

1,407 citations
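
The "formal Bayesian definition of surprise" mentioned in the TL;DR is standardly expressed as the Kullback-Leibler divergence between an observer's posterior and prior beliefs over candidate models. The sketch below is a hypothetical discrete-case illustration of that quantity, not the authors' code.

import math

def bayesian_surprise(prior, likelihood):
    """KL(posterior || prior) over a discrete set of candidate models."""
    evidence = sum(p * l for p, l in zip(prior, likelihood))
    posterior = [p * l / evidence for p, l in zip(prior, likelihood)]
    return sum(q * math.log(q / p) for q, p in zip(posterior, prior) if q > 0)

# Two models, equally likely a priori; incoming data favour the second one.
print(bayesian_surprise(prior=[0.5, 0.5], likelihood=[0.2, 0.8]))  # surprise in nats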