
Showing papers by "James L. McClelland" published in 2021


Journal ArticleDOI
TL;DR: Though behavioral data can sometimes exhibit approximate adherence to Weber's law, the findings suggest that such adherence is not a fixed characteristic of the mechanisms whereby humans and animals estimate numerosity; rather, the observed pattern of variability reflects an adaptive ensemble of mechanisms assembled to optimize performance under the circumstances of the task.
Abstract: Both humans and nonhuman animals can exhibit sensitivity to the approximate number of items in a visual array or events in a sequence, and across various paradigms, uncertainty in numerosity judgments increases with the number estimated or produced. The pattern of increase is usually described as exhibiting approximate adherence to Weber's law, such that uncertainty increases proportionally to the mean estimate, resulting in a constant coefficient of variation. Such a pattern has been proposed to be a signature characteristic of an innate "number sense." We reexamine published behavioral data from two studies that have been cited as prototypical evidence of adherence to Weber's law and observe that in both cases variability increases less than this account would predict, as indicated by a decreasing coefficient of variation with an increase in number. We also consider evidence from numerosity discrimination studies that show deviations from the constant coefficient of variation pattern. Though behavioral data can sometimes exhibit approximate adherence to Weber's law, our findings suggest that such adherence is not a fixed characteristic of the mechanisms whereby humans and animals estimate numerosity. We suggest instead that the observed pattern of increase in variability with number depends on the circumstances of the task and stimuli, and reflects an adaptive ensemble of mechanisms composed to optimize performance under these circumstances.

7 citations
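
To make the statistical signature at issue concrete: under Weber's law the standard deviation of numerosity estimates grows in proportion to the mean, so the coefficient of variation (CV = sd / mean) stays flat, whereas the reexamined data show a CV that decreases with number. The following minimal Python simulation illustrates the contrast only; it is not the paper's analysis, and the noise coefficient of 0.15 and the sublinear exponent of 0.7 are assumed values chosen for demonstration.

import numpy as np

rng = np.random.default_rng(0)
numbers = np.array([4, 8, 16, 32, 64])

for label, exponent in [("Weber: sd proportional to n", 1.0),
                        ("sublinear: sd proportional to n**0.7", 0.7)]:
    print(label)
    for n in numbers:
        sd = 0.15 * n ** exponent               # scalar vs. sublinear variability
        estimates = rng.normal(n, sd, 10_000)   # simulated numerosity estimates
        cv = estimates.std() / estimates.mean()
        print(f"  n={n:3d}  CV={cv:.3f}")

Under the Weber pattern the printed CV hovers near 0.15 at every n; under the sublinear pattern it declines as n grows, mirroring the decreasing coefficient of variation the authors report.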


Posted ContentDOI
14 Oct 2021-bioRxiv
TL;DR: The authors consider how a single parallel mechanism, simultaneously influenced by proximal and future information, may be at work both when future information is relevant to a choice and when it is not. They find that participants tend to place somewhat more weight on factors relevant to the immediate next action than on future considerations, and suboptimally place some weight on task-irrelevant factors.
Abstract: When we choose actions aimed at achieving long-range goals, proximal information cannot be exploited in a blindly myopic way, as relevant future information must often be taken into account. However, when long-range information is irrelevant to achieving proximal subgoals, it can be desirable to focus exclusively on subgoal-relevant considerations. Here, we consider how an underlying parallel mechanism simultaneously influenced by proximal and future information may be at work when decision makers confront both types of situations. Participants were asked to find the shortest path in a simple maze where the optimal path depended on both starting-point and goal-proximal constraints. This simple task was then embedded in a more complex maze where the same two constraints, but not the final goal position, determined the optimal path to the subgoal. In both tasks, initial choice responses predominantly reflected the joint influence of the relevant immediate and future constraints, yet we also found systematic deviations from optimality. We modeled initial path choice as an evidence integration process and found that participants weighted the starting point more heavily than the equally relevant goal in the simple task. In the complex task, there was no evidence of a separate processing stage in which participants first zeroed in on the subgoal, as would be expected if task decomposition occurred strictly prior to choosing a path to the subgoal. As in the simple task, participants again placed slightly more weight on the starting point than on the subgoal, and they also placed some weight on the irrelevant final goal. These results suggest that optimizing decision making can be viewed as adjusting the weighting of constraints toward values that favor the relevant ones in a given task context, and that the dynamic re-weighting of constraints at different points in a decision process can allow an inherently parallel process to exhibit approximate emergent hierarchical structure.

Author Summary: Optimal approaches to achieving long-term goals often require considering relevant future information and, at other times, chunking a problem into subproblems that can be tackled one at a time. These two situations seemingly require separate modes of thinking: while simultaneous consideration allows proximal and future information to jointly guide our actions, tackling subgoals is often thought to require first coming up with a higher-level plan and then solving each subtask separately. In this study, we examine how both abilities might be explained by a shared mechanism. We conducted behavioral experiments and used computational modeling to understand how people weight various factors in choosing goal-reaching paths. We found that their weighting of task-relevant factors allowed them to approximate optimal path choices, but that they tended to place somewhat more weight on factors relevant to the immediate next action than on future considerations, and suboptimally placed some weight on task-irrelevant factors. These results open up space for considering situation-dependent constraint weighting as a mechanism that allows people to integrate multiple pieces of information in a flexible, context-sensitive manner in the service of optimizing performance in reaching an overall goal.
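
As a sketch of the kind of weighted evidence-integration account described above (the scoring scheme and all weight values are illustrative assumptions, not the paper's fitted model), each candidate path can be scored as a weighted sum of how well it satisfies the starting-point, subgoal, and final-goal constraints, with a softmax turning scores into choice probabilities:

import numpy as np

# path_features: one row per candidate path, one column per constraint
# (how strongly that path satisfies the constraint); weights: one weight
# per constraint. A softmax converts weighted scores to choice probabilities.
def choice_probs(path_features, weights, temperature=1.0):
    scores = path_features @ weights
    z = scores / temperature
    z = z - z.max()                  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Columns: [starting-point constraint, subgoal constraint, final-goal constraint]
paths = np.array([[1.0, 0.0, 0.5],   # path favored by the starting point
                  [0.0, 1.0, 0.0]])  # path favored by the subgoal
# Qualitative pattern from the abstract: slightly more weight on the
# starting point than on the subgoal, plus a small, suboptimal weight on
# the task-irrelevant final goal (all numbers here are made up).
w = np.array([1.2, 1.0, 0.2])
print(choice_probs(paths, w))        # about [0.57, 0.43]: a mild start-side bias

In a model of this form, the paper's "dynamic re-weighting of constraints" corresponds to changing the weight vector at different points in the decision process.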

Posted Content
TL;DR: This paper examines human adults' ability to learn an abstract reasoning task from a brief instructional tutorial and explanatory feedback on incorrect responses. It demonstrates that human learning dynamics, and the human ability to generalize outside the range of the training examples, differ drastically from those of a representative neural network model, and that the model is brittle to changes in features its designers did not anticipate.
Abstract: Despite the groundbreaking successes of neural networks, contemporary models require extensive training with massive datasets and exhibit poor out-of-sample generalization. One proposed solution is to build systematicity and domain-specific constraints into the model, echoing the tenets of classical, symbolic cognitive architectures. In this paper, we consider the limitations of this approach by examining human adults' ability to learn an abstract reasoning task from a brief instructional tutorial and explanatory feedback for incorrect responses, demonstrating that human learning dynamics and ability to generalize outside the range of the training examples differ drastically from those of a representative neural network model, and that the model is brittle to changes in features not anticipated by its authors. We present further evidence from human data that the ability to consistently solve the puzzles was associated with education, particularly basic mathematics education, and with the ability to provide a reliably identifiable, valid description of the strategy used. We propose that rapid learning and systematic generalization in humans may depend on a gradual, experience-dependent process of learning-to-learn using instructions and explanations to guide the construction of explicit abstract rules that support generalizable inferences.
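
The extrapolation failure attributed to the neural network model above can be illustrated in miniature; the following is a generic demonstration under assumed settings, not the paper's model or task. A small multilayer perceptron fit to the rule y = 2x on inputs drawn from [0, 1] is accurate in-range but degrades sharply outside the training range, whereas the rule itself generalizes trivially.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=(2000, 1))
y_train = 2.0 * x_train.ravel()                # target rule: y = 2x

# tanh units saturate outside the training range, which makes the
# extrapolation failure easy to see in this toy setting.
net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                   max_iter=2000, random_state=0)
net.fit(x_train, y_train)

for x in [0.5, 1.0, 2.0, 4.0]:                 # last two lie outside [0, 1]
    pred = net.predict([[x]])[0]
    print(f"x={x:3.1f}  target={2 * x:4.1f}  prediction={pred:6.2f}")

In-range predictions track the targets closely, while out-of-range predictions flatten out, echoing the contrast the paper draws with human solvers who, once they grasp an explicit rule, apply it well beyond the training examples.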