Eric Anthony Day
Researcher at University of Oklahoma
Publications - 62
Citations - 2485
Eric Anthony Day is an academic researcher from the University of Oklahoma. He has contributed to research on topics including the Dreyfus model of skill acquisition and Task (project management). He has an h-index of 22 and has co-authored 62 publications receiving 2,261 citations. Previous affiliations of Eric Anthony Day include Texas A&M University and Ohio State University.
Papers
Journal Article (DOI)
A meta‐analysis of the criterion‐related validity of assessment center dimensions
TL;DR: The authors used meta-analytic procedures to investigate the criterion-related validity of assessment center dimension ratings. By focusing on dimension-level information, they were able to assess the extent to which specific constructs account for the criterion-related validity of assessment centers.
Journal Article (DOI)
Relationships among team ability composition, team mental models, and team performance.
TL;DR: Although the similarity and accuracy of team mental models were significantly related, accuracy was a stronger predictor of team performance than similarity, and team ability was more strongly related to accuracy than to similarity.
Journal Article (DOI)
Knowledge structures and the acquisition of a complex skill.
TL;DR: Findings indicated that the similarity of trainees' knowledge structures to an expert structure was correlated with skill acquisition and was predictive of skill retention and skill transfer.
Journal Article (DOI)
Social identity and individual productivity within groups
TL;DR: Results supported the general prediction that group productivity is enhanced by factors that increase group categorization and the importance of the group to members' social identities; however, productivity in groups was not influenced by perceptions of the task or by the identifiability of individual performance.
Journal Article (DOI)
Large-scale investigation of the role of trait activation theory for understanding assessment center convergent and discriminant validity.
TL;DR: Overall, convergence among assessment center ratings was stronger between exercises that provided an opportunity to observe behavior related to the same trait, and discrimination among ratings within exercises was generally better for dimensions that were not expressions of the same underlying trait.