Journal ArticleDOI
Accuracy of teachers' judgments of students' cognitive abilities: A meta-analysis
TL;DR: This meta-analysis examines the accuracy of teachers' judgments of students' cognitive abilities, including intelligence, giftedness, other cognitive abilities, and creativity, and reports a mean judgment accuracy of 0.43.
About: This article was published in Educational Research Review on 2016-11-01 and has received 79 citations to date. The article focuses on the topics: Cognition & Academic achievement.
Citations
Journal ArticleDOI
Effective differentiation practices: A systematic review and meta-analysis of studies on the cognitive effects of differentiation practices in primary education
TL;DR: The authors found that using computerized systems as a differentiation tool and using differentiation as part of a broader program or reform had small to moderate positive effects on students' performance in primary education.
Journal ArticleDOI
The application of meta-analytic (multi-level) models with multiple random effects: A systematic review
Belén Fernández-Castilla, Laleh Jamshidi, Lies Declercq, S. Natasha Beretvas, Patrick Onghena, Wim Van Den Noortgate, et al.
TL;DR: Results showed that four- or five-level and cross-classified random-effects models are rarely used, even though they might better account for the meta-analytic data structure of the analyzed datasets, and that the simulation studies done on multilevel meta-analysis with multiple random factors could have used more realistic simulation conditions.
Journal ArticleDOI
A Review on the Accuracy of Teacher Judgments
Detlef Urhahne, Lisette Wijnia, et al.
TL;DR: In this article, the authors synthesize the methodological, empirical, theoretical, and practical knowledge from 40 years of research on the accuracy of teacher judgments and differentiate the term from other related constructs.
Journal ArticleDOI
Are Teachers’ Ratings of Students’ Creativity Related to Students’ Divergent Thinking? A Meta-Analysis
Jacek Gralewski, Maciej Karwowski, et al.
TL;DR: In this article, the authors meta-analytically examined the relationship between teachers' ratings of students' creativity and students' creative abilities as measured with divergent thinking tests, finding a statistically significant yet weak-to-moderate (r = .23) relationship between students' divergent thinking and teachers' ratings of their creativity.
Journal ArticleDOI
What Do Teachers Think About Their Students' Inclusion? Consistency of Students' Self-Reports and Teacher Ratings.
TL;DR: A correlated trait-correlated method minus one model provided evidence that the method-specificity of teacher ratings was larger than the consistency between students' self-reports and teacher ratings.
References
Book
Statistical Power Analysis for the Behavioral Sciences
TL;DR: This book introduces the concepts of statistical power analysis and works through power calculations for chi-square tests of goodness of fit and contingency tables, the t-test for means, and the sign test.
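The power calculations Cohen tabulates can be approximated in a few lines. A sketch for a two-sided two-sample comparison under the normal approximation (the exact t-based power is slightly lower for small samples):

```python
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test.

    d is Cohen's d. Power is roughly Phi(d * sqrt(n/2) - z_{1-alpha/2}),
    neglecting the negligible far-tail rejection probability.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(d * (n_per_group / 2) ** 0.5 - z_crit)

# Cohen's "medium" effect d = 0.5 with 64 per group: ~0.81 under this approximation
print(round(two_sample_power(0.5, 64), 2))  # 0.81
```

Larger effects or samples raise power monotonically, which is the core trade-off the book's tables make explicit.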
Journal ArticleDOI
The Power of Feedback
John Hattie, Helen Timperley
TL;DR: This paper provided a conceptual analysis of feedback and reviewed the evidence related to its impact on learning and achievement, and suggested ways in which feedback can be used to enhance its effectiveness in classrooms.
Book
Mindset: The New Psychology of Success
TL;DR: Mindset is a simple idea, discovered by world-renowned Stanford University psychologist Carol Dweck in decades of research on achievement and success, that has been shown to increase motivation and productivity in the worlds of business, education, and sports.
Book
Human Cognitive Abilities: A Survey of Factor-Analytic Studies
TL;DR: This book surveys correlational and factor-analytic research on cognitive abilities, with a focus on the three-stratum theory of cognitive abilities and higher-order factors of cognitive ability.
Calculating, Interpreting, And Reporting Cronbach’s Alpha Reliability Coefficient For Likert-Type Scales
TL;DR: This paper showed that single-item questions pertaining to a construct are not reliable and should not be used in drawing conclusions, and compared the reliability of a summated multi-item scale with that of a single-item question.
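Cronbach's alpha, the reliability coefficient this paper discusses, is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical Likert-scale data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.

    `items` is a list of k equal-length lists, one per item, each holding
    one score per respondent. Uses sample variances throughout.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # summated scale score
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 3-item Likert scale scored by five respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
]
print(round(cronbach_alpha(items), 2))  # 0.92
```

The formula is undefined for k = 1, which mirrors the paper's point: a single item gives no internal-consistency estimate at all.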