Open Access

"Systems of Assessments" for Deeper Learning of Computational Thinking in K-12

TLDR
The need for multiple measures or "systems of assessments" that are complementary, attend to cognitive and non-cognitive aspects of CT, and contribute to a comprehensive picture of student learning is argued.
Abstract
As K-12 educators move to introduce computing curricula, the issue of assessing student learning of computational concepts remains largely unresolved. This is central, however, if the goal is to help students develop deeper, transferable computational thinking (CT) skills that prepare them for success in future computing experiences. This paper argues for the need for multiple measures or "systems of assessments" that are complementary, attend to cognitive and non-cognitive aspects of CT, and contribute to a comprehensive picture of student learning. It also describes the multiple forms of assessment used in a middle school computing curriculum, including formative assessments such as multiple-choice quizzes and both directed and open-ended programming assignments, as well as summative assessments to measure growth and transfer of CT.

Objectives

"Deeper learning" (Pellegrino and Hilton, 2012) is increasingly seen as an imperative for helping students develop robust, transferable knowledge and skills for the 21st century. The phrase acknowledges the cognitive, intrapersonal, and interpersonal dimensions of learning, while also underscoring the need for learners to be able to transfer learning to future contexts. Ideas of deeper learning find resonance in How People Learn (Bransford, Brown & Cocking, 2000), the seminal treatise that explicated the need for learning environments to be assessment-centered in addition to learner-, knowledge-, and community-centered. Computational thinking (CT) is widely recognized as a necessary skill for today's generation of learners (Wing, 2006, 2011), and a consensus has been building around the view that all children must be offered experiences with computer science (CS) in their K-12 years (Grover & Pea, 2013). Without attention to assessment, computing can have little hope of making its way successfully into K-12 school education settings at scale. Using the deeper learning lens for assessing computational learning, this paper reviews the shortcomings of commonly used assessments of CT and argues for employing "systems of assessments" (Conley & Darling-Hammond, 2013) to assess the development of CT's cognitive and non-cognitive aspects. As a case in point, it describes the systems of assessments employed in an introductory computer science course for middle school students and the results of empirical inquiries into its use.

Framing the Research

Despite the many efforts aimed specifically at tackling the issue of CT assessment (e.g., Fields, Searle, Kafai & Min, 2012; Ioannidou, Repenning & Webb, 2010; Meerbaum-Salant et al., 2010; Werner et al., 2012), assessing the learning of computational concepts and constructs in these programming environments remains a challenge. Furthermore, few studies at the K-12 level, if any, have looked at the issue of transfer of CT skills to future learning contexts. New approaches to transfer, such as Preparation for Future Learning (Schwartz, Bransford & Sears, 2005), have shown promise in the context of science and mathematics learning at the secondary level (Dede, 2009; Schwartz & Martin, 2004). Interventions in CS education could similarly benefit from these emergent ideas from the learning sciences.

In the context of introductory programming, Werner et al. (2012) conducted a series of investigations involving game programming in Alice with middle school students. Their "Fairy Assessment" requires students to code parts of a pre-designed program to accomplish specific tasks. This assessment is Alice-based, and grading is subjective and time-consuming, a perennial challenge for assessing student code. Looking at student-created programs alone could also provide an inaccurate sense of students' computational competencies (Brennan & Resnick, 2012). Although time-consuming, "artifact-based interviews" can help provide a more accurate picture of students' understanding of their programming projects (Barron et al., 2002). There is thus also a need for more objective assessment instruments that can illuminate student understanding of specific computing concepts. Cooper created a multiple-choice instrument for measuring learning of Alice programming concepts (Moskal, Lurie & Cooper, 2004), but it has not been used to measure student learning in K-12 education. SRI International's (2013) effort to create systematic frameworks for assessing CT, the Principled Assessment of Computational Thinking (PACT), focuses on assessing CS Concepts, Inquiry Skills, and Communication & Collaboration Skills as key elements of CT practices in the context of the high school ECS curriculum. Outside the US, introductory computing curricula at the elementary and middle school levels in countries such as the UK (Scott, 2013) and Israel (Zur Bargury, 2012) also provide useful ideas for assessment. The Israeli effort uses multiple-choice assessments and attendant rubrics (Zur Bargury, Pârv & Lanzberg, 2013) that make it easier to measure learning in a large-scale setting than open-ended student projects do.

Barron and Darling-Hammond (2008) contend that robust assessments for meaningful learning must include: (1) intellectually ambitious performance assessments that require application of desired concepts and skills in disciplined ways; (2) rubrics that define what constitutes good work; and (3) frequent formative assessments to guide feedback to students and teachers' instructional decisions. Conley and Darling-Hammond (2013) assert that assessments for deeper learning must measure:

1. Higher-order cognitive skills and, more importantly, skills that support transferable learning, and
2. Abilities such as collaboration, complex problem solving, planning, reflection, and communication of these ideas through use of appropriate vocabulary of the domain, in addition to presentation of artifacts to a broader audience.

These assessments are in addition to those that measure key subject matter concepts. This assertion implies the need for multiple measures or "systems of assessments" that are complementary, encourage and reflect deeper learning, and contribute to a comprehensive picture of student learning. Few, if any, of the prior efforts described above or of current computing curricula (such as those promoted by Code.org or Khan Academy) include such comprehensive assessments, if they include assessments at all.



Citations

Computational Thinking 計算論的思考

TL;DR: In this article, computational thinking is presented as a universally applicable attitude and skill set that everyone, not just computer scientists, would be eager to learn and use.
Journal Article

Teaching Computational Thinking Using Agile Software Engineering Methods: A Framework for Middle Schools

TL;DR: The results show that Agile software engineering methods are effective at teaching CT in middle schools, after the addition of some tasks to allow students to explore, project, and experience the potential product before using the software tools at hand.
Book Chapter

Combining Assessment Tools for a Comprehensive Evaluation of Computational Thinking Interventions

TL;DR: A comprehensive model is proposed to assess CT at every cognitive level of Bloom’s taxonomy and throughout the various stages of typical educational interventions, which may lead scholars and policy-makers to design accurate evaluations of CT according to their inquiry goals.
Journal Article

Extending the nomological network of computational thinking with non-cognitive factors

TL;DR: The study examines the correlations between CT, self-efficacy, and several dimensions from the ‘Big Five’ model of human personality, which corroborate the existence of a non-cognitive side of CT that should be taken into account by educational policies and interventions aimed at fostering CT.
Journal Article

Computational Thinking and Literacy

TL;DR: In this article, a three-dimensional framework for exploring the relationship between computational thinking and literacy is proposed, in which the authors situate computational thinking in the literature as a literacy, outline mechanisms by which students' existing literacy skills can be leveraged to foster computational thinking, and elaborate ways in which computational thinking skills facilitate literacy development.
References
Book

How people learn: Brain, mind, experience, and school.

TL;DR: New developments in the science of learning are surveyed in this book, covering mind and brain; how experts differ from novices; how children learn; learning and transfer; the learning environment; curriculum, instruction, and community; and effective teaching.
Journal Article

Computational thinking

TL;DR: In this paper, computational thinking is presented as a universally applicable attitude and skill set that everyone, not just computer scientists, would be eager to learn and use.
Journal Article

"Kappan Classic": Inside the Black Box--Raising Standards through Classroom Assessment.

Paul Black and Dylan Wiliam
01 Sep 2010
TL;DR: In this paper, the authors argue that formative assessment is an essential component of classroom work and that attention to it can raise student achievement.
Book

Inside the Black Box: Raising Standards Through Classroom Assessment

TL;DR: In this book, the authors argue that formative assessment is an essential component of classroom work and that attention to it can raise student achievement.