K-12 CS Teacher Takeaways from SIGCSE 2020 – Part 3: Assessment
Posted by Bryan Twarek on May 28, 2020
SIGCSE provides an important forum for computer science education research, and it is unfortunate that this year’s conference was canceled. In this four-part series, I’m excited to share findings and practical takeaways relevant to K-12 CS teachers, to help ensure practitioners benefit from this great work. This third part focuses on K-12 assessment.
Assessment in K-12 computer science is a nascent area of research. Fortunately, several significant studies were featured in the SIGCSE program.
Assessing Middle School Student Projects for Program Evaluation
Yvonne Kao, Irene Nolan, and Andrew Rothman presented their scoring process and rubric for assessing creative student projects as a means of program-level evaluation of student learning. Their rubrics cover a holistic review (program correctness, usability, project scale and complexity, and creativity), programming fundamentals, and programming style (use of comments and code readability).
See an example rubric for one concept (variables) and a screenshot of a student project used to evaluate use of variables:
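As a rough illustration of how dimension-level rubric scores might be recorded and rolled up for program-level evaluation, here is a minimal sketch; the dimension names, point scale, and aggregation are my assumptions, not the authors’ instrument.

```python
# Illustrative sketch only -- not the authors' rubric or scoring process.
# Records rubric scores per project and averages each dimension across a
# program, the kind of roll-up that program-level evaluation relies on.
from statistics import mean

# Assumed dimensions and 0-4 point scales, for illustration only.
scored_projects = {
    "project_01": {"holistic review": 3, "programming fundamentals": 4, "programming style": 2},
    "project_02": {"holistic review": 2, "programming fundamentals": 3, "programming style": 3},
}

def program_averages(projects):
    """Average each rubric dimension across all scored projects."""
    dimensions = {dim for scores in projects.values() for dim in scores}
    return {dim: mean(scores[dim] for scores in projects.values())
            for dim in dimensions}

print(program_averages(scored_projects))
# e.g. {'holistic review': 2.5, 'programming fundamentals': 3.5, 'programming style': 2.5}
```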
CodeMaster Automated Evaluation of Student Projects
Nathalia da Cruz Alves, Igor Solecki, and their colleagues developed the CodeMaster rubric to automate the analysis of App Inventor and Snap! programs that are created by students as part of complex, open-ended learning activities. Their CodeMaster tool automates the performance-based analysis of (1) algorithms and programming concepts and (2) user interface design concepts (i.e., usability and aesthetics of the applications). Their team evaluated the reliability and validity of the rubric based on the tests using thousands of projects in the App Inventor Gallery.
See an excerpt from their rubric used to evaluate use of CS concepts:
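To make the idea of automated, rubric-based analysis concrete, here is a minimal sketch that counts a few programming concepts in a simplified project representation and maps each count to a 0-3 level. The block names, concept categories, and thresholds are assumptions for illustration; the actual CodeMaster rubric and its App Inventor/Snap! analysis are more detailed.

```python
# Illustrative sketch only -- not the CodeMaster tool or its actual rubric.
# Hypothetical export: a flat list of block types used in a student project.
project_blocks = [
    "when_clicked", "set_variable", "repeat", "if_else",
    "set_variable", "change_variable", "broadcast", "if_else",
]

# Assumed concept -> block-type mapping (illustrative only).
CONCEPTS = {
    "variables":    {"set_variable", "change_variable"},
    "loops":        {"repeat", "forever"},
    "conditionals": {"if", "if_else"},
    "events":       {"when_clicked", "broadcast"},
}

def rubric_level(count: int) -> int:
    """Map a raw concept count to a 0-3 performance level (thresholds assumed)."""
    if count == 0:
        return 0
    if count == 1:
        return 1
    if count <= 3:
        return 2
    return 3

def score_project(blocks):
    """Score one project on each concept dimension."""
    return {concept: rubric_level(sum(block in kinds for block in blocks))
            for concept, kinds in CONCEPTS.items()}

print(score_project(project_blocks))
# e.g. {'variables': 2, 'loops': 1, 'conditionals': 2, 'events': 2}
```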
Personalized Assessment Worksheets for Scratch (3-6)
Jean Salac, Diana Franklin, and their colleagues created the Personalized Assessment Worksheets for Scratch (PAWS) tool to provide a scalable, automated technique for assessing students’ comprehension of their own code. Upper elementary students all receive the same assessment items to measure their understanding of core CS concepts, but when possible, generic items are personalized by integrating segments of students’ code from their own Scratch projects. See sample items:
The authors found that:
When asked multiple-choice questions about their scripts or partial scripts in which the original meaning is retained, students answer similarly to or better than students receiving generic questions.
When explaining their code, students are more likely to answer the question, but they often do not describe the individual blocks as thoroughly as students receiving generic questions.
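To illustrate the personalization idea behind PAWS (this is a hypothetical sketch, not the authors’ tool), the snippet below fills a generic question template with a script drawn from a student’s own project when a suitable one is found, and falls back to a generic script otherwise.

```python
# Hypothetical sketch of item personalization -- not the actual PAWS tool.
GENERIC_SCRIPT = "when green flag clicked\nrepeat 4\n  move 10 steps"

ITEM_TEMPLATE = (
    "Look at this script:\n{script}\n"
    "What happens when this script runs?"
)

def find_loop_script(student_scripts):
    """Return the first of the student's scripts that uses a repeat block, if any."""
    for script in student_scripts:
        if "repeat" in script:
            return script
    return None

def build_item(student_scripts):
    """Personalize the item with the student's own code when possible."""
    script = find_loop_script(student_scripts) or GENERIC_SCRIPT
    return ITEM_TEMPLATE.format(script=script)

# A student whose project contains a repeat loop sees an item built from
# their own code; otherwise they see the generic version.
my_scripts = ["when green flag clicked\nrepeat 3\n  move 5 steps\n  turn 90 degrees"]
print(build_item(my_scripts))
```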
Computational Thinking Assessment for Upper Primary Students
Satabdi Basu and her colleagues presented their computational thinking assessment for 4th to 6th grade students in Hong Kong. Their FKSAs (focal knowledge, skills, and abilities) and associated assessment items are based on the framework described by Brennan and Resnick (2012): reusing and remixing; algorithmic thinking; abstraction and modularization; and testing and debugging. Here is an example item for algorithmic thinking:
Associated FKSAs:
Ability to summarize the behavior of an algorithm when given a specific set of inputs.
Ability to compare the trade-offs between multiple approaches/techniques for solving a problem, based on given evaluation criteria.
The first item asks students which of the three methods will take the robot from the Start to the Finish square. The subsequent items increase the number of constraints. The second item asks students to select the method that represents the fastest route from Start to Finish, while the third item has students selecting the method that incurs the least cost from Start to Finish. The fourth item combines time and cost constraints and asks students to select an option that will meet all the given criteria.
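As a hedged sketch of the reasoning these items target (the methods and numbers below are made up, not the authors’ item), one can tally the total time and cost of each candidate method and then filter by whichever constraints an item asks about:

```python
# Illustrative sketch with made-up data -- not the assessment item itself.
# Each candidate method is a list of moves, each with a (time, cost) pair.
methods = {
    "Method 1": [(2, 1), (2, 1), (3, 2)],
    "Method 2": [(1, 3), (1, 3), (1, 3)],
    "Method 3": [(4, 1), (2, 1)],
}

def totals(moves):
    """Return (total_time, total_cost) for one method."""
    return sum(t for t, _ in moves), sum(c for _, c in moves)

for name, moves in methods.items():
    time, cost = totals(moves)
    print(f"{name}: time={time}, cost={cost}")

fastest = min(methods, key=lambda m: totals(methods[m])[0])
cheapest = min(methods, key=lambda m: totals(methods[m])[1])
print("Fastest:", fastest, "| Cheapest:", cheapest)
```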
Computational Thinking Concepts and Skills Test (CTCAST)
Markeya Peteranetz, Patrick Morrow, and Leen-Kiat Soh developed and validated the Computational Thinking Concepts and Skills Test (CTCAST) to measure core computational thinking knowledge among undergraduate students. Assessed concepts include problem decomposition, pattern recognition, abstraction, generalization, algorithm design, and evaluation. They compared scores to another multiple-choice assessment they developed, the Nebraska Assessment of Computing Knowledge (NACK), which contains both conceptual and application questions. While the CTCAST has only been administered to undergraduate students, the authors believe it may also be useful for high school students. See two example items:
Abstraction
Assume that the U.S. Census Bureau wants to compute the average household income values for different household sizes and income categories. Which of the following data is relevant in their computation?
I. Yearly household income for each household
II. The number of members for each household
III. Gender of each household member
IV. Job/employment status of each household member
V. Age of each household member
A. I & II only
B. I, II, & IV only
C. I, II, & V only
D. All data
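As a hedged illustration of the abstraction this item targets (the records below are made up, and one plausible reading is that only the income and household-size fields matter), a computation like the following ignores the other fields entirely:

```python
# Illustrative sketch with made-up records -- not part of the test item.
# Only yearly income (I) and household size (II) are used; the other
# fields are carried along but never read.
from collections import defaultdict

households = [
    {"income": 52000, "size": 2, "genders": ["F", "M"], "ages": [34, 36]},
    {"income": 78000, "size": 4, "genders": ["F", "M", "F", "M"], "ages": [40, 41, 9, 7]},
    {"income": 61000, "size": 2, "genders": ["M", "M"], "ages": [29, 31]},
]

# Abstraction: keep only the relevant fields when grouping and averaging.
incomes_by_size = defaultdict(list)
for household in households:
    incomes_by_size[household["size"]].append(household["income"])

averages = {size: sum(incomes) / len(incomes)
            for size, incomes in incomes_by_size.items()}
print(averages)  # {2: 56500.0, 4: 78000.0}
```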
Problem Decomposition
Consider the following modules and their required inputs, outputs, and time to execute:
Module | Inputs | Outputs | Time Required (s)
A | p | q | 5
B | s, r | t | 3
C | q, t | u | 7
D | p, u | v | 2
Given the inputs p, s, r, and the above four modules, what is the minimal number of seconds needed to generate the output v?
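A short sketch may help make the dependency reasoning concrete (this is my illustration, not part of the test): B produces t from s and r, A produces q from p, C needs both q and t to produce u, and D needs p and u to produce v. The snippet below computes the total time if the modules run one after another and, under the assumption that independent modules may overlap, the critical-path time; the item itself does not state which interpretation is intended.

```python
# Illustration of the dependency reasoning -- not part of the CTCAST.
# Each module: (inputs, output, seconds), taken from the table above.
modules = {
    "A": ({"p"}, "q", 5),
    "B": ({"s", "r"}, "t", 3),
    "C": ({"q", "t"}, "u", 7),
    "D": ({"p", "u"}, "v", 2),
}

def finish_time(output, available):
    """Earliest time `output` is ready if independent modules run in parallel."""
    if output in available:
        return 0
    inputs, _, seconds = next(m for m in modules.values() if m[1] == output)
    return max(finish_time(i, available) for i in inputs) + seconds

given = {"p", "s", "r"}
sequential = sum(seconds for _, _, seconds in modules.values())
print("One module at a time:", sequential, "seconds")                          # 17
print("Independent modules overlapping:", finish_time("v", given), "seconds")  # 14
```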
Assessing Creativity in Computing Classrooms
While not presented at SIGCSE, Karen Brennan, Paulina Haduong, and Emily Veno from the Creative Computing Lab just published a new report called Assessing Creativity in Computing Classrooms. They interviewed 80 K-12 computing teachers to answer a guiding question: How do K-12 computing teachers assess creative programming work? Across these interviews, they found five key principles that guide teachers’ assessment:
Foster a classroom culture that values assessment.
See student process as well as product.
Understand what is creative for the student.
Support students by incorporating feedback from multiple perspectives (e.g., peers, families, other authentic audiences).
Scaffold opportunities for students to develop judgment of their own work.
Their report includes four detailed case studies and a selection of 50 assessments. For example, a high school teacher gathers multiple perspectives on projects, including the students’ own self-reflections, peer assessment, and evaluation from visiting community members:
This is only a small glimpse of the content prepared for SIGCSE 2020. See Parts 1 and 2 synthesizing a variety of research takeaways and instructional strategies. Part 4 on curricula, tools, and platforms will be published tomorrow. If you want to learn more, view SIGCSE 2020 Online and the entire Proceedings in the ACM Digital Library, which is currently free and open to everyone through June 30, 2020.
Please let us know what you find useful and what we’ve missed by writing to @csteachersorg and @btwarek.