Evaluating a new exam question
Paul Denny, Andrew Luxton-Reilly, Beth Simon
Published in: ICER 2008 (pp. 113-124), 2008
Common exam practice centres around two question types: code tracing (reading) and code writing. It is commonly believed that code tracing is easier than code writing, but it seems obvious that different skills are needed for each. These problems also differ in their value on an exam. Pedagogically, code tracing on paper is an authentic task whereas code writing on paper is less so. Yet, few instructors are willing to forgo the code writing question on an exam. Is there another way, easier to grade, that captures the "problem solving through code creation process" we wish to examine? In this paper we propose Parson's puzzle-style problems for this purpose. We explore their potential both qualitatively, through interviews, and quantitatively through a set of CS1 exams. We find notable correlation between Parsons scores and code writing scores. We find low correlation between code writing and tracing and between Parsons and tracing. We also make the case that marks from a Parsons problem make clear what students don't know (specifically, in both syntax and logic) much less ambiguously than marks from a code writing problem. We make recommendations on the design of Parsons problems for the exam setting, discuss their potential uses and urge further investigations of Parsons problems for assessment of CS1 students.
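To illustrate the question type the abstract refers to: a Parsons puzzle presents students with the shuffled lines of a working program, which they must arrange into the correct order. A minimal sketch in Python (the example function, line set, and checker are my own illustration, not taken from the paper):

```python
# Hypothetical Parsons-style puzzle: students receive these lines shuffled
# (indentation preserved) and must arrange them into a working function.
shuffled_lines = [
    "        total += n",
    "def sum_list(numbers):",
    "    return total",
    "    for n in numbers:",
    "    total = 0",
]

# One correct arrangement, given as indices into shuffled_lines.
solution_order = [1, 4, 3, 0, 2]

def check_solution(order):
    """Assemble the lines in the given order and test the resulting function."""
    source = "\n".join(shuffled_lines[i] for i in order)
    namespace = {}
    try:
        exec(source, namespace)                     # compile the assembled code
        return namespace["sum_list"]([1, 2, 3]) == 6
    except Exception:                               # syntax/indentation errors
        return False                                # mean a wrong arrangement

print(check_solution(solution_order))   # True
print(check_solution([0, 1, 2, 3, 4]))  # False: starts mid-loop
```

Because each misplaced fragment is visible individually, grading such a question can distinguish logic errors (wrong ordering) from syntax knowledge, which is the advantage over free-form code writing that the abstract argues for.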
Full text of this document
Evaluating a new exam question: full text at the ACM Digital Library (369 kByte; 2017-06-28)