Automatically assessing the quality of student-written tests
Zalia Shams
Publication date:
To be found in: ICER 2013 (pages 189 to 190), 2013
Summaries
Software testing is frequently being added to programming courses at many schools, but current techniques for assessing student-written software tests are imperfect. Code coverage measures are typically used in practice, but that approach does not assess how much of the expected behavior is checked by the tests and sometimes overestimates their true quality. Running one student's tests against other students' code (known as all-pairs testing) and mutation analysis are better indicators of test quality, but both pose a number of practical obstacles to classroom use. This proposal describes the technical obstacles to using these two approaches in automated grading. We propose novel and practical solutions for applying all-pairs testing and mutation analysis to student-written tests, especially in an automated grading context. Experimental results from applying our techniques to eight CS1 and CS2 assignments submitted by 147 students show the feasibility of our solutions. Finally, we discuss our plan to combine the approaches to evaluate tests for assignments with large amounts of design freedom, and outline our evaluation plan.
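To make the two assessment ideas named in the abstract concrete, here is a minimal sketch. It is not taken from the paper or its tooling; the toy assignment, the test data, and identifiers such as student_solutions and run_suite are all invented for illustration.

# Illustrative sketch only (not the paper's tooling): a toy "absolute value"
# assignment, two hypothetical student solutions and test suites, and the two
# scoring ideas named in the abstract.

def run_suite(tests, impl):
    # A suite passes against an implementation if every (input, expected) case passes.
    return all(impl(x) == expected for x, expected in tests)

# Hypothetical student implementations (Bob's is buggy) ...
student_solutions = {
    "alice": lambda x: x if x >= 0 else -x,  # correct
    "bob":   lambda x: x,                    # bug: leaves negative inputs unchanged
}

# ... and their test suites as (input, expected) pairs (Bob's never checks negatives).
student_tests = {
    "alice": [(5, 5), (-3, 3)],
    "bob":   [(4, 4)],
}

# All-pairs testing: run each student's tests against every student's solution.
# A stronger suite rejects more of the buggy implementations.
for tester, tests in student_tests.items():
    rejected = [who for who, impl in student_solutions.items()
                if not run_suite(tests, impl)]
    print(f"{tester}'s tests reject: {rejected or 'no solution'}")

# Mutation analysis: apply small changes (mutants) to a correct reference solution
# and count how many mutants each suite "kills" (at least one test fails on them).
mutants = [
    lambda x: x,    # mutant: negation branch removed
    lambda x: -x,   # mutant: always negates
]
for tester, tests in student_tests.items():
    killed = sum(1 for mutant in mutants if not run_suite(tests, mutant))
    print(f"{tester}: mutation score {killed}/{len(mutants)}")

In this toy run the thorough suite scores 2/2 mutants killed and rejects the buggy solution, while the weak suite scores 1/2 and rejects nothing, which is the kind of distinction the abstract says plain code coverage can miss.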
This conference paper mentions ...
Find elsewhere
Full text of this document
Automatically assessing the quality of student-written tests: full text at the ACM Digital Library (103 kByte; retrieved 2020-11-28)
Search elsewhere
Beat and this conference paper
Beat added this conference paper to the Biblionetz during his time at the Institut für Medien und Schule (IMS). Beat does not own a physical copy, but he does have a digital one. A digital version is available on the internet (see above). Judging from the few entries in the Biblionetz, he does not appear to have actually read it. So far there are also only a few objects in the Biblionetz that cite this work.