Educators stand to benefit from advance predictions of their students' course performance based on learning process data collected in their courses. Indeed, such predictions can help educators not only to identify at-risk students, but also to better tailor their instructional methods. In computing education, at least two measures, the Error Quotient and the Watwin Score, have achieved modest success at predicting student course performance based solely on students' compilation attempts. We hypothesize that one can achieve even greater predictive power by considering students' programming activities more holistically. To that end, we derive the Normalized Programming State Model (NPSM), which characterizes students' programming activity in terms of the dynamically changing syntactic and semantic correctness of their programs. In an empirical study, the NPSM accounted for 41% of the variance in students' programming assignment grades and 36% of the variance in their final course grades. We identify the components of the NPSM that contribute to its explanatory power, and derive a formula capable of predicting students' programming performance in a course with 36% to 67% accuracy, depending on the quantity of available programming process data.
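To make the core idea concrete, the sketch below illustrates, under simplifying assumptions, how a model in the spirit of the NPSM might be computed: each snapshot of a student's program is bucketed by its current syntactic correctness (does it compile?) and semantic correctness (does it pass tests?), the fraction of working time spent in each state is measured, and those fractions feed a linear predictor of the grade. The three-state simplification, the `Snapshot` and `classify` names, and the regression weights are all illustrative assumptions, not the paper's published model or formula.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    t: float            # seconds since the work session started
    compiles: bool      # syntactic correctness of the program at time t
    passes_tests: bool  # semantic correctness (meaningful only if it compiles)

def classify(s: Snapshot) -> str:
    """Bucket a snapshot into one of three syntactic/semantic states."""
    if not s.compiles:
        return "not_compiling"
    return "passing" if s.passes_tests else "compiling_failing"

def state_fractions(snapshots: list[Snapshot]) -> dict[str, float]:
    """Fraction of working time spent in each state, treating each
    snapshot's state as holding until the next snapshot arrives."""
    durations = {"not_compiling": 0.0, "compiling_failing": 0.0, "passing": 0.0}
    for cur, nxt in zip(snapshots, snapshots[1:]):
        durations[classify(cur)] += nxt.t - cur.t
    total = sum(durations.values()) or 1.0
    return {state: d / total for state, d in durations.items()}

# Placeholder regression weights -- hypothetical values for illustration,
# not the coefficients derived in the paper.
WEIGHTS = {"not_compiling": -40.0, "compiling_failing": 10.0, "passing": 30.0}
INTERCEPT = 70.0

def predicted_grade(snapshots: list[Snapshot]) -> float:
    """Linear prediction of a grade from time-in-state fractions."""
    fractions = state_fractions(snapshots)
    return INTERCEPT + sum(WEIGHTS[s] * fractions[s] for s in fractions)

if __name__ == "__main__":
    # A toy session: 5 min not compiling, 10 min compiling but failing,
    # 5 min passing.
    log = [Snapshot(0, False, False), Snapshot(300, True, False),
           Snapshot(900, True, True), Snapshot(1200, True, True)]
    print(round(predicted_grade(log), 1))  # 72.5 under the toy weights
```

The abstract's observation that prediction accuracy depends on the quantity of process data corresponds, in this sketch, to how many snapshots are available when `state_fractions` is computed: early in a course the fractions are noisy estimates, and they stabilize as more activity is logged.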