Wednesday, February 26, 2014

I gave my students the exam questions ahead of time, and their grades dropped (kind of)

One of my professors in grad school used to say that he could put a photocopy of the exam outside his door a week before it was given, and he would still get a normal distribution of grades. But I have fond memories of an undergraduate psychology class where the professor gave us a bank of questions (without answers) as our "study guide", and then chose a number of them to be our exam. I never got below a 99% on any exam in that class.

This semester I experimented with my Introduction to Biological Anthropology class. Rather than giving the students a traditional study guide, as I have for the last six years, I gave them my question database: the pool from which I had drawn exam questions in previous years. They received around 100 questions and were told that I would randomly pick 20 to be their exam. Questions included multiple choice, "activity" questions (for example, given a scenario, how would you use Hardy-Weinberg to predict the frequencies of genotypes in the next generation, if no evolution occurs), fill-in-the-blank, and essay questions.
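For readers unfamiliar with the Hardy-Weinberg activity question mentioned above, here is a minimal sketch of the calculation students would do: given an allele frequency p (the 0.7 below is a hypothetical value, not from any actual exam question), the expected genotype frequencies in the next generation under no evolution are p², 2pq, and q².

```python
def hardy_weinberg(p):
    """Expected (AA, Aa, aa) genotype frequencies for allele frequency p,
    assuming Hardy-Weinberg equilibrium (no selection, mutation, migration,
    or drift, and random mating)."""
    q = 1.0 - p  # frequency of the other allele
    return p * p, 2 * p * q, q * q

# Hypothetical example: if allele A has frequency 0.7,
# the expected genotype frequencies are 0.49 AA, 0.42 Aa, 0.09 aa.
aa_freq, het_freq, rec_freq = hardy_weinberg(0.7)
print(aa_freq, het_freq, rec_freq)
```

The three frequencies always sum to 1, which is a quick sanity check students can use on their own answers.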

I chose a mixture of questions for the exam, including one essay. Compared to previous years, I noted three major changes in grade distributions, some of which I would have predicted, but others are...odd.

  1. More students got A's, and particularly high A's. Students very rarely got above 95% in previous years, but this year two students got 100%. The percentage of students who earned an A doubled, from around 10% in previous years to over 20%. This makes sense to me.
  2. But...far more students failed the exam. I usually have around 10% of the class fail; this year it was nearly 20%! I usually have a bimodal distribution of grades, but this year the bimodality was greatly exaggerated. Few students earned C's. They either did very well (over half the class got an A or B), or they did very poorly (38% of students earned below a 70%, with the majority of them failing). I find this baffling. My TA suggested that some students decided their time would be better spent reading over their notes and "winging it" on the test than actually working through all 100 questions. In other words, for students who didn't want to put much time into studying (for whatever reason, legit or not), the question bank was a turn-off and they studied less.
  3. But...based on their essay answers, the students actually understood some core concepts better than in previous years. I don't know that this reflects the test format as much as the active learning activities and opportunities for feedback that I added to the course this year. Regardless, the essay answers showed that almost all students had mastered the basics, and more students had a deep understanding of evolution and race than in previous years. (The students were given essay prompts on those two topics and asked to pick one.) If the essay grades hadn't been so high, the bimodal distribution would have been even more exaggerated. On the basis of non-essay questions alone, around half the class would still have received an A or B, but there would have been no C grades at all, and a full third of the class would have failed the exam.
I decided not to curve the exam. How do you curve an exam when the number of A's doubled? I'm hoping students will step up their studying for the next exam, but if I have the same distribution, I may go back to a standard study guide.
