Wednesday, February 2, 2011

Oral exams, writing rubrics, and SBG

This semester I've switched my class over to Standards Based Grading.  I was strongly influenced by my growing PLN on Twitter, which is populated by a lot of people who use it and sing its praises.  With only four days to go before the semester, I decided to make the switch on my syllabus, and today was my first class.  I can tell it's going to be quite a ride, but I'm still pretty excited about it.

What I want to talk about in this post are the connections I see between standards based grading (hashtag #sbar, for standards based assessment and reporting, on Twitter) and some of the assessment techniques I've used in the past, specifically writing rubrics and oral exams.

Last semester I taught a writing-intensive first-year seminar.  The first-year seminar is a cornerstone of the undergraduate curriculum at Hamline University, but they're not all writing intensive.  The topic of the seminar was "Hamline Mythbusters," but that's a whole different post.  Passing the class did not automatically get a student out of our first-year writing class; essentially, I was able to post a grade and check off that requirement separately for each student.  For each writing assignment, I used an in-house writing rubric that I found very useful and flexible.  I told the students that to get the "Expository Writing" requirement checked off, they had to turn in one of two major writing assignments and receive a "meets" or "exceeds expectations" on the rubric in all seven categories.  I told them they could turn in as many drafts as they'd like.  One student turned in 13 drafts of one paper until she finally had "exceeds" in every category.

I make the connection with SBG because the students knew they could always be reassessed.  Essentially, I had seven writing standards that were mostly separate from the rest of the class.  One thing that surprised me was the relatively low number of complaints I got about their scores on the rubric.  I assume this was because 1) it was a very well-thought-out rubric with good descriptors for the levels, and 2) they knew they could always work on a draft some more and turn it back in.  I think it also helped that I gave them a ton of feedback on every draft.

Oral exams
In my upper-division courses (read: small class size) I use oral exams all the time.  What I really like about them is that I can pick the problems I think are crucial to the class and have the students spend roughly a week honing them.  They come in for the exam (all together, because there are points for audience questions) and randomly pick one of the assigned problems.  They work it on the board and lose points any time I have to unstick them.  They sometimes heave a sigh of relief when they find out which one they have to do, but what I like is that they spend a week studying all of them.  SBG feels like the oral exam cycle all semester long!  They continually work on the standards, getting reassessed when they're ready, and I know they're working on exactly the things that I think are the most important.

I want to say a special thanks to Frank Noschese for helping me out over the last week.  He's a high school physics teacher who is passionate about teaching and learning and routinely gives me things to think about for my own work.


  1. Regarding the oral exams: Can you be more specific about the format? Thanks!

  2. Sure, happy to.

    I pick ~6 problems/derivations that can be done in 10 minutes and tell them the choices about a week in advance. Typically these are homework problems they've done or derivations from the book.

    They then study them and practice using all the black/whiteboards in the department. I'm happy to give pointers throughout that week.

    At the final they come in and I randomly select someone who comes up and randomly selects one of the problems. They then get 10 minutes to "perform it" on the board. If they get stuck on something I ask if they need help. If they say yes they lose 5% of their score.

    When they're done, the other students ask follow-up questions like "What would happen if this was done on the moon instead of the earth?" or "How do you know that particular step is true?"  The student at the front tries to answer, with the same 5% rule. If they don't get it right, the questioner has to answer. Each "good" question is worth 5% of the questioner's score.

    Then the next student comes up, and the process repeats until every student has gone.

  3. This is neat. I can see doing this with my AP students. Thanks for sharing!

  4. Thanks for the blog - I've used "track changes" in Word but not the audio feedback. One of my daughters just graduated from Macalester and another is at Augsburg right now :)