I've been teaching with Standards Based Grading (SBG, or #sbar on Twitter) in my advanced-level Theoretical Mechanics course (sometimes called Classical Mechanics) for a few weeks now, and I wanted to post some early impressions of this pedagogy experiment.
A quick SBG primer first: I determine a set of standards that I want students to know. For this class you can see them the same way my students do. Each standard is assessed multiple times for each student, with the last assessment acting as the current score in the grade book. Students can initiate reassessment at (nearly) any time, and I can also initiate (re)assessments. That's the nuts and bolts. The philosophy is much broader, and I'm greatly indebted to a passionate online community for helping me wrap my brain around it. A recent collection of blog posts called the SBG Gala #5 is a great place to get a flavor of that community.
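In case it helps to see that grade-book rule spelled out, here's a minimal sketch in Python (purely illustrative, not my actual grade book; the student, standards, and scores are made up):

from collections import defaultdict

# assessments[student][standard] is a chronological list of rubric scores
assessments = defaultdict(lambda: defaultdict(list))

def record(student, standard, score):
    # Calls are assumed to arrive in chronological order.
    assessments[student][standard].append(score)

def current_scores(student):
    # The current score for each standard is simply the most recent assessment.
    return {standard: scores[-1] for standard, scores in assessments[student].items()}

record("alice", "Newtonian mechanics", 2)
record("alice", "Newtonian mechanics", 4)      # a reassessment replaces the earlier 2
record("alice", "Calculus of variations", 3)
print(current_scores("alice"))
# -> {'Newtonian mechanics': 4, 'Calculus of variations': 3}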
My implementation: I put my content online ahead of time (sometimes called a flipped classroom or teaching naked) and use class time for problem-solving practice and occasionally assessment. I've been teaching "flipped" like that for a while, and you can check out a lot of my older posts to see how I do it. For assessment I've decided to focus on students' voices, so I've required every assessment to include some version of their voice. That means students are doing pencasts (using LiveScribe smartpens), screencasts (typically using Jing on their own computers), and in-person office visits. I'd also like to see online office visits and oral "exams" in class. I've changed my grade book so that students can see their progress as a Google Annotated Timeline Flash plot of every assessment of every standard on one graph (note: it seems not to work in Safari for some reason).
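If you're curious what that progress plot looks like, here's a rough sketch of the idea using matplotlib instead of the Google tool I actually use (the data are invented; the point is just one student's full assessment history, every standard on a single time axis):

from datetime import date
import matplotlib.pyplot as plt

# invented example data: (date, standard, rubric score) for one student
history = [
    (date(2011, 1, 31), "Newtonian mechanics", 2),
    (date(2011, 2, 7), "Newtonian mechanics", 4),
    (date(2011, 2, 3), "Calculus of variations", 1),
    (date(2011, 2, 14), "Calculus of variations", 3),
]

# one line per standard, points in chronological order
for standard in sorted({s for _, s, _ in history}):
    points = sorted((d, score) for d, s, score in history if s == standard)
    plt.plot([d for d, _ in points], [score for _, score in points], marker="o", label=standard)

plt.ylabel("rubric score")
plt.legend()
plt.show()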
How it's going: So far I've had one student (of only 9 total, which is why I was willing to do this experiment on such short notice) who has really embraced the system. He's turned in at least one assessment for almost every standard that's available (bold on the standards list), and many standards have seen multiple reassessments. With one exception, each of his reassessments has done as well as or better than the last on the rubric, and he has told me privately that he really likes the system. A few middle-of-the-roaders have turned in a few assessments and are promising more, and a few not-quite-sure-what-to-make-of-it-ers haven't really turned in anything yet. I required them all to turn in one screencast early on to make sure they could get Jing and screencast.com to work, so the technical details have mostly been solved. With the flipped classroom there is time to discuss how it's going with them, and it seems they're still giving it the benefit of the doubt. When I asked if anyone had successful strategies for approaching the class, a couple talked about when they watch my screencasts and when they do theirs. When I asked about not-so-successful strategies, mostly I just heard the usual "I haven't gotten around to it yet" type comments.
Changes we had to make already: On my syllabus I had said that we'd also do group quizzes, where the answer to a question had to be on the board after 10 or 15 minutes and then I'd ask follow-up questions of random students. Then we'd all work together to determine the rubric assessment of the relevant standards for each individual. I've always loved those sorts of group quizzes in the past and was happy to include them in this system. We did one on the second day, and it was fun to see the students interact with each other while working to solve a problem. At the end of the day, though, they didn't feel that they should each be assessed individually, because they didn't get to show their own understanding of every step; rather, they were just asked a few questions about a random step. So we've gone away from those for the moment.
I've also had to change my approach to my screencasts. I'm trying much harder now to make sure that everything I do is important for them to learn and helps them do well on standards assessments. A recent example is some tangential info that I would have covered in the past, only to say come test time that it wouldn't be on the test. I got away with that in the past because I typically wouldn't say exactly what was on a particular test until close to test day. Now the standards ARE the test, and the students know all about them all the time. That doesn't mean I skip all tangential material, but I think about whether it's a fair follow-up question for the in-person assessments. If it is, I leave it in.
What I like: I love hearing their voices! I started having students turn in screencasts/pencasts last semester in general physics, and I love how you can judge their confidence and understanding when you hear them talk, especially when they're simultaneously pointing out the relevant equation or part of a figure. A lot of my standards require them to do a calculation in Mathematica, and it's quite telling to watch them type their code in character by character while discussing the meaning/usefulness/physics of a particular portion. Having to watch them type code from scratch is a little tedious, but I've told them that eventually I'll approve their Mathematica skills and they can start doing screencasts of finished code, just walking me through it. I'm starting out forcing them to begin with a blank canvas, so to speak, because I need to make sure they're not just cutting and pasting code from someone else.
For non-Mathematica standards I like having them write their work on a sheet of paper and then scan it to do a screencast walking me through it. I also let them check out one of my LiveScribe smartpens and tablets to do it. To see and hear a student walk through the derivation of the Euler formula for the calculus of variations is just fabulous, especially compared to just looking at the written work, since often that's just copied from the book.
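(For anyone who hasn't run into it, the result in question is what many books call the Euler-Lagrange equation: a path y(x) that extremizes the functional

J[y] = \int_{x_1}^{x_2} f\bigl(y(x), y'(x); x\bigr)\, dx

must satisfy

\frac{\partial f}{\partial y} - \frac{d}{dx}\,\frac{\partial f}{\partial y'} = 0,

and the derivation of that condition is what I'm asking them to walk me through.)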
I'm really happy, too, about using screencasting to show them how to do Mathematica calculations. In the past I would just give them the .nb files and let them build on those, but now all they have is my video, so they have to type the code themselves. That is a huge deal!
What I don't like (so far?): I don't like giving the students so much rope to hang themselves with. By requiring their voice, I can't do assessments in class without canceling the plan for that day. We've set aside days in a couple of weeks to do just that, but by then they'll be pretty far behind. We'll see.
I also don't like not being able to grade on the bus. I ride my bike in to work and then take it on the bus home. In the past, the bus was where I got almost all my grading done. Now I need a computer with internet access to do it (although apparently you can watch screencast.com material on an Android phone, so who knows).
What I'm hopeful for: I really hope the students learn the material better this way. I've never been overly proud of my students' retention of material, and I'm hoping that will get much better this semester. I also hope they realize that the practice we do in class and the reassessments are all about learning, and that as long as they eventually "get it," the grade will come.
What I'd love: I'd love to get some more great feedback from you. This is a grand experiment of mine and, while it feels like it's going well now, it's really too early to tell if it's worth the effort.
Wednesday, February 2, 2011
Oral exams, writing rubrics, and SBG
This semester I've switched my class over to Standards Based Grading. I was strongly influenced by my growing PLN (personal learning network) on Twitter, which is populated by a lot of people who use it and sing its praises. I decided, with only four days to go before the semester, to make the switch on my syllabus, and today was my first class. I can tell it's going to be quite a ride, but I'm still pretty excited about it.
What I want to talk about in this post are the connections I see between standards based grading (hashtag #sbar, for standards based assessment and reporting, on Twitter) and some of the assessment techniques I've used in the past, specifically writing rubrics and oral exams.
Rubrics
Last semester I taught a writing-intensive first-year seminar. The first-year seminar is a cornerstone of the undergraduate curriculum at Hamline University, but they're not all writing intensive. The topic of my seminar was "Hamline Mythbusters," but that's a whole different post. Passing the class did not automatically get a student out of our first-year writing course; essentially, I was able to post a grade and check off that requirement separately for each student. For each writing assignment, I used an in-house writing rubric that I found very useful and flexible. I told the students that to get the "Expository Writing" requirement checked off, they had to turn in one of two major writing assignments and receive a "meets" or "exceeds expectations" on the rubric in all seven categories. I told them they could turn in as many drafts as they'd like. One student turned in 13 drafts of one paper until she finally had "exceeds" in every category.
I make the connection with SBG because the students knew they could always be reassessed. Essentially I had seven writing standards that were mostly separate from the rest of the class. One thing that surprised me was how few complaints I got about their scores on the rubric. I've been assuming that this was because 1) it was a very well-thought-out rubric with good descriptors for each level, and 2) they knew they could always work on the paper some more and turn it back in. I think it also helped that I gave them a ton of feedback on every draft.
Oral exams
In my upper-division courses (read: small class sizes) I use oral exams all the time. What I really like about them is that I can pick the problems I think are crucial to the class and have the students spend roughly a week honing them. They come in for the exam (all together, because there are points for audience questions) and randomly pick one of the assigned problems. They work it on the board and lose points any time I have to unstick them. They sometimes heave a sigh of relief when they find out which one they have to do, but what I like is that they've spent a week studying all of them. SBG feels like the oral exam cycle all semester long! The students continually work on the standards, having them reassessed when they're ready, and I know they're working on exactly the things I think are most important.
I want to say a special thanks to Frank Noschese for helping me out over the last week. He's a high school physics teacher who is passionate about teaching and learning and routinely gives me things to think about for my own work.