Friday, December 17, 2010

Momentum units

I've been teaching out of the Six Ideas that Shaped Physics series for a while now.  One of the things I really like about it is the approach to Newton's laws.  Essentially, the author (Thomas Moore) starts with conservation of momentum and ends with Newton's third law.  He focuses much of the series on the concept of interactions, making it clear that particles swap momentum during interactions while particle pairs trade potential energy for kinetic energy.  I've found this fun to teach, but I get tripped up by not having a simple unit for momentum.

I've been heard saying: "When that ball hits the other one, 7 kilogrammeterperseconds are swapped."  I say it really fast because I want the students to recognize that momentum has been transferred from one particle to the other.  I'm writing this post to throw out a suggestion: how about we use the unit "pom" for momentum?  The word is a shortening of what I sometimes call momentum ("pomentum"), which reminds students that the letter we use for it is p.

Why would this be useful?  Well, then I could say "When that ball hits the other one, 7 poms are transferred."  Rolls off the tongue, doesn't it?  Also, if you've read the Six Ideas books you'll know that Moore describes force as simply an accounting trick for keeping track of a continuous transfer of momentum.  That's why the unit of force is poms per second (or, if you insist, the awkward "newton").  If you want to know whether something's going to hurt, you just have to see what the pom rate would be.

I've seen how students relate everything to newtons, or force.  I suspect, though, and here I think Thomas Moore would agree, that it can be very satisfying to trace how momentum is transferred through all the various interactions involved in a physics event.  What do you say?
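To make the bookkeeping concrete, here's a toy sketch in Python (my own illustration, not from the post; all the numbers are made up):

```python
# One-dimensional elastic collision between two equal-mass balls.
# Units: kg, m/s, and kg*m/s (the proposed "pom").

m = 0.7              # mass of each ball, kg (made-up value)
v1, v2 = 10.0, 0.0   # velocities just before the collision, m/s

# For a head-on elastic collision between equal masses,
# the balls simply swap velocities.
v1_after, v2_after = v2, v1

# Momentum transferred from ball 1 to ball 2 during the interaction:
delta_p = m * v1 - m * v1_after
print(delta_p)  # 7.0 poms (kg*m/s) swapped

# Bookkeeping check: the total momentum is unchanged.
assert m * v1 + m * v2 == m * v1_after + m * v2_after
```

And if that interaction lasted, say, 2 milliseconds, the average pom rate (the force) would be 7/0.002 = 3500 poms per second.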

Friday, December 10, 2010

Grading with my voice

A few people have asked me for some more detail about how I use screencasting to grade papers so I thought I'd post this.

As a physics professor I don't grade nearly as many writing assignments as my colleagues in other disciplines, but I do take seriously my job to help all my students improve their writing. When I first started grading such papers, I would write comments on each draft and provide a grade via some sort of rubric.  The rubrics began as lists of requirements that could be met in various ways and have evolved into fully fleshed-out rubrics with careful descriptions of what it takes to earn a check mark in columns like "meets basic expectations."

I quickly realized that giving students the type of feedback I really thought they could use took a lot of time and effort, and I found that meeting with students about their writing was one of the best ways to do this.  However, those appointments are hard to schedule, especially in a semester like my current one, where all three of my lecture courses have heavy writing requirements (my First Year Seminar is writing intensive, writing a grant is 80% of the points in my junior-level advanced lab course, and my fully online course for teachers has a lesson plan as a major assignment).

I realized that the screencasting I was already doing for my lectures could also be used to simulate the office experience.  I use my pen tablet to mark up the documents digitally (using Jarnal, Adobe Acrobat, or Foxit Reader) and I use Jing to record my voice while explaining my concerns with the paper.  Here's an example.  In it you can see how I mix in discussion of content, style, and how the paper meets the expectations.  You'll also see that the 5-minute limit that Jing holds me to wasn't enough (so I just recorded a second one).  Typically I get my comments done within the five-minute limit (though reading a typical paper still takes me something like 20 minutes).

I ride the bus a lot and I like to use that time to grade.  What I've taken to doing is marking up the papers in regular ink with little notations to myself about what to say.  When I'm back in my office I open the digital document, grab my pen tablet, and begin.  I transfer the necessary marks, all the while holding a pseudo-conversation with the student.

I've gotten a lot of good feedback about this from my students.  They really like having the screencast at their disposal for pausing and rewinding.  One student told me that he opens his paper on his computer before playing the screencast and makes changes immediately, pausing the playback as he goes.  Others have said that they really feel they understand what I'm looking for after watching.  There's also been an interesting study on students' preferences for the type of feedback they'd like, comparing regular written comments, Track Changes in Word, Track Changes with audio, and what I do.  The upshot is that my approach was strongly preferred in that survey.

I'd love to hear how others use a similar system.  Feel free to drop me a line here in the comments or on twitter @arundquist.

Sunday, November 28, 2010

Canceling terms

In my last post I talked about how I work with students on negative signs.  I thought I'd continue with some thoughts about canceling terms.  I'm a big proponent of having students solve problems algebraically before plugging in numbers.  I usually reinforce that by asking follow-up questions like "what would happen if we increased the mass?", which are easier to answer if you can see whether the mass is in the numerator or denominator of the algebraic result.  On the way to an algebraic answer, though, there are many times when things cancel, and I try to always be careful about those situations.

Here's an example: Consider a ball or block sliding down a frictionless ramp at the edge of a drop off.  The goal is to calculate the horizontal distance it will fly before hitting the ground.  Here's a quick sketch with the appropriate equations:
[Figure: example problem with equations]
Right at the point where the particle leaves the ramp, energy conservation gives a horizontal speed of sqrt(2g·h_ramp): the mass cancels.  Right away I would tell my students "mass cancels!  That means you'd get the same answer with any block.  Cool!" or something like that.  The other major step in the problem is to figure out how long the flight will be.  Typically this is done by breaking the problem into horizontal and vertical components and finding when the vertical position has changed by the height of the drop-off, as I've done on the left portion of the figure; that gives a fall time of sqrt(2·h_drop/g).  What's cool is that the final answer needs both major results (horizontal speed and time).  Both have "g" in them, but together they cancel.  Here I'm often heard saying "g cancels!  That means you'd get the same result on any planet.  Cool!" or something like that.

Doing the problem algebraically all the way before plugging in numbers lets students see what actually matters (in this case, the heights of the ramp and the drop off).  Students can then easily answer questions like "what would happen if you double h?" or "What would happen if you doubled the mass while tripling the strength of gravity?".  That last one causes groans for the students who've plugged their numbers in right away.

Another cool thing about this particular example is that the actual motion on different planets would look very different.  On the moon the launch speed would be lower and the fall time longer, and the two effects exactly cancel in the product!
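The cancellations can also be checked symbolically.  Here's a quick sketch in Python with sympy (my own illustration, not from the post; the names h_ramp and h_drop are just labels for the two heights in the figure):

```python
import sympy as sp

m, g, h_ramp, h_drop, v, t = sp.symbols("m g h_ramp h_drop v t", positive=True)

# Energy conservation on the frictionless ramp: m*g*h_ramp = (1/2)*m*v**2
v_launch = sp.solve(sp.Eq(m * g * h_ramp, m * v**2 / 2), v)[0]
assert m not in v_launch.free_symbols  # mass cancels!

# Free fall from the drop-off: h_drop = (1/2)*g*t**2
t_fall = sp.solve(sp.Eq(h_drop, g * t**2 / 2), t)[0]

# Horizontal distance: v_launch and t_fall each contain g,
# but g cancels in the product.
d = sp.simplify(v_launch * t_fall)
assert g not in d.free_symbols  # same result on any planet!
print(d)
```

The distance comes out as 2·sqrt(h_ramp·h_drop): with both heights at 1 meter, for instance, the block lands 2 meters out on Earth, the moon, or anywhere else.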

When I push this method with the teachers I teach (in-service science teachers working toward an additional physics license), most come to like how the follow-up questions become more fun and straightforward.  Some take a while to come around, though, as they like to grab their calculators right away.  The phrase "would it be the same on another planet?" usually gets them to at least recognize the importance of canceled terms.

P.S. You get the same cancellations for a ball that rolls without slipping down the ramp.  The horizontal speed decreases a little, but mass still cancels (at least for spherically symmetric balls) and the fall time is unchanged.  "g" still goes away at the end.

Sunday, November 21, 2010

SocCourt as a myth?

SocCourt is my favorite sport. I play with one friend at least once a week and I've gotten good enough now that I feel safe in offering extra credit to students who can beat me.  This semester no students have even scored a point yet (that's through six games so far) but they're motivated because I say that if a student beats me the whole class gets extra points.

In my "Hamline Mythbusters" class it was recently time to vote on the full-class myth to bust.  This was after the students had already worked, in groups, on five different myths, and they wanted to all work together for the second go-round (soon I'll post the YouTube videos of the five myths).   One student said it would be fun to bust the myth that SuperFly can't be beat.  At first we all laughed, but it ended up being a lot of fun brainstorming how it could be done.  We realized that there was a lot of science to be done, including the angle of reflection of a spinning ball, the accuracy needed for controlled juggling, the ball speed necessary to break a light (this has, ahem, been done once already), and on and on.  We also realized that it would be fun to have some people investigate the value of trash talking (some would say I'm good at lots of things but awesome at that).  And of course, I'd get to play lots and lots of SocCourt.

Alas, it got voted down.  Instead my students are working on the best way to lift someone with balloons. Some students are investigating the lift and leakage qualities of various gases.  Others are determining the best way to attach all the balloons.  Still others are working out the issues regarding weather (rain, wind, etc.).  It'll be fun, sure, especially with the high-speed cameras capturing things like balloons popping when overfilled, but it's sure too bad that I won't get to whup up on some poor students on the SocCourt court.

Saturday, November 20, 2010

Negative signs

As a student I was very careful with my calculations.  I would hunt down every negative sign and make sure I didn't make any mistakes when doing homework and tests.  As a teacher I started out trying to get my students to do the same thing.  I would get crabby when noticing students trying to sneak in negative signs towards the end of a derivation instead of having them correct all along.  I found, however, that I wasn't teaching well when I did this.  This post is about how I teach now, specifically with those pesky negative signs.

For me, it's all about getting my students to trust their gut.  I figure if it's not in their gut, they haven't learned it.  So when I teach something with negative signs involved, I make a special effort to at least make sure the signs are in their gut, if not all the rest.  Here's an example: when deriving the potential formula for charged particles, you first have to teach about fields (possible negative signs), then about potential energy (more signs: should I integrate from infinity?  is it me doing the work or the field?), then about potential (even more signs).  Now I just have students do the integral and separately teach about whether the result should be positive or negative.  I can be heard saying "don't write any signs down, just do the calculation and then ask yourself whether it should be positive or negative."

Knowing that like charges hate each other is often enough to help students get their signs right (though admittedly this is easier for potential energy than potential).  I've noticed that my students can get that concept pretty quickly, but sometimes they'll still panic about how to go about doing a problem.  When they do, I'll ask something like "does this charge want to be here?" or "would you have to move that charge, or would it move that way on its own?" and it's funny how often their confused faces turn into confident statements.
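Here's a small sketch of the "just do the calculation, then fix the sign" routine in Python with sympy (my own illustration, not from the post):

```python
import sympy as sp

# k is the Coulomb constant; q1, q2 are the charge magnitudes.
k, q1, q2, r, s = sp.symbols("k q1 q2 r s", positive=True)

# Magnitude of the Coulomb force between the charges at separation s.
F = k * q1 * q2 / s**2

# "Don't write any signs down, just do the calculation": integrate the
# force magnitude from the final separation r out to infinity.
U_mag = sp.integrate(F, (s, r, sp.oo))
print(U_mag)  # k*q1*q2/r

# Now the gut supplies the sign: like charges hate each other, so
# energy is stored in pushing them together -- U is positive.
U = U_mag
# For opposite charges the gut says the pair is happier together,
# so the sign flips and U would be -k*q1*q2/r.
```

The integral only ever produces the magnitude; the "does this charge want to be here?" question does the rest.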

Charged particles are just one place where I do this (and have done it this semester), but I do the same thing in many situations.  I'd be interested to hear others' thoughts on this.

Wednesday, November 17, 2010

Recording students

This semester I'm trying two approaches in my teaching that both involve recording students. One has students using screencasting to turn in their homework and the other has students making pencasts of their group work.

Screencasting Homework

I've been teaching fully online classes for six years now. In the past my homework collection method has involved students scanning their homework and posting it to my Learning Management System (homebuilt using PHP/MySQL). This worked pretty well, but it was hard to ensure students were doing their own work. In my in-class courses I solve this problem with daily quizzes based on a randomly selected problem from the assigned set, but I couldn't find an easy way to do that online. One option would be timed quizzes in Blackboard or something, but students don't have nice pen mice like I have, so they could only type their answers without the ability to easily write equations and draw figures. This year I decided to do things a little differently.

At the beginning of the week I provide the students with screencast solutions to six problems from the chapter. I make myself available until Friday morning to answer questions in the discussion board and in my online office hours about the concepts of the chapter and the posted problems. Then on Friday morning I post a single homework problem that is due Monday. The students need to solve it, scan it, and then do a screencast of their solution to turn in.

Here are some of the benefits of this method:

  • I only have to grade one problem per student per week.
  • I can hear the students' thought processes as they explain the problem.
  • Even if they work together or cheat somehow, they still need to put the solution in their own words.

It's been very interesting to see how a screencast often gets a different grade than the plain scanned document would have. It goes both ways. Sometimes I see a paper that seems technically correct but then hear the student describe certain aspects incorrectly. I've also heard a student say all the right things while what they had written wasn't technically correct.

I've gotten some good feedback from the students doing this (who happen to be teachers working on their physics teaching license) so I think I'll continue the practice. I've also branched out to in-class students, offering this method as a way to make up for missed in-class quizzes.

Group work pencasts

My newest toy this semester is a LiveScribe smartpen (actually eight of them). These pens are incredible! They record both what you write on the page and the audio happening at the same time. When you go back and click on a word, the pen cues up the audio from that moment. You can also post "pencasts" that work the same way, only on a web page, so that students can access them. Ever since I got my first one I've found plenty of ways to use it in my work. I originally wanted one to help me take better notes in one-on-one meetings with students, where, in the past, I've found that I sometimes lose track of promises made by both parties. I certainly use them for that, but I've also used my pen at campus-wide speaker events, doctor's appointments, department meetings, and, yes, classes. What I want to write about here, though, is how I use them in class.

Here's a breakdown of my hour-long general physics class periods:

  • 10 minutes for a quiz on a randomly selected problem from the previous class period.
  • 10 minutes to recap the material for the day (often prompted by a randomly selected summary posted by one of my students).
  • 15 minutes to answer all the questions posted by my students on the material for the day.
  • 20 minutes for groups to work on the problems assigned (one of which will be randomly selected for the quiz next time).
  • 5 minutes for the groups to record a pencast of a roadmap (not a solution!) for the problem they worked on.

After class I post all the pencasts so that all the students have at least a sense of how to do all the problems when studying for the quiz. A typical day's daily outline will then have links to all the pencasts along with links to screencasts I've posted on the material and any resources I've found useful.

The students seem to have fun with these pens. They've made several suggestions for how best to use them including hitting the record button when I come around to their group as they're still trying to understand the problems. I now have eight pens in total and I look forward to finding more ways for students to use them in the future.

Tuesday, August 10, 2010

Mathematica: import and export calculations

I often do long calculations in Mathematica. If I've got the time, I'll keep the kernel active and do all the fun plotting, animating, and basic playing-around with the results right when I'm thinking about them. However, often I need to quit the kernel and move on to other things on my todo list without losing those results. That's what this post is all about.

.m files

Any result in Mathematica can be saved to a .m file. The syntax is pretty basic:

beta = 0.1; (* beta needs a numeric value before NDSolve runs; 0.1 is just an example *)
sol = First[NDSolve[{y''[t] == -y[t] - beta y'[t], y[0] == 1, y'[0] == 0}, y, {t, 0, 10}]];
Export["filepath . . ./cool.m", sol]

Then later you can import the file like so:

solnew = Import["same m file location"];
Plot[y[t] /. solnew, {t, 0, 10}]

Note how I saved the result of an NDSolve command and later was able to use it to make a plot.

web storage

What's extra cool is that you can save your .m files to a web server, because Import also accepts URLs. You can do something like this (the URL here is just a placeholder):

solnew = Import["http://your.server.edu/cool.m"]; (* placeholder URL *)

and be able to use the results of calculations anywhere. I especially like this when I'm going to work on a project both at home and at work, as the Import command syntax doesn't need to change. I used to do this with Dropbox, where both my machines would have an up-to-date copy of the .m files, but I still had to change the syntax because my Dropbox folder tends to be in different locations on different machines.

Saturday, January 9, 2010

Clickers vs cards

I use peer instruction in most of my introductory courses.  When I first started I used colored cards for the students to use when answering multiple choice concept questions but when I had a chance to switch to clickers (personal response systems) I jumped at it.  I was the first person on campus to use them so I was sort of on my own with the hardware and software.  I lugged around the receivers and passed out the clickers on the first day of class, claiming that I'd charge the students $30 if they didn't turn them in.

I did that for around four or five years, until one summer I realized how unexcited I was about a new class that was starting.  I tried to pin down my own emotions and realized that it was the lugging of the receivers and having to always hook up my laptop at the beginning of class.  So, frankly out of laziness, I went back to colored cards.  I haven't switched back since, and I thought I'd write a little about why.

First, why they compare so well for me.
I only ever used the clickers to collect and display answers for multiple choice questions.  I never tracked individual students (though I could have) and I never used the clickers to collect graded assignments like quizzes.  Since that's all I ever did, it's clear that colored cards don't have an immediate disadvantage.

Where you'd think clickers would win:
There are a few arguments in the pedagogy literature for why student learning improves with clickers that wouldn't seem to work as well with cards.  The first is anonymity.  Students are unable to tell what their neighbors voted for, and this gives them cover to really think about what they're doing and not be intimidated by whoever they think the smart students are.  Scholars have also written about how students enjoy using clickers in class: it keeps them engaged, and it's fun to play with toys.  One more argument for clickers is that students can see the projected histogram of the class vote.  This lets them see that not everyone has given the same answer, but that the votes cluster on one or two choices.  That, in turn, tends to aid the "convince your neighbors" portion of the peer instruction pedagogy.  There are more arguments for clickers, but I'll stick with those three for this post.

How do the cards stack up?
First, anonymity.  With cards, students can watch what others do and be influenced.  I use cards that are colored on only one side so that at least the classmates behind you can't see what you're voting for.  In my experience I haven't seen a lot of influence happening on the first vote, though I would say students use the cards to visually communicate with each other across the room during the "convince your neighbors" portion.  This is probably a win for clickers, but not a huge one.

Next, the fun factor.
My students tend to have fun with the cards and appreciate the fact that they always work.  If students forget their cards, they find interesting ways to vote, like pointing at articles of clothing of the appropriate color.  Of course, sometimes I embarrass them into bringing the cards in the future by asking them to stand and shout their vote instead.  The students have the most fun with the cards through my confidence-level approach (see below).

Finally, the histogram:
When my students vote, my eyes very quickly discern the color that is winning and I communicate that to them orally.  I feel, at least, that by simply describing the distribution I can give them nearly as much detail as they'd get by seeing it.  I admit I don't have much data to back that up, but I will say I haven't heard complaints about it.

Where cards win:
It's very easy, of course, to administer the card approach.  I cut the cards, laminate them, and hand them out on the first day of class.  The cost is pretty low so if I don't get them back it's not a big deal.  There's no receiver to adjust, no software to play with, and no batteries to change or make available.  Laminating them makes them last for quite a while (3 years so far with no losses yet!).

Confidence level
The biggest unexpected benefit I've found with the cards is how students can communicate their confidence level on a vote.  With clickers (at least the PRS ones I have) there are modifier buttons for students to choose low, medium, or high confidence when voting, and that choice is color-coded in the histogram.  With the cards I simply tell the students that their confidence level is the height they hold their cards above their heads.  My students have a lot of fun with this.  Some have been known to stand on their desks and reach the ceiling with their cards, while others slouch and nearly drop their cards to the ground.  I find this analog scale of confidence very useful as the instructor.  I often pay more attention to the confidence displayed than to the votes themselves.

Loss of anonymity
What's most interesting to me is that the confidence level is clearly not anonymous.  This creates an interesting classroom dynamic: students can see that there is some confidence in the room in some cases, and in others they can feel consoled by the lack of confidence anywhere.  I feel that this mix of anonymity (present, at least to some degree, in the vote itself while absent from the confidence level) is really useful in the classroom.  I especially like when there is more than one super-confident student and they don't agree with each other.  The class seems to get excited about the battle that shapes up between those "captains".

Final thoughts
In head-to-head comparisons of learning outcomes, cards and clickers are neck and neck.  There are advocates on both sides (you can see where I stand), and I would encourage people to really think about which features they're looking for.  The confidence-level aspect I discuss here is something I've only recently put much thought into, especially the split anonymity, but I'd love to hear some differing opinions.