
SBG: Half a Year In.

If you’ve been checking in for longer than a little while (in which case, bless your heart!), you will be aware that I am in the middle of implementing a Standards-Based Grading (SBG) approach in my Honors Chemistry course.  This is detailed here, and also here.

The initiative continues apace.  I haven’t written about it in a while, but recently, I was reminded that I said I would, and I actually have something to write about.  Last week was the end of the second quarter, and so it was time to once again translate student Standards-Based scores into some sort of numerical equivalency for the purpose of determining quarter grades.  

This is not an easy thing to do in the best of circumstances.  SBG does not really dwell in the same kind of metric space that traditional grades do.  For instance, in our system, a 3 (out of 4) is the cutoff for students to demonstrate the expected level of proficiency for an honors chemistry student.  A 4 (“distinguished”) is a very rare score, reserved for those few and far between instances where a student goes well beyond the expectation.  

How is one to translate this into a classical grade?  Certainly, the student who has all 3’s is not performing at “three-fourths” capacity.  Indeed, a student earning all 3’s is doing quite well, meeting the expectation for performance that we have for students in the course.  I imagine the reader can understand the issue.  

As a subject area, we had discussed the issue and done a bit of research.  Somehow, we stumbled upon an approach referred to as the “bump & space” method.  Here is the method:

  • At the end of each marking period, each student’s total points earned across all standards is determined.
  • These “total points” scores are used to create a histogram of the distribution of student scores in the course. 
  • This histogram is used as the basis for determining grades.  The instructor decides how many traditional points the highest-performing student is to receive, and simply scales down from there to the lowest-performing student (a rough sketch of this scaling follows the list).  
  • Done.
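For the curious, here is a minimal sketch, in Python, of how that scaling might work.  The anchor grades (97 for the top total, 81 for the bottom) and the student totals are invented for illustration, and the linear interpolation between the anchors is my own assumption about how the “space” part plays out; in practice the histogram mostly informs where the instructor sets those anchors.

```python
# Sketch of a "bump & space" translation, assuming:
#   - totals[name] is each student's total points across all standards
#   - the instructor chooses the traditional grade for the highest and
#     lowest totals (97 and 81 here are illustrative, not fixed by the method)
#   - intermediate students are spaced linearly between those anchors

def bump_and_space(totals: dict[str, float],
                   top_grade: float = 97,
                   bottom_grade: float = 81) -> dict[str, float]:
    """Map each student's standards total onto a traditional grade,
    anchored at the highest and lowest totals in the class."""
    hi, lo = max(totals.values()), min(totals.values())
    if hi == lo:  # everyone identical: give them all the top grade
        return {name: top_grade for name in totals}
    span = top_grade - bottom_grade
    return {
        name: round(bottom_grade + span * (pts - lo) / (hi - lo), 1)
        for name, pts in totals.items()
    }

# Example: made-up quarter totals across all standards
quarter_totals = {"A": 42, "B": 38, "C": 35, "D": 30}
print(bump_and_space(quarter_totals))
# -> {'A': 97.0, 'B': 91.7, 'C': 87.7, 'D': 81.0}
```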

I have used this methodology for the past two quarters, and I have to say that I am super happy with it.  In both quarters, my highest-performing students received SB “grades” in the high 90’s, and my lowest performers received scores in the low 80’s.  Furthermore, it becomes almost impossible for a student to argue their grade once the methodology is explained.  As part of the process of determining an SB grade, we have our students complete a self-analysis, and almost without exception, their self-analysis score either lines up with or (more frequently) falls below the bump & space determination.  

The system does produce an interesting “quirk” compared to traditional grades, one I only became aware of this last quarter.  By pegging student grades to the performance of the group, instead of the traditional mode of pegging student grades to an arbitrary “perfect” score, it is quite possible for a student to increase their effort in the class and wind up with an SB grade that is similar to, or even slightly lower than, the prior quarter’s.  The explanation is that if a student improves in performance but is outpaced in that improvement by the class as a whole, they are in line for a reduced SB grade.  This has happened in a couple of instances.  It is certainly a difference (and one that I didn’t realize would occur before it started happening), but I actually think it provides a more meaningful metric of student performance as a result, particularly when decisions about where students might be happiest next year need to be made.
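To see the quirk numerically, here is a hypothetical continuation of the sketch above: student “C” raises their raw total from one quarter to the next, but the class’s top and bottom totals rise faster, so C’s scaled grade actually slips a little.  All numbers are invented.

```python
# Hypothetical illustration of the quirk: "C" improves from 35 to 39 points,
# but the rest of the class improves more, so C's relative position (and
# therefore the bump & space grade) drops despite the higher raw total.
q1 = {"A": 42, "B": 38, "C": 35, "D": 30}
q2 = {"A": 50, "B": 47, "C": 39, "D": 36}

print(bump_and_space(q1)["C"])  # 87.7
print(bump_and_space(q2)["C"])  # 84.4 -- higher raw total, lower scaled grade
```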

So, that’s how we’ve made the translation from SBG to traditional grades.  All in all, I’d say it’s as good an approach as any.  It’s certainly had a lot of thought put into it.
