Peer Evaluation
The peer evaluation process is a frequently debated topic in group- and team-based classes and projects. Michaelsen provided specific guidelines, and the topic has evolved in subsequent literature and in specific applications and resources for implementation. Many of these processes are complicated, which I expect increases the risk of student error, or of flippancy toward the entire process. When I started working with Ann Smiley-Oyen, she had just transitioned from SparkPlus to OpeTBL. From my perspective, neither offered a viable option: both were unwieldy, not customizable, and often too expensive.
Importantly, students also did not indicate that any tool we used was intuitive or allowed evaluation along the dimensions they thought important. I started to create alternative strategies and techniques, and ran into a challenge that plagued our use of every one of the aforementioned tools: students would either over- or under-evaluate their peers, collapsing the value of the ranges we provided.
Through trial and error, I developed the following instrument to assess peer evaluations. Importantly, it contains no numbers (to somewhat reduce the perception of grading) and requires both quantitative and qualitative feedback from each teammate to every other. At the instrument's core, I ask each student five questions about each teammate. The first three use a modified Likert scale with the options "Perfectly!", "Adequately", and "Could Improve":
- How well did the teammate prepare for class beforehand?
- How well did the teammate contribute to the success of the team?
- How well did the teammate facilitate the voice of others within the team?
The final two questions are short answer, and I require the student to answer both in full sentences:
- What is one thing that the teammate does well to contribute to the team's success?
- What is one thing that the teammate could do better to help the team?
No question is optional, and students are not allowed to rate every teammate identically or to provide the same (generic) feedback to each.
To create an environment where this instrument functions properly as an evaluative tool, it is necessary to set expectations of teams and teammates early in the course. I rely on the syllabus review and class activities to accomplish this, specifically demonstrating that no one (including me as an instructor!) is perfect in every way. In my experience, if this perspective is properly instilled, students judiciously use the "Perfectly!" option on the Likert scale, significantly reducing the risk of a ceiling effect in the evaluations.
In fact, this scale often results in harsher grades than I would assign as an instructor. The qualitative short-answer questions provide reflection on and justification of the responses, but some form of rectification is still often required. To accomplish this, I assign numerical values to the Likert scale (Perfectly = 3, Could Improve = 1) and average the scores from each teammate for each of the first three questions. I then apply a log transform with a base of 2.85 (in some courses, particularly lower-division ones, I adjust this up) for a positively biased conversion to a percentage. These scores are reported to the student, along with the aggregate average across the three questions. I also summarize the students' short-answer feedback and often separately include my own.
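As a sketch, the conversion can be expressed in code as follows. Treating "Adequately" as 2 and capping the result at 100% are my working assumptions, not part of the description above:

```python
import math

def likert_to_percent(ratings, base=2.85, cap=100.0):
    """Average teammates' Likert ratings for one question and convert
    the average to a percentage via a log transform.

    Ratings: Perfectly! = 3, Adequately = 2 (assumed), Could Improve = 1.
    Raising the base lowers the resulting percentages.
    """
    avg = sum(ratings) / len(ratings)
    # Log base `base` of the average, scaled to a percentage.
    # The cap is an assumption: log base 2.85 of 3 slightly exceeds 1.
    return min(100 * math.log(avg, base), cap)

# Three teammates rated a student 3, 2, 3 on one question:
print(round(likert_to_percent([3, 2, 3]), 1))  # → 93.7
```

Note that the log transform is concave: an average of 2 ("Adequately" across the board) maps to roughly 66% rather than the 50% a linear rescaling of the 1-to-3 range would give, which is the positive bias mentioned above.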
In some circumstances, individual students or entire teams are particularly harsh on their teammates. If these scores diverge from my own assessments and observations of how the team functioned, I re-center the scores. I am currently investigating whether this is a technique I should always employ, independent of the scores themselves.
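One simple way to carry out this re-centering, offered only as a sketch of the idea rather than a fixed formula, is to shift a team's scores uniformly so their mean matches my own assessment while preserving the relative differences among teammates:

```python
def recenter(scores, target_mean, cap=100.0):
    """Shift a team's percentage scores so their mean matches an
    instructor-chosen target, preserving relative differences.
    The uniform shift and the 100% cap are simplifying assumptions."""
    shift = target_mean - sum(scores) / len(scores)
    return [min(s + shift, cap) for s in scores]

# A harsh team averaged 70%, but my own assessment was closer to 80%:
print(recenter([60, 70, 80], 80))  # → [70.0, 80.0, 90.0]
```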