Module objectives:
- Provide students with an opportunity to practice the FFA Creed and learn more about scoring before reciting it themselves for a grade (addresses student needs and the class curriculum).
- Give students an opportunity to generate, collect, analyze, graph, and interpret data (incorporates aspects of computational thinking, also reinforces basic math benchmarks).
This activity took 2 days.
Day 1:
We began by discussing the scoring rubric. We discussed reasons why a scoring rubric can be helpful and what each scoring category meant. We made sure to cover the scoring process in detail and allow time for the students to ask questions and get clarification. We gave lots of examples and the students described what speakers could do to earn high scores, and which things would result in low scores.
When everyone was comfortable with the scoring process, we had each group nominate a head judge to preside over their group's scoring. We watched videos of 4 speakers, and the students scored each using the scoring rubric. For each speaker, some students acted as content judges (judging verbal and nonverbal communication skills) and at least 1 student per group acted as the accuracy judge. The accuracy judge followed a written version of the creed during the recitation and circled every inaccuracy: changed words, skipped words, or added words all earned speakers deductions.
I selected 4 videos of students reciting the creed from YouTube. To facilitate viewing, I downloaded the videos so that we would not have to rely on the internet connection. To avoid unnecessarily biasing the judging, I also clipped the videos to remove titles referring to the level of competition or awards the speaker had earned. We used the following 4 YouTube videos:
Speaker 1:
Speaker 2:
Speaker 3:
Speaker 4:
Students scored all of the speakers using official scorecards and protocols. Instead of listing instructions step by numbered step, I tried using a flowchart approach this week.
At the end of class, we had a little time for discussion. I asked the students to vote for their "favorite" speaker without considering the number of points earned. A few students voted for Speaker 1, but the rest of the class was pretty evenly divided between Speaker 3 and Speaker 4. We discussed specific things that the students liked about each presentation style and related them to the scoring rubric categories.
Day 2:
Day 2 was a more quantitative day for the students. We borrowed calculators from the math department and students tallied up their scores, calculated average group scores, graphed their results, and compared group scores across the whole class.
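The tallying the students did by hand can be sketched in a few lines of Python. The scores below are hypothetical placeholders for one group's scorecards, not our actual class data:

```python
# One group's (hypothetical) scores for each speaker, one entry per judge.
# These numbers are made up for illustration, not actual student data.
group_scores = {
    "Speaker 1": [72, 75, 70, 74],
    "Speaker 2": [80, 78, 82, 79],
    "Speaker 3": [90, 88, 91, 89],
    "Speaker 4": [85, 87, 84, 88],
}

for speaker, judge_scores in group_scores.items():
    average = sum(judge_scores) / len(judge_scores)      # group average score
    score_range = max(judge_scores) - min(judge_scores)  # spread of the judges' scores
    print(f"{speaker}: average = {average:.2f}, range = {score_range}")
```

This mirrors what each group did with borrowed calculators: one average and one range per speaker.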
Once students had calculated their group's average score for each speaker and the range of the data, I passed out a worksheet with a blank graph for them to sketch in their data. Before they began, we discussed what type of graph to use. When asked, many students responded that we should use a bar graph (the correct response). I asked them to think about why something like a line graph would not work well for our data. Several students knew that line graphs are suited to showing change in a value over time and are not good for comparing across different categories of data. I sketched a quick bar graph on the board to model the graphing process, particularly stressing the process of adding error bars to show the range of scores. Several students remarked that they had never had to show two things on the same graph before, but most were able to successfully sketch both averages and ranges on their graphs after the explanation.
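The same graph the students sketched by hand can be produced with the matplotlib library: the mean becomes the bar height, and the distances from the mean down to the minimum and up to the maximum score become asymmetric error bars showing the range. The scores and output filename here are illustrative, not actual class data:

```python
# A sketch of the graphing step, using made-up scores for one group.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

scores = {
    "Speaker 1": [72, 75, 70, 74],
    "Speaker 2": [80, 78, 82, 79],
    "Speaker 3": [90, 88, 91, 89],
    "Speaker 4": [85, 87, 84, 88],
}

speakers = list(scores)
means = [sum(s) / len(s) for s in scores.values()]
# Asymmetric error bars: distance from the mean down to the minimum
# score and up to the maximum score, i.e. the range of the group's scores.
lower = [m - min(s) for m, s in zip(means, scores.values())]
upper = [max(s) - m for m, s in zip(means, scores.values())]

plt.bar(speakers, means, yerr=[lower, upper], capsize=5)
plt.ylabel("Average score")
plt.title("Group average scores with range error bars")
plt.savefig("group_scores.png")
```

With these placeholder numbers, Speaker 3 has both the highest bar and one of the smallest error bars, which is exactly the pattern the class looked for when comparing groups.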
At the end of the class period, I had one student from each group bring their group's graph to the front of the class so we could compare scores among groups. I first asked the students to determine the winning speaker in our competition; Speaker 3 was the consistent winner across all groups. I then asked the students to pick the group that they would want as judges if they were competing themselves. With a little consideration, students were able to pick the group with the highest average scores for all speakers. I also asked them to identify the group they would least like to have as judges - they successfully picked the group with the lowest average scores for all speakers.

We ended the class with a short discussion about variability in the data. Some groups had discussed their scoring of speakers and come to a consensus for each while scoring. These groups had little to no variation in their scores, reflected by small or absent error bars on their graphs. I asked the students to give me reasons why these groups had such low variability, and we discussed the effect that judge training, experience, and discussion can have on the variability of scores.

I concluded the class period by asking the students who had recently competed if they would do anything differently this time. They picked out several aspects of their recitations that they could improve, even just by understanding the scoring methods better. We also discussed how many things are judged with scoring categories like these, from animals at FFA shows to homework assignments, and how understanding the scoring or judging criteria can help in earning a higher score and losing fewer points.
