While I have been enjoying the weekly readings so far this semester, I paid particularly close attention to this week's readings because I am the discussion lead (same reading interest as before, just much more precise notes!). And I had a lot to note! Once again, I find a lot of overlap between the assigned readings and my literature review topic. Martinez, specifically, gave me a lot of insight into the creation of reliable, valid assessment tools, which will ultimately be my goal after developing a research rationale through the literature review. Reliability asks whether the tool is consistent in its measurements. Validity includes construct validity (does the tool correlate positively and strongly with existing, relevant frameworks?) and predictive validity (can it accurately predict performance?). It seems there are many ways these two properties can break down and undermine an assessment's usability: over- and under-representation, testwiseness, washback effects, consequential validity, and so on. Is anyone aware of methods or resources for measuring a test's reliability and validity? Any suggestions are appreciated!
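Since I am asking for methods, here is a minimal sketch of two common starting points, written in Python with entirely invented data. Cronbach's alpha estimates internal-consistency reliability from item-level scores, and a Pearson correlation between test totals and a later performance measure gives a rough read on predictive validity. Every variable name and number below is hypothetical.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability.

    items: 2-D array, rows = test takers, columns = test items.
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 test takers x 4 items, each scored 0-5
scores = np.array([
    [4, 5, 3, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")

# Predictive validity: correlate total test scores with a later
# performance measure (hypothetical course grades here).
grades = np.array([3.5, 2.8, 3.9, 2.0, 3.1])
r, p = stats.pearsonr(scores.sum(axis=1), grades)
print(f"Predictive validity r = {r:.2f} (p = {p:.3f})")
```

Dedicated psychometrics texts (and tools like R's psych package) go much deeper, but something like this is enough to start experimenting with.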
The articles were also thought-provoking. The Van Zundert, Sluijsmans, and van Merriënboer (2010) article demonstrated a connection between peer and self-assessments. I have been reading a lot of the literature on self-assessments, so it was interesting to look at this topic through the lens of peer-based feedback. There is overlap, namely in the weak correlation between positive peer (and self-) assessment scores and psychometric test results, as well as in how learners' experience affects this correlation (a quick sketch of how one might quantify that agreement appears at the end of this post). To encourage this higher-order thinking and self-regulation, learners must be exposed to opportunities for reflection. I am beginning to see the self-assessment tool more as an exercise to encourage this level of thinking than as an authentic assessment tool that measures information literacy skills.

This direction for my literature review was also encouraged by the de Grez, Roozen, and Valcke (2012) article, which used an observational learning construct for peer assessments of performance skills. The results were interesting with regard to how the rubrics for the peer assessments were created. I wondered: if encouraging self-reflection is the goal, why not include students in creating the rubrics for their assignments? Doing so would let learners think reflectively about what "quality" looks like, provide some buy-in for the assignment, and encourage intrinsic motivation. I believe this would fit under a constructivist approach to assessment, if such an approach exists!

The Sadler (1989) article was also interesting, though a little older and less relevant to my topic. One big takeaway, though, was that feedback (comments given directly to students to address gaps in their understanding) differs from assessment in that assessment can completely bypass the learner. I can see how this distinction often frustrates instructors who constantly hear about assessment from administrators but feel a disconnect between assessment and their goals in the classroom.

I found the literature review outline really helped me shape the draft that is due this week. It gave me an opportunity to put a form to the thoughts I have had so far this semester and make what I have read more manageable. This has been a truly great exercise in preparing us for publishing.
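Returning to the weak peer/self-assessment correlations that Van Zundert and colleagues describe: one simple way to quantify how closely peer (or self-) ratings track an instructor's ratings on the same rubric is an agreement statistic such as Cohen's kappa. A minimal sketch, assuming ordered rubric levels and entirely made-up ratings:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric ratings (1-4 scale) for ten student artifacts,
# scored once by a peer and once by the instructor.
peer       = [4, 3, 4, 2, 4, 3, 4, 4, 2, 3]
instructor = [3, 3, 2, 2, 4, 2, 3, 4, 1, 3]

# Quadratic weighting penalizes large disagreements more than small ones,
# which suits ordered rubric levels.
kappa = cohen_kappa_score(peer, instructor, weights="quadratic")
print(f"Weighted kappa: {kappa:.2f}")  # ~0 = chance agreement, 1 = perfect
```

A kappa near zero on data like this would echo the review's point that novice peer ratings often diverge from expert judgment, while values closer to one would suggest the rubric is being applied consistently.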