This past week in class, we started out by watching a video of Jane McGonigal giving a TED Talk on how gamers can save the world. Connecting to our previous class discussions, this was quite interesting because it highlighted the problem of transfer that we have discussed in other areas. How do we not only get learners to realize that what they’re learning has value outside of the current learning situation, but also enable them to apply what they’ve learned in other, different situations? This is particularly interesting with gamers, in part because they are incredibly good at one thing (gaming) while many believe they are incredibly bad at another (the real world). How we can get them to transfer their amazing skills from the game world into the real world is an important, and so far unanswered, question for our time.
However, Kristin’s goal in showing us this video was not simply to get us to think about gaming and the transfer of knowledge, but to have us, as an audience for the “talk,” start thinking about how we assess the teaching of others, as well as our own instructional interactions. We completed a short survey about the TED Talk and then broke into groups for a card-sorting exercise using some of the questions from the survey. As a group, we moved the questions around into different categories and found ourselves discussing the different areas of the presentation that certain types of questions seemed to focus on, from the presenter herself, to the room we were in, to the topics in the presentation that might affect our personal and/or professional lives. We then talked a bit more about assessment, including the types of questions important in conducting summative assessments (such as surveys) and the different strategies available for conducting formative assessments on the fly.
I found all of this quite interesting to think about, even though I’ve had a pretty solid background so far in survey/interview design and administration. I’ve worked in psychology labs, taken several UX research courses here at SI, and even did some assessment activities in my grocery-store life. However, the common thread in almost all of these interactions has been that the subject being assessed was always external to me. The assessment situations we discussed in class have much more to do with turning the lens on me and my practice.
In my other experiences with assessment, I thought a lot about how to avoid introducing undue bias into my instruments. This wasn’t so hard when, for instance, I was helping to design a survey for a psychology experiment to be given to children and their mothers. First, the subjects were all unknown to me; second, the outcomes were also relatively unknown to me. When it comes to designing an instrument to evaluate myself, however, the tables have turned. I know myself better than I know anyone, and, on top of that, the “outcome” of any assessment always seems known to me (whether I think I did well or poorly, I still think I know). This relationship between instructor, evaluator, and curriculum designer complicates the survey-writing process in a special way, I think.
Don’t get me wrong: I don’t think it’s impossible (or even too difficult) to create robust, unbiased assessment instruments to use on oneself. However, I do think it requires not only a bit more insight into the survey-writing process, but also, possibly, a bit more pilot testing of the instrument in order to root out any unintended and unnoticed bias. With a bit more care (and sometimes assistance), I think writing surveys to assess one’s own performance can provide insight into performance style and skills that goes beyond simply reading what other people think of your performance.