(See this post for an introduction to this blog series.)
Well, this is the final post in the series, and I want to wrap up with one last thought: this entire series was made possible by the minute paper assessments I got back at the end of my two presentations.
Now, some conferences (including the leading national conference on information literacy instruction) ask attendees to complete course-evaluation-type instruments, either at the end of each session or at the end of the conference as a whole. And in the context of a professional conference, there certainly is value, both to the conference organizers and to the presenters, in getting answers to questions like “did the presenter talk too fast?” and “were the objectives for the session explained clearly?” (Never mind “could you read the slides?”)
But consider whether that kind of evaluative instrument could have gotten at the kinds of questions that the attendees raised in my minute papers, and which I have responded to in this series of blog posts. Would I have been able to continue the discussion and learning from the presentation without “one thing you still have questions about”?
So assessment can be preferable to evaluation not only in classroom settings, where what the students learned matters as much as or more than whether they were satisfied. In professional development and dialogue, too, asking “what are you still confused about?” is a useful question, and (this is critical) one that can better enable continued professional dialogue and learning.
And isn’t that what we’re all about?