Revisiting Library Instruction Assessment: The Prepping Process

This summer I laid out my goals for the year, and then the semester started... Needless to say, not a lot got accomplished. It's hard to think big picture when prepping, teaching, attending meetings, grading, reference-desking, grant-writing, etc.

What jolted me back a bit was the fact that we're having mid-year reviews of our goals, along with a conversation I had with one of my colleagues. She was reading the book Creative Instructional Design and came across an example form for prepping a class (figure 2.1, p. 14). She thought this form would be useful, at minimum, for directing conversations with faculty members about the goals and context for a library instruction session. This made me wonder about how we develop our own teaching strategies, our prepping process, if you will.

I was surprised to learn she didn't use lesson plans and didn't formally or informally plan out goals, and I wondered whether that was an anomaly or a widespread phenomenon. From an institutional perspective, not having lesson plans is a problem if someone needs to take over a course others have taught, which led me to raise this question at our last reference meeting: is there enough material (lesson plans, notes, slides, etc.) that we could figure out what you did if you were out? But, more interestingly, from a pedagogical perspective, how does our prepping process fit into our teaching reflections and self-improvement?

[Image: Mise en place. Photo: Don LaVange]
I really want to shift our assessment of our library instruction sessions away from disruptive and invasive data collection to reflective assessment for a variety of reasons:
  • At our institution, we already assess student information literacy skills at the 100-level, 300-level, and in the majors. 
  • We can only assess student learning from the library sessions we've taught, measured against the outcomes we assigned to those sessions: a kind of micro-assessment.
  • Correlation is not causation, and causation is very challenging to prove. There are many, many confounding variables, and it is practically impossible to isolate the impact of library instruction sessions.
  • The purpose of our assessment should be to improve student learning through our instruction, so I agree that some level of student assessment must happen, but I argue it should take the form that is most useful to the librarian and the learning outcome, and it does not need to be standardized. It might end up being formative assessment, a quiz at the end of class, authentic assessment of students applying their knowledge, or a pre- and post-test. A huge part of this should be reflection and observation on the part of the instructor, coupled with having learning outcomes for the session.

So, in my ideal department, everyone would have lesson plans that at a minimum list the outcomes or goals of the session and the activities or plans for introducing or practicing that content (I do not think there is time, unless it is a multi-session course, for students to demonstrate mastery). My struggle is related to my last bullet point: the purpose of our assessment should be to improve student learning. However, as the person managing the people who are improving student learning, I have to ask: at some level, does assessment become about monitoring and auditing instructors?

Could compliance come in the form of group learning? I'm slowly shifting our existing lesson plans from a shared network drive to Google Drive, and at our last meeting another of my colleagues suggested that we each upload three lesson plans, activities, or assignments to the drive each month and share how we teach a topic.
