Resisting Assessment

As an ongoing project, I have been considering if and how we should assess the library instruction sessions that take place during the course of the semester. Sparked by a short quote in Critical Information Literacy, and by my own reflection on assessment practices in our program, I started researching criticisms of the "culture of assessment" that has been part of higher education since the early 2000s and was pushed in academic libraries with the publication of Megan Oakleaf's Value of Academic Libraries report.

Two articles have really influenced my perspective: Wall et al. (2014) and Bennett and Brady (2014).

Part of my job is assessing our information literacy program, which includes the library instruction sessions. After reading these articles, I wondered: do I get rid of assessment altogether? One way to approach the question is to consider the definition of assessment, which Wall et al. (2014) put into three main camps: examining student learning, examining programs, and determining institutional effectiveness. Both Wall et al. and Bennett and Brady support genuine, faculty-driven curriculum development, but find assessment at large to be a problematic shift toward "monitoring and auditing educators, rather than a tool for teaching and learning" (Bennett & Brady, 2014, p. 47). Assessments done for the goals of efficiency, revenue, and prestige "serve a market rationale associated with the universities creating workers and knowledge for economic development, rather than primarily serving more abstract educational purposes" (Wall et al., 2014, p. 9).

If I had no self-control, I would have highlighted both papers in their entirety, but I somehow managed to limit myself to the main points, with underlines and stars and exclamation points peppered throughout. I found myself nodding along, making notes in the margins, and talking about these articles to anyone who wandered past my office.

But if I agree that assessment is problematic, what can I do at my assessment-driven institution (and within library culture as a whole)?

We have six information literacy program learning outcomes, about which Bennett and Brady's point applies: "The use of the term 'learning outcome' for what is to be included in a whole program of study leading to a qualification such as a degree constitutes a misuse... [T]he further away from the student and the teacher in a classroom, the more remote, generalized and irrelevant statements of learning outcomes become" (p. 47).

Not only do we have six program learning outcomes, but those are mapped, in our university assessment software, to the course learning outcomes in our freshman information literacy course, and in our embedded writing courses, and in the flagged graduation-requirement courses, and I'm standing here just supporting a neoliberal ideology and the capitalistic culture of higher education.

On a large scale, the student outcomes that we obsessively measure in higher education (retention rates, grades, graduation rates, employment after graduation) have less to do with learning practices in the classroom and our pedagogical inputs, and more to do with larger structural inequalities such as inadequate college preparation, student debt, and "lack of access to social, cultural, and economic capital" (Bennett & Brady, 2014, p. 150).


So maybe instead of focusing on what we shouldn't be doing, I'll look at what we should be doing: We should be caring about student learning. And we should be caring about student learning because we care about student learning. Not because we are monitoring educators, or because we are practicing market accountability.

The best part of these two articles, which I'm recommending you read, is that after they lay out the problems, the authors present solutions: "situating assessment within evaluation as a socio-political practice" (Wall et al., 2014); or, Bennett and Brady's (2014) concluding paragraph on how to resist. My main takeaway: practice critical librarianship. By this I mean question motives, consider power structures, look at who benefits and who is harmed, and support meaningful and authentic curriculum development practices.

Specifically, ask:
  • What is the purpose of this assessment? (i.e., is it free of the managerial purposes of efficiency, revenue, or prestige?)
  • What will we do with the information from this assessment? How will it be interpreted?
  • What are the consequences of the assessment? (performance review? accreditation? marketing? political positioning? publication goals? advancing learning? identifying program improvements?)
  • Is your assessment a form of self-reflection, critique, and learning? (spoiler: it should be)
  • Are you engaged in an assessment dialogue that recognizes the complexity of teaching and learning in higher education?
  • Who is implementing the assessment? What biases or stakes do they have?
  • Has the assessment process been mandated? Why? By whom?
  • What are the acceptable ways of conducting assessment at your institution? Why? 
  • Who has access to the results?
  • Is the assessment process transparent?
  • Who is not included in the assessment process? Why?
  • Are you responsibly interpreting the results by considering the context of the data, and drawing appropriate conclusions and recommendations?

Currently, as part of our program, we assess the freshman-year information literacy course, the writing course we're embedded in, and, through our general education council, the information literacy graduation requirement that exists in the majors. I would argue that this assessment is done with a focus on teaching and learning, as part of curriculum development, and to make sure students are introduced to, practicing, and performing these skills. Instructors are not singled out or punished -- instead, the results provide a starting point to discuss what and how we teach, and which approaches seem to better support student learning.

It would be wrong not to acknowledge that this program-level information literacy assessment was first inspired by the Association of College and Research Libraries' Assessment in Action program, and then reinforced by our institution's impending accreditation visit. Perhaps the capitalistic goals of assessment got everyone, begrudgingly, to do assessment and put course learning outcomes on every syllabus. And yes, it was mandated, top-down, that every department do program assessment and submit the results into our assessment management software. And yes, I realize that there are people who do only what is minimally required, and who say "SLOs" with such disdain in their voices in faculty meetings.

But, optimistically, I would like to argue that those critical questions about assessment, and the self-awareness of why and how and who's doing it, still matter: even though administration is using the results to serve its needs, that doesn't preclude our ability to use those same results ethically and reflectively in our own teaching and learning practices. In our own department, we look at past student-artifact assessment results to rethink what we emphasize in our class content and how we teach it. We have conversations about expectations of learning, and about what we really want our students to take with them from our classes. In our general education council meetings, we've had numerous discussions about how to build that reflective practice into assessment, and how to explicitly collect information focused on student learning -- not on faculty, and not for the purposes of faculty monitoring, because we realize, as a council, that we are that distant body removed from the teacher-student relationship.

We do, at a basic level, feed into accountability, in that we observe whether students are learning the things we say they will learn in our general education program. And that does ultimately feed into administration's goals of prestige, revenue, marketing, and management.

So where does this leave me with my original question of if and how we should assess library instruction sessions? We currently track instruction in a stats form, which basically tells us who taught what, at what time of day, where, and to how many students. We use the form to identify trends in order to manage scheduling (staff and space) and outreach (to other departments). While it is not explicitly stated anywhere (although it probably should be), the overall goal of the library instruction session model at our institution is to model and practice the information literacy skills needed to accomplish a specific task or assignment.

My conclusion is that, in my role, I should encourage reflective teaching practice. We do not need a large-scale assessment of student learning -- we could not responsibly make any sort of conclusions or recommendations based on this scattershot kind of on-demand instruction. The best we could do is at a hyper-local level, where an individual librarian does the evaluation they need to improve their teaching and their students' learning. I'm repositioning this task away from "How do we assess library instruction sessions?" to "How can I encourage reflective teaching practices that support learning and avoid the trappings of neoliberal ideology in higher education?"
