Earlier this week, an essay written by Adam Kotsko was published in Inside Higher Ed called “Making the Best of Assessment.” Two key sections of his piece stood out to me:
Would I really object if someone suggested that my institution might want to clarify its goals, gather information about how it’s doing in meeting those goals, and change its practices if they are not working? I doubt that I would: in a certain sense it’s what every institution should be doing. Doing so systematically does bear significant costs in terms of time and energy — but then so does plugging away at something that’s not working. Paying a reasonable number of hours up front in the form of data collection seems like a reasonable hedge against wasting time on efforts or approaches that don’t contribute to our mission. By the same token, getting into the habit of explaining why we’re doing what we’re doing can help us to avoid making decisions based on institutional inertia.
Despite that overall optimism, however, I’m also sure that there are some things that we’re doing that aren’t working as well as they could, but we have no way of really knowing that currently. We all have limited energy and time, and so anything that can help us make sure we’re devoting our energy to things that are actually beneficial seems all to the good.
Recently, I experienced an assessment phenomenon that I’ve started to affectionately call “when data challenges our belief system.” (My colleague has also written about this here; when I was telling another colleague about this, she referred to it as “when reality interferes with our denial.”) I’ve been helping my colleagues work with assessment methods and their findings for many years now, but it only became apparent to me recently that sometimes we don’t want to believe what we see in our assessment findings or in other sources of data. Sometimes we’d prefer to just erase the evidence or the findings of an inquiry project rather than face the reality that we might be able to do something better, something different.
Joan Didion wrote a book with the title “We Tell Ourselves Stories in Order to Live.”
In trying to build an evidence-informed culture for improvement in higher education, I have come to believe that our willingness to interrogate the stories we tell ourselves might be one of the biggest challenges we face. Resources? Yep: we certainly need those! Buy-in that the process is worthwhile? Totally important! Support and learning to enact effective assessment practices? Absolutely necessary. But …
We tell ourselves stories.
And guess what! I am totally guilty of this! Here’s a recent example: Using findings from a database-informed report I received almost 7 years ago, I believed that 95% of students who took an introductory course offered in my department went on to take many more courses at the university. This became my department’s story (because it was true — in 2006). It was a great story, until it wasn’t. When I requested and received an updated report, this is what we found out, and what we did:
Of the 388 students who took the course from Summer 2009 through Fall 2013, 90 took only this course. Thus, 23% of students who took the course didn’t take anything else. And although this means 77% of students did take other courses (we can celebrate that – it could have been worse, after all), we needed to think about whether our original story (remember it? 95%?) still held water. My departmental colleagues and I discussed this report, and once we came to grips with the new story the data was telling us, we realized that we wanted to make some advising process improvements with the goal of increasing the number of students who go on to take more courses at the university.
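For anyone who wants to check the new story’s numbers, the arithmetic is a one-liner. Here is a minimal sketch; the counts come from the report described above, and the variable names are mine:

```python
# Counts from the updated report (Summer 2009 - Fall 2013)
total_students = 388   # students who took the introductory course
took_only_this = 90    # of those, students who took no other courses

pct_stopped = round(took_only_this / total_students * 100)
pct_continued = 100 - pct_stopped

print(pct_stopped, pct_continued)  # 23 77
```

A long way from the 95% in the old story, which is exactly the point.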
What’s challenging from an assessment perspective is that this practice of storytelling can limit our ability to use information to make improvements; more significantly, it can limit our learning. Of course we should be analytical about our findings; we should understand their limitations, reliability, and validity, as well as the circumstances and the context. But if we believe everything to be perfectly fine — or even quite good — reliable findings that tell a different story can be hard to stomach. Sometimes folks blame the data; worse, they blame the messenger; far, far worse, they blame students.
We tell ourselves stories.
One of the greatest powers of assessment and of an evidence-informed, improvement-oriented culture is that it can foster critical reflection on practice, but only if we can be — if we’re willing to be — critically reflective. I also think assessment can foster really important conversations about students’ experiences and learning among colleagues (such as the great conversation we had in our department when the new data no longer supported our outdated story). In other words: assessment itself can foster our learning. But when we get stuck with our stories and we can’t see that there might be a different reality out there, we shut learning out, and we shut out the opportunities that can result from learning.