At the midpoint of the academic year, many minds on campus turn towards assessment. And as they do, many other minds turn towards complaining about assessment. In turn, the poor suffering souls who serve on university assessment committees sigh deeply, say goodbye to family and friends, and trudge down the road to martyrdom. At least, this is the way it usually goes, more so when you allow a group of academics the opportunity to kvetch about it; to hear us talk, you’d think that we’re being asked to eat puppies while listening to Ted Cruz sing power ballads.
So why is it this way? It shouldn’t be. Assessment is one of the most vitally important things we do in higher education. In its fundamental sense, assessment is no more than proving that we’re doing what we say we are. All those flowery quotes in all of those glossy viewbooks about “discovering your passion” or “transformative learning” or “education with an impact?” Well…prove it. More directly, as individual teachers, we can’t know if we’re successful in any of our endeavors if we don’t assess. All the critics, the skeptics, the blithely ignorant edubro “reformers”–we can’t meet and defeat them without assessment.
The problem, though, is that for the most part, we’ve transformed assessment into a rundown of “outcomes” that we check either “yes” or “no” next to. Can the student write a research paper? Did they solve for x? Have they passed a licensure exam? It’s easy to do it that way–the data’s right there, and a simple yes-or-no answer leaves little room to screw up the math when we average the results. But of course, this type of assessment tells us what happened, not why.
“Only 62% of the students in this program passed the board certification exam.” “Why such a low rate?” “I dunno.” “So what do we do?” “Maybe give them an extra exam?” “GENIUS! We’ll promote you to Associate Dean for Assessment and Innovation!”
And that, in a nutshell, is what happens on campuses across the country. And that’s why so many of us in higher ed break out in hives when we hear the word “assessment.” Because it seems so detached from what it is we actually do with our students. That detachment, I think, stems from an overemphasis on outcomes, defined too narrowly as definable, measurable end products. That’s a very appealing standard for those tasked with institutional-level oversight; one can then report to various external constituencies what percentage of “our” students have “accomplished” or “mastered” or “demonstrated” an all-important (which really means specifically-definable) outcome. As a result, assessment becomes inauthentic, an artificial-looking caricature of what it is we really do with and among our students. It becomes top-down rather than faculty-level-up because there’s no buy-in: teachers know it doesn’t represent what they do, so the practice of assessment defaults into some administrative unit’s purview, “assisted” by an unfortunate clutch of faculty who were out of the room when it came to choose members of the Assessment Committee.
As I’ve previously argued in this space, however, assessment is too important to suffer such an ignominious fate. Make no mistake: it languishes in captivity in part because we as faculty and academic staff have at least partially abdicated the conversation (or been forced to abdicate, in the worst-case scenarios). So what do we do? I think the vital first step is to redefine the concept of “outcome” by framing it as only part of a spectrum. In other words, we need to tie assessment and process together. My thoughts on this were crystallized over the weekend when I read a beautiful and touching tribute by Lara Dotson-Renta to the late John Rassias of Dartmouth College. Aptly titled “Humanizing the Humanities,” Dotson-Renta’s essay clearly engages this matter of outcomes and process:
> I think of the importance of educators like John in the current climate, as the U.S. educational system faces profound challenges and its politics are increasingly debated in absolutes, absent shades of gray. As a culture, the country has come to place decreasing value on thoughtfulness, abstraction, and nuanced critical thinking that poses big (uncomfortable) questions rather than presuming answers. Those charged with overseeing learning often want “outcomes” rather than process, even if those outcomes are temporary, even if the picture they paint is incomplete. The labor of teaching—that hands-on, dynamic and most valuable of endeavors—is often shortchanged and even derided. The youngest of children are besieged by academic expectations rather than exploration early in life, and the nation’s college students and even the (often adjunct) faculty that teach them find themselves anxious about financial stability and the viability of higher education.
The key point here is that the “outcomes” that have become the gold standard of much of our conversation on assessment and educational performance by both students and institutions are indeed “temporary”; they are fleeting and illusory. Just as we now know that repeated reading of a textbook doesn’t aid understanding of the material so much as it promotes an illusion of mastery and a short-term increase in rote recall, so too can we see that “achievement” of some closely-defined “outcome” only gives a snapshot of a momentary performance, not any reliable data about how (or if) a student has actually learned in the deep and meaningful sense. We talk a good game about bringing students into a scholarly conversation, about building a foundation for lifelong learning, about opening up opportunities and hitherto-unknown intellectual landscapes…but then we assess our students on whether they complete assignment-based tasks like “can they write a 15-page research paper that uses Turabian style properly?” And if this sounds like an unfair generalization, double-check your programs’ and institutions’ assessment outcomes and make sure; you might be surprised.
Far better for all involved if we focus on process as an outcome in itself. We need to advocate for assessment that honestly reflects the fact that knowledge is processual instead of attainable in the final sense. We need to think deeply about aligning our learning objectives/outcomes/whatever we call them–from the individual class to the entire curriculum–to what we see as essential for both our disciplines and for higher ed in general.
Let me offer an example: in my Medieval World History survey, one of the assessments I use for my student teams is an exercise where they have to frame an answer to an essentially unanswerable question by addressing a counterfactual scenario I pose to them.* The answer per se is not what I’m assessing–there’s no “right” or “wrong” answer to an unanswerable question. What I am assessing is the process the students use to create their answer. Are they looking at evidence and using it to buttress the claims their scenarios make? How are they grappling with weighty matters like causation and contingency? How well are they able to articulate a complex, evidence-based argument? Did they “think like historians” (i.e., critically examine multiple viewpoints, weigh arguments and evidence against one another) as they performed the task? There’s no single specific outcome I’m looking for, because the process itself is the outcome. There’s no single specific answer I’m looking for, because I’m assessing the questions instead. Between what they produce for this assignment and the brief reflective paper they submit about the process of that creation, I’m able to do good, thorough assessment of the things that really matter in this course: critical thinking, information fluency, and an understanding of the complexities of contingency and causation.
What if we took that approach to the programmatic and institutional levels? It can be done, and the results can be powerful. To return assessment to the center of our academic work, to stop counting beans and start tracing progress, to better articulate what it is we do with students, consider the following:
- Empower units and departments to articulate their broad outcomes and the measures by which they can assess them. Support those departments in sticking to that plan and in their collection of meaningful, even if tentative and contingent, data.
- Focus less on “deliverables” and more on behaviors and habits of mind. We’re assessing student learning, not package delivery.
- Discern processes that are fundamental to the very nature of a discipline, or of a collegiate-level education in general, and create ways for students to work through them–and for faculty and staff to assess that working-through.
- Make assessment a conversation rather than a product. Don’t just collect annual assessment reports and dump them in some bureaucratic black hole. Share data between departments and units. Look for larger trends. Find common insights. Use them to ask better questions.
- Define assessment as a continuum of experiences rather than a series of disjointed snapshots. For data, context is everything. Meaningful data has several points of reference by which we can track change and growth over time.
- And remember the fundamental truth that learning is a process, not a destination. If our students are ever finished learning, then we aren’t doing it right. If we cling to the narrow definition of “outcomes,” we’ll never be able to accurately assess student learning.
Assessment is our story. It’s us telling the world–whether that’s our university administration, our students’ families, our field’s accreditors, or the legion of critics who charge us with malpractice–that we are indeed doing what we say we’ll do, that we’re building the habits of mind and tools for informed citizenship our students need. Higher education at its best–collaborative, communal, and meaningfully supported and valued–absolutely depends upon us making sure we do that story justice. Because in today’s higher-ed landscape, if we don’t tell our story, someone else will.
It’s time for us to own our stories.
*I ask them to tell me what would have occurred if the Chinese, rather than ceasing their maritime endeavors and turning inwards in the 1450s, had instead expanded their program and made contact with Europe. They’re tasked with writing an introductory section to a hypothetical textbook chapter on the Asian-European encounters of the fifteenth century; as a result they have to consider not just immediate effects, but long-term historical processes that would look remarkably different than they do now. Thus, they have to think deeply about, among other things, causation and temporal-spatial relationships. It’s great fun!