Assessment and Continuous Improvement in Higher Education

Posted by Joe Bauman, M.S. on Feb 13, 2019, 7:12:00 AM

It was more than a decade ago that the field of higher education started to focus on assessment. Educators were asking questions about what, and how much, students were learning throughout their experiences in higher education. At first, just assessing student learning and tracking the results was yeoman’s work.

Now, more than 10 years later, while documenting assessment results can still be no mean feat, just “doing assessment” is no longer enough. In fact, just “doing assessment” hasn’t been enough for some time from the perspective of the regional accrediting agencies in the United States.

Today, we must be able to show how we are using our assessment results to develop and implement meaningful action plans for continuous improvement. Software solutions, like the Assessment module of SPOL, have been designed to help institutions of higher education address assessment needs: from identifying outcomes, to mapping outcomes to the courses in which they are taught, to capturing and aggregating results, to documenting continuous improvement efforts.
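To make that workflow concrete, here is a minimal sketch of the kind of record-keeping such a module supports. The class name, field names, and numbers below are invented for illustration; they are not SPOL's actual data model.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Outcome:
    """A program outcome, mapped to the courses where it is taught."""
    name: str
    courses: list[str] = field(default_factory=list)   # the mapping step
    results: list[float] = field(default_factory=list) # e.g., % of students meeting the target

    def aggregate(self) -> float:
        """Roll captured course-level results up to one program-level figure."""
        return mean(self.results)

# Identify an outcome, map it to courses, capture results, aggregate.
writing = Outcome("Effective written communication",
                  courses=["ENG 101", "ENG 210"],
                  results=[72.0, 81.0])
print(writing.aggregate())  # 76.5
```

The point of centralizing records like these is the next step the article describes: the aggregated figure becomes the baseline an action plan tries to move.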

Why the focus on continuous improvement? Surely, good enough is good enough, right? The main reason for the focus on continuous improvement is the uniqueness of higher education institutions. Each institution has its own mission and serves a unique student body with a unique set of faculty members. Because of this uniqueness, regulators (such as the U.S. Department of Education) and accrediting agencies have been reluctant to establish across-the-board minimum targets for important metrics such as graduation rate.

Rather than set an external standard (which would quickly be criticized as arbitrary, meaningless, or unrealistic), these bodies have accepted the argument that institutions of higher education are best compared to themselves over time, rather than to other institutions. The best way to show that an institution is effective, then, is to show that its performance is improving relative to its own past performance.

Aside from external accountability, we also have intrinsic motivations for pursuing continuous improvement; after all, higher education is a mission-driven field, and we want to serve our students as well as we can. Knowing that we have made a positive difference in our students’ lives is often the most powerful motivator we have in higher education.

The logic of improving ourselves over time also applies to programs within an institution. For example, faculty members in the History program would likely object if their graduation rates were compared to the Nursing program's. A program's performance, therefore, can best be judged by comparing its current performance to its own past performance. We use assessment to measure a program's performance against the yardsticks the program itself defined when setting up its outcomes (the program learning outcomes as well as any outcomes not directly related to student learning). We need documented – and implemented – action plans to improve these results over time, because as the saying goes, "just hoping things will get better is not a plan."
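The self-comparison the paragraph describes boils down to a simple year-over-year check. A minimal sketch, with a hypothetical metric and made-up values (no real program's data):

```python
# Graduation rate (%) by year for one hypothetical program.
history = {2016: 58.0, 2017: 61.5, 2018: 63.0}

# Compare each year to the program's own prior year, not to another program.
years = sorted(history)
changes = [history[b] - history[a] for a, b in zip(years, years[1:])]
improving = all(change > 0 for change in changes)

print(changes)    # [3.5, 1.5]
print(improving)  # True
```

Even a trivial trend check like this makes the article's point operational: "effective" is defined as the line moving in the right direction relative to the program's own baseline, which is what an action plan is then written to sustain.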

A sound action plan for continuous improvement is rarely developed in isolation. In most cases, it requires the collaboration of colleagues to compare results, identify potential root causes of any trouble spots, brainstorm solutions, and select the best course of action. This process can be very satisfying work, and collaborating on an action plan can combat the sense of isolation that some educators feel when they have little interaction with their professional colleagues.

Software like SPOL’s Assessment module can make the collaboration process easier by giving the members of a program a common set of tools to record their results and capture their ideas, laying the groundwork so that the face-to-face part of the process is as productive as possible.

Topics: Continuous Improvement, Institutional Effectiveness, Assessment