Developing a Metrics-Driven Culture within Student Affairs


Series: Managing the Student Lifecycle

This new series convenes expert perspectives on student success and predictive analytics. We hope to empower enrollment managers, student affairs professionals, deans, and faculty to think more deeply about their student data and predictors of success, and to manage the student lifecycle holistically from recruitment to retention to completion.

Earlier in this series:
Improving Student Success Can’t Be a One-Office Effort

Metrics-driven student affairs: Can it be done? Why is it difficult? How do we get there?

Enrollment management and student affairs offices tend to agree that managing the student lifecycle to promote greater levels of student success requires collaborative effort. Yet as enrollment and student affairs offices move to work more closely together, there can be cultural disconnects over the extent to which those offices rely on data and analysis in their day-to-day work. Managing the student lifecycle intentionally and effectively will require bridging that gap and adopting a more metrics-driven approach in student affairs.

Closing the Gap

For enrollment managers, metrics are already a part of their daily work. Enrollment management has been a data-driven culture for more than a decade. In fact, sometimes enrollment managers feel like coaches whose success gets measured by wins and losses. Those highly visible “wins and losses” can consist of application volumes, admit and yield percentages, tuition discount rates, size of the class in relation to the target, and percentages of priority populations enrolled, such as underrepresented students, first-generation students, women in STEM disciplines, full-pay students, high-impact athletes, and perhaps even bassoon, oboe, or French horn players. Chief enrollment officers are keenly aware that those incoming class metrics are as important to presidents and trustees as wins and losses by the basketball, football, or hockey coaches are to athletic directors and boosters.
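As a rough sketch of the arithmetic behind a few of those numbers, the core funnel rates reduce to simple ratios. The figures below are invented for illustration and do not describe any real institution:

```python
# Illustrative only: common enrollment funnel metrics computed from
# hypothetical counts (none of these figures describe a real institution).
applications = 12_000
admits = 6_600
deposits = 1_650
gross_tuition = 49_500_000      # sticker tuition x enrolled students (hypothetical)
institutional_aid = 21_780_000  # unfunded institutional aid awarded (hypothetical)

admit_rate = admits / applications     # selectivity
yield_rate = deposits / admits         # share of admits who enroll
discount_rate = institutional_aid / gross_tuition

print(f"Admit rate:    {admit_rate:.1%}")
print(f"Yield rate:    {yield_rate:.1%}")
print(f"Discount rate: {discount_rate:.1%}")
```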

Student affairs offices, on the other hand, have traditionally operated in a less numbers-driven culture, one largely focused on different facets of student development, identity, and engagement. Much of that work has seemed to defy quantification. As a result, the chief student affairs officer has often been the analog cousin to the now fully digital enrollment manager. But the situation is changing. In this data-driven age, student affairs offices are becoming more analytically focused and more assessment-oriented, in no small part because boards and funders increasingly want accountability and return on investment (ROI) for the resources they allocate.

Certainly there are metrics of success well known to student affairs offices, such as retention rates, graduation rates, and levels of engagement with co-curricular offerings. But a deeper look at each of these metrics reveals a need for greater specificity. Consider retention rates. Many institutions point to their first-to-second-year retention without examining subsequent year-to-year retention. But is high first-year retention a true mark of success at institutions where significant numbers of students drop out or get off track after the start of the sophomore year?
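To make the point concrete, here is a minimal sketch, using invented cohort counts, of how year-to-year retention can be tracked beyond the first year. Note how a strong first-to-second-year number can coexist with substantial later attrition:

```python
# Year-over-year retention for a single entering cohort (invented counts).
# A real analysis would pull term-by-term enrollment from the student
# information system rather than hard-coding totals.
cohort_size = 2_000
enrolled_in_year = {1: 2_000, 2: 1_780, 3: 1_560, 4: 1_410}

for year in range(2, 5):
    retained = enrolled_in_year[year] / enrolled_in_year[year - 1]
    of_cohort = enrolled_in_year[year] / cohort_size
    print(f"Year {year - 1} -> {year}: {retained:.1%} retained "
          f"({of_cohort:.1%} of the original cohort still enrolled)")
```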

What about graduation rates? Which ones are institutions trying to improve? Four-year? Six-year? Closing gaps between men and women, underrepresented and majority students, STEM and other academic programs, Pell-eligible students and full-pay students, or all of the above? Institutions must be clear and intentional about the graduation rates they are trying to improve.

In the realm of student engagement, does the institution know whether more engagement correlates with greater persistence to graduation? If so, which forms of engagement best predict success? Service learning, internships and cooperative experiences, study abroad, guided research with a professor? If more engagement does not lead to higher graduation rates, why doesn’t it?

A data-driven student success effort would likely rest on other measures, such as predictions of which students are on track to complete their major and graduate in four years, information about risky credit loads and course combinations, and data on the types of interventions that are most effective at keeping students from withdrawing. Getting a handle on such measures (which some call throughput metrics) would be a critical first step toward building an analytical culture in any student affairs office. Equally valuable are output metrics, such as student satisfaction and suggestions-for-improvement surveys. Universities commonly employ student satisfaction surveys when seniors are approaching or have recently crossed the graduation stage. But what about introducing how-are-we-doing, what-could-we-have-done-better surveys as early as the sophomore or junior year?
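As one illustration of what a throughput metric might look like in practice, the sketch below flags students whose credit pace puts a four-year finish at risk. The 120-credit degree, the pace threshold, and the sample students are assumptions for illustration, not recommendations:

```python
# A minimal "throughput" check: flag students whose credit pace puts a
# four-year finish at risk. The 120-credit degree, two terms per year,
# and the sample students are hypothetical assumptions.
from dataclasses import dataclass

CREDITS_TO_GRADUATE = 120
TERMS_PER_YEAR = 2
YEARS_TO_DEGREE = 4
CREDITS_PER_TERM = CREDITS_TO_GRADUATE / (TERMS_PER_YEAR * YEARS_TO_DEGREE)  # 15

@dataclass
class Student:
    name: str
    terms_completed: int
    credits_earned: int

def on_track(s: Student) -> bool:
    """True if the student has earned at least the expected credits so far."""
    return s.credits_earned >= CREDITS_PER_TERM * s.terms_completed

roster = [Student("Student A", 4, 62), Student("Student B", 4, 48)]
for s in roster:
    status = "on track" if on_track(s) else "flag for advising outreach"
    print(f"{s.name}: {s.credits_earned} credits after {s.terms_completed} terms -> {status}")
```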

A further complication is the basic fact that higher retention and completion rates are the sum of many things, explains Melinda Karp of the Community College Research Center at Teachers College, Columbia University. Karp believes that the difficulty of developing useful metrics should not prevent student success offices from establishing metrics and working to refine them over time. A beginning step, Karp says, is to ask: what am I trying to measure, and what will the measure tell me? She also suggests some key questions that institutions need to clarify as they move toward establishing student success metrics:

  1. What does a successful student look and act like at your institution?
  2. Conversely, what does an unsuccessful student look or act like?
  3. How does the metric align with institutional goals?
  4. What metrics do peer and aspirant institutions use effectively?
  5. Why have those institutions been more successful than your university?

Before embarking on an analytical overhaul of student affairs, Karp advises keeping in mind several key points. First, measures and metrics must count and chart things that really matter. Second, differential measures may be required for different types of students or campuses (in other words, metrics need to be a good fit). Third, be careful that in the rush to establish metrics, you don’t settle on measures that are reductionist or “look-good” benchmarks that upon closer inspection have little meaning.

How Do We Get There?

Right now, higher education has more questions than answers on this issue. As student affairs offices become more data-driven, will common metrics emerge for measuring performance? That is not the only question that universities committed to student success will want to address. What will those metrics be? How will institutions know that the measures are meaningful? And what will happen, in a more success-driven student affairs environment, to the ever-increasing compliance, risk, and crisis management demands placed on student affairs units? Are those avoidance-of-catastrophe functions of student affairs divisions congruent with the emerging emphasis on student success? If so, how should their effectiveness be measured?

Still, some institutions are seeing success in moving toward a metrics-driven culture for student affairs.

Stony Brook’s Example

Stony Brook University (16,000 undergraduates) is an example of an institution with a high (90%) first-to-second-year retention rate but a relatively low (48%) four-year graduation rate. After setting its sights on a 60% four-year graduation rate, Stony Brook recognized that getting there would require significant improvements in retention beyond the first year. To improve year-to-year retention rates, Stony Brook created the Class of 2018 Advising Initiative, which provides all rising sophomores an adviser/mentor/coach who works with them until graduation. Stony Brook did this without an increase in resources by augmenting its 50 full-time advisers with 80 volunteer advisers, who receive intensive training. So far the Advising Initiative has been a success, exemplified by record retention rates for mid-year sophomores (87%) and first-semester juniors (84%). Another key measure of persistence suggests that Stony Brook may yet surpass its goal: the number of undeclared sophomores has declined from 395 in November 2015 to 68 in June 2016.

Purdue’s Examples

Purdue University, with 30,000 undergraduates enrolled across nine schools and colleges, has embedded a program assessment specialist in its student success division. That assessment specialist uses an evaluation template that makes the objectives and measures clear to all, while also providing a context for charting progress toward meeting institutional goals. Having an assessment office is also a way for Purdue to signal that student success programs and initiatives need to show their effectiveness in order to get continued support.

Another data-based intervention to promote student success at Purdue is a new predictive tool called Forecast. To develop Forecast, Purdue collected data over four years from 24,000 students—tracking behaviors such as utilization of the university’s learning management system, card use at certain campus facilities, and whether students registered for classes on time or took classes with friends. Using that data, Purdue determined which student behaviors correlate most with success and then developed Forecast, which will be released as a student app to promote success-oriented behaviors.
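The sketch below is not Purdue’s actual model; it simply illustrates the general step of checking which tracked behaviors move with a success outcome, using fabricated records (and assuming the pandas library is available):

```python
# Not Purdue's model: a toy illustration of checking which tracked
# behaviors correlate with a success outcome, using fabricated records.
import pandas as pd

df = pd.DataFrame({
    "lms_logins_per_week":  [12, 3, 9, 1, 15, 6, 2, 11],
    "campus_card_swipes":   [4, 0, 3, 1, 5, 2, 0, 4],
    "registered_on_time":   [1, 0, 1, 0, 1, 1, 1, 1],
    "graduated_in_4_years": [1, 0, 1, 0, 1, 1, 0, 1],
})

# Rank behaviors by how strongly they correlate with four-year graduation.
correlations = df.corr()["graduated_in_4_years"].drop("graduated_in_4_years")
print(correlations.sort_values(ascending=False))
```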

If your campus is instituting metrics to assess the effectiveness of your student affairs efforts, you are entering territory where best practices are still being developed and where there is significant skepticism that the work of student affairs offices can be quantified. This is a fluid space, where the rules are still being written. Yet as long as funders (state legislatures, foundations) and regulators (accrediting agencies, federal entities) continue to emphasize and reward student success and degree completion, institutions will need to figure out ways to measure the effectiveness of their efforts to achieve those goals.