Do the USNWR Rankings Limit Innovation in Higher Education?


Last month I had the pleasure of speaking with Dr. Ricardo Azziz, former president of Georgia Regents University, about the U.S. News & World Report (USNWR) higher education rankings. The conversation is especially timely as millions of students and parents are knee-deep in the college application season this fall.

In our conversation we explored many of the flaws inherent in the rankings system, but in talking with a former college president, I was particularly interested in how the importance and visibility of these rankings shape the decisions university leaders make. Do the rankings support the innovation and change needed in higher education, or do they actually hinder our ability to take risks and try new things? Our conversation was wide-ranging, and we uncovered important insights for boards of trustees, policymakers, and university leaders.

In the end, we concluded that the rankings do indeed inhibit innovation in higher education. This happens because the rankings:

  1. Credit institutions that can spend more, not those that do more with less.
  2. Feed an unproductive competitiveness.
  3. Don’t accurately measure or even offer a reasonable proxy for quality.

1. Credit for Spending More, Not Doing More With Less

Ten percent of an institution’s score in the rankings is driven directly by the amount of money spent per student. An additional 20% of the score is driven by “faculty resources,” which includes the proportion of classes with fewer than 20 students, average faculty pay, and the proportion of faculty who are full time, among other factors.

Improving any of these factors costs money. Because this is where the rankings assign their weight, wealthier institutions with more resources naturally rank higher than their less-resourced peers.

In effect, 30% of an institution’s score can be attributed to wealth and financial resources. Do these factors equate to quality or outcomes? Not necessarily. The rankings don’t differentiate between institutions whose spending demonstrably improves student outcomes and institutions whose spending doesn’t, nor do they reward institutions that spend less by finding more innovative ways to serve their students. USNWR is essentially telling institutions that the more they spend, regardless of the return on those dollars, the higher they will rank.
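To make the arithmetic concrete, here is a minimal sketch in Python of how a weighted composite score rewards spending. This is an illustration under stated assumptions, not USNWR’s actual methodology: the 10% and 20% weights mirror the figures cited above, but the component names, the 0–100 scale, and the example values are hypothetical.

```python
# A minimal sketch (not USNWR's actual formula) of a weighted composite score.
# The 0.10 and 0.20 weights mirror the figures cited above; the component
# names, 0-100 scale, and example values are hypothetical.

WEIGHTS = {
    "financial_resources_per_student": 0.10,  # direct spending per student
    "faculty_resources": 0.20,                # class size, faculty pay, etc.
    "everything_else": 0.70,                  # reputation, selectivity, and so on
}

def composite_score(components: dict) -> float:
    """Return the weighted sum of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * value for name, value in components.items())

# Two hypothetical institutions, identical except for spending-linked inputs:
frugal_u = {"financial_resources_per_student": 40, "faculty_resources": 50, "everything_else": 80}
wealthy_u = {"financial_resources_per_student": 95, "faculty_resources": 90, "everything_else": 80}

print(composite_score(frugal_u))   # 70.0
print(composite_score(wealthy_u))  # 83.5
```

With identical performance on everything else, the hypothetical wealthier institution scores 13.5 points higher purely on spending-linked inputs – exactly the dynamic described above.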

The significant weight placed on financial expenditures removes the incentive to try to do more with less – that is, to pursue innovations that improve student learning and graduation rates while simultaneously lowering the cost of achieving them. Yet that is exactly the kind of innovation we need if we are to drive tuition lower and improve access (something the rankings effectively penalize as well, since greater selectivity boosts an institution’s score).

2. Rankings Feed an Unproductive Competitiveness

The consequences of rankings that directly incentivize spending more per student and per faculty member are compounded because college and university leaders are under significant pressure from boards, policymakers, parents, and others to maintain or improve their institution’s ranking.

“Understand that rankings are elemental to the way we humans view competition,” Dr. Azziz remarks. “We are all very competitive; it’s part of our human nature. It’s easy to be competitive when you have a marker to tell you exactly how you’re doing, whether it’s the yardage on a football field or the number of goals on a soccer field. This is why sports is so attractive to people: you know right in the moment if you’re winning or losing. Rankings are like that. It’s easy to look at. ‘I’m now 73 and I’m going to move up to 72.’”

Boards can empower university leaders to resist the urge to play the rankings game, but most boards actually play a counterproductive role. Remember, most board members are not experts in education: according to AGB’s 2010 Policies, Practices, and Composition of Governing Boards Survey, only 13.1% of board members at private institutions have any professional background in education. These individuals come from highly competitive backgrounds, and rankings feed that competitive urge. As Dr. Azziz states, “There is no better rallying cry for a board than ‘let’s pile resources in to move up 10 points in the rankings.’”

And boards aren’t the only stakeholders pressuring college leaders to move their institutions up in the rankings. Because the value provided by different colleges and universities is difficult to define and differentiate, most external stakeholders rely on the rankings, which are easy to use and purport to provide an “at a glance” snapshot of which institutions offer greater value.

Competition by itself isn’t negative; in fact, competition for students and faculty can push institutions to improve the quality of their academic programs and the economic outcomes for their graduates. Unfortunately, the basis of competition in the industry is too heavily influenced by the USNWR rankings—rankings that ignore meaningful measures or proxies of quality.

3. USNWR Rankings Don’t Measure What Matters: Quality

Rather than offer meaningful data on quality to help students differentiate one institution from another (for example, by ranking on the basis of well-known high-impact learning practices, such as learning communities, writing-intensive courses, and undergraduate research), USNWR instead focuses primarily on two measures: academic reputation (the largest single weight in the score, 25%) and financial resources expended per student.

In effect, USNWR reinforces traditional assumptions about which institutions offer value. “If you go to the top 100 or even top 200 in USNWR,” Dr. Azziz notes, “there aren’t going to be a lot of surprises. The rankings have been a way to cement the old world order in higher education. The way to get a more holistic ranking – a ranking that actually matters, that can be used differently by different student populations, that is useful in educating those populations as they try to find their right fit in higher ed – is to embrace the opportunity for change.”

Yet because academic reputation – the measure that best reflects the status quo – is the single largest factor in the rankings, institutions are incentivized to maintain current educational practices rather than risk innovating with the design and delivery of education in ways that might better serve low-income, first-generation, and at-risk students. In this way, the rankings perpetuate barriers to access.

The Rankings Hold Us Back When We Most Need to Leap Forward

The existing structure of ranking systems keeps us from rethinking the university. Dr. Azziz astutely points out that “change does not come easy in higher ed, and we value what is known. Faculty, alumni, boards – all of these stakeholders want things to stay constant, the same. The mindset is: I’d rather keep what I have and not take any risks, even if these risks represent innovation. Higher ed is a very risk-averse environment. But that means you get stuck where you are in the rankings. You don’t want to risk losing your place in the line by trying something new. So the rankings often dissuade us from innovative, transformative initiatives.”

In an era when so much is demanded of higher education – educating a changing demographic, producing more graduates to fill the jobs of the future, using technology to lower costs and expand access – we need incentives for innovation, not barriers to it.