Understanding the U.S. News & World Report Rankings: Basics and Common Misconceptions

The U.S. News & World Report Best Colleges Rankings methodology is simultaneously clear and cloudy. U.S. News publishes its basic formula, and the weight it assigns to each score category can be visualized as a pie chart:

[Figure: pie chart of the weight assigned to each score category]

Though this may seem straightforward, U.S. News does not publish what goes into every one of these categories. Further, the visual weighting shown above is somewhat deceiving because of the way U.S. News standardizes and scales each piece of data. Taken literally, it suggests that each piece of data contributes its stated percentage to the overall score (faculty salary making up 7% of the raw score, retention 4.4%, expert assessment 20%, and so on), much as one might calculate an overall course grade from a set of assignments with different weights. That is not the case. Within each score factor, U.S. News scales and standardizes the data, and then weights the resulting standardized scores by the factor's respective percentage. Unlike in the course-grade example, in the U.S. News formula it is quite possible to achieve a "grade" above 100% or below 0% on any given factor. In fact, this is quite common, and it distorts the expected "weight" versus the actual contribution of each factor, in many cases quite drastically. If a school reports a faculty salary far above average, for example, that factor may make up 8% of the school's raw score, or 10%, or 15%. If it reports a below-average salary, that factor may make up only 4% or 5% of its raw score, rather than the 7% one might assume from a cursory reading of the methodology.
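U.S. News does not publish the exact transformation it uses, but standardization of this kind is typically done with z-scores (each value minus the field's mean, divided by the standard deviation). The Python sketch below is a minimal illustration under that assumption; the school names, salaries, and the 7% weight are all invented for the example:

```python
import statistics

# Invented faculty salaries (in $1,000s) for five hypothetical schools.
salaries = {
    "School A": 210,  # a far-above-average outlier
    "School B": 135,
    "School C": 120,
    "School D": 112,
    "School E": 105,
}
WEIGHT = 0.07  # the factor's nominal weight in the formula

mean = statistics.mean(salaries.values())
stdev = statistics.stdev(salaries.values())

for school, salary in salaries.items():
    z = (salary - mean) / stdev    # standardized (z) score; can exceed 1 or go negative
    contribution = WEIGHT * z      # weighted contribution to the raw score
    print(f"{school}: z = {z:+.2f}, weighted contribution = {contribution:+.3f}")
```

In this toy field, School A's salary factor contributes roughly +0.12 weighted points while School E's contributes roughly -0.05, even though both nominally carry the same 7% weight.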

Ultimately, scores well above the average on each factor are strongly rewarded, while scores below the average are severely punished, and both effects grow with the factor's weight, since each standardized score is multiplied by the weight shown in the pie chart above. This is the key insight: the most highly ranked schools are those that have scored significantly above their competitors on a few key elements. And when examining why one school is ranked above another, it usually comes down to a few specific categories in which the higher-ranked school outperformed both the national average and the schools ranked closest to it.
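To see why head-to-head gaps usually trace back to a few categories, the sketch below extends the same assumed z-score logic to a hypothetical four-school field with three factors (all names, weights, and values invented) and decomposes the raw-score gap between the top two schools factor by factor:

```python
import statistics

# Invented per-factor values for a small field of schools, with
# hypothetical weights loosely echoing the categories discussed above.
factors = {
    "expert_assessment": (0.20, {"Alpha U": 4.8, "Beta U": 4.7, "Gamma U": 3.9, "Delta U": 3.5}),
    "faculty_salary":    (0.07, {"Alpha U": 190, "Beta U": 185, "Gamma U": 140, "Delta U": 120}),
    "alumni_giving":     (0.03, {"Alpha U": 55,  "Beta U": 33,  "Gamma U": 20,  "Delta U": 12}),
}

def weighted_z(weight, values, school):
    """Weighted standardized score of one school on one factor."""
    mean = statistics.mean(values.values())
    stdev = statistics.stdev(values.values())
    return weight * (values[school] - mean) / stdev

gap_total = 0.0
for name, (weight, values) in factors.items():
    gap = weighted_z(weight, values, "Alpha U") - weighted_z(weight, values, "Beta U")
    gap_total += gap
    print(f"{name}: Alpha U leads Beta U by {gap:+.4f} weighted points")
print(f"total raw-score gap: {gap_total:+.4f}")
```

Even though alumni giving carries the smallest nominal weight here, Alpha U's outlier value makes it the largest single driver of the gap between the two leaders, which is exactly the pattern the next paragraph describes for Princeton.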

Princeton is a useful example. In most categories, Princeton is in close competition with its similarly ranked peers (Harvard, Yale, Columbia, etc.). But Princeton's extraordinarily high Alumni Donor Rate gives it a significant advantage over all of them, including the next-highest school, Dartmouth, and that overperformance helps Princeton secure its number-one spot. Picture a bell curve: Princeton sits in the far right tail, and it is disproportionately rewarded for being there. More broadly, you see the same effect for most top undergraduate schools in the Instructional Budget category, where their spending separates them from the schools that are near the top but not quite elite.
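The bell-curve point can be made concrete. Under a normal distribution, each step up the percentile ladder buys a larger jump in standardized score, so the far tail is disproportionately rewarded. A quick check with Python's standard library (illustrative only; not U.S. News's actual computation):

```python
from statistics import NormalDist

norm = NormalDist()  # standard normal distribution: mean 0, stdev 1

# z-score gained by climbing each percentile range
for lo, hi in [(0.50, 0.60), (0.60, 0.70), (0.80, 0.90), (0.90, 0.99)]:
    gain = norm.inv_cdf(hi) - norm.inv_cdf(lo)
    print(f"{lo:.0%} -> {hi:.0%}: z-score gain of {gain:.2f}")
```

Moving from the 90th to the 99th percentile yields roughly four times the z-score gain of moving from the 50th to the 60th, which is why an outlier like Princeton's donor rate punches far above its nominal weight.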

It is important to recognize what this means for an individual school's ranking. The U.S. News rankings are not static, and neither are the schools that make them up. Almost every school seeks to improve itself: the quality of its students, its reputation, the outcomes of its graduates. As a result, the averages, and indeed the entire bell curves, of many U.S. News metrics shift upward over time. This is a frequent source of frustration among university administrators, who look at their steady increases in test scores, in money spent, and in the quality of faculty hires, and wonder why their rankings are not improving in turn. You might think of this year-over-year rise in the averages as "rankings inflation," an idea not dissimilar to monetary inflation. Every year, a school must improve more and more (higher test scores, better graduation rates, improved reputation, higher expenditures) just to yield the same ranking outcome as it did in the past.
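The inflation effect is easy to simulate. In the hypothetical sketch below (all numbers invented), a school improves its metric by 2% every year, but the whole field, mean and spread alike, inflates at the same rate, so the school's standardized score never moves:

```python
# "Rankings inflation": steady improvement that never changes the z-score.
school = 150.0  # the school's metric, e.g. per-student spending (invented units)
mean = 130.0    # the national average of that metric
stdev = 20.0    # the spread of the field

for year in range(2020, 2026):
    z = (school - mean) / stdev
    print(f"{year}: school = {school:.1f}, national mean = {mean:.1f}, z = {z:+.2f}")
    # the school improves 2% a year, but so does everyone else
    school, mean, stdev = school * 1.02, mean * 1.02, stdev * 1.02
```

Six straight years of improvement leave the school's standardized score, and therefore its contribution to its own ranking, exactly where it started; to climb, a school must outpace the field, not merely improve.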

What does that mean for schools seeking to improve their rankings? It means they must go above and beyond what other schools are doing, often into outlier territory, to realize serious, continuous improvement, and any such effort must be sustained, or other schools will simply catch up. Those other schools are, at any given moment, absolutely trying to make the same moves. In fact, we work with a number of such schools, because it is exceptionally difficult to realize substantial upward movement in the rankings without a sophisticated understanding of how they work. We hope this blog sheds some light.