U.S. News & World Report Best Colleges Ranking


The U.S. News & World Report Best Colleges Ranking is an annual set of rankings of American colleges and universities published by U.S. News & World Report. The first "America's Best Colleges" report appeared in 1983; the rankings have been compiled and published annually since 1985 and are the most widely quoted of their kind in the United States.
The rankings are split into four categories: National Universities, Liberal Arts Colleges, Regional Universities, and Regional Colleges, with the latter two categories further split into North, South, Midwest, and West. The rankings are based upon data that U.S. News & World Report collects from an annual survey sent to each school, as well as opinion surveys of faculty members and administrators from other schools. The publication's methodology was created by Robert Morse, who continues to oversee its application as chief data strategist.
The rankings are popular with the general public, and influence high school seniors' application patterns. However, they have been widely denounced by many higher education experts. Detractors argue that they ignore individual fit by comparing institutions with widely diverging missions on the same scale, imply a false precision by deriving an ordinal ranking from questionable data, encourage gamesmanship by institutions looking to improve their rank, and contribute to the admissions frenzy by unduly highlighting prestige.
In addition to the rankings, U.S. News & World Report also publishes college guides in book form, and ranks American graduate schools and academic programs in a number of specific disciplines, including business, law, engineering, nursing, and medicine. In October 2014, the magazine began publishing a "Best Global University" ranking that focuses more on research and includes non-American schools.

Methodology

U.S. News & World Report's rankings are based on information the magazine collects from educational institutions via an annual survey, from government and third-party data sources, and from school websites. It also considers opinion surveys of university faculty and administrators outside the school. The college rankings were first published in 1983 and have appeared every year since, with the exception of 1984.
The US News listings have gained such influence that some universities have made reaching a particular level in the rankings an explicit institutional goal. Belmont University president Bob Fisher stated in 2010, "Rising to the Top 5 in U.S. News represents a key element of Belmont's Vision 2015 plan." Clemson University made it a public goal to rise into the Top 20 and made specific changes, including reducing class sizes and altering the presentation of teacher salaries, in order to perform better in the US News statistical analysis. At least one university, Arizona State, has tied its president's pay to an increase in the school's placement in the rankings.
As of the 2020 edition, the rankings are computed from a weighted set of elements.
U.S. News determines the relative weights of these factors and has changed them over time. The National Opinion Research Center reviewed the methodology and stated that the weights "lack any defensible empirical or theoretical basis". The first four of the factors account for the great majority of the U.S. News ranking, and the "reputational measure" is especially important to the final ranking.
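The composite score behind the ranking is essentially a weighted sum of factor scores, with the weights set editorially. A minimal sketch of how such a weighted-sum ranking behaves follows; the school names, factor names, weights, and scores are all hypothetical and do not reproduce the actual U.S. News formula.

```python
# Hypothetical weighted-sum ranking; none of these numbers are U.S. News's.
schools = {
    "School A": {"reputation": 0.90, "graduation": 0.95, "faculty": 0.80, "selectivity": 0.85},
    "School B": {"reputation": 0.70, "graduation": 0.90, "faculty": 0.85, "selectivity": 0.75},
    "School C": {"reputation": 0.80, "graduation": 0.85, "faculty": 0.90, "selectivity": 0.70},
}
# Editorially chosen weights; changing them can reorder the list.
weights = {"reputation": 0.40, "graduation": 0.30, "faculty": 0.20, "selectivity": 0.10}

def composite(metrics):
    """Weighted sum of a school's factor scores."""
    return sum(weights[k] * metrics[k] for k in weights)

# Sort schools by composite score, highest first.
ranked = sorted(schools, key=lambda s: composite(schools[s]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, round(composite(schools[name]), 3))
```

Because the ordering is driven entirely by the chosen weights, shifting weight from one factor to another can reorder schools with no change in the underlying data, which is the effect critics pointed to when the formula was revised in 1999.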
A New York Times article reported that, given the U.S. News weighting methodology, "it's easy to guess who's going to end up on top: the Big Three, Harvard, Yale and Princeton round out the first three essentially every year. When asked how he knew his system was sound, Mel Elfin, the rankings' founder, often answered that he knew it because those three schools always landed on top. When a new lead statistician, Amy Graham, changed the formula in 1999 to one she considered more statistically valid, the California Institute of Technology jumped to first place. Ms. Graham soon left, and a modified system pushed Princeton back to No. 1 the next year."
A 2010 study by the University of Michigan found that university rankings in the United States significantly affect institutions' applications and admissions.
The research analyzed the effects of the U.S. News & World Report rankings, showing a lasting effect on college applications and admissions by students in the top 10% of their class. In addition, they found that rankings influence survey assessments of reputation by college presidents at peer institutions, such that rankings and reputation are becoming much more similar over time.
A 2014 study published in Research in Higher Education demystified the U.S. News ranking process by building a model that faithfully recreated U.S. News outcomes and quantified the inherent "noise" in the rankings for all nationally ranked universities. The model allowed the impact of changes to U.S. News subfactors to be studied in the presence of variation between universities and within subfactors, and numerous simulations were run to determine how much change is required for a university to improve its rank or move into the top 20. The results showed that a university ranked in the mid-30s would need a significant amount of additional, narrowly focused resources to become a top-ranked national university, and that rank changes of up to ±4 positions should be considered "noise".
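The "noise" finding can be illustrated with a small Monte Carlo sketch, which is not the study's actual model: with hypothetical composite scores and modest random measurement error, a school's ordinal rank swings by several positions across simulated editions even though nothing about the school has changed.

```python
import random

random.seed(0)

# Hypothetical composite scores for 50 schools on a 0-100 scale,
# spaced 1.2 points apart; invented purely for illustration.
true_scores = [100 - 1.2 * i for i in range(50)]

def simulated_rank(target, noise_sd=1.0):
    """Rank of one school after adding Gaussian noise to every score."""
    noisy = [s + random.gauss(0, noise_sd) for s in true_scores]
    order = sorted(range(len(noisy)), key=lambda i: noisy[i], reverse=True)
    return order.index(target) + 1

# Simulate 1,000 "editions" for the school whose true rank is 35 (index 34).
ranks = [simulated_rank(34) for _ in range(1000)]
print("rank range:", min(ranks), "-", max(ranks))
```

Even with no real change in quality, the simulated school's published rank wanders over a band of several positions, consistent with the study's conclusion that small year-to-year rank movements are indistinguishable from noise.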

Ranking results

Criticism

During the 1990s, several educational institutions in the United States were involved in a movement to boycott the U.S. News & World Report college rankings survey. The first was Reed College, which stopped submitting the survey in 1995. The survey was also criticized by Alma College, Stanford University, and St. John's College during the late 1990s. SAT scores factor into the U.S. News & World Report college rankings even though U.S. News has no ability to formally verify or recalculate the scores that schools report. Since the mid-1990s, the popular press has documented many instances of schools misreporting their SAT scores in order to obtain a higher ranking. An exposé in the San Francisco Chronicle argued that the elements of the U.S. News & World Report methodology are redundant and can be reduced to one thing: money.
On June 19, 2007, during the annual meeting of the Annapolis Group, members discussed a letter to college presidents asking them not to participate in the "reputation survey" section of the U.S. News & World Report survey. Following the discussion, "a majority of the approximately 80 presidents at the meeting said that they did not intend to participate in the U.S. News reputational rankings in the future". The group's statement also said that its members "have agreed to participate in the development of an alternative common format that presents information about their colleges for students and their families to use in the college search process". The planned database is to be web-based and developed in conjunction with higher-education organizations including the National Association of Independent Colleges and Universities and the Council of Independent Colleges.
On June 22, 2007, U.S. News & World Report editor Robert Morse issued a response in which he argued, "in terms of the peer assessment survey, we at U.S. News firmly believe the survey has significant value because it allows us to measure the 'intangibles' of a college that we can't measure through statistical data. Plus, the reputation of a school can help get that all-important first job and plays a key part in which grad school someone will be able to get into. The peer survey is by nature subjective, but the technique of asking industry leaders to rate their competitors is a commonly accepted practice. The results from the peer survey also can act to level the playing field between private and public colleges". In reference to the alternative database discussed by the Annapolis Group, Morse added, "It's important to point out that the Annapolis Group's stated goal of presenting college data in a common format has been tried before... U.S. News has been supplying this exact college information for many years already. And it appears that NAICU will be doing it with significantly less comparability and functionality. U.S. News first collects all these data. Then we post the data on our website in easily accessible, comparable tables. In other words, the Annapolis Group and the others in the NAICU initiative actually are following the lead of U.S. News".
Some higher education experts, such as Kevin Carey of Education Sector, have asserted that U.S. News and World Report's college rankings system is merely a list of criteria that mirrors the superficial characteristics of elite colleges and universities. According to Carey, the U.S. News ranking system is deeply flawed. Instead of focusing on the fundamental issues of how well colleges and universities educate their students and how well they prepare them to be successful after college, the magazine's rankings are almost entirely a function of three factors: fame, wealth, and exclusivity. He suggests that there are more important characteristics parents and students should research to select colleges, such as how well students are learning and how likely students are to earn a degree.
The question of college rankings and their impact on admissions gained greater attention in March 2007, when Sarah Lawrence College president Michele Tolela Myers revealed in an op-ed that when U.S. News & World Report is not given SAT scores for a school, it ranks the school using an invented SAT score of approximately one standard deviation below those of peer colleges, on the reasoning that SAT-optional schools will, because of their test-optional policies, accept higher numbers of less academically capable students.
In a 2011 article regarding the Sarah Lawrence controversy, Peter Sacks of The Huffington Post criticized the U.S. News rankings' emphasis on test scores and denounced the magazine's "best colleges" list as a scam:
In the U.S. News worldview of college quality, it matters not a bit what students actually learn on campus, or how a college actually contributes to the intellectual, ethical and personal growth of students while on campus, or how that institution contributes to the public good... and then, when you consider that student SAT scores are profoundly correlated with parental income and education levels – the social class that a child is born into and grows up with – you begin to understand what a corrupt emperor 'America's Best Colleges' really is. The ranking amounts to little more than a pseudo-scientific and yet popularly legitimate tool for perpetuating inequality between educational haves and have nots – the rich families from the poor ones, and the well-endowed schools from the poorly endowed ones.