College and university rankings


In higher education, college and university rankings are listings of educational institutions in an order determined by any combination of factors. Rankings can be based on subjectively perceived "quality," on some combination of empirical statistics, or on surveys of educators, scholars, students, prospective students, or others. Such rankings are often consulted by prospective students as they choose which schools they will apply to or attend. Rankings exist for both undergraduate and graduate programs, and are conducted by magazines and newspapers and, in some instances, by academic practitioners. For details on the ranking of law programs, see Law School Rankings.

Rankings vary significantly from country to country. A Cornell University study found that rankings in the United States significantly affect colleges' applications and admissions. In the United Kingdom, several newspapers publish league tables that rank universities. Several organizations also provide worldwide rankings; the most prominent are described in the following section.


Background

The much-publicized Academic Ranking of World Universities [1] is compiled by Shanghai Jiao Tong University as a large-scale Chinese project, carried out on behalf of the Chinese government, to provide independent rankings of universities around the world. The results have often been cited by The Economist magazine in ranking the universities of the world [2]. As with all rankings, there are issues of methodology, and one of the primary criticisms of this ranking is its bias towards the natural sciences over other subjects. This is evidenced by the inclusion of criteria such as the volume of articles published in Science or Nature (both journals devoted to the natural sciences) and the number of Nobel Prize and Fields Medal winners (awards concentrated in the sciences and mathematics). This produces some strange anomalies: for example, the London School of Economics (LSE), consistently ranked within the UK as one of its top five universities,[1] [2] is placed by Shanghai Jiao Tong only among the 23rd to 33rd best universities in Britain.

The Times Higher Education Supplement (THES), a British publication, annually publishes a list of 200 ranked universities from around the world. Compared with other rankings, the THES list places many more non-American universities in its upper tier. The THES ranking has also been criticized for the relatively subjective nature of its assessment criteria, which are largely based on a 'peer review' survey of 1,000 academics in various fields. One Australian researcher has castigated the THES ranking for arbitrarily placing his own university far higher than, in his view, it deserves.[3]

The Webometrics ranking of universities is based entirely on each university's web presence (a computerized assessment of the size and sophistication of its website). As such, it is unlikely to reflect academic performance directly, but it does reflect the internet-based activities of the universities.

One refinement of the Webometrics approach is the G-Factor methodology, which counts the number of links only from other university websites. The G-Factor is an indicator of the popularity or importance of each university's website from the combined perspectives of the creators of many other university websites. It is therefore a kind of extensive and objective peer review of a university through its website - in social network theory terminology, the G-Factor measures the 'nodality' of each university's website in the 'network' of university websites.
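
The idea can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration of a G-Factor-style tally; it assumes the link data (which university sites link to which) has already been gathered by a crawl, and the domains and counting rules are illustrative rather than the actual Webometrics or G-Factor implementation.

```python
from collections import defaultdict

# Hypothetical link data: each university domain mapped to the set of other
# university domains its pages link to (in practice this comes from a crawl).
university_outlinks = {
    "uni-a.edu": {"uni-b.edu", "uni-c.edu"},
    "uni-b.edu": {"uni-a.edu"},
    "uni-c.edu": {"uni-a.edu", "uni-b.edu"},
}

def g_factor_scores(outlinks):
    """Count, for each university site, how many other university sites link to it."""
    in_degree = defaultdict(int)
    for source, targets in outlinks.items():
        for target in targets:
            # Only count links that point at another university site in the study set.
            if target != source and target in outlinks:
                in_degree[target] += 1
    return dict(in_degree)

# Rank sites by the number of other university sites linking to them.
print(sorted(g_factor_scores(university_outlinks).items(), key=lambda kv: -kv[1]))
```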

A university ranking based on the Google search engine is also provided by a Stanford student on his blog, Stanford ranking. The results of this ranking are presented as an objective, peer-review-like assessment of universities across the United States[citation needed]. A total of 1,720 schools are ranked.

There are also rankings based on the number of Nobel Prizes obtained by each university[4].

Regional and national rankings

In alphabetical order.

Canada

Maclean's, a news magazine in Canada, ranks Canadian universities on an annual basis in its Maclean's University Rankings. The criteria are based on a number of factors, including characteristics of the student body, classes, faculty, finances, the library, and reputation. The rankings are split into three categories: primarily undergraduate (schools that focus on undergraduate studies with few or no graduate programs), comprehensive (schools that focus on undergraduate studies but have a healthy selection of graduate programs), and medical doctoral (schools that have a very wide selection of graduate programs).

As the most prominent ranking of Canadian universities, these rankings have received much scrutiny and criticism from universities, especially those that receive unfavourable rankings. For example, the University of Calgary produced a formal study examining the methodology of the ranking, illuminating the factors that determined the university's rank, and criticizing certain aspects of the methodology[5]. Even fairly renowned universities, such as the University of Alberta, have expressed displeasure with the Maclean's ranking system. A notable difference from US rankings is that Maclean's does not include privately funded universities; however, the vast majority of Canadian universities, including the best known, are publicly funded. As of September 2006, many Canadian universities have refused to participate in the Maclean's survey, although Maclean's will continue to rank universities based on publicly available data.

European Union

The European Commission also weighed in on the issue when it compiled a list of the 22 European universities with the highest scientific impact[6], measuring universities in terms of the impact of their scientific output. This ranking was compiled as part of the Third European Report on Science & Technology Indicators[7], prepared by the Directorate General for Science and Research of the European Commission in 2003 (updated 2004).

Being an official document of the European Union (from the office of the EU commissioner for science and technology), which took several years of specialist effort to compile, it can be regarded as a highly reliable source (the full report, containing almost 500 pages of statistics, is available for download free of charge from the EU website). Unlike the other rankings, it explicitly considers only the top European institutions, but ample comparison statistics with the rest of the world are provided in the full report.

In this ranking, the top two European universities are Oxford and Cambridge, as in the Jiao Tong and Times rankings. This ranking, however, places more stress on the scientific quality of an institution than on its size or perceived prestige; thus smaller technical universities, such as Eindhoven (Netherlands) and Munich (Germany), closely follow Oxbridge. The report does not provide a direct comparison between these and US or other world universities, although it does compute a scientific impact score for each institution, measured against a world average.

Ireland

The Sunday Times compiles a league table of Irish universities[8] based on a mix of criteria, for example:

  • Average points needed in the Leaving Certificate (end-of-secondary-school examination) for entry into an undergraduate course
  • Completion rates, staff-student ratio and research efficiency
  • Quality of accommodation and sports facilities
  • Non-standard entry (usually mature students or students from deprived neighbourhoods)

UK

See also: League tables of British universities

HESA (the Higher Education Statistics Agency) oversees three yearly statistical returns (Financial, Student and Staff) which must be compiled by every HEI in the UK. These are then processed into usable statistics which make up a major part of most HE rankings, e.g. student-staff ratio, number of academic staff with doctorates, money spent on student services, etc.

The Research Assessment Exercises (RAE) are attempts by the UK government to evaluate the quality of research undertaken by British universities. Each subject, called a unit of assessment, is given a rating by a peer review panel. The ratings are used in allocating the funding each university receives from the government. The last assessment was made in 2001. The RAE provides quality ratings for research across all disciplines. Panels use a standard scale to award a rating for each submission, ranging from 1 to 5* according to how much of the work is judged to reach national or international levels of excellence. Higher education institutions (HEIs) which take part receive grants from one of the four higher education funding bodies in England, Scotland, Wales and Northern Ireland.

Standards of undergraduate teaching are assessed by the Quality Assurance Agency for Higher Education (QAA), an independent body established by the UK's universities and other higher education institutions in 1997. The QAA is under contract to the Higher Education Funding Council for England to assess quality at universities in England. This replaced a previous system of Teaching Quality Assessments (TQAs), which aimed to assess the administrative, policy and procedural framework within which teaching took place rather than assessing teaching quality directly. The new QAA return has been criticized as inaccurate because of its use of student polls, and a number of universities (Warwick being the most prominent) have refused to take part in the survey.

USA

University of Florida: The Top American Research Universities

A research ranking of American universities is compiled and published as The Top American Research Universities by TheCenter at the University of Florida. The list has been published since 2000 and attempts to better characterize the research performance of American universities.

The measurements used in this report rely heavily on objective data, such as research publications, citations, recognitions and funding, the quintessential measures of research. The underlying information is drawn from publicly accessible materials, reducing the possibility of manipulation. The approach is arguably more scientific than that of popular magazines: the authors come from research backgrounds, schools are grouped into tiers rather than given a single ordinal rank, the method is consistent from year to year, any changes are explained in the publication itself, and references to other studies are cited as they would be in a scientific publication. The report is widely circulated and sought after within academia.

U.S. News & World Report college and university rankings

The best-known American college and university rankings have been compiled since 1983 by the magazine U.S. News & World Report, based on a combination of statistics provided by institutional researchers and surveys of university faculty and staff members. The college rankings were not published in 1984, but have been published in every year since. The precise methodology used by the U.S. News rankings has changed many times, and not all of the data are available to the public, so peer review of the rankings is limited. As a result, many other rankings have arisen that challenge the results and methodology of the U.S. News ranking, as described in the section on other rankings of US universities below.

The U.S. News rankings, unlike some other such lists, create a strict hierarchy of colleges and universities in their "top tier," rather than ranking only groups or "tiers" of schools; the individual schools' order changes significantly every year the rankings are published. The most important factors in the rankings are:

  • Peer assessment: a survey of the institution's reputation among presidents, provosts, and deans of admission of other institutions
  • Retention: six-year graduation rate and first-year student retention rate
  • Student selectivity: standardized test scores of admitted students, proportion of admitted students in upper percentiles of their high-school class, and proportion of applicants accepted
  • Faculty resources: average class size, faculty salary, faculty degree level, student-faculty ratio, and proportion of full-time faculty
  • Financial resources: per-student spending
  • Graduation rate performance: difference between expected and actual graduation rate
  • Alumni giving rate

All these factors are combined according to statistical weights determined by U.S. News. The weighting is often changed by U.S. News from year to year and is not empirically determined (the National Opinion Research Center's methodology review said that these weights "lack any defensible empirical or theoretical basis"). Critics have charged that U.S. News intentionally changes its methodology every year so that the rankings change and it can sell more magazines. Indeed, a popular web site, http://www.rankyourcollege.com, has created a spoof of U.S. News that focuses on its lack of rigor in ranking. The first four factors account for the great majority of the U.S. News ranking (80%, according to U.S. News's 2005 methodology), and the "reputational measure" (which surveys high-level administrators at similar institutions about their perceived quality ranking of each college and university) is especially important to the final ranking (accounting by itself for 25% of the ranking according to the 2005 methodology).[3]
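
As a rough illustration of how such a weighted composite works, the sketch below combines per-factor scores into a single number. The weights are hypothetical placeholders (only the 25% peer-assessment figure is taken from the 2005 methodology described above); they are not the actual U.S. News weights, which have changed repeatedly.

```python
# Illustrative weights only; NOT the actual U.S. News weights. The 0.25 for
# peer assessment follows the 2005 figure cited above; the rest are assumptions
# chosen simply so that the weights sum to 1.
ILLUSTRATIVE_WEIGHTS = {
    "peer_assessment": 0.25,
    "retention": 0.20,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "financial_resources": 0.10,
    "graduation_rate_performance": 0.05,
    "alumni_giving": 0.05,
}

def composite_score(factor_scores, weights=ILLUSTRATIVE_WEIGHTS):
    """Weighted sum of per-factor scores, each assumed to be pre-normalized to 0-100."""
    return sum(weights[name] * factor_scores[name] for name in weights)

# A hypothetical institution scoring 90 on peer assessment and 75 on everything else.
example = {name: 75.0 for name in ILLUSTRATIVE_WEIGHTS}
example["peer_assessment"] = 90.0
print(round(composite_score(example), 2))  # 78.75
```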

A New York Times article reported that, given the U.S. News weighting methodology, "it's easy to guess who's going to end up on top: Harvard, Yale and Princeton round out the first three essentially every year. In fact, when asked how he knew his system was sound, Mel Elfin, the rankings' founder, often answered that he knew it because those three schools always landed on top. When a new lead statistician, Amy Graham, changed the formula in 1999 to what she considered more statistically valid, the California Institute of Technology jumped to first place. Ms. Graham soon left, and a slightly modified system pushed Princeton back to No. 1 the next year."[4] A San Francisco Chronicle article notes that almost all of the U.S. News factors are redundant and can be boiled down to one characteristic: the size of the college or university's endowment.[5]

Further statistical criticisms involve the different standards of information used for different universities. For instance, for SAT scores, private schools tend to report the sum of each student's best verbal and best math scores across sittings, while public schools tend to report the best single-sitting score. For students who generally score above 1300 on the SAT, the difference between the two metrics can be anywhere from 20 to 50 points in a university's reported scores. Factors that measure financial resources are also not uniform; for instance, a yearly federal grant can be equivalent, in spending terms, to a 5% cash flow from an endowment. Criticisms of U.S. News thus range from charges of politically motivated and arbitrary choices to statistical inaccuracy.
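
A toy example (with hypothetical scores) makes the gap between the two SAT reporting conventions concrete:

```python
# Hypothetical SAT sittings for one student, used only to illustrate the two
# reporting conventions described above.
sittings = [
    {"verbal": 700, "math": 650},  # first sitting: 1350 total
    {"verbal": 660, "math": 720},  # second sitting: 1380 total
]

# "Best verbal + best math" across sittings (the convention attributed to private schools).
best_combined = max(s["verbal"] for s in sittings) + max(s["math"] for s in sittings)

# "Best single sitting" (the convention attributed to public schools).
best_single_sitting = max(s["verbal"] + s["math"] for s in sittings)

print(best_combined, best_single_sitting, best_combined - best_single_sitting)
# 1420 1380 40  -- a 40-point gap, within the 20-50 point range mentioned above
```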

Vanguard College Rankings of research-doctorate universities

The Vanguard College Rankings are a ranking system that uses objective data. The rankings grew out of a graduate school class project conceived by James A. Johnson. They present profiles of the leading research-doctorate universities in the United States, based on program rankings compiled by the National Research Council (NRC). Because the NRC studies are depictions of the 'scholarly quality of program faculty' at American universities, the Vanguard Rankings are strongly rooted in intellectual exclusivity.

The strength of those roots lies in a simple yet cogent transformation of the NRC's college rankings by program into program rankings by college. (The transformation is achieved by means of a reverse-order scoring method that yields a composite score for each institution.)
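
A minimal sketch of what a reverse-order scoring transformation of this kind might look like is given below. It assumes that an institution ranked first out of N in a program receives N points, second receives N-1, and so on, with points summed across programs; the exact scoring rule used by the Vanguard Rankings is not specified here, and the program lists are hypothetical.

```python
from collections import defaultdict

# Hypothetical NRC-style rankings: each program lists institutions best to worst.
program_rankings = {
    "philosophy": ["Uni A", "Uni B", "Uni C"],
    "physics":    ["Uni B", "Uni A", "Uni D"],
}

def composite_scores(rankings):
    """Reverse-order scoring: rank 1 of N earns N points, rank 2 earns N-1, and so on."""
    scores = defaultdict(int)
    for ranked_institutions in rankings.values():
        n = len(ranked_institutions)
        for position, institution in enumerate(ranked_institutions):
            scores[institution] += n - position  # position 0 corresponds to rank 1
    return dict(scores)

print(sorted(composite_scores(program_rankings).items(), key=lambda kv: -kv[1]))
# [('Uni A', 5), ('Uni B', 5), ('Uni C', 1), ('Uni D', 1)]
```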

A common criticism of the Vanguard rankings is that their exclusive dependence upon NRC studies, which are conducted only every twelve years, renders them dated within a year or two of publication. Johnson argues, however, that because the rankings reflect professorial attributes only, they remain valid over several years, owing to the longevity of faculty reputation as a gauge of scholarly quality. This argument is supported by the National Research Council's finding of little change in program rankings from one twelve-year interval to the next.

Washington Monthly College rankings

The Washington Monthly's "College Rankings" (an alternative college guide to the Vanguard College Rankings and U.S. News and World Report) began as a research report in 2005 and introduced its first official rankings in the September 2006 issue. It offers American university and college rankings [6] based upon the following criteria:

  • a. "how well it performs as an engine of social mobility (ideally helping the poor to get rich rather than the very rich to get very, very rich)"
  • b. "how well it does in fostering scientific and humanistic research"
  • c. "how well it promotes an ethic of service to country" [7].

The Washington Monthly thus ranks universities according to very different criteria from other publications, and these criteria result in relatively higher rankings for public universities than in other ranking systems. Public universities bear responsibilities that private universities do not: at the undergraduate level especially, they must serve their states' mandates and hence often have less freedom to select only a narrow group of students.

Other rankings of US universities

Other organizations which compile general US annual college and university rankings include the Fiske Guide to Colleges, the Princeton Review, and College Prowler. Many specialized rankings are available in guidebooks for undergraduate and graduate students, dealing with individual student interests, fields of study, and other concerns such as geographical location, financial aid, and affordability.

One commercial ranking service is Top Tier Educational Services.[9] Student-centered criteria are used, and although the full study is redone only every two years, the rankings are updated every quarter from new input data. The criteria combine subjective data, such as peer assessment and desirability, with objective data, such as SAT scores and GPA.

Such ranking schemes measure what decision-makers think rather than why they think it, and they may or may not augment these reputation statistics with hard, qualitative information. The authors discuss their ranking system and methodology with students but do not share their specific research tools or formulas. As with any ranking that relies on subjective opinions, this approach is prone to personal bias, prejudice and bounded rationality. Public universities may also be penalized because, besides an academic mission, they have a social mission: they simply cannot charge as much money, or be as selective, as private universities. And because the ranking is produced by a commercial company, one can ask whether there are hidden business motives behind it; perhaps, for example, it can charge more for a private university entrant.

Among the rankings dealing with individual fields of study is the Philosophical Gourmet Report or "Leiter Report" (after its founding author, Brian Leiter of the University of Texas at Austin), a ranking of departments of analytic philosophy. This report has been at least as controversial within its field as the general U.S. News rankings, attracting criticism from many different viewpoints. Notably, practitioners of continental philosophy, who perceive the Leiter report as unfair to their field, have compiled alternative rankings.

Avery et al. recently published a working paper for the National Bureau of Economic Research titled "A Revealed Preference Ranking of U.S. Colleges and Universities." Rather than ranking programs by traditional criteria, their analysis uses a statistical model based on applicant preferences, drawing on the applications and outcomes of 3,240 high school students. The authors argue that their ranking is less subject to manipulation than conventional rankings (see criticism below).
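
The underlying idea can be sketched very crudely: each student's matriculation decision is treated as a head-to-head "win" for the chosen school over every other school that admitted that student. The toy win-rate tally below is only a simplified stand-in, not the formal statistical model Avery et al. fit, and the student records are hypothetical.

```python
from collections import defaultdict

# Each record: (set of schools that admitted the student, school the student chose).
students = [
    ({"School X", "School Y"}, "School X"),
    ({"School X", "School Y", "School Z"}, "School X"),
    ({"School Y", "School Z"}, "School Z"),
]

wins = defaultdict(int)      # head-to-head matchups won by each school
contests = defaultdict(int)  # head-to-head matchups each school took part in

for admitted, chosen in students:
    for school in admitted:
        if school != chosen:
            contests[chosen] += 1
            contests[school] += 1
            wins[chosen] += 1

win_rate = {school: wins[school] / contests[school] for school in contests}
print(sorted(win_rate.items(), key=lambda kv: -kv[1]))
# [('School X', 1.0), ('School Z', 0.5), ('School Y', 0.0)]
```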

Criticisms of rankings

College and university rankings, especially the well-known U.S. News rankings, have drawn significant criticism from within and outside higher education. Critics feel that the rankings are arbitrary and based on criteria unimportant to education itself (especially wealth and reputation); they also charge that, with little oversight, colleges and universities inflate their reported statistics. Beyond these criticisms, critics claim that the rankings impose ill-considered external priorities on college administrations, whose decisions are sometimes driven by the need to create the most desirable statistics for reporting to U.S. News rather than by sound educational goals. A Washington Monthly study titled "Broken Ranks" supported this view.[8]

Furthermore, some have suggested that the formulae and methodologies used to turn the various data into a ranking are arrived at specifically, if unconsciously, to keep a few key institutions at the top of the chart: not because of any undue partisanship among the editors, but simply because of a subconscious assumption that a system which flies in the face of conventional wisdom must somehow be faulty. Editorial decisions would thus tend to reinforce preconceptions, and the rankings would become less a tool for guidance and more a means of confirming assumptions.

There is also a counter-argument to the criticism by THES and others that the SJTU ranking places too much emphasis on science. Even though more Nobel Prizes and similar awards are given in the sciences, every university has, in principle, an equal opportunity to win such awards in any field, science or non-science. On this view it is not unfair to weight science heavily, because the very purpose of the ranking is to measure both the breadth and the depth of a truly diverse university; a university that focuses only on certain areas has perhaps not reached the level of diversity expected of a comprehensive institution. Establishing a university that covers and excels in all fields is very difficult, requiring time, resources and a tremendous amount of energy, so the ability of an academic body to reach that level deserves recognition. Weighting, the argument goes, should not be an issue, because the opportunity to excel is equal in the sense that nothing stops any university from excelling if it chooses to do so.

Some of the specific data used for quantification are also frequently criticized. For instance, Rice University, with a top-five per-student endowment and a generous financial aid department, is ranked in the mid-twenties for per-student "Financial Resources". As another example, the "Peer Assessment" weighs the opinions of administrators at less-known schools such as Florida Atlantic and North Dakota State equally with those of, say, Harvard, Stanford and Duke. Students with their sights set on the best graduate schools may not be interested in knowing which programs the administrators of lower-ranked schools have heard of, or vice versa.

Other critics, seeing the issue from students' and prospective students' points of view, claim that the quality of a college or university experience is not quantifiable, and that the ratings should thus not be weighed seriously in a decision about which school to attend. Individual, subjective, and random factors all influence the educational experience to such an overwhelming extent, they say, that no general ranking can provide useful information to an individual student.

Suppose, as these critics illustrate, that the difference between an "excellent" school and a "good" one is often that most of the departments in the excellent school are excellent, while only some of the departments in the good school are excellent. And the difference between an excellent department and a good one might be, similarly, that most of the professors in the excellent department are excellent, while only some in the good department are. For an individual student, depending on the student's choices of field of study and professors, this will often mean that there is no difference between an excellent college or university and a merely good one; the student will be able to find excellent departments and excellent faculty to work with even at an institution which might be ranked "second-tier" or lower. Statistically, the rankings are distributions with large variances and small differences between the individual universities' means (averages).
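
To illustrate this statistical point with entirely hypothetical numbers: if the quality of an individual student's experience at two schools is modelled as a normal distribution with close means and a large spread, a student at the "good" school will have the better individual experience nearly half the time. The sketch below is an assumption-laden illustration, not an analysis of any real ranking data.

```python
from math import erf, sqrt

def prob_second_exceeds_first(mu1, sigma1, mu2, sigma2):
    """P(draw from the second distribution > draw from the first), both normal and independent."""
    z = (mu2 - mu1) / sqrt(sigma1 ** 2 + sigma2 ** 2)
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF evaluated at z

# Hypothetical "quality of individual experience" distributions for two schools:
# close means, large spread (all numbers are illustrative only).
print(round(prob_second_exceeds_first(72.0, 15.0, 70.0, 15.0), 2))  # ~0.46
```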

Complicating matters further, as most educators and students observe, individuals' opinions about the excellence of academic departments and, especially, of professors, exhibit a wide range of variation depending on personal preferences. And the quality of an individual student's education is most determined by whether or not the student happens to encounter a small number of professors that "click" with and inspire him or her. Similarly, the main difference between a "good" or "second-tier" large state university and an "excellent" or "top-tier" prestigious smaller institution, for the student, is often just that, at the larger school, the student needs to work a bit harder and be a bit more assertive and motivated in order to actively extract a good education. For many students this will not be difficult enough to justify a preference for the smaller institution, though some individuals do prefer a smaller school.

Additionally, if one looks at the criteria U.S. News uses to construct its rankings, most are based on self-selecting attributes, that is, attributes that depend on the quality of the students themselves rather than on the quality of the school. A good example is the U.S. News MBA ranking: by its criteria, a good business school is one whose students have the highest GMAT scores, GPAs, starting salaries and so on, measurements that have much to do with the students and little to do with the schools.

Moreover, many students take the easy way out and do not bother to ask why certain people go to certain schools; they simply assume the ranking is authoritative, which only increases the divide among schools, a kind of tragedy of the commons. It is quite possible, for example, that students love a location so much that they are willing to exchange salary for quality of life. That factor alone can leave a school's ranking lower than it should be the following year, and better students may then avoid the school for fear of not finding good employment, even though this is not the case. After some years a good school is dubbed a bad school without any change in the quality of the education itself, a proposition supported by a study done at Cornell.[9]

What happens here is that years of effort to build a school's reputation are washed down the drain because of a simple numerical ranking, and the potential damage is hard to overstate if readers do not become more discerning about rankings. It is also disheartening to see leading universities, which are supposed to be beacons of light, flaunting their rankings without due regard to the damage such a practice can do to the whole education system. For example, a good student from a small town in Mississippi may need to attend a public university in Mississippi in order to take care of family. This does not mean that the person could not have gone to a "better" or more well-known school; however, because of the rankings, people may label that person an underachiever for not having done so.

Lastly, criticism of rankings should not be interpreted as criticism of particular schools. Humans tend to be short-sighted, however, and rankings only reinforce that short-sightedness, depriving outstanding individuals of equitable opportunities simply because they do not fit a typical mold.

Forget U.S. News Coalition

In the 1990s a coalition of student activists calling themselves the "Forget U.S. News Coalition" (and occasionally substituting "fuck" for "forget") arose, based initially at Stanford University. FUNC attempted to influence college and university administrations to reconsider their cooperation with the U.S. News rankings. It met with limited success: some administrations encouraged the development of alternatives to the rankings, but most institutions (including Stanford) continued to cooperate with U.S. News. Critics of FUNC question its motives, claiming that the organization objects to the rankings not out of principled objections to the ranking process but because it is dissatisfied with Stanford's fourth-place standing. One school that has criticized U.S. News while holding steady in the rankings is Emory University, which generally ranks among the top 20 colleges nationwide. Cornell has criticized U.S. News as well. In general, though, the positions of colleges vary a great deal between rankings, and no one ranking is accepted as definitive; some rankings apply more specifically to the needs of a particular prospective student than others on the basis of methodology or criteria.

Colleges and criticism of U.S. News rankings

Reed College has not cooperated with the U.S. News rankings or submitted any institutional data to U.S. News since 1994, and its administration has been outspoken in its criticism of the rankings. Critics charge, and Rolling Stone magazine reported, that Reed's "second-tier" status in U.S. News's lists is artificially depressed by U.S. News as retribution for Reed's harsh criticism of the rankings. The College Prowler guidebook on Reed was recently a topic of serious debate on campus; it has been banned from the Reed bookstore because of its controversial portrayal of campus life.

Similarly, Ohio Wesleyan University and St. John's College have not cooperated with the U.S. News rankings and have consequently fallen in the ratings. For years the progressive Bard College likewise did not cooperate with the U.S. News rankings and saw its position artificially depressed; it now cooperates and consistently ranks among the top 40 liberal arts colleges.

References

See also

External links