Philosophical Gourmet Report

From Wikipedia, the free encyclopedia

The Philosophical Gourmet Report (also known as the Leiter Report), edited by Brian Leiter and named in response to the Gourmand Report, attempts to score and rank the university philosophy departments in the English-speaking world, based on a survey of philosophers who are nominated as evaluators by the Advisory Board of the Report. Its purpose is to provide guidance to prospective research Ph.D. students, particularly those who intend to pursue a professional career in academic philosophy.

As of December 2004, there were 110 philosophy Ph.D. programs in the U.S. alone; in its latest edition (2004–2006), the report attempts to list the 50 most reputable, based primarily on the perceived quality of the philosophical work of the faculty members. It also lists the top 15 departments in the United Kingdom, the top ten in Canada and the top five in Australasia.

Brian Leiter, professor of philosophy and law at the University of Texas at Austin, first compiled and distributed the report in 1989. It was first published on the web in 1996 and has been published by Blackwell since 1997.

Methods

The report's rankings are based on an anonymous survey of faculty members of philosophy departments throughout the English-speaking world. Respondents are asked to assign scores from one to five to lists of the faculty of each department (the name of the department is suppressed in the survey questionnaire). The results are compiled and sorted into an ordinal ranking, which was formerly subdivided into "peer groups" but no longer is.
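
As a rough illustration of how such survey scores could be aggregated into an ordinal ranking, consider the following minimal sketch. The department names, scores and choice of mean as the sorting key are invented assumptions for illustration; this is not the Report's actual procedure or data.

```python
from statistics import mean, median

# Hypothetical survey data: each department's anonymous evaluator
# scores on the 1-5 scale. Names and values are invented for
# illustration only.
survey_scores = {
    "Department A": [5, 4, 5, 4, 4],
    "Department B": [3, 4, 3, 3, 4],
    "Department C": [4, 4, 5, 3, 4],
}

# Aggregate each department's scores into a mean and a median.
aggregates = {
    dept: (mean(scores), median(scores))
    for dept, scores in survey_scores.items()
}

# Sort by mean score, highest first, to produce an ordinal ranking.
ranking = sorted(aggregates.items(), key=lambda kv: kv[1][0], reverse=True)

for rank, (dept, (avg, med)) in enumerate(ranking, start=1):
    print(f"{rank}. {dept}: mean {avg:.2f}, median {med}")
```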

Arguments for

  • There is currently no objective alternative to the survey, making it the most objective and useful source currently available to students.
  • Before the report, students largely relied on the opinions of their undergraduate lecturers, without knowing how well informed or how biased those opinions were.
  • The report attempts to shift the halo effect away from the prestige and reputation of the university as a whole and toward, more appropriately, the reputation of its philosophy department.
  • If anyone is well placed to judge the quality, reputation and influence of the published academic work of various philosophy departments, it is senior, experienced and active academic philosophers.
  • The report helps to combat unsubstantiated marketing claims, e.g. "This university has one of the top philosophy departments in the country".
  • There is most likely a strong correlation between a strong research reputation and good Ph.D. supervision practice by the respective staff.
  • The sample's skew towards popular subjects is likely to correlate strongly with employment prospects.
  • The report reinforces competitive pressure on departments to be more open, honest and transparent about the research records of their staff, their relative strengths, the number of supervising staff available, and so on.
  • The Report makes it easier for deans and other senior university administrators to see an immediate impact from money and effort spent on building a philosophy department; some argue that this has increased cash flow to the discipline as a whole.

Arguments against

  • There is no necessary causal link between research record and teaching quality, as the Report itself acknowledges. In some cases the research record is quite misleading, i.e. it yields false positives or false negatives.
  • It is extremely difficult for anyone to assess fairly and consistently the relative reputation of an individual academic, an entire department, or a number of departments.
  • An injustice is done to departments that are not assessed and to those that narrowly miss out on the top 50/15/10/5 rankings by a statistically insignificant margin.
  • The scores are likely to reflect, consciously or unconsciously, the biases, interests, knowledge and specialisations of the evaluators.
  • The sample of evaluators is alleged to be skewed towards popular areas of philosophy; popularity is not necessarily an indicator of the quality of a research program.
  • The methodology of the "Breakdown by Specialties" rankings is not the same as that of the overall rankings, and has been criticized for relying on the subjective views of advisory boards made up of philosophers who are not peers, i.e. who are not specialists in the very areas of specialization they are assessing.

Observations, concerns and/or other criticisms

  • The survey intentionally acts as a market-regulating mechanism for students, staff and recruitment practices.
  • Different survey methodologies can sometimes produce different results, and it is not clear that the method applied in this case is the most appropriate or the most robust.
  • Because the score is an aggregate, the overall rankings heavily favor large research departments over smaller, often private, schools with fewer faculty. This is fine for graduate students who have no idea what they might want to study (often, but by no means always, the case), but for students with a clear idea of what they want to do, the Specialty Rankings are much more relevant.
  • Coursework students have mistakenly attempted to apply the results to their own situations.
  • The report can be misleading as to the overall relative strength of departments, since departments with strong research records are omitted if they do not have a Ph.D. program.
  • Biases towards local over foreign institutions are evident in the scores.
  • The composition of the consultative board is not proportionally representative of geography or area of specialisation.
  • The report is heavily biased towards analytic programs.

Evolution since the first web report

  • Addition of a consultative board, which influences policy, ranking refinements, instructions to evaluators and the nomination of evaluators.
  • Elimination of the peer grouping of universities based on median scores, as this could mislead prospective students about the differences in relative merit between institutions.
  • Addition of specialised rankings (but not yet scores) in various areas of philosophy.
  • Elimination of handicapping based on department size.
  • Addition of median scores, and of local mean and median scores for non-U.S. universities.
  • Addition of some data on placement records and career prospects.
  • Addition of advice on some individual areas of philosophy.
  • Addition of number of staff evaluated at each department.
  • Addition of names and specialties of evaluators.

Suggested improvements and/or alternatives

  • The Report should abandon the overall ranking and instead report data and rankings of specific components of Ph.D. programs.
  • A philosophy Ph.D. clearinghouse.
  • A change to a larger score range (e.g. out of 100 instead of out of 5) to allow more finely grained scores.
  • The addition of the range (highest minus lowest score) and of a standard deviation for each department, to indicate the variability of its scores (see the sketch after this list).
  • The number of students currently enrolled in each department's philosophy Ph.D. program, and the ratio of students to supervising staff, would both be useful.
  • A more open and transparent process for debating methodology: for example, arguments for and against individual changes could be published on the report's website to allow scrutiny and improve the rigour of discussion, the board's vote on each issue could be recorded, and anonymous constructive comments and criticism could perhaps be posted on the website. This would help convey a commitment to rigour and to the best academic and democratic traditions.
  • A board that is more proportionally representative in terms of geography and area of specialisation.
  • The board should include several distinguished philosophers who are also known critics of Brian Leiter, to counteract the impression that the Report is a sophisticated representation of Leiter's own personal assessments of Ph.D. programs.
  • Leiter should retire from the project and pass the administration of the Report to a fair scholar not connected with him personally or with the Report, in order to enhance its objectivity.
  • A more comprehensive survey sample, covering as many departments and academics as possible.
  • Addition, for each department, of entry requirements, modes of study, yearly costs, the minimum and maximum length of the doctorate, and statistics on drop-out rates and average time to completion. These would help students weigh affordability and value against quality; drop-out rates and time to completion may also indicate the quality of fellow students and the level of support and teaching.
  • Information on the support services available to postgraduate students at each institution, e.g. help with accommodation, resource centres, etc.
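
As a minimal illustration of the range and standard-deviation suggestion above, the following sketch computes both measures for one department. The scores are invented for illustration and do not come from the Report.

```python
from statistics import pstdev

# Hypothetical evaluator scores for a single department on the
# 1-5 scale; values are invented for illustration only.
scores = [5, 4, 3, 5, 4, 2]

# Range: difference between the highest and lowest scores received.
score_range = max(scores) - min(scores)

# Standard deviation: how widely evaluators' scores spread around
# the mean, i.e. how much they disagreed about the department.
spread = pstdev(scores)

print(f"range: {score_range}, standard deviation: {spread:.2f}")
```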

Suggested advice on how to use the report

The report should be used, as it itself suggests, as a guide to deciding where to apply as a prospective student. Evaluating the fit of a school based solely on its mean score is not only ill-advised but a waste of time and money. Investigate each school: find out what its faculty are working on and researching, what its placement record for Ph.D. recipients is like, and what its current graduate students are researching, since these are the people you are likely to have the most contact and discussion with. Focus on schools whose faculty publish in the field that most interests you as an undergraduate or master's student. The report is a useful tool for narrowing down that search, but it should not be the only resource used in applying to graduate study in philosophy.
