Faculty Scholarly Productivity Index
From Wikipedia, the free encyclopedia
The Faculty Scholarly Productivity Index, a product of Academic Analytics, is a metric designed to create benchmark standards for the measurement of academic and scholarly quality within and among United States research universities.
The index is based on a set of statistical algorithms developed by Lawrence Martin and Anthony Olejniczak. It measures the annual amount and impact of faculty scholarly work in several areas, including:
- Publications (how many books and peer-reviewed journal articles have been published and what proportion of the faculty is involved in publication activity?)
- Citations of journal publications (who is referring to those journal articles in subsequent work?)
- Federal research funding (what and how many projects have been deemed of sufficient value to merit federal dollars, and at what level of funding?)
- Awards and honors (a key indicator of innovative thinking and/or scholarly excellence that has impacted the discipline over a period of time)
The FSPI analysis creates, by academic field of study, a statistical score and a ranking based on the cumulative scoring of a program's faculty using these quantitative measures compared against national standards within the particular discipline. Individual program scores can then be combined to demonstrate the quality of the scholarly work of the entire university. This information is gathered for over 230,000 faculty members representing 118 academic disciplines in roughly 7,300 Ph.D. programs throughout more than 350 universities in the United States.
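As a rough illustration of this kind of per-discipline standardization (this is a hypothetical sketch, not Academic Analytics' actual algorithm; the programs, measures, and weights below are invented), each program's per-capita measures can be expressed as z-scores against national means for the discipline and combined into a weighted composite score:

```python
# Hypothetical sketch of a per-discipline productivity score:
# standardize each program's per-capita measures against national
# means for the discipline, then combine with illustrative weights.
# Not Academic Analytics' actual algorithm; all data are invented.
import statistics

# Per-capita measures for hypothetical programs in one discipline:
# (publications, citations, grant dollars in $k, awards) per faculty member
programs = {
    "Univ A": (3.1, 42.0, 180.0, 0.10),
    "Univ B": (2.4, 30.0, 220.0, 0.05),
    "Univ C": (1.8, 25.0,  90.0, 0.02),
}
weights = (0.25, 0.30, 0.30, 0.15)  # illustrative only, sum to 1

def discipline_scores(programs, weights):
    """Weighted sum of per-measure z-scores for each program."""
    cols = list(zip(*programs.values()))
    means = [statistics.mean(c) for c in cols]
    sds = [statistics.stdev(c) for c in cols]
    scores = {}
    for name, vals in programs.items():
        z = [(v - m) / s for v, m, s in zip(vals, means, sds)]
        scores[name] = sum(w * zi for w, zi in zip(weights, z))
    return scores

scores = discipline_scores(programs, weights)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

Because the z-scores are centered on the discipline mean, the composite scores sum to zero across programs; a positive score indicates above-average productivity for the discipline, which is what makes scores comparable across fields when combined at the university level.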
Rankings approach
Unlike other annual college and university rankings, e.g., the U.S. News & World Report annual survey, the FSPI focuses on research institutions as defined by the Carnegie Classification of Institutions of Higher Education. It draws on the approach used by the United States National Research Council (NRC), which publishes a ranking of U.S.-based graduate programs approximately every ten years, but provides a more frequently gathered set of benchmark measurements that excludes the qualitative and subjective reputation assessments favored by the NRC and other ranking systems.
History of the Faculty Scholarly Productivity Index
The system for evaluating university programs that forms the basis of the FSPI was developed by Lawrence Martin and Anthony Olejniczak, of Stony Brook University. Martin had been studying, speaking, and writing about faculty scholarly productivity since 1995. During that period, a series of discipline-specific, per-capita regression models was created and tested to evaluate their accuracy and the feasibility of predicting the academic reputation of the faculty of doctoral programs.
These prototype materials employed data from the National Research Council's 1995 publication Continuity and Change (and the subsequent CD-ROM publication of data), describing and evaluating American Ph.D. programs by field. Martin and Olejniczak found that the reputation of a program (as measured by faculty scholarly reputation from a survey conducted by the NRC) could be predicted well using a discipline-specific regression equation derived from quantitative, per capita data available for each program (the number of journal articles, citations, federally funded grants, and honorific awards). Reputation could be predicted with high statistical significance but important deviations from the regression line were also apparent; that is to say, some schools were outperforming their reputation, while others were underperforming. The prototype materials based on this method, and the data from the 1995 NRC study, were subsequently presented at numerous academic conferences from 1996 to 2004, and have formed the basis on which the FSP Index was developed.
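A minimal sketch of this kind of discipline-specific, per-capita regression might look as follows. The data here are fabricated for illustration (this is not the NRC dataset, and the coefficients carry no real meaning): fit surveyed reputation against the quantitative measures by ordinary least squares, then inspect the residuals, where a positive residual marks a program outperforming its predicted reputation and a negative one marks an underperformer.

```python
# Illustrative per-capita regression of the kind described above:
# predict a program's surveyed reputation score from quantitative
# per-faculty measures, then examine residuals. All data fabricated.
import numpy as np

# columns: articles, citations, grants, awards (all per faculty member)
X = np.array([
    [3.1, 42.0, 1.8, 0.10],
    [2.4, 30.0, 2.2, 0.05],
    [1.8, 25.0, 0.9, 0.02],
    [2.9, 38.0, 1.5, 0.08],
    [1.2, 12.0, 0.5, 0.01],
    [2.0, 28.0, 1.1, 0.03],
    [3.4, 50.0, 2.5, 0.12],
])
reputation = np.array([4.3, 3.6, 3.0, 4.0, 2.2, 3.2, 4.6])  # survey scores

# Ordinary least squares with an intercept term
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, reputation, rcond=None)

predicted = A @ coef
residuals = reputation - predicted  # positive: outperforms its reputation prediction
```

With an intercept in the model, the residuals sum to zero by construction, so over- and underperformers balance out; the analytically interesting programs are those far from the regression line in either direction.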
Martin's concepts became a commercial product when, at the Council of Graduate Schools' 2005 Annual Meeting, he met Mark Shay, then President of Educational Directories Unlimited. The two worked together to develop the company that eventually became Academic Analytics.
Today, the product is used by numerous universities to improve the quality of their programs.[1]
References
External links
- The Top 50 Overall
- "A New Standard for Measuring Doctoral Programs," Piper Fogg, The Chronicle of Higher Education, January 12, 2007.
- "How Productive Are your Programs?", Scott Jaschik, Inside Higher Education, January 25, 2006. (http://www.insidehighered.com/news/2006/01/25/analytics)
- "Towards a Better Way to Rate Research Doctoral Programs: Executive Summary," Joan Lorden and Lawrence Martin, position paper from NASULGC's Council on Research Policy and Graduate Education.
- Academic Analytics website
- "Are Public Universities Losing Ground?", Inside Higher Education, March 14, 2007. (http://www.insidehighered.com/news/2007/03/14/analytics)