An Academic Productivity Index: Integrating Grant Funding and Publications

Vol 3 #6

 

James A. Bourgeois, O.D., M.D. (Corresponding)

Ana Hategan, M.D.

Dr. Bourgeois is Chair, Department of Psychiatry, Baylor Scott & White Health, Central Texas Division; Clinical Professor, College of Medicine, Texas A&M University Health Science Center.

Dr. Hategan is Associate Professor, Department of Psychiatry and Behavioural Neurosciences, Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada.

Corresponding Author: James A. Bourgeois, O.D., M.D., 2401 South 31st Street, Temple TX 76508. Telephone: 254-724-4071/Fax: 254-724-1747, Email: [email protected]

 

Abstract

Quantitation of academic productivity among faculty members remains a challenge for leaders in academic medicine.  Various metrics for assessing academic productivity of faculty members include summation of publications, citation indices and impact factors of journals in which faculty members have published, and tabulation of research and other types of external funding via receipt of competitive grants. The authors propose a novel index, the Academic Productivity Index, as an additional metric to compare the academic performance of faculty members.  The proposed index relates academic products in terms of articles produced to the amount of grant funding awarded to a faculty member by using a ratio of publications to grant funding.  Academic leaders may wish to consider use of this measure as one of many criteria to monitor faculty productivity and academic performance. Departments and institutions may consider pilot studies of the proposed index along with other in-place performance metrics when assessing annual performance and career progression of academic medicine faculty members. Modification of the terms of the proposed index for resonance with other established local markers may be useful for a consistent application of performance measures.

 

Introduction

Department chairs, deans, and other academic medicine leaders are charged with promoting the development of academic productivity in faculty members, particularly junior faculty members. Indeed, among the multiple areas of achievement needed in the academic medical setting, academic products (primarily but not exclusively pieces in peer-reviewed journals) may be the “rate limiting step” in the academic promotion process.  Other areas of professional function (e.g., clinical services delivery, “academic citizenship” such as committee and search participation, education of students and residents, clinical leadership roles) are also expected of academic faculty, but are not the subject of this article.

 

Background

The academic reward structure has required change over the last half century. How best to assess, measure, and reward scholarly output remains a challenge in modern academia. It is clear that “one size does not fit all” when it comes to assessing academic work and research productivity. Many academic systems and universities provide payments to faculty members in recognition of scholarly productivity during an academic year. Yet there remains a dilemma regarding the relative weight that academic output and grant funding should carry in assessing overall academic productivity. In academic systems where scholars are rewarded substantially for increased grant funding, there are incentives to secure funded research programs, which generate overhead and promote growth. This implies more time spent writing grant proposals and presumably relatively less time gathering and processing data. When scholars are rewarded for an increased number of publications, they have incentives to improve their research productivity by producing more publications in a given unit of time.

Obtaining funding through grants has itself become a valid measure of academic accomplishment and, in some scientific fields, essentially a necessity for conducting ongoing research. However, as in many other academic disciplines, funding in academic medicine is difficult to obtain or tends to be more available to researchers at top-ranking universities with robust research infrastructures. In using grant funding as a metric for assessing academic productivity, considerable care is required to balance the incentives for knowledge creation and publication. In an era of hyper-accountability, careful consideration and finesse must be exercised when making judgments that often affect the remuneration and academic futures of faculty members. The authors herein propose a new, additional index for measuring academic productivity in medicine. This new index does not include the characteristics of existing indices (e.g., h-index, g-index) such as the year of publication, the number of citations received, and the role of major contributors to papers (e.g., order of authorship), but could be used to supplement these other measures [1-3].

The number of academic articles published per unit time (e.g., academic year) is a convenient and relatively objective measure of academic productivity. This can be done in various ways. An institution may grant “credit” only for PubMed-indexed articles, may also count other published work (e.g., non-PubMed journals, books, book chapters, conference abstracts), or may “weight” PubMed-indexed pieces more heavily than other publications while including all externally disseminated work in an overall score.

In addition to the production of articles in peer-reviewed journals and other professional media, faculty members may be expected to garner competitive grants from various sources. Grants may be explicitly tied to the expectation of eventual publication (e.g., an NIH grant to fund a specific research project) or may be associated with another critical institutional function not directly related to the literature (e.g., a seed-money grant to deliver medical services to a heretofore underserved population). Academic departments may have many reasons to seek and secure various sources of grant funding, not all of which relate to academic products in a quid pro quo fashion.

 

Grants “versus” Publications

There is, therefore, an inherent “tension” regarding grant funding specifically earmarked for research projects. Granting agencies, and indeed the field at large, have the right to expect that grant funding will directly produce a meaningful number of articles and other academic products based on the research results. Therefore, faculty members so funded have an explicit obligation to be more academically productive than those without grant support. From an evaluative point of view, faculty members who obtain grants but do not produce grant-related articles need to be held accountable in some fashion for their grant stewardship. Especially in an era of reduced grant funding opportunities, granting agencies need assurance and evidence that grants lead to academic products and are not treated as an “end unto themselves.” Stated another way, faculty members who, despite minimal (or even no) grant funding, still produce meaningful numbers of articles could be considered more “academically productive” than robustly grant-funded colleagues with a similar number of articles.

The management of academic departments can be assisted by the thoughtful application of meaningful metrics to various areas of work. Relative value units (RVUs), though an imperfect measure, tell the story of clinical productivity, while faculty ratings by learners (though an imprecise metric, given the subjective nature of impressions) and teaching load (e.g., number of hours taught per academic year) are measures of pedagogic productivity. In the academic productivity area, publications per academic year (often weighted so that PubMed-indexed citations count more than other publications), the impact factors of the journals in which faculty publish, and the h-index and g-index (the major indices used so far for quantifying the academic performance of researchers) are productivity metrics in some common use. It is also customary to tabulate grant awards attributable to a specific faculty member as a metric of academic success [1].

 

The Academic Productivity Index: Linking Grants and Publications

We propose a formula to link academic output and research grant support. We separate grant support into two types: research-supporting versus non-research-supporting grants. While some non-research-supporting grants (e.g., program development/expansion, educational services) can lead to academic products (e.g., articles on the results of these initiatives), their main focus is not on academic results per se in the near term. We therefore define annual academic productivity as the number of publications divided by a function of research grant support, via the following formula, which places research grant support in the denominator:

 

 

Academic Productivity Index (API)* = Number of Publications* / [1 + log (Research Grant Support* in dollars)]

* Per academic year

 

 

The denominator contains the “1” to avoid the mathematically impermissible division by zero for non-funded faculty members (the logarithmic term is taken as zero when research grant support is zero), while the base-10 logarithm of annual research grant support is used to “contain” the range of large grant amounts and control the scale of the calculated denominator. Individual departments would apply their conventional definition of “publication” in the numerator (e.g., whether to include only PubMed-indexed citations or all publications, or whether to include all publications in a “weighted” fashion). The proposed formula does not distinguish order of authorship on specific publications, nor does it directly address the h-index or journal impact factor per se.

To illustrate the API formula, consider the hypothetical faculty members shown in Table 1.

Table 1. Examples of Academic Productivity Index (API) calculation (hypothetical data)

Faculty member | Academic output and grant funding | API score
1 | 5 publications, no funding | 5
2 | 10 publications, $10,000 funding | 2
3 | 1 publication, no funding | 1
4 | 15 publications, $100,000 funding | 2.5
5 | 21 publications, $1,000,000 funding | 3
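A minimal computational sketch of the API calculation follows. It assumes a base-10 logarithm and a zero logarithmic term when there is no research grant funding (assumptions consistent with the Table 1 examples); the function and variable names are illustrative only.

```python
import math

def api_score(publications: int, grant_dollars: float) -> float:
    """Academic Productivity Index (API) for one academic year.

    Assumes a base-10 logarithm and a zero logarithmic term when research
    grant support is zero, consistent with the Table 1 examples.
    """
    log_term = math.log10(grant_dollars) if grant_dollars > 0 else 0.0
    return publications / (1.0 + log_term)

# Hypothetical faculty members from Table 1: (publications, research grant dollars)
faculty = [(5, 0), (10, 10_000), (1, 0), (15, 100_000), (21, 1_000_000)]

for member, (pubs, grant) in enumerate(faculty, start=1):
    print(f"Faculty member {member}: API = {api_score(pubs, grant):g}")
# Prints 5, 2, 1, 2.5, and 3, matching Table 1
```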

 

Application of the API with Other Metrics

The API score would be used in the context of other, current metrics at the department level, as an attempt to link the expectation that publications will be at least reasonably proportional to the funds devoted to research activity. As with all such metrics, the API should be used in the context of local expectations and resources. Specific departments may wish to “tweak” the definition of “publications” in such index calculations. Trending the API over time could allow leaders to see upward or downward “drifts” in productivity as a series of “markers” of career progression reflecting the increasing (or decreasing) academic productivity of a given faculty member, and to reallocate research funding to their more consistently productive faculty members.
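As a brief illustrative sketch of such trending, a department could tabulate the API by academic year and inspect the direction of change. The yearly figures below are hypothetical, and the api_score helper repeats the illustrative assumptions noted after Table 1.

```python
import math

def api_score(publications: int, grant_dollars: float) -> float:
    # Illustrative API calculation (base-10 log; zero log term if unfunded)
    return publications / (1.0 + (math.log10(grant_dollars) if grant_dollars > 0 else 0.0))

# Hypothetical yearly records for one faculty member: year -> (publications, grant dollars)
history = {2019: (3, 10_000), 2020: (6, 10_000), 2021: (9, 20_000)}

scores = {year: api_score(pubs, grant) for year, (pubs, grant) in sorted(history.items())}
for year, score in scores.items():
    print(f"{year}: API = {score:.2f}")

# A simple first-to-last comparison flags upward or downward "drift" over the period
years = sorted(scores)
direction = "upward" if scores[years[-1]] > scores[years[0]] else "downward or flat"
print(f"API trend {years[0]}-{years[-1]}: {direction}")
```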

Weaknesses of this formula, which is why it should be used in the proper context, include that it does not directly address author order, the h-index, or the impact factors of the journals in which articles appear. Some modification of the definition of the term “publication” could address those concerns. Institutions in other nations may wish to convert the denominator to $USD/$CAD (US and Canadian currencies often trade at near parity) for valid international comparisons, when relevant. Further perspectives on the application of this measure, and further limitations, can be revealed through both theoretical work on its mathematical-statistical underpinnings and systematic research.

We provide this formula to stimulate debate on the crucial areas of faculty development, academic productivity, and the challenges of objectively comparing the relative academic contributions of different faculty groups who may not have equal and equitable access to resources for academic products. It is important to emphasize that academic productivity by primarily clinical faculty members nonetheless contributes meaningful scholarship to the field.

Academicians should be aware of the strengths and limitations of traditional metrics and should be judicious when selecting any metric for an objective assessment of scholarly output and research impact. Databases such as Scopus (Elsevier), Web of Science (Clarivate Analytics), and Google Scholar (Google) have created their own bibliometric measures using their own unique data, publications, indexes, and subject categories [4]. Of the three, Google Scholar is the only online citation database free to the public, whereas the other two require a subscription. Table 2 summarizes the advantages and disadvantages of some common measures and tools that have been used to evaluate the performance of scholars based on bibliometric data, and how they compare to the API [1-6].

Table 2. Common advantages and disadvantages of several bibliometric measures or tools used to evaluate scholarly productivity and how they compare to API [1-6].

Number of articles
Advantages:
– Measures quantity
Disadvantages:
– Does not measure impact of articles

Number of citations
Advantages:
– Measures impact
Disadvantages:
– Gives weight to highly cited articles versus original research contributions

Scopus
Advantages:
– Provides indexing and abstracting
– Best indexing platform
– Provides citation tracking
– Automatically calculates the h-index
– Covers a wide range of subjects; strong coverage of science and technology journals
– 100% MEDLINE coverage
Disadvantages:
– Requires subscription (not free)
– Commercial database
– Years of citation coverage: 1996 to present; skewed h-index for scholars whose careers predate 1996 (citations, and the calculations based on them, are available only for articles from 1996 onward)
– h-index and citation counts are generated from all the publications of a given author, independent of the keyword searched

Web of Science
Advantages:
– User-friendly web interface
– First database to incorporate the h-index
– The h-index may be viewed minus self-citations (removed only if listed as first author)
– Enables viewing of orphan records using the “cited references” search feature to expand the citation set
Disadvantages:
– Requires subscription (not free)
– Years of citation coverage: 1900 to present
– English-language, Western, and USA bias
– Lacks citation tracking
– Should not be used alone for locating citations to an author or title

Google Scholar
Advantages:
– Provides indexing and abstracting
– Does not require subscription (free)
– Automatically calculates the h-index and number of citations
– Covers journal and academic websites, electronic-only publications, preprints, theses, books from the Google Books project, and others
Disadvantages:
– Years of citation coverage: not revealed
– Poorer coverage of print-only material than its competitors (Scopus, Web of Science)
– Highly cited articles appear in top positions and gain more citations, while new papers seldom appear in top positions, receive less attention from users, and thus accrue fewer citations
– Lacks citation tracking
– No impact factor
– Lacks useful search filters

h-index
Advantages:
– Objective and easy-to-calculate metric
– Combines publication output and citation impact
– Robust cumulative indicator over time
– Measures “durable” performance (not only single peaks)
– Score can never decrease
– Any scholarly document type can be included, since the h-index is not changed by adding uncited articles
Disadvantages:
– Favors senior researchers with strong publication records (it can show research and its impact in the most positive light)
– Puts small but highly cited article sets at an advantage
– May be useful for identifying outstanding performance but less so for assessing fair/good performance, due to skewed rank-frequency distributions
– Cannot be used to compare scholars across disciplines, due to discipline-based variations in scholarly output and citation patterns
– Depends on the individual’s scientific age
– Does not account for the number of authors
– Does not differentiate between active and inactive scholars
– Deemphasizes single, successful publications
– Increasing the number of publications alone does not have an immediate effect on the h-index
– Lacks sensitivity to performance changes and is only weakly sensitive to the number of citations received
– Newcomers are at a disadvantage (publication output and observed citation rates are projected to be low)
– Allows scholars to rest on their achievements, since the number of citations received might increase over time even if no new articles are published
– Based on rather long-term observations, and does not show decline in a scholar’s career
– Overlooks the number of coauthors and their individual contributions, giving equal credit to all contributing authors
– Does not disregard self-citations and thus may lead to an inflated score

g-index
Advantages:
– Gives more weight to highly cited articles
Disadvantages:
– Similar limitations as the h-index

API
Advantages:
– Emphasizes publication output (with or without grant funding), with an immediate effect on the API score
– Does not involve the number of citations received
– Depicts annual academic achievements
– Can track the academic performance trajectory over a scholar’s career
– Newcomers are not at a disadvantage compared with senior counterparts, because productivity is measured cross-sectionally for the respective academic year
– High sensitivity to performance changes; it can provide incentives to generate scholarship
– Unlike the h-index, which does not change by adding uncited articles, any scholarly document type can potentially be counted
– Puts small article sets (with or without citations) at an advantage (“A publication is a publication.”)
Disadvantages:
– Score is based on rather short-term observations (annual performance)
– Cannot be used to compare scholars across disciplines, due to discipline-based variations in research output and grant funding
– Tied to the number of publications per academic year
– Less influence from grant funding
– Overlooks the number of coauthors and their individual contributions, giving equal credit to all authors on a paper
– In contrast to the h-index, which is “durable,” API measures are variable over time

API: Academic Productivity Index.

Summarizing these strengths and limitations, we propose that the API may prove to be a promising indicator, the strength of which lies in its potential application to the assessment of all academic productivity “sets,” including brief articles, where the application of the more traditional metrics can prove problematic. In contrast to the h-index, the API is a non-cumulative indicator that takes full account of the dynamics and incentives of publication activity while de-emphasizing grant attribution.

Conclusions

We propose the API as a novel method to link the output of articles and other publications to research grant funding, which carries with it the expectation that articles will follow to report completed research. Academic departments may consider this index among the many other metrics used in the important area of quantitation of faculty members’ performance. Tracking the API over time may give leaders a helpful marker of the relative productivity of faculty members over their careers, which could be of use in promotion actions and other administrative decisions that benefit from a more numerical, or “objective,” approach. In addition, decisions to allocate research funding and research logistical support may be aided by a decisional support instrument such as the API. Institutions may wish to develop modifications of the API to be resonant with other academic metrics at their respective institutions.

References

  1. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci USA. 2005;102(46):16569-16572.
  2. Glänzel W. On the opportunities and limitations of the h-index. Science Focus. 2006;1(1):10-11.
  3. Egghe L. Theory and practice of the g-index. Scientometrics. 2006;69(1):131-152.
  4. Agarwal A, Durairajanayagam D, Tatagari S, et al. Bibliometrics: tracking research impact by selecting the appropriate metrics. Asian J Androl. 2016;18(2):296-309.
  5. Sidiropoulos A, Katsaros D, Manolopoulos Y. Generalized h-index for disclosing latent facts in citation networks. Scientometrics. 2007;72(2):253-280.
  6. Lehmann S, Jackson AD, Lautrup BE. Measures for measures. Nature. 2006;444:1003-1004.