

Measuring Scientific Productivity



#1 Dr. Joseph Lorenzo

    Advanced Member

  • Administrators
  • 94 posts

Posted 31 May 2013 - 11:31 AM

Science is a field that revolves around quantification. Scientists spend much of their time using technologies to quantify natural events in order to determine whether manipulating those events alters them. It is therefore not unexpected that scientists, when evaluating themselves or their peers, frequently employ numeric metrics. In recent years such measures have often been applied to the productivity of individual scientists. There are numerous ways to measure this productivity but, not infrequently, the quality of a manuscript is judged by the Journal Impact Factor (JIF) of the journal in which it was published. The JIF is the average number of citations received in a given year by the articles a journal published during the previous two years. However, a number of scientists and journal editors have recently questioned the wisdom of this approach, and their concerns have now been published as the San Francisco Declaration on Research Assessment (DORA) (1).
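To make the arithmetic behind that definition concrete, here is a minimal sketch of the standard two-year JIF calculation in Python, using invented numbers rather than any journal's real citation data:

```python
# A minimal sketch of the two-year JIF arithmetic, with invented numbers
# (this is not the actual Thomson Reuters pipeline, just the basic formula).

# Citations received in 2012 by items the journal published in 2010 and 2011.
citations_in_2012_to_prior_two_years = 1200

# "Citable items" (articles and reviews) the journal published in 2010 and 2011.
citable_items_2010_2011 = 400

# 2012 JIF = citations in 2012 to the previous two years' items,
# divided by the number of citable items published in those two years.
jif_2012 = citations_in_2012_to_prior_two_years / citable_items_2010_2011
print(f"2012 Journal Impact Factor: {jif_2012:.1f}")  # -> 3.0
```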

The DORA statement points out that: “the Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article”. It goes on to highlight a number of problems with this measure: “A) citation distributions within journals are highly skewed; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews; C) Journal Impact Factors can be manipulated (or "gamed") by editorial policy; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public”.

Since funding agencies and promotion committees often use the JIF to evaluate the scientific output of individuals, it may be misused as a measure of scientific quality and therefore inappropriately influence decisions about who is awarded funding or promotion. In an editorial in the journal Science, its editor, Bruce Alberts, expanded on the dangers of relying on JIF data to measure scientific productivity (2). He points out that relying on such a measure encourages “safe”, “me-too science”. Furthermore, he states that: “Any evaluation system in which the mere number of a researcher's publications increases his or her score creates a strong disincentive to pursue risky and potentially groundbreaking work, because it takes years to create a new approach in a new experimental context, during which no publications should be expected. Such metrics further block innovation because they encourage scientists to work in areas of science that are already highly populated, as it is only in these fields that large numbers of scientists can be expected to reference one's work, no matter how outstanding”.

So what should be done to prevent the problems related to relying on the JIF to evaluate scientists? The DORA document suggests: “…the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations; the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact)”.

Implementing these recommendations will not be easy. The scientific establishment moves slowly, and it is much harder to thoughtfully analyze a scientist's contributions by critically evaluating the impact of his or her entire body of work on a field than to rely on a simple measure like the JIF. However, relying on imperfect measures is destructive to scientific advancement, discourages innovation, and should be discontinued.

Joe Lorenzo,
Farmington, CT, USA

#2 Jonathan Reeve

    Newbie

  • Members
  • 1 posts

Posted 06 June 2013 - 02:48 PM

Using Journal Impact Factors (JIFs) in any form of peer review is the cancer at the heart of scientific assessment. It should have been abandoned in 1997 or before, on the publication of Per Seglen's thorough, and thoroughly scientific, demolition of the utility of JIFs for any purpose other than their original one, which was to help librarians choose which journals to purchase (Seglen PO. Why the impact factor of journals should not be used for evaluating research. British Medical Journal 1997;314:497). The fact is that JIFs are the means, not the medians, of highly skewed (roughly log-normal) citation distributions, and many papers in high-impact journals are rarely, if ever, cited. Also, a particular individual's own citations often show no significant correlation with the JIFs of the journals in which he or she publishes.
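To illustrate the mean-versus-median point, here is a small Python sketch using simulated, log-normally distributed citation counts (not Seglen's actual data):

```python
# An illustrative sketch, with simulated rather than real citation counts, of
# the point that the JIF is the mean of a highly skewed distribution: when
# per-article citations are roughly log-normal, the mean sits well above the
# median, so a "typical" article is cited far less than the JIF suggests.
import numpy as np

rng = np.random.default_rng(0)
citations = rng.lognormal(mean=1.0, sigma=1.2, size=10_000)  # simulated per-article citations

print(f"mean per-article citations (what the JIF reflects): {citations.mean():.1f}")
print(f"median per-article citations (a typical article):   {np.median(citations):.1f}")
```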



