As a retired statistician who recently joined ResearchGate (RG), I could not help but notice that statistics journals have very low impact factors, especially compared to the extremely large number of journals in biology and medicine. I was not in academia but in US Federal Government applications, so my research results were largely published in conference proceedings and less formal sources; still, I would think there are university professors in statistics who feel the pressure to "publish or perish" as much as those in a biology department. Comparing cumulative impact factors between such academics, as RG does, appears ludicrous.  First, does a popularity contest between two journals really reflect their "impact"?  If you look at newspaper circulation, then according to

 http://m.huffpost.com/us/entry/3188612

that places the New York Post ahead of the Washington Post.  Really?

Statistics has a major "impact" on virtually ALL disciplines!  But on RG, if one is on a team of 6 authors for a biology journal article with an "impact factor" of 5, even an article that is only ancillary to scientific interests and only for a short time, RG assigns each person not 5/6 of an impact factor 'point,' but all 5 points.  Thus the article is 'worth' 30 impact factor points!  But if you are a statistician who develops a very useful new methodology or statistical 'tool,' you are usually working with at most one other person.  Since statistics journals rarely have an impact factor greater than 1, that means that for two authors the article is 'worth' 2 impact factor points, as opposed to 30.  From what I have seen on RG, though not from a rigorous statistical study  :-)  , the situation often appears even more lopsided than my hypothetical example above.
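The arithmetic above can be put in a small sketch. To be clear, the full-credit-per-author rule is my reading of how RG appears to tally cumulative scores, not a documented formula, and the function names are mine:

```python
def full_credit_total(impact_factor, n_authors):
    """Total 'points' when every co-author is credited the journal's full
    impact factor (the rule described above)."""
    return impact_factor * n_authors

def fractional_total(impact_factor, n_authors):
    """Total if the impact factor were instead split evenly among
    co-authors; this always sums back to the journal's impact factor."""
    return (impact_factor / n_authors) * n_authors

# Hypothetical biology paper: 6 authors, journal impact factor 5
biology = full_credit_total(5.0, 6)   # 30.0 points across the team
# Hypothetical statistics paper: 2 authors, journal impact factor 1
stats = full_credit_total(1.0, 2)     # 2.0 points across the team

print(biology, stats, biology / stats)
```

Under full crediting the biology team banks 15 times the points of the statistics pair, even though under fractional crediting the gap would only be the 5-to-1 ratio of the journals' impact factors themselves.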

Below are some 'typical' examples I found (again, not a statistical study, but perhaps a starting point) for comparing impact factors of the many popular journals in biology and medicine with those of the pitifully lowly journals in statistics:

                  BIOLOGY examples

Diversity and Distributions: A Journal of Conservation Biogeography

5.47

Molecular Ecology

5.84

New Phytologist

7.67

 

           HEALTH and MEDICINE

Journal of Personality and Social Psychology

5.08

Osteoarthritis and Cartilage (OSTEOARTHR CARTILAGE)

4.66

There are apparently several journals in neurology with impact factors over 1.0 each.

http://library.umassmed.edu/ebpph/top25.cfm lists the "top 25" journals for public health, starting with #1:

International Journal of Epidemiology

9.20

Down to #25:

Genetic Epidemiology

2.95

And then there are the very high impact factor journals such as the

New England Journal of Medicine

54.4

                     STATISTICS

Journal of the American Statistical Association (JASA)

2.11

The American Statistician

0.88

Pakistan Journal of Statistics

0.34

Communications in Statistics - Theory and Methods

0.28

     -------------   

So does the impact factor system unfairly deemphasize statistics?  It appears to me that the obvious answer is "Yes."  Some statisticians may "achieve" a high cumulative impact factor by appearing in a long list of authors for a non-statistical journal, and if that encourages more direct communication between statisticians and all of the disciplines that statistics supports and validates, then that is good news. But in general, the system deemphasizes statistics, especially original statistical research. Granted, statisticians often seem to go off on tangents that may not hold much promise for later application.  Just look at perhaps the most prestigious statistics journal, the Journal of the American Statistical Association (JASA), which can be quite abstract and, in my opinion, often too divorced from reality.  Knowledge and research for their own sake are great, but 'real life' problem solving is what interests me.

Regardless of your interests, however, I think, as I stated, that the current "impact factor" system is grossly biased against statisticians and their actual impact on society.

Your thoughts?   
