Impact Factors: Aiming at the Wrong Target

Kenneth M. Adams (Ann Arbor Veterans Affairs Health System, Departments of Psychiatry and Psychology, The University of Michigan, USA)

The perceived need for a metric gauging the scientific “impact” of journals and of the neuropsychologists who publish in them is not hard to understand. The real evaluation of publications for 1) quality of scholarship, 2) creativity in science, 3) demonstrated mastery of methods, 4) technical facility in analyses, and 5) skill in interpretation is hard work and involves both quantitative and qualitative judgments. How much easier to just use a number that is “objective”, especially when proponents (Garfield, 1999) baldly assert equivalence between the hard process of quality appraisal of journal work and “impact factors”, which amount to counting mentions of papers.

The reasons offered to embrace this method are that 1) there is nothing better out there; 2) we have been calculating them for some time; and 3) we mostly get it right, reliably discriminating the best work appearing in crème-de-la-crème, toney medical publications from workmanlike but unglamorous reports appearing in more pedestrian subspecialty journals. The use of “impact factors” is all the more appealing since it appears to bring fairness and rationality to decisions that might be “supported” by such indices. Often these numbers can be used to affect rewards such as funding, fame, or promotion.

The principal applications where “impact” metrics have been held to have value involve: 1) the relative importance of neuropsychological journals; 2) the relative scientific productivity and academic impact of individual neuropsychologists; and 3) the relative scientific standing of academic units or departments. However, the proponents are again not shy in suggesting additional uses that are completely inappropriate. Perhaps implying that such methodology can even forecast future Nobel Prize laureates (Garfield and Welljams-Dorof, 1992) is a bit over the top for most people, but such assertions are no less masquerades in search of grace than other, less grand claims for the “impact factor”.

Other endeavors in non-academic worlds (e.g., business) have routinely relied upon such impact indices for a long time and have arguably put them in perspective. It is critical to note, however, that in business these numbers are not thought to relate to anything permanent or immutable, and perhaps not even to the parameters of fame or quality they try to capture. Most observers expect that there will be new winners and losers when the next round of ratings emerges. Interestingly, “impact” ratings for neuropsychology journals appear increasingly to behave in this shifting fashion.

“Impact” indices fail in all three of these applications because they cannot move beyond the fiction that the cumulative use of published works is equivalent to intellectual salience.
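
To fix ideas, it is worth stating the quotient at issue. In rough form (the precise counting rules belong to the indexer), the two-year “impact factor” of a journal for a census year \(Y\) is

\[
\mathrm{IF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where \(C_{Y}(y)\) is the number of citations received in year \(Y\) by items the journal published in year \(y\), and \(N_{y}\) is the number of “citable items” (as classified by the indexer) the journal published in year \(y\).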

Reading the pleadings of proponents of this methodology is interesting (Garfield, 1996, 1999), because one can almost get lost in the minutiae of how the “impact factor” can be tweaked to account for problems, and forget the central truth that a citation is not quality. I can cite only a few of the reasons here why decisions are poorly served when “impact” factors are given more than the cursory value they reflect.

Publishers of journals in neuropsychology perhaps most easily abide the notion of “impact” indices because they can tout an “impact factor” when it is good and downplay it when it is unflattering or not useful in marketing. For editors, it is perhaps said best by Hoey (2001): “Ah, impact factors. I think that most journal editors would rather do without them”. Journal publishers and editors can also, if they choose, devote efforts to boosting an “impact factor” by inviting luminaries to publish, unveiling new paradigms, hosting controversies, or publishing review papers of the “current review” type. Proponents strike poses of mock outrage at such suggestions and propose further tweaking machinations to foil the upwardly mobile, but where there is a numerator and a denominator that can be augmented or diminished, there can be purposeful plans to influence them (a worked illustration of this arithmetic follows below).

The current methodology for assessing “impact” also has problems in that neuropsychological journals straddle at least four major journal realms: cognitive science, clinical neuropsychology, neuromedical sciences, and methodology/hardware journals (i.e., rehabilitation/neuroimaging). A list of journal impacts in neuropsychology is perforce a value judgment on the blend of these realms that the ranked journal covers.

Qualitatively, journal editors can often provide some sense of where a neuropsychology journal stands in relative terms. The number, source, nature, and novelty of incoming manuscripts give clues as to how the field is viewing a journal. It should also be noted that there is often an informal “food chain” of journal submission, and frequently one sees a paper rejected from one journal appearing in the submission queue of the next journal in the chain. However, even “obvious” truths can be deceptive. Reports of important clinical trials or collaborative studies run by governmental agencies do bring together centers and large numbers of subjects in ways that would seem momentous, but many of these papers, with their double-digit counts of authors and affiliations, are disappointing in the end because neuropsychological ideas and procedures wind up a small, abbreviated part of a larger effort.

Those viewing “impact” factors as a reflection of academic market forces also ignore the realities of the divergent resources behind the organizations and publishing corporations that publish journals in neuropsychology and elsewhere. Journals that must be sufficiently “valued” to be purchased by subscribers in order to remain solvent cannot reasonably be benchmarked against journals that are maintained at massive annual losses in order to “force” them into the marketplace. Journals of the latter type (e.g., the American Psychological Association’s Neuropsychology) may be “bundled” with more viable and established journals in psychology for institutional subscribers. This further distorts any real comparison, basing “impact” factors upon corporate economic muscle or the willingness and ability to spend membership-organization money to strong-arm a journal into acceptance.
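
The arithmetic of such influence is not subtle. With invented numbers purely for illustration: a journal whose 100 citable items drew 200 citations in the census window posts a quotient of 2.0; a few commissioned reviews drawing 30 extra citations lift it to 2.3; persuading the indexer that 10 items were front matter rather than “citable items” lifts it again:

\[
\frac{200}{100} = 2.0 \quad\rightarrow\quad \frac{200 + 30}{100} = 2.3 \quad\rightarrow\quad \frac{230}{100 - 10} \approx 2.56
\]

A rise of more than a quarter in apparent “impact”, with no change whatsoever in the underlying science.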

Proponents suggest that the “impact factor” methodology is strongest for journals, since there are many more observations, but they are entirely disingenuous in not declaring clearly that individual “impact factors” should not be used in any form. One suspects that there is private pleasure at the notion that promotions or reviews have increasingly cited these glory quotients as actually reflecting scientific merit or standing in one’s field. However, individual neuropsychologists show no constancy or consistency in the ways in which they (and non-neuropsychologists to an even greater degree) cite neuropsychological work. Some authors are spare in their citation style, while others are almost Talmudic in their quest to memorialize the provenance of every thought in their papers. Ego or a sense of scientific importance plays a role, too, and many colleagues have a too-readily recalled memory of reading a new paper covering the very topic of their previously published work and finding no reference to it. Indeed, some colleagues almost appear to be studied in their repeated failure to cite parallel work, unless compelled to do so by an editor.

Another distortion is auto-citation, wherein some colleagues never fail to include citations of all their own work, however tangential to the topic at hand. There are also, quite often, citations generated out of “knock-off” papers that repackage or restate already reported data or ideas for new audiences or constituencies. Finally, there is the very serious problem of multiple publications of data. This used to be construed as the recycling of the same data in different publication forums. The more recent and insidious form, however, divides a larger protocol into reports of individual tests or variable constructs. Most recently, this has even extended to separate reports on the components of single tests! This kind of “wedding cake” publication program is most difficult to detect at the review stage, and awkward for an editor to confront when a paper is identified as a “slice” of something that should appear in its entirety for the sake of intellectual integrity. The pattern contributes to the “impact” index before it becomes obvious as such, since those pursuing such programs usually have the good sense to submit to different journals.

The third use of “impact” factors, to assess the relative scientific “importance” of academic departments, is an exercise in vanity akin to the listings of the 100 “best” or “top 10” academic departments or hospitals that appear annually and are rarely useful or informative, except to tell us that more money is better than less money when running either one.

In the end, “impact” is something beyond harvesting citations. The real notion of value here is that of “intellectual salience”. Publications that shape the thinking of neuropsychologists are sometimes ones that do not promptly make their way into the reference section of the next paper, if at all, and may even require a period of gestation to inform or change one’s thinking. Some papers have no obvious neuropsychological significance, but change the way one looks at problems. To evaluate the scientific progress or promotion of a colleague, to gauge the likely merit of a grant and its applicant, or to consider the relative intellectual contribution or importance of a neuropsychology journal on the basis of “impact” factors trivializes the assessment of intellectual salience, and it undermines and supplants true peer review with a fatally flawed system of Brownie-point scholarship.

Those unconvinced of this view have already moved on to applying the paradigm to the world wide web (Bjorneborn and Ingwersen, 2001). Real-time quotients of scientific “importance” cannot be far behind.

REFERENCES

BJORNEBORN, L., and INGWERSEN, P. Perspectives of webometrics. Scientometrics, 50: 65-82, 2001.

DUMONTIER, C., NIZARD, R., and SAUTET, A. Impact factor: do we have to choose between the impact factor and the Revue de Chirurgie Orthopedique? Revue de Chirurgie Orthopédique et Réparatrice de l’Appareil Moteur, 87: 115-128, 2001.

GARFIELD, E. How can impact factors be improved? British Medical Journal, 313: 411-413, 1996.

GARFIELD, E. Journal impact factor: a brief review. Canadian Medical Association Journal, 161: 979-980, 1999.

GARFIELD, E., and WELLJAMS-DOROF, A. Of Nobel class: a citation perspective on high impact research authors. Theoretical Medicine, 13: 117-135, 1992.

HOEY, J. Improved ranking for CMAJ. Canadian Medical Association Journal, 164: 747, 2001.

Kenneth M. Adams Ph.D., Ann Arbor Veterans Affairs Health System, Departments of Psychiatry and Psychology, The University of Michigan, Ann Arbor, MI, USA.