DISCUSSION, YES; TESTING, YES; BUT IMPLEMENTATION: A BIT PREMATURE!
Stevan Harnad (Centre de Neuroscience de la Cognition, CNC, Université du Québec, Montreal, Canada)
Classical peer review is imperfect (as is human judgment and performance in all areas, as far as I know), but it has yielded the current quality level of our refereed research literature. Peer review reform proposals need to be empirically tested and shown to yield at least a comparable level of quality. The sample must be large and representative enough to give a reasonable likelihood that positive findings are not just a Hawthorne effect and will reliably scale up to the literature as a whole. Until such data are available, it does not seem wise to put research quality at risk by recommending or implementing untested variants of, or alternatives to, classical peer review on the "live" literature. (The trials should at least be conducted in parallel with classical peer review as a backup and a control baseline for comparison.)

"One possible alternative is to substitute referees with sponsors, chosen by the authors, who overtly review and promote the papers (with their names as sponsors on it)… the editorial board could still reject it on scientific or nonscientific grounds (e.g., inappropriateness for the journal audience)… overt and rewarding for the reviewers… more responsibility, because… names would be on it. Some problems… nepotism, favouritism… over-commitment… awkwardness… covert blackmailing. All true, though not dissimilar [to] the imperfections of the current system."

These variants (author-chosen referees, open refereeing, etc.) have all been proposed before, and some have even been given experimental trials. But the jury is still out; until there is reliable positive evidence, these remain speculative hypotheses rather than empirical findings ready to be applied in practice. It is not clear a priori whether any of these deviations from classical peer review would work, and even less clear what quality of papers they would yield, in the short and long run. Each needs to be tested before it can be implemented.

See:
Harnad, S. (1998) The invisible hand of peer review. Nature [online] (c. 5 Nov. 1998) http://helix.nature.com/webmatters/invisible/invisible.html
Longer version: http://www.exploit-lib.org/issue5/peer-review/ and http://www.cogsci.soton.ac.uk/~harnad/nature2.html
"Peer Review Reform Hypothesis-Testing" http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Amsci/0479.html
"A Note of Caution About 'Reforming the System'" http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Amsci/1169.html

Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom.
[email protected], http://www.cogsci.soton.ac.uk/~harnad/
Cortex, (2002) 38, 425