TIBS - October 1976


Letter from Washington
Nicholas Wade

Votes for quotes

The National Institutes of Health recently came under criticism from the General Accounting Office, the investigatory arm of Congress, for not making its own scientists put their research proposals through the same peer review system by which the NIH judges the proposals of others. The NIH replied that it didn't need to, one reason being that its intramural research was so good. What was interesting about the response, apart from its bravado, was the rationale for the claim: a study based on the technique known as citation analysis.

Researchers are likely to hear a lot more about citation analysis as it gains popularity with science administrators. Those who think it can be laughed to scorn as mere footnote counting will find they have seriously underrated the power and accuracy of the technique. It has already proved a valuable tool for historians and sociologists of science, and it is now being gingerly tested out in practical ways that affect the everyday conduct of science.

Citation analysis rests on the simple postulate that the number of times an article is cited in the scientific literature is a rough measure of its scientific importance. On this basis one can devise quantitative measures of the importance or influence of any particular source of articles, whether it be an individual scientist, an institution, a journal or a country.

The NIH rebuttal quoted above is based on a citation analysis technique which assesses the relative influence, in terms of citedness, of the principal biomedical journals. From this measure a score or 'average publication weight' is assigned to the major United States institutions whose scientists published papers in these journals. Here is how the 15 leading institutions came out in terms of average publication weight:

 1  Harvard University              31.2
 2  NIH                             29.8
 3  University of Wisconsin         28.2
 4  Yale University                 21.3
 5  Johns Hopkins University        24.9
 6  University of Pennsylvania      24.8
 7  Columbia University             24.8
 8  Stanford University             24.2
 9  University of Washington        23.5
10  University of California        22.0
11  New York University             21.9
12  State University of New York    20.5
13  University of Michigan          19.8
14  University of Minnesota         19.8
15  Mayo Foundation                 18.8
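The arithmetic behind such a score can be pictured with a small sketch: each journal is assigned an influence weight reflecting how heavily it is cited, and an institution's average publication weight is the mean weight of the journals in which its papers appeared. The journal names, weights and paper counts below are invented purely for illustration; the actual study derives its weights from a far larger citation database and a more elaborate weighting scheme.

    # Illustrative sketch only: journal weights and publication records are hypothetical.
    journal_weight = {
        "Journal A": 35.0,   # in the real study, weights reflect how often
        "Journal B": 25.0,   # each journal's articles are cited
        "Journal C": 15.0,
    }

    # One (institution, journal) entry per published paper.
    papers = [
        ("Institution X", "Journal A"),
        ("Institution X", "Journal B"),
        ("Institution X", "Journal B"),
        ("Institution Y", "Journal B"),
        ("Institution Y", "Journal C"),
    ]

    def average_publication_weight(papers, weights):
        """Mean journal weight over each institution's papers."""
        totals, counts = {}, {}
        for institution, journal in papers:
            totals[institution] = totals.get(institution, 0.0) + weights[journal]
            counts[institution] = counts.get(institution, 0) + 1
        return {inst: totals[inst] / counts[inst] for inst in totals}

    print(average_publication_weight(papers, journal_weight))
    # {'Institution X': 28.33..., 'Institution Y': 20.0}

On this toy data Institution X scores about 28.3 and Institution Y 20.0; the figures in the list above were produced in the same spirit, but from many thousands of papers and journals.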

These kinds of numbers may impress the penny counters at the General Accounting Office, but is it not absurd to suppose that qualities such as scientific creativity can be assessed by so crude a measure? And in any case, the critic may object, you include articles in your list of references for many reasons other than their pertinence, such as because your professor wrote them, or to show how widely read you are, or because they present erroneous conclusions which you wish to dispute. Citation analysts believe that these kinds of citation are just noise in the system and have no serious effect on the real message. They also note that citation analysis accords closely with all the conventional means of evaluating scientific merit, such as peer review.

Country        Citation to publication ratios        Nobel prize winners in
               Biomedical        Clinical            medicine, 1955-73
               research          medicine
U.S.               1.37              1.31                    30
U.K.               1.15              1.26                     7
Japan              0.92              0.57                     -
G.F.R.             0.76              0.55                     4
France             0.61              0.50                     3
U.S.S.R.           0.24              0.18                     -
Others             0.74              0.78                     9
World total        1.00              1.00                    53

(Adapted from 'Distribution of International Biomedical Literature' by J. Davidson Frame, Computer Horizons Inc., New Jersey, June 1976.)

In fact a recent comparison provided the interesting result that not only did a citation analysis correlate well with two peer reviews of the same material, but the peer reviews each correlated better with the citation analysis than with each other.

Of all the uses to which this new-found bibliographic tool can be put, the most provocative is to assess the performance of individual scientists. A current court case concerns a biochemist who was denied tenure at an east coast university because of alleged sex discrimination. Her supporters have devised a method for estimating the number of citations an article will ever receive from the number it has received to date. They calculate that the biochemist has a lifetime citation rate of 53.5 times per article, while two men in her department who received tenure at the same time she was denied it have lifetime citation rates of 21.8 and 50.9 per paper. Many practitioners of the technique fear that it will get a bad name from this kind of usage. They believe the method is accurate enough for assessing large-scale phenomena, such as the quality of an institution or the interaction between two fields of inquiry, but that it is too crude to assess individuals. The counterargument is that citation analysis represents the integrated peer review of everyone in the field and that, with appropriate precautions, it should play a part along with conventional measures in deciding matters of promotion, tenure and so forth.

At the other extreme the technique can be used to assess the performance of a nation's scientists. A recent survey conducted for the NIH by Computer Horizons Inc. compared the efforts in biomedical research and clinical medicine of the world's six scientific powers (see the table). The ratio of citation to publication (defined as the percentage of world citations going to country X divided by the percentage of world publications produced by country X) shows the extent to which a country's research is used by others. This measure of productivity is in rough accordance with such crude indices as the number of Nobel prize winners in physiology or medicine. There is, however, an evident anomaly in the case of Japan, which has fewer Nobel prizes in physiology or medicine than its citation to publication ratio indicates it deserves. The absence of any non-Western country from the roster of prize winners 'suggests that a Western bias is indeed a possibility worthy of consideration,' the author of the study observes.
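Written out, the ratio for a country X is simply its share of world citations divided by its share of world publications; the numbers in the worked example below are invented for illustration and are not taken from the table.

    R_X \;=\; \frac{C_X / C_{\mathrm{world}}}{P_X / P_{\mathrm{world}}}

Here C_X and P_X are the citations received and the papers published by country X. A country producing, say, 6% of the world's biomedical papers but attracting 9% of the world's citations would score 0.09 / 0.06 = 1.5, meaning its papers are cited half as often again as the world average. A score of 1.00 means citations exactly in proportion to output, which is why the world total is 1.00 in both ratio columns of the table.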


A type of study with considerable pertinence to research funding concerns the extent to which papers in a given field are cited. It is a sobering fact that only half of the scientific articles published are ever referred to in the scientific literature, which raises the question of how, if at all, the research reported in the other half made any contribution to the onward march of science. Of the papers that do get cited at all, the average article is cited only 1.7 times a year. A celebrated study reported in Science (1972, 178, 368) by two sociologists of science at Columbia University noted that even in a high-quality journal such as Physical Review, 80% of the articles were being cited seldom or not at all within three years after publication. The two sociologists, Jonathan and Stephen Cole, inferred that it was not the case, as is commonly supposed, that every scientist contributes his own little bit toward pushing back the frontiers of knowledge. Rather, it is the best scientists whose work is cited over and over again, while the great bulk of published literature seems to have little abiding importance.

The Coles proceed to raise the chilling question of 'whether the same rate of advance in physics could be maintained if the number of active research physicists were to be sharply reduced...' Such lines of argument, if pushed to their logical conclusion, could engender demands for a radical cut-back in the scientific enterprise, or at any rate a painful reordering of priorities. At the least, citation analysis lays the scientific community open to judgement by outsiders. Despite these dangers, as some may consider them to be, the technique is being vigorously developed by administrators at the NIH and National Science Foundation who see it as a manifestly objective tool for corroborating the much criticised peer review system.

Eugen Baumann and sulphate esters
Alexander B. Roy

There appeared in 1876 in Berichte der deutschen chemischen Gesellschaft a paper by Baumann entitled 'Ueber Sulfosäuren im Harn' [1] which can be regarded as the starting point for all later work on sulphate esters. It is interesting to note that the date on this paper is 25 December 1875: Christmas Day 1975 is not likely to be the date of many current publications.

Eugen Baumann was born in Canstatt, now a suburb of Stuttgart, on 12 December 1846. He set out to become, like his father, an apothecary and while serving his apprenticeship with him attended Fehling's lectures in Stuttgart and worked in his laboratory.

[Figure: Eugen Baumann]

Apparently at Fehling's instigation Baumann spent three years working first in Lübeck, in northern Germany, and then in Gothenburg, in Sweden, so that only in 1870 did he enrol at Tübingen, to take his examinations six months later. His interests were greatly stimulated by the work in Fittig's laboratory and it was then that he decided to take up an academic career. This opportunity soon came because Hoppe-Seyler, his examiner in toxicology, was so impressed by Baumann that he offered him an assistantship in his laboratory, then still housed in the kitchen of the Castle at Tübingen. It was there that he prepared his doctoral dissertation 'Ueber einige Vinylverbindungen', which he submitted in 1872. In the same year Baumann went with Hoppe-Seyler, as first assistant, to the new Hochschule in Strassburg, where he began his work on sulphate esters. Baumann subsequently moved, in 1877, to the Physiological Institute in Berlin and then, in 1883, to Freiburg, where he remained until his death in 1896.

Baumann's entry into the field of sulphate biochemistry came through the studies then in progress in several laboratories, including Hoppe-Seyler's, on the 'phenol-forming', 'catechol-forming' and 'indigo-forming' substances of urine, and stemmed directly from his own observation that the hydrolysis of the last-named yielded an equivalent amount of sulphuric acid. This observation led to the development of a method [2] for the determination of free and bound sulphuric acid in urine, a method still in use today. Free sulphuric acid, precipitated by BaCl₂, is simply SO₄²⁻, but bound sulphuric acid, precipitated by BaCl₂ only after hydrolysis by acid, was at that time of an unknown nature.
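In modern terms the acid-labile 'bound' sulphuric acid of urine is largely aryl sulphate ester; the scheme below, written in present-day formulae rather than Baumann's notation, is a sketch of the hydrolysis his method exploits, acid treatment liberating sulphate that can then be precipitated with BaCl₂:

    \mathrm{C_6H_5{-}O{-}SO_3^{-} \;+\; H_2O \;\xrightarrow{\;H^{+}\;}\; C_6H_5OH \;+\; HSO_4^{-}}

The bound fraction is then the difference between the sulphate precipitable after such hydrolysis and that precipitable directly.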

In 1876 was published the paper [1] mentioned above: in it Baumann described the isolation of the crystalline 'phenol-forming' substance from horse urine. He showed that its analysis corresponded to that of a substance having the formula (1)

    HO·C₆H₄·SO₃K   (1)

and concluded that it was the potassium salt of a phenolsulphonic acid. He pointed out that its acid-lability distinguished it from the known m- and p-phenolsulphonic acids and that it must therefore be the potassium salt of the then unknown o-phenolsulphonic acid. In the same paper Baumann showed that the urine of hospital patients treated with phenol contained no free phenol but formed this, and sulphuric acid, on acid hydrolysis. Similar findings