Ranking national research systems by citation indicators. A comparative analysis using whole and fractionalised counting methods☆

Dag W. Aksnes a,∗, Jesper W. Schneider b, Magnus Gunnarsson c

a NIFU – Nordic Institute for Studies in Innovation, Research and Education, Wergelandsveien 7, NO-0167 Oslo, Norway
b Royal School of Library and Information Science, Fredrik Bajers Vej 7K, DK-9220 Aalborg Ø, Denmark
c Swedish Research Council – Department of Research Policy Analysis, Västra Järnvägsgatan 3, SE-101 38 Stockholm, Sweden

☆ The article is based on results presented at the 11th International Conference on Science and Technology Indicators, Leiden University, The Netherlands, 8–11 September 2010.
∗ Corresponding author. E-mail addresses: [email protected] (D.W. Aksnes), [email protected] (J.W. Schneider), [email protected] (M. Gunnarsson).
doi:10.1016/j.joi.2011.08.002

Article history: Received 7 June 2011; received in revised form 11 August 2011; accepted 16 August 2011

Keywords: Citation indicators; Bibliometric methods; International co-authorship

Abstract

This paper presents an empirical analysis of two different methodologies for calculating national citation indicators: whole counts and fractionalised counts. The aim of our study is to investigate the effect on relative citation indicators when citations to documents are fractionalised among the authoring countries. We have performed two analyses: a time series analysis of one country and a cross-sectional analysis of 23 countries. The results show that all countries' relative citation indicators are lower when fractionalised counting is used. Further, the difference between whole and fractionalised counts is generally greatest for the countries with the highest proportion of internationally co-authored articles. In our view there are strong arguments in favour of using fractionalised counts to calculate relative citation indexes at the national level, rather than using whole counts, which is the most common practice today.

1. Introduction

Citation indicators play a prominent role in the assessment of the competitiveness of national research systems. In this paper we will examine the methodological basis for such indicators, specifically the overall citation rate of a country (average citation rate for all publications), which is usually interpreted as an indicator of a country's general scientific performance. This indicator is considered very important by many policymakers and has a great impact on public perception of the comparative position of a nation's research system. In this paper we will demonstrate how two different methods of calculating the relative citation index of a country – namely whole and fractionalised counting methods – affect the results, and we will look at the conclusions that may be drawn concerning national scientific impact. Although this study focuses on citation indicators at the national level, the results are equally relevant for analyses at other levels, such as universities, departments or research groups.

Over the years there have been many discussions concerning the methodological basis for national bibliometric indicators (and indicators more generally). A prime example is the dispute about the "decline of British science" that appeared at the end of the 1980s. In several papers Ben Martin and his colleagues at the Science Policy Research Unit (SPRU) showed that


the UK's share of world publications decreased during the 1970s and the first half of the 1980s (Irvine, Martin, Peacock, & Turner, 1985). However, this conclusion was challenged by others who argued that by using alternative methodologies a rather different picture emerged (see e.g. Leydesdorff, 1988). The adequacy of different measurement principles was heavily debated, for example with regard to the type of publications to be included and how to count papers involving international co-authorship. The debate reveals that the bibliometric community at the time was far from reaching consensus on the best way of measuring national scientific performance. In 1994 Martin concluded that there was no simple way of unambiguously establishing the relative position of a country's science system and whether that position is improving or deteriorating: "Instead we are confronted with a slightly bewildering mass of possible indicators, all of them imperfect to a greater or lesser extent" (Martin, 1994).

While the debate about the "decline of British science" focused on the measurement of publication output by whole and fractional counting of author addresses, we here turn to citation indicators. National citation indicators are based on the set of publications that have at least one author address from a country. In the most basic version, a national indicator is simply calculated as the total number of citations of a country's publications divided by the total number of publications. A relative citation index is constructed by dividing this average by the corresponding worldwide average. This basic version of the indicator can be found in standard bibliometric products produced by Thomson Reuters such as the National Science Indicators (NSI).

However, there are several problems related to calculating national citation indicators this way. It is a well-known fact that there are large differences in average citation rates among the various scientific disciplines and subfields (e.g. Hurt, 1987). Garfield (1979) used the term "citation potential" to describe this difference, referring to the fact that the average number of references per paper is significantly lower in, for example, mathematics than in biochemistry. Moreover, there are significant differences in national scientific specialisation profiles (Glänzel, 2000). This means that countries with high relative publication activity in highly cited fields will have a comparative advantage. Over the years various normalisation procedures have been developed for the construction of citation indicators, e.g. involving reference standards based on journal and subfield averages (Schubert & Braun, 1986, 1996; Schubert, Glänzel, & Braun, 1988; Vinkler, 1986).

The methodological principles underlying citation and publication indicators have been the topic of extensive discussion. Recently we have seen a revitalisation of the debate concerning methods for measuring scientific performance bibliometrically. One issue of debate is the adequacy of various methods for calculating publication indicators (Gauffriau & Larsen, 2005). Another concerns the methodological basis for normalisations of citation indicators. Traditional indicators such as the "crown indicator" (citations per publication divided by the field-based world average, CPP/FCSm) (Moed, de Bruin, & van Leeuwen, 1995; van Raan, 2000) correct for differences among fields by using existing classification schemes.
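Written out (a sketch in generic notation, not taken from the original paper), the basic NSI-style index and the classification-based crown indicator can be expressed as follows, where $c_i$ is the citation count of a country's publication $i$, $e_i$ the mean citation rate of the corresponding field(s), publication year and document type, and $P$ the number of publications attributed to the country:

\[
\text{relative citation index} \;=\; \frac{\tfrac{1}{P}\sum_{i=1}^{P} c_i}{\text{world average citations per publication}},
\qquad
\mathrm{CPP/FCSm} \;=\; \frac{\sum_{i=1}^{P} c_i}{\sum_{i=1}^{P} e_i}.
\]

The first form ignores field differences entirely; the second normalises them as a ratio of sums, which is the construction discussed next.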
An issue of debate is whether normalisation should be calculated at the aggregated level as a ratio of sums, as in the crown indicator described above, or as a sum of ratios at the publication level, as suggested by Lundberg (2007) and more recently by Opthof and Leydesdorff (2010). It is interesting to note that the Centre for Science and Technology Studies (CWTS) at Leiden University has produced a new crown indicator based on normalisation at the publication level (Waltman, van Eck, van Leeuwen, Visser, & van Raan, 2010). Recently, Zitt and Small (2008), Zitt (2010), and Leydesdorff and Opthof (2010) have proposed normalisation procedures based on the characteristics of the citing publications. Their aim is to normalise the variability of citing practices between fields by utilising a classification-free approach. These are referred to as citing-side or source normalisations, in contrast to the traditional cited-side normalisation.

In bibliometric analyses, credit for publications is ascribed to countries (or other units such as institutions or departments). There are different ways of doing this (see e.g. Gauffriau & Larsen, 2005) and the principles and limitations of the various methods are well known. Most producers of bibliometric analyses apply whole counting of publications in the calculation of citation indicators, which means that each country in internationally co-authored publications receives full credit for its participation. In contrast, fractionalised publication counting, in which a country is credited a fraction of a publication equal to the fraction of the author addresses from that country, is rarely applied.

With the exception of a few studies, the question concerning the use of whole and fractionalised counts in the calculation of citation indicators has received little attention. van Hooydonk (1997) showed that the citation impact of a researcher could be dramatically affected by using fractional instead of whole counting procedures. Recently, Leydesdorff and Bornmann (2011) applied fractional counting of citations to journals as a means of normalising for differences in citation averages among disciplines. This was done by using the citing-side fractionalisation method described above.

This article adds to the discussion by analysing the difference between whole and fractionalised counting of publications in the construction of relative citation indexes at the country level. We are interested in examining how the relative citation index of a country is influenced by the counting method used and to what extent this re-ranks countries compared to rankings based on whole counts. Given that the share of publications involving international co-authorship is large and growing, this is an important and timely issue to address. In the fractionalisation we have used the principle that each of the addresses of a paper is weighted as 1/N of a publication, where N is the total number of addresses (cf. Section 2).

It should be noted that our approach differs from the method applied by Leydesdorff and Bornmann (2011) in comparing journals. Leydesdorff and Bornmann (2011) apply a citing-side normalisation where they fractionalise citations according to the number of references in citing publications. Their purpose is field normalisation. We have applied a traditional cited-side approach in order to normalise for field variations in citing practices.
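To make the contrast concrete, a minimal sketch of the citing-side idea (which we do not use) is given below; the data structures are hypothetical, not those of Leydesdorff and Bornmann (2011). Each citation is weighted by the reciprocal of the number of references in the citing publication, so citations arriving from fields with long reference lists count for less.

```python
def citing_side_fractional_count(cited_id, citing_pubs):
    """Citing-side (source-normalised) fractional citation count.

    Each citing publication contributes 1 / (its number of cited references)
    instead of a full citation. `citing_pubs` is assumed to be a list of dicts
    with a 'references' field listing the IDs of the publications it cites.
    """
    return sum(1.0 / len(pub["references"])
               for pub in citing_pubs
               if cited_id in pub["references"])
```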
We also apply fractional citation counts, but unlike Leydesdorff and Bornmann (2011), our purpose is to distribute authorship credits among countries. Hence, there is a difference between fractional citation counting (each citation carries a different weight to the cited publication) and fractional


attribution of citations to author addresses (the citations to a publication are split between all author addresses). Our study focuses on the latter.

2. Data and methods

We have used bibliometric data from the Thomson Reuters database at the Swedish Research Council (covering the Science Citation Index Expanded, Social Sciences Citation Index, and Arts & Humanities Citation Index), which contains publications published between 1982 and 2008.[1] The database corresponds to the data that can be retrieved from Web of Science.[2] This study is restricted to articles, letters and reviews published from 2004 to 2007. Data collection was performed in February 2010.

We have calculated overall field-normalised citation rates for all countries for the period 2004–2007. We have used open-ended citation windows. Citation rates are normalised according to publication type, citation year after publication, and field-specific citation rates (using Thomson Reuters' Subject Categories).[3] Normalisation is done at the publication level, i.e. it is a sum of ratios. In other words, we are using the principle introduced by Lundberg (2007). Some publications are classified under more than one subfield category. In these cases we have used the average citation rate for the respective subfields as reference values (see Kronman, Gunnarsson, & Karlsson, 2010 for details). Field normalisation using Thomson Reuters' Subject Categories is a commonly applied method for constructing relative citation indicators, although an alternative method for normalisation has recently been suggested, as described above (Leydesdorff & Bornmann, 2011; Leydesdorff & Opthof, 2010).

The National Science Indicators (NSI) database by Thomson Reuters was used in one of the analyses. This database contains aggregated bibliometric data at the country level for the period 1981–2009. The database is not publicly available and can be purchased from Thomson Reuters (http://thomsonreuters.com/products services/science/science products/az/national science indicators/).

We have calculated two sets of relative citation scores based on whole counts and fractional counts, respectively. In whole counting each collaborating country receives one credit for its participation. In fractionalised counting a country is credited a fraction of a publication equal to the fraction of the author addresses from that country (1/n). For example, an article with three addresses, of which two are from the US, is attributed to the US with a weight of 2/3. This means that in the calculation of the national citation index of the USA, this paper is weighted as 2/3 of a publication, while a publication with only US addresses is weighted as 1. Throughout the paper this principle has been described as "fractional counting of citations", and by this we, more precisely, mean fractional attribution of citations based on author addresses.
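As an illustration of the two counting schemes, the sketch below computes publication-level (sum-of-ratios) relative citation indexes under whole counting and fractional attribution. It is a minimal sketch, not the production code used for the study: the record structure and the pre-computed expected citation rates are assumptions made for the example.

```python
from collections import defaultdict

def country_indexes(publications, counting="fractional"):
    """Field-normalised relative citation indexes per country.

    Each publication is a dict with:
      - 'addresses': list of country codes, one per author address
      - 'citations': citations received (open-ended window)
      - 'expected':  mean citation rate of the publication's field(s),
                     year and document type (the reference value)
    Normalisation is done at the publication level (a sum of ratios, in the
    spirit of Lundberg, 2007): the index is the weighted mean of the
    citations/expected ratios.
    """
    weights = defaultdict(float)         # sum of publication fractions per country
    weighted_ratio = defaultdict(float)  # sum of weighted citation ratios

    for pub in publications:
        ratio = pub["citations"] / pub["expected"]
        n = len(pub["addresses"])
        for country in set(pub["addresses"]):
            if counting == "whole":
                w = 1.0  # full credit for participation
            else:
                w = pub["addresses"].count(country) / n  # share of addresses
            weights[country] += w
            weighted_ratio[country] += w * ratio

    return {c: weighted_ratio[c] / weights[c] for c in weights}

# Toy data: one purely domestic US paper, and one paper with three addresses,
# two from the US and one from Norway (fractional weights 2/3 and 1/3).
pubs = [
    {"addresses": ["US"], "citations": 12, "expected": 8.0},
    {"addresses": ["US", "US", "NO"], "citations": 24, "expected": 8.0},
]
print(country_indexes(pubs, "whole"))       # both countries get full credit
print(country_indexes(pubs, "fractional"))  # co-authored paper weighted 2/3 (US), 1/3 (NO)
```

With whole counting, Norway receives the same credit for the co-authored paper as the US; with fractional attribution it receives one third.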
3. Results

First, we calculated national citation indexes for a single country: Norway. The results are provided in Fig. 1, which shows annual citation indexes for the period 1981–2008. In addition to field-normalised citation indexes based on whole and fractionalised counting methods, the figure also shows the results of calculations using the NSI standard indicator (also based on whole counting). As described above, the latter indicator is simply calculated as the total number of citations of a country's publications divided by the total number of publications. A relative citation index is calculated by dividing this average by the corresponding worldwide average.

As Fig. 1 illustrates, the relative citation index of the publications from Norway is climbing. The differences between the NSI standard indicator and the field-normalised indicator are rather small, with the exception of the first years of the analysed period. In this period, the non-normalised NSI indicator provides higher index values than the field-normalised indicator. This difference can be attributed to Norway's publication profile, in which the country has a higher proportion of publications in highly cited fields than the world average. Accordingly, the NSI indicator overestimates the impact of the Norwegian publications. The third line in the figure shows the citation index based on fractionalised publication counts. When this method is used, Norway's citation index is considerably lower and only reaches the world average at the end of the analysed period. This means that publications involving international co-authorship obtain significantly higher citation rates than the publications with only Norwegian contributors. At the same time, the proportion of publications involving international co-authorship has increased from 16% in 1981 to 53% in 2008.

Next, we calculated the field-normalised relative citation scores for all countries using whole and fractional counts. The analysis is based on the articles published during the period 2004–2007. Table 1 shows the results for 23 of the 209 countries investigated. The two scores are compared by subtracting the whole-count score from the fractionalised score, resulting in a difference score.

Footnotes:
[1] Certain data included herein are derived from the Science Citation Index Expanded, Social Science Citation Index and Arts & Humanities Citation Index, prepared by Thomson Reuters, Philadelphia, Pennsylvania, USA. © Thomson Reuters 2009. All rights reserved.
[2] The Swedish Research Council database licensed from Thomson Reuters does not include the ISI Conference Proceedings Index; thus the analyses do not include documents from this database.
[3] The analysis relies on a predefined subject classification provided by Thomson Reuters with 255 subject classes. The classification method involves journal-based subfield definitions, meaning that all articles in a given journal are assigned to the same subfield. This method for field delineation has various shortcomings, not to be discussed here (see e.g. Aksnes, Olsen, & Seglen, 2000).


Fig. 1. Relative citation index for Norway, 1981–2008, based on different calculation methods: NSI standard, field normalised – whole counting, and field normalised – fractionalised counting.

In correspondence with the results above, the relative citation scores based on fractionalised counting generally yield lower values than whole counting, because internationally co-authored publications tend to have higher citation rates than nationally authored publications (see e.g. Persson, Glänzel, & Danell, 2004; van Raan, 1998). The countries in Table 1 are ranked according to the figures in Column 4, which indicate the difference between scores for the selected countries. This ranked order corresponds strongly with the proportion of internationally co-authored publications (Column 6). The rank correlation (Spearman's rank order correlation) between the size of a nation in terms of publication output and its degree of international co-authorship for the 23 countries in Table 1 is −0.67 when whole counts and −0.73 when fractionalised counts are used as a proxy for country size. The negative correlations clearly indicate that larger countries in general have a lower degree of international co-authorship among their publication output. As illustrated in Fig. 2, there is a strongly negative correlation (Pearson's correlation coefficient: −0.82) between shares of internationally co-authored publications and the decline in indicator values (i.e. the difference between scores).

Table 1
Difference in field-normalised relative citation scores and ranking of selected countries based on whole and fractionalised counting schemes, 2004–2007.

Country         Whole counting   Fractionalised counting   Difference between scores   Changes in rank order   Share of international co-publications(a)
Iceland         1.56             1.15                      −0.41                       −4                      68%
Belgium         1.24             1.05                      −0.19                       −3                      55%
Denmark         1.39             1.22                      −0.17                       −1                      55%
Ireland         1.19             1.02                      −0.17                       −2                      50%
Norway          1.23             1.07                      −0.16                       −1                      52%
Switzerland     1.46             1.30                      −0.16                       0                       60%
Austria         1.16             1.01                      −0.15                       −1                      55%
Israel          1.11             0.96                      −0.15                       −1                      41%
Sweden          1.25             1.11                      −0.14                       −1                      51%
Finland         1.16             1.03                      −0.13                       0                       47%
Netherlands     1.36             1.23                      −0.13                       1                       49%
Italy           1.03             0.90                      −0.13                       0                       39%
Canada          1.20             1.08                      −0.12                       2                       43%
France          1.08             0.96                      −0.12                       1                       47%
Germany         1.14             1.03                      −0.11                       3                       45%
Australia       1.12             1.01                      −0.11                       0                       41%
Spain           0.99             0.88                      −0.11                       0                       38%
UK              1.23             1.13                      −0.10                       3                       42%
Brazil          0.68             0.58                      −0.10                       −1                      28%
Japan           0.85             0.78                      −0.07                       −1                      23%
India           0.65             0.61                      −0.04                       1                       19%
China           0.84             0.81                      −0.03                       1                       22%
USA             1.35             1.33                      −0.02                       4                       27%

Source: Nordforsk (2010a).
(a) The column shows each country's proportion of international publications of its total publication output.
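The Pearson coefficient reported above can be re-derived approximately from the published table (a sketch using SciPy; because the tabulated values are rounded to two decimals, the result will only be close to the −0.82 computed on the underlying data):

```python
from scipy.stats import pearsonr

# Table 1, Column 6: share of internationally co-authored publications (%)
share = [68, 55, 55, 50, 52, 60, 55, 41, 51, 47, 49, 39,
         43, 47, 45, 41, 38, 42, 28, 23, 19, 22, 27]
# Table 1, Column 4: difference between fractionalised and whole-count scores
diff = [-0.41, -0.19, -0.17, -0.17, -0.16, -0.16, -0.15, -0.15, -0.14, -0.13,
        -0.13, -0.13, -0.12, -0.12, -0.11, -0.11, -0.11, -0.10, -0.10, -0.07,
        -0.04, -0.03, -0.02]

r, _ = pearsonr(share, diff)
print(round(r, 2))  # strongly negative, close to the reported -0.82
```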


Fig. 2. Decline in indicator values as a function of share of international co-publications for 23 countries, 2004–2007. Indicator values correspond to the difference between scores found in Table 1, Column 4.

Consequently, smaller countries in general have a larger share of internationally co-authored publications, and countries with a high proportion of international publications show the largest decline in indicator values when going from whole to fractionalised counting. This means that countries with higher proportions of international co-authorship benefit more from a whole-count method than countries with low proportions.

Iceland is most affected by changing the calculation method. Using whole counts, the country obtains the highest citation rate of all 23 countries, with a relative citation index of 1.56. Using fractionalised counts, the index is reduced to 1.15 and Iceland falls to fifth place. The citation index of the USA, on the other hand, is only marginally reduced by using fractionalised counts. The USA ranks fifth using whole counts and first using fractionalised counts.

The general patterns are illustrated in Fig. 3. The length of the lines indicates the relative drop in indicator values from whole counts (×) to fractionalised counts. Please note that the abscissa is "country size", indicated as the natural logarithm of a country's publication count for the period investigated. Thus, the degree of backward direction of the lines (i.e. to the left), from whole counts to fractionalised counts, indicates the proportion of international co-authorship. In correspondence with the findings above, Iceland shows a drop in "country size" and a considerable drop in indicator value.

Fig. 3. Relative drop in indicator values and “country size” as a consequence of whole counting or fractionalised counting. Numbers on the y-axis are indicator values, where 1 is the world average. Log normalised publication counts are used as a proxy for “country size”.


Table 2
Stability between counting methods for large countries in relation to publication counts and relative citation indicators, 2004–2007.

Country   Difference in relative citation indicators(a)   Difference in "country size"(b)
Japan     −0.07                                            0.99
India     −0.04                                            1.00
China     −0.03                                            0.99
USA       −0.02                                            0.99

(a) Difference between whole and fractionalised scores from Table 1.
(b) Ratio of fractionalised counts to whole counts, where one means equal.

This means that Iceland has a large share of international co-publications that contribute significantly to the country's overall relative citation index. Conversely, there is almost no drop in "country size" and indicator value for the USA. This is also the case for countries such as Japan, India and China, although their relative citation indexes are considerably lower than the USA's. It is evident that publication counts and relative citation indicators are quite stable for the four above-mentioned countries, regardless of counting scheme (Table 2). However, publications from the USA – with or without international authors – generally have a very high citation index, and international co-authorship has almost no effect on the country's relative citation index.

4. Discussion

In this paper we have analysed the results of using different counting methods in the calculation of national citation indicators. The citation index of the analysed countries decreases when fractionalised counts, rather than whole counts, are used, and the decrease varies from 0.41 to 0.01 points. Although nations with high scientific impact remain highly cited, and vice versa, the ranking is altered and some countries are more affected by calculation using fractionalised counts than other countries. Nations with a high proportion of international co-authorship perform relatively better when the indicator is based on whole counts.

The question of how to handle papers involving co-authorship in general, and international co-authorship in particular, has been a recurrent issue in the literature on the use of bibliometric indicators (see e.g. Egghe, Rousseau, & van Hooydonk, 2000; Gauffriau & Larsen, 2005; Harsanyi, 1993; Narin, 1976; Price, 1981; van Hooydonk, 1997). Whole counting and fractional counting have been considered complementary methods. According to Moed (2005), the integer or whole count method can be interpreted to measure participation, while the fractional counting method measures the number of papers creditable to a country. The whole count method is the most commonly applied when constructing citation indicators; one exception is the Science and Engineering Indicators report published by the US National Science Foundation, in which articles and citations are counted on a fractional basis (see e.g. National Science Board, 2010). However, field normalisation is not carried out for either the latter indicator or the NSI indicator. This methodology is in conflict with well-established principles for the construction of citation indicators. The validity of the indicator is reduced due to the fact that average citation rates vary significantly among the disciplines and that there are differences in countries' scientific specialisation profiles (e.g. their relative activity in highly cited fields). Other instances where citations are counted on a fractional basis include analyses by the Swedish Research Council (Kronman, Gunnarsson, & Karlsson, 2010) and two recently published reports by Nordforsk (2010b, 2011).

As mentioned above, methods for calculating relative citation indexes have been a topic of recent discussion. Today almost all analyses are based on whole counts. The use of whole and fractionalised counts in the calculation of relative citation indexes is not a matter of right or wrong, as justifications may be provided for both methods. However, despite the common practice of using whole counts, there are, in our view, stronger arguments for using fractionalised counts in calculations, at least at the national level.
With citation indicators – like all indicators – the question of validity is at stake: does the indicator measure what it claims to measure? In the case of national citation indicators, the indicators are usually claimed or interpreted to measure a nation's scientific impact. However, when citation indicators are based on whole counts, an individual country is credited with the contributions of many scientists in other countries. In extreme cases, the majority of the article contributions are actually made by international researchers. For example, in the case of Iceland, the only reason that the country appears to be the world's leading country in terms of scientific impact is that Icelandic researchers cooperate extensively with researchers in other countries. In our view, it is rather counterintuitive to assign Iceland such a ranking. We believe that national citation indicators will have greater validity and better justification when fractionalised counts are used.

It should also be pointed out that the whole count method is inflationary, as internationally co-authored papers are counted fully under more than one country. A world average in which internationally co-authored papers are counted more than once would be higher than 1.0 (a small numerical sketch at the end of this section illustrates the point). From a mathematical perspective the whole count method is inconsistent; this is also emphasised by van Hooydonk (1997).

The main focus of our analyses has been on whole and simple fractionalised counts. It should be mentioned that there are other methodological alternatives that we have not considered, for example fractionalisation by giving more weight to


first authors, or proportional counting (Egghe et al., 2000; Galam, 2011; Gauffriau & Larsen, 2005; van Hooydonk, 1997). Moreover, alternative citation windows may be applied, and an alternative temporal approach may be taken – a synchronous approach (citations in a year to a given set of publications from previous years) rather than the diachronic approach (citations to a given set of publications in subsequent years) that we took.

National citation indicators have a major impact on science policy. It is therefore important that they are based on sound and consistent methodology. We have focused on indicators at the national level in this paper; however, our analyses and the issues raised concerning methodology are relevant for citation indicators applied at all levels. The effect of fractional counting is stronger for smaller units such as universities, departments, research groups or individuals than at the national level. This follows from the fact that the degree of external collaboration is inversely correlated with the size of the unit studied. In cases in which individual researchers are studied, each person will be credited a publication fraction of only 1/n, where n is the number of authors. Thus, the difference between whole and fractional counts will be much larger than it is for countries. If fractionalisation is to have any effect on the citation index, publications that involve cooperation and publications that do not must differ in citation rates (which is generally the case). Not only are publications with international co-authorship cited more frequently than purely domestic publications; publications with many authors are also generally cited more frequently than publications with fewer authors (Aksnes, 2003; Herbertz, 1995).

In this study we have shown how different methodologies for calculating national citation indicators result in different relative scores and rankings. For most countries, the citation index is lower when fractional counts, rather than whole counts, are used. This difference is generally most striking for countries with the highest proportion of internationally co-authored publications. In our view there are strong arguments in favour of deviating from current practice and using fractionalised counts, not whole counts, in the calculation of relative citation indexes.
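The numerical sketch referred to above: a toy world of three field-normalised papers (hypothetical figures), one of them co-authored by two countries. Under whole counting the co-authored, above-average paper enters the world average twice, pushing it above 1.0; under fractional attribution the world average remains 1.0.

```python
# Toy example of the inflationary effect of whole counting.
# 'ratio' is a paper's citations divided by its expected (field-normalised) rate.
papers = [
    {"countries": ["A"],      "ratio": 0.8},  # domestic, below average
    {"countries": ["B"],      "ratio": 0.8},  # domestic, below average
    {"countries": ["A", "B"], "ratio": 1.4},  # internationally co-authored, above average
]

# True world average: every paper counted once.
world = sum(p["ratio"] for p in papers) / len(papers)                    # 1.0

# Whole counting: the co-authored paper is counted once per participating country.
whole = [p["ratio"] for p in papers for _ in p["countries"]]
inflated = sum(whole) / len(whole)                                       # 1.1 > 1.0

# Fractional attribution: the co-authored paper is split 1/2 + 1/2,
# so the weighted world average is 1.0 again.
pairs = [(p["ratio"], 1 / len(p["countries"])) for p in papers for _ in p["countries"]]
fractional = sum(r * w for r, w in pairs) / sum(w for _, w in pairs)     # 1.0

print(world, inflated, fractional)
```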

References

Aksnes, D. W. (2003). A macro study of self-citation. Scientometrics, 56(2), 235–246.
Aksnes, D. W., Olsen, T. B. & Seglen, P. O. (2000). Validation of bibliometric indicators in the field of microbiology. A Norwegian case study. Scientometrics, 49(1), 7–22.
Egghe, L., Rousseau, R. & van Hooydonk, G. (2000). Methods for accrediting publications to authors or countries: Consequences for evaluation studies. Journal of the American Society for Information Science (JASIS), 51(2), 145–157.
Galam, S. (2011). Tailor based allocations for multiple authorship: A fractional gh-index. arXiv:1007.3708v2 [physics.soc-ph]. Retrieved July 14, 2011.
Garfield, E. (1979). Is citation analysis a legitimate evaluation tool? Scientometrics, 1(4), 359–375.
Gauffriau, M. & Larsen, P. O. (2005). Counting methods are decisive for rankings based on publication and citation studies. Scientometrics, 64(1), 85–93.
Glänzel, W. (2000). Science in Scandinavia: A bibliometric approach. Scientometrics, 48(2), 121–150.
Harsanyi, M. A. (1993). Multiple authors, multiple problems – Bibliometrics and the study of scholarly collaboration: A literature review. Library and Information Science Research, 15, 325–354.
Herbertz, H. (1995). Does it pay to cooperate? A bibliometric case study in molecular biology. Scientometrics, 33(1), 117–122.
van Hooydonk, G. (1997). Fractional counting of multiauthored publications: Consequences for the impact of authors. Journal of the American Society for Information Science, 48(10), 944–945.
Hurt, C. D. (1987). Conceptual citation differences in science, technology, and social sciences literature. Information Processing & Management, 23, 1–6.
Irvine, J., Martin, B., Peacock, T. & Turner, R. (1985). Charting the decline in British science. Nature, 316(6029), 587–590.
Kronman, U., Gunnarsson, M. & Karlsson, S. (2010). The bibliometric database at the Swedish Research Council – Contents, methods and indicators. Stockholm: Swedish Research Council.
Leydesdorff, L. (1988). Problems with the "measurement" of national scientific performance. Science and Public Policy, 15(3), 149–152.
Leydesdorff, L. & Bornmann, L. (2011). How fractional counting of citations affects the impact factor: Normalization in terms of differences in citation potentials among fields of science. Journal of the American Society for Information Science and Technology, 62(2), 217–229.
Leydesdorff, L. & Opthof, T. (2010). Scopus's source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations. Journal of the American Society for Information Science and Technology, 61(11), 2365–2369.
Lundberg, J. (2007). Lifting the crown – Citation z-score. Journal of Informetrics, 1(2), 145–154.
Martin, B. R. (1994). British science in the 1980s – Has the relative decline continued? Scientometrics, 29(1), 27–56.
Moed, H. F. (2005). Citation analysis in research evaluation. Dordrecht: Springer.
Moed, H. F., de Bruin, R. E. & van Leeuwen, T. N. (1995). New bibliometric tools for the assessment of national research performance – Database description, overview of indicators and first applications. Scientometrics, 33(3), 381–422.
Narin, F. (1976). Evaluative bibliometrics: The use of publication and citation analysis in the evaluation of scientific activity. Computer Horizons, Inc.
National Science Board. (2010). Science and Engineering Indicators – 2010. Arlington, VA: National Science Foundation.
Nordforsk. (2010a). International Research Cooperation in the Nordic Countries. A publication from the NORIA-net "The use of bibliometrics in research policy and evaluation activities". Ed. Gunnarsson, M. Oslo: NordForsk.
Nordforsk. (2010b). Bibliometric Research Performance Indicators for the Nordic Countries. A publication from the NORIA-net "The use of bibliometrics in research policy and evaluation activities". Ed. Schneider, J. W. Oslo: NordForsk.
Nordforsk. (2011). Comparing Research at Nordic Universities using Bibliometric Indicators. A publication from the NORIA-net "Bibliometric Indicators for the Nordic Universities". Ed. Piro, F. Oslo: NordForsk.
Opthof, T. & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance. Journal of Informetrics, 4(3), 423–430.
Persson, O., Glänzel, W. & Danell, R. (2004). Inflationary bibliometric values: The role of scientific collaboration and the need for relative indicators in evaluative studies. Scientometrics, 60(3), 421–432.
Price, D. d. S. (1981). Letter to the editor. Science, 212, 987.
van Raan, A. F. J. (1998). The influence of international collaboration on the impact of research results. Scientometrics, 42(3), 423–428.
van Raan, A. F. J. (2000). The Pandora's Box of citation analysis: Measuring scientific excellence – The last evil? In B. Cronin & H. B. Atkins (Eds.), The web of knowledge. A Festschrift in Honor of Eugene Garfield (pp. 301–319). Medford: ASIS.
Schubert, A. & Braun, T. (1986). Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics, 9(5–6), 281–291.
Schubert, A. & Braun, T. (1996). Cross-field normalization of scientometric indicators. Scientometrics, 36(3), 311–324.
Schubert, A., Glänzel, W. & Braun, T. (1988). Against absolute methods: Relative scientometric indicators and relational charts as evaluation tools. In A. F. J. van Raan (Ed.), Handbook of quantitative studies of science and technology. Amsterdam: Elsevier.


Vinkler, P. (1986). Evaluation of some methods for the relative assessment of scientific publications. Scientometrics, 10(3–4), 157–177.
Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S. & van Raan, A. F. J. (2010). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5(1), 37–47.
Zitt, M. (2010). Citing-side normalization of journal impact: A robust variant of the audience factor. Journal of Informetrics, 4(3), 392–406.
Zitt, M. & Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: The audience factor. Journal of the American Society for Information Science and Technology (JASIST), 59(11), 1856–1860.