
Guest editorial

RAE and ERA—Spot the difference

It is a given that university research underpins economic and social development. In 2010, it was reported that £3.5 billion of publicly funded research generates £45 billion a year in job creation and new products (THE, 2010). However, the benefits are not limited to industrial innovation and products. In the United Kingdom (UK), the Arts and Humanities Research Council asserted that for every £1 spent on arts and humanities research each year, the UK reaps up to £10 in immediate benefit and another £15–£20 in the long term (Owens, 2010). Similar returns on investment have been reported in Australia, where it has been estimated that every $1 AUD invested in health and medical research returns $5 AUD (National Health and Medical Research Council, 2003). From this it is clear why most governments want to support university research.

However, in a climate of shrinking resources and increasing fiscal accountability, governments need to show that they are making the best possible use of public funds; they cannot afford to waste valuable public resources on poor quality research. Naturally they want to reward excellence and so are interested in funding only centers in which internationally excellent and world-class research is taking place. How best to do this has been a challenge for those charged with distributing public research funding. In the UK, Finland and Hong Kong, research assessment exercises have taken place for many years. Though these exercises have been shown to have both intended and unintended consequences (Elton, 2000), their aim is to produce ratings of research quality so that university funding bodies can determine how best to allocate research funding.

In the UK, nursing and midwifery were late starters in the research assessment game. This is not surprising considering that most nursing and midwifery schools only entered the university sector in the mid-1990s. Before that, only a handful of university departments of nursing and midwifery submitted their research to the Research Assessment Exercise (RAE), with little success. In Hong Kong and Finland the results for nursing in such exercises were also mixed, and nursing tended to score less well compared to the established disciplines of medicine, engineering and science. The question as to whether this was a game where nursing would always be the underdog was often posed.

In the UK's 2001 RAE, nursing (which included midwifery) bucked the trend by showing large improvements compared to its previous showing in 1996. As in previous exercises, the submissions were scored by a panel of peers. However, unlike earlier exercises, the research ratings had progressed from being mainly regional and national to being mainly national and international. It looked like nursing had begun to learn to play the game and to climb up the rankings. In response to these results, the Government funded those university departments that were undertaking research spanning the continuum from national to international. In some parts of the UK, nursing received special funding for research capacity building in recognition of it having recently joined the academy.

At the next UK RAE, in 2008, panel members scored submissions across three categories: research outputs (publications), research environment and research esteem. Once again, there was an improvement in nursing's profile; possibly some of this was due to the research capacity funding allocated after the 2001 RAE. This time there was less national research submitted for assessment, and the spread was across the 'international standard', 'internationally excellent' and 'world leading' categories. There was much celebration in the profession; UK nursing had shown the world that it could undertake international and world-leading research.

Meanwhile, Australia considered adopting the UK model of research assessment because the Australian Government also wanted to focus its funding efforts on the very best. Professor Margaret Sheil, the CEO of the Australian Research Council, led the country's first research assessment scheme, called 'Excellence in Research for Australia' (ERA). However, it was different from, and in some ways better than, the UK exercise. For example, it ranked the refereed academic journals in which nurses and midwives publish their research, something the UK was always reluctant to do. In addition, it was to include an assessment of the 'impact' of research, something that was missing from the UK Research Assessment Exercises.

However, the Australian Minister for Innovation, Industry and Research, the Hon. Kim Carr, dispensed with the impact assessment, as it was too problematic to develop a metric that would be seen as valid and acceptable across a number of professions. The results of the ERA were released in 2010 and 'In the discipline review for nursing some 23 universities were assessed . . . the results for nursing and midwifery were impressive and they demonstrated that nine of the 23 research programs assessed were world class or above world class. In fact, nursing and midwifery in the FoR (Field of Research) Code 1110 was noted to be a particularly strong performer (Australian Research Council, 2011). This demonstrates that nurses and midwives in Australia are engaged in high quality research which is influencing practice and policy and making a difference to the healthcare of Australians' (Davidson et al., 2011, p. 43).

The Australian Government has announced that the next ERA will occur in 2012, and there is speculation that its outcomes will influence research funding policy and the structure of the university sector. It is likely to be performance based and could lead to a situation where only a select number of high-performing universities will be funded to undertake research, including supervision of research students, while the others could be relegated to 'teaching only' status.

The next UK exercise will differ from previous ones. The UK Government wants a greater focus on the very best research. With this in mind, it will only fund research that is 'internationally excellent' or 'world leading'. Research that is of a 'national' standard or 'internationally recognised' will not be funded. The Government has been accused of wishing to concentrate research funding in a small number of elite universities, with the inevitable result that the others will have to concentrate on quality teaching. This has not been well received by the majority of UK universities.

The other main difference is that the impact of research will be recognized and rewarded. This focus is in line with the design of the original Australian ERA and has been driven by the UK Treasury. This should not come as a surprise: given that billions of pounds of public money are being allocated for research, the Government believes it is not unreasonable that researchers should be able to show the benefits that their research brings to the economy, to society and culture, and to quality of life. It has been decided that 20% of the final funding (total funding is £2 billion per year, so roughly £400 million) will be allocated to the impact aspect of universities' submissions. This is a major amount of funding, and impact has been an unpopular addition to the exercise for many researchers, who believe that research, particularly blue-sky studies, may generate impact that researchers cannot predict at the outset of their work. Furthermore, the impact of some research is not evident for many years, meaning that it can be very difficult to demonstrate clear impact in the short term. There is little doubt that in making determinations about the impact of research it is important that any measure is valid, reliable and able to accurately reflect impact in its many forms, even over the longer term.

It is undeniable that research data drive service and quality improvement and have a major influence in shaping public policy. Research assessment exercises are therefore important in achieving focus and deriving a strategic direction. Impact, and how it might be demonstrated, has become a concern for researchers and has led to discussion in the literature about how researchers might measure and demonstrate the impact of their research (Canavan et al., 2009). The need to determine research impact is of particular importance for practice-based disciplines such as nursing and midwifery. Deriving a measure of impact on health outcomes and on professional practice will be of particular importance in assessing the value and worth of research and scholarship.

In the context of a global economic recession, money from the public purse is being squeezed, and industry contributions and philanthropy are also shrinking. Yet, despite this climate of shrinking resources, our health and social systems continue to require data-based solutions to meet contemporary challenges. This requires a cadre of health professionals who are well prepared not only to undertake research but also to be effective consumers of research outputs. This nexus between teaching and research is most readily achieved in faculties with active researchers; starving teaching institutions of research funding is likely to have a negative effect on teaching outcomes. There is also a realization that university research can help lift some countries out of financial difficulty by generating economic recovery, as seen in the stimulus package provided by the Obama administration in the United States (Tanne, 2009).

It is indisputable that university research has to be funded but, as we have highlighted earlier, the question faced by governments in many countries is where they should focus their funding efforts to get the best 'bang for their buck'. This is particularly the case for countries such as Australia and the United Kingdom, where universities derive considerable taxpayer funding for infrastructure and support. Results from research assessment exercises help governments with this decision. However, over the decades this has resulted in more and more money being focused on only the very best research. The end result will be the establishment of a relatively small number of elite, research-intensive institutions, potentially in affluent geographical locations. We can expect to see large numbers of non-research technological universities and teaching-only universities, potentially in areas of social and economic disadvantage. This will undermine the philosophy that research can address the needs of local communities, that research underpins teaching and that teaching gives voice to research findings. The view that universities are environments where knowledge is generated, challenged, tested and taught will likely crumble. This is a sad and dismal outcome for nursing and midwifery, which have made considerable achievements in their short time in the academy.

Therefore it is important that we have a metric for research achievement in nursing and midwifery which allows us to monitor our progress, benchmark against comparable disciplines and institutions, and strive for excellence. The pursuit of excellence should not come at the expense of quality, equity and opportunity. Striving to develop and refine these metrics should be an important focus, to ensure that nurses and midwives not only are recognized for their valuable contribution to science and to the creation of knowledge that enhances people's lives, but also receive their fair share of funding and resources.

Conflict of interest

There are no conflicts of interest.

References

Canavan, J., Gillen, A., Shaw, E., 2009. Measuring research impact: developing practical and cost-effective approaches. Evidence & Policy: A Journal of Research, Debate and Practice 5 (2), 167–177.

Davidson, P.M., Homer, C.S.E., Duffield, C., Daly, J., 2011. A moment in history and a time for celebration: the performance of nursing and midwifery in Excellence in Research for Australia. Collegian 18, 43–44.

Elton, L., 2000. The UK Research Assessment Exercise: unintended consequences. Higher Education Quarterly 54, 274–283.

National Health and Medical Research Council (NHMRC), 2003. NHMRC performance measurement report 2000–2003. A report on the performance of the National Health and Medical Research Council. ISBN Print: 1864961910. ISBN Online: 186496197X.

Owens, B., November 2010. Research and the economy. Research Fortnight 6.

Tanne, J.H., 2009. Obama's stimulus package includes funds for public health, nutrition, and effectiveness research. BMJ 338, b794.

THE, 2010. Research and Its Impact. TSL Education, London, vol. 1940, p. 8.

Hugh McKenna PhD, RN, CBE, FRCN*
University of Ulster, Ireland

John Daly PhD, RN, FRCNA
Patricia Davidson PhD, RN, FRCNA
Christine Duffield PhD, RN
Debra Jackson PhD, RN
Faculty of Nursing, Midwifery & Health, University of Technology, Sydney, Australia

*Corresponding author. Tel.: +353 02870324491.
E-mail address: [email protected]

Received 25 November 2011