
Friday, 18 September 2009

Playing the game: the impact of research assessment

Yesterday I was sent this report, produced by the Research Information Network in conjunction with the Joint Information Systems Committee and entitled "Communicating knowledge: How and why UK researchers publish and disseminate their findings". The report used a literature review, bibliometric analysis, an online survey of UK researchers, and focus groups or interviews with researchers to look at how and why researchers put information into the public domain. Being an early-career researcher, I'm interested in this sort of thing: I know why I'm publishing and disseminating information, but it's interesting to see why everyone else is doing it. It's also interesting to see the extent to which research assessment in the UK - until recently the Research Assessment Exercise (RAE) and in future the mysterious Research Excellence Framework (REF) - influences the decisions that researchers make. What particularly struck me about the report was the number of times researchers talked about "playing games": the framework of research assessment is seen as a game to be played, with the needs of research subordinated to the need to put in a good performance. This has important implications for the REF, in which bibliometric indicators are likely to play an important role.

The key point of the report is that there is some confusion among researchers about what exactly it is they're supposed to be doing. There are conflicting and unclear messages from different bodies about what sorts of research contributions are valued. The perception is that the only thing that really counts in terms of research assessment is peer-reviewed journal articles. Other contributions, such as conference proceedings, books, book chapters, monographs, government reports and so on, are not valued. As a result, the proportion of journal articles relative to other outputs increased significantly between 2003 and 2008. A couple of comments by researchers quoted in the report (p.15):

[There is] much more emphasis on peer reviewed journals …Conferences, working papers and book chapters are pretty much a waste of time … Books and monographs are worth concentrating on if they help one demarcate a particular piece of intellectual territory.


There is a strong disincentive to publish edited works and chapters in edited works, even though these are actually widely used by researchers and educators in my field, and by our students.


This is certainly the impression I get from my own field. In fact, I have been advised by senior colleagues to target high-impact journals rather than, for example, special publications. I have never received any formal guidance on what research outputs are expected of me, but the prevailing atmosphere gives the impression that it's all about journal articles. After publishing a couple of things from my PhD, it took me another three years to publish anything from my first post-doc. I worried about that: it seemed that the numerous conference contributions, internal company reports and presentations I produced over that time counted for nothing career-wise.

The report makes it clear that, in the case of the RAE, it is perceptions rather than reality that cause the problem: the RAE rules meant that most outputs were admissible, and that all would be treated equally. But it's perceptions that drive the way researchers respond to research assessment. Clearer guidance is needed.

An interesting point brought up by the report is how, when there is more than one author for a journal article, the list of authors is arranged. In my field, authors are typically listed in order of contribution, so I was surprised to find that this is by no means always the case. In some fields, especially in the humanities and social sciences, authors are commonly listed alphabetically. In some cases, the leader of the research group is listed first, in other cases last. And there are various mixtures of listing by contribution, grant-holding and alphabetic order. There is even a significant minority where papers based on work done by students have the student's supervisor as first author! This means that there is no straightforward way of apportioning credit to multiple authors of a paper, something that David Colquhoun has already pointed out. This is a huge problem for any system of assessment based on bibliometrics.
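To see why this matters for bibliometrics, here is a minimal sketch in Python comparing two plausible but incompatible ways of apportioning credit: an equal split and a position-weighted split. The author names and both weighting schemes are illustrative assumptions on my part; neither is prescribed by the RAE, the REF or the report.

```python
# Illustrative only: two hypothetical schemes for splitting credit among
# co-authors. Neither is an official RAE/REF method.

def equal_credit(authors):
    """Split credit equally, ignoring author order."""
    share = 1.0 / len(authors)
    return {author: share for author in authors}

def positional_credit(authors):
    """Weight credit by position (1, 1/2, 1/3, ...), then normalise."""
    weights = [1.0 / (i + 1) for i in range(len(authors))]
    total = sum(weights)
    return {author: w / total for author, w in zip(authors, weights)}

authors = ["Smith", "Jones", "Patel"]   # hypothetical author list
print(equal_credit(authors))            # each gets ~0.33
print(positional_credit(authors))       # ~0.55, ~0.27, ~0.18

# If "Smith" is listed first only because they are the student's supervisor,
# or because the field orders authors alphabetically, the position-weighted
# scheme credits the wrong person - and the two schemes disagree badly.
```

The point is not which scheme is better, but that the answer depends entirely on conventions that vary from field to field, which is exactly the problem for bibliometric assessment.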

The report also examines how researchers cite the work of other people. Other researchers' work should be cited because it forms part of the background of the new research, because it supports a statement made in the new paper, or as part of a discussion of how the new paper fits into the context of previous research. Crucially, this includes citing work with which the authors disagree, or that is refuted or cast into doubt in the light of the new work (p.30):

Citing somebody often indicates opposition / disagreement, rather than esteem and I am as likely to cite and critique work that I do not rate highly as work I value.

So any system that relies on bibliometric indicators is likely to reward controversial science as much as good science (not that those categories are mutually exclusive, but they don't completely overlap either).

Researchers are perfectly clear that a system based on bibliometrics will cause them to change their publication behaviour: 22% will try to produce more publications, 33% will submit more work to high-status journals, 38% will cite their collaborators' work more often, while 6% will cite their competitors' work less often. This will lead to more journal articles of poorer quality, the decline of perfectly good journals that happen to have low "impact", and the corruption of citation behaviour. In general, researchers aren't daft, and they've clearly identified the incentives that such a system would create.

The report presents a worrying picture of research, and scientific literature, distorted by the perverse incentives created by poorly thought-out and opaque forms of research assessment. It can be argued that scientists who allow their behaviour to be distorted by these incentives are acting unprofessionally: I wouldn't disagree. But for individuals playing the game, the stakes are high. Perhaps we ought to be thinking about whether research is the place for playing games. It surely can't lead to good science.

Thursday, 19 March 2009

What is the Russell Group for?

The Russell Group contains the 20 major research-intensive universities in the UK. The University of Liverpool is a member of the group, and has recently made the news by earmarking its departments of Politics and Communication, Statistics, and Philosophy for closure. The reason is that those departments are seen as having underperformed in the 2008 RAE (Research Assessment Exercise).

In the RAE, departments are ranked by the proportion of research they have in five different categories, as follows:

4*: Quality that is world-leading in terms of originality, significance and rigour.

3*: Quality that is internationally excellent in terms of originality, significance and rigour but which nonetheless falls short of the highest standards of excellence.

2*: Quality that is recognised internationally in terms of originality, significance and rigour.

1*: Quality that is recognised nationally in terms of originality, significance and rigour.

Unclassified: Quality that falls below the standard of nationally recognised work. Or work which does not meet the published definition of research for the purposes of this assessment.


The three departments faced with closure had no research ranked in category 4*. According to Times Higher Education, "The university has questioned whether this is “acceptable” for a member of the Russell Group of 20 research-led institutions".

So, how did the threatened departments do overall? Here's their breakdown from the 2008 RAE (source):


Statistics: 4*, 0%; 3*, 35%; 2*, 50%; 1*, 15%; UC, 0%.

Politics and Communication: 4*, 0%; 3*, 15%; 2*, 55%; 1*, 25%; UC, 5%.

Philosophy: 4*, 0%; 3*, 25%; 2*, 60%; 1*, 15%; UC, 0%.


These results are surely not disastrously bad. In all cases, the vast majority of research is ranked at 3* and 2* levels: that is, it is considered to be internationally excellent or internationally recognised. Is this really such a poor performance that it requires the closure of the departments?
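For a rough sense of scale, quality profiles like these were commonly boiled down to a single grade point average in post-RAE league tables, weighting 4* work as 4, 3* as 3 and so on. Here is a minimal sketch of that calculation for the three profiles above; note that this flat 4/3/2/1/0 weighting is a league-table convention, not the funding formula, which (as I understand it) weighted the top grades more heavily.

```python
# Sketch: grade point average (GPA) of an RAE 2008 quality profile, using the
# simple 4/3/2/1/0 weighting seen in league tables (not the funding formula).

def rae_gpa(profile):
    """profile maps a star rating to the percentage of research at that level."""
    weights = {"4*": 4, "3*": 3, "2*": 2, "1*": 1, "UC": 0}
    return sum(weights[grade] * pct for grade, pct in profile.items()) / 100.0

profiles = {
    "Statistics":                 {"4*": 0, "3*": 35, "2*": 50, "1*": 15, "UC": 0},
    "Politics and Communication": {"4*": 0, "3*": 15, "2*": 55, "1*": 25, "UC": 5},
    "Philosophy":                 {"4*": 0, "3*": 25, "2*": 60, "1*": 15, "UC": 0},
}

for dept, profile in profiles.items():
    print(f"{dept}: GPA {rae_gpa(profile):.2f}")
# Statistics: 2.20, Politics and Communication: 1.80, Philosophy: 2.10
```

On that scale, a department whose work sat entirely at the "recognised internationally" (2*) level would score 2.0, so these profiles sit around or above that mark: modest, perhaps, but hardly grounds for closure.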

The threat of closure of these departments raises the question of what a university is actually for. If it exists only to attract as much research funding as possible, then closure is a perfectly sensible action. But if you consider the university as a community of scholars, with everyone (from undergraduates to professors) learning from each other, then closing these departments will narrow the university experience for everyone. Is that really what the University of Liverpool wants to achieve? And is that what the Russell Group is supposed to be about?

Friday, 14 December 2007

Labour 'flexibility' in our universities

I went to a meeting of the University and College Union the other day, about staff on fixed-term contracts at the university. I'm on a fixed-term contract myself, which will shortly be ending, so this is of more than academic interest to me. I was astounded by the statistics presented. Apparently, in 2005/6 the proportion of research staff at UK universities who were on fixed-term contracts was 85%! And at Manchester, it was 94%! Although the University of Manchester is supposed to be committed to reducing the number of fixed-term contracts, the proportion has only decreased to 94%, from 96% in 2004/5. Meanwhile, the proportion of teaching staff on fixed-term contracts has soared from 46% to 63%. The proportion of staff responsible for both teaching and research who are on fixed-term contracts has declined significantly, from 24% in 2004/5 to 12% in 2005/6.

This raises a number of problems. Staff who are both teachers and researchers (i.e. they are lecturers, readers or professors) are generally responsible for heading up research groups, chasing up funding, and supervising research. They're the brains of the operation, if you like. But the backbone, the people who are responsible for most of the actual research, are post-docs on short-term contracts and graduate students. Research is essentially being done by temps.

Since they have no long-term job security, post-docs tend to leave as soon as they have an opportunity to do so, especially as only around 20% of post-docs will ever get a permanent position as a lecturer. Essentially, you have the option of sticking around on short-term contracts until you become unemployable, or going off to get a proper job. People tend to stay longer than perhaps they should, because research is what they want to do, and they'll put up with a lot to keep doing it. There are post-docs at Manchester who have been here for over 10 years, all on short-term contracts.

Obviously, this is stressful and demoralising for individual post-docs. But it also can't be good for research. Good people leave, and those who stay are looking over their shoulder. And after training for many years to become a post-doc, at the bottom rung of the academic ladder, you can't blame people for thinking that they deserve better.