Friday 18 September 2009

Playing the game: the impact of research assessment

Yesterday I was sent this report, produced by the Research Information Network in conjunction with the Joint Information Systems Committee, and entitled "Communicating knowledge: How and why UK researchers publish and disseminate their findings". The report used a literature review, bibliometric analysis, an online survey of UK researchers, and focus groups and interviews with researchers to look at how and why researchers put information into the public domain. Being an early-career researcher, I'm interested in this sort of thing: I know why I'm publishing and disseminating information, but it's interesting to see why everyone else is doing it. It's also interesting to see the extent to which research assessment in the UK - until recently the Research Assessment Exercise (RAE) and in future the mysterious Research Excellence Framework (REF) - influences the decisions that researchers make. What particularly struck me about the report was the number of times researchers talked about "playing games": the framework of research assessment is seen as a game to be played, with the needs of research subordinated to the need to put in a good performance. This has important implications for the REF, in which bibliometric indicators are likely to play an important role.

The key point of the report is that there is some confusion among researchers about what exactly it is they're supposed to be doing. There are conflicting and unclear messages from different bodies about what sort of research contributions are valued. The perception is that the only thing that really counts in terms of research assessment is peer-reviewed journal articles. Other contributions, such as conference proceedings, books, book chapters, monographs, government reports and so on, are not valued. As a result, the proportion of journal articles compared to other outputs increased significantly between 2003 and 2008. A couple of comments by researchers quoted in the report (p.15):

[There is] much more emphasis on peer reviewed journals … Conferences, working papers and book chapters are pretty much a waste of time … Books and monographs are worth concentrating on if they help one demarcate a particular piece of intellectual territory.


There is a strong disincentive to publish edited works and chapters in edited works, even though these are actually widely used by researchers and educators in my field, and by our students.


This is certainly the impression I get from my own field. In fact, I have been advised by senior colleagues to target high-impact journals, rather than, for example, special publications. I have never received any formal guidance on what research outputs are expected of me, but the prevailing atmosphere gives the impression that it's all about journal articles. After publishing a couple of things from my PhD, it took another three years to publish anything from my first post-doc. I worried about that: it seemed that the numerous conferences and internal company reports and presentations I produced over that time counted for nothing career-wise.

The report makes it clear that, in the case of the RAE, it is perceptions rather than reality that are causing the problem: the RAE rules meant that most outputs were admissible, and all would be treated equally. But it's perceptions that drive the way researchers respond to research assessment. Clearer guidance is needed.

An interesting point brought up by the report is how, when a journal article has more than one author, the list of authors is arranged. In my field, authors are typically listed in order of contribution, so I was surprised to find that this is by no means always the case. In some fields, especially in the humanities and social sciences, authors are commonly listed alphabetically. In some cases the leader of the research group is listed first; in others, last. And there are various mixtures of listing by contribution, grant-holding and alphabetical order. There is even a significant minority where papers based on work done by students have the student's supervisor as first author! This means that there is no straightforward way of apportioning credit to the multiple authors of a paper, something that David Colquhoun has already pointed out. This is a huge problem for any system of assessment based on bibliometrics.
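To make the problem concrete, here is a toy sketch of two position-based crediting schemes applied to the same paper under two different authorship conventions. Everything in it - the weighting functions, the author names - is my own illustrative assumption, not anything proposed in the report:

def fractional_credit(authors):
    """Equal split: each of the n authors gets 1/n of the credit."""
    n = len(authors)
    return {a: 1 / n for a in authors}

def harmonic_credit(authors):
    """Position-weighted split: the i-th author's share is proportional
    to 1/i, which assumes that author order reflects contribution."""
    total = sum(1 / i for i in range(1, len(authors) + 1))
    return {a: (1 / (i + 1)) / total for i, a in enumerate(authors)}

# The same paper under two field conventions:
by_contribution = ["Student", "Postdoc", "GroupLeader"]  # order = contribution
alphabetical = sorted(by_contribution)                   # humanities-style

for scheme in (fractional_credit, harmonic_credit):
    print(scheme.__name__)
    print("  contribution order:", scheme(by_contribution))
    print("  alphabetical order:", scheme(alphabetical))

Under the harmonic scheme the group leader's share triples simply because "GroupLeader" sorts first alphabetically; the fractional scheme is robust to ordering, but then a token hundredth author counts the same as the person who did the work. Neither behaviour is what an assessor wants, which is essentially Colquhoun's point.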

The report also examines how researchers cite the work of other people. Other researchers' work should be cited because it forms part of the background of the new research, because it supports a statement made in the new paper, or as part of a discussion of how the new paper fits into the context of previous research. Crucially, this includes citing work with which the authors disagree, or that is refuted or cast into doubt in the light of the new work (p.30):

Citing somebody often indicates opposition / disagreement, rather than esteem and I am as likely to cite and critique work that I do not rate highly as work I value.

So any system that relies on bibliometric indicators is likely to reward controversial science as much as good science (not that those categories are mutually exclusive, but they don't completely overlap either).

Researchers are perfectly clear that a system based on bibliometrics will cause them to change their publication behaviour: 22% will try to produce more publications, 33% will submit more work to high-status journals, 38% will cite their collaborators' work more often, while 6% will cite their competitors' work less often. This will lead to more journal articles of poorer quality, the decline of perfectly good journals that have low "impact", and corruption in citation behaviour. In general, researchers aren't daft, and they've clearly identified the incentives that would be created by such a system.

The report presents a worrying picture of research, and scientific literature, distorted by the perverse incentives created by poorly thought-out and opaque forms of research assessment. It can be argued that scientists who allow their behaviour to be distorted by these incentives are acting unprofessionally: I wouldn't disagree. But for individuals playing the game, the stakes are high. Perhaps we ought to be thinking about whether research is the place for playing games. It surely can't lead to good science.

9 comments:

Dr Aust said...

The gaming is "multi-level" - apart from the obvious selection pressure imposed on researchers' behaviour, there have always been pressure and diktats coming from the University/Faculty/Department.

A vast amount of effort in most Univs has gone, in the last few RAEs, into trying to "read the tea-leaves" on how panels in different disciplines would view author orders, shared works etc.

In my Faculty we went through several distinct stages in the run-up to RAE2008:

Starting rules:

(i) Peer-reviewed research papers only, no review articles, no book chapters.

(ii) No-one to submit a paper someone else in their UoA (Unit of Assessment) was submitting, even if it was a genuinely "shared" work. (Cue arguments about who got to submit what)

But:

(iii) OK to submit a paper if it was also being submitted by someone (in the same Univ) in a DIFFERENT UoA (which could mean someone in a different Faculty or Department, but could also mean your neighbour who was being "returned" somewhere else for strategic reasons).

(iv) - c. mid-2006 - strategy changed to:

Only return papers which can be "badged" as NON-joint projects with you as boss / PI: "It must be seen as your work alone". Fewer than four such papers available to submit was anticipated to mean NOT being returned.

(v) - c. early-mid-2007 - strategy changed again to:

Joint papers (as in iii above) now OK as long as you can say in a sentence what "distinctive thing" your lab did and make it sound believable.

This meant some people, including me, went through a cycle of "you'll be in -- oh, no you won't -- oh, yes you will".

Riotous fun, I think you'll agree.

As I said at the top, this sort of process would not be unique to the place where I work. It is pretty much a certainty that most science Faculties in the research-intensive Univs will have done some version of all this gaming / jockeying / strategizing - probably worked out by very senior academics spending many hrs in meetings and working their contacts, and then with dress rehearsal dry-run exercises (at least one and sometimes two) for all "returnable" staff.

Paul Wilson said...

Thanks for this, Dr Aust.

The more I learn about the way research is assessed (and as a mere research associate, no-one goes out of their way to tell you), the more it seems like a massive counter-productive mess.

There must be a better way.

Dr Aust said...

I think the reason people don't tell you is that we don't want to disillusion younger scientists with the more depressing realities of the modern scientific system!!

A lot of people think the research assessment process has nothing to do with doing actual science (or actively hinders it - David Colquhoun is always very outspoken about this), in the same way many doctors think the endless paper-chasing, targets, PR bullshit and league tables in the NHS have nothing to do with patient care.

Anyway, it is our problem, and may one day be yours... but in the meantime you should probably ignore it for as long as you possibly can.

Paul Wilson said...

I'm not sure whether I entirely agree with David Colquhoun: given that a big chunk of research funding comes from the taxpayer, there has to be some way of making sure that it's spent properly. But it seems that the current system is both massively wasteful of time and resources, and leads to perverse incentives.

I'm glad that my bosses seem perfectly happy to leave me to get on with my work, so I can ignore the research assessment minefield to an extent. But as a researcher, I think research is important, and I don't think we should be "playing games" with it, so I want to be aware of what is happening.

Well, if it all goes tits up I can probably join the oil industry...

Dr Aust said...

Re. "making sure [taxpayers' money] is spent properly"

The old-style rule of thumb with responsive mode grants from the Research Councils or charities used to be, in effect, that the judgement of "what did they do with the money?" amounted to whether you had used the cash you had been given to do decent research. This would be judged by whether you had published any quality papers.

At the end of a grant you always had to write a report. These were assumed to go in a file to be dug out when you applied for another grant from the same Research Council. If you had done little or nothing with the last lot of dosh, your chances of scoring more would be pretty poor. Again, a common rule of thumb, assuming papers in reasonable journals, was roughly one decent original research paper per grant-funded postdoc associate per year, or perhaps per eighteen months. If you had a three year project grant, and published fewer than two papers, people might wonder what you had done with the cash.

Of course, this depended on journal quality. One paper from a 3-yr grant might be OK if it was a seminal piece of work in a major general journal, for instance.

RAE/REF is a bit different as it assesses whole departments (historically), and individuals only much more recently. The problem is that in most disciplines RAE has revealed nothing about the UK University pecking order (and the money earned goes out in a lump to Universities; they choose where they then put it) that any academic couldn't have guessed in about 5 minutes. Thus Oxbridge top the research leagues, UCL and Imperial are there or thereabouts, Kings were 5th but are now battling the merged M'cr and perhaps Edinburgh for that place, the Russell Group Univs and some of the 60s research intensives (e.g. Warwick) are next, etc etc. I really seriously doubt that the proportional share (i.e. % of the whole pie) of most Univs has changed much, if at all, over the 25 years of RAEs, if you take a "long view".

Following which one might question whether the whole exercise is worth even a fraction of what it costs to run.

PS I nearly mentioned the oil industry as a possible destination for you in one of my responses on the other thread, as I always assume they are the geology researcher's equivalent of Big Pharma...! One of my mates from school who did geology became a rig geologist back in the mid 80s. He got pretty well paid, although flying on WW2 surplus DC-3s (Dakotas), and getting held up by gun-toting rebels in West African war zones, were less pleasant aspects of his job.

Paul Wilson said...

Yes, checking that researchers have actually done some research with their grant seems to be a sensible and relatively cheap way of ensuring that the money is well spent.

Just saw this, via David Colquhoun's miniblog: apparently, NSERC (the Natural Sciences and Engineering Research Council in Canada) would save money by ditching their peer review system and just giving everyone a $30,000 base grant.

I believe that NSERC works in a similar way to NERC in the UK, though I'm sure there are differences. Still, food for thought...

Dr Aust said...

Also via David's miniblog, this rather gloomy (and indeed outraged) PLoS Biology article by Peter Lawrence is worth a look.

Some discussion of both articles can be found on this thread over at Nature Network.

Paul Wilson said...

That's a great article. You can't beat a heading like "The granting system turns young scientists into bureaucrats and then betrays them", plus Kafka and Hesse references, for sheer quality entertainment.

Greg Kochanski said...

You need to make the distinction between "The taxpayers have a right to have their money well spent" and "We need a massive exercise that pretends to prove that the money is well spent." The former is desirable, the latter is achievable. Some people (like me) would contend that the massive exercise can actually prevent the money from being well spent.

Of course people "game" the system. Except that you could equally well call it "responding to poorly designed incentives." How can anyone complain if they introduce incentives into a system and then find that they are followed?
