Enough with the stupid heuristics already!

Today’s post is inspired by another nonsensical proposal that made the rounds and that reminded me why I invented the Devil’s Neuroscientist back in the day (Don’t worry, that old demon won’t make a comeback…). So apparently RetractionWatch created a database allowing you to search for an author’s name to list any retractions or corrections of their publications*. Something called the Ochsner Journal then declared they would use this to scan “every submitting author’s name to ensure that no author published in the Journal has had a paper retracted.” I don’t want to dwell on this abject nonsense – you can read about this in this Twitter thread. Instead I want to talk about the wider mentality that I believe underlies such ideas.

In my view, using retractions as a stigma to effectively excommunicate any researcher from “science” forever is just another manifestation of a rather pervasive and counter-productive tendency of trying to reduce everything in academia to simple metrics and heuristics. Good science should be trustworthy, robust, careful, transparent, and objective. You cannot measure these things with a number.

Perhaps it is unsurprising that quantitative scientists want to find ways to quantify such things. After all, science is the endeavour to reveal regularities in our observations to explain the variance of the natural world and thus reduce the complexity in our understanding of it. There is nothing wrong with meta-science and trying to derive models of how science – and scientists – work. But please don’t pretend that these models are anywhere near good enough to actually govern all of academia.

Few people you meet still believe that the Impact Factor of a journal tells you much about the quality of a given publication in it. Neither does an h-index or citation count tell us anything about the importance or “impact” of somebody’s research, certainly not without looking at this relative to the specific field of science they operate in. The rate with which someone’s findings replicate doesn’t tell you anything about how great a scientist they are. And you certainly won’t learn anything about the integrity and ability of a researcher – and their right to publish in your journal – when all you have to go on is that they were an author on one retracted study.

Reducing people’s careers and scientific knowledge to a few stats is lazy at best. But it is also downright dangerous. As long as such metrics are used to make consequential real-life decisions, people are incentivised to game them. Nowhere can this be seen better than in the dubious tricks some journals use to inflate their Impact Factor or the occasional dodgy self-citation scandal. Yes, in the most severe cases these are questionable, possibly even fraudulent, practices – but there is a much greater grey area here. What do you think would happen if we adopted the policy that only researchers with high replicability ratings get put up for promotion? Do you honestly think this would encourage scientists to do better science rather than merely safer, boring science?

This argument is sometimes used as a defence of the status quo and a reason why we shouldn’t change the way science is done. Don’t be fooled by that. We should reward good and careful science. We totally should give credit to people who preregister their experiments, self-replicate their findings, test the robustness of their methods, and go the extra mile to ensure their studies are solid. We should appreciate hypotheses based on clever, creative, and/or unique insights. We should also give credit to people for admitting when they are wrong – otherwise why should anyone seek the truth?

The point is, you cannot do any of that with a simple number in your CV. Neither can you do that by looking at retractions or failures to replicate as a plague mark on someone’s career. I’m sorry to break it to you, but the only way to assess the quality of some piece of research, or to understand anything about the scientist behind it, is to read their work closely and interpret it in the appropriate context. That takes time and effort. Often it also necessitates talking to them because no matter how clever you think you are, you will not understand everything they wrote, just as not everybody will comprehend the gibberish you write. If you believe a method is inadequate, by all means criticise it. Look at the raw data and the analysis code. Challenge interpretations you disagree with. Take nobody’s word for granted and all that…

But you can shove your metrics where the sun don’t shine.

2 thoughts on “Enough with the stupid heuristics already!”

  1. *) Sidenote: In principle, I have nothing against a database of retractions and corrections. I can see how this could be useful, although it sounds like this current database conflates the two a lot and at present also omits a lot of corrections. But in my view it is much more important that retracted/corrected studies are flagged at the source (on the journal website, on search engines, etc.) because in the first instance that is what we are actually interested in.


  2. *Correction*

    After some feedback from RetractionWatch, it turns out that only corrections related to retractions should be included in the database as seen in the User Guide of the database (#11): https://retractionwatch.com/retraction-watch-database-user-guide/
    There is a large number of corrections in the literature and listing them all would be a lot of effort (especially if the entries require checks by a human being). It is also possible to search the database for retractions specifically. So the database doesn’t systematically conflate corrections and retractions.

    Anyway, my main point still stands: using a database like this to decide whether or not an author can publish in your journal is deeply flawed. Authors should be encouraged to admit their mistakes and retract if needed, so that the scientific record can be corrected. Banning people who might have co-authored a retracted article from publishing, without any regard for the reason for the retraction, is not only draconian but also actively incentivises people to never retract. This can only be bad for science.
