Category Archives: scientific publishing

What is a “publication”?

I was originally thinking of writing a long blog post discussing this but it is hard to type verbose treatises like that from inside my gently swaying hammock. So y’all will be much relieved to hear that I’ll spare you that post. Instead I’ll just post the results of the recent Twitter poll I ran, which is obviously enormously representative of the 336 people who voted. Whatever we’re going to make of this, I think it is obvious that there remains great scepticism about treating preprints the same as publications. I am also puzzled by what the hell is wrong with the 8% who voted for the third option*. Do these people not put in-press articles on their CVs?

[Screenshot of the Twitter poll results, 19 January 2019]

*) Weirdly, I could have sworn that when the poll originally closed this percentage was 9%. Somehow it was corrected downwards afterward. Should this be possible?

An open review of open reviewing

a.k.a. Love v Kriegeskorte

An interesting little spat played out on science social media today. It began with a blog post by Niko Kriegeskorte, in which he posted a peer review he had conducted on a manuscript by Brad Love. The manuscript in question is publicly available as a preprint. I don’t want to go into too much detail here (you can read all of that up for yourself) but Brad took issue both with the fact that Niko posted the review of their manuscript after it was rejected by a journal and with the manner in which he did so. A lot of related discussion also took place via Twitter (see links in Brad’s post) and on Facebook.

I must say, in these days of hyper-polarisation in everyday political and social discourse, I find this debate really refreshing. It is easy to feel outrage and disagreement with a president putting children in cages or holding a whole bloody country hostage over a temper tantrum – although the fact that there are apparently still far too many people who do not feel outraged about these things is certainly a pretty damning indictment of the moral bankruptcy of the human race… Anyway, things are far more philosophically challenging when there is a genuine and somewhat acrimonious disagreement between two sides you respect equally. For the record, Brad is a former colleague of mine from my London days whose work I have the utmost respect for. Niko has for years been a key player in multivariate and representational analysis and we have collaborated in the past.

Whatever my personal relationship to these people, I can certainly see both points of view in this argument. Brad seems to object mostly to the fact that Niko posted the reviews on his blog without their consent. He regards this as a “self-serving” act. In contrast, Niko regards this as a substantial part of open review. His justification for posting this review publicly is that the manuscript is already public anyway, and that this invites public commentary. I don’t think that Brad particularly objects to public commentary, but he sees a conflict of interest in using a personal blog as a venue for this, especially since these were the peer reviews Niko wrote for a journal, not comments on the preprint server. Moreover, since these were reviews that led to the paper being rejected by the journal, he and his coauthors had no opportunity to reply to Niko’s reviews.

This is a tough nut to crack. But this is precisely the kind of discussion we need to have for making scientific publishing and peer review more transparent. For several years now I have argued that peer reviews should be public (even if the reviewers’ names are redacted). I believe reviewers’ comments and editorial decisions should be transparent. I’ve heard “How did this get accepted for publication?” in journal clubs just too many times. Show the world why! Not only is it generally more open but it will also make it fairer when there are challenges to the validity of an editorial decision, including dodgy decisions to retract studies.

That said, Brad certainly has a point that ethically this openness requires up-front consent from both parties. The way he sees it, he and his coauthors did not consent to publishing these journal reviews (which, in the present system, are still behind closed doors). Niko’s view is clearly that because the preprint is public, consent is implicit and this is fair game. Brad’s counterargument to this is that any comments on the preprint should be made directly on the preprint. This is separate from any journal review process and would allow the authors to consider the comments and decide if and how to respond to them. So, who is right here?

What this really comes down to is a philosophical worldview as to how openness should work and how open it should be. In a liberal society, the right to free expression certainly permits a person to post their opinions online, within certain constraints to protect people from libel, defamation, or threats to their safety. Some journals make reviewers sign a confidentiality agreement about reviews. If this was the case here, a post like this would constitute a violation of that agreement, although I am unaware of any case where this has ever been enforced. Besides, even if reviewers couldn’t publicly post their reviews and discuss the peer review of a manuscript, this would certainly not stop them from making similar comments at conferences, seminars – or on public preprints. In that regard, in my judgement Niko hasn’t done anything wrong here.

At the same time, I fully understand Brad’s frustration. I personally disagree with the somewhat vitriolic and accusatory tone of his response to Niko. This seems both unnecessary and unhelpful. But I agree with him that a personal blog is the wrong venue for posting peer reviews, regardless of whether they come from behind the closed doors of a journal review process or from the outside lawn of post-publication discussion. Obviously, nobody can stop anyone from blogging their opinion on a public piece of science (and a preprint is a public piece of science). Both science bloggers and mainstream journalists constantly write about published research, including preprints that haven’t been peer reviewed. Twitter is frequently ablaze with heated discussion about published research. And I must say that when I first skimmed Niko’s post, I didn’t actually realise that this was a peer review, let alone one he had submitted to a journal, but simply thought it was his musings about the preprint.

The way I see it, social media aren’t peer review but mere opinion chatter. Peer review requires some established process. Probably this should have some editorial moderation – but even without that, at the very least there should be a permanent platform for the actual review. Had Niko posted his review as a comment on the preprint server, this would have been entirely acceptable. In an ideal world, he would have done that after writing it instead of waiting for the journal to formally reject the manuscript*. This isn’t to say that opinion chatter is wrong. We do it all the time, and talking about a preprint on Twitter is not so different from discussing a presentation you saw at a conference or seminar. But if we treat every channel as an equivalent venue for public peer review, we end up with a mess. I don’t want to constantly track down opinions, some of which are vastly ill-informed, all over the wild west of the internet.

In the end, this whole debacle just confirms my already firmly held belief (Did you expect anything else? 😉 ) that the peer review process should be independent from journals altogether. What we call preprints today should really be the platform where peer review happens. There should be an editor/moderator to ensure a decent and fair process and facilitate a final decision (because the concept of eternally updating studies is unrealistic and infeasible). However, all of this should happen in public. Importantly, journals would only come into play at the end, to promote research they consider interesting and perhaps to provide some nice editing and formatting.

The way I see it, this is the only way. Science should happen out in the open – including the review process. But what we have here is a clash between promoting openness and a world still partly dominated by the traditional way things have always been done. I think Niko’s heart was in the right place here but by posting his journal reviews on his personal blog he effectively went rogue, or took the law into his own hands, if you will. Perhaps this is the way the world changes but I don’t think this is a good approach. How about we all get together and remake the laws? They are for us scientists, after all, to determine how science should work. It’s about time we start governing ourselves.

Addendum:
I want to add links to two further posts opining on this issue, both of which make important points. First, Sebastian Bobadilla-Suarez, the first author of the manuscript in question, wrote a blog post about his own experiences, especially from the perspective of an early career researcher. Not only are his views far more important, but I actually find his take far more professional and measured than Brad’s post.
Secondly, I want to mention another excellent blog post on this whole debacle by Edwin Dalmaijer which very eloquently summarises this situation. From what I can tell, we pretty much agree in general but Edwin makes a number of more concrete points compared to my utopian dreams of how I would hope things should work.

Chris Chambers is a space alien

Imagine you are a radio astronomer and you suddenly stumble across a signal from outer space that appears to be evidence of an extra-terrestrial intelligence. Let’s also assume you already confidently ruled out any trivial artifactual explanation to do with naturally occurring phenomena or defective measurements. How could you confirm that this signal isn’t simply a random fluke?

This is actually the premise of the novel Contact by Carl Sagan, which happens to be one of my favorite books (I never watched the movie but only caught the end which is nothing like the book so I wouldn’t recommend it…). The solution to this problem proposed in the book is that one should quantify how likely the observed putative extraterrestrial signal would be under the assumption that it is the product of random background radiation.

This is basically what a p-value in frequentist null hypothesis significance testing represents. Using frequentist inference requires that you have a pre-specified hypothesis and a pre-specified design. You should have an effect size in mind, determine how many measurements you need to achieve a particular statistical power, and then you must carry out this experiment precisely as planned. This is rarely how real science works and it is often put forth as one of the main arguments why we should preregister our experimental designs. Any analysis that wasn’t planned a priori is by definition exploratory. The most extreme form of this argument posits that any experiment that hasn’t been preregistered is exploratory. While I still find it hard to agree with this extremist position, it is certainly true that analytical flexibility distorts the inferences we can make about an observation.

This proposed frequentist solution is therefore inappropriate for confirming our extraterrestrial signal. Because the researcher stumbled across the signal, the analysis is by definition exploratory. Moreover, you must also beware of the base-rate fallacy: an event that is extremely unlikely under the null hypothesis is not necessarily evidence against the null hypothesis. Even if p=0.00001, the prior probability of a true extraterrestrial signal may be far smaller still, say, 10^-100. And even if extra-terrestrial signals are quite common, given the small amount of space, time, and EM bands we have studied thus far, how probable is it that we would just stumble across a meaningful signal?
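To make the base-rate point concrete, here is a minimal sketch (all numbers are invented purely for illustration): even an observation with a tiny probability under the noise-only hypothesis can leave noise as by far the more plausible explanation, if a genuine alien signal is rarer still.

```python
# Minimal illustration of the base-rate fallacy (all numbers are made up).
# Even a result that is very unlikely under the null can leave the null
# as by far the more plausible explanation if the alternative is rarer still.

p_pattern_given_noise = 1e-5    # chance of seeing this signal under pure background noise
p_pattern_given_alien = 1.0     # generously assume a real transmitter always produces it
prior_alien = 1e-12             # hypothetical prior for an alien source in this patch of sky

posterior_alien = (p_pattern_given_alien * prior_alien) / (
    p_pattern_given_alien * prior_alien
    + p_pattern_given_noise * (1 - prior_alien)
)

print(f"P(alien | pattern) = {posterior_alien:.2e}")  # ~1e-7: still overwhelmingly likely noise
```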

None of that means that exploratory results aren’t important. I think you’d agree that finding credible evidence of an extra-terrestrial intelligence capable of sending radio transmissions would be a major discovery. The other day I met up with Rob McIntosh, one of the editors for Registered Reports at Cortex, to discuss the distinction between exploratory and confirmatory research. A lot of the criticism of preregistration focuses on whether it puts too much emphasis on hypothesis-driven research and whether it in turn devalues or marginalizes exploratory studies. I have spent a lot of time thinking about this issue and (encouraged by discussions with many proponents of preregistration) I have come to the conclusion that the opposite is true: by emphasizing which parts of your research are confirmatory, I believe exploration is actually valued more. The way scientific publishing conventionally works, many studies are written up as if they were hypothesis-driven when in truth they weren’t. Probably for a lot of published research the truth lies somewhere in the middle.

So preregistration just keeps you honest with yourself and if anything it allows you to be more honest about how you explored the data. Nobody is saying that you can’t explore, and in fact I would argue you should always include some exploration. Whether it is an initial exploratory experiment that you then replicate or test further in a registered experiment, or a post-hoc robustness test you run to ensure that your registered result isn’t just an unforeseen artifact, some exploration is almost always necessary. “If we knew what we were doing, it would not be called research, would it?” (a quote by Albert Einstein, apparently).

One idea I discussed with Rob is whether there should be a publication format that specifically caters to exploration (Chris Chambers has also mentioned this idea previously). Such Exploratory Reports would allow researchers to publish interesting and surprising findings without first registering a hypothesis. You may think this sounds a lot like what a lot of present day high impact papers are like already. The key difference is that these Exploratory Reports would contain no inferential statistics and critically they are explicit about the fact that the research is exploratory – something that is rarely the case in conventional studies. However, this idea poses a critical challenge: on the one hand you want to ensure that the results presented in such a format are trustworthy. But how do you ensure this without inferential statistics?

Proponents of the New Statistics (which aren’t actually “new” and it is also questionable whether you should call them “statistics”) will tell you that you could just report the means/medians and confidence intervals, or perhaps the whole distributions of data. But that isn’t really helping. Inspecting confidence intervals and how far they are from zero (or another value of no interest) is effectively the same thing as a significance test. Even merely showing the distribution of observations isn’t really helping. If a result is so blatantly obvious that it convinces you by visual inspection (the “inter-ocular trauma test”), then formal statistical testing would be unnecessary anyway. If the results are even just a little subtler, it can be very difficult to decide whether the finding is interesting. So the way I see it, we either need a way to estimate statistical evidence, or you need to follow up the finding with a registered, confirmatory experiment that specifically seeks to replicate and/or further test the original exploratory finding.
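To illustrate that point, here is a small simulation sketch (simulated data only, nothing anyone actually analysed): for a one-sample t-test, the 95% confidence interval excludes zero exactly when p < 0.05, so “inspecting the CI” is the same decision rule in disguise.

```python
# Sketch: "95% CI excludes zero" and "p < .05" are the same decision for a one-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n = 10000, 20
agree = 0

for _ in range(n_sims):
    x = rng.normal(loc=0.3, scale=1.0, size=n)       # arbitrary simulated sample
    t, p = stats.ttest_1samp(x, popmean=0.0)
    half_width = stats.t.ppf(0.975, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    ci_excludes_zero = (x.mean() - half_width > 0) or (x.mean() + half_width < 0)
    agree += (ci_excludes_zero == (p < 0.05))

print(f"Decisions agree in {agree}/{n_sims} simulations")  # should be all of them
```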

In the case of our extra-terrestrial signal you may plan a new measurement. You know the location in the sky where the signal came from, so part of your preregistered methods is to point your radio telescope at the same point. You also have an idea of the signal strength, which allows you to determine the number of measurements needed to have adequate statistical power. Then you carry out this experiment, sticking meticulously to your planned recipe. Finally, you report your result and the associated p-value.
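For what that planning step could look like in practice, here is a hedged sketch of a simulation-based sample-size calculation. The assumed signal strength, noise level, and target power are placeholders, not anyone’s real numbers: simulate the planned measurement many times and find the smallest number of observations for which the test detects the assumed signal at least 80% of the time.

```python
# Sketch of a simulation-based power analysis (all parameters are placeholders).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
signal_strength = 0.5                     # assumed effect size in units of the measurement noise
alpha, target_power, n_sims = 0.05, 0.80, 5000

for n in range(5, 201):
    # simulate n measurements of the signal, tested against a null of pure noise
    sims = rng.normal(loc=signal_strength, scale=1.0, size=(n_sims, n))
    p_values = stats.ttest_1samp(sims, popmean=0.0, axis=1).pvalue
    power = np.mean(p_values < alpha)
    if power >= target_power:
        print(f"n = {n} measurements gives estimated power {power:.2f}")
        break
```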

Sounds good in theory. In practice, however, this is not how science typically works. Maybe the signal isn’t continuous. There could be all sorts of reasons why the signal may only be intermittent, be it some interstellar dust clouds blocking the line of transmission, the transmitter pointing away from Earth due to the rotation of the aliens’ home planet, or even simply the fact that the aliens are operating their transmitter on a random schedule. We know nothing about what an alien species, let alone their civilization, may be like. Who is to say that they don’t just fall into random sleeping periods in irregular intervals?

So some exploratory, flexible analysis is almost always necessary. If you are too rigid in your approach, you are very likely to miss important discoveries. At the same time, you must be careful not to fool yourself. If we are really going down the route of Exploratory Reports without any statistical inference we need to come up with a good way to ensure that such exploratory findings aren’t mostly garbage. I think in the long run the only way to do so is to replicate and test results in confirmatory studies. But this could already be done as part of a Registered Report in which your design is preregistered. Experiment 1 would be exploratory without any statistical inference but simply reporting the basic pattern of results. Experiment 2 would then be preregistered and replicate or test the finding further.

However, Registered Reports can take a long time to publish. This may in fact be one of the weak points of the format and could stop the scientific community from becoming more enthusiastic about it. As long as there is no real incentive for doing slow science, the idea that you may take two or three years to publish one study is not going to appeal to many people. It will stop early career researchers from getting jobs and research funding. It also puts small labs in poorer universities at a considerable disadvantage compared to researchers with big grants, big data, and legions of research assistants.

The whole point of Exploratory Reports would be to quickly push out interesting observations. In some ways, this is exactly what brief communications in high impact journals are currently for. I don’t think it will serve us well to replace snappy (and likely untrue) high impact findings based on inappropriate statistical inferences with snappy (and likely untrue) exploratory findings lacking statistical inference altogether. If the purpose of Exploratory Reports is solely to provide an outlet for quick publication of interesting results, we still have the same kind of skewed incentive structure as now. Also, while removing statistical inference from our exploratory findings may be better statistical practice, I am not convinced that it is better scientific practice unless we have other ways of ensuring that these exploratory results are kosher.

The way I see it, the only way around this dilemma is to finally stop treating publications as individual units. Science is by nature a lengthy, incremental process. Yes, we need exciting discoveries to drive science forward. At the same time, the replicability and robustness of our discoveries is critical. In order to combine these two needs I believe research findings should not be seen as separate morsels but as a web of interconnected results. A single Exploratory Report (or even a bunch of them) could serve as the starting point. But unless they are followed up by Registered Reports replicating or scrutinizing these findings further, they are not all that meaningful. Only once replications and follow-up experiments have been performed does the whole body of a finding take shape. A search on PubMed or Google Scholar would not merely spit out the original paper but a whole tree of linked experiments.
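Purely as a toy sketch of what such a linked-evidence record could look like under the hood (the study names and link types below are entirely hypothetical), a literature database would only need to store typed links between records and walk them when you search:

```python
# Toy sketch of a linked-evidence record (study IDs and link types are hypothetical).
links = {
    "exploratory_report_2025": [("registered_replication_2026", "replicates"),
                                ("registered_extension_2027", "extends")],
    "registered_replication_2026": [],
    "registered_extension_2027": [("registered_replication_2028", "replicates")],
    "registered_replication_2028": [],
}

def evidence_tree(study, depth=0):
    """Recursively print a starting study and all the follow-up work linked to it."""
    print("  " * depth + study)
    for child, relation in links.get(study, []):
        print("  " * (depth + 1) + f"({relation})")
        evidence_tree(child, depth + 1)

evidence_tree("exploratory_report_2025")
```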

The perceived impact and value of a finding would thus be related to how much of an interconnected body of evidence it has generated rather than whether it was published in Nature or Science. Critically, this would allow people to quickly publish their exciting finding and thus avoid being deadlocked by endless review processes and disadvantaged compared to other people who can afford to do more open science. At the same time, they would be incentivized to conduct follow-up studies. And because the whole body of related literature is linked, it would also be an incentive for others to conduct replications or follow-up experiments on your exploratory finding.

There are obviously logistical and technical challenges with this idea. The current publication infrastructure does not really allow for it yet, but that is not a major problem: it seems entirely feasible to implement such a system. The bigger challenge is how to convince the broader community, publishers, and funders to take this on board.

[Image: the Arecibo message]

Strolling through the Garden of Forking Paths

The other day I got into another Twitter argument – for which I owe Richard Morey another drink – about preregistration of experimental designs before data collection. Now, as you may know, I have in the past had long debates with proponents of preregistration. Not really because I was against it per se but because I am a natural skeptic. It is still far too early to tell if the evidence supports the claim that preregistration improves the replicability and validity of published research. I also have an innate tendency to view any revolutionary proposals with suspicion. However, these long discussions have eased my worries and led me to revise my views on this issue. As Russ Poldrack put it nicely, preregistration no longer makes me nervous. I believe the theoretical case for preregistration is compelling. While solid empirical evidence for the positive and negative consequences of preregistration will only emerge over the course of the coming decades, this is not actually all that important. I seriously doubt that preregistration actually hurts scientific progress. At worst it has not much of an effect at all – but I am fairly confident that it will prove to be a positive development.

Curiously, largely due to the heroic efforts by one Christopher Chambers, a Sith Lord at my alma mater Cardiff University, I am now strongly in favor of the more radical form of preregistration, registered reports (RRs), where the hypothesis and design are first subjected to peer review, data collection only commences once the design has been accepted, and eventual publication is guaranteed if the registered plan was followed. In departmental discussions, a colleague of mine repeatedly voiced his doubts that RRs could ever become mainstream, because they are such a major effort. It is obvious that RRs are not ideal for all kinds of research and to my knowledge nobody claims otherwise. RRs are a lot of work that I wouldn’t invest in something like a short student project, in particular a psychophysics experiment. But I do think they should become the standard operating procedure for many larger, more expensive projects. We already have project presentations at our imaging facility where we discuss new projects and make suggestions on the proposed design. RRs are simply a way to take this concept into the 21st century and the age of transparent research. They can also improve the detail and quality of the feedback: most people at our project presentations will not be experts on the proposed research while peer reviewers at least are supposed to be. And, perhaps most importantly, RRs ensure that someone actually compares the proposed design to what was eventually carried out.

When RRs are infeasible or impractical, there is always the option of using light preregistration, in which you only state your hypothesis and experimental plans and upload this to OSF or a similar repository. I have done so twice now (although one is still in the draft stage and therefore not yet public). I would strongly encourage people to at least give that a try. If a detailed preregistration document is too much effort (it can be a lot of work although it should save you work when writing up your methods later on), there is even the option for very basic registration. The best format invariably depends on your particular research question. Such basic preregistrations can add transparency to the distinction between exploratory and confirmatory results because you have a public record of your prior predictions. Primarily, I think they are extremely useful to you, the researcher, as it allows you to check how directly you navigated the Garden of Forking Paths. Nobody stops you from taking a turn here or there. Maybe this is my OCD speaking, but I think you should always peek down some of the paths at least, simply as a robustness check. But the preregistration makes it less likely that you fool yourself. It is surprisingly easy to start believing that you took a straight path and forget about all the dead ends along the way.

This for me is really the main point of preregistration and RRs. I think a lot of the early discussion of this concept, and a lot of the opposition to it, stems from the implicit or even explicit accusation that nobody can be trusted. I can totally understand why this fails to win the hearts and minds of many people. However, it’s also clear that questionable research practices and deliberate p-hacking have been rampant. Moreover, unconscious p-hacking due to analytical flexibility almost certainly affects many findings. There are a lot of variables here and so I’d wager that most of the scientific literature is actually only mildly skewed by that. But that is not the point. Rather, as scientists, especially ones who study cognitive and mental processes of all things, shouldn’t we want to minimize our own cognitive biases and the human errors that could lead us astray? Instead of the rather negative “data police” narrative that you often hear, this is exactly what preregistration is about. And so I think first and foremost a basic preregistration is for yourself.

When I say such a basic preregistration is for yourself, this does not necessarily mean it cannot also be interesting to others. But I do believe their usefulness to other people is limited and should not be overstated. As with many of the changes brought on by open science, we must remain skeptical of any unproven claims of their benefits and keep in mind potential dangers. The way I see it, most (all?) public proponents of either form of preregistration are fully aware of this. I think the danger really concerns the wider community. I occasionally see anonymous or sock-puppet accounts popping up in online comment sections espousing a very radical view that only preregistered research can be trusted. Here is why this is disturbing me:

1. “I’ll just get some fresh air in the garden …”

Preregistered methods can only be as good as the detail they provide. A preregistration can be so vague that you cannot make heads or tails of it. The basic OSF-style registrations (e.g. the AsPredicted format) may be particularly prone to this problem, but it can happen even when you have written a long design document. In essence, this is just saying you’ll take a stroll in the hedge maze without giving any indication whatsoever which paths you will take.

2. “I don’t care if the exit is right there!”

Preregistration doesn’t mean that your predictions make any sense or that there isn’t a better way to answer the research question. Often such things will only be revealed once the experiment is under way or completed and I’d actually hazard the guess that this is usually the case. Part of the beauty of preregistration is that it demonstrates to everyone (including yourself!) how many things you probably didn’t think of before starting the study. But it should never be used as an excuse not to try something unregistered when there are good scientific reasons to do so. This would be the equivalent of taking one predetermined path through the maze and then getting stuck in a dead end – in plain sight of the exit.

3. “Since I didn’t watch you, you must have chosen forking paths!”

Just because someone didn’t preregister their experiment does not mean their experiment was not confirmatory. Exploratory research is actually undervalued in the current system. A lot of research is written up as if it were confirmatory even if it wasn’t. Ironically, critics of preregistration often suggest that it devalues exploratory research but it actually places greater value on it because you are no longer incentivized to hide it. But nevertheless, confirmatory research does happen even without preregistration. It doesn’t become any less confirmatory because the authors didn’t tell you about it. I’m all in favor of constructive skepticism. If a result seems so surprising or implausible that you find it hard to swallow, by all means scrutinize it closely and/or carry out an (ideally preregistered) attempt to replicate it. But astoundingly, even people who don’t believe in open science sometimes do good science. When a tree falls in the garden and nobody is there to hear it, it still makes a sound.

Late September when the forks are in bloom

Obviously, RRs are not completely immune to these problems either. Present day peer review frequently fails to spot even glaring errors, so it is inevitable that it will also make mistakes in the RR situation. Moreover, there are additional problems with RRs, such as the fact that they require an observant and dedicated editor. This may not be so problematic while RR editors are strong proponents of RRs but if this concept becomes more widespread this will not always be the case. It remains to be seen how that works out. However, I think on the whole the RR concept is a reasonably good guarantee that hypotheses and designs are scrutinized, and that results are published, independent of the final outcome. The way I see it, both of these are fundamental improvements over the way we have been doing science so far.

But I’d definitely be very careful not to over-interpret the fact that a study is preregistered, especially when it isn’t a RR. Those badges they put on Psych Science articles may be a good incentive for people to embrace open science practices but I’m very skeptical of anyone who implies that just because a study was preregistered, or because it shares data and materials, that this makes it more trustworthy. Because it simply doesn’t. It lulls you into a false sense of security and I thought the intention here was not to fool ourselves so much any more. A recent case of data being manipulated after it was uploaded demonstrates how misleading an open data badge can be. In the same vein, just because an experiment is preregistered does not mean the authors didn’t lead us (and themselves) down the garden path. There have also been cases of preregistered studies that then did not actually report the outcomes of their intended analyses.

So, preregistration only means that you can read what the authors said they would do and then check for yourself how this compares to what they did do. That’s great because it’s transparent. But unless you actually do this check, you should treat the findings with the same skepticism (and the authors with the same courtesy and respect) as you would those of any other, non-registered study.

[Image: a hedge maze]
Sometimes it is really not that hard to find your way through the garden…

3 scoops of vanilla science in a low impact waffle please

A lot of young[1] researchers are worried about being “scooped”. No, this is not about something unpleasantly kinky but about when some other lab publishes an experiment that is very similar to yours before you do. Sometimes this is even more than just a worry and it actually happens. I know that this can be depressing. You’ve invested months or years of work and sleepless nights in this project and then somebody else comes along and publishes something similar and – poof – all the novelty is gone. Your science career is over. You will never publish this high impact now. You won’t ever get a grant. Immeasurable effort down the drain. Might as well give up, sell your soul to the Devil, get a slave job in the pharmaceutical industry, and get rich[2].

Except that this is total crap. There is no such thing as being scooped in this way, or at least if there is, it is not the end of your scientific career. In this post I want to briefly explain why I think so. This won’t be a lecture on the merits of open science, on replications, on how we should care more about the truth than about novelty and “sexiness”. All of these things are undoubtedly true in my mind and they are things we as a community should be actively working to change – but this is no help to young scientists who are still trying to make a name for themselves in a system that continues to reward high impact publications over substance.

No. Here I will talk about this issue with respect to the status quo. I think even in the current system, imperfect as it may be, this irrational fear is in my view unfounded. It is essential to dispel these myths about impact and novelty, about how precedence is tied to your career prospects. Early career scientists are the future of science. How can we ever hope to change science for the better if we allow this sort of madness to live on in the next generation of scientists? I say ‘live on’ for good reason – I, too, used to suffer from this madness when I was a graduate student and postdoc.

Why did I have this madness? Honestly I couldn’t say. Perhaps it’s a natural evolution of young researchers, at least in our current system. People like to point the finger at the lab PIs pressuring you into this sort of crazy behaviour. But that wasn’t it for me. For most of my postdoc I worked with Geraint Rees at UCL and perhaps the best thing he ever told me was to fucking chill[3]. He taught me – more by example than words – that while having a successful career was useful, what is much more important is to remember why you’re doing it: The point of having a (reasonably successful) science career is to be able to pay the rent/mortgage and take some enjoyment out of this life you’ve been given. The reason I do science, rather than making a mint in the pharma industry[4], is that I am genuinely curious and want to figure shit out.

Guess what? Neither of these things depend on whether somebody else publishes a similar (or even very similar) experiment while you’re still running it. We all know that novelty still matters to a lot of journals. Some have been very reluctant to publish replication attempts. I agree that publishing high impact papers does help wedge your foot in the door (that is, get you short-listed) in grant and job applications. But even if this were all that matters to be a successful scientist (and it really isn’t), here’s why you shouldn’t care too much about that anyway:

No paper was ever rejected because it was scooped

While journal editors will reject papers because they aren’t “novel,” I have never seen any paper being rejected because somebody else published something similar a few months earlier. Most editors and reviewers will not even be aware of the scooping study. You may find this hard to believe because you think your own research is so timely and important, but statistically it is true. Of course, some reviewers will know of the work. But most reviewers are not actually bad people and will not say “Something like this was published three months ago already and therefore this is not interesting.” Again, you may find this hard to believe because we’ve all heard too many stories of Reviewer 2 being an asshole. But in the end, most people aren’t that big of an asshole[5]. It happens quite frequently that I suggest in reviews that the authors cite some recently published work (usually not my own, in case you were wondering) that is very similar to theirs. In my experience this has never led to a rejection; I simply ask them to put their results in the context of similar findings in the literature. You know, the way a Discussion section should be.

No two scooped studies are the same

You may think that the scooper’s experiment was very similar, but unless they actually stole your idea (a whole different story I also don’t believe but I have no time for this now…) and essentially pre-replicated (preclicated?) your design, I’d bet that there are still significant differences. Your study has not lost any of its value because of this. And it’s certainly no reason to quit and/or be depressed.

It’s actually a compliment

Not 100% sure about this one. Scientific curiosity shouldn’t have anything to do with a popularity contest if you ask me. Study whatever the hell you want to (within ethical limits, that is). But I admit, it feels reassuring to me when other people agree that the research questions I am interested in are also interesting to them. For one thing, this means that they will appreciate you working and (eventually) publishing on it, which again from a pragmatic point of view means that you can pay those rents/mortgages. And from a simple vanity perspective it is also reassuring that you’re not completely mad for pursuing a particular research question.

It has little to do with publishing high impact

Honestly, from what I can tell neither precedence nor the popularity of your topic are the critical factors in getting your work into high impact journals. The novelty of your techniques, how surprising and/or reassuringly expected your results are, and the simplicity of the narrative are actually major factors. Moreover, the place you work, the co-authors with whom you write your papers, and the accessibility of the writing (in particular your cover letter to the editors!) definitely matter a great deal as well (and these are not independent of the first points either…). It is quite possible that your “rival”[6] will publish first, but that doesn’t mean you won’t publish similar work in a higher impact journal. Journal review outcomes are pretty stochastic and not really very predictable.

Actual decisions are not based on this

We all hear the horror stories of impact factors and h-indexes determining your success with grant applications and hiring decisions. Even if this were true (and I actually have my doubts that it is as black and white as this), a CV with lots of high impact publications may get your foot in the door – but it does not absolve the panel from making a hiring/funding decision. You need to do the work on that one yourself and even then luck may be against you (the odds certainly are). It also simply is not true that most people are looking for the person with the most Nature papers. Instead I bet you they are looking for people who can string together a coherent argument, communicate their ideas, and who have the drive and intellect to be a good researcher. Applicants with a long list of high impact papers may still come up with awful grant proposals or do terribly in job interviews while people with less stellar publication records can demonstrate their excellence in other ways. You may already have made a name for yourself in your field anyway, through conferences, social media, public engagement etc. This may matter far more than any high impact paper could.

There are more important things

And now we’re coming back to the work-life balance and why you’re doing this in the first place. Honestly, who the hell cares whether someone else published this a few months earlier? Is being the first to do this the reason you’re doing science? I can see the excitement of discovery but let’s face it, most of our research is neither like the work of Einstein or Newton nor are we discovering extraterrestrial life. Your discovery is no doubt exciting to you, it is hopefully exciting to some other scientists in your little bubble and it may even be exciting to some journalist who will write a distorting, simplifying article about it for the mainstream news. But seriously, it’s not as groundbreaking that it is worth sacrificing your mental and physical health over it. Live your life. Spend time with your family. Be good to your fellow creatures on this planet. By all means, don’t be complacent, ensure you make a living but don’t pressure yourself into believing that publishing ultra-high impact papers is the meaning of life.

A positive suggestion for next time…

Now if you’re really worried about this sort of thing, why not preregister your experiment? I know I said I wouldn’t talk about open science here but bear with me just this once because this is a practical point you can implement today. As I keep saying, the whole discussion about preregistration is dominated by talking about “questionable research practices”, HARKing, and all that junk. Not that these aren’t worthwhile concerns but this is a lot of negativity. There are plenty of positive reasons why preregistration can help and the (fallacious) fear of being scooped is one of them. Preregistration does not stop anyone else from publishing the same experiment before you but it does allow you to demonstrate that you had thought of the idea before they published it. With Registered Reports it becomes irrelevant if someone else published before you because your publication is guaranteed after the method has been reviewed. And I believe it will also make it far clearer to everyone how much who published what first where actually matters in the big scheme of things.

[1] Actually there are a lot of old and experienced researchers who worry about this too. And that is far worse than when early career researchers do it because they should really know better and they shouldn’t feel the same career pressures.
[2] It may sound appealing now, but thinking about it I wouldn’t trade my current professional life for anything. Except for grant admin bureaucracy perhaps. I would happily give that up at any price… :/
[3] He didn’t quite say it in those terms.
[4] This doesn’t actually happen. If you want to make a mint you need to go into scientific publishing but the whole open science movement is screwing up that opportunity now as well so you may be out of luck!
[5] Don’t bombard me with “Reviewer 2 held up my paper to publish theirs first” stories. Unless Reviewer 2 signed their review or told you specifically that it was them I don’t take such stories at face value.
[6] The sooner we stop thinking of other scientists in those terms the better for all of us.

[Image: a strawberry ice cream cone]

Yes, science is self-correcting

If you don’t believe science self-corrects, then you probably shouldn’t believe that evolution by natural selection occurs either – it’s basically the same thing.

I have said it many times before, both under the guise of my satirical alter ego and later – more seriously – on this blog. I am getting very tired of repeating it so I wrote this final post about it that I will simply link to next time this inevitably comes up…

My latest outburst about this was triggered by this blog post by Keith Laws entitled “Science is ‘Other-Correcting‘”. I have no qualms with the actual content of this post. It gives an interesting account of the attempt to correct an error in the publication record. The people behind this effort are great researchers for whom I have the utmost respect. The story they tell is shocking and important. In particular, the email they received by accident from a journal editor is disturbing and serves as a reminder of all the things that are wrong with the way scientific research and publishing currently operate.

My issue is with the (in my view seemingly) ubiquitous doubts about the self-correcting nature of science. To quote from the first paragraph in that post:

“I have never been convinced by the ubiquitous phrase ‘Science is self-correcting’. Much evidence points to science being conservative and looking less self-correcting and more ego-protecting. It is also not clear why ‘self’ is the correct description – most change occurs because of the ‘other’ – Science is other correcting.”

In my view this and similar criticisms of self-correction completely miss the point. The prefix ‘self-’ refers to science, not to scientists. In fact, the very same paragraph contains the key: “Science is a process.” Science is an iterative approach by which we gradually broaden our knowledge and understanding of the world. You can debate whether or not there is such a thing as the “scientific method” – perhaps it’s more of a collection of methods. However, in my view above all else science is a way of thinking.

Scientific thinking is being inquisitive, skeptical, and taking nothing for granted. Prestige, fame, success are irrelevant. Perfect theories are irrelevant. The smallest piece of contradictory evidence can refute your grand unifying theory. And science encompasses all that. It is an emergent concept. And this is what is self-correcting.

Scientists, on the other hand, are not self-correcting. Some are more so than others but none are perfect. Scientists are people and thus inherently fallible. They are subject to ego, pride, greed, and all of life’s pressures, such as the need to pay a mortgage, feed their children, and have a career. In the common vernacular, “science” is often conflated with the scientific enterprise, the way scientists go about doing science. This involves all those human factors and more and, fair enough, it is anything but self-correcting. But to argue that this means science isn’t self-correcting is attacking a straw man, because few people seriously argue that the scientific enterprise couldn’t be better.

We should always strive to improve the way we do science because, due to our human failings, it will never be perfect. However, in this context we also shouldn’t forget how much we have already improved it. In Newton’s time, science in Europe (the hub of science then) was largely done only by white men from a very limited socioeconomic background. Even decades later, most women or people of non-European origin didn’t even need to bother trying (although this uphill struggle makes the achievements of scientists like Marie Curie or Henrietta Swan Leavitt all the more impressive). And publishing your research findings was not subject to formal peer review but largely dependent on the ego of some society presidents and on whether they liked you. None of these problems have been wiped off the face of the Earth but I would hope most people agree that things are better than they were 100 years ago.

Like all human beings, scientists are flawed. Nevertheless I am actually optimistic about us as a group. I do believe that on the whole scientists are actually interested in learning the truth and widening their understanding of nature. Sure, there are black sheep and even the best of us will succumb to human failings. At some point or other our dogma and affinity to our pet hypotheses can blind us to the cold facts. But on average I’d like to think we do better than most of our fellow humans. (Then again, I’m probably biased…).

We will continue to make the scientific enterprise better. We will change the way we publish and evaluate scientific findings. We will improve the way we interpret evidence and the way we communicate scientific discoveries. The scientific enterprise will become more democratic, less dependent on publishers getting rich on our free labour. Already within the decade that I have been a practicing scientist, we have begun to tear down the widespread illusion that when a piece of research is published it must therefore be true. When I did my PhD, the only place we could critically discuss new publications was in a small journal club and the conclusions of these discussions were almost never shared with the world. Nowadays every new study is immediately discussed online by an international audience. We have taken leaps towards scientific findings, data, and materials being available to anyone, anywhere, provided they have internet access. I am very optimistic that this is only the beginning of much more fundamental changes.

Last year I participated in a workshop called “Is Science Broken?” that was organised entirely by graduate students in my department. The growing number of replication attempts in the literature and all these post-publication discussions we are having are perfect examples of science correcting itself. It seems deeply ironic to me when a post like Keith Laws’s, which describes an active effort to rectify errors, argues against the self-correcting nature of the scientific process.

Of course, self-correction is not guaranteed. It can easily be stifled. There is always a danger that we drift back into the 19th century or the dark ages. But the more academic freedom (and generous funding) scientists are given, the more science will be allowed to correct itself.

[Image: a beach]
Science is like a calm lagoon in the sunset… Or whatever. There is no real reason why this picture is here.

Update (19 Jan 2016): I just read this nice post about the role of priors in Bayesian statistics. The author actually says Bayesian analysis is “self-correcting” and this epitomises my point here about science. I would say science is essentially Bayesian. We start with prior hypotheses and theories but by accumulating evidence we update our prior beliefs to posterior beliefs. It may take a long time but assuming we continue to collect data our assumptions will self-correct. It may take a reevaluation of what the evidence is (which in this analogy would be a change to the likelihood function). Thus the discussion about how we know how close to the truth we are is in my view missing the point. Self-correction describes the process.
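As a minimal illustration of that analogy (with made-up numbers), here is a beta-binomial sketch: even if we start with a prior that badly overestimates how often an effect replicates, the posterior is pulled towards the true rate as evidence accumulates.

```python
# Sketch of Bayesian self-correction with a beta-binomial model (numbers are invented).
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.2            # the "truth": only 20% of attempts succeed
a, b = 9.0, 1.0            # overconfident prior: we believe ~90% should succeed

for batch in range(1, 6):
    outcomes = rng.random(20) < true_rate      # 20 new experiments per batch
    a += outcomes.sum()                        # successes update one beta parameter
    b += (~outcomes).sum()                     # failures update the other
    print(f"after batch {batch}: posterior mean = {a / (a + b):.2f}")
# The posterior mean drifts from 0.9 towards 0.2 as the data accumulate.
```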

Update (21 Jan 2016): I added a sentence from my comment in the discussion section to the top. It makes for a good summary of my post. The analogy may not be perfect – but even if not I’d say it’s close. If you disagree, please leave a comment below.

Why wouldn’t you share data?

Data sharing has been in the news a lot lately, from the refusal of the authors of the PACE trial to share their data (even though the journal expects it) to the eventful story of the “Sadness impairs color perception” study. A blog post by Dorothy Bishop called “Who’s afraid of Open Data?” made the rounds. The post itself is actually a month old already but it was republished by the LSE blog, which gave it some additional publicity. In it she makes an impassioned argument for open data sharing and discusses the fears and criticisms many researchers have voiced against data sharing.

I have long believed in making all data available (and please note that in the following I will always mean data and materials, so not just the results but also the methods). The way I see it, this transparency is the first and most important remedy to the ills of scientific research. I have regular discussions with one of my close colleagues* about how to improve science – we don’t always agree on various points like preregistration, but if there is one thing where we are on the same page, it is open data sharing. By making data available anyone can reanalyse it and check if the results reproduce, and it allows you to check the robustness of a finding for yourself, if you feel that you should. Moreover, by documenting and organising your data you not only make it easier for other researchers to use, but also for yourself and your lab colleagues. It also helps you with spotting errors. It is also a good argument that stops reviewer 2 from requesting a gazillion additional analyses – if they really think these analyses are necessary they can do them themselves and publish them. This aspect in fact overlaps greatly with the debate on Registered Reports (RR) and it is one of the reasons I like the RR concept. But the benefits of data sharing go well beyond this. Access to the data will allow others to reuse it to answer scientific questions you may not even have thought of. It can also be used in meta-analyses. With the increasing popularity and feasibility of large-scale permutation/bootstrapping methods, access to the raw values becomes particularly important. It allows you to take into account distributional anomalies and outliers, or perhaps to estimate the uncertainty on individual data points.
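As a small illustration of why summary statistics aren’t enough for this, here is a generic bootstrap sketch with made-up data: you cannot resample observations you don’t have.

```python
# Generic bootstrap sketch (made-up data): resampling needs the raw observations,
# not just the mean and SD reported in a paper.
import numpy as np

rng = np.random.default_rng(7)
raw_data = rng.exponential(scale=2.0, size=40)   # skewed, outlier-prone sample

boot_medians = [np.median(rng.choice(raw_data, size=raw_data.size, replace=True))
                for _ in range(10000)]
lo, hi = np.percentile(boot_medians, [2.5, 97.5])

print(f"median = {np.median(raw_data):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```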

But as Dorothy describes, many scientists nevertheless remain afraid of publishing their actual data alongside their studies. For several years many journals and funding agencies have had a policy that data should always be shared upon request – but a laughably small proportion of such requests are successful. This is why some have now adopted the policy that all data must be shared in repositories upon publication or even upon submission. And to encourage this process, the Peer Reviewer Openness Initiative was recently launched, whose signatories refuse to conduct in-depth reviews of manuscripts unless the authors can give a reason why the data and materials aren’t public.

My most memorable experience with fears about open data involves a case where the lab head refused to share data and materials with the graduate student* who actually created the methods and collected the data. The exact details aren’t important. Maybe one day I will talk more about this little horror story… For me this demonstrates how far we have come already. Nowadays that story would be baffling to most researchers but back then (and that’s only a few years ago – I’m not that old!) more than one person actually told me that the PI and university were perfectly justified in keeping the student’s results and the fruits of their intellectual labour under lock and key.

Clearly, people are still afraid of open data. Dorothy lists the following reasons:

  1. Lack of time to curate data;  Data are only useful if they are understandable, and documenting a dataset adequately is a non-trivial task;
  2. Personal investment – sense of not wanting to give away data that had taken time and trouble to collect to other researchers who are perceived as freeloaders;
  3. Concerns about being scooped before the analysis is complete;
  4. Fear of errors being found in the data;
  5. Ethical concerns about confidentiality of personal data, especially in the context of clinical research;
  6. Possibility that others with a different agenda may misuse the data, e.g. perform selective analysis that misrepresented the findings;

In my view, points 1-4 are invalid arguments even if they seem understandable. I have a few comments about some of these:

The fear of being scooped 

I honestly am puzzled by this one. How often does this actually happen? The fear of being scooped is widespread and it may occasionally be justified. Say, if you discuss some great idea you have or post a pilot result on social media perhaps you shouldn’t be surprised if someone else agrees that the idea is great and also does it. Some people wouldn’t be bothered by that but many would and that’s understandable. Less understandable to me is if you present research at a conference and then complain about others publishing similar work because they were inspired by you. That’s what conferences are for. If you don’t want that to happen, don’t go to conferences. Personally, I think science would be a lot better if we cared a lot less about who did what first and instead cared more about what is true and how we can work together…

But anyway, as far as I can see none of that applies to data sharing. By definition data you share is either already published or at least submitted for peer review. If someone reuses your data for something else they have to cite you and give you credit. In many situations they may even do it in collaboration with you which could lead to coauthorship. More importantly, if the scooped result is so easily obtained that somebody beats you to it despite your head start (it’s your data, regardless of how well documented it is you will always know it better than some stranger) then perhaps you should have thought about that sooner. You could have held back on your first publication and combined the analyses. Or, if it really makes more sense to publish the data in separate papers, then you could perhaps declare that the full data set will be shared after the second one is published. I don’t really think this is necessary but I would accept that argument.

Either way, I don’t believe being scooped by data sharing is very realistic and any cases of that happening must be extremely rare. But please share these stories if you have them to prove me wrong! If you prefer, you can post it anonymously on the Neuroscience Devils. That’s what I created that website for.

Fear of errors being discovered

I’m sure everyone can understand that fear. It can be embarrassing to have your errors (and we all make mistakes) discovered – at least if they are errors with big consequences. Part of the problem is that all too often the discovery of errors comes with an implication of malice. To err is human, to forgive divine. We really need to stop treating every case in which somebody’s mistakes are revealed (or, for that matter, in which somebody’s findings fail to replicate) as evidence of sloppy science or malpractice. Sometimes (usually?) mistakes are just mistakes.

Probably nobody wants to have all of their data combed by vengeful sleuths nitpicking every tiny detail. If that becomes excessive and the same person is repeatedly targeted, it could border on harassment and should be counteracted. In-depth scrutiny of all of a particular researcher’s data should be a special case that only happens when there is a substantial reason, say, in a fraud investigation. I would hope that these cases are also rare.

And surely nobody can seriously want the scientific record to be littered with false findings, artifacts, and coding errors. I am not happy if someone tells me I made a serious error, but I would nonetheless be grateful to them for telling me! It has happened before that lab members or collaborators spotted mistakes I made. In turn, I have spotted mistakes colleagues made. None of this would have been possible if we didn’t share our data and methods with one another. I am always surprised when I hear how uncommon this seems to be in some labs. Labs should be collaborative, and so should science as a whole. And as I already said, organising and documenting your data actually helps you spot errors before the work is published. If anything, data sharing reduces mistakes.

Ethical issues with patient confidentiality

This is a big concern – and the only one I have full sympathy with. But all of our ethics and data protection applications actually address this. Only anonymised data should be shared: participants should be identified solely by unique codes, and only the researchers who collected the data should hold the key linking those codes to identities. For a lot of psychology or other behavioural experiments this shouldn’t be hard to achieve.
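To make this concrete, here is a minimal sketch of the kind of pseudonymisation step I mean. It is a toy example in Python using only the standard library; the file names and column names (name, accuracy, rt_ms) are made up for illustration, not taken from any real study. The point is simply that the linkage key stays with the lab while only the coded file is ever deposited anywhere public.

```python
import csv
import secrets

# Hypothetical input: one row per participant with identifying details
# plus their behavioural measures (file and column names are placeholders).
with open("raw_with_identifiers.csv", newline="") as f:
    rows = list(csv.DictReader(f))

key = {}        # name -> random code; stays private with the lab
shareable = []  # coded rows; the only thing that ever leaves the lab

for row in rows:
    code = "sub-" + secrets.token_hex(4)   # e.g. "sub-9f3a12bc"
    key[row["name"]] = code
    shareable.append({"participant": code,
                      "accuracy": row["accuracy"],
                      "rt_ms": row["rt_ms"]})

# Private linkage key: kept by the researchers who collected the data,
# never uploaded to any repository.
with open("PRIVATE_linkage_key.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "code"])
    writer.writeheader()
    writer.writerows({"name": n, "code": c} for n, c in key.items())

# Pseudonymised file: this is what goes into the public repository.
with open("shared_behavioural_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant", "accuracy", "rt_ms"])
    writer.writeheader()
    writer.writerows(shareable)
```

None of this is sophisticated, which is rather the point: for behavioural data the technical barrier to sharing responsibly is low.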

Neuroimaging or biological data are a different story. I have a strict rule for my own results: we do not upload the actual brain images from our fMRI experiments to public repositories. While under certain conditions I am willing to share such data upon request as long as the participant’s name has been removed, I don’t think it is safe to make those data permanently available to the entire internet. Participant confidentiality must trump the need for transparency. It simply is not possible to remove all identifying information from these files. Skull-stripping, which removes all head tissues in an MRI scan except the brain, does not remove all identifying information. Brains are like fingerprints: they can easily be matched up if you have the required data. As someone* recently said in a discussion of this issue, the undergrad you are scanning in your experiment now may be Prime Minister in 20–30 years’ time. They definitely didn’t consent to their brain scans being available to anyone. It may not take much to identify a person’s data using only their age, gender, handedness, and a basic model of their head shape derived from their brain scan. We must also keep in mind what additional data mining may become possible in the coming decades that we simply have no idea about yet. Nobody can know what information could be gleaned from these data, say, about health risks or personality factors. Sharing this without very clear informed consent (which many people probably wouldn’t give) is in my view irresponsible.

I also don’t believe that for most purposes this is even necessary. Most neuroimaging studies involve group analyses. In those, you first spatially normalise each participant’s images and then perform statistical analysis across participants. It is perfectly reasonable to make those group results available. For non-parametric permutation analyses (also in the news recently) you would want to share individual data points, but even there you can probably share images after enough processing that not much incidental information is left (e.g. condition contrast images). In our own work, these considerations don’t apply because we conduct almost all our analyses in the participant’s native brain space. We therefore decided to share only the participants’ data projected onto a cortical reconstruction. These data contain the functional results for every relevant voxel after motion correction and signal filtering. No, this isn’t raw data, but it is sufficient to reproduce the results and also to apply different analyses. I’d wager that for almost all purposes this is more than enough. And again, if someone were interested in applying different motion correction or filtering methods, that would be negotiable. But I don’t think we need to allow unrestricted permanent access for such highly unlikely purposes.
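To illustrate what I mean by sufficiently processed derivatives, here is a hedged sketch in Python using nibabel and numpy. The file names and the simple subtraction are placeholders rather than our actual pipeline; the idea is just that a per-subject condition contrast image can be deposited instead of the raw scans while still supporting group statistics and reanalysis.

```python
import nibabel as nib
import numpy as np

# Hypothetical per-subject parameter-estimate maps for two conditions,
# already motion-corrected and filtered by whatever default pipeline was used.
cond_a = nib.load("sub-01_condA_beta.nii.gz")
cond_b = nib.load("sub-01_condB_beta.nii.gz")

# The contrast is simply the voxelwise difference between the two condition maps.
contrast = cond_a.get_fdata() - cond_b.get_fdata()

# Wrap the result in a new image that keeps the original affine and save it.
# This derivative carries far less incidental anatomical detail than the raw
# or minimally processed scans, yet it is enough for many reanalyses.
out = nib.Nifti1Image(contrast.astype(np.float32), cond_a.affine)
nib.save(out, "sub-01_condA-minus-condB_contrast.nii.gz")
```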

Basically, rather than sharing all raw data, I think we need to treat each data set on a case-by-case basis and weigh the risks against the benefits. What should be mandatory, in my view, is sharing the default-processed data needed to reproduce the published results.

People with agendas and freeloaders

Finally, a few words about a combination of points 2 and 6 in Dorothy Bishop’s list. When it comes to controversial topics (e.g. climate change or chronic fatigue syndrome, to name two examples where this apparently happened), there is perhaps a danger that people with shady motivations will reanalyse and nitpick the data to find fault with them and discredit the researchers. More generally, people with limited expertise may conduct poor reanalyses. Since failed reanalyses (and again, the same applies to failed replications) often cause quite a stir and are frequently discussed as evidence that the original claims were false, this could indeed be a problem. Some will also perceive these cases as “data tourism”: using somebody else’s hard-won results for quick personal gain – say, by making a name for themselves as a cunning data detective.

There can be some truth in that, and for that reason I feel we really have to work harder to change the culture of scientific discourse. We must resist the bias to agree with the “accuser” in these situations. (Don’t pretend you don’t have this bias because we all do. Maybe not in all cases but in many cases…)

Of course, skepticism is good. Scientists should be skeptical, but the skepticism should apply to all claims (see also this post by Neuroskeptic on this issue). If somebody reanalyses somebody else’s data using a different method, that does not automatically make them right and the original author wrong. Likewise, if somebody fails to replicate a finding, that doesn’t mean the finding was false.

Science thrives on discussion and disagreement. The critical thing is that the discussion is transparent and public. Anyone who has an interest should have the opportunity to follow it. Anyone who is skeptical of the authors’ or the reanalysers’/replicators’ claims should be able to check for themselves.

And the only way to achieve this level of openness is Open Data.

 

* They will remain anonymous unless they want to join this debate.