I decided to respond now before I get inundated with the next round of overdue work I need to do this week. I was going to wait for Chris’ response, as I think the two of you will probably overlap a bit, but there are a lot of deadlines and things to do, so now is a better time. I also decided to write my reply as a post because it is a bit long for a comment and others may find it interesting.
I think most of your answers illustrate how we all miss each other’s points a little. I am not talking about what RR and prereg are like right now. Any evidence we have about them now is confounded by the fact that the approach is new and that the people trying it are, for the most part, probably its proponents. Most of the points I raised (except perhaps the last one) are issues that only really come into play once the approach has become normalised: when it is commonplace at many journals and has stopped being a measure to improve science, becoming instead simply how science works – a bit like standard peer review now (and you know how much people complain about that).
DB: Nope. You have to give a comprehensive account of what you plan, thinking through every aspect of rationale, methods and analysis: Cortex really doesn’t want to publish anything flawed and so they screw you down on the details.
DB: Why any more so than for other publication methods? I really find this concern quite an odd one.
I agree that detailed review is key, but the same could be said about the standard system. I don’t buy that author reputation isn’t going to influence judgements there. Like most of us, I’m sure, I always try my best not to be influenced by it, but I think we’re kidding ourselves if we believe we’re perfectly unbiased. If you get a somewhat lacklustre manuscript to review, you will almost inevitably respond better to an author with a proven track record in the field (who probably possesses great writing skills) than to some nobody you’ve never heard of, especially if they fail to communicate their ideas well (e.g. because their native language isn’t English). Nevertheless, the quality of their work could actually be equal.
Now I take your point that this is also an issue in the current system, but the difference is that RR stage 1 reviews are only about evaluating the idea and the design. At that stage the reviewer is lacking information that could help them make a more informed judgement. And it would be very disturbing if we told people what science they can or can’t do (in the good journals that have RRs) just because of factors like this.
DB: Well, for a start registered reports require you to have very high statistical power and so humungous great N. Most people who just happened to do a study won’t meet that criterion. Second, as Chris pointed out in his talk, if you submit a registered report, then it goes out to review, and the reviewers do what reviewers do, i.e. make suggestions for changes, new conditions, etc etc. They do also expect you to specify your analysis in advance: that is one of the important features of RR.
I think this isn’t really answering my question. It should be very easy to come up with a “highly powered experiment” if you already know the final observed effect size :P. And as I said in my post, I think many outcome-dependent changes to the protocol concern the analysis, not the design. Again, my point is also that once RRs have become more normal and people have run out of steam a bit (so review quality may suffer compared to now), this may be a fairly easy thing to do. I could also see there being hybrids (i.e. people have already collected a fair bit of “pilot” data and just add a bit more under the registered protocol).
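To make the power point concrete: if you already know the effect size your completed data showed, plugging it into a standard sample-size formula trivially yields a design that looks “highly powered” on paper. Below is a minimal sketch using the normal-approximation formula for a two-sample t-test; the effect size d = 0.5 and the function name are made-up illustrations, not anything from the discussion above.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.9):
    """Approximate sample size per group for a two-sample t-test,
    using the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# If the data are already in hand and happened to show d = 0.5,
# "preregistering" 85 participants per group looks impeccably powered:
print(n_per_group(0.5))
```

The calculation itself is entirely legitimate; the worry is only about the direction of inference, i.e. deriving the “a priori” power analysis from the observed result rather than from independent pilot work or the literature.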
But I agree that this is perhaps all a bit hypothetical. I was questioning the actual logic of the response to this criticism. In the end, though, what matters is how likely it is that people engage in that sort of behaviour. If pre-completed grant proposals are really as common as people claim, I could see it happening – but that depends largely on how difficult it is compared to being honest. Perhaps you’re right and it’s just very unlikely.
DB: So you would be unlikely to get through the RR process even if you did decide to fake your time stamps (and let’s face it, if you’re going to do that, you are beyond redemption).
I’m sure we all agree on that but I wouldn’t put it past some people. I’ve seen cases where people threw away around a third of their data points because they didn’t like the results. I am not sure that fiddling with the time stamps (which may be easier than actively changing the date) is really all that much worse.
Of course, this brings us to another question: nothing in RR, or data sharing in general, really stops people from excluding “bad” subjects. Again, this is no different from the status quo, but my issue is that having preregistered and open experiments clearly carries a certain value judgement for people (hell, the OSF actually operates a “badge” system!). So in a way a faked RR could end up being valued more than an honest, well-done non-RR. That does bother me.
DB: Since you yourself don’t find this [people stealing my ideas from RRs] all that plausible, I won’t rehearse the reasons why it isn’t.
Again, I was mostly pointing out the holes in the logic here. And whether or not it is plausible, a lot of people are quite evidently afraid of what Chris called the “boogieman” of being scooped. My point was that, to allay this fear, pointing to Manuscript Received dates will not suffice. But we all seem to agree that scooping is an exaggerated problem. I think the best way to deal with this worry is to stop people from being afraid of the boogieman in the first place.
DB: Your view on this may be reinforced by PIs in your institution. However, be aware that there are some senior people who are more interested in whether your research is replicable than whether it is sexy. And who find the soundbite form of reporting required by Nature etc quite inadequate.
This seems a bit naive to me. It’s not just about what “some senior people” think. I can’t in all honesty say that these factors don’t play into grant and hiring decisions. I also think it is a bit hypocritical to advise junior researchers not to pursue a bit of high-impact glory when our own careers are at least in part founded on it (although mine isn’t nearly as much as some other people’s ;)). I do advise people that chasing high impact alone is a bad idea and that you should instead have a healthy selection of solid studies. But I can also tell from experience that a few high-impact publications clearly open doors for you. Anyway, this is really a topic for a different day, I guess.
My own view is that I would go for a registered report in cases where it is feasible, as it has three big benefits – 1) you get good peer review before doing the study, 2) it can be nice to have a guarantee of publication and 3) you don’t have to convince people that you didn’t make up the hypothesis after seeing the data. But where it’s not feasible, I’d go for a registered protocol on OSF which at least gives me (3).
I agree that this is eminently sensible. I think the (almost) guaranteed publication is probably quite a convincing argument for many people. And by god, I can say that I have in the past wished for (3) – oddly enough, it’s usually the most hypothesis-driven research where (some) people don’t want to believe you weren’t HARKing…
I think this also underlines an important point. The whole prereg discussion far too often revolves around negative issues. The critics are probably partly to blame for that, but in general you often hear prereg mentioned as a response to questionable research practices. What this discussion suggests, though, is that there are many positive aspects to prereg: rather than being a cure for an ailing scientific process, it can also be seen simply as a healthier way to do science.