I have been thinking about something I read on Twitter yesterday (the author can take credit for the statement if they wish, but to me it’s not important to drag names into this):
…people should be rewarded for counterintuitive findings that replicate
I think that notion is misguided and potentially quite dangerous. Part of the reason why psychology is in such a mess right now is this focus on counterintuitive and/or surprising findings. It is natural that we get excited by novel discoveries and they are essential for scientific progress – so it will probably always be the case that novel or surprising findings can boost a scientist’s career. But that isn’t the same as rewarding them directly.
Rather, I believe we should reward good science. By good I mean that experiments are well-designed, with decent control conditions, appropriate randomisation of conditions etc., and meticulously executed (you can present evidence of that with sanity checks, analyses of residuals etc.). However, there is another aspect, which is that the experiments – not the findings – should be replicable. The dictionary definition of ‘replicable‘ is ‘capable of replication’. In the context of the debates raging in our field this is usually taken to mean that the findings replicate upon repeated testing.
However, it can also be taken to mean (and originally, I think, this was the primary meaning) that it is possible to repeat the experiment. I think good science should come with methods sections that contain sufficient detail for someone with a reasonable background in the field to replicate the experiments. That is how we teach our students to write their methods sections. One of Jason Mitchell’s arguments was that all experiments contain tacit knowledge without which a replication is likely to fail. There will doubtless always be methods we don’t report. Mitchell uses the example that we don’t report that we instruct participants in neuroimaging experiments to keep still. Simine Vazire used the example that we don’t mention that experimenters usually wear clothes. However, things like this are pretty basic. Anything that isn’t just common sense should be reported in your methods – especially if you believe that it could make a difference. Of course, it is possible that you will only later realise a factor matters (like Prof Fluke did with the place where his coin-flipping experiment was conducted). But we should seek to minimise these realisations by reporting methods in as much detail as possible.
While things have improved since the days when Science reported methods only as footnotes, the methods sections of many high-impact journals in our field still have very strict word limits. This makes it very difficult to report all the details of your methods. At Nature, for instance, the methods section should only be about 3,000 words and “detailed descriptions of methods already published should be avoided.” While that may seem sensible, it is actually quite painful to piece together methods details from previous studies, unless the procedures are well-established and widespread. Note that I am not too bothered by the fact that methods are often only available online. In this day and age that isn’t unreasonable. But at the very least the methods should be thorough, easily accessible, and all in one place.
(You may very well think that this post is about replication again – even though I said I wouldn’t write about this any more – but I couldn’t possibly comment…)