On the magic of independent piloting

TL;DR: Never decide whether to run a full experiment based on whether one of the small pilots in which you tweaked your paradigm supported the hypothesis. Use small pilots only to ensure that the experiment produces high-quality data, judged by criteria that are unrelated to your hypothesis.

Sorry for the bombardment with posts on data peeking and piloting. I felt this would have cluttered up the previous post so I wrote a separate one. After this one I will go back to doing actual work though, I promise! That grant proposal I should be writing has been neglected for too long…

In my previous post, I simulated what happens when you conduct inappropriate pilot experiments: running a small experiment and then continuing data collection if the pilot produces significant results. This is really data peeking, and it shouldn’t come as much of a surprise that it inflates false positives and massively skews effect size estimates. I hope most people realize this is a terrible thing to do, because it makes your results entirely dependent on the pilot’s outcome. Quite possibly, some people learned about this in their undergrad stats classes. As one of my colleagues put it, “if it ends up in the final analysis it is not a pilot.” Sadly, I don’t think this is as widely known as it should be. I was not kidding when I said that I have seen it happen or overheard people discussing having done this type of inappropriate piloting.

But anyway, what is an appropriate pilot then? In my previous post, I suggested you should redo the same experiment but restart data collection. You now stick to the methods that gave you a significant pilot result. Now the data set used to test your hypothesis is completely independent, so it won’t be skewed by the pre-selected pilot data. Put another way, your exploratory pilot allows you to estimate a prior, and your full experiment seeks to confirm it. Surely there is nothing wrong with that, right?

I’m afraid there is, and it is actually obvious why: your small pilot experiment is underpowered to detect real effects, especially small ones. So if you use inferential statistics to decide whether a pilot experiment “worked,” the pilot can only detect effects whose sample estimates happen to be large. Importantly, this does not bias your full experiment towards larger effect sizes – the confirmatory data are independent. The problem is different: if you only continue the experiment when the pilot was significant, you are discarding all of the pilots that investigated true effects but which – due to the large uncertainty (low power) of the pilot – failed to reach significance purely by chance. Naturally, the proportion of these false negatives shrinks the larger you make your pilot sample, but since pilots are by definition small, the error rate is high in any case. For example, for a true effect size of δ = 0.3, the false negative rate with a pilot sample of 2 per group is 95%. With a pilot sample of 15, it is still as high as 88%. Just for illustration, I show below the false negative rates (1 − power) for three different true effect sizes. Even for quite decent effect sizes, the sensitivity of a small pilot is abysmal:

[Figure: False negative rates (1 − power) of small pilots for three true effect sizes, plotted against pilot sample size]
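If you want to check these numbers yourself, a quick Monte Carlo simulation reproduces them. This is a Python sketch (the post's own code is MATLAB, and the function name here is mine), using the two-sample t-test with the usual critical values:

```python
import random
import statistics

def pilot_false_negative_rate(n_per_group, delta, t_crit, n_sims=20000, seed=1):
    """Monte Carlo estimate of the false negative rate (1 - power) of a
    two-sample t-test for a pilot with n_per_group subjects per group."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(delta, 1.0) for _ in range(n_per_group)]
        # Pooled-variance two-sample t statistic
        sp = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
        t = (statistics.mean(b) - statistics.mean(a)) / (sp * (2 / n_per_group) ** 0.5)
        if abs(t) < t_crit:
            misses += 1  # true effect, but the pilot came out non-significant
    return misses / n_sims

# Two-tailed critical t at alpha = .05: df = 2 -> 4.303, df = 28 -> 2.048
fn_n2 = pilot_false_negative_rate(2, 0.3, 4.303)
fn_n15 = pilot_false_negative_rate(15, 0.3, 2.048)
print(fn_n2, fn_n15)  # roughly 0.95 and 0.88, as quoted above
```

So even a 15-per-group pilot misses a true δ = 0.3 almost nine times out of ten.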

Thus, if you only pick pilot experiments with significant results to turn into real experiments, you are deluding yourself into thinking that the methods you piloted are somehow better (or “precisely calibrated”). Remember, this is a theoretical scenario in which the effect is real and of fixed strength: every single pilot experiment investigated the same underlying phenomenon, and any difference in outcome is due purely to chance – the tweaking of your methods had no effect whatsoever. You have wasted all manner of resources piloting methods that were never actually improved.

So frequentist inferential statistics on pilot experiments are generally nonsense. Pilots are by nature exploratory. You should only determine significance for confirmatory results. But what are these pilots good for? Perhaps we just want to have an idea of what effect size they can produce and then do our confirmatory experiments for those methods that produce a reasonably strong effect?

I’m afraid that won’t do either. I simulated this scenario in a similar manner as in my previous post. 100,000 times I generated two groups (with a full sample size of n = 80, although the full sample size isn’t critical for the conclusions). Both groups are drawn from populations with a standard deviation of 1; one group has a mean of zero while the other’s mean is shifted by 0.3 – so there is a true effect (its actual magnitude is irrelevant for the conclusions). In each of the 100,000 simulations, the researcher first runs a number of pilot subjects per group (plotted on the x-axis) and conducts an independent, full experiment only if the effect size estimate from the pilot exceeds a criterion. The criterion is either 50%, 100%, or 200% of the true effect size. Obviously, the researcher cannot know the true effect size; I simply use these criteria as stand-ins for what a researcher might do in a real-world situation. (For the true effect size used here, they correspond to d = 0.15, d = 0.3, and d = 0.6, respectively.)
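The selection step of this simulation can be sketched roughly as follows (a Python approximation with helper names of my own choosing; the post's actual simulation code is in MATLAB):

```python
import random
import statistics

def cohens_d(a, b):
    """Cohen's d with the pooled standard deviation."""
    sp = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
    return (statistics.mean(b) - statistics.mean(a)) / sp

def pilot_rejection_rate(n_pilot, criterion, true_effect=0.3, n_sims=20000, seed=1):
    """Fraction of pilots whose effect size estimate falls short of the
    criterion - true effects that never make it to a full experiment."""
    rng = random.Random(seed)
    rejected = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_pilot)]
        b = [rng.gauss(true_effect, 1.0) for _ in range(n_pilot)]
        if cohens_d(a, b) < criterion:
            rejected += 1
    return rejected / n_sims

# Criterion equal to the true effect: about half of all pilots are discarded,
# regardless of the pilot sample size.
print(pilot_rejection_rate(10, 0.3))  # roughly 0.5
```

Raising the criterion above the true effect (e.g. 0.6 instead of 0.3) only pushes the rejection rate higher.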

The results are below. The graph on the left once again plots the false negative rates against the pilot sample size. A false negative here is based not on significance but on effect size: any simulation in which d fell below the criterion. When the criterion equals the true effect size, the false negative rate is constant at 50%. The reason is obvious: the pilot effect size estimates are distributed around the true effect of 0.3, so about half of them fall below that value. When the criterion differs from the true effect, however, the false negative rate depends on the pilot sample size: a criterion below the true effect makes false negatives decrease with sample size, while a stricter criterion above the true effect makes them increase. Either way, the false negative rates are substantially greater than the 20% you would accept in an adequately powered experiment. So you will still delude yourself a considerable number of times if you only conduct the full experiment when your pilot shows a particular effect size. Even if your criterion is lax (and d = 0.15 for a pilot sounds pretty lax to me), you are missing a lot of true results. Again, remember that all of the pilot experiments here investigated a real effect of fixed size. Tweaking the method makes no difference; the differences between simulations are due purely to chance.

Finally, the graph on the right shows the mean effect sizes estimated by the completed experiments (not their absolute values this time). The criterion used in the pilot makes no difference here (all colors are at the same level), which is reassuring. However, all is not necessarily rosy. The open circles plot the effect size you get under publication bias, that is, if you only publish the experiments significant at p < 0.05. This estimate is clearly inflated compared to the true effect size of 0.3. The asterisks plot the effect size estimate if you take all of the experiments. This is the situation you would have (Chris Chambers will like this) if you did a Registered Report for your full experiment and publication of the results were guaranteed irrespective of whether or not they are significant. On average, this is an accurate estimate of the true effect.

[Figure: Left, false negative rates against pilot sample size for the three criteria; right, mean effect size estimates of the completed experiments with and without publication bias]

Again, these are only the experiments that were lucky enough to go beyond the piloting stage. You already wasted a lot of time, effort, and money to get here. While the final outcome is solid if publication bias is minimized, you have thrown a considerable number of good experiments into the trash. You’ve also misled yourself into believing that you conducted a valid pilot experiment that honed the sensitivity of your methods when in truth all your pilot experiments were equally mediocre.

I have had a few comments from people saying that they are only interested in large effect sizes, and surely that means they are fine? I’m afraid not. As I said earlier, the principle here does not depend on the true effect size; it is solely a consequence of the low sensitivity of the pilot experiment. Even with a large true effect, your outcome-dependent pilot is a blind chicken that errs around in the dark until it is lucky enough to hit a true effect more or less by chance. For this to happen you must use a very low criterion for turning your pilot into a real experiment. But this also means that if the null hypothesis is true, an unacceptable proportion of your pilots produce false positives. Again, remember that the piloting itself is completely meaningless – you’re simply chasing noise. Your decision whether to go from pilot to full experiment is (almost) completely arbitrary, even when the true effect is large.

So, for instance, when the true effect is a whopping δ = 1 and you use d > 0.15 as a criterion in a pilot of 10 subjects per group (which is already large for the pilots I typically hear about), your false negative rate is nice and low at ~3%. But critically, if the null hypothesis of δ = 0 is true, your false positive rate is ~37%. How often you fool yourself by turning a pilot into a full experiment depends on the base rate. If you give the hypothesis a 50:50 chance of being true, almost one in three of the pilots you promote will lead you to chase a false positive. If the odds are lower (and they very well may be), the situation becomes increasingly worse.
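Both of these rates are easy to check by simulation. Again, a Python sketch with hypothetical helper names rather than the post's MATLAB code:

```python
import random
import statistics

def proceed_rate(n_pilot, criterion, true_effect, n_sims=20000, seed=1):
    """Fraction of pilots whose Cohen's d exceeds the criterion, i.e. that
    would be promoted to a full experiment."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_pilot)]
        b = [rng.gauss(true_effect, 1.0) for _ in range(n_pilot)]
        sp = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
        d = (statistics.mean(b) - statistics.mean(a)) / sp
        if d > criterion:
            hits += 1
    return hits / n_sims

hit_true = proceed_rate(10, 0.15, 1.0)  # true delta = 1: ~97% of pilots proceed
hit_null = proceed_rate(10, 0.15, 0.0)  # null is true:  ~37% still proceed
# With a 50:50 base rate, the share of promoted pilots chasing a false
# positive is hit_null / (hit_null + hit_true), close to one in three.
print(hit_true, hit_null)
```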

What should we do then? In my view, there are two options. Either run a well-powered confirmatory experiment that tests your hypothesis, powered for an effect size you consider meaningful; this is the option I would choose if resources are a critical factor. Or, if you can afford the investment of time, money, and effort, run an exploratory experiment with a reasonably large sample size (that is, larger than a pilot). If you must, tweak the analysis at the end to figure out what hides in the data. Then run a well-powered replication experiment to confirm the result; its power should be high enough to detect effects considerably weaker than the exploratory effect size. This exploratory experiment may sound like a pilot but it isn’t, because it has decent sensitivity, and the only resource you might be wasting is your time* during the exploratory analysis stage.

The take-home message here is: don’t make your experiments dependent on whether your pilot supported your hypothesis, even if you use independent data. It may seem like a good idea but it’s tantamount to magical thinking. Chances are that you did not refine your method at all. Again (and I apologize for the repetition but it deserves repeating): this does not mean all small piloting is bad. If your pilot is about assuring that the task isn’t too difficult for subjects, that your analysis pipeline works, that the stimuli appear as you intended, that the subjects aren’t using a different strategy to perform the task, or quite simply to reduce the measurement noise, then it is perfectly valid to run a few people first and it can even be justified to include them in your final data set (although that last point depends on what you’re studying). The critical difference is that the criteria for green-lighting a pilot experiment are completely unrelated to the hypothesis you are testing.

(* Well, your time and the carbon footprint produced by your various analysis attempts. But if you cared about that, you probably wouldn’t waste resources on meaningless pilots in the first place, so this post is not for you…)

MatLab code for this simulation.

5 thoughts on “On the magic of independent piloting”

  1. On Sept 1st I added some additional paragraphs and a few clarifying words earlier in the post. These discuss the question of what happens when you “only care about strong effects.” The short answer: it doesn’t matter. Even if the true effect is very strong, you are still quite prone to chasing ghosts unless you are quite certain that your hypothesis is true – and in that case one wonders why you ran the pilot in the first place.

    Another thing I should add: Bayesian hypothesis tests may solve some of the statistical problems here, but I think the general problem with chasing noise in outcome-dependent pilot experiments still applies. This is an issue of logic more than statistics as far as I can see: there simply is no way to be sure that your pilot tweaking really improved anything. The only way to be sure your tweaking improves your experiment is through an independent, objective criterion.


  2. Sam, I sympathise with the general idea here. It seems to articulate the reason why I don’t like small pilots: if the pilot is expected to be inconclusive, then it doesn’t really help. If it is conclusive, then why not call it a study? That said, I don’t understand your reasoning here. Surely independent piloting can in principle help us find hypotheses that are more likely to be true. You seem to be saying this is not the case even when the effect size of interest is large (or, equivalently, when the pilot data set is large enough to reliably detect a smaller effect size). That amounts to saying that empirical progress through a sequence of studies, aka science, is in fact magical thinking.


    1. Thanks for your comment, Niko. I think you misunderstand my point. As I said towards the end, if you can afford to, run a full-fledged exploratory study with reasonable power. In this case you are free to tweak your analysis parameters when you have the data or even while it’s coming in. You can then do a confirmatory replication attempt of those experimental parameters. I don’t call this “piloting” because, unless you have a pot of grant money that I would really like to get my hands on, this is a one-shot exploration. There is nothing wrong here and the strength of evidence if your finding replicates is pretty compelling.

      What I am arguing against is using lots of tiny pilots to – as you say – “find hypotheses that are more likely to be true.” It does not work that way. The point of my post is that you simply cannot *know* that you are finding hypotheses that are more likely to be true! The uncertainty associated with small pilots means that it is just as likely that you are only chasing noise. What is worse, you also miss lots of experiments for which the hypothesis is true (even fairly large true effects). It is quite simply meaningless.


    2. Or put another way: the reason I call this “magical thinking” is that it bears all the hallmarks of magical thinking. “I got a good effect when I wiggled my foot and scratched my nose and blinked three times. The effect went away after I sneezed.” In truth, it is extremely likely that all these parameters were just random chance because the sensitivity of your pilot tweaking is just so low.

      Isn’t it much better to do away with this, think about the best possible experiment to test your hypothesis, and then do it? As I keep saying, pilot all you like to ensure it is the best experiment and produces clean data – but the green light shouldn’t come from support for the hypothesis; it should come from orthogonal, objective criteria.

      Now, the thing you are talking about is something different altogether. If you want to find a hypothesis that is worth testing, say whether there is a gene or a brain area related to some behavior, run a well-powered exploratory study to localize that effect and then try to replicate it.


  3. After various discussions the post grew considerably, so I added a brief intro summary, but that turned out to be a bit unclear. To clarify again: the situation I sought to simulate is one where you run multiple small pilots of the same experiment over and over until you reach a promising result in support of your hypothesis. This strategy has low sensitivity for typical pilots (in my field those would be perhaps 1–5 subjects), and because of this you cannot ever know that your tweaking actually changed anything.

    I am *not* talking about generating a hypothesis in the first place. This can be done through proper exploratory experiments.

    Finally, you may wish to run a pilot and abandon the full experiment if a pilot *doesn’t* support the hypothesis. Obviously this can theoretically save you a lot of resources. But the same problem remains: sensitivity is low so you are fairly likely to get a false negative. Still, this can of course be justified when running lots of people would be costly or ethically problematic. Critically, this is not the same situation I’m talking about in my post.

